Making a Measure: Capturing Vendor Business Health in Local Markets
By Clare Clingain and Emily Sloane, International Rescue Committee (IRC)
If you want to understand something, you probably need to measure it – and measure it well. Chances are, someone has already tried to measure what you want to examine, but if that’s not the case, it’s important to design a tool that’s fit for the intended purpose and to test it out in multiple contexts.
When the IRC set out to understand vendor experiences in markets affected by crises, we couldn’t find any publicly accessible tools that captured vendor or business-level health, despite the plethora of market-level assessment tools available. As a part of a larger research project funded by USAID (Generating evidence on the effects of cash relief on local markets), we set out to design and pilot a vendor/business health tool in humanitarian contexts.
While the immediate purpose of the tool was to support our research project, we anticipate that it will also be useful to other agencies that wish to measure the vendor-, or business-level, impacts of market-based programming, including emergency relief and market support activities – whether for research that builds on the IRC’s project or as part of more routine program monitoring and evaluation.
Because creating a good measurement tool is an iterative process, we did our best to quantify what did and did not work during our two pilots of the tool in Chad (2019) and Colombia (2021). Our research teams surveyed a total of 151 vendors across five markets in Chad and 100 vendors across two market areas in Colombia. Full details on the study methodology and findings can be found on this website in English, French, and Spanish.
The tool consisted of six sections: general business activity, business capacity, sales, investments, robustness to shock, and customer base. We also included a set of questions on market activity to be administered to market associations or other relevant organizations. This section, however, yielded one of our earliest lessons: not every market has an organization well suited to complete it, as we found in Colombia.
We intentionally designed ways to incorporate learning into the tool, embedding two mini experiments in the survey to test which type of response option would work best for certain questions. In the first, we asked vendors to broadly categorize the products they sold (e.g., perishable foods, clothing, phone cards) while also asking them to name the key commodities they sold (e.g., flour, sugar, shoes). We then recoded the product names into the categories from the other response option to see whether the two lined up. The second mini experiment sat in the capacity section, where vendors were asked how much of each key commodity they currently had in stock. Vendors were randomly assigned either a preset list of cut-offs (< 10, > 10 but ≤ 20, > 20 but ≤ 50, > 50 but ≤ 100, > 100) or a fill-in-the-number field. We then categorized the fill-in responses to match the preset cut-offs and tested the two distributions for equality. In both experiments, the distributions aligned closely, leading us to recommend the granular fill-in response over the multiple-choice options: it yields richer data that users can re-categorize at their discretion, since the preset categories we used may not work in all contexts or for all market actors.
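The recoding step in the second experiment can be sketched in a few lines of code. This is a minimal illustration, not the IRC's actual analysis code: the bin labels, the example responses, and the handling of a quantity of exactly 10 (which the preset cut-offs leave ambiguous) are all assumptions.

```python
from collections import Counter

# Preset stock-quantity cut-offs from the survey
BINS = ["<10", "10-20", "21-50", "51-100", ">100"]

def categorize(qty):
    """Map a fill-in quantity onto the preset categories.

    Note: the survey's cut-offs (< 10, > 10 but <= 20, ...) leave
    exactly 10 ambiguous; we place it in the second bin here.
    """
    if qty < 10:
        return "<10"
    if qty <= 20:
        return "10-20"
    if qty <= 50:
        return "21-50"
    if qty <= 100:
        return "51-100"
    return ">100"

# Illustrative (made-up) responses from the two randomized arms
preset_arm = ["<10", "10-20", "10-20", "21-50", "51-100", ">100", "<10"]
fillin_arm = [4, 15, 18, 33, 75, 120, 7]

# Recode the fill-in arm into the preset categories
fillin_binned = [categorize(q) for q in fillin_arm]

# Tabulate both arms side by side for comparison
preset_counts = Counter(preset_arm)
fillin_counts = Counter(fillin_binned)
for b in BINS:
    print(f"{b:>7}: preset={preset_counts[b]}  fill-in={fillin_counts[b]}")
```

In practice, equality of the two resulting distributions would be tested formally, for example with a chi-square test of homogeneity (`scipy.stats.chi2_contingency`) on the contingency table of counts.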
While these two experiments pointed to approaches that worked well, our pilots also taught us which approaches didn’t. Initially, we asked vendors to report how long restocking took overall, across all their products. Yet the data were so varied and hard to interpret that we decided the next iteration of the tool should ask for restock time per product to produce more meaningful data. In other instances, we weren’t able to make precise recommendations, since what we observed seemed to depend heavily on the particular contexts of Chad and Colombia.
The IRC has proposed several other changes to the vendor health tool, based on findings from the pilots, to improve its usefulness and usability. Future piloting of the tool will surely surface new ideas and better ways to measure vendor-level outcomes. Although we hope to use this tool ourselves, we also hope others in the humanitarian space will test it out and share what they learn. The only way we can improve measurement is if we are open and transparent about what does and doesn’t work.
- Markets in Crises