An Evaluator Evaluates the Evaluation Process: Elizabeth Dunn on effectiveness assessments

Elizabeth Dunn is a fairly regular presenter for the USAID Breakfast Seminar Series, and yesterday's session was a perfect example of why. In her presentation on “Assessing the Effectiveness of Value Chain Development in India and Zambia,” Dr. Dunn offered an insightful look at evaluations: why we do them and how we can do them better. The topic was timely, too, building both on a presentation by Zan Northrip (the facilitator) this past summer and on the new evaluation policy announced last month by USAID Administrator Rajiv Shah. It all comes down to the two purposes of evaluation: accountability and learning (known in years past as “proving and improving”).

Dunn’s review of effectiveness assessments (the evaluation approach used) drew on her experience with the GMED project in India and the PROFIT project in Zambia. Both are private sector development (PSD) projects working across a number of different value chains to pursue systemic change. As she worked through the assessments of these projects, she also used the opportunity to capture lessons about the evaluation process itself.

As Dunn explained, effectiveness assessments were originally designed under the USAID Accelerated Microenterprise Advancement Project (AMAP) as a new way to deal with the complexity of PSD projects, which more often than not involve policy-level interventions, major shifts in the business and enabling environment that force projects to evolve, spillover effects, time lags, and other challenges. The resulting process has four major steps: 1) causal model/research design, 2) longitudinal survey, 3) process evaluation (i.e., a review of what actually happened during implementation, as opposed to what was originally planned), and 4) qualitative field study.

Dunn stressed the importance of step three because changes in the external environment so often force changes to project implementation, and evaluations need to look not only at what was supposed to happen but also at what actually did. Since it is the rule rather than the exception that exogenous factors will shift mid-project, the process evaluation helps evaluators determine how those changes affected the project and what they mean for the assessment itself. In Zambia, the process evaluation wasn’t conducted, and Dunn felt the effectiveness assessment lost a lot as a result.

In the end, I took away two key points on how to conduct good evaluations:

Dunn noted that while real estate agents always tout “location, location, location,” evaluators should adopt a mantra of “design, design, design.” I took this to mean that evaluation can’t be an afterthought; it needs to be designed right alongside the project itself. Otherwise, how can you get good baseline data? During the Q&A session, Northrip took the connection between project design and evaluation design to a humorous extreme, saying that if evaluators designed projects, there would be only one time-bound activity with very clear intervention and control groups. Since that isn’t going to happen any time soon, nor would we want it to, Dunn urged evaluators to use all of the tools in their toolbox (causal model, project implementation plan, sound data collection methods, process evaluation, etc.) when designing assessments. But even if you have a design you feel confident about, you still have to be prepared to make changes along the way. That brings us to my second takeaway.

Because all projects face shifting external factors, they usually evolve from the initial design to some extent (at least they should, if they hope to stay relevant). In the old model, where M&E only happened at the end, this wouldn’t matter much. But if evaluation (or its sub-components, like data collection) is a more integrated activity, the evaluation design has to evolve as well. For this reason, Dunn recommends a shotgun approach, which I took to mean measuring a lot of things at the outset and then keeping only what proves relevant.

In closing, Dr. Dunn recognized that the audience still had a lot of questions “about the numbers,” so she encouraged everyone to check out the assessment for GMED India.