How to communicate a null-results impact evaluation

Development programs aim to improve outcomes for poor and vulnerable populations, from better health and increased learning to higher productivity and incomes. Impact evaluations apply research tools to assess whether those intended outcomes were achieved because of the program. Despite the best intentions of implementers to design effective programs, impact evaluations sometimes show that an intervention had no effect (null impacts) or even that the program left participants worse off than they would have been absent the program (negative impacts). While such findings provide valuable learning, they can be disheartening for program staff, donors, and others who invested time, effort, and resources into an initiative that was intended to help. 

For evaluators, communicating null or negative results can present multiple challenges, especially if the findings contradict the prevailing beliefs about a program’s effectiveness. For example, in a multi-year program, data from monitoring systems might show that over time, outcomes are progressing in the desired direction, generating expectations amongst program staff and donors that the program is contributing to those improved outcomes. However, if those improvements are driven by external factors (secular trends), rather than the program, the impact evaluation will show that the control group experienced similar improvements, leading to the conclusion that the program had null effects. 

Here are six tips for evaluators on communicating null results:

  • Don’t lose sight of the program’s development objectives and opportunities for improving the intervention model. Often, an evaluation that finds null results will also show that the original problem (poverty, malnutrition, low levels of learning, etc.) remains highly relevant, and it may offer insights into promising refinements to the intervention model. This is an opportunity to make evidence-informed changes to the program design without losing sight of the original objectives.
  • Follow the program’s theory of change and triangulate with other evidence sources. When communicating null results, walk through the levels of the program’s theory of change, connecting the dots between inputs, outputs, intermediate outcomes, and final outcomes (or impacts). Diagnostic studies, qualitative evidence, and process evaluations can be powerful complements for making sense of null results. If the process evaluation found that the program was not delivered as planned, or if the evaluation showed low levels of program participation, the null findings will be less surprising when presented in that context.
  • Cross-reference multiple sources of data and tell a dynamic story of what happened over time. Audiences tend to be more receptive to null or negative results if evidence from more than one data source tells the same story. For example, evaluators might be able to show that trends over time are consistent between evaluation survey data and program monitoring data, or that results from survey data are corroborated by external remote-sensing data. If time series data are available, it can be helpful to show how key indicators evolved over time across treatment and control groups, especially if the null results reflect secular trends in which outcomes are improving over time, but independently of the intervention.
  • Be specific and let the data do the talking. When presenting and interpreting the results of an impact evaluation, describe the data objectively in terms of what the study found rather than what the program did (or did not) do, and avoid value judgments. A statement such as “the impact evaluation did not identify a statistically significant effect on outcome X” is preferable to “the program did not have an impact on X” or “the program didn’t work”.
  • Be upfront and transparent about limitations of the impact evaluation. Impact estimates rely on assumptions, and interpreting them may require understanding multiple nuances and caveats. For example, studies are usually designed to detect a given effect size, so a null result should be contextualized against the original study design (see the sketch after this list). Some identification strategies produce estimates of “local” effects, that is, effects that apply to a narrow subset of the population. In that case, it is only possible to say that impacts are null for those sub-populations; the impacts for everyone else remain unknown. If the study sample was subject to attrition, or if changes in timelines affected how long the population was exposed to the intervention, those are important nuances that may determine whether further investment in data collection is needed before firm conclusions can be drawn.
  • Work with implementation partners to gather lessons and recommendations, and get in front of the news cycle. Null results offer an important opportunity for learning and introspection, and program implementers should be given the chance to process the implications of the evaluation findings and to formulate recommendations and policy implications. Partners can take credit for investing in evidence generation and get ahead of the news cycle by communicating null results publicly, paired with concrete, evidence-informed policy recommendations.
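
On the effect size point above: the following is a minimal sketch, in Python, of how an evaluator might back out the minimum detectable effect a study was powered for. Every design parameter here is a hypothetical assumption for illustration (the 400 observations per arm, the 5% significance level, and the 80% power target), not a value from any particular evaluation; it uses the statsmodels power module.

```python
# Minimal sketch: recover the minimum detectable effect (MDE) implied by
# a hypothetical study design, using statsmodels' power calculations.
# All design parameters below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical two-arm design: 400 treatment and 400 control observations,
# 5% significance, 80% power. Solve for the effect size (Cohen's d) the
# study was designed to detect.
mde = analysis.solve_power(
    effect_size=None,   # the unknown we solve for
    nobs1=400,          # observations in the treatment arm (assumed)
    ratio=1.0,          # control arm is the same size as the treatment arm
    alpha=0.05,         # significance level (assumed)
    power=0.8,          # statistical power target (assumed)
    alternative="two-sided",
)
print(f"Minimum detectable effect size (Cohen's d): {mde:.3f}")
```

If the estimated effect and its confidence interval sit well below this threshold, an honest summary is that the study could not distinguish a true zero from an effect smaller than the one it was designed to detect, which is precisely the kind of caveat worth communicating alongside a null result.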

Authors

Sebastian Martinez, Director of Evaluation

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
