3ie is currently funding 100 impact evaluations in low- and middle-income countries across Africa, Asia and Latin America. This puts us in a unique position to learn a great deal about what is working well in designing and conducting impact evaluations, and what can be done better to ensure that research produces reliable and actionable findings.

But as grant makers we usually ‘see and experience’ our projects only on paper. We miss out on listening to the voices and perspectives of field workers, project staff and junior research staff. To get a sense of what has been happening on the ground, we recently carried out a field monitoring visit to four 3ie-supported projects in diverse sectors in one African country. Most of the visits were to field sites, and the meetings were mainly with implementing agency staff.

And we did learn a lot through this field trip, particularly about the relationship between implementing agencies and researchers, challenges involved in implementing a project and an impact evaluation, and the work being carried out to engage stakeholders and disseminate research findings. Some of the lessons we learned raise further questions.

Local researchers listed as ‘Principal Investigators’ on the 3ie grant application had little engagement in the impact evaluation.
This finding was true for all the projects visited. At 3ie, research teams that include developing-country researchers receive higher scores on their grant applications. Not surprisingly, many of the grant applications we receive list researchers from the country of the evaluation as Principal Investigators. But on the ground, it was a different story.

Local ‘Principal Investigators’ may have been involved in determining the main evaluation questions and providing input on the context. However, their involvement was certainly not substantial. In one case, the local ‘Principal Investigator’ was significantly involved in implementing the intervention but not in the impact evaluation.

So why were the local researchers not involved? Were their names added hurriedly just to meet 3ie’s requirements? Did the lead Principal Investigators think the researchers lacked the capacity to contribute to the impact evaluation? Funders of impact evaluations like 3ie need to address these questions so that they can adjust their own requirements for grantees. We also need to reflect more on the ways in which lead Principal Investigators can involve local researchers and build their capacity to conduct impact evaluations.

Researchers need to work with implementing agencies/governments to address challenges related to the implementation of a Randomised Controlled Trial.
Implementing a Randomised Controlled Trial was quite challenging for one implementing agency. During implementation, some programme participants in the control group felt that they were being ‘discriminated’ against. In one instance, the discontent among participants led to clashes with project staff. Overall, the implementing agency staff felt that they had to compromise their own integrity to safeguard the integrity of the research.

So what are the ways researchers can assuage the fears and concerns of both beneficiaries and implementing agency staff?

Getting the buy-in and involvement of implementing agencies is important for generating actionable evidence from an impact evaluation.

One of the projects visited was a striking illustration of how the disconnect between the researchers and the implementing NGO could well have been the main reason the impact evaluation findings were not taken up. The impact evaluation did not have a clear theory of change, and the intervention that was evaluated was poorly designed. What made it worse was that the evaluation did not pick up on the fact that the intervention was unsuccessful. The NGO has since changed track and moved on to a new programme. The end result: an impact evaluation with no actionable or credible evidence.

When many implementing agencies and stakeholders are involved in an impact evaluation, getting them to agree on all aspects of the project can lead to delays.
Getting a project off the ground can be a serious challenge if it requires a considerable amount of time and diplomatic effort to get stakeholders to agree on the design of the intervention. In one case, the delay in implementation reduced the duration of the programme, with implications for the findings of the impact evaluation.

Delays in getting an impact evaluation article published can be an obstacle to using evidence.

The long wait (as long as a year or more) to get published in academic journals can be an impediment for implementing agencies since there is an embargo on releasing the findings. Implementing agencies want to cut to the chase. They want to go all out, discuss, disseminate and use the evidence from an impact evaluation.

And finally, some implementing agencies think that by conducting an impact evaluation, they will appear as accountable and credible organisations.
An impact evaluation is seen as a gateway to more project funding. While this is not necessarily bad news, the benefit of conducting an impact evaluation should ideally also extend to the production of evidence that is used for designing more effective policies and programmes.
