WACIE Helpdesk: A new source of support for evidence-informed decision-making in West Africa

In the countries where the West Africa Capacity-building and Impact Evaluation (WACIE) program works, the gaps in evaluation experience are stark, as the program's scoping study found. Accordingly, the use of evidence in policymaking is limited. But the barrier is capacity, not interest: the same study found that the desire among policymakers and evaluation professionals for support and capacity building was nearly universal.

Addressing the need for timely and reliable evidence in the time of COVID-19

The COVID-19 pandemic has brought the importance of high-quality, timely and relevant evidence to the fore. Governments all over the world are justifying radical policies to control and manage the pandemic with reference to evidence.

Be careful what you wish for: cautionary tales on using single studies to inform policymaking

For a development evaluator, the holy grail is to have evidence from one's study taken up and used in policy or programming decisions that improve people's lives. It's not easy. Decisions are based on many factors, and the availability of evidence is just one of them. And of course, even when evidence is taken up, that does not mean it will lead to the right decision.

Learning power lessons: verifying the viability of impact evaluations

Learning from one's past mistakes is a sign of maturity. By that metric, 3ie is growing up. We now require pilot research before funding most full impact evaluation studies. Our pilot studies requirement was developed to address a number of issues, including assessing whether there is sufficient intervention uptake, identifying or verifying whether the expected or detectable effect is reasonable, and determining the similarity of participants within clusters.

Implementing impact evaluations: trade-offs and compromises

In June this year, 3ie and the International Fund for Agricultural Development organised a workshop where we had several productive discussions around two key questions: Are impact evaluations answering policy-relevant questions and generating useful evidence? What are the challenges faced in designing and implementing impact evaluations of cash transfers and agricultural innovation programmes?

Impact Evaluation: How the Wonkiest Subject in the World Got Traction

Creating 3ie was the outcome of the Evaluation Gap Working Group that we led along with Nancy Birdsall to address the limited number of rigorous impact evaluations of public policies in developing countries. As CGD celebrates its 15th year, it is worth considering what made that working group so successful, the obstacles we confronted, and the work that still remains to be done.

Toward evidence-informed policies for achieving the Sustainable Development Goals

So, 2015 has arrived and the Millennium Development Goals (MDGs) are to be replaced by the Sustainable Development Goals (SDGs). But shouldn't we stop and ask how we have done on the MDGs first? "How we have done" can be seen as an outcome monitoring question: have the targets been reached? But since we have fallen far short on some targets, such as access to improved sanitation, we need to dig deeper and ask which policies have been successful in helping achieve the targets.

Making impact evidence matter for people’s welfare

The plenary session at the Making Impact Evaluation Matter conference in Manila made clear that impact evidence – in the form of single evaluations and syntheses of rigorous evidence – does indeed matter. Two key themes were (1) that strong evidence about the causal effects of programmes and policies matters to making decisions that improve the welfare of people living in low- and middle-income countries, and (2) that, to make impact evaluation matter more, we need to continue building capacity to generate, understand, and use such evidence in those same countries.

Early engagement improves REDD+ and early warning system design and proposals

At 3ie, our mission is to fund the generation and sharing of sound, useful evidence on the impacts of development programmes and policies. Actually, we're more curious (or nosy) than that. For impact evaluation that matters, we need to know which bits of a programme worked, which didn't, why and through which mechanisms, in which contexts, and at what costs.

Gearing up for Making Impact Evaluation Matter

Over the last week, 3ie staff in Delhi, London and Washington were busy coordinating conference logistics, finalising the conference programme, figuring out how to balance 3ie publications and clothing in their suitcases, and putting the last touches on their presentations. This is typical preparation for a conference that is, nonetheless, going to be different. Why? The participant mix – more than 500 people – is balanced among policymakers, programme managers and implementers, and researchers.