Launch of 3ie’s policy and institutional reform ‘Methods Menu’

In May 2021, the Millennium Challenge Corporation (MCC) and 3ie began a partnership to advance the learning and knowledge base around the evaluation, design, and implementation of policy and institutional reform (PIR) interventions. We conducted a scoping review that identified gaps in the way PIR interventions are studied and evaluated, and we are now ready to launch an interactive ‘Methods Menu’ that addresses many of those gaps. The menu aims to increase awareness and uptake of complexity-informed and mixed-methods approaches, strengthening the way PIR is evaluated and implemented.

Sounds good... but what will it cost? Making the case for rigorous costing in impact evaluation research

Imagine two government programs—a job training program and a job matching program—that perform equally well in terms of boosting employment outcomes. Now think about which is more cost-effective. If your answer is ‘no idea’, you’re not alone! Most of the time, we don’t have the cost evidence available to discern this important difference.
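As a rough illustration of the point, the sketch below uses entirely made-up figures (the costs and employment gains are assumptions, not results from any actual evaluation) to show how two programmes with identical impacts can differ sharply in cost-effectiveness once costs are measured.

```python
# Hypothetical, illustrative figures only -- not drawn from any real evaluation.
programmes = {
    "job training": {"total_cost": 500_000, "additional_jobs": 400},
    "job matching": {"total_cost": 150_000, "additional_jobs": 400},
}

for name, p in programmes.items():
    # Cost per additional person employed: the simplest cost-effectiveness ratio.
    cost_per_job = p["total_cost"] / p["additional_jobs"]
    print(f"{name}: ${cost_per_job:,.0f} per additional person employed")

# Same measured impact, very different cost-effectiveness -- visible only if costs are collected.
```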

3ie’s Agricultural Risk Insurance Evidence Programme: a structured approach to impact evaluations

With climate change becoming a reality, agricultural productivity has suffered considerably. This has put at risk the livelihoods of the majority of the world’s poor, who depend on agriculture and related activities. Various risk mitigation solutions, such as improved seeds and irrigation, have shown promising results, but the role of transferring risk via agricultural insurance demands deeper exploration.

Too difficult, too disruptive and too slow? Innovative approaches to common challenges in conducting humanitarian impact evaluations

Over 200 million people are in urgent need of humanitarian assistance across the world today. In 2017, the UN-coordinated appeals reported a shortfall of 41 per cent, despite receiving a record amount of funding. As the demands on these limited funds increase, there is a concurrent increase in the need for high-quality evidence on the most effective ways to improve humanitarian programming.

Learning power lessons: verifying the viability of impact evaluations

Learning from one’s past mistakes is a sign of maturity. By that metric, 3ie is growing up. We now require pilot research before funding most full impact evaluation studies. Our pilot studies requirement was developed to address a number of issues, including assessing whether there is sufficient intervention uptake, identifying or verifying whether the expected or detectable effect is reasonable, and determining the similarity of participants within clusters.
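One quantity a pilot can pin down is the similarity of participants within clusters (the intraclass correlation), which feeds directly into whether a planned study can detect a reasonable effect. The sketch below is a minimal, illustrative power calculation for a cluster-randomised design using standard formulas; the ICC, cluster size, and number of clusters are assumed placeholder values, not 3ie requirements.

```python
import math

# Assumed pilot estimates -- placeholders for illustration only.
icc = 0.05             # intraclass correlation estimated from pilot data
cluster_size = 30      # participants per cluster
clusters_per_arm = 40  # clusters per treatment arm
z_alpha, z_beta = 1.96, 0.84  # two-sided 5% significance, 80% power

# The design effect inflates required sample size relative to individual randomisation.
design_effect = 1 + (cluster_size - 1) * icc

# Minimum detectable effect in standard-deviation units for a two-arm comparison.
n_per_arm = clusters_per_arm * cluster_size
mde = (z_alpha + z_beta) * math.sqrt(2 * design_effect / n_per_arm)
print(f"Design effect: {design_effect:.2f}, standardised MDE: {mde:.3f} SD")
```

If the pilot reveals a higher ICC or lower uptake than assumed, the detectable effect grows, and a study that looked viable on paper may no longer be worth funding at full scale.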

If you want your study included in a systematic review, this is what you should report

Impact evaluation evidence continues to accumulate, and policymakers need to understand the range of evidence, not just individual studies. Across all sectors of international development, systematic reviews and meta-analyses (the statistical syntheses used in many systematic reviews) are increasingly used to synthesise the evidence on the effects of programmes.
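To make the reporting point concrete, here is a minimal sketch, using made-up summary statistics, of how a review team computes a standardised mean difference from what a study reports. If the means, standard deviations, and sample sizes for each group are not reported, the study cannot be combined in a meta-analysis.

```python
import math

# Hypothetical reported summary statistics -- what a review team needs to extract.
mean_t, sd_t, n_t = 12.4, 4.1, 150   # treatment group
mean_c, sd_c, n_c = 10.9, 4.3, 148   # comparison group

# Pooled standard deviation and Cohen's d (standardised mean difference).
sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
cohens_d = (mean_t - mean_c) / sd_pooled

# Hedges' g applies a small-sample correction to Cohen's d.
hedges_g = cohens_d * (1 - 3 / (4 * (n_t + n_c) - 9))
print(f"Cohen's d = {cohens_d:.3f}, Hedges' g = {hedges_g:.3f}")
```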

What did I learn about the demand for impact evaluations at the What Works Global Summit?

At the recently concluded What Works Global Summit (WWGS), which 3ie co-sponsored, a significant number of the sessions featured presentations on new impact evaluations and systematic reviews. WWGS was a perfect opportunity to learn lessons about the demand for and supply of high-quality evidence for decision-making because it brought together a diverse set of stakeholders: donors, knowledge intermediaries, policymakers, programme managers, researchers and service providers. They came from both developed and developing countries.

Let’s bring back theory to theory of change

Anyone who has ever applied for a grant from 3ie knows that we care about theory of change. Many others in development care about theory of change as well. Craig Valters of the Overseas Development Institute explains that development professionals are using the term theory of change in three ways: for discourse, as a tool, and as an approach.

The pitfalls of going from pilot to scale, or why ecological validity matters

The hip word in development research these days is scale. For many, the goal of experimenting has become to quickly learn what works and then scale up those things that do. It is so popular to talk about scaling up these days that some have started using upscale as a verb, which might seem a bit awkward to those who live in upscale neighbourhoods or own upscale boutiques.

Private outcomes and the public interest: a call for more impact evaluations?

The 2015 Year of Evaluation has now come and gone. There were many noteworthy events (more than 80 conferences, workshops, seminars and the like, according to some accounts), most of which focused on needs in developing countries. Participants included some of the best-known members of the evaluation community from the public and non-governmental sectors. The interesting question raised at these events, however, was: where was the private sector?