Proof-of-concept evaluations: Building evidence for effective scale-ups

16 December 2014
Author: Heather Lanthorn

I delivered a talk at 3ie’s Delhi Seminar Series on a recently published PLoS ONE paper and follow-up research. The project was a randomised experiment evaluating whether text-message reminders could encourage malaria patients to complete their course of antimalarial medication.

Myths about microcredit and meta-analysis

10 December 2014
Author: Hugh Waddington

It is widely claimed that microcredit lifts people out of poverty and empowers women. But evidence to support such claims is often anecdotal.

Demand creation for voluntary medical male circumcision: how can we influence emotional choices?

01 December 2014
Authors: Eric Djimeu, Annette Brown

This year, in anticipation of World AIDS Day, UNAIDS is focusing more attention on reducing new infections rather than on expanding treatment.

How big is big? The need for sector knowledge in judging effect sizes and performing power calculations

27 November 2014
Author: Howard White

A recent Innovations for Poverty Action (IPA) newsletter reported new study findings from Ghana on using SMS reminders to help people complete their course of antimalarial pills. The researchers concluded that the intervention worked. More research is needed to tailor the messages and make them even more effective.

Calculating success: the role of policymakers in setting the minimum detectable effect

24 November 2014
Author: Shagun Sabarwal

When you think about how sample sizes are decided for an impact evaluation, the mental image is that of a lone researcher labouring away at a computer, making calculations in Stata or Excel. This scenario is not too far removed from reality.

But this reality is problematic. Researchers should be talking to government officials and NGO implementers while making these calculations. What is often deemed ‘technical’ actually involves making several considered choices based on on-the-ground policy and programming realities.
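As a rough illustration of what those choices feed into, here is a minimal sketch (not from the post; the use of statsmodels and all parameter values are assumptions for illustration) of turning an agreed minimum detectable effect into a required sample size:

```python
# Illustrative sketch: translating a minimum detectable effect (MDE) agreed
# with policymakers into a required sample size per arm.
# Parameter values are assumed, not taken from any study discussed here.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

mde = 0.20      # standardised minimum detectable effect (Cohen's d), assumed
alpha = 0.05    # significance level
power = 0.80    # desired statistical power

# Solve for the sample size per arm needed to detect the MDE
n_per_arm = analysis.solve_power(effect_size=mde, alpha=alpha, power=power,
                                 ratio=1.0, alternative='two-sided')
print(f"Required sample size per arm: {n_per_arm:.0f}")
```

Every number in that call – the effect size, the significance level, the target power – is exactly the kind of choice that benefits from a conversation with implementers rather than a default typed in at a desk.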

“Well, that didn’t work. Let’s do it again.”

11 November 2014
Author: Howard White

Suppose you toss a coin and it comes up heads. Do you conclude that it is a double-headed coin? No, you don’t. Suppose it comes up heads twice, and then a third time. Do you now conclude the coin is double-headed? Again, no, you don’t. There is a one in eight chance (12.5 per cent) that a coin will come up heads three times in a row. So, though it is not that likely, it can and does happen.
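The arithmetic behind that one-in-eight figure is easy to check; the snippet below is only an illustration, not part of the original post:

```python
import random

# Exact probability: a fair coin shows heads on three independent tosses
p_three_heads = 0.5 ** 3
print(f"P(three heads in a row) = {p_three_heads}")  # 0.125, i.e. one in eight

# A quick simulation giving roughly the same share (illustrative)
random.seed(1)
trials = 100_000
hits = sum(all(random.random() < 0.5 for _ in range(3)) for _ in range(trials))
print(f"Simulated share: {hits / trials:.3f}")
```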

Requiring fuel gauges: A pitch for justifying impact evaluation sample size assumptions

17 October 2014
Author: Eric Djimeu, Benjamin DK Wood

We expect researchers to defend their assumptions when they write papers or present at seminars. Well, we expect them to defend most of their assumptions. The assumptions behind their sample size, determined by their power calculations, are rarely discussed. Yet sample sizes and power calculations matter: power calculations translate assumptions into minimum sample size requirements, which then have to be reconciled with budget constraints.
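To make the fuel-gauge point concrete, the same calculation can be run in reverse: given the sample a budget allows, what effect could the study actually detect? This is a sketch under assumed values, not any particular study’s calculation:

```python
# Illustrative sketch: the power calculation run "in reverse".
# Given a budget-constrained sample size, solve for the smallest
# standardised effect the design can detect. Values are assumed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

n_per_arm = 400   # sample size the budget allows per arm (assumed)
alpha = 0.05
power = 0.80

# Leaving effect_size unset tells solve_power to solve for it
mde = analysis.solve_power(nobs1=n_per_arm, alpha=alpha, power=power,
                           ratio=1.0, alternative='two-sided')
print(f"Smallest detectable standardised effect: {mde:.2f}")
```

If that detectable effect is larger than any change the programme could plausibly produce, the evaluation is running on an empty tank – which is exactly the assumption we think researchers should be asked to justify.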

Unexpected evidence on impact evaluations of anti-poverty programmes

28 September 2014
Author: Martina Vojtkova

The first email that caught my eye this morning as I opened my inbox was Markus Goldstein’s most recent World Bank blog post, “Do impact evaluations tell us anything about reducing poverty?” Having worked in this field for four years, I too have been thinking that we were in the business of fighting poverty, and like him, I expected that impact evaluations, especially impact evaluations of anti-poverty programmes, would tell us whether we are reaching the poor and helping them escape poverty.

How 3ie is tackling the challenges of producing high-quality policy-relevant systematic reviews in international development

26 September 2014
Author: Hugh Waddington

At its annual colloquium, being held in Hyderabad, India, the Cochrane Collaboration is focusing on the global burden of disease from largely treatable illnesses that are concentrated among populations living in low- and middle-income countries (L&MICs). We already have a lot of systematic review evidence about what works to prevent and treat these illnesses. Yet they remain prevalent because of limited resources, weak implementation capacity and unfavourable population attitudes.

Making impact evidence matter for people’s welfare

24 September 2014
Authors: Heather Lanthorn, Birte Snilstveit

The opening ceremony and plenary session at the Making Impact Evaluation Matter conference in Manila made clear that impact evidence – in the form of single evaluations and syntheses of rigorous evidence – does indeed matter.