
Latest blogs

Understanding what’s what: the importance of sector knowledge in causal chain analysis

My recent blog, How big is big enough?, argued that you need sector expertise to judge whether the effect of a programme is meaningful rather than just statistically significant. But the need for sector expertise goes far deeper than that. I have recently been reading impact evaluations of water supply and sanitation programmes. The studies by non-sector researchers (mostly economists) collect data on the outcome of interest, usually child diarrhoea. But they do little more than that.

What’s wrong with evidence-informed development? Part 2

3ie’s recent systematic review of farmer field schools (FFS) found that these programmes worked as pilots and small-scale programmes. But the few impact evaluations of national-level programmes found no impact. The evidence suggested that problems in recruiting and training appropriate facilitators impeded the scale-up of the experiential learning model of farmer field schools.

Evidence gap maps: an innovative tool for seeing what we know and don’t know

Whether you are a research funder, decision maker or researcher, keeping up with the ever-expanding evidence base is not easy. Over 2,600 impact evaluations and 300 systematic reviews assessing the effects of international development interventions have been completed or are ongoing, helping us understand what works, how, why and at what cost. Despite this increase in quality evidence, more evidence is needed, which is why funders and researchers continue to fund and produce new research.

What’s wrong with evidence-informed development? Part 1

On my reading list as an undergraduate in development studies was Peter Laslett’s The World We Have Lost, a social history that challenges the view that pre-industrial England was a stagnant society. Rather, it had many of the features of industrial or even modern Britain.

How to peer review replication research

“The 3ie replication process differs in important ways from the standard research community-led peer-review process in academic journals. We have been explicitly instructed by 3ie staff not to discuss our experiences with the replication process at any length in this note, including our views on the weaknesses of their current system and the review standards they employ.”

Proof-of-concept evaluations: Building evidence for effective scale-ups

I delivered a talk at 3ie’s Delhi Seminar Series on a recently published PLoS ONE paper and follow-up research. This project was a randomised experiment evaluating the potential for text messages to remind malaria patients to complete their treatment course of antimalarial medication.

Myths about microcredit and meta-analysis

It is widely claimed that microcredit lifts people out of poverty and empowers women. But evidence to support such claims is often anecdotal. A typical microfinance organisation website paints a picture of very positive impact through stories: “Small loans enable them (women) to transform their lives, their children’s futures and their communities.”

Demand creation for voluntary medical male circumcision: how can we influence emotional choices?

This year in anticipation of World AIDS Day, UNAIDS is focusing more attention on reducing new infections as opposed to treatment expansion. As explained by Center for Global Development’s Mead Over in his blog post, reducing new infections is crucial for easing the strain on government budgets for treatment as well as for eventually reaching “the AIDS transition” when the total number of people living with HIV begins to decline.

How big is big? The need for sector knowledge in judging effect sizes and performing power calculations

A recent Innovations for Poverty Action (IPA) newsletter reported new study findings from Ghana on using SMS reminders to ensure people complete their course of anti-malaria pills. The researchers concluded that the intervention worked. More research is needed to tailor the messages to be even more effective.

Calculating success: the role of policymakers in setting the minimum detectable effect

When you think about how sample sizes are decided for an impact evaluation, the mental image is that of a lonely researcher labouring away at a computer, making calculations in Stata or Excel. This scenario is not too far removed from reality. But this reality is problematic. Researchers should actually be talking to government officials or NGO implementers while making their calculations.


Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.