Latest blogs

If the answer isn’t 42, how do we find it?

Those of you around my age may be familiar with Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, in which the answer to the question ‘What is the meaning of life, the universe and everything?’ turns out to be the number 42. We wish that systematic reviews could be like that: throw all the evidence into a big number cruncher and out pops a single answer.

The efficacy – effectiveness continuum and impact evaluation

This week we proudly launch the Impact Evaluation Repository, a comprehensive index of around 2,400 impact evaluations in international development that have met our explicit inclusion criteria. In creating these criteria, we set out to establish an objective, binary (yes or no) test of whether a study is an impact evaluation as defined by 3ie.

Is independence always a good thing?

Evaluation departments of development agencies have traditionally guarded their independence jealously. They are separate from the operational side of agencies, sometimes entirely distinct, as in the case of the UK’s Independent Commission for Aid Impact or the recently disbanded Swedish Agency for Development Evaluation. Staff from the evaluation department, or at least its head, are often not permitted to stay on in any other department of the agency once their term ends.

Failure is the new black in development fashion: Why learning from mistakes should be more than a fad

During a meeting at the Inter-American Development Bank (IADB) last week, I mentioned the UK Department for International Development’s moves towards recognising failure, and the part that recognising failure plays in learning (see Duncan Green’s recent blog on this). Arturo Galindo, from IADB’s Office of Strategic Planning and Development Effectiveness, responded by picking up a copy of their latest Development Effectiveness Overview and opening…

When will researchers ever learn?

I was recently sent a link to this 1985 World Health Organization (WHO) paper, which examines the case for using experimental and quasi-experimental designs to evaluate water supply and sanitation (WSS) interventions in developing countries. The paper came out nearly 30 years ago, yet the problems it lists in impact evaluation study designs are still encountered today. What are these problems?

Institutionalising evaluation in India

The launch event of the Independent Evaluation Office (IEO), held in Delhi, included an eclectic mix of presenters and panellists: key policymakers (including the chairperson of India’s Planning Commission), bureaucrats, India-based researchers and representatives of the Indian media. The discussions at the event brought to the fore several challenges that the IEO will face as it moves forward:

How much evidence is enough for action?

One of the most useful applications of evidence from rigorous evaluations is helping policymakers take decisions on going to scale. Notable recent examples of interventions scaled up on the basis of high-quality synthesised evidence are conditional cash transfer programmes and early child development (pre-school) programmes.

The Global Open Knowledge Hub: building a dream machine-readable world

The word ‘open’ has long been bandied about in development circles. In recent years we have benefited from advocacy for open access to research articles and for open data shared by researchers and organisations. But open systems that enable websites to talk to each other (for example, through open application programming interfaces) have been harder to bring into wider use, simply because they are not built for non-technical users.

Opening a window on climate change and disaster risk reduction

Nature has provided us with some stark recent reminders that our climate is changing, often towards the extremes. Super Typhoon Haiyan slammed the Philippines. The ‘polar vortex’ blanketed the United States in snow. While East Coasters in the United States may still feel some of the polar sting, it is the world’s poorest and most vulnerable who feel the sustained harms of climate change.

When is an error not an error?

In their now-famous replication study of Reinhart and Rogoff’s (R&R) seminal article on public debt and economic growth, Thomas Herndon, Michael Ash and Robert Pollin (HAP) use the word ‘error’ 45 times. At 3ie, we are more than a year into our replication programme, and we are seeing a similar propensity for replication researchers to use the word ‘error’ (or ‘mistake’ or ‘wrong’), and for this language to cause contentious discussions between original authors and replication researchers.

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
