M&E: A tale of two brothers

Monitoring and evaluation (M&E) are always mentioned together, but in practice the two disciplines largely keep their distance from each other. This is despite the fact that they could be highly beneficial to each other and, if carefully combined, to the intervention as well.

Does development need a nudge, or a big push?

Sending people persuasive reminder letters to pay their taxes recovered £210 million of revenue for the UK government. Getting the long-term unemployed to write about their experiences increased their chances of getting a job. Placing healthy food choices, like fruit instead of chocolate, in obvious locations improves children's eating habits.

Evidence Matters and so does blogging

3ie is not just a grant-making institution. As a knowledge broker, we promote theory-based and policy-relevant impact evaluations and systematic reviews. Blogs are an increasingly important way for 3ie to communicate its messages more widely. Our methods blogs have covered the importance of mixed methods and participatory approaches, various perspectives on causal chain analysis (see here and here), and how to promote randomised controlled trials effectively.

The HIV/AIDS treatment cascade

One of the reasons we appreciate international days is that they prompt us to pause and reflect on what we’ve been doing in the past year, as well as think about what the next year will bring. On this World AIDS Day, our first reflection is on how much our HIV/AIDS programming at 3ie has grown in 2013.

A pitch for better use of mixed methods in impact evaluations

At the opening session of 3ie’s recent Measuring Results conference, Jyotsna Puri, Deputy Executive Director and Head of Evaluation at 3ie, said, “It takes a village to do an impact evaluation.” What she meant was that, for an impact evaluation to be successful and policy relevant, research teams need to be diverse, including specialists such as statisticians, anthropologists, economists, surveyors, enumerators and policy experts, and they need to use the most appropriate mix of evaluation and research methods.

Shining a light on the unknown knowns

Donald Rumsfeld, a former US Secretary of Defense, famously noted the distinction between known knowns (things we know we know), known unknowns (things we know we don’t know), and unknown unknowns (things we don’t know we don’t know). In international development research, these same distinctions exist.

How will they ever learn?

The low quality of education in much of the developing world is no secret. The Annual Status of Education Report (ASER), produced by the Indian NGO Pratham, has been documenting the poor state of affairs in that country for several years. The most recent report highlights the fact that more than half of grade five students can read only at a grade two level. Similar statistics are available from around the world.

Can we learn more from clinical trials than simply methods?

What if scientists tested their drug ideas directly on humans without first demonstrating their potential efficacy in the lab? The question sounds hypothetical because we all know that using untested drugs can be dangerous. By the same logic, should we not exercise similar caution with randomized controlled trials (RCTs) of social and economic development interventions involving human subjects?

The importance of buy-in from key actors for impact evaluations to influence policy

At a public forum on impact evaluation a couple of years ago, Arianna Legovini, head of the World Bank’s Development Impact Evaluation programme (DIME), declared that ‘dissemination is dead’. But her statement does not mean that we should stop disseminating impact evaluation findings to influence policy.

Moving impact evaluations beyond controlled circumstances

The constraints imposed by an intervention can often make designing an evaluation quite challenging. If a large-scale programme is rolled out nationally, for instance, it becomes very hard to find a credible comparison group. Many evaluators shy away from evaluating programmes for which a plausible counterfactual is hard to construct. Since the findings of such evaluations are also harder to publish, there is little incentive to evaluate these programmes at all.