Latest blogs

The importance of buy-in from key actors for impact evaluations to influence policy

At a public forum on impact evaluation a couple of years ago, Arianna Legovini, head of the World Bank’s Development Impact Evaluation (DIME) programme, declared that ‘dissemination is dead’. But her statement does not mean that we should stop disseminating impact evaluation findings to influence policy.

Moving impact evaluations beyond controlled circumstances

The constraints imposed by an intervention can often make designing an evaluation quite challenging. If a large-scale programme is rolled out nationally, for instance, it becomes very hard to find a credible comparison group. Many evaluators shy away from evaluating programmes when a plausible counterfactual is hard to find. Since the findings of such evaluations are also harder to publish, there seem to be few incentives for evaluating these programmes.

Placing economics on the science spectrum

Where does economics fit on the spectrum of sciences? ‘Hard scientists’ argue that the subjectivity of economics research differentiates it from biology, chemistry, or other disciplines that require strict laboratory experimentation. Meanwhile, many economists try to separate their field from the ‘social sciences’ by lumping sociology, psychology, and the like into a quasi-mathematical abyss reserved for ‘touchy-feely’ subjects, unable to match the rigour required of economics research.

Home Street Home

The International Day for Street Children provides a platform to call for governments to act and support the rights of street children across the world. But a recent study shows that we have very little evidence on how we could act most effectively to address the needs of street children. We need such studies: they are the only way we can avoid spending on ineffective programmes and channel funding to programmes that do work.

Uganda shows its commitment to evaluation

Uganda’s cabinet has just approved a new monitoring and evaluation policy, which will be officially endorsed and disseminated next month. It comes as a positive signal after several donors suspended aid to the Government and provides a solid foundation to boost the country’s commitment to evidence informed policy-making.

Tips on selling randomised controlled trials

When we randomise, we obviously don’t do it across the whole population. We randomise only across the eligible population. Conducting an RCT requires that we first define and identify the eligible population. This is a good thing. Designing an RCT can help ensure better targeting by making sure the eligible population is identified properly.

Collaborations key to improved impact evaluation designs

Do funding agencies distort impact evaluations? A session organised by BetterEvaluation on choosing and using evaluation methods, at the recent South Asian Conclave of Evaluators in Kathmandu, focused on this issue. Participants were quite candid about funding agencies dictating terms to researchers. “The terms of reference often define the log frame of evaluation (i.e. the approach to designing, executing and assessing projects), and grants are awarded on the basis of budgets that applicants submit.”

Of sausages and systematic reviews

We know that systematic reviews can be a very good accountability exercise in helping answer the question “do we know whether a particular programme is beneficial or harmful?”. So instead of cherry picking our favourable development stories, we collect and synthesise all the rigorous evidence.

Using the causal chain to make sense of the numbers

At 3ie, we stress the need for a good theory of change to underpin evaluation designs. Many 3ie-supported study teams illustrate the theory of change through some sort of flow chart linking inputs to outcomes. They lay out the assumptions behind their little arrows to a varying extent. But what they almost invariably fail to do is to collect data along the causal chain. Or, in the rare cases where they do have indicators across the causal chain, they don’t present them as such.

Of rumours related to blood, poison and researchers

The attempt to collect blood samples from children for a malaria treatment intervention in Kenya met with stiff opposition from the study community. There were rumours of blood stealing and covert HIV testing, and suspicion about the safety of the study drugs. It may be quite easy to attribute these rumours to ignorance and superstition. But such rumours do not come out of the blue. Historical, anthropological and sociological accounts can trace the roots of such distrust and suspicion.

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
