Unexpected and disappearing outcomes: Why relying on proxy outcomes is often not enough


In the early years of the Second World War, British intelligence undertook one of its first exercises in strategic deception. To divert the attention of the occupying Italian forces from a planned attack on Eritrea by troops based in Sudan, the British staged various activities to make the Italians think an attack would be launched on British Somaliland from Egypt. The deception worked: the Italians believed an attack was coming. But, judging that they could not withstand it, they moved their troops into Eritrea – exactly where the British did not want them to be.

The lesson the intelligence officer in charge drew from this experience was that you need to focus on what you want the enemy to do. To his surprise, when he asked his colleagues about their planned intelligence operations, many could not say what they were really hoping to achieve by their actions.

A recent World Bank Independent Evaluation Group blog questioned the need for training in evaluation. It raises a legitimate question. Are we – and that includes 3ie – providing training in evaluation because ‘that is what we do’, or have we really thought through what we would like to see happen as a result? What we would like to see happen is an increase in the production and use of evidence from impact evaluations and systematic reviews. But is that what we will achieve?

With deliberate irony, in my training sessions I use training itself as an example of the funnel of attrition. There is a very long causal chain between offering a training course and increasing the use of evidence in policy. The right people have to know about the course. They have to want to attend it, and be able to. Having arrived at the venue, they have to show up for my session. They then have to stay awake, pay attention, understand, agree and absorb. And finally, they need to retain what they have heard and act upon it. Of every 100 people who register, I would say fewer than ten take direct action as a result of the training.
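To make the arithmetic of the funnel concrete, here is a minimal sketch in Python. The stage-by-stage retention rates are hypothetical, chosen only so that their product lands near the one-in-ten figure above; they are not measured values.

```python
# Funnel of attrition from registration to action, modelled as a simple
# product of stage-by-stage retention rates. All rates are hypothetical,
# chosen for illustration only.
stages = [
    ("show up for the session", 0.85),
    ("stay awake, pay attention, understand, agree, absorb", 0.55),
    ("retain what they heard", 0.60),
    ("act on it", 0.35),
]

remaining = 100.0  # people who register
for stage, retention in stages:
    remaining *= retention
    print(f"{stage:<55} {remaining:5.1f} remain")
# With these made-up rates, roughly 10 of 100 registrants end up acting
# on the training: the overall rate is the product of every link.
```

The point is simply that modest losses at each stage compound multiplicatively, which is why even a well-attended course can translate into very few concrete actions.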

So, how can we strengthen the causal chain to increase those numbers? A good starting point is the old adage that ‘if you don’t use it, you lose it’. Training in isolation is of little use. People need opportunities to apply what they have learned. To this end, 3ie trainings are ‘experiential learning workshops’, structured around developing proposals for evaluations of programmes suggested by participants before the workshop. With this engagement, participants may either directly follow up with an impact evaluation or at least see possibilities for applying these ideas in their work. Where possible, our workshops are linked to follow-up grant activities. If the participants are policymakers or programme implementers, they get to apply what they have learned right away in their interactions with 3ie-supported research teams.

We can also reinforce elements of the causal chain in other ways. In the last workshop I ran, during Evaluation Week in Uganda, I introduced end-of-day quizzes. In true school fashion, participants swapped papers and marked their neighbour’s answers as we went through them. The quiz was a good opportunity to clarify and reinforce some key concepts. And it was also widely enjoyed. See Box 1 for some sample questions. But the scores were a salutary reminder of the funnel of attrition: knowledge translation was far from perfect!

And finally, we can of course evaluate ourselves. Many training courses have exit surveys, which are a useful monitoring tool for immediate feedback. At 3ie, we have done follow-up surveys after six months, asking people how they have used what they learned from the training. But really, as evaluators, 3ie and others providing these trainings ought to be collecting evidence on their impact. We should not be relying on output data, such as the number of people trained or participant satisfaction, as our performance measure. There is an opportunity here for those who teach impact evaluation to practise what they preach.

(The anecdote in the opening paragraph is from ‘Operation Fortitude: The Greatest Hoax of the Second World War’ by Joshua Levine, an account of British strategic deception around the Allied landings in Normandy in June 1944.)


Authors

Howard White, Director, GDN Evaluation and Evidence Synthesis Programme

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
