
Addressing attribution of cause and effect in small n impact evaluations: towards an integrated framework

November 17, 2011

Speakers: Howard White and Daniel Phillips, 3ie

The drive to demonstrate results has led to an increased focus on impact evaluation, but this focus has largely produced large n evaluations, which test for statistically significant differences in outcomes between treatment and comparison groups. The need for ‘small n’ approaches (where n is the sample size) arises when data are available for only one or a few units of assignment, so that tests of statistical significance are not possible. Examples include capacity building in a single organisation, or interventions so heterogeneous that the relevant sub-groups are too small for statistical analysis.
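
As a rough illustration of this contrast (not drawn from the seminar itself), the Python sketch below runs the kind of statistical test a large n evaluation relies on, using simulated outcome data, and shows why the same approach has nothing to work with when there is only a single unit of assignment. The data, sample sizes and effect size are invented for the example.

```python
# Illustrative sketch only: simulated data contrasting the large n logic
# with a small n setting. All numbers here are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Large n: outcomes for 200 treatment and 200 comparison units
treatment = rng.normal(loc=0.6, scale=1.0, size=200)
comparison = rng.normal(loc=0.4, scale=1.0, size=200)

# Test for a statistically significant difference in mean outcomes
t_stat, p_value = stats.ttest_ind(treatment, comparison)
print(f"large n: t = {t_stat:.2f}, p = {p_value:.3f}")

# Small n: a single unit of assignment (e.g. one organisation receiving
# capacity building) leaves nothing to compare statistically, which is
# what motivates small n approaches to attribution.
single_unit_outcome = np.array([0.7])
print(f"small n: only {len(single_unit_outcome)} unit; no significance test possible")
```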

The latest seminar in the 3ie-LIDC series ‘What works in international development?’ draws on work by Howard White and Daniel Phillips to explore how small n impact evaluations can address the attribution question of how far an intervention has altered the ‘state of the world’. Large n evaluations do this by statistical means, but small n evaluations build a case by opening the ‘black box’ lying between cause and effect and assembling an in-depth account designed to demonstrate beyond reasonable doubt the link between an intervention and observed changes.

Various methodologies suitable for small n analysis were assessed with a view to investigating how they tackle attribution. Key steps emerging from the analysis include: clearly setting out the attribution question(s), outlining the intervention’s theory of change, identifying other potential causal mechanisms, and critically appraising the evidence in order to document each link in the actual causal chain. However, the question of what constitutes valid evidence for each link in the causal chain is one that many of these methodologies only touch upon; bias arises in small n evaluation if there is a systematic tendency to over- or under-estimate the strength of the causal relationship, caused by bias in the collection, interpretation or analysis of data.
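
One way to picture these steps (a hypothetical sketch, not taken from the paper) is to treat the theory of change as an explicit chain of cause-and-effect links, recording for each link the supporting evidence and the rival explanations still to be examined, so that gaps become visible. All names and fields below are illustrative assumptions.

```python
# Illustrative sketch: a theory of change as an explicit causal chain,
# with appraised evidence and alternative explanations per link.
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    cause: str
    effect: str
    evidence: list[str] = field(default_factory=list)      # sources supporting the link
    alternatives: list[str] = field(default_factory=list)  # rival explanations to rule out

theory_of_change = [
    CausalLink("training delivered", "staff skills improved",
               evidence=["pre/post skills assessment", "trainer reports"],
               alternatives=["staff turnover brought in already-skilled recruits"]),
    CausalLink("staff skills improved", "service quality improved",
               evidence=["client interviews"],
               alternatives=["concurrent budget increase"]),
]

# Flag links whose evidence base is still missing, and list the rival
# explanations that remain to be examined for each link.
for link in theory_of_change:
    status = "evidenced" if link.evidence else "evidence gap"
    print(f"{link.cause} -> {link.effect}: {status}; "
          f"{len(link.alternatives)} rival explanation(s) to examine")
```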

The conclusions build upon much that is already well known in evaluation. We argue that small n analysis can address the crucial question of attribution by: defining the intervention to be evaluated; identifying the evaluation questions being asked; setting out a theory of change along with alternative potential causal explanations; identifying the mix of methods needed to answer those questions; and creating a clear data collection and analysis plan designed to tackle potential biases.

View the Vodcast of this presentation.
