Misdiagnosis and the evidence trap: a tale of inadequate program design


Imagine you wake up tomorrow with a headache, sore throat and fever, perhaps nothing unusual at this time of year. You drag yourself out of bed and head to your doctor to ask her for something to make you feel better. Had you first looked up your symptoms online, however, you would have been surprised to find that headache, sore throat and fever can be caused by 136 different conditions, among them typhoid fever, measles and brain tumours. Most probably, the doctor will prescribe common flu medication and you will soon feel better, but what if you had one of those other, more serious illnesses?

Misdiagnosis is also a widespread phenomenon in international development. An example: conditional cash transfer (CCT) programs have successfully increased the frequency with which poor people attend health check-ups and have improved school attendance among poor families’ children. Alas, the evidence that CCTs have improved education and health outcomes has been mixed at best.[1],[2] One likely reason is a misdiagnosis of the bottlenecks to improving human capital outcomes: CCTs address demand-side constraints (the costs of using services), but the binding constraints may have been on the supply side of the services (no medicine, no teachers and so on). There was thus a mismatch between diagnosis and treatment, and we should not conclude that CCTs are ineffective at improving health and education outcomes in general. Indeed, if the health centers and schools had been adequately staffed, resourced and trained to serve the additional demand, and if non-attendance had verifiably been due to costs, then CCT programs would likely have led to improved health and education outcomes.

Development interventions are similar to medical treatments: if you treat superficial symptoms rather than the underlying pathology, or if you give the wrong medicine, you will not cure the illness. In medicine, you would not say that the medicine was ineffective in general; you would say that the doctor misdiagnosed the pathology. Similarly, in international development we can only judge the effectiveness of an intervention after we have ascertained that it was designed to address the main underlying problem, or ‘binding constraint’. Yet all too often, as impact evaluators, we judge the effectiveness of development interventions without knowing whether policy makers and aid agencies established the correct diagnosis of the root cause (or causes) of a development problem before designing an intervention to address it.

The right diagnosis is a necessary condition for development interventions to achieve impact. As a theoretical argument, this is generally accepted. Nevertheless, to our knowledge, scant attention is paid to it in practical impact evaluation work. We use impact evaluations to assess the effectiveness of development interventions on a variety of outcomes, to establish what “works” and what does not. When a large number of impact evaluations examine similar interventions and similar outcomes, findings on what works and what does not may be synthesized in systematic reviews. However, impact evaluations may not pick up on a “misdiagnosis” or “missing diagnosis” of the problem, which in turn limits the information available to systematic reviews. In a promising change to systematic review methods, 3ie’s forthcoming systematic review (Do participation and accountability improve development outcomes? by Hugh Waddington, Ada Sonnenfeld, Jennifer Stevenson and team) classifies impact evaluation studies according to whether the underlying developmental bottlenecks were identified, whether the supporting evidence was presented, and whether the intervention theory was proposed to address those bottlenecks.

When we try to test the correlation between appropriate diagnosis and the effect sizes measured in rigorous impact evaluations, we run into a number of issues. Firstly, we cannot rely on impact evaluation reports, as they generally give little detail of the diagnostic work and research that informed the intervention approach. Secondly, even where we find the project documents that describe the intervention, we cannot rely on their diagnostic claims as objective evidence: the section describing the problem may have been drafted to fit a pre-identified solution, one that called precisely for the intervention the lead organization specialized in. Thirdly, in the interesting cases where proper diagnostic work was carried out (and we were lucky enough to find out about it), it is not always clear that the insights were used correctly to design the intervention.
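To make the first step of such a test concrete, here is a minimal, hypothetical sketch of what the analysis could look like once studies have been coded for diagnostic quality: an inverse-variance-weighted meta-regression of study effect sizes on a binary ‘diagnosis-informed’ indicator. All data, variable names and coding below are invented for illustration; a real analysis would require a systematic search, careful coding of diagnostic work, and proper meta-analytic standard errors.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: standardized effect sizes, their
# standard errors, and an indicator coded 1 if the project design
# was demonstrably informed by diagnostic work (all values invented).
effect_size = np.array([0.05, 0.12, 0.30, 0.02, 0.25, 0.18, -0.04, 0.33])
std_error = np.array([0.06, 0.08, 0.07, 0.05, 0.09, 0.06, 0.07, 0.10])
diagnosed = np.array([0, 0, 1, 0, 1, 1, 0, 1])

# Weighted least squares with inverse-variance weights: a simplified
# stand-in for a fixed-effect meta-regression.
X = sm.add_constant(diagnosed)  # intercept plus the moderator
model = sm.WLS(effect_size, X, weights=1.0 / std_error**2).fit()

# The coefficient on the moderator estimates how much larger effects
# are, on average, for diagnosis-informed interventions.
print(model.params)
```

If our prior is right, the moderator coefficient should be positive. But as the three issues above make clear, the hard part is not the regression; it is obtaining a credible ‘diagnosis-informed’ coding in the first place.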

Therefore, we are calling on the development and evaluation community to prove us wrong or confirm our priors (namely that correct diagnosis is more likely to lead to positive effects):

  • If you know of a project design that was properly informed by diagnostic work (work that, based on rigorous research, explained the root causes of why something is or is not happening) and was subject to rigorous (counterfactual) impact evaluation, randomized or non-randomized, we want to hear from you!
  • If you know of a project design that was not informed by diagnostic work but was subject to rigorous impact evaluation, we want to hear from you!

Meanwhile, the main takeaway for you as a consumer of evidence is this: not all claims about the effects of development interventions are reliable, and well-informed intervention decisions require reliable information. So before judging ‘what works’, ask whether the root causes of the development problem were adequately identified and taken into account in the intervention’s design. If the answer is ‘no’ or ‘don’t know’, be careful not to judge the intervention per se; question instead the expertise of the people who designed it or the incentive structure of the institution that supported the project.

[1] Snilstveit et al., 2016, ‘The impact of education programmes on learning and school participation in low- and middle-income countries’, 3ie Systematic Review Summary 7.

[2] Gaarder et al., 2010, ‘Conditional cash transfers and health: unpacking the causal chain’, Journal of Development Effectiveness, Vol. 2, Issue 1.


Authors

Marie Gaarder, Executive Director, 3ie

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
