Shining a light on the unknown knowns

Donald Rumsfeld, a former US Secretary of Defense, famously noted the distinction between known knowns (things we know we know), known unknowns (things we know we don’t know), and unknown unknowns (things we don’t know we don’t know). In international development research, these same distinctions exist. There is published evidence that can be used to inform programmes and policies; there are research questions that have yet to be tested and interventions that have yet to be evaluated; and there are, we certainly hope, ideas and innovations that have yet to be developed or discovered. But what about the fourth category? Is there such a thing as unknown knowns?

In this context, unknown knowns would be findings from research and evaluations that never become publicly known. As it turns out, unknown knowns are a very real problem in social science research and, within it, development impact evaluation. Two causes of unknown knowns are reporting bias and publication bias, with the latter often exacerbating the former. Reporting bias occurs when a researcher selectively reports results after seeing them. Humphreys et al. (2013) call this ‘fishing’ and explain that “a model is ‘fished’ when the decision to report the model depends on the realisation of the conclusion.” Fishing can occur when the researcher wants to support a particular conclusion or simply wants her results to be as statistically significant, interesting, or unexpected as possible. Journal editors and referees who only publish articles with statistically significant, interesting, or unexpected results perpetuate these incentives for reporting bias. Meanwhile, researchers who resist fishing and submit articles without compelling findings are often unable to get those articles published at all: this is publication bias.
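The mechanics are easy to demonstrate. Below is a minimal simulation, with purely illustrative numbers of my own choosing (it is not drawn from Humphreys et al.), in which a researcher measures eight outcomes that the treatment does not actually affect and reports only the most significant comparison.

```python
# Fishing sketch: no true effects anywhere, but the 'best' of eight
# outcome comparisons is reported. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_outcomes, n = 10_000, 8, 200

fished_significant = 0
for _ in range(n_studies):
    treat = rng.integers(0, 2, size=n)            # random assignment
    # Eight outcomes, none of which is affected by treatment.
    outcomes = rng.normal(size=(n_outcomes, n))
    p_min = min(stats.ttest_ind(y[treat == 1], y[treat == 0]).pvalue
                for y in outcomes)
    if p_min < 0.05:                              # report the 'best' result
        fished_significant += 1

print("Nominal rate at the 5% level: 5.0%")
print(f"Rate when the most significant of 8 outcomes is reported: "
      f"{fished_significant / n_studies:.1%}")
# With 8 independent tests, roughly 1 - 0.95**8, about 34%, of these
# pure-noise studies yield something 'significant' to report.
```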

These biases have been well known for years. In 1979, Robert Rosenthal wrote:

“For any research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the ‘file drawer problem’ is that journals are filled with the 5 per cent of the studies that show Type I errors, while the file drawers are filled with the 95 per cent of the studies that show non-significant results.” (Rosenthal 1979, p. 638)

Rosenthal’s extreme view of the file drawer problem is indeed extreme. However, if one counts hypotheses tested rather than studies conducted, the notion that unreported, non-significant results might far outnumber the reported, statistically significant ones becomes uncomfortably difficult to ignore.
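To put rough numbers on this, here is a back-of-the-envelope calculation. The share of hypotheses with real effects, the power of the tests, and the significance level are all illustrative assumptions on my part, not estimates from the literature.

```python
# File drawer arithmetic under illustrative assumptions: only
# statistically significant hypotheses ever get reported.
true_share = 0.10   # assumed share of tested hypotheses with real effects
power = 0.80        # assumed power of tests against those real effects
alpha = 0.05        # conventional significance level

reported_true = true_share * power            # real effects detected
reported_false = (1 - true_share) * alpha     # Type I errors
reported = reported_true + reported_false

print(f"Hypotheses ever reported:           {reported:.1%}")        # 12.5%
print(f"Reported results that are Type I:   {reported_false / reported:.1%}")  # 36.0%
print(f"Hypotheses left in the file drawer: {1 - reported:.1%}")     # 87.5%
```

Under these assumptions, more than a third of the results that do see daylight are false positives, and nearly nine in ten tested hypotheses never leave the file drawer.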

Why are these biases such a big concern for development impact evaluations? Because the external validity of findings from the impact evaluation of one programme depends in large part on having several impact evaluations of similar interventions across different contexts. Taken together, preferably in a systematic review, consistent findings across several studies can provide compelling evidence for development programme design and policymaking. If the sample of evaluation findings is biased, the evidence will be wrong. Take, for example, conditional cash transfers. There is widespread evidence that these programmes can indeed produce positive effects on the use of health services and school enrolment (3ie 2010). Many governments have implemented conditional cash transfer programmes based on that evidence. But what if we found out that there had been even more studies (or even more equally plausible specifications tested) that found null or negative results but were never reported?
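A short simulation, again with hypothetical parameters of my own choosing, shows what this would do to a body of evidence: many small evaluations of the same kind of programme, a modest true effect, and a file drawer that swallows everything non-significant.

```python
# Publication bias sketch: averaging only the published (significant)
# estimates overstates a modest true effect. Parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_per_arm, n_studies = 0.1, 50, 500

all_estimates, published = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    estimate = treated.mean() - control.mean()
    all_estimates.append(estimate)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        published.append(estimate)               # survives the file drawer

print(f"True effect:                 {true_effect:+.2f}")
print(f"Mean over all studies:       {np.mean(all_estimates):+.2f}")
print(f"Mean over published studies: {np.mean(published):+.2f}")
# Synthesising only what gets published recovers not the true effect but
# whatever happened to clear the significance bar.
```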

Just as the problem has been understood for a long time, possible solutions have also been known for a long time. One solution, which helps to address the reporting bias problem within studies, is internal replication. (Watch this space for exciting news from 3ie’s Replication Programme coming soon.) Another, which helps to address both reporting and publication bias, is registration.

Registration, or more specifically prospective registration, is the public statement, made before data are collected or analysed, of the outcomes to be measured and the hypotheses to be tested. Over time, a registry serves as a database of information on prospective, ongoing, or unpublished studies. Consumers of a specific study (readers, editors, referees, policy analysts) can use such a database to better understand how the published findings evolved from the planned analysis. Systematic reviewers and others searching for bodies of evidence can use registries to identify studies that may never have been published or completed. Registries have been used successfully in medical research for several decades, and in the last few years there have been calls for registration in social science and development research (see Humphreys et al. 2013 and Rasmussen et al. 2011).
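As a concrete illustration, here is a hypothetical sketch of the core fields a prospective registration might lock in before any data are analysed. This is not RIDIE’s actual schema, just a minimal stand-in for the idea.

```python
# Hypothetical registration record; the field names are my own
# invention, not RIDIE's actual schema.
from dataclasses import dataclass

@dataclass
class RegistrationEntry:
    title: str
    registered_on: str            # date the plan became public
    primary_outcomes: list[str]   # outcomes to be measured, fixed in advance
    hypotheses: list[str]         # hypotheses to be tested, fixed in advance
    status: str = "prospective"   # later: ongoing, completed, published

entry = RegistrationEntry(
    title="Conditional cash transfer evaluation (illustrative)",
    registered_on="2013-09-12",
    primary_outcomes=["school enrolment", "use of health services"],
    hypotheses=["Transfers increase enrolment among eligible children"],
)
print(entry)
```

The point of fixing these fields in advance is that any later divergence, such as a new outcome or a dropped hypothesis, becomes visible to readers, referees, and reviewers.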

Today, 3ie launches the Registry of International Development Impact Evaluations (RIDIE). This registry has the primary objective of improving the evidence from impact evaluations in order to benefit policymaking and programme design in low- and middle-income countries. To that end, RIDIE is designed to include all rigorous impact evaluation designs—experimental and quasi-experimental—and is focused on evaluations in low- and middle-income countries.

3ie is not alone in this endeavour. We are one of a group of organisations providing registries for social science research. Experiments in Governance and Politics has developed a registry focused on political science research. The Abdul Latif Jameel Poverty Action Lab has created a trials registry for the American Economic Association that focuses on experimental research in economics and other social sciences. 3ie is working closely with these organisations to ensure that the registries become interoperable in a way that will allow users to search across all three platforms from a single place.

Please take a look at RIDIE: http://ridie.3ieimpact.org/. In the academic spirit of incentive games, 3ie is running a five-prize lottery to encourage early registrations in RIDIE. Find all the details on how you can win here!

Authors

Annette Brown, Former director, 3ie

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
