Evidence impact: Claiming the influence of studies with confidence
In a previous blog, we described our definition of 'evidence impact.' Now, given how messy the decision-making process can be in the real world, how can we be sure that a study and its findings made an impact?
For example, a 2015 government report that mentioned one of our evaluations tipped us off that the evaluation may have informed pollution regulations in India. When we verified the impact claim, we found that the regulatory changes were not exactly what the evaluation had recommended, adding some uncertainty. Decision-makers sometimes cite studies to justify a choice they wanted to make anyway. How could we ascertain that this was a genuine case of evidence impact?
We continued our investigation and found further corroboration. The study had also influenced other decision-makers, who cited the evaluation in their policies years later. With all these pieces in place, we had the confidence to claim an impact, as detailed in this evidence impact summary and brief.
Our verification process is the product of years of experience investigating evidence impact claims. Using contribution tracing, we systematically assess each potential claim to establish whether a particular decision was indeed influenced by study evidence. We include a claim among our Evidence Impact Summaries only once we are confident it holds.
Here’s how it works.
We start by reviewing the documentation and records for each study. In some cases, researchers report how stakeholders might be using a study’s results; these ‘tip-offs’ help us form claims about evidence use. Keyword-based online alerts can also surface potential evidence impact claims. Once a claim statement is formulated, we sketch out the possible causal pathway: the steps through which the study might have contributed to a decision.
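To make this concrete, here is a minimal sketch of how a claim and its causal pathway could be recorded. The class name, example study, and pathway steps are all hypothetical illustrations for this blog, not 3ie’s actual tooling or records:

```python
# Hypothetical sketch of an evidence impact claim and its causal
# pathway; names and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class EvidenceImpactClaim:
    study: str                 # the study whose influence is claimed
    claim: str                 # the evidence use claim statement
    pathway: list[str] = field(default_factory=list)  # ordered causal steps

claim = EvidenceImpactClaim(
    study="Impact evaluation of a pollution regulation (India)",
    claim="The evaluation informed changes to pollution regulations.",
    pathway=[
        "Researchers shared findings with the regulator",
        "A 2015 government report cited the evaluation",
        "Revised regulations drew on the evaluation's findings",
    ],
)
```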
Our next task is to list the information we need to verify each step in the pathway. We categorise each piece of information as necessary (what we 'expect to see') or sufficient (what we would 'love to see'). This supporting information could include the minutes of relevant meetings, draft policies on an implementer’s letterhead, press releases, media articles, or the testimony of an official.
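A rough sketch of such a checklist for one pathway step might look like the following; again, the names and items are invented for illustration, under the assumption that each piece of information is tagged with its category and whether it has been found:

```python
# Hypothetical checklist of supporting information for one pathway
# step, categorised as 'expect to see' (necessary) or 'love to see'
# (sufficient); illustrative only.
from dataclasses import dataclass
from enum import Enum

class Strength(Enum):
    EXPECT_TO_SEE = "necessary"   # should exist if the step happened
    LOVE_TO_SEE = "sufficient"    # would strongly confirm the step

@dataclass
class EvidenceItem:
    description: str
    strength: Strength
    found: bool = False

checklist = [
    EvidenceItem("Minutes of the regulator's review meeting", Strength.EXPECT_TO_SEE),
    EvidenceItem("Draft policy on the implementer's letterhead", Strength.LOVE_TO_SEE),
    EvidenceItem("Press release announcing the regulatory change", Strength.EXPECT_TO_SEE),
]
```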
Next, team members review the list of supporting information for each evidence use claim. This ‘claims jury’ draws on collective knowledge of specific contexts to keep us from making overly broad claims or requesting unnecessary information. If the jury decides more information is needed, we try to collect it, generally by calling stakeholders and decision-makers or searching online.
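As a rough illustration, the jury’s judgement about a single pathway step could be sketched as a simple decision rule. In practice this is a deliberative, human judgement; the self-contained function below is a hypothetical simplification:

```python
# Hypothetical decision rule a 'claims jury' might apply to one
# pathway step; each item is (description, kind, found), with kind
# either "necessary" ('expect to see') or "sufficient" ('love to see').
def review_step(items: list[tuple[str, str, bool]]) -> str:
    necessary_ok = all(found for _, kind, found in items if kind == "necessary")
    sufficient_found = any(found for _, kind, found in items if kind == "sufficient")
    if not necessary_ok:
        return "claim weakened: an 'expect to see' item is missing"
    if sufficient_found:
        return "verified"
    return "plausible: collect more information"

print(review_step([
    ("Minutes of the review meeting", "necessary", True),
    ("Official's testimony citing the study", "sufficient", False),
]))  # -> "plausible: collect more information"
```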
Like any evaluation approach, contribution tracing does not work for every potential case. It is suited to verifying the impact of single studies where project teams monitor and document their engagement with potential users of the evidence. It tells us that a study affected a decision, not how much, so we cannot compare studies to see which had the greater effect. Nonetheless, the approach helps us build credible narratives of evidence impact, dozens of which are published here. If you would like to know more, read about our measurement approach here or write to us at influence@3ieimpact.org.