Evidence Dialogue: For development institutions, learning requires more than collecting data
The world's development institutions collect lots of data – but do they learn from it? Perhaps not as much as they should. Amid the challenges of tight timelines, rigid systems, and disconnects between implementers and evaluators, development agencies do not always walk the talk of evidence-informed decision-making, 3ie's expert panel agreed.
"We collect lots of results data…what we know, though, is that there's something wrong," said Håvard Mokleiv Nygård, Director of Knowledge and Evaluation of the Norwegian Agency for Development Cooperation. "We're not leveraging all of the data that we have to produce the knowledge that we need."
Successfully structuring development programming decisions around clear evidence-based criteria can yield benefits far beyond tweaks to intervention designs.
"Rigorous use of evidence, I think, helps to depoliticize our work," said Millennium Challenge Corporation Chief Economist Mark Sundberg. "Having bipartisan support in congress is critical to the survival of the agency… [and evidence use] even helps depoliticize decision-making within the agency."
Development professionals need to pause to synthesize and internalize new evidence, reflect on its implications, and translate those implications into course corrections, said Stacey Young, USAID's Agency Knowledge Management and Organizational Learning Officer.
"In addition to building the stock of knowledge, we need to facilitate the flow of knowledge across our staff, partners, and programs," Young said. "All of this requires time…we're simply doing too much too fast to achieve the full learning process."
The choice to move fast can be well-intentioned, said Alison Evans, Director General of Evaluation at the World Bank Group.
"[People think] 'We need to get the money out of the door, and we'll reflect later on,' and later never quite comes," Evans said.
Development institutions do tend to have structures for incorporating evidence into decision-making, but those structures sometimes turn into box-ticking exercises, Evans said.
"While these institutions have these corporate results systems…they're actually not the best systems to assure evidence-informed and adaptive decision-making," Evans said.
Systems that require evidence to be incorporated very early in a project's development can help, said Kumar Iyer, Director General of Delivery at the Foreign, Commonwealth & Development Office.
"We don’t really let things get off the ground without evidence being incorporated right at the beginning," Iyer said. "If you've got your theory of change really clearly set out at the beginning, that helps you as you're monitoring your delivery."
Plenty of challenges remain, including the need to bridge the divide between those who view themselves as "implementation" or "operations" people and those who view themselves as "research" or "evaluation" people, panelists said.
Another divide exists between funding and implementing organizations.
"Often it's partners embedded in partners that are actually implementing on the ground," Nygård said. "To really get this to work, we need a similar level of evidence thinking all the way through the chain, all the way to the ground."
Of course, better development programming is the ultimate goal.
"Really, what counts at the end of the day is the difference and the impact you make on the ground," Iyer said.