I still remember my early days as a fresh PhD economist at the Inter-American Development Bank. My job was to help design health and social programs across Latin America. I arrived eager, idealistic, and armed with economic theory. I quickly noticed something: everyone—government officials, colleagues, even myself—had strong views about what the “right” structure of a program should be. If you wanted to reduce the incidence of child malnutrition, you gave out food. If you wanted more women to work, you subsidized childcare. If you wanted to change people’s behaviour, for example towards healthier practices or greater cooperation, you needed to influence their personal beliefs.
These ideas felt intuitive, even obvious. Yet as the years went by, I watched some of those same programs get restructured—or quietly abandoned—because they weren’t delivering the results we expected. That experience planted a seed of doubt in me: how much of what we “know” about what works is really just assumption?
Over time, I found that careful impact evaluations have a remarkable way of humbling us. They take the comfortable certainties of “common sense” and confront them with counterfactual-based evidence – evidence that strips away the other factors, beyond the policy or program we are interested in, that may have contributed to the outcomes. And more often than not, the results are surprising.
Take child malnutrition and food assistance. Peru’s remarkable success in reducing chronic child malnutrition came not from pouring in more food aid, but from shifting focus to vaccinations, growth monitoring, and breastfeeding support. Malnutrition in that case was driven more by disease and care practices than by food quantity1.
Or consider childcare and women’s labour force participation. The assumption was clear: if childcare was more affordable, mothers would flood into the workforce. But a randomized evaluation in Egypt offering free nursery access showed otherwise. Hardly anyone used the nurseries, and uptake of job placement services was even lower. The real barrier wasn’t price—it was social norms2.
As for youth vocational training, many of us assumed that adding on-the-job training and counselling would boost results. Yet a large randomized trial in Upper Egypt revealed that vocational training alone improved employment, while the extras added no measurable benefit. Sometimes, less really is more3.
With cash transfers, many people assumed that emergency cash assistance would be immediately consumed, leaving no lasting impact. But the results of our recent long-run impact evaluation in Colombia showed that households still experienced reduced food insecurity nearly two years after the transfers ended4.
There are also compelling examples in the field of conflict prevention and peacebuilding5. In Rwanda, a radio program aiming to reduce intergroup conflict and promote reconciliation shifted norms and behaviours with respect to intermarriage, open dissent, trust, empathy, cooperation, and trauma healing, even though it did not significantly change listeners’ personal beliefs6. And in the Democratic Republic of Congo, efforts to spur dialogue around a radio soap opera that emphasized conflict reduction through community cooperation backfired, fuelling polarization instead of tolerance7.
Perhaps the most sobering example comes from cost-effectiveness analysis. It’s tempting to fund what seems “most effective” in absolute terms. Yet GiveWell’s comparison of insecticide-treated nets, deworming, and cash transfers shows that the best “bang for the buck” in areas where malaria and parasites are endemic often lies in interventions that don’t look the most impressive on the surface.
For policymakers—and for all of us working in development—the lesson is clear: assumptions are not facts. Evidence doesn’t just test programs; it tests our mental models. By doing so, it saves resources, reveals the true levers of change, and sometimes opens doors we didn’t even know existed.
I often think back to those early days at the IDB, sitting in conference rooms full of certainty. If I could go back, I would tell my younger self: be curious, be skeptical, and above all, be humble. Because in development, what “everyone knows” is often exactly what we need to question.
Let me end with a secret. In interviews with researchers and evaluators, I always ask my favorite question: When was the last time you changed your mind based on evidence? For me, these stories—of assumptions challenged, of humility gained—are the most powerful. I would love to hear yours. What examples have made you rethink what you thought you knew about development?
1 Cordero, L. et al. (2022). Claves de la reducción de la desnutrición crónica infantil en el Perú: el caso del presupuesto por resultados. Documento de Políticas Públicas DPP No. 07-2022. Instituto de Gobierno y de Gestión Pública, Universidad de San Martín de Porres.
2 Caria, S. et al. (2025). The barriers to female employment: Experimental evidence from Egypt. G²LM|LIC Working Paper No. 92, February 2025.
3 Crépon, B., Fadlalmawla, N., Rizk, R. and Hussein, A. (2025). Intensifying vs. targeting support: Evidence from a job training experiment [work in progress].
4 Celhay, P. and Martinez, S. (2025). ADN Dignidad long-term impact evaluation. 3ie Impact Evaluation Report 145. New Delhi: International Initiative for Impact Evaluation (3ie). Available at: https://doi.org/10.23846/ADNIE145
5 Gaarder, M. M. and Annan, J. (2013). Impact Evaluation for Peacebuilding: Challenging Preconceptions. In Winckler, O., Kennedy-Chouane, M. and Bull, B. (eds.), Evaluation Methodologies for Aid in Conflict. Routledge.
6 Paluck, E. L. (2009). Reducing Intergroup Prejudice and Conflict Using the Media: A Field Experiment in Rwanda. Journal of Personality and Social Psychology 96(3): 574-587. http://www.ncbi.nlm.nih.gov/pubmed/19254104
7 Paluck, E. L. (2010). Is It Better Not to Talk? Group Polarization, Extended Contact, and Perspective Taking in Eastern Democratic Republic of Congo. Personality and Social Psychology Bulletin 36(9): 1170-1185.