Insights from the Development Evidence Portal – the Middle East and North Africa
Our latest piece in the ‘state of the evidence’ series – which interprets data by unpacking the studies in our Development Evidence Portal (DEP) – focuses on the Middle East and North Africa (MENA) region. The DEP currently includes 740 impact evaluations (IEs) and 352 systematic reviews (SRs) from the region, the second-smallest evidence base among regions containing low- and middle-income countries (L&MICs). Interested readers are encouraged to further explore the DEP for additional data not covered in this blog.
Click here to explore the 3ie MENA dashboard
Let’s look at the three key trends we find in the dashboard:
1) Evidence from the region is heavily concentrated in one country, Iran. Over 60% of the IEs from the MENA region (475 in total) evaluate an intervention within Iran, and no other country accounts for more than 70 IEs. The only two other countries with more than 50 IEs are Egypt (61) and Jordan (53). Therefore, when considering regional trends, it is important to examine trends within Iran and trends in the rest of the region separately.
2) Health studies dominate the region, but outside of Iran there is greater variation amongst the sectors evaluated. Health studies account for 71% of all IEs within the region, a figure driven largely by Iran, where 84% of IEs are health focused; outside of Iran, less than half of the studies evaluate a health sector intervention. Looking beyond Iran also highlights a greater focus on other sectors. Only 3% of IEs from Iran evaluate social protection interventions, while this figure rises to 20% across the other countries in the region.
3) Though the majority of research on Iran comes from locally affiliated researchers, this share drops significantly across the rest of the region. In contrast to the wider trend of limited local authorship that accompanied the rise in the number of IEs throughout the 2010s, 97% of IEs evaluating an intervention within Iran have at least one author affiliated with an institution based within the country. Outside of Iran, this share drops significantly. Countries such as Morocco and Yemen, which have moderate numbers of IEs for this region, have fewer than 50% of IEs with at least one author affiliated with a local institution.
What are the consequences of having lower levels of local input in research?
To help us understand the possible reasons behind this and its consequences, Hesham Shafick – a political economist and public policy advisor whose work focuses on how post-colonial relations shape North-South development cooperation – shares a short note with his reflections.
Hesham Shafick, Partner & Programme Director, Synerjies Center for International and Strategic Studies
“It is noteworthy that the share of local researchers authoring IEs outside of Iran drops significantly, with local authors representing a minority of IEs for half of the countries in the MENA region. This not only underscores the region's reliance on external expertise for technically demanding research, but also contributes to that reliance's perpetuation by hindering the growth of local capacities in the field of IE.
However, this represents only a fraction of the issue. The most crucial element of this imbalance, I contend, lies in its role in distorting local priorities by placing the ultimate authority for evaluating program performance in the hands of external experts. Typically, both local and external program designers and implementers depend heavily on evaluation reports to guide their corrective actions, subsequent iterations, and future strategies. As a result, the geographical concentration of powerful IE institutions indirectly contributes to the concentration of power over program priorities and performance indicators.
Originating institutionally and culturally from distant contexts, evaluation documents incrementally draw attention to both issues and measurements that may not necessarily align with the priorities and perspectives of the local context. This distortion not only affects the contextual relevance of corrective interventions but also influences the narrative and logic of implemented development programs writ large, as it creates an incentive to tailor both pre-planned and reported performance indicators to the preferences of external evaluators.
Most troubling is the compounding effect of such an externalization of IE activities. The sway of these external subjectivities not only perpetuates dependence on external evaluators but also tilts the evaluation agenda toward priority areas, performance indicators and other facets of impact evaluation in which external expertise is considered more relevant. This, in turn, intensifies the trend of channeling most IE assignments to non-MENA institutions, a practice which constrains the growth potential of local and regional institutions and hence reinforces the cyclical self-perpetuating nature of the situation criticized.
The influence of this regional imbalance extends beyond shaping the priorities of programs subjected to evaluation; it also affects the selection of which programs are evaluated and which are not. The emphasis on the health sector (trend 2) attests to this. The sector constitutes no more than 5.5% of the Egyptian government’s total spending and 4.37% of Egyptian GDP, yet accounts for 57% of IEs from Egypt. If the share of government spending indicates political priority and the share of GDP indicates societal and economic priority, then in neither case does the sector's priority come close to that given to it by evaluation institutions.
My argument does not insinuate a deliberate plot to manipulate impact evaluation in favor of external stakeholders. Instead, it underscores an unintended outcome of the concentration of IE activities in non-local, primarily western, settings. This outcome arises from the fact that evaluators are humans, inherently shaped by their own contexts, priorities, frameworks, and subjectivities.
One solution to the problem indicated would be intensifying the focus on capacity development for researchers in the region. But without a sustainable stream of opportunities for these researchers to utilize these capacities, test them out in the real world, and build careers from them, such capacity-sharing measures would not only go to waste, but could even become yet another west-emigrating budget item on local IE consultants’ balance sheets."
Read the first two parts of the series for insights on the Latin America and the Caribbean and Sub-Saharan Africa regions.