M&E: A tale of two brothers

Birindwa Pascal

Monitoring and evaluation (M&E) are always mentioned together, but in practice the two disciplines pretty much avoid each other. This is despite the fact that they could be highly beneficial to each other and, if carefully combined, to the intervention as well.

Howard White, in his opening remarks at 3ie’s recent Measuring Results conference, emphasised the need for the ‘harmonisation’ of M&E. Evaluators, he argued, can use monitoring data to verify the theory of change of an intervention. An impact evaluation could, for example, use monitoring data to identify internal implementation bottlenecks. But in reality, impact evaluators rarely go beyond using monitoring data on take-up or attendance.

So, how can we step up our monitoring game? Monitoring information systems could go beyond capturing information on standard outputs and collect data that tests whether the intervention is working.

Let’s look at an intervention aimed at teaching children in India about gender equity. There are a number of ways that monitoring and evaluation can supplement and complement each other here. The NGO already uses its monitoring system to capture children’s enrolment and attendance. But it could also capture relevant information on the field staff. The teachers could be tested for their familiarity with the training material. The monitoring system could also use vignettes to assess whether the attitudes and behaviour of the teachers are in line with the prescribed code of conduct. (Most of the discussion on the importance of mixed methods has focused on impact evaluation, but monitoring could also benefit from a little more qualitative investigation, as Bamberger et al. claim.)

By expanding the set of questions that the monitoring system collects, the NGO could get a whole lot of relevant quantitative and qualitative information. This information would enable monitoring staff to identify the teachers who require extra training and the areas where they require it. From an impact evaluator’s perspective, this information can also feed into the investigation of the causal chain of the intervention. And any impact evaluator would be happy to access data on possible heterogeneity across teachers.
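To make this concrete, here is a minimal sketch of how such a flagging exercise could look in practice, assuming a hypothetical monitoring extract with one row per teacher. The column names, scores and threshold are invented for illustration; they are not the NGO’s actual indicators.

```python
# A minimal sketch, assuming a hypothetical monitoring extract with one row
# per teacher: a knowledge-test score and a vignette-based conduct rating.
import pandas as pd

monitoring = pd.DataFrame({
    "teacher_id": [101, 102, 103, 104],
    "material_test_score": [0.92, 0.58, 0.75, 0.41],    # share of test items correct
    "vignette_conduct_score": [0.85, 0.70, 0.40, 0.55],  # alignment with the code of conduct
})

# Flag teachers who fall below an illustrative threshold on either dimension,
# so that monitoring staff can target refresher training where it is needed.
THRESHOLD = 0.6
needs_training = monitoring[
    (monitoring["material_test_score"] < THRESHOLD)
    | (monitoring["vignette_conduct_score"] < THRESHOLD)
]
print(needs_training)

# The same table doubles as teacher-level data for an impact evaluation,
# e.g. the spread in familiarity with the material across teachers.
print(monitoring["material_test_score"].describe())
```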

The collaboration between monitoring and impact evaluation thus does not have to be a one-way street. An impact evaluation can also offer supplementary data that can be used to validate the monitoring information system’s data. But for this to happen, monitoring and impact evaluation need to use identical indicators.
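If both systems measured, say, attendance for the same children under the same indicator definition, the cross-check could be as simple as the sketch below. The identifiers, values and tolerance are assumptions for illustration only.

```python
# A sketch of cross-validating the two data sources, assuming both record
# the same indicator (attendance rate) for the same children.
import pandas as pd

monitoring = pd.DataFrame(
    {"child_id": [1, 2, 3], "attendance_rate": [0.90, 0.75, 0.60]}
)
evaluation_survey = pd.DataFrame(
    {"child_id": [1, 2, 3], "attendance_rate": [0.88, 0.74, 0.80]}
)

merged = monitoring.merge(
    evaluation_survey, on="child_id", suffixes=("_monitoring", "_evaluation")
)
merged["discrepancy"] = (
    merged["attendance_rate_monitoring"] - merged["attendance_rate_evaluation"]
).abs()

# Records that disagree by more than an illustrative tolerance are
# candidates for auditing in either system.
print(merged[merged["discrepancy"] > 0.10])
```

The point is not the code but the precondition: without identical indicator definitions, no merge will make the two sources comparable.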

Naturally, Howard White is not the only one convinced that an end to the brotherly quarrel would help advance both disciplines. Several commentators point out the practical problems preventing this harmonisation between monitoring and impact evaluation. Duvendack and Pasanen argue that monitoring does not provide sufficient information on intermediary outcomes. They also blame the lack of collaboration between monitoring and evaluation on the poor quality of monitoring data and the lack of insights that the data produce on causality.

I agree and disagree with them. Yes, monitoring does not always produce high-quality data. But several implementing agencies are setting up reliable computer systems and overcoming this hurdle. This means that data is becoming available in an easily digestible format for impact evaluators. Although one should not overstate the contribution of digitisation to data quality, it is still a big leap from storing files in boxes.

While it may be true that the lack of exogenous variation in monitoring data prevents us from making any causal statements, it can be argued that this is not part of monitoring’s mandate. We can, however, still use monitoring data to draw interesting conclusions. Another presenter at 3ie’s conference, Prof. Laxminarayan from the Public Health Foundation of India, gave an example that illustrates this. The study he presented featured a monitoring system for the storage of vaccines. Temperature loggers are placed in vaccine cold storage units to report on temperature fluctuations and send warning messages if the temperature rises above or drops below the safe range, since both excessive and insufficient cooling damage the vaccines.

The data produced by this system is highly reliable and the results of the study will be important. The loggers will reveal that the vaccines undergo large fluctuations in temperature – a fact that would otherwise have remained unrecognised.
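The alert mechanism itself is easy to picture in code. The following is a minimal sketch of the logic such a logger might implement; the safe band (2–8°C is a common cold-chain range) and the readings are assumptions for illustration, not details of the study.

```python
# A minimal sketch of threshold-based temperature alerting, assuming a
# safe band of 2-8 degrees Celsius (a common cold-chain range).
from typing import Optional

SAFE_MIN_C = 2.0
SAFE_MAX_C = 8.0

def check_reading(temp_c: float) -> Optional[str]:
    """Return a warning message if a reading leaves the safe band."""
    if temp_c > SAFE_MAX_C:
        return f"WARNING: {temp_c:.1f} C is above the safe maximum ({SAFE_MAX_C} C)"
    if temp_c < SAFE_MIN_C:
        return f"WARNING: {temp_c:.1f} C is below the safe minimum ({SAFE_MIN_C} C)"
    return None

# Simulated readings; in the field these would stream in from the logger.
for reading in [5.1, 9.3, 4.8, 0.6]:
    alert = check_reading(reading)
    if alert:
        print(alert)  # in practice, dispatched as a warning message (e.g. SMS)
```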

So what is the way forward in bringing about harmony between M&E? In their working paper ‘It’s all about MeE’, Pritchett et al. suggest hammering out an operational plan that combines monitoring (M), rigorous impact evaluation (E) and experiential learning (e), and feeds the results back into the intervention design. It remains to be seen, as well as tested, whether this approach will be applied and whether it will succeed in bridging the gap between M&E.

There are also ways that 3ie can contribute. For example, we could require grant applicants to describe the monitoring data that is available to them for the intervention they want to assess. We could also ask how that data will be used and what additional questions the monitoring system will pursue in order to support and improve the evaluation. For all of this to happen, the study team and the intervention staff would need to sit down together and devise a strategy for integrating monitoring and rigorous impact evaluation.
