Tips on selling randomised controlled trials


Development programme staff often throw up their hands in horror when they are told to randomise assignment of their intervention. “It is not possible, it is not ethical, it will make implementation of the programme impossible”, they exclaim.

In a new paper in the Journal of Development Effectiveness I outline how different randomised controlled trial (RCT) designs overcome all these objections. Randomisation need not be a big deal.

When we randomise, we obviously don’t do it across the whole population; we randomise only across the eligible population. Conducting an RCT therefore requires that we first define and identify who is eligible. This is a good thing: designing an RCT can improve targeting by making sure the eligible population is identified properly.

It is very rare that the entire eligible population gets the programme from day one. Usually some of the eligible population are excluded, at least temporarily, because of resource or logistical constraints. We can exploit this ‘untreated but eligible’ group to form a valid control group. A common way of doing so is to ‘randomise across the pipeline’. Let’s say a programme is to reach 600 communities over three years. These communities can be divided into three equal groups entering the programme in years 1, 2 and 3, and we randomly choose which community goes into which group. This approach was used in the evaluation of the well-known Progresa conditional cash transfer programme in Mexico, where the communities receiving the intervention in year 3 acted as the control group for the first two years.
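As a minimal sketch of what randomising across the pipeline involves in practice (the community names and the choice of Python are mine, not from the paper), the assignment is just a shuffle and a three-way split:

```python
import random

# Hypothetical list of 600 eligible communities
communities = [f"community_{i}" for i in range(600)]

random.seed(42)  # fix the seed so the assignment can be audited and reproduced
random.shuffle(communities)

# Three equal entry cohorts; the year-3 cohort serves as the control
# group during the first two years of the programme
cohorts = {
    "year_1": communities[:200],
    "year_2": communities[200:400],
    "year_3": communities[400:],
}
```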

If the entire eligible population can be treated, then we can use a ‘raised threshold design’, in which the eligibility criteria are slightly relaxed to expand the eligible group. For example, a vocational training programme in Colombia admitted around 25 trainees to each course. Course administrators were asked to identify 30 entrants from the 100 or so applicants for each course. Twenty-five of the 30 were randomly picked to be admitted, while the remaining five entered the comparison group. This approach made virtually no difference to the programme: the same number would have taken the course, and the same number would have been rejected, even in the absence of randomisation.
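The lottery itself is equally simple. A hedged sketch, again in Python with hypothetical applicant names:

```python
import random

# The 30 entrants shortlisted by course administrators (hypothetical names)
shortlist = [f"applicant_{i}" for i in range(30)]

random.seed(7)
admitted = random.sample(shortlist, 25)                   # 25 randomly admitted
comparison = [a for a in shortlist if a not in admitted]  # the remaining 5
```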

The ‘raised threshold design’ can also be applied geographically. If you plan to implement a programme in 30 communities, identify 60 which are eligible and randomly assign half to the comparison group.

And you need not randomly assign the whole eligible population. In the first example with 600 communities, power calculations would probably show that at most 120 of the 600 are needed for the evaluation. So for the other 480 communities, that is 80 per cent of the eligible population, the programme can be implemented in whatever way the managers want. It is only for the remaining 20 per cent that we need to randomise the order in which communities enter the programme. Everyone gets the programme, and in the planned time frame; as evaluators, we are just requesting a change in the order for a small share of them. Randomisation is no big deal.
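To make the arithmetic concrete, here is a sketch under the same illustrative assumptions (600 eligible communities, of which 120 are needed for the evaluation):

```python
import random

communities = [f"community_{i}" for i in range(600)]

random.seed(123)
evaluation_sample = random.sample(communities, 120)  # the 20% needed for the evaluation

# Randomise entry year only within the evaluation sample: 40 communities per cohort
random.shuffle(evaluation_sample)
eval_cohorts = {
    "year_1": evaluation_sample[:40],
    "year_2": evaluation_sample[40:80],
    "year_3": evaluation_sample[80:],
}

# The remaining 480 communities can be phased in however managers prefer
in_sample = set(evaluation_sample)
managed_freely = [c for c in communities if c not in in_sample]
```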

There may still be objections to RCTs in cases where the control group does not receive the programme. And indeed, in clinical trials it is the norm that the control group receives a treatment rather than no treatment, usually the existing one. We can do the same when we evaluate development interventions. In any case, policymakers are more likely to want to know how the new programme compares to existing programmes than how it compares to doing nothing. Or the control group can get some basic package (A), while the treatment group receives the basic package plus some other component we think increases effectiveness (A+B). Or we can use a three-arm factorial design: A, B and A+B.
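Assignment to the three arms follows the same shuffle-and-split pattern; a sketch with hypothetical participant identifiers:

```python
import random

participants = [f"participant_{i}" for i in range(90)]

random.seed(1)
random.shuffle(participants)

# Three arms: basic package only (A), new component only (B), and both (A+B)
arms = {
    "A": participants[:30],
    "B": participants[30:60],
    "A+B": participants[60:],
}
```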

And finally, for programmes which are indeed universally available, such as health insurance, an encouragement design can be used. These designs randomly allocate an encouragement to participate, such as an information campaign, to one group. The encouragement creates a new group of programme participants, whose outcomes can be compared to those in areas which have not received the encouragement, allowing the impact of the programme to be calculated. These designs do not affect the programme in any way, other than to increase take-up.
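The post does not spell out the estimator, but the standard way to recover the programme’s impact from an encouragement design is the Wald (instrumental variable) ratio: the difference in mean outcomes between encouraged and non-encouraged areas, scaled by the difference in take-up between them. A sketch:

```python
def wald_impact(outcome_encouraged, outcome_control,
                takeup_encouraged, takeup_control):
    """Programme impact implied by an encouragement design: the outcome
    difference attributable to the encouragement, scaled by how much the
    encouragement raised take-up (the Wald / instrumental variable estimator)."""
    return (outcome_encouraged - outcome_control) / (takeup_encouraged - takeup_control)
```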

So randomisation is indeed not a big deal. Various evaluation designs make little actual difference to the intervention. And what about ethics, you may ask. In most cases it is unethical not to do an RCT if you can. We don’t know if most programmes work or not, and so we need rigorous evaluations to provide that information.


Authors

Howard White, Director, GDN Evaluation and Evidence Synthesis Programme

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.

