I recently wrote a blog post on ten things that can go wrong with randomised controlled trials (RCTs). As a sequel, which may seem a bit of a non sequitur, here are twelve tips for selling RCTs to reluctant policymakers and programme managers.

  1. The biggest selling point is that a well-designed RCT tells the clearest causal story you can get. There is, by design, no link between beneficiary characteristics and programme assignment, so any difference in outcomes must be because of the programme, not because of any underlying difference between the treatment and control groups. Moreover…
  2. RCTs are easy to understand. You just need to look at the difference in mean outcomes between treatment and control, which is easy to calculate and easy to present. It’s true that in economics we usually estimate the mean difference using a regression with control variables added, but it can readily be presented as a simple mean difference (a sketch of both calculations appears after this list).
  3. RCTs are a fair and transparent means of programme assignment. In a typical development programme, intended beneficiaries, and even agency staff, have no idea how or why communities are chosen to benefit from the programme. This is changing with RCTs, which use lotteries to choose programme beneficiaries. A lottery is much fairer than backroom deals, which may be prone to political interference, and it is conducted transparently: public drawings are held with key stakeholders present to decide who gets the programme. Even so, there are still objections that RCTs are unethical because the intervention is withheld from the control group. But,
  4. It is not necessary to have an untreated control group. An RCT may have multiple treatment arms: at its simplest, comparing intervention A with intervention B, where B may be, as with many clinical trials, what is being done already. A factorial design adds a third treatment arm which receives both A and B, helping us answer whether the two interventions work better together or separately (see the three-arm sketch after this list).
  5. RCTs can lead to better targeting. Randomisation doesn’t mean you randomise across the whole population: it occurs across the eligible population, so the intervention is still targeted as planned. And since an RCT requires you to clearly identify and list the eligible population, it may result in better targeting than would have been achieved without this discipline.
  6. Randomisation doesn’t have to interfere with programme design or implementation. For a start, you don’t need to randomise across the whole intended beneficiary population. Once you do the power calculations, you will know the sample size required for randomisation (a sketch of such a calculation appears after this list), and for a large programme it is likely that only a subset of the intended beneficiary population is required. The programme managers can do what they like with the rest, which may well be the majority.
  7. Or minor adjustments can be made to the eligibility criteria (a ‘raised threshold design’) to yield a valid control group in a non-intrusive way. Adjusting the threshold generates oversubscription, and participants are then selected from the oversubscribed pool at random, that is, by lottery, with those not drawn forming the control group.
  8. An encouragement design randomly assigns an encouragement to participate in the programme, not the programme itself. This has no effect on how the programme is run, and it additionally yields useful information on how to increase take-up of the programme.
  9. Finally, a pipeline RCT exploits the fact that the programme is being rolled out over time, and that there are almost certainly untreated members of the eligible population who can form a valid control group. The RCT simply randomises the order in which they are treated.
  10. Well-designed RCTs can open the black box. RCTs need not focus only on the ‘does it work?’ question. They can also test variations in intervention design to determine how to make it work better. But even if a study doesn’t do that, then
  11. The black box can be a blessing, not a curse, since RCTs cut through complexity. The causal chain of a programme can sometimes be too complex to unravel, yet an RCT can still establish whether the programme as a whole works. So, in conclusion,
  12. It is unethical not to do RCTs. Without RCTs, we are spending billions of dollars on programmes for which, at best, we have no evidence, and in reality many of these programmes are not working. So it’s better to spend a few million on well-designed RCTs than billions on failed development programmes.
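To illustrate point 2, here is a minimal sketch in Python on simulated, entirely hypothetical data: the treatment effect estimated as a simple difference in means, and essentially the same estimate recovered from a regression with a control variable added. The variable names and effect sizes are illustrative assumptions, not figures from any real trial.

```python
# Point 2 sketch: difference in means vs. regression with a control,
# on simulated data (all numbers are illustrative assumptions).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
treat = rng.integers(0, 2, n)                      # random assignment: 0 = control, 1 = treatment
age = rng.normal(35, 10, n)                        # an illustrative control variable
y = 2.0 * treat + 0.1 * age + rng.normal(0, 1, n)  # outcome with a true effect of 2.0

# The simple difference in mean outcomes between treatment and control
diff_in_means = y[treat == 1].mean() - y[treat == 0].mean()
print(f"Difference in means: {diff_in_means:.2f}")

# A regression of the outcome on treatment plus the control gives essentially the same answer
X = sm.add_constant(np.column_stack([treat, age]))
ols = sm.OLS(y, X).fit()
print(f"Regression estimate:  {ols.params[1]:.2f}")
```

Because assignment is random, adding controls barely moves the estimate; it mainly tightens the standard errors, which is why the result can still be presented as a simple mean difference.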
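For point 4, the sketch below assumes a hypothetical three-arm trial with interventions A, B, and A + B, assigned at random, and simply compares mean outcomes by arm; the arm labels and effect sizes are invented for illustration.

```python
# Point 4 sketch: a three-arm design (A, B, and A + B) with random assignment,
# on simulated data (all effects are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(7)
arms = ["A", "B", "A+B"]
n = 900
assignment = rng.choice(arms, size=n)  # random assignment to one of the three arms

# Illustrative outcomes: A adds 1.0, B adds 0.5, and the combination adds a further 0.3
effect = {"A": 1.0, "B": 0.5, "A+B": 1.8}
y = np.array([effect[a] for a in assignment]) + rng.normal(0, 1, n)

for arm in arms:
    print(f"Arm {arm:>3}: mean outcome = {y[assignment == arm].mean():.2f}")
# Comparing the A+B arm with the single-intervention arms shows whether
# the interventions work better together than separately.
```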
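And for point 6, a minimal power-calculation sketch using statsmodels; the minimum detectable effect size (0.2 standard deviations), 5% significance level, and 80% power are illustrative assumptions rather than recommendations.

```python
# Point 6 sketch: sample size needed per arm for a two-arm comparison of means.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Required sample size per arm: {round(n_per_arm)}")  # roughly 400 per arm under these assumptions
```

For a large programme, a few hundred units per arm is often only a small fraction of the eligible population, which is exactly the point of tip 6.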

Given all these arguments, it is not surprising that there have been hundreds of RCTs of development programmes in recent years. But there are still many gaps in our knowledge of what works, and why, in achieving development goals. So let’s have more and better studies for better policies and programmes and better lives.
