(This is a guest post by Joe Muggs)
One of the reasons I have always been a political agnostic is that I grew up in a family of health economists (I know – the glamour!), so I have witnessed again and again how solid evidence is deliberately ignored as a matter of course by policy-makers when it doesn’t suit a particular agenda. Statists, the religious of all stripes and free marketeers alike are all capable of getting incredibly antsy when presented with numbers they don’t like.
This article (preview only without subscription) in last week’s New Scientist brought back to me all those times I’ve seen beleaguered researchers and economists feeling as if they are doomed eternally to suffer from a Cassandra complex.
Because politicians are keen to look confident and decisive, they instead tend to prefer simple evaluations of policies that are “doomed to success”, according to Laurence Moore, a social scientist at Cardiff University, UK.
“They’re almost designed to show that the idea is a good idea,” he says. “It’s well intentioned, but politicians are not open to the idea that a rigorous evaluation might help them get things better. Rigorous evaluations are perceived as threatening rather than supportive of better policy.”
And yet, it suggests, randomised controlled trials, already the “gold standard” in medical and pharmaceutical research, are perfectly applicable to social policies as well.
“Relative to where we’ve got to in medicine, it’s a disaster,” says Lawrence Sherman, a criminologist at the University of Cambridge and the Jerry Lee Center of Criminology at the University of Pennsylvania, Philadelphia. “Anything involving the treatment of people from education to social services and crime prevention is mostly terra incognita.”
The piece mentions obvious examples where overtly ideological campaigns have deliberately ignored the results of rigorous trials, as with the US federal government continuing to fund “abstinence-only” sex education campaigns in schools and in the developing world – but even governments less bull-headed than the Bush administration show a distinct aversion to hard evidence.
Talking to a relative about this topic at the weekend, I was saddened, though not surprised, to hear that when the Department of Health recently put the evaluation of the Sure Start Initiative (a programme to co-ordinate social care from birth for children of low-income families) out to tender, they stipulated that this must not be done by randomised controlled trial. Another programme, the Nurse Family Partnership Scheme, is to be evaluated by RCT, but only because David Olds, the pioneering University of Colorado professor who devised the NFPS, has insisted that it be rigorously trialled in each new territory into which it is introduced, as it already has been in the States – an insistence which is apparently the cause of immense consternation in the DoH.
RCTs are not a magical gateway to the truth. They can be manipulated by testing only for aspects that are likely to give a ‘favourable’ result, or by testing comprehensively but releasing only the favourable results – though legislation looks likely to deal with these practices following the controversy about the hidden side-effects of SSRIs, and is being talked about as a way to force cosmetics companies to release the results of the millions of dollars’ worth of research they do each year, even when this shows little benefit from using their products. Harvesting data about individuals in order to evaluate social policies is bound to get privacy campaigners up in arms. But RCTs are the single best tool we have for seeing whether policies actually work, something which is particularly useful with policies that seem counter-intuitive: the only reason that Restorative Justice schemes can make it past the Daily Mailites of this world – who scream “soft option!” whenever anything so wet-liberal as talking is suggested for the justice system – is that researchers like Lawrence Sherman have proved that they work.
Sherman calls the beginnings of using RCTs in social policy “a step towards a rational society and a fulfilment of the 18th-century Enlightenment”, but the consensus amongst the researchers interviewed by New Scientist is that social policy is some 50 years behind medicine in this regard. Things are changing: the Social Care Institute for Excellence, a sister organisation to the National Institute for Clinical Excellence, is about to begin operations, and scientists like David Olds are beginning to insist on proper evaluation of the policies they help develop. But will any government ever have the courage to stand up and say that as a matter of policy they are willing to have their initiatives put to the test? Or is the natural politician’s terror of being proved wrong too ingrained?