
Scepticism of “evidence-based policy” is a good thing

    A few weeks ago I wrote an article for City AM explaining why I hate the term “evidence-based policy”. It’s chucked around like confetti in the wonk-ish world of think-tanks, and is usually met with unthinking, knowing nods of approval whenever it is said. “We need to establish what works,” it is exclaimed, “that’s why we need evidence-based policy”. And people react as if the orator has said something as profound as the ‘I Have A Dream’ speech.

    What a great development, everyone sits around thinking. Why did nobody think of this before? Yes, let’s look at facts and try to measure the effects of different policies, and make judgments on what is best based on those facts. It’s so simple, and yet so revolutionary!

    OK, that’s enough sarcasm, but you get my drift. The truth is that everyone uses evidence in formulating or backing up a case for a particular idea, but there are usually huge, unrecognised leaps of faith between this and the declaration that there is ‘clear evidence’. What annoys me, though, is that people use the term “evidence-based policy” as a way of trying to disarm perfectly reasonable alternative public viewpoints. An IEA report out today makes this case explicitly by looking at a few different examples of real-world policy making (on minimum alcohol pricing, climate policy, passive smoking and happiness engineering) and how people wittingly or unwittingly make assumptions or judgements that mean their evidence is incomplete, or fail to consider alternative perspectives.

    Let’s think about this logically. Suppose in society there is something that is recognised by a broad range of people as being “a problem”. What is the first question that should occupy us? Is it: what can the Government do to solve this problem? Many people would probably answer in the affirmative. But a leap of faith has already been made. It is a judgement in itself to assume that a policy response is required from government, as opposed to individuals. Immediately, then, there are already two streams of evidence to consider: evidence on what policy responses from government may work to alleviate “a problem”, and evidence on what factors – aside from government – can lead to or ameliorate “a problem” in the first place.

    OK, so suppose we’ve got over this hurdle. There is something considered “a problem” which government has tasked itself with dealing with. The next question is “what evidence am I interested in?” In Jamie Whyte’s piece, he takes the example of alcohol consumption, which many people consider to be “a problem”. Proponents of minimum alcohol pricing, for example, explain that there is evidence that a higher minimum price leads to less drinking, and thus ameliorates many of the costs of alcohol consumption. They present this as clear evidence. After all, what clearer evidence could there be than the effects on consumption of changes in price?

    But already there are several more big leaps of faith. First of all, the researchers have only taken into consideration the costs of alcohol, not the benefits. People obviously place a value on drinking, and so you’d expect their utility to fall if some of them were priced out of drinking as much. Second, the analysis ignores substitution costs. Some people who are priced out of alcohol consumption might decide instead to purchase other drugs or substances, brew their own or buy on the black market. This has other costs not measured within their basic model. And third, even if we take the researchers’ arguments as given, the fact that they haven’t advocated much, much higher minimum alcohol prices suggests they recognise a trade-off between reducing alcohol consumption and liberty, but there’s no modelling of what the value of liberty is.

    There are some other great examples of the difficulty of evidence-based policy in the paper. One is that valuing externalities or “social costs” is often extremely subjective and difficult, but the final evidence is then presented as hard. Another is that people undertaking “evidence-based policy” are presented as if they do not have their own ideologies or vested interests. A third is that in many areas you get expert slippage – where, for example, a climate scientist might present evidence of climate change and use this credibility to advocate carbon taxes (which he/she knows little about). And fourth, the simple fact that the question of “who shall decide what is best?” is ignored – evidence-based policy is often paternalistic and imposes policies on society which people wouldn’t want. I’d add to all this that in many areas of public policy, it is incredibly difficult to run randomised controlled trials or observe natural experiments.

    Why does all this matter? Are I and others simply being defeatist about the ability to develop good policy? No. Of course we should all agree that using facts to develop policy is extremely important. And we do. But there should be much more questioning of policies which purport to be “evidence-based” than the deferential treatment we often see at present. Should this be the role of government? Who is deciding? Have we thought about possible secondary effects like substitution? Have we taken account of the loss of freedoms? What are the assumptions or priors behind what is being advocated? What information is impossible to examine?

    Healthy scepticism like this is a good thing. Because in much of public policy, there are no full solutions, only trade-offs. In fact, recognising the limits of our own knowledge and our inability to find solutions to all problems would be a great step forward in making better policy.

    Ryan joined the Centre for Policy Studies in January 2011, having previously worked for a year at the economic consultancy firm Frontier Economics.
