
Three barriers that make it hard for policymakers to use the evidence that development researchers produce

Blog Post Date: 20 September 2017

Adnan Khan

London School of Economics (LSE)

a.q.khan@lse.ac.uk


Asim Khwaja

Harvard Kennedy School

khwaja@harvard.edu

Over the past two decades, there has been a global surge in policy research geared towards promoting evidence-based policymaking. But can policymakers put this evidence to use? Based on a survey of civil servants in India and Pakistan, this column finds that simply presenting evidence to policymakers doesn’t necessarily improve their decision-making.



In international development, the ‘evidence revolution’ has generated a surge in policy research over the past two decades. We now have a clearer idea of what works and what doesn’t. In India, performance pay for teachers works: students in schools where bonuses were on offer got significantly higher test scores (Muralidharan and Sundararaman 2011). In Kenya, charging small fees for malaria bed nets doesn’t work – and is actually less cost-effective than free distribution (Cohen and Dupas 2010). The American Economic Association’s registry for randomised controlled trials now lists 1,287 studies in 106 countries, many of which are testing policies that may very well be scaled up.

But can policymakers put this evidence to use?

Here’s how we did our research

We assessed the constraints that keep policymakers from acting on evidence. We surveyed a total of 1,509 civil servants in Pakistan and 108 in India as part of a programme called Building Capacity to Use Research Evidence (BCURE), carried out by Evidence for Policy Design (EPoD) at Harvard Kennedy School and funded by the British government.

We found that simply presenting evidence to policymakers doesn’t necessarily improve their decision-making. The link between evidence and policy is complicated by several factors.

1. There are serious constraints in policymakers’ ability to interpret evidence

We asked civil servants to interpret numerical information presented in a two-by-two table, shown below, that displays the crop yields obtained by farmers who had or had not used a new seed variety. We asked whether farmers who had used the new seeds were more likely to see their crop yields rise than those who had not.

To answer the question, respondents had to calculate ratios and note that among farmers using the new variety, about three times as many saw yields increase as saw them decrease – while among farmers who didn’t adopt the new variety, about five times as many saw yields increase as saw them decrease. Comparing these ratios shows that farmers using the old seed variety were the ones more likely to see crop yields rise.

Figure: Farmer yield question

Our civil servants couldn’t interpret the table; their answers were no more accurate than if they had guessed randomly. When we asked open-ended questions, we found that many respondents misinterpreted the data: they compared only the numbers in the top row, or only those in the left column, without converting them into ratios.
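To make the required arithmetic concrete, here is a minimal sketch of the ratio comparison in Python. The counts are illustrative only (chosen so the ratios roughly match the ‘about three times’ and ‘about five times’ described above), not the actual figures shown to respondents.

```python
# A minimal sketch of the ratio comparison the question required.
# The counts below are illustrative (not the survey's actual figures),
# chosen so the ratios roughly match those described in the text.

new_seeds = {"yield_up": 223, "yield_down": 75}   # farmers who adopted the new variety
old_seeds = {"yield_up": 107, "yield_down": 21}   # farmers who kept the old variety

ratio_new = new_seeds["yield_up"] / new_seeds["yield_down"]   # ~3 increases per decrease
ratio_old = old_seeds["yield_up"] / old_seeds["yield_down"]   # ~5 increases per decrease

# The common mistake: comparing raw counts in a single row or column
# (223 vs. 107 increases) makes the new seeds look better.
# Comparing the ratios gives the correct answer.
better = "new variety" if ratio_new > ratio_old else "old variety"
print(f"New variety ratio: {ratio_new:.1f}, old variety ratio: {ratio_old:.1f}")
print(f"Farmers using the {better} were more likely to see yields rise.")
```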

We also showed respondents a bar chart displaying unemployment rates in three districts and asked which district had the largest number of unemployed people, without giving them the population data. Only 17% of mid-career civil servants in Pakistan responded correctly that they did not have enough information to answer.
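The distinction between a rate and a count is easy to illustrate with a short sketch. The districts and figures below are hypothetical (they are not the ones shown in our bar chart): the district with the lower unemployment rate can still have the larger number of unemployed people once the size of its labour force is taken into account.

```python
# A hypothetical illustration of why the bar-chart question could not be
# answered: an unemployment rate only yields a count of unemployed people
# once the size of the labour force is also known.

districts = {
    # name: (unemployment rate, labour force)
    "District A": (0.12, 200_000),   # higher rate, smaller labour force
    "District B": (0.06, 900_000),   # lower rate, larger labour force
}

for name, (rate, labour_force) in districts.items():
    unemployed = rate * labour_force
    print(f"{name}: {rate:.0%} unemployment rate, {unemployed:,.0f} people unemployed")

# District A has twice the rate, but District B has more than twice as many
# unemployed people; a chart of rates alone cannot say which number is larger.
```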

Civil servants in India and Pakistan are well educated and selected through competitive exams. Our results are consistent with psychology researchers’ findings that common heuristics – rules of thumb we use to make complex decisions quickly – inhibit people without statistical training from correctly interpreting quantitative information. In a study of US adults, for example, only 41% correctly answered a question like the one about farmer yields above (Kahan et al. 2013).

When we asked survey respondents what barriers they faced in relying on evidence to make decisions, they said the most serious were a lack of training in data analysis and in how to evaluate and apply research findings. Such training might be useful. Our colleague Dan Levy and others at Harvard Kennedy School used these insights to design digital learning modules for using data and evidence.

2. Organisational and structural barriers get in the way of using evidence

The policymakers we surveyed agreed that evidence should play a greater role in decision-making, but offered a variety of reasons why it does not. Notably, few mentioned that they had trouble getting data, research, or relevant studies. Rather, they said their departments lacked the capacity to analyse or interpret data, that they had to make decisions too quickly to consult evidence, and that they weren’t rewarded when they did. Simply disseminating evidence, then, may not be enough.

In conversations, civil servants told us that their organisations centralised decisions, strongly favoured the status quo, and discouraged innovation. Here’s how one civil servant in Lahore, Pakistan, put it: “The people at the top are not interested in evidence. They want things done their way irrespective of the outcome.”

3. When presented with quantitative vs. qualitative evidence, policymakers update their beliefs in unexpected ways

We then tested whether our Pakistani policymakers were more likely to change their views about a government policy after we presented them with supporting evidence that was either ‘soft’ (anecdotal, based on a single story) or ‘hard’ (quantitative, based on a representative survey or experiment). Both types of evidence increased policymakers’ support for a policy, although, on average, hard evidence changed minds significantly more.

But the effect depended on the kind of policy in question. In most cases, support changed more in response to data: numbers supporting a reduction in customs tariffs to raise revenue, or the privatisation of the postal service, led to much greater increases in support for these policies than anecdotes did. For other policies, support changed more in response to stories. For example, we gave respondents a quote from a parent about how tracking and reporting on teacher attendance and school facilities had increased the quality of their child’s education. We offered similar stories for other policy options, but this quote struck a chord where numbers could not.

For some policies, providing evidence in favour of the policy actually reduced support. When we gave hard evidence in support of a policy, our civil servants reduced their support for four out of 12 policies – although the drop was not statistically significant. Positive stories about appointing specialists instead of generalists to senior civil service positions led to decreased support for appointing specialists.

The implication here is that decision-making depends not only on the quality of evidence that is presented, but also on the context in which this evidence is received. In cases where the policymaker holds strong beliefs and is inclined to discount evidence, an intervention to soften the policymaker’s priors may be more useful than generating rigorous evidence.
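One way to see why strongly held priors matter is a stylised Bayesian sketch (our illustration, not a model from the study): with a near-uniform prior, a handful of favourable trials shifts beliefs substantially, but against a strongly held prior the same evidence barely registers.

```python
# A stylised Bayesian illustration (not a model from the study) of why
# "softening priors" can matter more than producing more rigorous evidence:
# against a strong prior, the same evidence barely moves beliefs.

def posterior_mean(prior_success, prior_failure, data_success, data_failure):
    """Beta-binomial updating: posterior mean belief that a policy 'works'."""
    a = prior_success + data_success
    b = prior_failure + data_failure
    return a / (a + b)

evidence = (8, 2)       # hypothetical study: the policy worked in 8 of 10 trials

weak_prior = (1, 1)     # open-minded: roughly uniform prior belief
strong_prior = (2, 98)  # strongly convinced the policy does not work

print(posterior_mean(*weak_prior, *evidence))    # ~0.75: belief moves a lot
print(posterior_mean(*strong_prior, *evidence))  # ~0.09: belief barely moves
```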

Researchers and policymakers alike say that government would be more effective if decisions were made based on carefully derived evidence. But before that can happen, civil servants need better training in interpreting data and governments need to introduce systems that support and incentivise the use of evidence.

This post originally appeared as "These 3 barriers make it hard for policymakers to use the evidence that development researchers produce" in The Monkey Cage at The Washington Post on 13 August 2017.

Further Reading

  • Cohen, Jessica and Pascaline Dupas (2010), “Free distribution or cost-sharing? Evidence from a randomized malaria prevention experiment”, The Quarterly Journal of Economics, 125(1).
  • Kahan, Dan M, Ellen Peters, Erica Cantrell Dawson and Paul Slovic (2013), “Motivated Numeracy and Enlightened Self-Government”, Behavioural Public Policy, 1(1): 54-86.
  • Muralidharan, Karthik and Venkatesh Sundararaman (2011), “Teacher Performance Pay: Experimental Evidence from India”, Journal of Political Economy, 119(1): 39-77.