Errors, interests, and values: Why better advocacy may produce little change.

Richerson offers a set of strategies for scholars who seek to influence policy. Drawing on Sabatier’s Advocacy Coalition Framework (ACF), he argues that their research must be methodologically rigorous, that they must identify and enter the relevant policy networks, and that they must be willing to commit to the cause over the long term. As a plan of action for policy-minded scientists, I find the argument compelling and useful in many ways.

But let me start by taking a step back and imagining what an ideal relationship between policy and science might look like. In principle, that relationship is purely instrumental: scientists produce generalizable knowledge about the world, and policymakers use that knowledge to make effective decisions on behalf of a political community. Indeed, their respective jobs would seem to make them natural allies. The relationship works especially well when a marketplace of ideas operates to weed out falsehoods and fallacies from public discourse, leaving only fact and truth. “Merchants of doubt,” as Richerson points out, may emerge and seek to distort the proper functioning of that marketplace in the short run, but in the long run, truth prevails.

Of course, as John Maynard Keynes once remarked, “in the long run we are all dead.” It may be that, in the long run, sustained coalitional engagement can lead to desired policy outcomes. In the short run, however, several obstacles perniciously undercut the link between science, advocacy and policy. I identify three: cognitive biases that defy accurate information processing, political interests that compel elites to neglect or politicize unfavorable evidence, and the human tendency to reject evidence-based arguments in favor of moral imperatives. I discuss each in turn.

First, social and cognitive psychologists have shown that humans (and therefore policymakers) are subject to a range of biases that predispose us toward errors of judgment, with detrimental effects on decision-making and policy. We know, for example, that individuals are prone, either in general or under specific circumstances, to optimistic overconfidence (overestimating one’s probability of success), confirmation bias (discounting information that is inconsistent with one’s existing beliefs), and a general drive to reduce uncertainty and cognitive dissonance. All of these biases afflict policymakers and undermine the dispassionate collection and evaluation of evidence.

A second factor that interrupts the instrumental relationship between science, advocacy and policy is that policymakers sometimes don’t hear us simply because they have an interest in not hearing us, not because of poor science or ineffective advocacy. This is particularly troublesome because, while we can often learn about our biases and act to correct for them, elites with a vested interest in ignoring evidence can leave coalitional advocates just as ineffective as those who stand ‘outside the policy discourse.’ The problem is prevalent across policy areas, but an example from the study of intelligence failure is instructive.

In his analysis of intelligence failures, Joshua Rovner identifies three ways in which the relationship between the intelligence community and the policy community can break down. First, policymakers may simply neglect intelligence, often (though not always) because of the biases mentioned above (e.g., confirmation bias). Second, decision makers may be excessively deferential; this can backfire on scientists and undermine their legitimacy when policymakers listen but the evidence is faulty. Third, the relationship breaks down when policymakers directly manipulate the evidence generated by the intelligence community, or pressure it to produce estimates consistent with their own policy goals. The now famous case in international relations is the highly politicized claims that Iraq possessed nuclear weapons in 2003. What these failures suggest is that knowing why, when, and how policymakers respond to scientific evidence is at least as important as knowing how scientists can better organize to deliver that evidence.

I have argued that even well-organized advocacy may fail to achieve desired policy outcomes because, in essence, elites either inadvertently or purposefully fudge the evidence. A third factor that disrupts the instrumental relationship between science and policy is that political interest is often framed in terms of values, not facts. When facts collide with values, humans tend to react with outrage, in one or a combination of ways: we discredit the source (scientists!), dispute the completeness of the data, or reject the validity of the methodology.

For example, Ginges and Atran argue that people’s support for policy decisions – such as whether to go to war – is a product of deontological reasoning, meaning that we “follow a rule-bound logic of moral appropriateness” regardless of the material benefits of the policy. Policy, in other words, is often guided predominantly by moral values rather than by scientific evidence of cause and effect. When policy threatens values, scientists will need to do more than join advocacy networks and do good science; they will need to change hearts and target values – something for which they are poorly equipped. Did Americans land a man on the moon because aerospace engineers finally mastered the ins and outs of policy advocacy? Not at all. The space program was an instrument of the prevailing ideological conflicts of its time. Values propelled science, and not the reverse.

What lessons can be derived from this discussion? The first two are that advocacy, while important, must take greater stock of psychological bias and political interest; Richerson’s ‘hard-headed realism’ is incomplete without them. Elites have interests of their own and should not be conceived of merely as the ‘ear’ for which advocacy coalitions compete. The final lesson is that values trump facts. There is a reason the Golden Rule is more memorable than the Pythagorean theorem: we are a social species that prefers to cast political problems in moral, rather than causal, terms. Indeed, scientists may do well to join advocacy coalitions for this very reason, as Richerson argues. But scientists are poorly equipped to lead this fight and might consider sticking to what they do best: knowledge over rhetoric.

A final challenge is to address Richerson’s claim: “Scientists who wish to remain above the rough-and-tumble of advocacy coalition politics face a fatal problem. Policy thought pieces delivered from a completely disinterested perspective are outside the policy discourse.” I have argued that, for many issues, involvement in advocacy coalitions is unlikely on its own to have a significant impact on policy, and that when policy does change, it is not always clear that better advocacy was the cause. This argument, however, places me in an awkward catch-22: scientists who ‘get involved’ will find their evidence and arguments filtered and manipulated by policymakers, while scientists who stand above the fray will not be heard at all.

I don’t believe the situation is as grim as all that, and I don’t believe that scientists should avoid policy engagement at all costs. Certainly scientists should become involved in advocacy coalitions when their passions and values commit them to that end. When scientists blend scientific evidence with moral force, they can have great impact indeed. The political success of the Montreal Protocol is illustrative in this regard.

Although there may be conditions under which better advocacy can have great impact on policy, it is not clear to me that this pathway is either necessary or sufficient for that outcome. Scientists would therefore do well to think more carefully about what is actually in their comparative advantage to deliver, and to give greater weight to psychological bias, political interest, and sacred values when entering the battlefield of coalitional politics. Scientists who ignore these realities may suffer the fate of Alice in the Red Queen’s race in Through the Looking-Glass, running desperately faster and faster only to stay in place. We falsely imagine that a failure of effort, speed, or organization is what keeps us in place. Tragically, if our response is simply to join a running group and give it time, we may be sorely disappointed in the outcome.


One Comment

  1. Malcolm Kirkpatrick says:

    “The now famous case in international relations is the highly politicized claims that Iraq possessed nuclear weapons in 2003.”
    Nobody claimed in 2003, or at any other time, that Iraq possessed nuclear weapons. President Bush claimed that Iraqi officials had sought refined uranium ore in Niger. That is what the Niger Minister of Mines told Ambassador Wilson.