The psychology of reasoning has yielded many surprising and seemingly discouraging results for decades, but no convergence whatsoever in explanations of reason—until Hugo Mercier and Dan Sperber’s well-received and highly original The Enigma of Reason (Harvard, 2017). Brian Boyd interviews the authors about their findings.

Introduction/Exposition

Brian Boyd: You write that reason has often been seen as a superpower, but that so much evidence in the psychology of reasoning over the last 50 years shows that humans usually reason badly. You wryly conclude: “the idea of a failed superpower makes little sense.” Many will know some but not all of the evidence showing the weakness of reason. Can you summarize or illustrate the most telling kinds of evidence of people’s failures in reasoning?

Hugo Mercier and Dan Sperber: Psychologists love devising tricky problems, problems which have a compelling intuitive answer that turns out to be wrong. Take for example the well-known bat and ball problem:

If a baseball and a bat cost $1.10 together, and the bat costs $1.00 more than the ball, how much does the ball cost?

Most people initially think that the correct answer is 10c… when in fact it is 5c (we’ll let the readers work this one out, if they haven’t already). Such problems seem to illustrate the superiority of reason over intuition: our intuitions make a basic mistake, which can then be corrected by reason. The issue is that, actually, most people don’t correct their intuitions! Everyone who’s gone to school could, in principle, solve this problem but most people get stuck on the 10c answer. Even in a situation which would be ideal for reason to correct mistaken intuitions, it fails abysmally.
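For readers who would rather verify than compute, the problem reduces to two constraints—ball + bat = 110 cents and bat = ball + 100 cents—which a brute-force check (our illustration, not part of the interview) settles immediately:

```python
# Brute-force check of the bat-and-ball problem.
# Prices are in cents to avoid floating-point issues.
for ball in range(0, 111):
    bat = ball + 100          # the bat costs $1.00 more than the ball
    if ball + bat == 110:     # together they cost $1.10
        print(f"ball = {ball} cents, bat = {bat} cents")
```

Running this prints `ball = 5 cents, bat = 105 cents`—the intuitive answer of 10 cents fails because it would make the pair cost $1.20.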

And why reason fails is even more damning. We fail at such tasks because, when we produce reasons, we are biased and lazy. Biased because we overwhelmingly look for reasons that support whatever belief or decision we have already reached intuitively—we have what we call a “myside bias.” Lazy because we don’t bother checking whether these reasons are really good (and even when we do, we are still biased and tend to over-evaluate our own reasons). Combined, laziness and myside bias make it very unlikely that reason manages to correct our own mistaken intuitions.

BB: You contrast the standard account of reason, which you call “intellectualist,” with your own, which you call “interactionist.” What are the differences? How do the intellectualists account for the evidence about the shortcomings of reason? What do you argue reason is for? How does that function explain the shortcomings of lone reasoning?

HM & DS: According to the intellectualist position (which is held, more or less explicitly, by the majority of philosophers, psychologists, and, probably, laypeople as well), the main function of reason is to help the individual reasoner reach better decisions and sounder beliefs. It is also, if need be, to correct mistaken intuitions in the process. Our interactionist view is that the main functions of reason are social: to give reasons to justify ourselves and to produce arguments to convince others—and to evaluate these reasons.

Reason’s features are the polar opposite of what we should expect if the intellectualist position were correct—when we produce reasons, we are biased instead of objective, lazy instead of demanding. Few intellectualists deny the existence of these biases. In fact most of these biases have been discovered by them. On the other hand, intellectualists don’t have a good account for the biases of reason. They sometimes try to shift the blame and claim that these biases come from our intuitions, not from reason itself. We strongly disagree; these biases are specific to reason: only reason, for instance, has a myside bias, which would be disastrous in other cognitive mechanisms.

BB: You predict that reason will work much better in a social, dialogical, context, as a demanding and objective assessor of others’ arguments. Yet many suppose others gullible. What evidence points toward the superior performance of reason in evaluating others’ reasons or arguments rather than in producing one’s own? Have you been able to test the prediction further?

HM & DS: There is a tremendous amount of data showing that when people attempt to solve problems in small groups, whoever has the best solution is much more likely to convince the others. Or, if bits of the best solution can be found in different members, they manage to stitch them together to create a better solution. In these contexts, the production of reasons is still biased and lazy, but what explains the success of group reasoning is the evaluation of other people’s reasons, which is much more objective and demanding.

Initially, members of such groups each produce arguments in defence of their own individual viewpoint and, as a result, many of these arguments are weak. But then, in interacting with one another, people evaluate each other’s reasons, accepting the good ones, and rejecting the weaker ones, pushing those who offered weak reasons either to accept the better ones or to provide even better alternatives.

It is worth stressing that people are much better at evaluating the reasons of others than their own. With some colleagues, we have performed a series of experiments—reported in our book—that shows people often reject their own reasons once we make them believe that they are someone else’s: now they become more demanding. We’ve also shown that, not always but often enough, people are able to evaluate the reasons they are presented with objectively. As a result they may end up reaching conclusions that are at odds with what they thought before.

Testimony is something you have to accept on trust. A reason or an argument is something you can examine and evaluate on its own merits. As a result, people can recognize a good argument even when it is presented by someone they don’t trust.

BB: Some have misunderstood your social explanation of reason as implying we argue mainly to deceive and manipulate others—as a kind of late echo of the Machiavellian intelligence hypothesis. But you actually offer a very positive account of reason working well socially, even prosocially, in the context for which it was adapted. Can you offer some examples?

HM & DS: In the context of the evolution of communication, a purely, or even a mostly Machiavellian mechanism wouldn’t fly. More specifically, if listening to other people’s reasons didn’t bring a net benefit—if we were more often manipulated than enlightened—we would evolve to stop listening to reasons. Most everyday interactions in which we exchange reasons are cooperative—we are constantly giving reasons to convince people we are cooperating with to do things one way rather than another, or to justify ourselves in their eyes. When scholars (not to mention pundits) discuss reasons and arguments, however, they rarely pay attention to these everyday events. They focus instead on more salient but not at all representative interactions such as presidential debates (where the debaters are not really trying to convince one another but are playing to the gallery).

Mechanisms

BB: Your book could have been two books: one making the case for an evolutionary account of reasoning as a cognitive tool for social cooperation, and explaining the weaknesses and strengths of reason accordingly; the other explaining the mechanism of reason, as a metacognitive intuitive inferential module focused on reasons. The latter is much more technical than the former, and hard going, and still only conceptual rather than neuroscientific. Why did you fuse both parts of the inquiry into one? Do you have plans to join with neuroscientists to test solitary versus shared reasoning (the latter, admittedly, very difficult in current fMRI machines)?

HM & DS: Your question makes a contrast between, on the one hand, “conceptual” inquiry (by which you seem to mean philosophical rather than empirical) and, on the other hand, “neuroscientific” inquiry, as if the only kind of empirical and scientific approach to the mind was the neurological. We don’t buy that. Neuroscience, it is true, plays an important role in many areas of cognitive psychology, but only in combination with behavioural experiments (which have become more and more sophisticated). In some areas, neurological evidence has had a huge impact: think for instance of the discovery of the dorsal and ventral pathways in visual perception. In other areas such as the study of reasoning, the impact of neuroscientific methods has been much more limited so far.

Still, we have both been involved in studies on reasoning or argumentation combining neurological and behavioural evidence and, sure, we look forward to neurological evidence playing a greater role in the domain. We would reject, however, the view that experimental psychology work on reasoning is not scientific or is less scientific. We do not view our chapters on the mechanisms of inference and reasoning—where we discuss many experimental findings—as “only conceptual.” Note that, even if it were so, the real question would be: is the theory testable? Is it true? Testable, it is: it is in the process of being tested. As to whether it is true, let’s wait for the evidence of many more tests, and for more discussion of this evidence.

You also ask, why did we put in one and the same book our work on the mechanisms of reason and our work on its function? Because we believe that form follows function, and that we could better make sense of both by showing how they fit together. True, our chapters on how people reason are a bit more demanding than those on why they reason, but, judging from readers’ reactions, not forbiddingly so. We found developing an integrated story so much more interesting; we hope that readers will agree and find it well worth the effort.

Modularity

BB: You posit that reason is a module. That’s a term so much less in vogue than twenty years ago that many are reluctant to use it at all. What do you mean by “module”?

HM & DS: By “module” we mean, roughly, a specialized mechanism that is a relatively autonomous component of a larger system and that, typically, evolved or developed in a distinctive way. Mechanisms can be more or less specialized and autonomous and hence more or less modular, and systems too can be more or less modular in the sense of being more or less composed of modules (and sub-modules). The notion of a module so understood is common in engineering, artificial intelligence, and biology. In psychology and in philosophy of mind, we have an odd situation. Jerry Fodor introduced in his 1983 book, The Modularity of Mind, a much more restrictive notion of modularity. He did so in order to argue on two fronts. The view that perception is highly context-dependent was then quite dominant. Fodor wanted to argue that, in fact, perception is carried out by what we now call “Fodorian modules” that are specialized for processing sensory and linguistic inputs, automatic, and essentially context-independent. Central cognitive processes, he argued, are on the contrary what he called “isotropic” and “Quinean”, i.e. fully integrated, hence not modular at all. Both arguments have more or less faltered, but, in psychology and philosophy of mind, Fodor’s fairly idiosyncratic definition of a module has remained as a benchmark and is used mainly to object to any claim that this or that mechanism or system is modular.

We do not use the now obsolete Fodorian notion of a module but the ordinary one. More precisely, since we are talking about biologically evolved mechanisms, what we mean when we say that a cognitive mechanism such as reason is modular is that it is a biological module with a cognitive function. Does anybody really want to deny that there are such modules? Actually, most people working in cognitive and neurocognitive science talk quite commonly about autonomous and specialized biological mechanisms with a cognitive function but they don’t use the M-word. As you rightly note, the word “module” is not in vogue and many are reluctant to use it. So, people prefer to talk of “mechanisms,” “systems,” “processors,” “devices,” and so forth.

Why is it that so many cognitive scientists avoid the term “module”? It is, we believe, to distance themselves from evolutionary psychologists such as Cosmides, Tooby, or Pinker, whose views are seen as quite controversial (even if, actually, these views have influenced the whole field). Evolutionary psychologists have famously argued that the mind is richly or even massively modular (again, not in the Fodorian sense). The mind, as they see it, is made of many specialized and autonomous mechanisms. These mechanisms are biologically evolved, or they are acquired thanks to evolved learning mechanisms, or, we would add, they may result from a process of re-modularization of neuronal tissues with a different initial function (as in the case of the “visual word form brain area” studied by Stanislas Dehaene in his research on the psychology of reading).

Why do we go against the vogue and talk of “module”? Because we are unabashed evolutionary psychologists. Actually, one of us, Dan, was the first (in 1994) to suggest that the mind might well be “massively modular” and he has discussed modularity in detail over the years. Another reason to talk of modules is that, unlike “mechanism” and other such words, “module” goes with an adjective, “modular,” and a noun “modularity” that are useful to formulate interesting questions such as how modular is a given system, or what is the role of modularity in evolution (a question richly discussed in recent evolutionary biology).

Reason and Language

BB: As far as I recall, you never theorise in your book about the relation between language and reason and their evolution: you focus on the function of reasoning now, in modern fully language-endowed humans, from hunter-gatherers to scientists. From your account, reason is entirely dependent on a highly explicit language, one that can incorporate reasons. So could there have been nothing akin to or approaching reason before language? Or to put this another way: crows and orangutans can solve physical problems by non-routine methods. Irene Pepperberg’s African gray parrot, Alex, could solve simple problems of abstract classification and enumeration. Are these not kinds of reasoning? Is there no continuity between animal thought and human reasoning?

HM & DS: Actually, we have a section of chapter 9 entitled “Reason relies on language.” More generally, what we argue is that reasons are used in communicative interaction. It goes without saying that such interactions would be so much more limited without language. We assume then that the selection pressures that favoured the evolution of reason came only in full force once some form of verbal communication had already evolved. That said, reason is a distinct mechanism, and the linguistic creatures that we are can also deploy reason in non-verbal communicative interactions: pointing, or showing something can, in many circumstances, serve to convey a reason.

In other animals, what we find are mechanisms of inference, some quite elaborate, involved in various instances of problem-solving. These inferences are not about reasons and hence are not examples of reason in our sense. There may well be some continuity between non-human primate and human problem solving, but we see no precursor of reason in other animals. Mind you, if some other animals were discovered to think in terms of reasons, we wouldn’t be taken aback but, on the contrary, quite excited!

BB: Just before our ancestors evolved language, they would presumably have already been very highly social and would have been making inferences about others’ reasons for acting and perhaps even their reasons for inferred beliefs. Could reason have evolved from understanding these inferences, or would the ability to articulate reasons have been a necessary precursor to evolving a capacity to reason?

HM & DS: It is, we would argue, a common mistake (with philosophical antecedents, for instance in the work of Donald Davidson about reasons and causes) to equate inferences about beliefs and desires with inferences about reasons. We argue in detail for disentangling the attribution of mental states from the much more specialized capacity to attribute reasons. Once this is done, then we can ask how the two capacities are related, in evolution, in development, and in cognition.

Reasons and Errors

BB: Your view of reason enriches our understanding of human hypersociality. We earn our reputations not only by our actions and the observations and reports others make of our actions, but also by our actively presenting to others reasons for our actions. This helps form and maintain norms. You have cited the considerable research that shows we often do not understand the reasons for what we do, and stress that the accuracy of our sense of our reasons is not what matters: “Invoking reasons as motivations of one’s past views and actions expresses a recognition of the normative aptness of these reasons and a commitment to being guided by similar reasons in the future. For our audience, this commitment to accepting responsibility and to being guided in the future by the type of reasons we invoked to explain the past is much more relevant than the accuracy of our would-be introspections.” Here I am reminded of the emphasis in evolutionary accounts of religion, especially David Sloan Wilson’s, that what matters for human evolution is motivating socially useful behaviour rather than in being factually accurate. Do you accept that parallel?

HM & DS: In general, we would argue that cognition fulfils its biological function by being informative enough to guide behaviour. Attribution of reasons is not accurate as a psychological theory—nor are folk-biology or folk-physics—but it is informative enough to guide trust or distrust, mutual expectations, and coordination in a variety of cases. It causes not just socially useful behaviour but socially well-informed behaviour.

While social transmission of information has to be globally beneficial, it also allows a variety of absurd, false or misleading beliefs to proliferate in human populations. Religious beliefs provide prime examples of this. Unlike David Sloan Wilson, we favour a view of religion as a by-product rather than as an adaptation. We are not convinced that the overall effects of religion are beneficial or that the behaviours religion motivates are on balance socially useful. While we greatly appreciate Wilson’s contribution to the study of religion, we do not agree with it or see a parallel with our account of reason.

Multilevel selection

BB: You reject any link between your explanation of reason and multilevel selection. Yet you seem to understand group selection as implying that all selfish impulses must be suppressed if group selection is to apply. That’s not how I understand it: we still have self-serving and even selfish impulses, and we act on them; but various motivations and mechanisms can also dampen them or enhance prosocial impulses sufficiently for a group to cohere well enough to outdo other competing groups with weaker cooperative inclinations or mechanisms. You explain reason as a means for improving cooperation. Why can this not contribute toward multilevel selection, through both biological and cultural means, through the evolution of a reason module and the cultural enhancement of the inclination to reason together?

HM & DS: We are not, in our book, taking a stance on multilevel selection generally. What we are pointing out, to avoid possible misunderstandings, is that our explanation of the evolution of reason is not based on group-level selection. In fact it is quite antithetic to a group-level selection explanation. In our account, the main selection pressure leading to the evolution of reason results from the risks incurred in trusting others. Trust, we have argued, has to be buttressed by a variety of mechanisms of epistemic vigilance. Reason is one of these mechanisms. If humans could systematically trust one another’s testimony and advice, they wouldn’t need reasons to be convinced. Reasons, we claim, take over when trust is insufficient to get a message across.

Contrast this to a multilevel selection perspective: If human cooperation resulted from group-level selection, then the incentives of in-groups would be largely aligned; they could trust one another to a very high degree; they would have no use for a costly mechanism that produces reasons to convince reluctant others or to be convinced in a discerning way.

Yes, we argue that, under certain conditions, reason can make cooperation more effective and more fruitful, but since we don’t take for granted that cooperation must be an effect of group-level selection, this is irrelevant to the issue.

Precursors?

BB: You present your “interactionist” theory as at odds with received “intellectualist” assumptions. As you note, Karl Popper’s ideas inspired Peter Wason to invent his now famous selection task, and, as you observe, the psychology of reasoning “has to a large extent become the psychology of the Wason task.” But were you aware that Popper seems to have intuited, although without developing as you do, an interactionist rather than an “intellectualist” (as he terms Cartesian rationalism) view of reason? In the early 1940s, he rejected the idea of reason as a faculty of the mind, as you do, and pointed to “the social character of reasonableness. . . Reason, like language, can be said to be a product of social life. . . . Admittedly, we often argue with ourselves; but we are accustomed to do so only because we have learned to argue with others. . . we owe our reason, like our language, to intercourse with other men” (The Open Society and Its Enemies, 1945; Princeton, 1966, v. 2, 224). Popper explains that his position “is very different from the popular, originally Platonic, view of reason as a kind of ‘faculty’, which may be possessed and developed by different men in vastly different degrees. . . . Clever men may be very unreasonable; they may cling to their prejudices and may not expect to hear anything worth while from others. According to our view, however, we not only owe our reason to others, but we can never excel others in our reasonableness in a way that would establish a claim to authority; authoritarianism and rationalism in our sense cannot be reconciled, since argument, which includes criticism, and the art of listening to criticism, is the basis of reasonableness” (v. 2, 226).

HM & DS: The idea that the primary uses of reason are social and that individual reasoning derives from argumentation is associated in the twentieth century not so much with Popper as with Perelman and Toulmin. Anyhow, it is an old idea, as old as ancient Greek philosophy and rhetoric. Michael Billig’s Arguing and Thinking has a great review of the history of this idea, and Catarina Dutilh Novaes has done remarkable work on the importance of dialogue for the historical emergence of logic. Such a view, by the way, was also quite common among developmental psychologists in the early twentieth century. So what is our contribution? Well, we have put the whole issue in a naturalistic and evolutionary perspective; we have discussed the place of reason among other inferential mechanisms; we have produced both novel evidence and a reinterpretation of past evidence.

Implications?

BB: Your account of reason seems to have clear implications for pedagogy (as your comments on teaching critical thinking indicate) and for at least trying to maximize the productive rather than the polarizing use of reason in social life. Could you explain both the pedagogical implications, and the possibilities and problems of enhancing reason in the modern world?

HM & DS: In the field of education, the practice of collaborative, or cooperative, learning has a long history. In many schools across the world, pupils attempt to solve problems and understand difficult concepts by discussing together. Hundreds of publications show the benefits of these methods. What we hope to bring to this field is twofold. First, a better understanding of why collaborative learning can bring such benefits. Second, a better understanding of the conditions under which it’ll work optimally. For instance, we believe it is critical that some of the pupils have understood the relevant concepts, or parts of them, before discussing with each other—a new understanding is unlikely to emerge completely de novo from discussion.

This applies to decision making in other contexts. Experiments show that discussing in small groups allows juries to render fairer verdicts, forecasters to make better predictions, doctors to make better diagnoses, jurists to make better judicial decisions, and so on. Hopefully, our theory explains these effects, and highlights the constraints bearing on them.

Regarding politics, one of the lessons might be one of cautious optimism. People are far from being as pig-headed as they are often thought to be. In most cases, they react to good arguments by accepting their conclusion at least to some degree. When citizens discuss policy together, they tend to become more enlightened and to find mutually agreeable compromises. Sure, there are very salient cases of argumentation going bad, but they are the exception rather than the rule. If unwarranted pessimism regarding the power of reason led us to engage less in argumentation, this alas could turn into a sad self-fulfilling prophecy.

Published On: March 12, 2018

Brian Boyd


Brian Boyd is University Distinguished Professor of English at the University of Auckland. His evolutionary research focuses on literature, especially fiction, and on art, in their relation to evolution: as evolved behaviors, as appealing to evolved minds, as depicting behaviors and life histories shaped by evolution. He is particularly interested in the costs and benefits of earning or paying attention to art. His books include On the Origin of Stories: Evolution, Cognition, and Fiction, Why Lyrics Last: Evolution, Cognition, and Shakespeare’s Sonnets, the co-edited Evolution, Literature, and Film: A Reader, and the co-authored On the Origin of Art. He is editor of the book series Evolution, Cognition, and the Arts. Known best as a scholar of writer Vladimir Nabokov, he was drawn into evolutionary work partly by Nabokov’s interest as a lepidopterist in evolution, and partly by his own interest in the evolutionary epistemology of philosopher Karl Popper, on whom he is writing a biography.

5 Comments

  • Ralph Haygood says:

    Discussions of human cognition tend toward typology: “it” – monolithic singular – is this or that or the other. The present discussion is no exception; for example, “only reason [monolithic singular] … has a myside bias”. Of course, there are justifications for such phrasings. However, the present discussion is not only of human cognition but also of its evolution, and a crucial ingredient of evolution is heritable variation. Our ancestors didn’t all think alike – if they had, no cognitive evolution would have occurred – and neither do we. (For example, I wasn’t fooled by the “bat and ball problem”, and I suspect many other readers of this post weren’t either.) I’ve seen little effort to delineate the variation in people’s cognitive defects, much less its heritability, let alone its relationship to genetic or environmental factors. Eventually, studies of the evolution of human cognition need to go there. I expect their findings will illuminate not only our past but also, perhaps more importantly, our future, assuming our cognitive defects allow us one.

    • Rory Short says:

      What is not mentioned in the interview, or in this comment, is the possibility that through our consciousness we seem to have the ability to pick up information that helps us to come to useful conclusions. We Quakers regard this as the workings of the Light in us and our meetings for business are founded on it.

  • Kenneth Blanchard says:

The baseball example is somewhat misleading. “One dollar more” could mean “after I’ve paid for the ball, how much more do I need to purchase a bat?” In that case, 10 cents is the right answer. Or it could mean “Ok, I’ve a nickel for the ball, now I need to cover the ball again plus the cost of the bat.” In that interpretation, a nickel is the right answer. Logic is about determining precisely what someone is saying. Manipulating the language does not demonstrate poor reasoning but poor writing.

  • Clive says:

“Reason’s features are the polar opposite of what we should expect if the intellectualist position were correct—when we produce reasons, we are biased instead of objective, lazy instead of demanding. Few intellectualists deny the existence of these biases. In fact most of these biases have been discovered by them. On the other hand, intellectualists don’t have a good account for the biases of reason. They sometimes try to shift the blame and claim that these biases come from our intuitions, not from reason itself. We strongly disagree; these biases are specific to reason: only reason, for instance, has a myside bias, which would be disastrous in other cognitive mechanisms.”

    Many of us have a hard time admitting when we’re wrong, but, even more so, we don’t like admitting that something about our logic (or belief systems) is flawed. So we rationalize, justify, and sometimes fictionalize our stories, telling rose-colored lies to downplay our mistakes and make our choices and behaviors seem less faulty.

    Rationalization — it’s what helps us sleep better at night.

    Right about now, you’re probably shaking your head, thinking “I don’t do that,” but you do. No one is immune to self-justification to some degree, and that’s okay because recent research findings suggest that these behaviors aren’t entirely our fault. Our brains work in overdrive to preserve our self-image and support our attitudes, even when evidence indicates otherwise.

    The mind reassures us, and because of this we often don’t realize that it is shaping our behavior. Mental stunts that take place when we rationalize result from cognitive dissonance, a term coined by the social psychologist Leon Festinger. Cognitive dissonance occurs whenever a person holds two conflicting ideas, beliefs, or opinions, so we try to find ways to reduce it and let our minds rest easy.

Brain MRI scans show that when we’re confronted with dissonant information and use rationalization to compensate, the reasoning areas of our brains essentially shut down while the emotion circuits of the brain light up with activity. In other words, emotions trump logic. Researchers have also concluded from this information that once our minds are made up, it’s hard to change them; even reading information that goes against our initial point of view only adds to our conviction that we were right.

    http://brainworldmagazine.com/the-neuroscience-behind-rationalizing-our-mistakes/

  • Clive says:

    “It is worth stressing that people are much better at evaluating the reasons of others than their own. With some colleagues, we have performed a series of experiments—reported in our book—that shows people often reject their own reasons once we make them believe that they are someone else’s: now they become more demanding.”

    “Our brains work in overdrive to preserve our self-image and support our attitudes, even when evidence indicates otherwise.”

    In the early stages of beginning a contemplative practice (and for the first few minutes of each new contemplative experience), you’re simply observing your repetitive thoughts. The small, ego self can’t do this because it’s rather totally identified with its own thoughts and illusions, which are all the ego has. In fact, the ego is a passing game. That’s why it’s called the false self. It’s finally not real. Most people live out of their false self, so “they think they are their thinking.” They don’t have a clue who they are apart from their thoughts. What you are doing in contemplation is moving to a level beneath your thoughts: the level of pure and naked being. This is the level of pure consciousness. This is not consciousness of anything in particular; it’s simply naked awareness.

