This View of Life: Anything and everything from an evolutionary perspective.
Profiles in Evolutionary Moral Psychology: Richard Joyce
Michael Price is Senior Lecturer in Psychology, and co-Director of the Centre for Culture and Evolutionary Psychology, at Brunel University, London.

As part of the “Profiles in Evolutionary Moral Psychology” interview series, This View of Life had the opportunity to speak with Richard Joyce, Professor of Philosophy at Victoria University of Wellington. Professor Joyce is well known for his research on meta-ethics (the branch of moral philosophy concerned with the most fundamental properties of moral systems, rules, and judgments), and his book The Evolution of Morality (MIT Press, 2006) is one of the most highly regarded treatises on meta-ethics from an evolutionary perspective.

In his interview, Joyce goes into extensive and illuminating detail about his research approach and influences, and about why he believes that evolutionary biology offers an indispensable foundation for moral philosophy.

MICHAEL PRICE: What can evolutionary approaches tell us about human moral systems that other approaches cannot tell us? That is, what unique and novel insights about morality does an evolutionary approach provide?


RICHARD JOYCE: There are two questions that one can ask about human moral systems. First, one can wonder why our moral systems have the content that they do: Why does one culture encourage the private ownership of land while another culture does not allow it? Why does one group permit homosexuality while another group has norms forbidding it? (And so forth.) Second, one can wonder why human cultures have moral systems at all. By “moral system” here I don’t mean ways of behaving but ways of thinking. So we’re not asking why humans cooperate—the rough answer to that, presumably, is that we are so much better off cooperating than trying to go it alone. We are asking, rather, why our cooperative behavior is governed by moral thinking: Why do we classify the world in terms of good and bad, right and wrong, virtuous and evil, and so on? While an evolutionary approach to morality may certainly have something to say about the first question—about the content of moral systems—I think it is at its strongest as a way of addressing the second question.

Moral evaluation (e.g., “He’s a good guy”) is a different kind of mental activity from descriptive appraisal (e.g., “He’s tall”), and humans evidently have a brain capable of both kinds of activity. When we investigate the most basic faculties of the brain, an evolutionary approach is more likely to be enlightening than one focused on a narrower time-frame. By comparison, if we were to ask why humans have two legs, or can’t breathe underwater, or remember faces better than numbers, the answers surely lie in the ways in which our ancestors adapted to their environment over a very long period of time. I’m inclined to think the same about the human trait of evaluating the world in moral terms. We do it because we have a brain designed to do it, and we have a brain designed to do it because having such a brain was in some manner adaptive to our distant ancestors.

Saying this does not exclude the significance of learning. It is abundantly clear that to a very large extent we learn morality: people raised in homogeneous societies that permit cannibalism are not likely to judge the practice prohibited; people raised in societies that forbid marriage between cousins will probably be opposed to such unions, and so on. So learning is certainly going to be an important part of the answer to the first question mentioned above, and there is much to be gained from understanding this moral learning process in a non-evolutionary manner (by which I don’t mean an approach opposed to evolution, but rather one which simply doesn’t particularly mention it). However, the hypothesis in which I am interested, directed at the second question, is compatible with learning, for the hypothesis is really that evolution has designed us for this particular kind of learning: identifying, acquiring, and internalizing the moral norms of our culture. By analogy, it is clear that we learn our local language (Italian babies start to speak Italian, Japanese babies grow up speaking Japanese, etc.), yet the evidence suggests that we come into the world with mental faculties prepared for this kind of learning: seeking out linguistic stimuli and processing them in a special manner.

This view of morality could, of course, be false. It may be that the human capacity to evaluate the world in moral terms is a kind of by-product of faculties that evolved for other purposes—a by-product that became manifest relatively recently, perhaps when humans started living together in cities, interacting with strangers, protecting accumulated wealth, and such like. I am not familiar, however, with any plausible account of how this might have happened. And even if this were true, so long as our interest remained in why, at the most basic level, humans engage in moral assessment, then we would want to know the details of how and why this by-product trait emerged (precisely which adaptations does it depend on?, etc.). Either way, if we are wondering why the odd phenomenon of human moral judgment exists at all, then the answers must lie in our deep past.

PRICE: The ordinary view in biology is that adaptations evolve primarily to promote individual fitness (survival and reproduction of self/kin). Do you believe that this view is correct, with regard to the human biological adaptations that generate moral rules? Does this view imply that individuals moralize primarily to promote their own fitness interests (as opposed to promoting, e.g., group welfare)?

JOYCE: I’ve got no dog in this fight (as James Baker once declared), but I will say this much. I am yet to encounter an argument that moves me to think that group selection must be appealed to in order to explain human morality. And I am also disposed to endorse the principle articulated by George Williams that “one should postulate adaptation at no higher a level than is necessitated by the facts” (Adaptation and Natural Selection, 1966; p.262). Thus I am provisionally a fan of explaining human morality by reference to individual selection.

I believe that one can explain a lot via postulating that our ancestors developed mental equipment to cope with reciprocal exchanges (whether of concrete goods, favors, labor, sexual access, or information). In a reciprocal exchange (of the sort I have in mind) each individual is better off than if he or she were not engaged in the interaction, and thus any adaptations that emerge to govern such exchanges will be explained by reference to the enhancement of individual fitness. Basic moral thinking might be one such adaptation: where an individual remains committed to an ongoing cooperative interaction (despite temptation to defect) through thoughts of having incurred an obligation, beliefs that cheating the other party would be wrong or unfair, judgments that someone who defects on such deals is a bad person, and so on.
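The payoff logic behind this point about reciprocal exchange can be made concrete with a standard model from game theory. The sketch below is purely illustrative (it is not from the interview, and the strategy names and payoff values are the conventional textbook ones, not anything Joyce specifies): in an iterated prisoner's dilemma, two reciprocators who stay committed to cooperation each do better per round than mutual defectors, so a disposition that keeps an individual in the exchange can pay off at the level of individual fitness.

```python
# Illustrative iterated prisoner's dilemma: reciprocators sustain mutual
# cooperation and outscore mutual defectors over repeated interactions.

# Payoffs per round, keyed by (my move, partner's move).
# C = cooperate, D = defect; standard ordering T(5) > R(3) > P(1) > S(0).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate on the first round, then copy the partner's last move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return total payoffs (a_total, b_total) over repeated rounds."""
    history_a, history_b = [], []  # each entry: (own move, partner's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two reciprocators earn 3 points per round; mutual defectors only 1.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

Each party in the cooperative pairing ends up better off than it would be under mutual defection, which is the individual-level benefit the hypothesis appeals to; moral thoughts of obligation or fairness would then function, on this picture, as the psychological mechanism that keeps an individual committed to the exchange despite the one-shot temptation to defect.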

There are two additional comments I’d like to make about this hypothesis.

First, one might object that a great deal of our moral code has nothing to do with reciprocal exchanges. But this complaint misses the obvious fact that our moral codes are malleable. I would not appeal to ancestral reciprocity to answer the first question mentioned above—about the content of moral systems—but it is promising as an answer to the second question—about why we developed the capacity to think in moral terms at all. Once those faculties were in place (faculties for thinking of actions as obligatory, people as bad, etc.), then they were available for new social uses. I don’t think anyone discussing this matter denies this basic fact. Even someone who wants to explain the origin of moral thinking via group selection is likely to allow that cultural pressures may create moral norms that count against the good of the group. By the same token, even if the benefits of reciprocation were the main adaptive pressure in the emergence of the human moral faculty, one shouldn’t expect to find that matters pertaining to reciprocity are the only thing with which modern moral systems are ever concerned.

Second, one might object that if a person enters a reciprocal exchange for the benefit it affords, then this is an act of selfishness, which is the antithesis of morality. This common objection is conceptually confused. The benefit to individuals might explain why the neural mechanisms which motivate the behavior evolved, but it simply doesn’t follow that the benefit is what psychologically motivates the behavior in the individual. By comparison, if I see my daughter fall over in the playground, then a suite of strong emotive and behavioral mechanisms immediately kicks in: I feel a jolt of anxiety and rush to her aid. Now, it is entirely possible (indeed, plausible) that there’s an evolutionary explanation for why my brain does this: it’s a way of motivating behavior that protects my offspring and thus my own genetic fitness. But this evolutionary explanation has nothing to say about what my motivation is—most importantly, it certainly doesn’t follow that my helping behavior is “really selfish.” In all probability, my motivation is exactly what it appears to be: a genuinely altruistic concern for my daughter’s wellbeing.

(Incidentally, the wording of your question “Does this view imply that individuals moralize primarily to promote their own fitness interests” harbors exactly this kind of ambiguity. The evolutionary explanation of why individuals have the psychological mechanisms that enable moralizing may be entirely individualistic, i.e., pertaining to their own fitness interests, yet what motivates the individual’s moralizing at a psychological level may be a purely altruistic affair.)

PRICE: What work by others on the evolution of morality (or just on morality in general) have you found most enlightening?

JOYCE: When I started out thinking about the evolution of morality (call it twenty years ago) there wasn’t much literature and what there was wasn’t very rigorous, but some books that really stimulated my thinking were Michael Ruse’s Taking Darwin Seriously (1986), Richard Alexander’s The Biology of Moral Systems (1987), Robert Frank’s Passions within Reason (1988), and Frans de Waal’s Good Natured (1996). (One of the things I like about the field is its interdisciplinary relevance. The four authors just mentioned, for example, are, respectively, a philosopher, an entomologist, an economist, and a primatologist.) Since then the field has burgeoned and become vastly more complex and nuanced. There are too many good books and articles for me to pick out just a few (though for every good one there are several not-so-good ones). The best are those that remain soundly empirically anchored.

As a philosopher (a meta-ethicist, to be precise), I am interested not merely in the empirical question of whether and how the human moral sense evolved, but also in what implications there may be for perennial questions in moral philosophy. Since the time of the ancient Greeks philosophers have wrangled over whether moral norms might be objectively binding, or a human construct, or perhaps a mistaken way of thinking altogether. Might evolutionary data contribute to settling such disputes? A debate in which I have been much involved in recent years concerns what have come to be called “evolutionary debunking arguments.” Michael Ruse, Sharon Street, and myself are usually identified as the “debunkers” (or, as Ruse said to me recently, the “villains”). Lately there has been a mini-explosion of literature of opposition, and much of it is very worthwhile. A year ago I was grappling with Roger White’s “You just believe that because…” (Philosophical Perspectives 2010), Erik Wielenberg’s “On the evolutionary debunking of morality” (Ethics 2010), Guy Kahane’s “Evolutionary debunking arguments” (Noûs 2011), and Kevin Brosnan’s “Do the evolutionary origins of our moral beliefs undermine moral knowledge?” (Biology and Philosophy 2011). You’ll notice that these are all from 2010 and 2011; since then my attention has been on other matters, so there’s probably a whole lot more by now!

PRICE: Which of your own publications are most relevant to an evolutionary understanding of morality?

JOYCE: My book The Myth of Morality (2001) contained a chapter on the evolution of morality, and that’s where my engagement with the topic really started. Even as I wrote that work I felt that there was much more to say on the matter, so my next book, The Evolution of Morality (2006) was a kind of grand expansion of that chapter. This book remains my most sustained discussion of the topic, and I pretty much stand by most of what I said on that occasion, though a lot of it was rough round the edges.

The Evolution of Morality splits into two parts: one discussing the evolution of morality in an empirical vein (a kind of evolutionary psychology, if you like), the other exploring philosophical implications (pressing the aforementioned “evolutionary debunking of morality” argument). A lot of my subsequent work has followed this division. A couple of recent papers where I elaborate on the empirical issue are “The many moral nativisms” (in Sterelny, Joyce, Calcott, & Fraser (eds.), Cooperation and its Evolution, 2013) and “The origins of moral judgment” (Behaviour, 2014). And a couple where I continue to refine the debunking argument and respond to its critics are “Irrealism and the genealogy of morals” (Ratio, 2013) and “Evolution, truth-tracking, and moral skepticism” (in Reichardt (ed.), Problems of Goodness: New Essays on Metaethics, forthcoming). Of these four papers, the one I judge best is “Irrealism and the genealogy of morals.”

PRICE: Which results or ideas from your work do you regard as most significant?

JOYCE: As a philosopher I don’t see myself as contributing much to the empirical program of establishing whether the human moral faculty is a biological adaptation; that’s really the work for others. Where I do hope I’ve had some useful things to say concerns the conceptual clarification of the hypothesis. If the question under scrutiny concerns the origin of moral judgment, then we need to understand what a moral judgment is; it’s no good arguing about the evolution of trait X if you don’t really know what X even is. But many of the empirical researchers who are, for example, well placed to discuss evolutionary biology, or investigate primate sociality, or model the emergence of cooperation in game theoretic terms, are not really qualified to answer tricky queries about what exactly a moral judgment is. (This limitation doesn’t always prevent them from developing confident theories about the evolution of morality!)

Despite the work I’ve done, I confess that regarding the empirical question I still think of myself as a bit of a Sunday painter. The place where I feel more in my proper professional box is discussing the philosophical implications of the evolution of morality. I fear, though, that in this realm my intellectual obsessions take on those qualities so characteristic of professional philosophy—namely, the pursuit of matters that seem abstruse and disconnected from practical concerns, in a manner that tends to baffle one’s more empirically-minded colleagues. Nevertheless (unapologetically), it is here that I do the work that I think of as my most significant.

One such area (mentioned earlier) is the exploration of debunking arguments, the gist of which is as follows. Evolutionary hypotheses about the origin of morality seem to indicate that the adaptive pay-off of moral thinking for our ancestors lay in the improvement of social cohesion, not in the accurate representation of a realm of moral facts. (Contrast our evolved visual system, which was adaptive to our ancestors only because it presented them with a roughly accurate image of reality.) This observation would appear to have an undermining effect on morality, for if a judgment is formed through a process that is insensitive to the facts, then we would usually class that judgment as lacking justification. (If a clock reads 10:15am regardless of the time of day, then it should not be trusted as an indicator of the time.)

The conclusion that our moral judgments lack justification would be a pretty astounding result. I am far from the first person to suggest such a thing; moral skepticism has a tradition starting with the Greeks. Nor am I the first person to suggest that moral skepticism might be established by data concerning the genealogy of our moral faculty; Nietzsche, Freud, and Marx all seemed to accept some version of this argument. Even the prospect of morality being undermined by a specifically Darwinian genealogy was discussed by Ruse back in the 1980s. Nevertheless, I think my work has contributed to the crystallization of the dialectic in the last few years, and I’d like to think I’ve added a few original twists and turns to the plot.

PRICE: What are the most important unsolved scientific puzzles in evolutionary moral psychology?

JOYCE: I’m not sure that there are any solved puzzles in evolutionary moral psychology! The basic question we’re all interested in here, I take it, is whether human moral judgment is an adaptation. But progress is hampered on two fronts: researchers generally have only an inchoate idea of what they mean by “moral judgment” (and even when they have a precise idea, it doesn’t always mesh with others’ precise ideas), and researchers cannot agree on what kind of evidence would settle the matter. The former is really a conceptual problem, but the latter could be called a “scientific puzzle,” and it really would be great to see this matter solved in order for the field of evolutionary psychology to advance. I’m not suggesting that we need to develop a framework where one can expect incontrovertible and demonstrable evidence that something is (or is not) an adaptation; but it sure would be nice to know what evidence we might look for which all sensible parties agree provides good confirming support for (or against) the hypothesis.

For example, much ink has been spilt arguing over the extent to which there are cross-cultural universals in human morality. While this is certainly interesting, it is far from clear what relevance it has as evidence for or against the adaptational hypothesis. On the one hand, morality could be an adaptation that admits of malleability (either accidentally or as a design feature); on the other hand, morality might be entirely a culturally acquired trait but one that nevertheless has certain universal manifestations. Or to take another example (over which almost as much ink has been spilt): there has been some lively debate about a moral “poverty of the stimulus” argument, according to which morality appears in infant development in advance of environmental factors sufficient to explain its acquisition. But even if such evidence were forthcoming (about which the jury is still out), it couldn’t establish that morality is a biological adaptation, for such evidence couldn’t distinguish between this hypothesis and one according to which morality is a by-product of other adaptations which happen to come on-line developmentally early.

So the main unsolved puzzle about evolutionary moral psychology is the biggest one of all: We don’t know whether human moral systems are the product of Darwinian selection. And my comments have highlighted the fact that it is not just for want of empirical data that this remains a challenge; a genie could offer the intellectual community all the empirical data it could ever wish for and the puzzle would remain. Before data can support conclusions we need conceptual clarification and a settled methodological framework. In this respect, the intellectual grounding of evolutionary moral psychology, and evolutionary psychology in general, is still very much a work in progress.


Richard Joyce’s homepage at Victoria University of Wellington (includes links to his online papers).

A page about Joyce’s book The Evolution of Morality.

Richard Joyce interviewed about The Evolution of Morality on ABC International Radio.
(To read the ABC transcript on the above linked page, click on “show” on the right hand side of that page across from “Transcript”.)

One Comment

  1. Mark Sloan says:

    I find a lot to agree with here and always enjoy reading Richard Joyce’s pieces about the complex and tricky intersection of moral philosophy and the science of morality where the possibility of misunderstanding is high. And, of course, what I have to say here may be an example of that misunderstanding.

However, rather than debunking morality, it seems to me that understanding morality as the product of evolutionary processes confirms that there is “objective moral truth”. This seems the opposite of Joyce’s conclusion, which I understand to be consistent with “morality is an illusion” (in Michael Ruse’s turn of phrase).

What is going on? I understand that “objective moral truth” in moral philosophy normally refers to universally binding morality, what people ‘ought’ to do regardless of their needs and preferences. Obviously, science’s objective truth can authoritatively only tell us what morality ‘is’, not how we are somehow obligated to behave (not what morality ‘ought’ to be). So we can confidently say that the claim “universally binding morality is an illusion” will find no contradiction in any science of morality. (I’ll leave it to others to argue that science proves the claim to be true.) But this has very different implications than “morality is an illusion”, which I find misleading.

Returning to a normal meaning of objective in science (meaning cross-species and mind-independent), what might that objective moral truth be? Martin Nowak (editor) in Evolution, Games, and God and Herbert Gintis in The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences represent the view that there is an objective truth about morality’s function. (Universal objective moral truth is not found explicitly in species-dependent biological implementations or culturally dependent moral norms, which are both only fallible heuristics for that function.) We can variously describe that function as “to increase the benefits of cooperation in groups by altruistic cooperation strategies” (Gintis) or, as I like to put it, “to overcome the cross-species universal dilemma of how to obtain the benefits of altruistic cooperation without being exploited”.

Perhaps in the next few years, with “some conceptual clarification and a settled methodological framework,” as Joyce suggests, some version of that “objective moral truth” from science will become generally accepted. Then, with help from moral philosophers, that firmer foundation can be put to work defining moral codes that will better achieve common human goals and desires.