Are there ‘bad’ kinds of moralizing? If so, does understanding morality as the product of evolutionary processes (of the biological and cultural kind) reveal why ‘bad’ moralizing has so persistently existed and allow us to sort out the ‘bad’ from the ‘good’?

Steven Pinker argued last December at The Economist’s ‘World in 2013 Festival’ that ‘bad’ moralizing has been the source of most human violence. (See the first 7.5 minutes or so of the video below.) His talk, contrasted with some insights from Jonathan Haidt’s work on “moral foundations,” provides a good kickoff example for discussions of some of the issues in, and potential rewards of, understanding morality as a product of evolutionary processes.

“You might think that . . . The world needs more morality. If we can find out what makes people moral we can get people to do more of it. In fact, I would say that the main conclusion of a lot of this research is that that is exactly the opposite of what we should do.” –Steven Pinker

Be assured this is only Steven Pinker’s ‘hook’ opener. He goes on to explain that he actually favors MORE morality of the sort that advances what he describes as the rationally grounded ultimate goal of morality: increasing flourishing and reducing harm.

Pinker wants LESS moralizing of the sort that motivates punishment of behaviors that merely disrespect moral ‘authority’ or violate moral ‘purity’. In his 2011 book, The Better Angels of Our Nature, Pinker argues that this sort of ‘moralizing’ has motivated most of the violence in human history, and continues to do so. Understanding why people persistently do this sort of moralizing offers the opportunity to argue against it and the violence it motivates.

But why are people intuitively motivated to punish those who merely disregard moral authority or violate moral purity? If no one is really harmed, why are some people so upset by behaviors such as homosexuality, a ‘purity’ violation in some cultures, or blasphemy, a ‘disrespect of authority’ violation, that they are even motivated to murder?


Jonathan Haidt

Psychologist Jonathan Haidt’s empirical work on moralizing offers important clues about our peculiar moral intuitions. He argues that people around the world unconsciously draw on six universal “moral foundations” when making day-to-day moral judgments, usually via near-instant moral intuitions. In terms of their moral/immoral aspects, the first three of these foundations are care/harm, fairness/cheating, and liberty/oppression. Judgments based on these first three foundations tend reliably to increase flourishing and reduce harm, so they should win Pinker’s approval.

Haidt’s second three universal moral foundations are loyalty/betrayal, authority/subversion, and sanctity/degradation. Here, loyalty, authority, and sanctity are as defined by a moral in-group, a group of people who merit special moral concern. The moral upside of these foundations is that they can increase the benefits of living in that in-group. For example, by favoring the in-group and standardizing its morality, these moral foundations can provide a powerful means of increasing flourishing within a family, a group of friends, a tribe, or a religion. The moral downside is that out-groups (or dissenters within the in-group) who have other loyalties, respect other authorities, and hold other ideas sacred can then be demonized as immoral and, justified by their immorality, persecuted and even exploited for the benefit of the in-group.

The loyalty, authority, and sanctity foundations correspond to the “authority” and “purity” bases of moralizing that Pinker identifies as the source of most human violence and much of its suffering.

But Haidt is not so quick as Pinker to reject these last three foundations as always bad. He points out in his book The Righteous Mind that social liberals rely most heavily on the first three foundations, while social conservatives place near equal importance on all six. Further, he argues that liberals would do well to understand the social power of all six, and the diverse moral perspectives that different distributions of ‘foundation’ emphasis can produce in human beings.

So who is right, Pinker or Haidt? I would argue that each is right, in his own way. Pinker is right that we should trust our rationality to guide our ultimate goals for moral codes. After all, it is uncontroversial that science can tell us only what ‘is’, and can help define the best means to achieve our ultimate goals, but it is necessarily silent about what those goals ‘ought’ to be. Haidt is right that we ignore the evolutionary origins of half of our moral intuitions at the peril of designing moral codes that are ineffective in achieving whatever ultimate goals we set for them.

By revealing the evolutionary origins of morality (again, both its biological and cultural origins), science may be able to tell us what moral codes and social arrangements are most likely to achieve whatever ultimate goals we choose for moral codes. Perhaps a lot of people, including many moral philosophers, may agree that an ultimate goal for morality of something like the commonly proposed “increasing flourishing and reducing harm” will do until something better turns up. Maybe moral philosophers will someday agree on a different ultimate goal, or perhaps just a more refined version of this one, that people will prefer. Even then, I expect that understanding morality as a product of evolutionary processes will still be useful, and maybe even critical, to defining moral codes most likely to achieve that ultimate goal.

Published On: September 5, 2013

Mark Sloan

Mark Sloan is TVOL Morality Topic Associate Editor. He is a retired aerospace engineer with degrees in physics and engineering. His main interest is how insights from the science of morality might be made culturally useful. This effort necessarily spans relevant science and moral philosophy. In particular, he is interested in morality’s ultimate source, morality’s strange bindingness quality, and why and how societies might choose to apply insights from science to refine their moral codes to better meet human needs and preferences. His blog is scienceandmorality.com.

9 Comments

  • GAD says:

    Nice article. I think I lean more to Pinker than Haidt, so maybe a 70/30 split.

  • ASD says:

    In a perfect world we’d do away with the last three foundations. In the real world, societies that do away with the last three foundations fall to societies which embrace all six. So the only way we’ll achieve a perfect world in which we don’t need the last three is to exterminate all those who refuse to give them up.

  • Mark Sloan says:

    ASD, the last three foundations (loyalty, authority, and purity) are powerful means of increasing the benefits of cooperation in groups such as families, tribes, and religions.

    The moral problems connected to them that Pinker points out, such as violence and exploitation of out-groups, arise when these foundations are employed in ways that decrease the benefits of cooperation between groups, as I expect Haidt would agree.

    Consider the example of loyalty. The loyalty foundation is the basis of being more loyal to family, friends, and the many other groups you belong to than to other families, to people you do not know, and to groups you do not belong to. Without the loyalty foundation, families, friendships, and other groups lose most of the reason they exist: to preferentially aid each other. It seems to me that it would be a grim world indeed without preferential aid from families, friends, and other social groups.

  • ASD says:

    Isn’t it also true that the world would be better if everyone showed that same sense of “loyalty” to everyone else, and not just their families and friends? To treat everyone as equals, and to give “preferential” treatment to everyone? (Which is actually to give preferential treatment to no one.) How is the world better if we allow people in Africa to starve in order to be loyal to and give preferential treatment to our immediate family?

  • Mark Sloan says:

    ASD, so why do I claim it would be a grim world without preferential treatment to families, friends, and other groups we all belong to?
    First, as social animals, much of our experience of well-being comes from cooperation in such in-groups. These groups could not persist without such preferences.
    Second, everyone showing preferences for their families, friends, and other groups appears to be the best strategy for increasing overall well-being.
    It is an additional complexity to figure out the optimum balance between moral concern for in-groups and out-groups such as starving people in Africa that you will never meet. But, with the defined goal of maximizing overall well-being, that is a complexity we can address.

  • David Shalen says:

    Pinker is claiming that the right medicine is exhibited by certain historical trends and that this is empirically attested by large-scale statistical studies of actual human history. Yet he does not address one crucial, and obvious, fact: each and every one of these developments, while almost certainly materially motivated, had ‘moralistic’ argument patterns of its own, without which the development as it actually occurred would probably have been impossible. As I understand it, the claim to discuss morality empirically relies on treating moral justifications as a category of justification. That’s fine, but you can’t look at only the negative results of it. There is moral justification involved in both mass historical movements and the everyday maintenance of the order here praised. How can Pinker claim to be doing even speculative proto-science (or proto-social science) of moral justifications yet treat morality as if it must only be involved in violence, for some unstated and hardly empirical reason?

    Also, Pinker is quite obviously fighting other people’s moral views with his own. However unobjectionable, however simple, however universal his views happen to be, they are categorically moral views. He is engaging in moral justification of certain actions. I don’t blame him for this; it’s unavoidable. Rather, I don’t understand his case fully. Are vaguely utilitarian morals not ‘moralistic’ at all? You need moral foundations for an argument that my utility and your utility have some equal or comparable importance.

    A real science of this matter must address more difficult questions, like the relationship between material needs and the structure of moral arguments, which would have to be understood to be interfered with in any coherent purposeful way. Another issue is whether early moral justification shaped human evolution itself, rather than just the other way around. I grant we have nothing but wild speculation at the moment into either. But no analysis which just assumes morality is a module sui generis and independent of other tools of human conceptual analysis can possibly get there.

  • David Shalen says:

    I for one certainly could use some clarification about morality from science that is not the morality of evolution as you succinctly but effectively describe it. I have a ready grasp only on the following gloss: given an ought claim from moral philosophy or simply a practical imperative, you can reason from science that it applies in a certain range of situations, but perhaps not in others. E.g., the argument I learned in demography class that wealth provides substitutes for the biological imperative of furthering yourself by having children, if it is science, could modify a person’s opinion about whether we ought to criticize birthing many children into poverty as either immoral or irrational. But I fear this gloss is inadequate for what you describe; it only serves to modify, not to create, moral imperatives. Did you have in mind an emergent set of principles driven by such understanding but ultimately rooted in our preferences, biologically hard-wired or otherwise?

    Where do you stand on the fallaciousness of the naturalistic fallacy, the idea of deriving an ought from an is? To me, it’s a pretty tricky problem.

    As for the fascinating evolutionary question about the origins of morality, which I have written a speculative essay about myself, I firmly stand by the explanatory merit and plausibility of the view that they shaped each other co-evolutionarily. However, I find the origin-nature link in need of defense. It isn’t just obvious that an account of the origins of a human trait necessarily sheds a ‘what is’ type of light on the object (human morality in this case) as it exists. Human speech might exist partly because of the need for coordinated hunting, or childbearing with natural hips. Are those thereby keys to linguistics? Maybe. Looking forward to a post on the co-evolution of biological humans and moral culture for sure though!

  • Mark Sloan says:

    David, I’ll draft up a piece clarifying the critical differences between the science of morality, morality from science, and morality of evolution.

    Regarding the naturalistic fallacy, in my view science can only describe what ‘is’ and is silent on what ‘ought’ to be. I also do not understand how rational thought might derive ‘oughts’ from what ‘is’, but I am happy to leave that difficult question to others.

    I am happy to leave the ‘ought’ from ‘is’ problem to others for the following reason. Societies commonly share goals such as increasing well-being and enforce norms (moral codes) that they expect will be likely to achieve those goals. At least for those societies, whether an ‘ought’ can be derived from an ‘is’ is irrelevant; they already know their ultimate goal for moral behavior.

    A piece comparing different ideas among professionals in the field about “what shaped what” regarding our biology and our moral codes will take longer but might also be interesting for readers.

    I expect you and I agree that our moral biology and cultural moral codes each form part of the environment for the other’s evolution and thereby have shaped each other, at least since the emergence of culture.

    But there is another perspective (consistent with the above standard coevolutionary process view) that I think also has merit. This is the idea that the nature of our physical reality produces a universal dilemma concerning how to obtain the benefits of cooperation without being exploited. In this perspective, moral behaviors are a class of strategies, chanced upon by evolutionary processes, that solve this universal cooperation/exploitation dilemma. This yields a description of what morality ‘is’ that is independent of both culture and species, just as good science should be.
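    For readers who would like a concrete picture of that cooperation/exploitation dilemma, here is a minimal sketch of it as an iterated Prisoner’s Dilemma in Python. It is my own illustration, not anything Pinker or Haidt present; the payoff values and strategy names are illustrative assumptions only.

```python
# A toy iterated Prisoner's Dilemma illustrating the cooperation/exploitation
# dilemma. All payoff values and strategy names are illustrative assumptions,
# not anything specified by Pinker, Haidt, or moral foundations research.

# Payoffs (my_points, partner_points) for each pair of moves:
# 'C' = cooperate, 'D' = defect (attempt to exploit).
PAYOFFS = {
    ('C', 'C'): (3, 3),  # mutual cooperation: both share the benefits
    ('C', 'D'): (0, 5),  # I cooperate, my partner exploits me
    ('D', 'C'): (5, 0),  # I exploit a cooperating partner
    ('D', 'D'): (1, 1),  # mutual defection: the benefits of cooperation are lost
}

def always_cooperate(my_moves, partner_moves):
    return 'C'

def always_defect(my_moves, partner_moves):
    return 'D'

def tit_for_tat(my_moves, partner_moves):
    # A simple reciprocity strategy: cooperate first, then mirror the
    # partner's previous move.
    return partner_moves[-1] if partner_moves else 'C'

def play(strategy_a, strategy_b, rounds=100):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_a, moves_b)
        b = strategy_b(moves_b, moves_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

if __name__ == '__main__':
    # Unconditional cooperators are badly exploited by defectors...
    print(play(always_cooperate, always_defect))  # -> (0, 500)
    # ...while a reciprocity strategy keeps the benefits of mutual cooperation
    # with cooperators and quickly stops rewarding exploiters.
    print(play(tit_for_tat, always_cooperate))    # -> (300, 300)
    print(play(tit_for_tat, always_defect))       # -> (99, 104)
```

    The point of the toy example is only that reciprocity-style strategies keep the benefits of cooperating with cooperators while limiting how long defectors can exploit them, which is the sense in which moral behaviors can be viewed as strategies that solve the dilemma.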

  • Mark Sloan says:

    David, several of your points, and at least some of Pinker’s talk, are about what can be called “morality from science”, what moral codes ‘ought’ to be as illuminated by objective insights from the science of morality. Possibly we agree that the “morality from science” area of study is outside the domain of science (what ‘is’) and is in the domain of moral philosophy, what ‘ought’ to be.

    Claims about what morality ought to be will probably inevitably sneak in here from time to time, but, as you may be suggesting, are not part of objective science in its normal sense.

    The chief topic in this Morality section will be “the science of morality” with an evolutionary perspective. This subject includes everything objective science can tell us about the origins, mechanisms, and functions of ‘moral’ behaviors, both as motivated by our biology and as advocated by cultural moral codes.

    However, questions like “Did cultural moral norms shape human moral biology, or the other way around, or has each shaped the other, or were they both shaped by some force external to both biology and culture?” are fully in the domain of science. Comparing the four main proposals on this topic might be an interesting future topic.

    Your comment also suggests that readers might find a post worthwhile that clarifies the differences between “the science of morality”, “morality from science”, and the infamous and discredited “morality of evolution” (the now strange idea that the process of evolution or increasing reproductive fitness is somehow inherently moral). 
