What is universal about morality is morality itself: cross-culturally, people think that some behaviors are wrong, and so deserve disapproval and punishment.6 However, while the capacity for moral judgment is universal, there is tremendous variability in which behaviors people think are immoral.5 We suspect that debates about moral universals are actually motivated by a different question: Can people reach a consensus about which actions should be morally prohibited?

Moral consensus is not only an abstract philosophical matter. People are worried about moral consensus for good reason. When a community disagrees about the moral laws of the land, they can no longer rely on the rules to settle conflicts. Disputes are more likely to escalate with costly consequences for everyone.2,3 Moreover, rival coalitions struggle to impose their rules on those who disagree, further fomenting costly fighting.10

In this context, what is universal about morality is disagreement. In all societies, people disagree, often violently, about which actions are immoral. For instance, a prominent politician in India recently offered a $1.5 million bounty for the beheading of a popular actress who portrayed a Hindu queen in a way he found morally offensive. Artistic expression or capital offense? Unfortunately, these kinds of moral disagreements are universal.

Human moral judgment allows virtually any action to become a prohibited and punishable offense. A key reason is that individuals need to keep track of the moral rules in a community so they can avoid crossing moral boundaries. Because moral rules are variable and changing, people need a flexible moral psychology that can moralize whichever actions are taboo in a given group.9 However, this does not mean that people only passively accept their group’s rules; they also actively advocate for the moral rules they prefer, especially when they can find supporters to join their cause.7

Many moral prohibitions have strategic consequences because they constrain some people more than others.4 When a particular action is punished – such as same-sex marriage, eating beef, practicing black magic, disobeying authority, or stem cell research – the subset of people who want to take that action are worse off; those who don’t want to take the action are unaffected or even gain a relative advantage. Given these strategic consequences, people tend to fight to sway the rules that affect them the most.11

This means that people’s efforts to persuade a community to adopt a moral rule – thou shalt not X – are essentially efforts to coerce a subset of the community into a moral regime they would rather not be in. In practice, then, a society’s morality creates a form of mob rule in which the moral prohibitions are determined by the most powerful coalition, which is often the one backed by the more numerous faction. Majoritarian political regimes, while having many virtues, allow majorities to coerce minorities with the sticks afforded them by moral rules.

Amid all of this conflict, however, our moral psychology does have elements that can promote consensus. When almost everyone benefits from a moral prohibition, it generally becomes a matter of consensus because everyone ends up advocating for the same rules. This applies to the most universal prohibitions such as those against (unprovoked intentional) killing, harming, stealing, and lying. These agreeable morals can be leveraged to build consensus.

This idea underlies utilitarian philosophy: a rule should be adopted if it leads to net benefits for society.1 This philosophy essentially attempts to build consensus around the concept of welfare, while setting aside the large variety of contentious moral rules about other matters, such as taboos surrounding food, sex, or supernatural beliefs.

We can find a path to moral consensus by focusing on our shared concerns for people’s welfare, rather than on contentious and divisive moral principles. All normal humans have at least some sense of compassion and concern for others’ welfare. Importantly, our sense of compassion is psychologically distinct from our moral principles and prohibitions.2 Contrary to traditional views, people do not actually need moral rules to care about others’ well-being. Instead, we should aim to use our universal sense of compassion to guide the choice of moral prohibitions toward greater consensus.

This idea differs from what we typically see in politics, where politicians appeal to coalitions and moral principles, emphasizing who is right and who is wrong.8,11 In contrast, leaders who wish to build a broad consensus should emphasize how their policies will improve people’s welfare, especially by meeting people’s most pressing needs.

References

  1. Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation. London: T. Payne and Son.
  2. DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112(2), 281-299.
  3. DeScioli, P., & Kurzban, R. (2013). A solution to the mysteries of morality. Psychological Bulletin, 139(2), 477.
  4. DeScioli, P., Massenkoff, M., Shaw, A., Petersen, M. B., & Kurzban, R. (2014). Equity or equality? Moral judgments follow the money. Proceedings of the Royal Society of London B: Biological Sciences, 281(1797), 20142112.
  5. Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998-1002.
  6. Hauser, M. (2006). Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. Ecco/HarperCollins Publishers.
  7. Kurzban, R., Dukes, A., & Weeden, J. (2010). Sex, drugs and moral goals: Reproductive strategies and views about recreational drugs. Proceedings of the Royal Society of London B: Biological Sciences, 277(1699), 3501-3508.
  8. Petersen, M. B. (2016). Evolutionary political psychology. In D. M. Buss (Ed.), The Handbook of Evolutionary Psychology (2nd ed., Vol. 2, pp. 1084-1102). Wiley.
  9. Rozin, P. (1999). The process of moralization. Psychological Science, 10(3), 218-221.
  10. Tooby, J., & Cosmides, L. (2010). Groups in mind: The coalitional roots of war and morality. In H. Høgh-Olesen (Ed.), Human Morality and Sociality: Evolutionary and Comparative Perspectives (pp. 91-234). New York: Palgrave Macmillan.
  11. Weeden, J., & Kurzban, R. (2014). Hidden Agenda of the Political Mind: How Self-Interest Shapes Our Opinions and Why We Won’t Admit It. Princeton University Press.

This article is from TVOL’s project titled “This View of Morality: Can an Evolutionary Perspective Reveal a Universal Morality?”

Published On: May 17, 2018

Robert Kurzban

Robert Kurzban is a Professor of Psychology at the University of Pennsylvania. He received his PhD at the University of California, Santa Barbara in 1998 and received postdoctoral training at Caltech in the Division of Humanities and Social Sciences, UCLA Anthropology, and the University of Arizona’s Economic Science Laboratory with Vernon Smith. He investigates a wide array of topics, including morality, cooperation, friendship, mate choice, supernatural beliefs, modularity, and self-control. He is the Editor-in-Chief of Evolution and Human Behavior, the Director of Undergraduate Studies in his department, and the President of the Human Behavior and Evolution Society.


Peter DeScioli

Peter DeScioli is an Assistant Professor of Political Science at Stony Brook University. He completed his PhD at the University of Pennsylvania in 2008, and he was a postdoctoral fellow at the Economic Science Institute at Chapman University, the Departments of Psychology and Economics at Brandeis University, and the Department of Psychology at Harvard University. His research examines how principles of strategy shape elements of human psychology, including moral judgment, alliances, ownership, and procedures for collective decisions such as voting, consensus, and leadership. He is the Associate Director of the interdisciplinary Center for Behavioral Political Economy at Stony Brook University. 

2 Comments

  • Mark Sloan says:

    Robert and Peter,

    You appear comfortable with the idea that the ultimate goal of moral behavior is people’s welfare, as you describe it, or perhaps the similar increased “well-being” that Andy Norman describes.

    But there is also a question about what ‘means’ for accomplishing these goals are moral.

    As I commented to Andy Norman, and as described in my essay, it may be the case that science can tell us what universally moral ‘means’ ‘are’. If that is the case, then we have the potential for combining both universally moral ‘means’ with a universally moral ‘end’ (goal of increased welfare?).

    If those universally moral means are defined by science and that science at least supports claims of a universally moral goal of increased well-being or welfare, then we have the basis of a well grounded universal morality. (Though a morality perhaps without any claims for innate oughts or bindingness.)

    Such a universal morality could be the opposite of contentious and divisive. Due to our evolutionary history it will be uniquely harmonious with our moral sense because it is what largely shaped our moral sense. Occasional dissonance with our moral sense and existing cultural moral codes may be readily explained. Diversity in cultural moral codes can be understood as different applications of this morality due to different circumstances and histories. Finally, its grounding in objective science may be a powerful force for reducing divisiveness concerning morality.

  • David Sloan Wilson says:

    I like this commentary for pointing out that moral rules are being negotiated all the time, leading to the conclusion that moral disagreement is morally universal. Once we see morality as a mechanism of group coordination and social control, then the content of moral norms must change for groups to adapt to changing environments. A sufficiently fine-grained look at any moral system will reveal its own evolution in progress.

    I think it’s important to understand how this plays out in small-scale societies before getting to large-scale societies. The small-scale process will be closest to what evolved by genetic evolution, whereas large-scale processes will require culturally evolved mechanisms that interface with genetically evolved mechanisms. Along these lines, I think that this passage from your commentary might be more descriptive of large-scale societies than small-scale societies:

    “This means that people’s efforts to persuade a community to adopt a moral rule – thou shalt not X – are essentially efforts to coerce a subset of the community into a moral regime they would rather not be in. In practice, then, a society’s morality creates a form of mob rule in which the moral prohibitions are determined by the most powerful coalition, which is often the one backed by the more numerous faction. Majoritarian political regimes, while having many virtues, allow majorities to coerce minorities with the sticks afforded them by moral rules.”

    This strikes me as exactly wrong for small groups. The whole point of a moral system is to subordinate the disruptively selfish interests of members to the collective interests of the group. The most powerful members of a group are the ones that need to be watched and regulated most closely. That’s why power differentials are suppressed in small groups. Here is an example from the anthropological literature. The Nuer are a leaderless society that has only one kind of chief, called a leopard-skin chief, who turns out to be a specialist in conflict resolution. When there is a homicide, the murderer attempts to seek the protection of the leopard-skin chief, who negotiates a payment in cattle rather than letting the conflict spiral out of control in revenge murders. The Nuer are aware of the function of the leopard-skin chief and purposely choose someone from an unimportant lineage, because someone from an important lineage would have vested interests. That’s the key point of the example for the purposes of this commentary. This moral system is not about the powerful majority imposing itself upon minorities. It is about the subordination of power.
