I understand morality as a human invention, underpinned by our evolved emotional tendencies and our existential situation. Morality is thus a technology that responds to our needs as social animals1,5,6. It facilitates cooperation, originally in small groups in competition with each other and with the local non-human predators3,6. It provides a counterweight to our limited rationality, information, intelligence, and sympathies6.

From the viewpoint of an ordinary person who has been socialized into a particular moral system, the local moral norms will usually seem like something more impressive and metaphysical than I’ve described. They will appear to be categorically authoritative; they will be experienced as objective requirements for conduct. For most people, that is, their society’s standards for conduct appear to be necessitated by a mind-independent reality that transcends any mere social institutions and anyone’s contingent desires and attitudes2,4,6. In typical societies, this appearance is taken to be the reality and given some kind of supernatural explanation.


If objective moral requirements are construed as transcending human nature itself, and as binding upon all rational beings that might exist in the universe, it is implausible that they exist—and indeed, the idea seems to defy coherent explanation2,4,6. Might there, nonetheless, be one true morality for human beings (not necessarily for whatever other rational beings happen to exist) grounded in a common human nature and transcending the desires and attitudes of particular people and the varied moral systems of actual societies6?

This still seems unlikely. It requires a more harmonious and purposive conception of human nature than appears scientifically and historically plausible6,7. We probably won’t discover a single perfect way of life for either individuals or societies. That said, not just any set of proposed norms can form a viable moral system. Natural boundaries are shaped by the function of morality in facilitating social cooperation. The outer limits of moral possibility are established by the emotional tendencies that prepare us to be morality-making beings. In particular, we care most about ourselves (as individuals), our offspring, kin, mates, and other affiliates1. We show some restraint in hurting each other, a degree of natural kindness and reciprocity, positive attitudes to helpfulness, and a disposition to seek vengeance when betrayed and to punish non-cooperators3.

Moral systems vary considerably, but some virtues of character, such as courage and honesty, are likely to be regarded highly in any human society. Conversely, no human society can tolerate unlimited ruthlessness in social, sexual, and economic competition within the group; more specifically, each society insists on limits to intra-group violence. A full and systematic understanding of the phenomenon of morality would include both the possibilities for variation in moral systems and the boundaries within which variants proliferate.

Against that background, our modern moral predicament involves at least two interrelated problems. First, we increasingly live in societies that contain relatively little in the way of a unitary moral system. Instead, contemporary societies blend different groups with complex, diverse, yet intertwined histories and with their own religious and moral traditions. Rival traditions often confront each other within the same society, struggling for political and cultural supremacy3.

Second, the world’s societies—again with divergent moral traditions—increasingly need to cooperate with each other to handle problems on a very large scale6. In this situation, our existing moralities and our evolved emotional tendencies do not necessarily serve us well. They helped us to cooperate and survive in small, often mutually suspicious, groups. Arguably, they are not so helpful when we come to terms with global issues of climate change, epidemic diseases, and the spread of massively destructive weapons.

References

  1. Churchland, P. S. (2011). Braintrust: What Neuroscience Tells Us about Morality. Princeton University Press.
  2. Garner, R. (1994). Beyond Morality. Temple University Press.
  3. Greene, J. (2013). Moral Tribes: Emotion, Reason, and the Gap between Us and Them. Penguin.
  4. Joyce, R. (2001). The Myth of Morality. Cambridge University Press.
  5. Kitcher, P. (2011). The Ethical Project. Harvard University Press.
  6. Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. Penguin.
  7. Williams, B. (1995). Evolution, Ethics and the Representation Problem. In Making Sense of Humanity and Other Philosophical Papers 1982–1993. Cambridge University Press, pp. 100–110.

This article is from TVOL’s project titled “This View of Morality: Can an Evolutionary Perspective Reveal a Universal Morality?” You can download a PDF of the project [here], comment on this article below, or comment on the project as a whole in the Summary and Overview.

Published On: May 17, 2018

Russell Blackford


Russell Blackford is an Australian philosopher, legal scholar, and literary critic. He is editor-in-chief of The Journal of Evolution and Technology, and holds an honorary research appointment at the University of Newcastle, NSW. He is the author or editor of numerous books, including The Mystery of Moral Authority (Palgrave, 2016), Philosophy’s Future: The Problem of Philosophical Progress (co-edited with Damien Broderick; Wiley-Blackwell, 2017), and Science Fiction and the Moral Imagination: Visions, Minds, Ethics (Springer, 2017).


8 Comments

  • Mark Sloan says:

    Russell,
    I’ve enjoyed your writings, particularly on moral bindingness, for years.
    Yes, a universal moral principle that is somehow “binding upon all rational beings that might exist in the universe” does seem to defy coherent explanation. I refer to such things as “magic oughts”.
    But, in contrast, a moral principle that is universal because it is a necessary component of all cooperation strategies relevant to human morality seems possible. Assume for a moment that the science showing “morality as cooperation” is true in the normal scientific sense and such a universal moral principle exists. Then all well-informed rational people would necessarily advocate this principle as universally moral. However, they might not feel bound by it. So perhaps a moral principle can be universal but not mysteriously binding?
    Bernard and James Gert say in the SEP that “the term ‘morality’ can be used … normatively to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.” This is close to what I suggested above all well-informed rational people might advocate as universally moral. I see nothing in either phrase about bindingness. So perhaps there is a universal moral principle that is not “binding upon all rational beings that might exist in the universe” but is normative in the sense advocated by the Gerts (which may be as normative as it gets in our universe). Possible?

    • This raises large questions, so I’ll confine myself to a few observations about Bernard Gert and his identification of a “common morality.”

      First, I should observe that even moral error theorists and the more thoughtful moral relativists (such as David Wong) do not deny the existence of a recognizable core (or a framework, if you prefer) of morality that is more or less inevitable across the range of human societies if morality is to play the role of facilitating social cooperation, especially by reducing harms.

      As I understand his work, Gert sees moral relativism as the claim that all moral norms are merely arbitrary. That, however, is not the position of sophisticated relativists such as Wong, who make the far weaker, more plausible claim that there is no single, “true,” comprehensive system of morality, even though there seems to be a cross-cultural moral core. Vulgar, simplistic forms of moral relativism, such as that famously demolished by Bernard Williams, have given the work of moral relativists an unnecessarily bad name with philosophers (while becoming too prevalent in wider social discussion; first-year philosophy students often have to be disabused of vulgar relativist ideas).

      Gert attempted to identify, describe, and justify a cross-cultural morality that could, with only some loss of meaning, be boiled down to two principles: don’t intentionally harm other people without good reason; and conduct yourself in ways that make you a trustworthy person, such as by not, without good reason, cheating, lying, breaking promises, or breaking the local laws. Gert bases this on controversial analyses of such concepts as rationality, impartiality, and harm. Those analyses might not all hold up against full scrutiny, but they do contain a great deal that strikes me as useful and insightful. In that way alone, Gert made a significant contribution to our understanding of the phenomenon of morality.

      However, as he acknowledges, his theorizing is of limited assistance as a guide to action in cases of controversy. It provides reasons why all societies would develop somewhat similar moral systems to apply to their members, or at least to the dominant demographic group. It does not provide a reason why a militarily strong society would apply its system to its weaker neighbors, or even why a dominant group in a society would apply its system to a subordinate group. It does not establish that moral rules are exceptionless, or provide a decision procedure that uniquely identifies which exceptions are justified by good reasons. As Gert freely acknowledges, it does not even show that it is irrational to act selfishly, heedless of the common morality, if you are strong enough or clever enough to get away with it. It does show why the rest of us would be rational to try to identify and rein in people who act like this.

      Furthermore, Gert attempts to work out what moral rules would be agreed to by rational people (roughly, people who do not have self-destructive impulses and do not wish to be harmed or to harm themselves), if they were committed to reaching agreement, and if they relied only on basic knowledge of the world available to all rational people – for example, knowledge of human vulnerability and fallibility. He is unable to rule out that people who have, or claim to have, additional, special, esoteric knowledge (for example religious knowledge) could insist on a different set of rules.

      Gert also argues that the rules that would be agreed by rational people, using only the knowledge available to all rational people, would end up being somewhat open textured. That is, they would not provide unique and correct answers to all moral questions. For example, they would fail to settle whether or not abortion is morally impermissible. Again, they would not provide a determinate answer to questions about how we should treat non-human animals: they would neither compel nor rule out including non-human animals in our circle of moral consideration. We could expect different societies to answer such questions in different ways, and there would be no objectively correct answer as to who was right.

      I find a great deal of value in Gert’s analysis, for example the reasons he gives as to why actual moral systems tend to emphasize not causing harms rather than attempting to maximize happiness or preference-satisfaction. In a sense, Gert defends a deflationary conception of morality, though it might be that, as he claims, most or all of our commonsense morality is defensible, particularly against more totalizing systems such as Kantianism and utilitarianism.

      Gert has made a useful contribution to understanding morality, but his work has limitations if we expect it to solve divisive issues such as abortion, impositions of religious moralities (the fact that Gert does not consider these systems of conduct to be moralities does not rule out people who subscribe to them attempting to impose them on others, based on their claims to esoteric knowledge), or relationships among rival societies. Gert’s approach might be helpful for thinking about universal harms that can be averted only by global cooperation, but I would counsel against becoming too invested in it as a breakthrough in dealing with intractable issues.

      • Mark Sloan says:

        Russell,

        Gert’s definition of normative, “what all (well-informed) rational people would put forward…” seems perfect for science-based claims about what ‘is’ universally moral. That is, just as we expect all well-informed rational people to advocate for what the provisional truth of science ‘is’, we would expect them to also advocate for the scientific truth about what ‘is’ universally moral. Right? Further, Gert’s definition of normative is silent, just as science is, on any innate source of bindingness (what everyone ‘ought’ to do regardless of their needs and preferences). Thus, both Gert’s definition and science’s definition of what ‘is’ universally moral hold that what is universally moral is a separate quality from what is innately binding. That seems to me a potentially big deal for ethics.

        I argue that science of the last 50 years or so fully supports something like “Behaviors that solve the cooperation/exploitation dilemma without exploiting others are universally moral”. Of course, such versions of “morality as cooperation” cannot answer all moral questions because they only define moral ‘means’ and are silent both on moral ‘ends’ (aside from a vague “increase the benefits of cooperation”) and on who is included in “others” who are not to be exploited. On the other hand, if people generally agree that the ultimate goal of moral behavior is something like “increased well-being” and no one is to be exploited, then science’s limitations become almost unimportant to its cultural utility for refining moral codes and resolving moral disputes in the public arena.

        I am optimistic that what science tells us morality ‘is’ forms a new, easily understandable, culturally useful, grounding for ethics.

        Of course, religious people in particular will be reluctant to give up their sometimes harmful moral norms aimed at increasing cooperation in their in-groups by exploiting out-groups (such as “women must be submissive to men” and “homosexuality is evil”). However, the science-based intellectual knowledge of the shameful origins of such moral norms may tilt the arc of religious morality more in the direction of justice and well-being.

        • To be honest, Mark, I’ve struggled to reply to your further comment as I’m unsure of how to understand some of it. In particular, I think the expression “universally moral” is very slippery. It can mean more than one thing, and there’s a danger of using it equivocally through a passage of argument – when the use of more specific terminology on each occasion would bring out problems.

          However, you ask me whether I’d advocate for the scientific truth about what is universally moral. If by that you’re asking me whether I advocate for historical and anthropological investigation of what commonalities exist among the moral systems of the range of historical and current human societies, yes I do advocate for that sort of study.

          Further, I expect that we will find commonalities. In that sense, my original piece could just as easily have been placed in the “Yes” as the “No” group (as David Sloan Wilson noted of Massimo Pigliucci’s contribution).

          I think that there will be identifiable commonalities, since, as I originally put it, the content of humanity’s varied moral systems is “underpinned by our evolved emotional tendencies and our existential situation.” Human moral systems are developed by creatures with a particular evolved psychology, living in environments that share sufficiently common features that we can talk meaningfully of a human situation. For example, we are vulnerable in certain ways, we can be harmed (including by each other) in certain ways, we are fallible and limited in our knowledge, and we need to cooperate as social animals in order to survive.

          That being so, I’d expect all societies to show a kind of moral core – perhaps something like what H.L.A. Hart famously called the minimum content of natural law. Hart’s discussion of this in Chapter IX of The Concept of Law is still a good starting point in thinking about all this. The core will, I think, inevitably include norms relating to restrictions on inflicting harms (not necessarily applying, at least with full force, beyond the group) and norms relating to trust (e.g. restricting lying, promise-breaking, and non-compliance with the group’s more detailed rules and rituals assigning social responsibilities).

          The element of truth in moral relativism is that any such core does not amount to a comprehensive system of how to live. Different societies will develop their own comprehensive systems of norms and institutions, and in modern circumstances we have very large societies in which rival systems of moral norms contend for authority within the same society. It is also worth bearing in mind that the rival systems are each likely to claim some kind of supernatural, or at least strongly objective, authority – authority that they do not really possess, but which makes their adherents unwilling to compromise.

          Again, it is also worth bearing in mind that neither the common core I’ve referred to nor any extant comprehensive moral system is likely to be adequate for the kinds of problems that confront us as a species in the twenty-first century. This is not to say that we should abandon norms that restrict, say, lying, breaking promises, or intentionally harming others. But we may need to develop new norms for getting along and solving problems in social environments, and in a global environment, very different from those in which the phenomenon of morality arose. I see a gap between our contemporary needs (on one hand) and both the core of common morality and the rival comprehensive moral systems that currently exist (on the other hand). You could call it a “morality gap” by analogy to the “empathy gap” identified by J.D. Trout and others.

          That is not to reject your optimism about what we can achieve. It is, however, to suggest that we may need new principles of cooperation that are not very obvious to beings like us, and may even be counterintuitive to many people. Certainly, I see a gap between a very basic moral core that might be identifiable such as “don’t harm others (at least in the group) without good reason (perhaps relating to the overall security of the group), and try to be a trustworthy and competent member of the group” and what might be needed to solve large problems in our current circumstances. We can, by all means, appeal where relevant to some such basic moral core that might be empirically identifiable. It would be recognized with some variation across all societies, and I expect it would be adequate for many day-to-day interactions. But for other purposes we need more than that, and we’ll have to work it out through a conscious process of thought, research, and discussion.

          • Mark Sloan says:

            Russell,

            Thanks for your reply!

            Let’s consider what kind of science-based, culturally useful moral principle or principles all well-informed, rational people would put forward as “universally moral” and thus be normative by Gert’s definition.

            For reasons such as you describe, mere commonality of a “moral core” in all past and present cultures (an empirical matter of cultural anthropology and perhaps psychology) by itself seems insufficient. In addition to appearing inadequate for defining a full moral system, our understanding of what is common to all past and present cultural moralities has differed over our history and may continue to do so.

            But the science of the last 50 years or so is consistent (as I describe in my essay) with our descriptively moral cultural norms (diverse, contradictory, and bizarre as they are) being solutions to a single, simple problem that is innate to our physical reality and must be solved by all beings who form highly cooperative societies. That problem is how to sustainably obtain the benefits of cooperation without that cooperation being destroyed by exploitation. (This is a difficult problem to solve since exploitation is virtually always the winning strategy in the short term and can be in the long term.)
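            (An illustrative aside, not part of the original comment: the claim that exploitation is the winning strategy in the short term, while punishment of exploiters can sustain cooperation over repeated interactions, can be sketched with a toy iterated prisoner’s dilemma. The payoff values and strategy names below are standard textbook assumptions, not anything drawn from the discussion above.)

```python
# A minimal iterated prisoner's dilemma: defection ("exploitation")
# wins a one-shot encounter, but a strategy that punishes defection
# blunts that advantage over repeated rounds.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # exploited (the "sucker's payoff")
    ("D", "C"): 5,  # exploiting pays best in a single round
    ("D", "D"): 1,  # mutual defection
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move:
    # a simple "punish exploitation" rule.
    return "C" if not history else history[-1]

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []   # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# One-shot: the defector exploits the cooperator, 5 points to 0.
print(play(always_defect, tit_for_tat, 1))    # (5, 0)

# Over 100 rounds, retaliation caps the defector's total (104 vs 99),
# far below what two cooperators earn together (300 each).
print(play(always_defect, tit_for_tat, 100))  # (104, 99)
print(play(tit_for_tat, tit_for_tat, 100))    # (300, 300)
```

The point of the sketch is only that the advantage of exploitation depends on the interaction not being repeated and punished, which is the structure of the cooperation/exploitation dilemma described above.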

            Assume there is a principle that is a necessary component of all solutions to this cooperation/exploitation dilemma. Then that universal principle will solve the cooperation/exploitation dilemma for all species for all time. Wouldn’t this solution to the cooperation/exploitation dilemma then be put forward by all well-informed rational beings as universally useful for forming and maintaining highly cooperative societies?

            But why would all such well-informed, rational beings consider this principle universally ‘moral’? They would know that ‘punishment’ of exploitation is a part of most solutions to the cooperation/exploitation dilemma. So responses to violations of this special category of cultural norms must include motivation to punish violators (even though violators may not actually be punished, or may be punished only by social gossip or isolation). In human societies, cultural norms whose violation is commonly thought to deserve punishment are called “moral norms”, and our moral sense’s judgements of right and wrong come with motivation to punish wrongdoers (even when the wrongdoers are ourselves). Whatever this special category of norms is called by different species in different times, that name will translate as “moral” due to its “violators deserve punishment” component.

            Thus, such a universal principle that solves the cooperation/exploitation dilemma would be put forward by all well-informed, rational beings as universally moral – meeting Gert’s definition of normative.

            Further, I argue such a universal principle exists and it is “solve the cooperation/exploitation dilemma without exploiting others”. More useful, though necessarily flawed, heuristics for this principle include “increase the benefits of cooperation without exploiting others” and even more useful (but more flawed) “Do to others as you would have them do to you”.

            I am optimistic such a principle can become recognized as a product of robust science and can become – based on a lot of “thought, research, and discussion” as you note – culturally useful in resolving moral problems as diverse as human rights, economic justice, and how to share the burden of taking care of our most critical shared ‘commons’, the earth’s ecosystem.

  • David Sloan Wilson says:

    Thanks for this thoughtful commentary. I agree wholeheartedly with the first two paragraphs but, along with Mark Sloan, I think that the concept of morality can be generalized beyond the human case to cover all cooperative species, on this or any other planet. It is telling that people throughout history have used social insect colonies and single bodies as metaphorical ideals for human society. Evolutionary theory provides an explanation for why this might be so. We can confidently predict that if aliens exist and are technologically sophisticated enough for space travel, they must have evolved ways to coordinate their activities and suppress self-serving behaviors within their groups that would be recognizable to us as a moral system.

    Of course, you’re right that it’s difficult to expand the moral circle beyond small groups, etc., but I think it’s important to note the progress that has been made during the last 10,000 years of human history. Back then, the idea of cooperative human society at the scale of hundreds of millions of people would have been inconceivable. The question for me is whether a global morality is theoretically possible and whether there is a feasible way to get there. Let’s outline the positive agenda, while remaining realistic about its challenges!

    • My long reply to Mark Sloan’s question may sound pessimistic, but I’m not as pessimistic as it might sound. For example, I do think that there is scope to persuade some (many?) people who insist on their esoteric moral systems not to employ those systems as guides to public policy (as opposed to guides to their own conduct). Some will be resistant to this, but some will be prepared to make the distinction and to acknowledge the force of the relevant arguments. When they do so, they should be praised – whereas, at the moment, there are tribal tendencies to greet them with suspicion on all sides. Likewise, I think there are some pretty good reasons and arguments to avoid systems of social oppression and acts of war, to cooperate in working against global warming and global health crises, and so on. All those reasons and arguments will need to appeal to values that people actually have, and it won’t be the case that all people will be compelled in the same direction on pain of just being factually wrong or outright irrational. But many people have values that can be appealed to in this process of discussion. They may be able to enter into a large amount of beneficial mutual cooperation beyond their existing societies and demographics, without necessarily abandoning their more esoteric moralities as guides to their own conduct (many people, if not most, would see this as a reasonable compromise; those who don’t will likely strike the rest of us as fanatics). My 2012 book, Freedom of Religion and the Secular State, deals with some aspects of this.

  • Russell Blackford writes: “no human society can tolerate unlimited ruthlessness in social, sexual, and economic competition within the group; more specifically, each society insists on limits to intra-group violence.” Mark Sloan writes: “Whatever this special category of norms is called by different species in different times, that name will translate as ‘moral’ due to its ‘violators deserve punishment’ component… a universal principle that solves the cooperation/exploitation dilemma.”

    No human society exists without morality, but what is the basic form of this morality? It’s not a principle but a system, or way of doing things. (The principle is secondary; it comes after the fact.) It’s a system that involves collective judgement and collective enforcement of judgments. It can operate on every human scale. Each increase in scale involves new moral dilemmas and necessitates new institutions such as language, religion, legal systems, governments, mass communication media, etc.

    As Gert shows, it works by making the rules easy to understand and easy to follow. Mostly this is done by thou-shalt-nots. Even “keep your promises” can be framed as “do not break your promises.” Why concentrate on the negative? Because the basis of morality is a system of collective enforcement. The rules have to be simple: easy to understand, easy to follow. As Gert puts it, it’s easy to follow the rules, because most behaviour is allowed. A few simple don’ts that everyone understands give us the basis, along with the requirement that everybody be involved in enforcement.

    As Elinor Ostrom showed, it is the mutual feedback between the commitment of individuals, involvement in enforcement, and a strong sense of community identity that protects and maintains a common pool resource. Morality was the first common pool resource. The first human group agreed to a rule and agreed to punish rule-breaking collectively, rather than through dominance relations. That was the positive feedback loop that makes human social systems self-organizing.
