
Morality and evolution 2: Moral facts

Last updated on 22 May 2014

[Morality and Evolution 1 2 3 4 5 6 7]

In a forthcoming paper, “Evolution and Moral Realism”, Kim Sterelny and Ben Fraser, of the Australian National University, have argued that there can be moral facts that evolution by selection tracks. Their argument is that moral reasoning is complex, and relies upon rapid judgements (what Daniel Kahneman 2011 calls “System 1” thinking) and slower, reflective judgements (“System 2”[1]). Moral truths, they argue, consist in truths about human cooperation and social practices that support cooperation. And these can be tracked by selection. Cooperation is a human-typical trait, and benefits accrue to those who can exploit it. Moral choices are, they argue, “robust decision heuristics” based upon partial information.

I do not wish to argue against their view, as I agree with it, but it seems to me this is hardly the sort of moral realism that has traditionally been argued for by moral realists. Rules like “murder is wrong” are appealed to as ends-in-themselves. It may very well be the case – it almost certainly is for humans – that cooperation is beneficial to organisms that are social. There is a great literature in both philosophy and economics to this effect. But why is this a moral point? Morality may have the effect of improving the well-being of humans, but that is a natural fact about humans (presumably, intelligent reptiles that lacked a social life to speak of might find that cooperation is fitness-lowering). The facts that Sterelny and Fraser appeal to are instrumental facts for moral reasoning, and not, I believe, moral facts at all.

This is another failure to build a Milvian bridge. Suppose you have two societies: one in which something like the Objectivist libertarian view of life – each for themselves – is lauded. The other values cooperation. It is fairly obvious that the individualistic society will not be able to build the infrastructure of science, engineering and exchange that the cooperative society can, and so members of the former will, on average, do worse than members of the latter. These are, however, not moral facts. They are simply facts about human survival in uncertain times. As I said in the last post, any society in which psychological altruism deviates in favour of cooperation will do well unless it happens to be in the minority – “upright people” tend to do worse in times of real disruption and in gangster societies.

I do not believe that we can say that the evolution of cooperative societies tracks moral facts. Instead it tracks fitness-enhancing (and objective) facts about our biology and environment. As social evolution tends to operate many orders of magnitude faster than biological evolution (not always: diseases and resistance to disease evolve more rapidly than, say, high-density urban populations do), our inherent biological dispositions form something very close to a fixed background for moral evolution. To this extent, Sterelny and Fraser are on the money.

Survival, however, is not a good thing by definition. As some have said, better to be an unhappy Socrates than a happy pig, and perhaps it is better to die as a human with dignity than live as a human without it. The Utilitarian doctrine is not a given axiom of morality. The “doctrine of utility”, as it was once called, might in fact explain why moral rules are the way they are, but there is a distinction to be drawn between explaining morality and justifying moral choices or rules. Perhaps it is best to simply pick a fundamental value like flourishing or happiness and build the rest on that.

You will recall that I said that we cannot build a Milvian Bridge for morality, and I think it is still true considering Sterelny and Fraser’s arguments (caveat: I have only read the draft, not the published version). Facts do enter into moral rules, but they are not moral facts unless one deflates moral reasoning to utilitarian reasoning, either explicitly or implicitly. That is, utility is either the goal of the moral agent, or it is the implicit reason why those moral rules apply for an agent. On the implicit version, morality is like Marxian false consciousness: we give moral reasons because we are simply unaware of the social selection processes and the fitness-enhancing aspects of the rules that we inherit from our milieu. Moral justification on this account is telling ourselves stories so we are comfortable with the rules.

If moral facts are real, then there has to be some environmental element to them for selection, social or biological, to track them. What kind of environmental aspects could there be to a principle like the Golden Rule? Not the functional aspects of it, which are the instrumental properties, but the value of treating others as you would wish them to treat you? Could it be possible that others might wish to be treated in ways you do not? If you inherit wealth, you may very well wish to be treated differently from how those who do not inherit it wish to be treated. A Rawlsian Veil of Ignorance, while it may be just, is not a fact of nature – indeed, the opposite is usually the case; nepotism and cronyism are rife in human history (including the present industrial age).

Stag hunt

However, it should be said that explaining moral behaviour relies heavily upon considerations of the fitness-enhancement of cooperation and the consequent analyses of the iterated Prisoner’s Dilemma, and more recently the Stag Hunt, an example of Rousseau’s taken up by Brian Skyrms. The Stag Hunt considers the payoff to a hunter of remaining in a collective hunt for stags, or taking off (defecting) to chase a rabbit that may be easier to catch. Given our environmental and social challenges, cooperation is usually a fitness-enhancer, and it tracks both the resources afforded by the environment (stags or rabbits) and our likely payoff from cooperation.
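To see the structure of the game, here is a minimal sketch in Python with illustrative payoff numbers of my own choosing (neither the post nor Skyrms is committed to these particular values). The point is only that the game has two stable outcomes – both hunt the stag, or both settle for rabbits – and which one you end up in depends on what you expect the other hunter to do.

```python
# A minimal sketch of the Stag Hunt with illustrative payoffs (my own numbers,
# not from the post or from Skyrms). Hunting stag together beats chasing
# rabbits, but hunting stag while the other hunter defects yields nothing.
PAYOFFS = {
    ("stag", "stag"): (4, 4),      # cooperation pays best for both
    ("stag", "rabbit"): (0, 3),    # the lone stag hunter gets nothing
    ("rabbit", "stag"): (3, 0),
    ("rabbit", "rabbit"): (3, 3),  # both settle for rabbits
}

STRATEGIES = ("stag", "rabbit")

def best_response(opponent_move: str) -> str:
    """Return the strategy that maximises my payoff against a fixed opponent move."""
    return max(STRATEGIES, key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

if __name__ == "__main__":
    for their_move in STRATEGIES:
        print(f"If the other hunter goes for the {their_move}, "
              f"my best response is the {best_response(their_move)}")
    # The output shows two pure-strategy equilibria: (stag, stag) is the
    # payoff-dominant outcome, (rabbit, rabbit) the risk-dominant one --
    # cooperation pays, but only if you can trust the other hunter to stay.
```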

So I think that morality is not an on-track process of evolution of any kind. Moral reasoning did not evolve to track moral facts, but to track, as Sterelny and Fraser say, the know-how needed to live in a society of social, symbol-using, reasoning apes.

References

  1. Fraser, Ben, and Kim Sterelny. Forthcoming. “Evolution and Moral Realism.”
  2. Kahneman, Daniel. 2011. Thinking, fast and slow. London: Allen Lane.
  3. Skyrms, Brian. 2004. The stag hunt and the evolution of social structure. Cambridge, UK: Cambridge University Press.

Notes

  1. There has been some neurobiological reaction against Kahneman’s schema [see here and here], but I think it doesn’t affect the argument here. We enculturate our children into making certain kinds of judgements quickly through the use of prototypical examples and reinforcement, and some of those examples are the result of deliberative thinking.

11 Comments

  1. DiscoveredJoys

    A Milvian bridge – between is and ought.

    I could argue that evolutionary processes are part of the World of Facts, but Moral Imperatives are part of the World of Concepts. Now (big step) if you define that truth can only apply to facts but wisdom can only apply to concepts (thus defining ‘different ways of knowing’ out of the frame) you can arrive at a situation where particular moral concepts are *wise* for particular contexts. In as far as our conceptual lives are lived against a world arising from evolutionary facts, our moral wisdom may reflect our evolutionary dispositions – but it doesn’t have to.

    Not so much a Milvian bridge as two camps of people waving at each other across an unbridgeable gorge.

  2. Larry Moran

    It’s true that the cooperative society will likely be more successful than the individualistic society. But this doesn’t have to be due to a genetic difference. It could be mostly cultural and have nothing to do with biological evolution.

  3. Stephen Watson

    “Perhaps it is best to simply pick a fundamental value like flourishing or happiness and build the rest on that.”

    This. I often find myself bewildered by discussions of moral realism, because they seem to assume this without even mentioning — let alone demonstrating — it. The importance of human happiness (except our own, subjective, individual, happiness, and that often only in the short to medium term) is not a fact about the universe. It is true that, for both practical social reasons and psychological emotional reasons, the happiness of others is a contributor to my own happiness. But even that has biases and limits — it’s weighted far more heavily on the welfare of my family and friends than on someone I’ve never met. And it’s an argument that has no purchase on a psychopath in a position of power — I can’t appeal to them for mercy; I can only band together with my fellow cooperators to restrain the thug by any means at our disposal.

  4. Larry, consider the evolution of cooperative societies throughout the entire history of primate evolution. Do you really suppose that genetic evolution had nothing to do with any primate evolution of cooperative societies?

    • Larry Moran

      I think it’s very likely that alleles that allow for large cooperative societies have become fixed in the human population over time. These alleles were not fixed BECAUSE they were beneficial in large societies. I think it’s very likely that our ancestors lived in relatively small family groups for the past several million years. They only started to live in larger groups with strangers during the last 10,000 years or so, and that’s not enough time to have fixed alleles that control most of the modern behavior (morality) that we see today.

      During the past 10,000 years I suspect that most societies were “individualistic” in the sense that John was talking about. Even today I’ve heard that there are modern industrialized societies (nations) that don’t have public health insurance for everyone. Can you believe it?

      • I think it likely that the genetic dispositions that allow us to live in large societies are not genes that evolved for that purpose, but that we piggyback large societies on genetic dispositions that evolved for small society cooperation.

        But I disagree about most societies being individualistic. I think we are natively eusocial, and render mutual aid, because that is our species’ typical behaviour. And that is because the groups we evolved to live in were small related family groups of up to a few hundred (Dunbar’s Number).

        Even in large societies, the number of friends and relatives we track individually is around Dunbar’s Number (250±), and we associate in subnetworks of these. The rest we behave towards according to some rules of classification that save us cognitive load and working memory (never trust an Albanian [from Aunt Julia and the Scriptwriter]).

        Those load-saving rules are closely related to moral rules, along with the rules for behaving to kin and kith. Next post…

  5. John, based on what you said about Kim Sterelny and Ben Fraser’s moral realism, I suppose that their instrumental facts are moral facts in the case of a population of mature reproductive agents who are capable of philosophical contemplation. These moral facts are relational to the contemplative agents. For example, Peter Railton’s consequentialist moral realism explains how moral facts are relational.

    Peter Railton, “Moral Realism,” The Philosophical Review 95 (April 1986): 163-207.

  6. Yes, nothing has fixed in the last 10,000 years. In fact, nothing has fixed since the first branching of modern humans.

    Fortunately, that universal health care allele is spreading….

  7. Larry, sorry that I missed that you said “large cooperative societies” and not mere “cooperative societies.” I erroneously responded as if you said “cooperative societies.”

    Also, this old internet browser cannot place my replies in the correct place.

  8. I’m trying to understand the Kim Sterelny and Ben Fraser model of “selection tracks.” For example, does this model imply that the existence of moral facts indicates that evolutionary mechanics such as natural selection in another galaxy similar to ours would necessarily develop intelligent beings that can comprehend the moral facts? Or do “selection tracks” merely mean that moral facts could prod natural selection if other prerequisite circumstances fell into place? Or is “selection tracks” something completely different from my thoughts?

