Last updated on 22 May 2014
In a forthcoming paper, “Evolution and Moral Realism”, Kim Sterelny and Ben Fraser, of the Australian National University, argue that there can be moral facts that evolution by selection tracks. Their argument is that moral reasoning is complex, relying upon both rapid judgements (what Daniel Kahneman (2011) calls “System 1” thinking) and slower, reflective judgements (“System 2”). Moral truths, they argue, consist in truths about human cooperation and the social practices that support cooperation. And these can be tracked by selection. Cooperation is a human-typical trait, and benefits accrue to those who can exploit it. Moral choices are, they argue, “robust decision heuristics” based upon partial information.
I do not wish to argue against their view, as I agree with it, but it seems to me this is hardly the sort of moral realism that has traditionally been argued for by moral realists. Rules like “murder is wrong” are appealed to as ends in themselves. It may very well be the case – it almost certainly is for humans – that cooperation is beneficial to organisms that are social. There is a great literature in both philosophy and economics to this effect. But why is this a moral point? Morality may have the effect of improving the well-being of humans, but that is a natural fact about humans (presumably, intelligent reptiles that lacked a social life to speak of might find cooperation fitness-lowering). The facts that Sterelny and Fraser appeal to are instrumental facts for moral reasoning, and not, I believe, moral facts at all.
This is another failure to build a Milvian Bridge. Suppose you have two societies: one in which something like the Objectivist libertarian view of life – each for themselves – is lauded. The other values cooperation. It is fairly obvious that the individualistic society will not be able to build the infrastructure of science, engineering and exchange that the cooperative society can, and so members of the former will, on average, do worse than members of the latter. These are, however, not moral facts. They are simply facts about human survival in uncertain times. As I said in the last post, any society in which psychological altruism deviates in favour of cooperation will do well unless it happens to be in the minority – “upright people” tend to do worse in times of real disruption and in gangster societies.
I do not believe that we can say that the evolution of cooperative societies tracks moral facts. Instead it tracks fitness-enhancing (and objective) facts about our biology and environment. As social evolution tends to operate many orders of magnitude faster than biological evolution (not always: diseases and resistance to disease evolve more rapidly than, say, the evolution of high density urban populations), our inherent biological dispositions form something very close to a fixed background for moral evolution. To this extent, Sterelny and Fraser are on the money.
Survival, however, is not a good thing by definition. As some have said, better to be an unhappy Socrates than a happy pig, and perhaps it is better to die as a human with dignity than live as a human without it. The Utilitarian doctrine is not a given axiom of morality. The “doctrine of utility”, as it was once called, might in fact explain why moral rules are the way they are, but there is a distinction to be drawn between explaining morality and justifying moral choices or rules. Perhaps it is best to simply pick a fundamental value like flourishing or happiness and build the rest on that.
You will recall that I said that we cannot build a Milvian Bridge for morality, and I think it is still true considering Sterelny and Fraser’s arguments (caveat: I have only read the draft, not the published version). Facts do enter into moral rules, but they are not moral facts unless one deflates moral reasoning to utilitarian reasoning, either explicitly or implicitly. That is, utility is either the goal of the moral agent, or it is the implicit reason why those moral rules apply for an agent. On the implicit version, morality is like Marxian false consciousness: we give moral reasons because we are simply unaware of the social selection processes and the fitness-enhancing aspects of the rules that we inherit from our milieu. Moral justification on this account is telling ourselves stories so we are comfortable with the rules.
If moral facts are real, then there has to be some environmental element to them for selection, social or biological, to track. What kind of environmental aspects could there be to a principle like the Golden Rule? Not the functional aspects of it, which are instrumental properties, but the value of treating others as you would wish them to treat you? Could it be that others might wish to be treated in ways you do not? If you inherit wealth, you may very well wish to be treated differently from those who do not. A Rawlsian Veil of Ignorance, while it may be just, is not a fact of nature – indeed, the opposite is usually the case; nepotism and cronyism are rife in human history (including the present industrial age).
However, it should be said that explaining moral behaviour relies heavily upon considerations of the fitness-enhancement of cooperation and the consequent analyses of the iterated Prisoner’s Dilemma and, more recently, the Stag Hunt, an example of Rousseau’s taken up by Brian Skyrms. The Stag Hunt considers a hunter’s payoff for remaining in a collective hunt for stags, versus taking off (defecting) and chasing a rabbit that may be easier to catch. Given our environmental and social challenges, cooperation is usually a fitness-enhancer, and it tracks both the resources afforded by the environment (stags or rabbits) and our likely payoff from cooperation.
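The structure of the Stag Hunt can be made concrete with a small payoff table. The sketch below is illustrative only – the particular payoff numbers are my assumptions for exposition, not figures from Skyrms or from Sterelny and Fraser – but it shows the game’s signature feature: cooperating (hunting stag) is the best reply to a cooperator, while defecting (chasing the rabbit) is the best reply to a defector.

```python
# Illustrative Stag Hunt payoffs (assumed for this sketch):
# both hunt stag -> 4 each (big shared prize);
# chase the rabbit -> 3 regardless (safe, solitary catch);
# hunt stag alone -> 0 (the stag escapes a lone hunter).
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "rabbit"): (0, 3),
    ("rabbit", "stag"): (3, 0),
    ("rabbit", "rabbit"): (3, 3),
}

def best_response(opponent_move):
    """Return my payoff-maximising move against a fixed opponent move."""
    return max(("stag", "rabbit"),
               key=lambda me: payoffs[(me, opponent_move)][0])

# Two pure equilibria: stag/stag (payoff-dominant, cooperative)
# and rabbit/rabbit (risk-dominant, safe).
print(best_response("stag"))    # -> stag
print(best_response("rabbit"))  # -> rabbit
```

With these numbers, hunting stag only pays in expectation if I think my partner will stay with the hunt with probability above 0.75 (since 4p > 3) – which is why trust and assurance, not just payoff, drive cooperation in this game.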
So I think that morality is not something that is an on-track process of evolution of any kind. Moral reasoning did not evolve to track moral facts, but to track, as Sterelny and Fraser say, the know-how needed to live in a society of social, symbol-using, reasoning apes.
- Fraser, Ben, and Kim Sterelny. Forthcoming. “Evolution and Moral Realism.”
- Kahneman, Daniel. 2011. Thinking, fast and slow. London: Allen Lane.
- Skyrms, Brian. 2004. The stag hunt and the evolution of social structure. Cambridge, UK: Cambridge University Press.
- There has been some neurobiological reaction against Kahneman’s schema [see here and here], but I think it doesn’t affect the argument here. We enculturate our children into making certain kinds of judgements quickly through the use of prototypical examples and reinforcement, and some of those examples are the result of deliberative thinking.