Social evolution

[Morality and Evolution 1 2 3 4 5 6 7]

In any field that has statistical variation, it is necessary to isolate the variables. Biology is all about statistical variation of populations, and so we must expect that any account of morality that is based upon biology will have variation along a number of axes. Here I wish to sketch out what the variables might be.

All human dispositions vary among individuals [Why dispositions and not behaviours or beliefs? Read my series on Evopsychopathy. We evolved to be disposed to acquire beliefs and behaviours in particular ways, not to acquire beliefs and behaviours directly as a result of any level of selection]. That is, a population will have a distribution curve for any trait that can vary individually and independently. Given my account of moral dispositions in the previous posts, I suggest that we can usefully begin to visualise moral dispositions in the following manner:

  1. There will be tails for any distribution that are seen as over-cooperative (“saintly”) or under-cooperative (“evil”). This is a fact of statistical distributions.
  2. Norm enforcement exists to detect and sanction deviant behaviours like these.
  3. Depending on the value of the variable, sanctions will be applied against such deviation:
    1. Rewards for cooperators, unless the cooperation causes coordination problems (there is such a thing as being too saintly).
    2. Punishments for defectors, including precautionary measures.
  4. If there is a stable mode in the population's distribution, it will represent the best average tradeoff of the costs and benefits of cooperation. However, in unstable environments (say, constant invasion, or changing trade conditions), the mode may be a transitory optimum, and possibly one best adapted to past conditions, given the delay in social adaptation. If things are too chaotic, it may be that the mode represents only the more conserved behaviours, and not any kind of optimum at all.
  5. If the distribution of behavioural dispositions is skewed, then the null hypothesis must be that there is some selective pressure to which the population has not yet adapted, with the mode lagging behind the optimal tradeoff of costs and benefits (at that level). If the distribution is broad and flat, then either there is no selection on that axis, or it has been greatly relaxed. A tight, high-peaked distribution suggests strong selection on that axis.
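
Points 4 and 5 can be illustrated with a toy simulation (every number in it is invented for illustration, not an empirical estimate): start a population with a normal spread of some cooperative disposition, apply stabilising selection around a fixed optimum, and the distribution tightens around the mode.

```python
import math
import random
import statistics

def generation(pop, optimum, strength):
    """One round of stabilising selection: reproductive weight falls off
    with squared distance from the optimum (a deliberately crude model)."""
    weights = [math.exp(-strength * (x - optimum) ** 2) for x in pop]
    return random.choices(pop, weights=weights, k=len(pop))

random.seed(1)
pop = [random.gauss(0.0, 1.0) for _ in range(5000)]  # dispositions on one axis
spread_before = statistics.stdev(pop)
for _ in range(20):
    pop = generation(pop, optimum=0.0, strength=0.5)
spread_after = statistics.stdev(pop)
# Sustained selection around a stable optimum narrows the distribution;
# spread_after comes out well below spread_before.
```

Set `strength` to 0 and the spread largely persists, mimicking relaxed selection; let the optimum drift each generation and the mode chases it with a lag, as in the unstable environments of point 4.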

The implication of these considerations is that the mere existence of a norm is no guarantee that it tracks past success (at that level of fitness bearer), and so we ought not to expect that the moral landscape is an adaptive landscape to which morals have adapted, as Sam Harris would have it. To show that it is, we would need to show that the context of the norm (biological, cultural, economic, etc.) is stable for long enough for the distribution of the population to normalise under selection. But let us suppose that we have such a case. What then?

Individuals within the population will tend also to vary in their dispositions over some curve. Some will be at the tails, most will be at the mode or near it, and some in between. What are these dispositions? From my fundament (sorry, from fundamental considerations) I offer these as a first cut:

  1. Other-regarding v self-regarding. This is a general cognitive style, not restricted to morality. It has been called the heterist–autist axis in studies of autism spectrum disorders, which can be seen as the extreme end of a general dispositional distribution.
  2. Virtue v consequence. This is whether an agent follows rules for their own sake (virtue) or as expedient means to an end (consequence). This is a standard distinction in moral philosophy.
  3. Conservatism v progressivism. This is the disposition of an agent to adopt or retain rules. Some people are disposed towards novelty (“early adopters”), while others are disposed against it (“late adopters”). In times of moral norm change, there will be some who hold out against changing behaviours no matter the consequences.
  4. Narrow v wide scope. Some will tend to broaden the scope of cases under which a rule is applied, while others will tend to be more restrictive. We might call the narrow case appliers legalists and the broad case appliers liberalists.
  5. Reflexivity v mimicry. Some tend to reflect upon the intension, or meaning, of a norm in their moral development. Others simply follow the observed pattern and do not reach any kind of reflective equilibrium (and of course most are somewhere in between these poles).

Now it is possible that these are not independent; that all conservatives are self-regarding virtue ethicists, narrow in their application of norms, and mimics; but if so, that is to be demonstrated empirically. I suspect that while there may be concentrations like this, there will be a spread of alternatives in the population. The modal distributions in this 5-space will be contextual and historical instances of strategies that work in a given environment, not universals about human behaviour. They are context-dependent peaks in the 5-space, and what counts as fit in moral terms depends on what resources and costs there are in the environment. Dawkins once used the metaphor of a Gangster Society (like 1930s Chicago). If you live in such a society, being nice is rarely a viable option. If, on the other hand, you live in a Society of Friends, it is easy and rewarding to be a grifter.
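
That context-dependence can be put as a toy frequency-dependent payoff calculation. All the payoffs, rates and the "sanction" mechanism below are my invented assumptions, not Dawkins'; the only point is that which disposition does best is a function of the surrounding population and its enforcement regime, not a universal fact:

```python
def payoff(strategy, p_coop, sanction_rate, b=3.0, c=1.0, fine=5.0):
    """Expected one-round payoff in a toy helping game.
    b: benefit received from a cooperating partner; c: cost of helping;
    sanction_rate: chance a defector is detected and fined by enforcers."""
    gains = p_coop * b                     # what partners give you
    if strategy == "cooperate":
        return gains - c                   # cooperators always pay the cost
    return gains - sanction_rate * fine    # defectors keep c but risk the fine

def best(p_coop, sanction_rate):
    return max(("cooperate", "defect"),
               key=lambda s: payoff(s, p_coop, sanction_rate))

# Trusting and lightly policed "Society of Friends": the grifter prospers.
society_of_friends = best(p_coop=0.95, sanction_rate=0.1)   # "defect"
# Few cooperators but heavy retaliation against cheats: defection stops paying.
gangster_society = best(p_coop=0.2, sanction_rate=0.7)      # "cooperate"
```

Change any one parameter and the optimum can flip: fitness in moral terms is a fact about the environment as much as about the agent.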

Societies can be flat, or they can be structured. The larger the population and the more diverse the ethnicity, the more hierarchical I expect a society to be. That is, those who have all the wealth and control will be a small, ethnically constrained, elite class. In a structured society like this, where upward mobility is difficult if not impossible, pressure from below will tend to undermine the societal norms in favour of class or ethnic norms. Conflict will ensue (this is hardly surprising news).

And finally, I would suggest that moral norms are heavily scaffolded. By this I mean that the development of moral norms is something that individuals receive from the social order and enforcement as they mature. Nobody reconstructs moral norms individually. You get them from your socialisation. An anarchical morality is a theoretical possibility, but in fact, it is impracticable. Even the most libertarian of rugged individuals gets their moral norms through socialisation. You couldn’t get them by experience. Consider how that would have to play out:

  • You would have to draw general conclusions from first principles, and where would they come from?
  • You would need to set up your own norms as hypotheses, out of an indefinitely large number of alternatives, and test them against the behaviours of members of your society.
  • You would need to have some independent criteria for success of these hypotheses.

In short, moral norms are underdetermined by observation. Now suppose that you simply do what others do (mimic behaviours), and are sanctioned for this (rewarded or punished). Your norms will evolve rather quickly as action-guiding rules, without much, if any, reflection upon them and their justification. Here, reward and the absence of punishment are the criterion of success. This is much simpler.
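
That copy-and-be-sanctioned process can be sketched as a trivial reinforcement learner (the "queue" norm and all the rates are invented): the agent converges on norm-following with reward and the absence of punishment as its only signal, never representing a justification.

```python
import random

random.seed(0)
norm = "queue"                      # the local convention, unknown to the learner
actions = ["queue", "shove"]
value = {a: 0.0 for a in actions}   # learned worth of each behaviour

for _ in range(200):
    # Mostly repeat what has been rewarded; occasionally try something else.
    if random.random() < 0.1:
        act = random.choice(actions)
    else:
        act = max(value, key=value.get)
    sanction = 1.0 if act == norm else -1.0   # social reaction is the ONLY signal
    value[act] += 0.1 * (sanction - value[act])

# The agent ends up queuing, with no reflection on why queuing is "right".
```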

But even reflection upon norms is scaffolded. Norms are justified by the social context (“God wills it”; “it is your duty as an X”; “what would life be like if everyone did that?”). When an agent first starts to reflect upon the norms, these questions, and a host of proffered answers, are the tools they will begin with. Few range far from them. Reflection is widespread, but not very deep.

All this leads to the diversity of moral behaviours and development one sees in a complex society, and indicates the difficulty of finding universals across all societies. Biology underpins moral behaviour, but at best our biology is an adaptation to constantly changing environments rather than to a singular social structure (whether 1950s suburban American capitalism or semi-nomadic foraging societies).

The final post will be “How should I choose to be moral, given evolution?”


I should note that there is no set historical sequence implied in the levels 0 to 4, apart from the fact that we were primates before we were humans, so some sort of historical transition from 0 to 1 must occur before any of the others. Each layer contains the contexts of the lower layers, though, so we are always apes, humans, sometimes transitional, sometimes urban, sometimes imperial, and sometimes industrial. Consequently, one cannot draw inferences from one level (say the WEIRD – Western, Educated, Industrialised, Rich, Democratic – students of the University of Chicago) to all peoples in all contexts.

The layers interact fluidly, not deterministically. The speed and selection processes of each level will vary according to how stable the environment at that level is, and how stable the fitness bearers are, and the different levels will interact only weakly much of the time. Each level of ethical selection can superintend a lower level, but there is a limit to the degree and stability of that superintendence. For example, while our cultural ability to cook food may modify our jaw structure over many generations, if we collapse our society and lose fire, that influence of culture upon morphology will cease and biology will rapidly rebound (because the alleles for robust jaws are already in the population). Likewise, an ethical trait in an urban society may rapidly dissipate if society collapses back to kin groups and the HSSS (human standard social structure).

The scope of fitness is constrained by the extent of the fitness bearer. This truism means that where a social class loses control (the rewarding of cooperation and the punishing of defection), the ethical precepts of that class lose all ethical weight for the other agents in the society. There will be conservatives (“it was better when the Party was in control”) and radicals (“we should defeat the Party now!”) in this process, of course, so it will not happen equally or uniformly across all situations in a complex society or ecological context. Those in marginal environments may revert to traditional ethical schemes more rapidly than those in resource-rich environments.

One implication I would like to emphasise is that a universal ethics requires a universal fitness bearer. If members of the tribe are held to be pre-eminent, the ethical standing of the foreigner is not taken into account. Only with universal abstract categories like “human” or “adult”, or the extension of categories such as “child of [the] God” to those outside the religion, can there be a universal ethics; otherwise those outside the favoured group, class or ethnicity are generally seen as sub-human in some fashion: childlike, less well developed, savage, incapable of love, altruism or charity, and so on. A modern example, just to show that these levels do not exist in a homogeneous fashion through societies even when they are imperial or industrial, is when members of a dominant religion state outright that non-members are “unable to fully experience love”, or “life”, or whatever. Every Easter, some bishop or pastor makes exactly this claim.

The notion of a “world citizen” is a prerequisite for a universalist ethics, and such ideas will continually battle against the smaller-in-scope notions of “us” against “them”, for every kind of divide from empire to ethnicity or class to simple kin relationships. I am most certainly not suggesting that such ethics are impossible; but they are not foregone conclusions of the world process, either. Partly this is because the more general the interest bearer, the weaker the individual benefit, and so the less rational it is for an agent to choose to act in a universal fashion unless the payoffs, once such an ethic is successfully installed, are exceedingly strong.

Famously, Peter Singer extended the ethical scope to include all sentient species; one might wonder whether this is in fact practicable. The payoffs, evolutionarily, will have to be very great for us to successfully include other species in our moral equations. Again, I am not saying it cannot be done, and it is done to a degree in the prohibitions against cruelty to animals, but this might be seen simply as a consequentialist argument (children and adults who are cruel to animals will be cruel to other people as well).

To understand how this might happen, we need to understand the relation between biology and culture in ethics as in other domains. It has been argued that “genes have culture on a leash” and that biology is, if not destiny itself, then a major contributor to destiny. I think this is a mistake, and offer instead what I call the laminar flow theory of the relationship between culture and biology (and between culture at one level and at another):

Hydrologists know that large bodies of water, such as rivers and seas, can have currents moving at different rates, with different properties such as salinity, temperature and direction of flow. These form laminar layers. A deep fast layer may not greatly affect a shallow slow layer (and so on for the other combinations), but at the interface between them there will be turbulence that does have some cross-layer effect. I consider the relations between the levels of biology and culture to be like a laminar flow: the influences are not one-way, nor even constant, between the layers. Time for a diagram:


The strengths and rates of the interactions here are notional: they may vary in any fashion depending upon the interaction type, the degree to which it is endogenously changing, and the selective value of each. The point is merely that each level can influence those above and below it, in variable ways. Lower does not always imply slower: culture can be conservative while biology changes rapidly, and vice versa. There may be general rules for this, but they are not apparent to me.

So moral duties to other entities will depend very much upon the degree of fitness that the moral layer imputes, the stability of selection for those moral duties, the influence on that layer of moral discourse of other layers, and so on. In short, it’s a blooming mess. We can, however, make some general rules of thumb.

I think that moral duties are less strong the less local the beneficiaries. I care more about the welfare of my children than I do about the welfare of my neighbour’s children in general, and more about them than about the children of those over the hill, and so on. Moral concern is not transitive. However, I can care more about the moral rule than about my own children if conditions (in me, and in my social context) are right: consider the attitudes of “true believers” in communist societies. Likewise, I can care as much about the welfare of a child portrayed pathetically on television (by an agency that wants your money; one hopes altruistically) as I do about my neighbour’s child, and so on.

The less like my own kind (social, ethnic, class) the less likely I am to extend equal moral rights. Justifications of behaviour, however, need not be explicit in this way. Sometimes I might say that whites should get more rights than blacks because I am white. More likely I am going to tell some story about God, or history, or genes, that justifies the view that I am inclined to have because I have an interest in the status quo, or the status I would like to be quo. False consciousness is ubiquitous.

Claims that morality is a summary of what previous societies found to work towards flourishing are partially right, but not simply true. If you restrict the scope of “society” and “worked” and “flourishing” to this class or that ethnicity and so on, then it is a truism: moral rules survive to the degree they help those that hold them to survive. But it does not mean that all past moral rules tend towards flourishing of all society, as conservatives often think. Nor does it mean that none do, as radicals often think. The reality is a matter of empirical investigation.

In the next post, I will attempt to discuss the variations in moral dispositions as a function of population structure. After that I will try to think through the moral implications: if these are all the outcome of evolutionary processes, what should I, as an agent, do and think? Then I will collapse in pain…


If we agree that morality enhances fitness, because it enables cooperation, several questions arise: what sort of fitness enhancement does it provide and to what? In short, what is the selection process tracking? To say that morality provides a foundation for social cohesion and the consequent benefits that accrue is not enough. We have to be exact about it.

I think that there are several levels of context in which selection tracks fitness in moral contexts, and several kinds of thing that are fitness-enhanced. I will call these levels 0 to 4.

Level 0: Primate society

Level 0 is the prehuman PSSS (Primate Standard Social Structure). Here individuals compete for mating and for reproductive resources such as food, territory and shelter. They do this by pairwise competition, using positive signals (grooming, permitting mating access, resource sharing) and negative signals (the primate threat stare, stature raising, and acts of violence). This raises the genetic fitness of the individuals who succeed in attaining group membership. Individual groups may flourish as well, if cooperation is well managed and effective norms are enforced. Consequently, the fitness bearers are individuals and their reproductive lineages, and also groups, insofar as groups are sorted by mean fitness in competition with other groups. Transmission is largely via genes and niche construction. However, there is evidence of cultural transmission among primates, and so mimetic (behaviour-copying) channels of transmission of norms are also likely.

Level 1: Human dispositions

Level 1 is the human standard social structure (HSSS). This is not a matter of particular implementations of social structures – I consider it very likely that humans have a vast array of actual social organisations, and nothing will be all that universal – but of the inherited dispositions to acquire local social norms. One is not born with the Golden Rule, for example, but if that rule is the norm of the postnatal society, then as the individual matures they will tend to acquire it just because it is the local norm. This, however, will compete with other dispositions – to acquire mating opportunities through social popularity and resource acquisition, for example. Individually, we “learn” a number of competing exigencies and instrumentally attempt to satisfy them as best we can. This is a tradeoff, so moral rules (like Kant’s proscription of ever lying) will be treated in a casuistic fashion, to achieve the best outcome the agent considers attainable (and of course, success at that will be determined post hoc).

In level 1 morality, one acquires the rules of society in an underdetermined fashion; one does not learn from experience that the Golden Rule works, but only that others think it worthwhile. We tend to use shortcut heuristics to acquire social knowledge (see Gigerenzer’s work) because we cannot construct that knowledge by induction, nor even by abduction. Because we have these inborn rules, some stimuli and not others are more salient to us, and we construct our generalisations from those stimuli and not others. Consequently, the emotional contexts of responses by other agents (such as family or friends) will tend to carry more weight when we construct our moral framework than objective reportage of what works.

This means that we should be careful of thinking that moral rules are simple summaries of past success-acquisition. Moreover, while we may construct some of our moral generalisations, given that we (and not the other primates) are symbol users, much of it will be passed on to us verbally or by demonstration (mimesis), and this relies heavily therefore upon our inherited capacities to learn and process language and copy behaviours, with correction. Such learning “scaffolds” human development and socialisation. A feral child will not learn social norms properly past the age at which such basic cognitive resources develop.

We tend in this context to give greater weight to kin than to strangers, and so our rules will tend to guard the family in proportion to the relatedness of the individuals. Kin-tracking is directly related to inclusive fitness. In this context, nepotism and xenophobia are Goods.
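
The proportionality to relatedness is Hamilton's rule: an altruistic act is favoured by selection when r × b > c, where r is the relatedness of actor to recipient, b the benefit to the recipient and c the cost to the actor. A worked example with illustrative numbers:

```python
def favoured(r, b, c):
    """Hamilton's rule: altruism is selected for when r * b > c."""
    return r * b > c

# The same act (cost 1 to the actor, benefit 3 to the recipient) is
# favoured towards a full sibling but not towards a first cousin:
favoured(r=0.5, b=3, c=1)     # sibling, r = 1/2  -> True  (1.5 > 1)
favoured(r=0.125, b=3, c=1)   # cousin,  r = 1/8  -> False (0.375 < 1)
```

Guarding the family "in proportion to relatedness" is just this inequality applied across the kin network: as r falls, ever larger benefits are needed to justify the same cost.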

Level 2: After the Neolithic Transition

Level 2 is what I call the Post-Transitional context. Once populations exceed working-memory constraints for tracking reciprocal altruism, abstract indicators of social commitment (and hence of trustworthiness for reciprocal benefits) must be called into play. Social divisions arise in terms of class, division of labour and skill, and ethnicity, and these indicators, which I call “tribal markers”, mark them. Instead of tracking hundreds or thousands of individuals, one merely has to track these indicators as instances of a class, and societies tend to have only a few such classes. Note: I do not mean the sociological sense of class as a measure of wealth or power here, but the more logical sense. Nevertheless, these classes are arrayed from high status to low status, and there may be more than one hierarchy in play, complicating the simultaneous judgements individuals must make upon encountering a representative of these classes. For example, a Scot was lower in status than an Englishman in the colonial era, but a Scottish aristocrat had a higher status than an English wheelwright, no matter how superior the latter thought their nation.
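
The working-memory point is simple combinatorics: the dyadic relationships to be tracked grow quadratically with population size, while tribal markers reduce the problem to a handful of classes. A sketch, with arbitrary population figures:

```python
def dyads(n):
    """Number of distinct pairwise relationships among n agents."""
    return n * (n - 1) // 2

band = dyads(150)      # a forager-scale band: 11,175 dyads to keep track of
town = dyads(10_000)   # a small post-Neolithic town: 49,995,000 dyads
marked = dyads(12)     # twelve marker-bearing classes: just 66 class-to-class
                       # relations, however large the town grows
```

The marker system's cost is fixed by the number of classes, not the number of people, which is why it scales where individual reputation-tracking cannot.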

Now the rules are driven by cultural norms and institutions that serve the overall interests of urban societies. The fitness bearer is the urban society itself, and the social benefit is the social cohesion that permits the high-status classes to benefit most. Around this time, warrior classes arise (“equestrians”, “knights”), and their interests are served by their being part of, or rewarded by, the highest-status classes. This division of political and economic power is seen very early on, in the appearance of distinctive early Neolithic architecture that clearly comprises the residences and forts of a ruling class, from around 10,000 BCE to 3000 BCE. Where the ordinary folk had round houses in Anatolia, for example, we start to see rectangular floor plans and much larger buildings.

Consequently, we see moralities that are fiercely in favour of the local settlement: gods are local, rituals are local, and loyalty to the state and the rejection of betrayal become institutionalised. Religions arise around this time (meaning a sacerdotal division: a priestly class or vocation). Since this period is in the main pre-literate, the transmission of the rules is by symbolic communication and force majeure. The fitness bearers are the classes themselves (and in particular the high-status individuals within each class – status hierarchies apply within, as well as between, classes), and when a class that has the highest status is overthrown (as when the Hyksos took over from the prior rulers of Egypt), there is a period of major restructuring of the fitness of the classes.

Level 3: Imperial ethics

Once states come into being, much of ethics has to do with gaining and maintaining cooperation, and punishing defection, at the state level. When states that begin nearly homogeneous ethnically come to cover greater ethnic diversity (which will include religious diversity, diversity of taboos and rituals, diversity of economic and ecological contexts and behaviours, etc.), that becomes difficult, and so ethics for the state must be invented. Patriotism will replace ethnic loyalty. Following the law will replace following village custom. The pre-eminence of the ruling class becomes a moral question: kings and their families must be regarded with awe and strict protocols.

Imperial ethics are superstructures of urban ethics. They can be ephemeral, changing within the generations of a single dynasty; but usually they are simpler and longer-lived than most ethics, because being simpler they are more easily adapted to local and transitory circumstances. Imperial ethics can even survive a revolution, as they did in the Soviet case – all that changed were the attributes of the ruling class.

The fitness bearers here are the imperial institutions, and (usually) the families that head up those institutions. Imperial ethics are primate social dominance behaviours writ large, very large indeed. Consequently, when empires fall or are revolutionised, the ethics do not need much revision. Transmission of imperial ethics is done through literary and institutional modes, of course, but it also tends to have a strong monumental aspect – norms are promulgated on stelae and other inscribed monuments.

Level 4: Industrial ethics

You might, if that sort of terminology pleases you, call this Colonial ethics. To have industry is, of necessity, to have a cheap labour pool and cheap raw materials, and so it must be colonial. [By the way, I do not think we are any distance at all from being colonial; so there simply is not yet any post-colonialism except in some special cases, and not enough of them to form postcolonial ethics.]

Colonial or Industrial ethics are the first truly universal ethics, in that all must attend to the structures of the industrial age, irrespective of class, ethnicity, or physical prowess. This, of course, does not mean that all benefit equally from industrial ethics. Dominant families, groups and tribes continue to benefit most.

Industrial ethics are largely consequentialist. What counts is the outcome for the interest-bearers (continued production, wealth acquisition, stability of the workforce, etc.), and as the interest-bearers become multinational in scope, so too do they undercut the ethics of empires.

Who benefits

So the question of when ethics is fitness-enhancing depends on what level or type of ethics we are discussing, who or what the fitness bearers are, and the scope and context of the type of fitness. What is fit in, say, Transitional contexts (for example, the tribalisms of some European regions, where family defeats national interests) will be very different from what is fit in industrial situations, and it is my opinion (note: not a well-thought-out belief!) that most ethical conflicts occur when these different levels collide; otherwise, ethical rules tend to settle out stably when the social context is itself stable. Village ethics, for instance, do not change much unless extra-village conflicts occur.

I may do several more in this series to draw it all together. Then again, I may not…


A while back I gave a talk to a group of theologians on the question of Darwinian accidents. It had no ethics content. The first question I was asked was “If you are an atheist, how can you have moral rules?” Like many others who talk about Darwin and evolution, I have been asked this a lot, and my answer is always the same:

“I am an ape. That is what apes do.”

Social apes (arguably all the apes) have evolved to function in social groups, and norm-following is a crucial aspect of this. It would be remarkable if humans, who evolved among the other apes, did not follow group norms; that they do is no surprise.

Ethical philosophers (I mean philosophers who do ethics, not good people who do philosophy) call the view that all moral content comes from a God or Divine source the Divine Command Theory. It seems to be the default view in western nations and probably many others. It boils down to the following claims:

  1. Moral values are absolute and real.
  2. If you aren’t told to live by these values by an authority, you will act savagely and horribly.

Whether moral values are real (a view known, obviously enough, as “moral realism”) or not (a view called “error theory”, on the basis that it is simply an error to think moral values are real), it is the second claim that is seriously in question.

It appears to imply that we are all sociopaths at best and psychopaths at worst, and that without Divine Command and Threat of Punishment, we would all be rapists, murderers and thieves. My response is that I really do not want to be around people who, if they lost their faith for any reason, would default to such behaviours. Ordinary folk, however, will not. And atheists are ordinary folk, nearly all of the time.

So, since atheists are apes, and apes follow group norms, it appears the moral monsters here are the theists who think they’d automatically become rapists and murderers, etc., if they ceased being Christian. The real question is, “Why would a Christian need to think only theists are moral?” And the answer to that is “Because it makes being Christian (or theist) more important in their eyes.”

I am not an atheist in the philosophical sense – I’m an agnostic – and the question came out of the blue; but it highlights the real concern people have about moral questions and evolution. If we evolved, what does this mean for our moral values?

Many people believe that we can have morals only if God commands them (and enforces them). Others believe that morals are what any rational being, human or not, would choose to enact. Still others think that morals are facts about the world. It’s confusing and complicated.

Still, I think the main issue is easy to state: are moral values real, or are they constructed? If we evolved, many may think that moral values are constructed by organisms, and yet a good many thinkers believe that any rational evolved creature will hit upon the same moral values. If those aliens are coming, we might well hope they share our values, although that inference didn’t work so well in the film Mars Attacks!

It boils down to this: the world is thus and so. Does it include moral facts or not? If you say it does, whether these are facts about the natural world or about what rational agents will converge upon, then you hold moral realism. If you say it does not, and that moral values are constructed (that is, you think they are at most facts about us), then you are a moral antirealist. The term “antirealism” is basically just a label applied to those who deny realism about some issue or other, and so it can take many different forms.

Let’s start with moral realism. There is an argument, due to the ethicist Guy Kahane at Oxford, called an evolutionary debunking argument. It runs roughly like this:

  • Something evolved through natural selection
  • Natural selection tracks fitness
  • Truth is not the same as fitness
  • So if that thing evolved, it can be said to be fitter, but not true

He calls this the problem of “truthtracking”. Consider this argument:

  • We evolved our ideas about the world and God
  • If the idea of God has evolved then it is a fit idea, but not necessarily true
  • Hence we do not have a reason to think God is real because people tend to believe in God

That is, the idea of God is to some extent debunked by explaining belief in God as the outcome of evolution. Now apply this to moral values:

  • We evolved moral values and our ideas about them
  • If the idea of moral values evolved, then they are fit ideas, but not necessarily true
  • Hence we do not have a reason to think that moral values are real

Note that while an evolutionary debunking argument does not disprove the ideas that have evolved, it does tend to undercut our reasons for believing them to be true, because their success can be explained by increased fitness rather than by truth. Evolution tracks fitness, not truth as such (I’ll discuss later, in the series on what evolution means, whether that means we must question all ideas gained through evolution).


Paul Griffiths and I have called this a Milvian Bridge.[1] On October 28, 312, the contenders for the post of emperor of the Roman Empire, Constantinus and Maxentius, fought a crucial battle on the Milvian bridge over the Tiber in Rome, which Constantinus won, eventually becoming the emperor Constantine. The church chronicler Eusebius recounts the story:

Being convinced, however, that he needed some more powerful aid than his military forces could afford him, on account of the wicked and magical enchantments which were so diligently practiced by the tyrant … Accordingly he called on him with earnest prayer and supplications that he would reveal to him who he was, and stretch forth his right hand to help him in his present difficulties. And while he was thus praying with fervent entreaty, a most marvelous sign appeared to him from heaven, the account of which it might have been hard to believe had it been related by any other person. … He said that about noon, when the day was already beginning to decline, he saw with his own eyes the trophy of a cross of light in the heavens, above the sun, and bearing the inscription, Conquer by this. At this sight he himself was struck with amazement, and his whole army also, which followed him on this expedition, and witnessed the miracle.

Consequently he won, and later formally converted to Christianity, making it a legitimate religion of the empire. Now one might say that because he won, God supported him and thus caused the victory; but equally a more skeptical historian might note that most of his forces were Christian, and that they won because they thought they had a divine mandate and so fought harder than their opponents. One cannot argue from the success of the belief to the truth of the belief. The Milvian bridge will not cross over from success to truth.

The success of common moral values in human societies means only that those who hold them will tend to flourish in societies that reward those values. Does it mean that societies that hold those values are more closely approaching moral truth? Darwin tried an argument like this in his Descent of Man (1871).

No tribe could hold together if murder, robbery, treachery, &c., were common; consequently such crimes within the limits of the same tribe “are branded with everlasting infamy…” [I.93]

It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an advancement in the standard of morality and an increase in the number of well-endowed men will certainly give an immense advantage to one tribe over another. There can be no doubt that a tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to give aid to each other and to sacrifice themselves for the common good, would be victorious over most other tribes; and this would be natural selection. At all times throughout the world tribes have supplanted other tribes; and as morality is one element in their success, the standard of morality and the number of well-endowed men will thus everywhere tend to rise and increase. [I.166]

Darwin explains the origin of moral values as the outcome of the success of the groups that have “a high standard of morality”. This is a kind of natural selection, only of groups rather than of individuals. It is very similar to a view that dates at least from Aristotle, that moral values increase the “flourishing” of human societies.

However, this gives us no reason to think that moral values are real, only that they have a kind of instrumental value. Darwin’s target is a view, quite popular at the time he wrote and since, that what drives ethical decisions is individual selfishness. In the ethical philosophy known as “utilitarianism”, ethical choices should be aimed at maximising some sought good, like the avoidance of pain or the achievement of pleasure. To seek these out is a matter of personal value, not value in the world at large. Modern versions of utilitarianism, such as Peter Singer’s, however, treat the minimising of suffering and the maximising of pleasure as goods in themselves for all beings, and so I don’t want to suggest that all utilitarians are selfish. Singer, for example, extends utilitarian values to all sentient creatures (i.e., those that can feel pain), not just humans. Darwin’s nineteenth-century targets are more like modern libertarians or neo-conservatives, sometimes called (wrongly) “social Darwinians” today.

A rival view in ethics is the Kantian view, derived of course from the late eighteenth century philosopher Immanuel Kant. He held that moral truths are what any rational (or reasonable) person or moral agent would choose: to do what they would want others to do to and for them. Thus, as we do not want to be killed, so we should not kill others. Kant’s view is a philosophical version of the Sermon on the Mount: do unto others as you would have them do unto you, also called the Golden Rule.[2] On this view, these are facts about the world: rational agents will always converge on the same solutions.

In evolutionary thinking, this is discussed under the heading of “game theory”. In the middle of the twentieth century, mathematicians such as John von Neumann worked out a mathematics of social interactions. Starting with a problem called the Prisoner’s Dilemma, they assumed that rational agents are self-interested, and developed procedures for predicting what competing sides would choose to do, as in the cases where the Soviets and the Americans faced off against each other with nuclear weapons.

The Prisoner’s Dilemma works like this. Two criminals are being interrogated separately, and they cannot communicate with each other. Each is offered the same deal: rat on the other and go free. If neither rats, both will be convicted of a lesser offence and get a short sentence; if both rat, both get a heavier sentence than if they had stayed silent; and if only one rats, he goes free while his silent partner bears the full sentence. Their individual choices form what game theorists call a “payoff matrix” [3]:

|  | Prisoner A doesn’t rat (cooperates) | Prisoner A rats (defects) |
| --- | --- | --- |
| Prisoner B doesn’t rat (cooperates) | Each serves 2 years | Prisoner A goes free; Prisoner B serves 10 years |
| Prisoner B rats (defects) | Prisoner B goes free; Prisoner A serves 10 years | Each serves 3 years |

Now collectively, neither should rat, since staying silent minimises the total time spent in prison. But each prisoner prefers to get no jail time at all, and notices that, whatever the other does, he does better by ratting; and each knows that the other will reason the same way. So both rat, and both end up worse off than if they had cooperated. There is no line of reasoning available to them that doesn’t end in both getting a longer spell in jail. This simple game can be used to describe many kinds of “transactions”, from the interactions of DNA in evolution to social interactions between economic or political actors. That’s in a single case. When the game is “iterated”, or repeated, though, the dynamics are more interesting. It turns out that if most players are inclined to cooperate, a strategy that cooperates on the first “move” and thereafter simply does to its partner whatever the partner did on the previous move tends, on average, to do better than many other strategies. This strategy is called “tit-for-tat”; it was submitted by the mathematical psychologist Anatol Rapoport to a computer tournament run by the political scientist Robert Axelrod, and it won.
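The tit-for-tat result can be sketched in a few lines of Python. This is an illustrative reconstruction, not Axelrod’s actual tournament: the per-round payoffs (3 points each for mutual cooperation, 5 for exploiting a cooperator, 0 for being exploited, 1 each for mutual defection) follow the standard convention, but the three strategies and the population mixes are my own assumptions.

```python
# Per-round payoffs, standard iterated-PD convention (points to maximise):
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    # Cooperate on the first move, then mirror the partner's last move.
    return theirs[-1] if theirs else "C"

def always_defect(mine, theirs):
    return "D"

def always_cooperate(mine, theirs):
    return "C"

def match(a, b, rounds=200):
    """Play one iterated match and return each player's total score."""
    ha, hb = [], []
    sa = sb = 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        sa += pa
        sb += pb
        ha.append(ma)
        hb.append(mb)
    return sa, sb

def tournament(population):
    # Round-robin: every player meets every other player once.
    totals = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            si, sj = match(population[i][1], population[j][1])
            totals[i] += si
            totals[j] += sj
    return totals

def winner(population):
    totals = tournament(population)
    return max(zip(totals, [name for name, _ in population]))[1]

mostly_nice = [("tit-for-tat", tit_for_tat)] * 3 + [
    ("always cooperate", always_cooperate), ("always defect", always_defect)]
mostly_nasty = [("always defect", always_defect)] * 3 + [
    ("tit-for-tat", tit_for_tat), ("always cooperate", always_cooperate)]

print("mostly cooperative population:", winner(mostly_nice))   # tit-for-tat
print("mostly defecting population:", winner(mostly_nasty))    # always defect
```

Run against a mostly cooperative population, tit-for-tat comes out on top; swap the proportions, as in the next paragraph, and the unconditional defectors win — which is the whole point about how strategy success depends on the surrounding population.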

Later work, though, showed that even tit-for-tat can do very badly in a population of noncooperators. In short, if you live among Chicago gangsters, it pays to not cooperate as a default. The end result is that cooperators get eaten alive, and you end up with nothing left but selfish economists. Greed is only good if that’s the society you operate in.

So it seems that game theory won’t solve our problem. What succeeds depends on whether your group is largely Hawks or Doves, in John Maynard Smith’s terms. A militant tribe might do very well if the surrounding tribes are too nice, to return to Darwin’s scenario. One might cynically see this playing out in the modern world.
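The Hawk–Dove point can be made precise. In Maynard Smith’s model, when the cost C of losing a fight exceeds the value V of the contested resource, neither pure aggression nor pure meekness is stable: the population settles at a mix of V/C Hawks. The sketch below runs simple replicator dynamics to find that mix; the numbers V, C and the baseline fitness are illustrative assumptions of mine, not anything from the post.

```python
# Maynard Smith's Hawk-Dove game. V = value of the contested resource,
# C = cost of injury in a Hawk-vs-Hawk fight. With C > V, the predicted
# stable mix is a fraction V/C of Hawks.
V, C = 2.0, 6.0            # so the equilibrium should be 1/3 Hawks

def expected_payoffs(p):
    """Expected payoff to a Hawk and to a Dove when a fraction p are Hawks."""
    hawk = p * (V - C) / 2 + (1 - p) * V   # fight other Hawks, exploit Doves
    dove = p * 0.0 + (1 - p) * V / 2       # yield to Hawks, share with Doves
    return hawk, dove

# Discrete replicator dynamics, with a background fitness added so that all
# fitnesses stay positive: the Hawk share grows exactly when Hawks out-earn
# the population average.
baseline = 5.0
p = 0.9                    # start in a Hawk-heavy population
for _ in range(5000):
    hawk, dove = expected_payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p = p * (baseline + hawk) / (baseline + mean)

print(round(p, 3))         # settles near V/C = 0.333
```

Whatever the starting mix, the dynamics converge to the same interior balance, which is why a population of pure Hawks or pure Doves is invadable: success is always relative to what the rest of the group is doing.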

Many evolutionary writers, such as Richard Dawkins and Michael Ghiselin, made a lot of the early results of game theory in the 1970s. Ghiselin wrote,

Scratch an ‘altruist’, and watch a ‘hypocrite’ bleed.

And Dawkins argued that we are driven by “selfish genes”, by which he meant that all our altruistic moral choices are in the end based upon genetic interests: we help those who are more closely related to us genetically. This doesn’t mean that individuals are not psychologically altruistic, but that the reason they are is that cooperation helps the genes that make them psychologically inclined to help others. We are genetically selfish, but psychologically altruistic. Many evolutionary psychologists hold, to the contrary, that we are “eusocial”, meaning inclined to be more cooperative than a game-theoretic account might suggest, because we evolved in small groups of related people; now, in a larger society of less related people, the moral module in our heads misfires, so to speak. This is the faculty of “sympathy” that Darwin discussed at length in the Descent.

So we haven’t been able to cross the Milvian bridge for morality yet. In the next post, I will discuss whether selection can track a kind of moral fact.


Commenter Enon noted that Constantine did not make Christianity the official religion of Rome, but that Theodosius in 380 did. I had not read my Gibbon…


  1. Paul E Griffiths and John S. Wilkins, “When Do Evolutionary Explanations of Belief Debunk Belief?,” in Darwin in the 21st Century: Nature, Humanity, and God, ed. Phillip R. Sloan (Notre Dame, IN: Notre Dame University Press, In Press); John S. Wilkins and Paul E. Griffiths, “Evolutionary Debunking Arguments in Three Domains: Fact, Value, and Religion,” in A New Science of Religion, ed. J. Maclaurin and G. Dawes (Chicago: University of Chicago Press, 2013).
  2. Similar ideas are proposed in other philosophies, such as Jewish and Buddhist thought. Gautama Buddha (5th century BCE) said “Hurt not others in ways that you yourself would find hurtful.” [Udanavarga 5:18] The Jewish philosopher Hillel (1st century CE) wrote “That which is hateful to you, do not do to your fellow.” [Talmud, Shabbat 31a]
  3. The payoff can be any amount, so long as the individual choice is that ratting is preferable to not ratting for each possibility.
[Image: The apocalypse in an Orthodox church. Source: Wikimedia]

[Apologies this took a while; I’ve been rather sick]

So, given all this [Why believers believe silly things, why they believe the particular silly things they do, and the developmental hypothesis of belief acquisition], how can you change a believer’s mind? It is tempting to say that you cannot, or to take a more rationalist perspective and think that more argument is all that is needed, and both views are often put. But, as we might expect, the situation is a bit more complex than that.

First of all there are two distinct questions here. One is the individual question: how can we change a particular individual’s beliefs? The other is the communal question: how can we change the overall reasonableness of a given group or population? These are different questions with different answers.

The individual question has no general answer: it depends upon the individual’s belief-set, and how coherent it already is, and whether or not they are sensitive to experiential challenges (that is, if they are in a crisis). A believer who has a relatively well-cohering set of beliefs, with no real internal conflicts of note, but who is in no personal position of challenge by experience, is relatively immune from rational argument. If they face empirical challenges (their beliefs do not match with the world they are experiencing, as in the classical study of the failed millennialists by Leon Festinger and colleagues (Festinger et al. 1956)), one solution is to deny the facts, another is to reinterpret the peripheral or less weighted beliefs to save the core beliefs, and a third is to reinterpret the core beliefs so that they are not challenged by the facts. All three strategies can easily be found. For example, global warming denialists will challenge the facts. Creationists will allow some facts but reinterpret them or the ways they are handled by creationist thinkers. And my favourite case of core reinterpretation is the reaction of the Catholic church to Daltonian atomism and chemistry: change the interpretation of a core belief in substance in the doctrine of transubstantiation from a physical reality to a metaphysical reality (thereby partly conceding to their Lutheran critics of 400 years earlier).

When these things happen, believers will usually deny that they have happened (Schmalz 1994), like the historical revisionism in Nineteen Eighty-Four, where the state goes to war with a new enemy and then tells its pliable population “we have always been at war with Eastasia”. These three strategies are increasingly schizoid. Reinterpreting the core beliefs to accommodate new facts is a healthy response to the world, leaving only questions of group identity marking (we do not agree with those Lutherans; they are heretics!). The Church has accepted (belatedly) the scientific virtue of Galileo, Dalton and Darwin.

The revision of peripheral beliefs is more strained. When [honest] creationists spend time trying to accommodate the facts of biogeography, biodiversity, genetics and dating techniques, they may find their “hypothesis” dying what Flew called “the death of a thousand qualifications”, but so too do defenders of outmoded hypotheses in science, and there is no threshold at which it becomes irrational to hold those beliefs. Nevertheless, as with pornography, we can recognise irrationality when we see it. The rationalist approach to argument, however, behaves as if there is, or ought to be, a line that one should not cross. This leads to interminable “debates” of claim and counterclaim, which rarely result in any resolution.

The third approach is to simply deny the facts. This can be achieved by adjusting the perceived reliability of those we disagree with (ad hominem attacks, for instance, on the probity of climate scientists). Both believers in pseudoscience (like Bigfoot or homeopathy) and in anti-science (such as creationism or anti-vaccination) find methods of calling the facts themselves into question.

Now as the response becomes less grounded in the empirical, reasoning becomes much more difficult, until you reach a stage where no reasoned argument is possible. But this is determined by the strategies adopted by the believer, not by the subject or belief they hold. Homeopaths can be argued out of homeopathy, and Catholics can still hold stubbornly onto the view that the Host really is blood and flesh, and that chemists are just anti-Catholics. So it depends upon the individual. If the core beliefs are cognitively entrenched, then they are less likely to undergo any kind of rational or empirical revision. [As a side note, one often anecdotally hears of a believer in homeopathy or some other “complementary medicine” who abruptly adopts empirical medicine when it is their child or loved one who is suffering. This is a very personal crisis. However, it can also drive the believer deeper into the silly belief, as Festinger noted.]

At a group level, however, things are even more complicated. Here what counts includes the institutional structure of the belief-group. The plasticity of the group itself will help determine whether the group adapts or digs in further: the more authority-driven the group, and the more exclusionary it is to those who deviate even slightly from the approved belief-set, the less it will change. And another issue is group size. The Catholic Church, for example, while supposedly hierarchical (indeed, the very term hierarchy was taken from its military-style structure of command and constraint; it means “rule of priests”), has been very fluid in its interpretation of its core beliefs. In large part this is because the Church is not small and there are many de facto command structures apart from the clerical. The Jesuits, for instance, played a great role in adopting, refining and making viable scientific acceptance within the Church, even as others were pushing for a return to older, conservative, beliefs. Christian, Jewish and Islamic doctrine has been in various ways able to adapt to new science and new social conditions (as Harnack showed in great detail in his classic History of Dogma in the late nineteenth century).

But some generalisations can be made. One is that the more a belief-group is reliant upon authority figures to tell believers what they should believe, the less fluid the tradition. This is, as I argued in the paper on rational creationism [mentioned in the last post], due to a kind of doxastic [that is, belief] division of labour. Most of us have little time to test and become familiar with the technical ideas of science, for instance, and so we rely upon authorities. But which authorities we select to rely upon depends a lot upon what belief-group we are in. We choose to believe our authorities over theirs. As I argued, this is because, evolutionarily speaking, they aren’t dead yet. Having their beliefs may have a cost, but that is offset by the benefit of savings in time, effort and resources from taking ready-made ideas off the shelf. We have a disposition to adopt the views of those we grow up around, because it is economical to do so, and adopting those views won’t likely kill us. Only when we reach a crisis state do we challenge those authorities, and even then we will tend to do so piecemeal until we reach a (personal) threshold of incredulity.

Another depends upon the degree of engagement we have with the wider society in which our belief-group is located. Even the Plymouth Brethren must deal with teachers, the media, and popular culture that is right there on the shelf in the bookshop. Messages that conflict with our belief-set can reach another (personal) threshold that we find challenges our core beliefs. When that happens, we may find a crisis that causes a rapid conversion (or de-conversion) in core beliefs.

This is why one of the major areas of battle between belief-groups lies in the control and amelioration of these challenges in education. If you can introduce some doubt about the strength of, say, evolutionary biology among younger children, it is rational (in a bounded sense) for them to stick with the core beliefs of their belief-group. Only if evolutionary biology (or whichever other topic is at issue) is presented firmly and without competing beliefs in educational contexts will it begin to undermine the authority structure of the student’s belief-group. As I argued in the creationism paper, sufficient challenges will tend to sway the average developmental trajectory of a believer away from the hard-core or exclusive belief-set of the belief-group. The population as a whole becomes more accommodationist.

This leads to my final point: herd immunity. In vaccination, when a sufficiently high proportion of the population has been immunised, the likelihood of infection among the unvaccinated (the very young, for instance) becomes very slight. Beliefs behave like pathogens in this respect (a metaphor that has been widely abused, in my view). Since we take our belief cues from experienced social norms, unreasonable beliefs tend to founder when those norms are reasonable, and this sets up a selection pressure in the evolution of beliefs: beliefs must not be too weird, or they isolate the believer too greatly from the social context in which they live. Sufficient education in reasonable beliefs forces many silly beliefs, or at any rate those with real-world consequences, to become less silly.
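The vaccination analogy rests on a simple threshold. In the textbook epidemiological model, a case in a population where a fraction s is still susceptible produces R0 × s new cases on average, so outbreaks fade once the immune fraction exceeds 1 − 1/R0. A minimal sketch, with rough textbook R0 figures of my own choosing:

```python
# Herd immunity threshold: each case infects R0 * s others, where s is the
# susceptible fraction. Transmission falls below replacement (R0 * s < 1)
# once the immune fraction exceeds 1 - 1/R0.
def herd_immunity_threshold(r0):
    return 1.0 - 1.0 / r0

# Approximate basic reproduction numbers (rough textbook values):
for disease, r0 in [("influenza", 1.5), ("smallpox", 5.0), ("measles", 15.0)]:
    print(f"{disease} (R0 ~ {r0}): immunise more than "
          f"{herd_immunity_threshold(r0):.0%} of the population")
```

The parallel being drawn is that when enough of one’s social contacts hold reasonable beliefs, an unreasonable belief cannot find enough “susceptible” hosts to sustain itself.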

Anyone who understands population genetics will realise that this does not mean that the entire population will become reasonable as such. In genetics and in epidemiology, the frequencies of beneficial and deleterious variants will settle at a tradeoff point; in evolutionary game theory the mix of strategies that prevails there is called an evolutionarily stable strategy, and the economists’ analogous notion is a Nash equilibrium. To increase the frequency of either variant would lower that variant’s fitness, and so the two variants will remain in a set balance until external conditions change. It is for this reason, for example, that I do not think religion will “disappear”, as many rationalists think it will. There are group benefits to religion, and even in the most secular society, religion as an institution will persist until the costs of being religious exceed those benefits.

So in order to ameliorate the supposed evils of religion (or conservatism, pseudoscience, radicalism, etc.), the best strategy that those whose ideas are empirically based can take is, in my view, to resist attempts to dilute science and other forms of education. This sets up a selection pressure against extremist views. Similar approaches might be taken in what Americans call “civics” classes to deal with political extremisms, and so on.

To conclude, I should make the following point: I am not suggesting that I alone am ideologically pure and coherent in my beliefs. Anything I say in general must apply to me also (this is why one of the objections to Marxism is that somehow Marx exempts himself from false consciousness). So I assume that I, too, will have conflicting belief subnetworks, and so one of the reasons why I put these thoughts out here is to get the same kind of correction from the wider community that I expect those I have used as examples here require. I am a radical (increasingly as I age), conservationist, small-l liberal of the Millian variety, agnostic and very, very, pro-science. I expect I have more than a few of my own shortcomings. As a friend once said of me, I am like a hunchback who cannot see his own hump, but sees everyone else’s. I expect this. But I think this analysis is roughly in the right region.


Festinger, Leon, Henry W. Riecken, and Stanley Schachter. 1956. When prophecy fails. Minneapolis, MN, US: University of Minnesota Press.

Schmalz, Mathew N. 1994. “When Festinger fails: Prophecy and the Watch Tower.” Religion 24 (4):293-308.

If, as I argued in the last post, believers believe silly things in order to make the community cohere in the face of competing loyalties of the wider community, why is it that they believe the things they believe?

For example, you will often see Jews attempt to argue that kashrut (the kosher dietary rules) makes sense in arid environments where trichinosis was rife[1], and so on, but what is the reason why you can’t mix fabrics, or get tattoos? The reason appears to be that these marked the Jews out from their competing cultures. An approach taken by recent Cognitive Science of Religion (CSR) scholars adopts the “costly signalling hypothesis” formulated in evolutionary biology by Amotz Zahavi and applies it to the cultural evolution of these kinds of displays. Zahavi’s hypothesis supposes that an organism signalling its toxicity to predators, or its genetic health to potential mates, could easily fake those signals. Evolution, however, is a hard mistress, and will weed out easy-to-fake signals over the long term, as any variant predator or mate that tricks on a way to detect fakes will spread rapidly through the population, causing an arms race. So in the long term, signals of whatever property is being signalled will have to become hard to fake. Zahavi suggested that behaviours like stotting (the conspicuous stiff-legged leaping of gazelles when a predator appears) must honestly signal the fitness of the organism.

So there are several properties of a costly signal. One is that it costs more to fake the signal than to simply have the property being signalled. Another is that it must correlate reliably with that property. Another is that it must be arbitrary: it should not be a trait or behaviour that is independently selectively advantageous, or many different varieties of organisms will trick upon it, and it will no longer correlate with the property. So an honest, costly signal is an arbitrary signal.

CSR researcher Richard Sosis proposed that many of the doctrines and institutions of religions are such costly signals. Kashrut is arbitrary, because it has one function: to mark out, uniquely and honestly, Jews from their (genetically related) neighbours. This is not biological evolution, but cultural evolution – what evolves are institutions, rituals and behaviours. They function as what I call “tribal markers”. They include accents, languages, dress, diet, and a host of other things. Consider the ban on pork by Muslims and Jews: here is an easy-to-raise food resource that is foregone in order to identify themselves. It is hard to fake if food is not plentiful. Circumcision and scarification among various groups are another kind of costly signal. People can die from these rituals. That is the ultimate genetic cost.

So the reason why (or, if you prefer a pluralist approach, a major reason why) religions have these silly beliefs is that they serve to signal identity honestly. But this doesn’t explain why they have these particular silly beliefs. And extending the argument to all kinds of belief-systems, it fails to explain why belief-groups settle on the particular beliefs they do as their tribal markers of identity.

One suggestion is that these are simply contingently adopted. For example, the use of some “shibboleth” like abortion or the use of tattoos or tassels may be a simple matter of an idea being proposed at the right time and taking off, as a fashion, so long as it involves all the right costs. There may be no other reason for it. “Shibboleth”, by the way, is an example from the Tanakh:

And the Gileadites took the fords of the Jordan against the E’phraimites. And when any of the fugitives of E’phraim said, “Let me go over,” the men of Gilead said to him, “Are you an E’phraimite?” When he said, “No,” they said to him, “Then say Shibboleth,” and he said, “Sibboleth,” for he could not pronounce it right; then they seized him and slew him at the fords of the Jordan. And there fell at that time forty-two thousand of the E’phraimites. [Judges (Shoftim) 12: 5–6]

The word “shibboleth” means the seed- or grain-bearing part of a plant. The specific meaning is irrelevant here; its adoption was due to accent differences between the E’phraimites and the Gileadites, and it has all the costly signalling characteristics: it is arbitrary, and hard to fake (as every American actor finds out when called on to do a foreign accent). Ever since, a shibboleth has been a costly signal.

But there are other reasons why a tribal marker might be the thing it is. For example, it may be that the marker arose at a time of conflict between groups. Denial of global warming arose as an in-group identity marker when those raising the issue were seen to be challenging some core values of conservatives and of those who benefited from the coal and oil industries (for example, employees of those industries and their friends and families). It was not arbitrary in that dispute, although the signal might have been something else. Once entrenched, the signal becomes a “frozen accident”: it is now embedded in a developmental sequence of belief acquisition, and to remove it would seriously disturb the development of “right thought and action”, as the Buddhist tradition calls it in the Eightfold Path.

A third reason might be cynical intervention by rulers and thought leaders. For example, few reasonable conservatives (and I would like to say here that I know many such beasts) have good reasons for thinking that global warming is a hoax, least of all those whose personal interests are maintained by the offending industries. Yet many believe it is. On the part of the majority of these people this may simply be the division of labour at work: the authorities say it is a hoax, and I don’t have the time to investigate the matter myself. So why do these authorities say so? Possibly they don’t believe it themselves, but it suits their social and economic interests to act as if they do. This is a very old strategy; the cynical manipulation of followers is recommended in both Aristotle and Machiavelli.

Also he [the tyrant] should appear to be particularly earnest in the service of the Gods; for if men think that a ruler is religious and has a reverence for the Gods, they are less afraid of suffering injustice at his hands, and they are less disposed to conspire against him, because they believe him to have the very Gods fighting on his side.  [Aristotle, Politics. Bk 5, ch XI]

Therefore it is unnecessary for a prince to have all the good qualities I have enumerated, but it is very necessary to appear to have them. And I shall dare to say this also, that to have them and always to observe them is injurious, and that to appear to have them is useful; to appear merciful, faithful, humane, religious, upright, and to be so, but with a mind so framed that should you require not to be so, you may be able and know how to change to the opposite. [The Prince, chapter 18]

Once a signal has been proposed, cynically or otherwise, it will spread to the extent that it acts as a useful marker; that is, just so far as it honestly identifies a member of the in-group. Only rarely (in my opinion) will the marker or signal be something that bears directly upon the core beliefs of the belief-group. For example, modern western conservatism has as one of its stated values the freedom of the individual from government intervention, yet many of its signals, such as abortion or marriage, involve direct government intervention in people’s private lives. Justifications are given that are post hoc and ad hoc. Likewise, commitment to free market economics is set aside when special interests benefit, through subsidies, interventions, or tax exemptions for failing industries. Likewise, social progressives often adopt economic policies that serve the interests of industry rather than their putative constituency, working people.

So costly signals for in-group identity are often contrary to the beliefs the group holds most dear. Abortion, for example, was not a core issue for evangelicals until they made common cause with Catholics in the early 1970s (see Frank Schaeffer’s Crazy for God for an account of this). But once it took root, rational debate became impossible. And this is because it is not about the idea, but about the community. As Sosis noted about religion, it is not about God, but about us.

1. But this fails to explain why the neighbouring tribes did eat pork, since they lived in the same environment.