In the last two posts I have discussed why members of belief-groups have silly beliefs (that is, beliefs that the wider population finds silly), and why those particular beliefs, whatever they are, are the ones they believe. In broad terms, the answer is that these are arbitrary, costly, hard-to-fake signals of group membership that tend to be historically contingent “frozen accidents”. In those posts I mentioned, and appealed to, something I call the “developmental hypothesis of belief acquisition”, or DHBA for short. In this post I want to outline that view. It is not something I have directly lifted from others, so any flaws in it are entirely my own.
We are developmental organisms, which means that we change our morphology and behaviour as we mature, as part of the typical life cycle of our species. Not every organism has a developmental cycle, although every species has a life cycle: single-celled organisms often simply divide into progeny cells that are in all relevant respects identical to the “parent” cell. However, contrary to common belief, such cycle-free reproduction is not the norm. Many single-celled organisms reproduce through a cycle of distinct cell forms. For example, many bacteria pass through distinct stages between reproduction events, in which they merge genetic material in a form of sex. So the normal state for living things is to have a developmental sequence.
Development is maintained by many things, but the most obvious, primus inter pares as it were, is genetic control. Genes modulate when and how these steps are taken. In organisms of our kind (multicellular eukaryotes), this is a very complex process, but the sequence is generally obvious. We go through fertilisation, division, invagination, birth, maturation and enculturation in a relatively stable and predictable fashion. It is not a great leap to see belief acquisition as part of that sequence.
Nobody is born knowing very much, if anything. When an organism can live in many variable environments, both within a single lifetime and across the range of its species, it makes more evolutionary sense for it to acquire beliefs from local cues, such as (in humans) culture and practice, as well as from personal experience. So instead of being born with a set of “Pleistocene” beliefs (as the sociobiologists mark 2 would have it), we are born with dispositions to acquire beliefs. These are sometimes called “fast and frugal heuristics”: ways that allow us to get just enough of a belief-set to make it likely we survive to mate.
Now beliefs are slippery things. I think of them as “cognitive stances” (Olson), in which our cognition leads us to adopt attitudes to certain inputs in ways that lead to action when necessary. For present purposes, though, we can think of them as sentences. A belief is a sentence that we are inclined to assert the truth of; that is, to take as a reason for action.
We typically discuss belief-sets as static entities, as logically or rationally connected lists of things we believe, either at the individual level or the group level (“Christians believe that…”). But this is misleading. Beliefs are dynamic entities. They grow or shrink, connect to various other beliefs in different ways, and form networks as we mature. A Christian friend once thought that the Bible, as the Word of God, was a timeless piece of writing; he still thinks it the Word of God, but now has a more nuanced historical view, on which it underwent many redactions over time. This is the dynamism of belief-sets: they are the outcome of constant revision and acquisition, and of a shuffling of their relative weights and connections.
W. V. O. Quine once wrote a book (with his student J. S. Ullian) entitled The Web of Belief, in which he argued that we do not have foundational beliefs, but rather a web, or as I prefer, a network of beliefs that give mutual support to each other. As a result, he argued, we can rationally revise all of our beliefs. Objections followed, employing the cutely named hairy ball theorem of mathematics: if you comb a fuzzy ball flat, there must be at least one point at which the combing fails, a cowlick from which the adjacent hairs radiate outwards. On the analogy of hair direction to rational revision, the objectors argued, Quine’s web must likewise have at least one fixed point: a single rational foundation. However, if revision is done on the basis of the current weighting of beliefs, and these are dynamic, as I have suggested, then the process of revision can go on indefinitely.
Both Quine and his critics treated beliefs as static, though connected. If we instead think of them as the current state of our beliefs, a time slice through our belief-set right now, then we begin to see that we may, at some future stage, rationally revise the current “foundational” belief. And this is exactly what happens as we develop our belief-set. What we took to be a coherent set of beliefs at, say, age ten need no longer be so now, as we test our beliefs, including our beliefs about the real world, against our experience, utilising the in-born heuristic disposition “trust your experience”. And so we have a more plastic notion of beliefs than Quine et al. had. But there are constraints.
As we adopt beliefs, they become entrenched in the dynamic set, and so an early belief will tend to be implicated in giving support to more and more beliefs as we age. Call these cognitively entrenched beliefs. An entrenched belief acts as a modifier of all kinds of later beliefs, so to revise an entrenched belief is to force the revision, potentially, of many others, and the earlier the belief was acquired, the more “damage” its revision does to the belief-set. Consequently, the likelihood that a belief will be revised over time falls as the number of subsequent beliefs it adds weight to rises. It is not that we cannot revise it, only that there is a cognitive cost to doing so. I’ll get back to costs in a bit.
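A toy model may make entrenchment concrete. Here is a minimal sketch (in Python, with belief names and a support structure I have invented purely for illustration) that treats the belief-set as a directed graph of support, and measures the cost of revising a belief by counting how many later beliefs transitively rest on it:

```python
# Toy model: a belief-set as a directed support graph.
# Each belief lists the later beliefs it lends weight to.
# (All belief names here are invented for illustration.)

supports = {
    "authorities are trustworthy": ["scripture is true", "teachers are right"],
    "scripture is true": ["creation story", "moral rules"],
    "teachers are right": ["school science"],
    "creation story": [],
    "moral rules": [],
    "school science": [],
}

def revision_cost(belief, graph):
    """Count the beliefs that transitively rest on this one:
    revising it puts all of them up for revision too."""
    seen = set()
    stack = list(graph.get(belief, []))
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(graph.get(b, []))
    return len(seen)

for belief in supports:
    print(f"{belief!r}: revision cost {revision_cost(belief, supports)}")
```

The earliest-acquired belief scores the highest cost because it has the most transitive dependants; on this picture, entrenchment just is having many dependants, which early acquisition tends to produce.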
So what we have is a probabilistic “cone of possible belief-sets” that we might expect to achieve in the future, and that cone narrows the choices the older we get. Here is a diagram I used in my essay on creationism. The arrows represent pro-science or folk-belief influences as the person matures:
Each of the dark lines represents an individual developmental trajectory. Since greater influences are needed to move a trajectory from one place in the space to another some distance away, entrenched views become less and less likely to shift in a mature person, and the earlier a belief was acquired, the truer this is.
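The narrowing cone can be given a minimal stochastic sketch: each year’s influences nudge a person’s position in a one-dimensional “belief space”, but the nudges are damped by a plasticity factor that shrinks with age. (The decay rate and influence scale below are arbitrary illustrative parameters, not estimates of anything.)

```python
import random

def trajectory(years=40, influence=1.0, decay=0.9, seed=0):
    """One developmental trajectory through a 1-D 'belief space'.
    Yearly influences (pro-science or folk-belief pushes) arrive at
    full strength, but their effect is damped by a plasticity factor
    that shrinks as entrenchment grows."""
    rng = random.Random(seed)
    position, plasticity = 0.0, 1.0
    path = [position]
    for _ in range(years):
        nudge = rng.uniform(-influence, influence)
        position += plasticity * nudge
        plasticity *= decay  # older = more entrenched = less movable
        path.append(position)
    return path

for seed in range(3):
    p = trajectory(seed=seed)
    print(f"moved {abs(p[10] - p[0]):.2f} by age 10, "
          f"then only {abs(p[40] - p[10]):.2f} in the next 30 years")
```

Early nudges move the trajectory a long way; later ones barely register, which is exactly the narrowing of the cone in the diagram.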
So, getting back to “costs”, we can think of the belief-set as an investment of time, energy and resources in belief acquisition. The more time and resources you have expended acquiring your beliefs, the less inclined you are to abandon them (a kind of cognitive sunk cost). So if there is a cost to getting a belief, the probability that you will abandon it is inversely related to that cost. As an example, if someone has learned all the Linnaean names for plants, any proposal to abandon that scheme in favour of another will be resisted in proportion to the effort it took to learn the old one, especially if the beliefs are not themselves really a matter of practical outcome.
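The inverse relation can be put in pseudo-quantitative terms, with the caveat that the 1/(1 + cost) functional form below is merely one convenient choice; nothing in the argument fixes it:

```python
def p_abandon(acquisition_cost):
    """Probability of abandoning a belief, inversely related to the
    time, energy and resources sunk into acquiring it. The 1/(1+c)
    form is an arbitrary illustrative choice."""
    return 1.0 / (1.0 + acquisition_cost)

print(p_abandon(0))    # 1.0  -- a costless belief is readily dropped
print(p_abandon(20))   # ~0.05 -- years of Linnaean names are clung to
```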
So what drives changes of belief, given that we often see people undergoing “conversions” from one belief-set to another? To understand this, I think we need to consider the quality of the beliefs we acquire early on. By definition, a novice in some field is uncritical of what they are being taught. Five-year-olds will accept whatever any suitably authoritative source (like parents or peers) teaches them. Consequently, assuming (as I think we can) that one acquires beliefs from a disparate range of sources, not all of which are consistent with each other, we will tend to have a complex of as-yet-unramified belief-sets which will likely include ideas that are mutually unsupportive or even contradictory. These are maintained by compartmentalisation, in which the contradictory beliefs are never brought into conflict in the life of the person.
But people do live their lives, and occasionally they enter a cognitive crisis, in which they must decide which of two contradictory beliefs to act upon. These can be social crises, experiential crises, moral crises, and so on. When they occur, a process of “belief warfare” begins, and people find themselves evaluating and dropping various beliefs. This can happen quite rapidly, and is the foundation for what some refer to as “conversion” experiences. These conversions can be partial (affecting only some beliefs) or global (affecting all or most of them).
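The “belief warfare” step can be sketched too: when a crisis forces two compartmentalised beliefs into contact, the one with less supporting weight is dropped, and anything that leaned on it comes up for review in turn. Everything below (the beliefs, weights and support links) is invented for illustration:

```python
# Sketch of conflict-driven revision ("belief warfare").
# weights: how much support each belief currently enjoys.
# supports: which beliefs each one props up. All values invented.

weights = {"faith healing works": 5.0, "medicine works": 8.0, "skip the doctor": 3.0}
supports = {"faith healing works": ["skip the doctor"], "medicine works": [], "skip the doctor": []}

def resolve_conflict(a, b, weights, supports):
    """Drop the weaker of two contradictory beliefs, then cascade:
    beliefs it supported lose that support and may fall in turn."""
    to_drop = [a if weights[a] < weights[b] else b]
    while to_drop:
        gone = to_drop.pop()
        weights.pop(gone, None)
        for dependant in supports.pop(gone, []):
            if dependant in weights:
                weights[dependant] -= 1.0        # lost a supporter
                if weights[dependant] <= 0:
                    to_drop.append(dependant)    # cascade onwards
    return weights

print(resolve_conflict("faith healing works", "medicine works", weights, supports))
```

A cascade that removes only a few beliefs is a partial conversion; one that sweeps through most of the set is a global one.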
When conversions occur, the believer is typically left with gaps in their belief-set. For example, losing their religion might leave them bereft of moral rules. They must then, to the degree that these gaps are urgent, find replacement beliefs. And these are typically on offer. If you abandon Christianity in favour of, say, skeptical views, there are many ethical systems out there you can select from, ready-made, as it were. There is an additional cognitive load to acquiring them, but often it is enough to get the basics and learn more as needed. The ethical system chosen, for instance, might be the scheme that most closely matches the ethical values just abandoned (a western bourgeois Christian might adopt bourgeois freethinker ethics, for example).
This also plays into the costly signalling claim: you may choose a system that marks you out as a member of some new group, in order to have support and community with that group.
So if we think of belief acquisition as a developmental process, as the DHBA does, we can understand why costly-signal beliefs are so critical: they offer a way to do a lot of cognitive acquisition easily, while at the same time signalling one’s communal identity in an honest manner, especially if the new beliefs undercut one’s engagement with rival belief-sets.