Debates over reduction in science are as old as the philosophy of science, but in the 1960s Ernest Nagel’s book The Structure of Science really set things going. Nagel argued that a goal of science was to reduce one theory to a more general and explanatory theory, so that one can deduce the laws of the reduced theory from the laws of the reducing theory, given enough time and computational capacity (Rosenberg 2008). The example he used, and which has been the standard one since, is chemistry and physics: ideally, all chemical entities and processes can be reduced to more general descriptions in physics. This was extended to biology (Brigandt and Love 2008), psychology, and cognition, in what Ken Waters (2008) has called “layer cake reductionism”, in contrast to “layer cake antireductionism”, a form of holism that insists that every layer of the ontic cake is somehow self-subsistent or emergent. The issue is particularly sharp when discussing genetic relations to the phenotypic traits of organisms: gene reductionists often seem to, or sometimes actually do, assert that the properties of the organism as a whole are just the properties of its genes.
The idea of a hierarchy of ontological layers can be diagrammed like this:
The idea is twofold: for reductionists, each theory or law of one layer can – in principle – be redescribed in and/or predicted from the laws and theories of the next deepest layer. For antireductionists – variously called holists or emergentists – each layer has qualities, properties or laws that cannot be so reduced. Biologists like Ernst Mayr, for example, who thought that biology was a science independent of physics, held that phenomena at higher levels are sui generis and cannot be reduced. This is often thought to be the justification for biology being a science in its own right at all (it isn’t; what counts as a science has more to do with the shared histories of the discipline and its special methodologies than anything really deep).
Layer cake reductionism gets made fun of, as in this cartoon by xkcd:
… and sociologists will assert that mathematics is just applied sociology. However, there’s a mistake of reference in this. It suggests that physics is mathematics, but the term “physics” here equivocates between the subject of physics (i.e., the physical world) and the method and knowledge of physics (i.e., our theories of physics). The physical world remains whatever it is even if we don’t have the epistemic tools to find out about it or to predict higher-level properties from physics. In short, limitations in ourselves as cognitive systems should not be projected onto the world we cognise. If we describe the physical in mathematics, that does not mean, pace Wheeler, that the world is mathematics (a view that goes back to Pythagoras’ followers, if not the man himself). It means that we represent and understand the world in mathematical terms. I therefore reject what is called Mathematical Platonism. Typically we distinguish between the ontological, methodological and epistemic forms of reductionism (though I think the methodological is a species of the epistemic; Brigandt and Love 2008). These are, respectively, the reduction of types of things, of ways of investigating things, and of relations between domains of knowledge. I’m only really interested in the ontological, but I’ll get back to the epistemic and methodological later.
Ontological reductionism is no longer a widely held view in philosophy, for reasons I find hard to comprehend. It seems to be tied up with two issues: functionalism in philosophy of mind, and functionalism in biology. In philosophy of mind, one of the starting points from which arguments spring is that there are facts about mind, such as “seeing red” or “feeling what it is to be in pain”, that just are not reducible to physics (a long debate about the “identity theory” – the claim that the mind is identical to brain states – argues that this is not the case; the best-known recent moves in that debate are Searle’s Chinese Room and Jackson’s Mary Problem). But while the identity theory debate rightly concluded that we cannot identify pain states with neural states, the reason is not that there is something ontologically privileged about the mental realm, but rather that we count something as pain when it satisfies, among other things, social criteria. In short, pain is the neurological and the social. This extends, in my view, to all psychological and semantic properties. What counts as seeing red is something that obtains when public stimulus-response criteria are met, and these are constructed from the overall aggregate behaviours of the society or language group. Put simply, red is what the society says is red, and what the society says is red is what the biology of the members of the society typically responds to. The reason we can’t reduce red-seeing to physical properties directly is that we have an ontological dangler in the social categories used. But if they, too, are physical objects, then everything reduces, directly, to physical descriptions. The Mind–Brain Identity Theory was too restricted in its scope, that’s all.
If Layer Cake Reductionism is faulty, as it surely is, given that we would need to include things at both higher and lower levels to reduce any class of properties at an intermediate level to a lower level in this way, what alternative is there? I want to suggest something I haven’t seen proposed (but which, given my ignorance of the field, almost certainly has been, and much better than I will do here): Pizza Reductionism. On this account, these “higher level phenomena” are not arrayed in a strict hierarchy. They are artefacts of our cognitive dispositions to recognise and describe them, and they are all physical objects. Words and meanings, states of mind, biological properties, and chemical structures are all physics, and should be directly reducible had we but world enough and time. A diagram:
Each “level” is in reality a heterogeneous class of contingently grouped observer-relative properties, and each is just physical. If there is a principled way to array these properties, it might be in terms of scale relative to observers and the practices of observers. We might need to use neurobiology, biochemistry and sociology in a reduction, or we might find it agreeable and convenient to first reduce one “type” of phenomenon to another not-yet-reduced-to-physics phenomenal description, but ontologically no such description is self-standing.
Pizza reductionism has some interesting implications. Consider the notion of supervenience. A supervenient property is one that two physically different systems can share, but that two physically identical systems must share. It might be that you and a robot see red in physically different ways (the robot using CCD receptors), but you and I, being identical (for certain values of “identical”) in the relevant sense, see red in the same physical manner, with the right sort of L-photopsin-containing cone cells in our retinas, etc. That makes “seeing red” a supervenient property. But why do we say that the robot sees red in the first place? Because there is a physical property underlying it (light with wavelengths in the region of 600 nm) that we typically call “red” and that normal visual systems identify. A robot might in fact see evenly across the spectrum, unlike primates, and so not identify red as a distinct class, but it would still pick up wavelengths around 600 nm.
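Put semi-formally (this is just the standard textbook formulation of supervenience, nothing peculiar to pizza reductionism), a property F supervenes on a physical base P just when no two systems can differ in F without differing in P:

```latex
% F supervenes on P: P-indiscernible systems are F-indiscernible
\forall x \,\forall y \,\bigl( P(x) = P(y) \rightarrow F(x) = F(y) \bigr)
```

The conditional requires physically identical systems to agree in F, while saying nothing about physically different systems – which is why both you and the CCD robot can count as seeing red.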
So the physical property is the key. It is what exists independently of the propensities and predilections of the observer systems. How we carve that up at scales above the microphysical is conventional. But the phenomena themselves are clearly real: observers really do see reds, feel pain and use descriptors for classes of physical states. It’s just that these are not the final story, the explanans. And here we return to the methodological and epistemic versions of reduction. Alex Rosenberg (one of the last “club-footed” reductionists* still about, and with whom I agree) has argued (1994) that the problem with reductionism is simply computational: we have neither the time nor the computational capacity to work out the properties we see from first principles.
Consider how this affects a popular notion: emergence. Emergentism was developed in the 1920s and 1930s to deal with the concept of evolutionary novelty (see my series on that topic here). It has since become a bit of a panacea for everything from physics to theology. Complex systems are emergent, no doubt, but what does that mean? Many emergentists treat emergence as an ontological thesis – something new and irreducible has emerged. I, on the other hand, think it means that we are surprised by something we didn’t expect. Emergence is in effect a measure of our surprisal (a term from information theory). Let’s put this into pseudo-formal English:
An emergent property E is, for an observer O, a property that is unexpected to degree U given a knowledge of the underlying properties P and the underlying laws L, and the cognitive limitations C of O.
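To make the surprisal gloss concrete (a sketch only: modelling O’s expectations as a probability distribution is my assumption, imported from information theory, not part of the argument above), let Pr(E | P, L, C) be the probability O would assign to E obtaining, given its knowledge of P and L and subject to its limitations C. The degree of unexpectedness U is then just the information-theoretic surprisal:

```latex
% Surprisal of E for observer O, in bits
U(E) = -\log_2 \Pr(E \mid P, L, C)
```

On this reading, a phenomenon O fully expected (probability 1) has zero surprisal and so is not emergent for O at all; the less likely O judged it to be, the more emergent it is for O.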
The limitations C include the storage, computational speed, time available, degree of interest, and other relative properties of the observer O. Unless you are God or Laplace’s Demon, you will have these limitations. But let us consider what makes E E-ish. We refer to emergent properties as “properties” and thereby hide all manner of complexity. A property is often stated in philosophical analyses as if it were something to be explained, and that is how I mean it here, but little attention is given in these instances to how we delineate it as a property to begin with. Let’s say E is a phenomenon instead, and call the descriptors that we use to identify it as a phenomenon its properties (which is very much in line with Aristotle’s original meaning of propria: they are predicates, or descriptors, and thus semantic objects, not facts of undescribed reality). The question then becomes why we use those descriptors. Would God use them? After all, any observer with unlimited time, computational power, detail of observation, and knowledge of the laws might not even be inclined to see these things as phenomena, a point often made against Laplace’s Demon. Perhaps, though, this is because the phenomenality of these phenomena is due in part to the constrained nature of the observers.
Any observer system O must abstract and isolate what it issues as a description from what is an indefinitely, if not infinitely, large set of possible observants. Humans and other classifier systems (animals, neural net robots, librarians) must pick out some feature classes as salient and significant. They have to, or they could not process anything regularly. So what counts as a phenomenon to O must be a two-place relation between the available observants and the features that O finds salient and significant. In short, while the phenomenon is real (because none of the observable features that make it a phenomenon are unreal), it is also observer-relative. Laplace’s Demon might not see any such phenomenon.
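Here is a toy illustration of that two-place relation (entirely hypothetical: the feature names, thresholds and observer functions below stand in for whatever real perceptual systems actually do). Two observers, given the very same observants, carve out different phenomena because different features are salient to them:

```python
# Toy sketch: the same observants yield different "phenomena" for observers
# with different salience functions. All features and thresholds are
# illustrative, not empirical claims.

observants = [
    {"wavelength_nm": 605, "intensity": 0.9},  # bright, ~600 nm ("red" to us)
    {"wavelength_nm": 612, "intensity": 0.2},  # dim, still ~600 nm
    {"wavelength_nm": 470, "intensity": 0.9},  # bright blue
]

def human_like(o):
    # Wavelength band is salient: around 600 nm gets classed as "red".
    return "red" if 590 <= o["wavelength_nm"] <= 700 else "not-red"

def flat_spectrum_robot(o):
    # A flat-spectrum observer might find intensity salient instead,
    # carving the very same observants into bright/dim "phenomena".
    return "bright" if o["intensity"] > 0.5 else "dim"

for o in observants:
    print(o, "->", human_like(o), "/", flat_spectrum_robot(o))
```

Neither partition is unreal – both are built from measurable physical features – but neither is forced on the observer by the observants alone.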
So biology might not be a separate layer for a deity or demon. That might mean biology has no laws, or, if it does, that those laws will be summaries, aggregates or placeholders for laws at the physical “level”. The so-called emergent properties and entities of biology, psychology and the rest of the supra-physical domains are just phenomena that we, as observers, find important. Ontological emergence evaporates, leaving only methodological and epistemological emergence, and that is a measure of our surprise. In short, it tells us at least as much about ourselves as it does about the phenomena we register surprise at.
[Update: Reader Ant below asks for a definition of an emergent phenomenon.
An emergent phenomenon EP is a set of properties, salient to an observer O, selected from the totality of properties T of the universe of discourse, that is unexpected to degree U given a knowledge of the underlying properties P and the underlying laws L, and the cognitive limitations C of O.
For a physicalist, this would read
An emergent phenomenon EP is a set of properties, salient to an observer O, selected from the totality of properties T of the universe of discourse, that is unexpected to degree U given a knowledge of the underlying physical properties P and the underlying physical laws L, and the cognitive limitations C of O.
The universe of discourse is effectively the world, and the phenomenon is selected from the feature set the observer can cognise in that environment.]
So pizza reductionism is a methodological point: we have to investigate the world in terms of these phenomena, but let us never forget that they are the explananda, not the explanans – a general point we should apply in the philosophy of mind, in ethics, and in the other non-physical domains and sciences.
[Added: Owen Flanagan has named the problem of naturalising meaning the “Really Hard Problem”; his solution is that prescriptive ideas are based around eudaimonic properties – which is to say, things that make us flourish. I agree with this, but continue to insist that these are purely physical relations. Meaning, and here I mean semantic meaning, is a physical thing. It doesn’t exist on Omicron Persei 8, even if there are organisms there and they communicate symbolically; that would be a different set of physical kinds from those that obtain here. So, as I think the Hard Problem is not Hard, I think the Really Hard Problem is not either.]
[* A term from Paul Griffiths. I immediately identified myself as one when he said that.]