Last updated on 22 Jun 2018
I have been kind of busy with actual, you know, work, which is ironic because I do not actually have, you know, employment. But I am teaching. Anyway this is by way of being an apology and apologia for not having posted lately. Be assured much Wilkinsy goodness is being done behind the scenes.
So my text for today is John Horgan’s piece in Scientific American on whether or not everything is built from information. His argument is not great: basically, it amounts to saying that the idea fails common sense. Many things fail the common-sense test without thereby being false, and the commenters pick up on this almost immediately. However, I agree with the conclusion even if not with that argument. Here are some equally bad ramblings on why.
There is a long-standing western tradition, derived from the classical era, that there is something ontologically unique about information, usually given the label Logos. Philo of Alexandria bequeathed that philosophy to the eventually-Christian west out of nascent neo-Platonism, but information as a quantity is rather late. For most of the history of the Christian west, “form”, not information, was the key property. Like information now, it was not physical, but it had physical effects. Basically, this view, known as hylomorphism (matter-formism), was a constraint upon what evolved into modern science. It was mostly supplanted by atomism and its physical heirs and successors, such as quantum mechanics, modern subatomic physics and the particle Zoo.
Okay, so why is information now so important? As communications technology improved, it became important to ensure that a signal sent at one place was received properly at the other end. At Bell Labs in the 1940s, Claude Shannon developed a mathematical theory of communication (note: not “information” as such) which involved the definition of “bits” (binary digits) and an entropy-like equation that came to be known as “Shannon’s metric”:
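In its standard form, for a source whose symbols occur with probabilities $p_i$, the measure is

$$
H = -\sum_{i} p_i \log_2 p_i
$$

in bits; for $N$ equally likely symbols this reduces to $H = \log_2 N$.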
Basically, Shannon’s metric is the number of binary decisions it takes to get from a field of possible states or symbols to a single state or symbol. It’s a measure of difference.
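That counting idea can be sketched in a few lines of Python (the function name is mine, for illustration): for a uniform field of N symbols, Shannon’s H is just the number of halvings, log₂ N, needed to narrow the field to a single symbol.

```python
import math

def shannon_entropy(probs):
    """Shannon's H: the expected number of binary decisions (bits)
    needed to single out one symbol from a field of possibilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Eight equally likely symbols: three yes/no questions suffice.
print(shannon_entropy([1/8] * 8))          # 3.0

# A biased field takes fewer questions on average.
print(shannon_entropy([0.5, 0.25, 0.25]))  # 1.5
```

Note that H says nothing about what the symbols mean, which is exactly Shannon’s own point below.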
Now this is not really what most people think of when they think of information, although it is part of it. Shannon himself was fairly clear that this had nothing to do with semantic information, or meaning. Moreover, he and his contemporary Norbert Wiener, the founder of cybernetics, knew very well that this was about formal descriptions of things, not the things themselves. Wiener even wrote:
Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. (Wiener 1948: 132)
What was information, then? Well, it was something that “existed” in descriptions and statements. It was the structure of some string of symbols and our uncertainty that the string we have received is the string that was sent. How, then, is the universe supposed to be composed of information, as the physicist John Wheeler, with his slogan “it from bit”, held that it was? Wheeler’s view (1990) was basically this: if a state of the universe or part of it can be clearly described, as physicists think it can be, then there is an information content to that state. Using equations like the wave function we can describe the universe and its evolution over time. Therefore, the universe is made from information. A similar argument is often put by David Chalmers under the rubric of “the Matrix”, after the famous film (Chalmers 2005). Any reality we experience is simply the sum of all the information we have about it. This is Berkeleyan idealism updated for the computer age.
Now if I may step back a bit to the oft-abused scholastic philosophers of the late medieval period, they made a distinction, later adopted by C. S. Peirce, between the sign and the signified. If you like, it is the distinction between the words and the world. The informational idealism of Wheeler is, in effect, to say that all we have access to, and therefore ontologically all there is, is the information contained in our equations and descriptions of the world. Not only is to be, as Quine put it, to be the value of a variable; to be just is the value of the variable. This is an error of inversion: putting Descartes before the horse, so to speak.
Assume with me now that there is, in fact and independently of anything anyone may know about it (gods included), a computer before me. I am actually, whether I know it or not, typing on this computer. Now suppose I give you a clear and precise description of that process. Call that description D and the state of the computer being typed on S. Does S resolve down to D? Is S nothing more than D? Is the sentence “John types on his Mac” the state of John typing on his Mac? Surely that is faintly absurd. I might say that the fact of John typing on his Mac is the sentence, or some proposition with equivalent information content, sure. I might even say that my knowledge (or anyone’s knowledge) of that case consists entirely in the information content of that sentence or proposition. But to say that my typing on my Mac just is the propositional content of D is anthropomorphism of the first water.
To mistake the sign (the word, description or formalisation) for the signified (the denotation, extension or reference) is a classic mistake. It goes by the name “reification fallacy” (Marcuse) or “hypostasis”. Whitehead, that badly-underappreciated philosopher, called it the “fallacy of misplaced concreteness”. John Maynard Smith would walk up to students in the cafeteria at the University of Sussex and ask of their discussions “Is this about words, or the world? If it is about the world, I will stay, but if it is about words, I will go.” [Anecdote about JMS by David Penny, c2000] Surely we cannot be making such a simple mistake?
We can, and do. In fact it is I think one of the enduring mistakes of western thought for 2500 years, to the point where a good many people think it is not a mistake at all. It underlies the argument from design (since Socrates, according to Sedley 2007). It puts our conceptual forms, and symbolic formulations, before the world they are supposed to refer to. It’s in Locke, Kant and Russell. And it is, I believe, entirely unnecessary. One need not think that the world has semantic content even if it has structure.
The misuse of information talk, the new hylomorphism, is ubiquitous. We cannot conceive of things without representing them, so we mistake our representations for the things. Consider arguments from simulation, such as the infamous “Singularity” views of Ray Kurzweil. Ignore the fact that few if any of the predictions made by people since Turing have come to pass; that may be due more to the contingencies of technological development than to anything in principle. Kurzweil’s argument is roughly this: we can simulate the activity of each neuron in the brain, and a neural simulation behaves the same way as the neurons it simulates. Since we are the sum of all the neuronal behaviours of our brains, eventually we will be able to instantiate ourselves in a computer, and live forever.
But, and here is the hypostatic fallacy, a simulation is not the same as the thing simulated, or a computer model of the solar system would have a mass of 1.992 × 10³⁰ kg, which it doesn’t. A “brain” being simulated is a simulated, not a real, brain. Physical differences make a difference. This new anthropic hylomorphism misleads our thinking. It is found in genetics (genes are “transcribed”, “edited”, and “code for” properties). It is found in physics. It is obviously found in information technology. It is found all over the place. There is even a tendency for scientists to mistake their formal descriptions and record keeping (as in the Ontology project, the very name of which is a giveaway) for the things they record. Systematists in biology, in their battles over nomenclature, often make this very error.
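The solar-system quip can be made literal with a toy sketch (Python; the names are mine, purely illustrative): a simulation records a mass figure, but the record itself has only the negligible physical footprint of a little memory.

```python
import sys

# A toy "simulation" of the Sun: it records a mass, it does not have one.
sun_model = {"name": "Sun", "mass_kg": 1.99e30}

# The description D occupies a few hundred bytes of memory...
print(sys.getsizeof(sun_model), "bytes")

# ...while the state S it describes is ~2 × 10^30 kg of plasma.
# No operation on sun_model will ever exert that gravity.
```

(`sys.getsizeof` reports only the shallow size of the dict object, but that is the point: whatever the exact figure, it is bytes, not kilograms.)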
So, if information is not a physical property of energy or matter, what is it? Here I think the ideas of Edward Zalta, who among other things edits the wonderful Stanford Encyclopedia of Philosophy, can help. In his theory of abstract objects (1988), Zalta distinguishes between things that are bounded by space and time, and are hence concrete, and things which are not so bounded, which he calls abstract objects. For my money, only a concrete object can have causal powers, and hence only a concrete object can be explanatory for physical processes and states. Information is an abstract property that inheres in abstract objects. Words, qua words, are abstract objects (but an instance of a word is a concrete object of sound, ink or electromagnetically modulated signals), and so they have no causal power. This debate, too, is old. It includes the famous nominalisms and conceptualisms of the middle ages: are universals (things which include more than a single particular thing) real or in the head? Is information just in the head? How can a physicalist like me account for shared informational properties?
Again, I refer to the abstract/concrete distinction. What is in my head, and indeed what is in the sum of all heads across time and space, is concrete even if it is a functional rather than material thing. All cases of the word “dog” and cognates exist in physical heads or something like them, and ancillary contexts for recording and retrieving information sensu lato. But the concept itself does not. It is unbounded by time and space. And that suggests that the nominalist view, that these things do not really exist, is correct, and so I conclude. We are the ones that instantiate abstractions, and so the information exists, inasmuch as it does, only in our semantic behaviours. There is no existing thing that is information, just behaviours that we abstract out for formal purposes. However, one may take a different line and still be a physicalist or a realist of some flavour. I’m just giving my preferred defence of the matter.
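The type/token distinction being leaned on here can also be put in a toy Python sketch: two concrete tokens of the “same” word are identical in content yet remain distinct physical objects in memory.

```python
# Build two tokens of the word "dog" at runtime, so the interpreter
# does not fold them into a single interned literal.
a = "".join(["d", "o", "g"])
b = "".join(["d", "o", "g"])

print(a == b)   # True: one abstract type, judged by content
print(a is b)   # False (in CPython): two distinct concrete tokens
```

The equality of content is what tempts us to posit one abstract word; the distinct identities are all that concretely exists.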
So if we abandon the metaphysics of hylomorphism and adopt a realist view of the world, that, I think, is common sense of a kind. It avoids the unnecessary anthropomorphism that we have fallen into and probably always will. It’s not an easy view to hold, but I think it is right. Information is an abstraction, and does not, strictly speaking, exist.
Chalmers, David. 2005. The Matrix as metaphysics. In Philosophers Explore the Matrix, edited by C. Grau. New York: Oxford University Press. Reprinted in The Elements of Philosophy, edited by T. Gendler, S. Siegel and S. Cahn (McGraw-Hill, 2007), and in Science Fiction and Philosophy, edited by S. Schneider (Wiley, 2009).
Sedley, David N. 2007. Creationism and its critics in antiquity, Sather classical lectures. Berkeley, CA; London: University of California Press.
Wheeler, John Archibald. 1990. Information, physics, quantum: The search for links. In Complexity, Entropy, and the Physics of Information, edited by W. Zurek. Redwood City, CA: Addison-Wesley.
Wiener, Norbert. 1948. Cybernetics, or, Control and communication in the animal and the machine. Cambridge, Mass: Technology Press.
Zalta, Edward N. 1988. Abstract Objects: An Introduction to Axiomatic Metaphysics. Dordrecht: D. Reidel.