Is the brain a computer?

There has lately been an ongoing debate over whether or not our brains are computers (against | for). This is an old debate, going back at least to Turing’s famous 1950 paper “Computing Machinery and Intelligence”, which opens by asking “Can machines think?”. To explain why I think that brains are not computers, contrary to my friend Jeff Shallit (second link above), let me consider a different case.

Before I do, though, let me say that I think Epstein’s article (first link) misses the point. He is correct that metaphors regarding thinking tend to rely on the latest technology, but there is a better reason why I think brains are not computers than his assertion that brains do not store information the way a computer does; and of course that reason is philosophical rather than technological or scientific.

Consider a model of the solar system. Back before computers (way back, in fact – the Antikythera mechanism dates to the second or third century BCE), astronomers and calendar-makers made physical models of the solar system as a way to calculate what the planets would do. In the post-Copernican era, beginning in the early 18th century, such representational models were known as orreries.

An 18th century clockwork orrery.

These devices and their successors were analogue calculators of the planetary orbits. When computers came along, virtual orreries became possible, and these days you can find high-precision ones (such as Celestia). These run on computers and compute the state of the planets (and stars). We can say that Celestia is a computational algorithm. But can we say that of either the physical orreries or, for that matter, the universe itself?
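To see what “computing the state of the planets” amounts to in software, here is a deliberately crude sketch in Python. It assumes circular, coplanar orbits, and the period and radius figures for Earth and Mars are rounded illustrative values of my own; a real virtual orrery such as Celestia uses vastly richer models.

```python
import math

# A toy virtual orrery: planets on circular, coplanar orbits.
# Periods (years) and orbital radii (AU) are rounded illustrative
# values, not figures taken from Celestia or anywhere else.
PLANETS = {
    "Earth": {"radius_au": 1.00, "period_years": 1.00},
    "Mars": {"radius_au": 1.52, "period_years": 1.88},
}

def position(planet, t_years):
    """Return (x, y) in AU after t_years, starting from angle zero."""
    p = PLANETS[planet]
    angle = 2 * math.pi * t_years / p["period_years"]
    return (p["radius_au"] * math.cos(angle),
            p["radius_au"] * math.sin(angle))

for name in PLANETS:
    x, y = position(name, t_years=2.5)
    print(f"{name}: x={x:+.2f} AU, y={y:+.2f} AU")
```

The point of the sketch is only that the program, like the orrery, stands in for the planets: nothing in it orbits anything.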


Why not the universe? Doesn’t it calculate the state of the universe moment by moment? Well yes, in one sense – if you can “read off” the universe’s states, then it acts (for you) as a calculator of those states, but I think most people would rather call that observation, not calculation (although on Cicero’s dictum that there is nothing so absurd that some philosopher hasn’t said it, at least some think the universe is indeed a calculation). So how about the orrery of old?

To give another example, consider the story of World War II aeronautical engineers using a device made of springs to “calculate” the drag and lift profiles of wings, saving themselves laboriously solving simultaneous equations. In this sense the orrery is a calculator (as was the Antikythera mechanism). But is it a computer? This is where it gets tricky, or to put it another way, philosophical.

A computer is a logical device, or more exactly, a Turing machine. Physical computers approximate Turing machines well enough that we call them computers, but their computational role lies in the way we interpret the output. A Turing machine never suffers power outages, short circuits or component failure, and it never runs out of memory. Physical computers are usefully treated as Turing machines, until they aren’t. To call a physical computer a Turing machine is to say that Turing machines are a useful model of the physical device.
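To make the abstraction vivid, here is a minimal sketch of a Turing machine in Python. The transition-table representation and the unary-increment example are illustrations of my own, not anything drawn from Turing’s paper.

```python
# A minimal Turing machine: a transition table, a tape, and a head.
# The "increment" table below appends one '1' to a unary number.

def run_turing_machine(table, tape, state="start", blank="_", max_steps=1000):
    """Run a transition table until the machine reaches the 'halt' state.

    table maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), 0 (stay) or +1 (right).
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

increment = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("halt", "1", 0),    # write a 1 at the first blank, halt
}

print(run_turing_machine(increment, "111"))  # -> 1111
```

Notice that the sketch is pure bookkeeping over symbols; power supplies, components and memory limits simply have no place in it.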

This is where the philosophy comes in. We use mathematical models for all kinds of purposes to represent physical objects. They are computational models, like the orrery. Anything can be modelled this way, from electrons to populations of organisms, but every model lacks some parameters the physical system has (short of using the universe as its own model), and so the representation is never complete. It is only ever good enough for our purposes in modelling.

Brains, as Epstein noted, are one of the physical systems we have modelled using mechanical and latterly mathematical metaphors, and they are good as far as they go. But a brain is a physical system that involves chemistry, biology and embodied connections to the rest of the brain-bearer’s environment. To call a brain a “computer” is to say that in some of its behaviours it is useful to us to represent it that way, and this has been very fruitful, albeit not as fruitful as is sometimes claimed.

But consider neural networks. A physical neural network is a connected network of neurones whose processes meet at synapses, and which conduct electrical signals to those synapses, where the “signal” is passed on chemically: an influx of calcium ions triggers the release of neurotransmitters across the gap. Neurones have a refractory period during which they cannot send that signal again until their ion gradients are restored. They can also signal directly to other nerve cells, to the bloodstream, and to other parts of the body.

Now consider an artificial neural net (ANN). This is a logical construct in which the “neurones” (which are actually algorithms in the program) have “weights” or “strengths” (propensities to signal forward to the next node, or “neurone”). In some respects an ANN “represents” some aspects of the brain’s behaviour, but it would be better to say that it is inspired by the way the brain behaves, because it leaves out most of what the brain actually does. We have learned a lot about the behaviours of different kinds of ANN, and from this we can make some partial inferences about the properties and capacities of actual neural nets (they are very good classifier systems on noisy data sets, for example). But they are not brains, nor parts of brains. They are programs utilising a logic.
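To see how thin the resemblance is, here is a minimal sketch of such a node in Python. The weights, inputs and two-layer wiring are illustrative values of my own, not taken from any real system.

```python
import math

# A toy feed-forward "neurone": a weighted sum squashed by a sigmoid.
def neurone(inputs, weights, bias):
    """Return a 'propensity to signal forward' between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two such nodes feeding a third: a two-layer network in miniature.
hidden = [
    neurone([0.5, 0.9], weights=[1.2, -0.7], bias=0.1),
    neurone([0.5, 0.9], weights=[-0.4, 0.8], bias=-0.2),
]
output = neurone(hidden, weights=[1.0, 1.0], bias=0.0)
print(output)  # a number in (0, 1); no ions, synapses or chemistry involved
```

Nothing in it corresponds to refractory periods, calcium ions or the bloodstream; the “neurone” is just arithmetic.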

This conflation of the model with the physical object is both ubiquitous and old. In the 14th century, logicians such as Buridan (he of the famous ass) made a clear distinction between the sign and the signified, a distinction later taken up by Saussure. These days we have physicists arguing that the world is an algorithmic representation, or that electrons just are their mathematical properties, in both cases mistaking the mathematical representation for the thing represented. So it is unsurprising that the computational model of mind is so widespread: we have forgotten the distinction between the signifier (the model) and the signified (the physical thing). As it is often put, we mistake the map for the territory.

But orreries and algorithms lack the physical properties of the things they represent, or else my laptop would have the rough mass of the solar system when I run Celestia, and would need a major influx of oxygen and glucose, among other nutrients, when I run an ANN. It doesn’t (fortunately in both cases).

The brain is not a computer – it is a brain. The orrery is not a solar system – it is a mechanical or logical simulation. They both serve for us to compute the states of the physical thing, but the things themselves are something else.

Unless one takes a fully Platonist view of the world (that everything is calculation, a view Plato gets via the Pythagoreans), to call something what it merely resembles for our purposes is to fail to distinguish between the map and the territory. Something is always left out. As I have said before, physical differences make a difference. Run a simulation of me in a computer, and I guarantee there will be crucial things left out, even if you get down to a Planck-level simulation.

We tend to privilege our cognition as more real than the rest of the world – a piece of hubris humans have always displayed – and so we naturally think that what we find useful defines what we are looking to explain. But science is the process of applying models to real-world data, exploiting them, and finally abandoning them, and in the end the data always triumphs. Let us not get too tangled up in our metaphors.

13 thoughts on “Is the brain a computer?”

  1. A computer is a computer because we compute with it, just as a watch is a timepiece because we tell time with it. No formal description of the motions of either the currents in the computer or the wheels in the watch will tell you what they are doing if you don’t cheat by importing your understanding of the world they are embedded in. It’s amazing to me that so many rather sophisticated individuals believe that meanings can be encapsulated in particular places in either computers or brains, like spirits imprisoned in bottles.

    I read someplace that some computer guys tried to figure out what the chip in a Donkey Kong game did without reference to knowing about the game. Unsurprisingly, they found out next to nothing. They could trace the circuitry and even run experiments on what happened when you introduced a current here or there, but all the information was purely formal: a description of motions.

    I recently read Chomsky’s latest summary of his ideas on linguistics, a book he co-wrote with a computer science professor named Berwick. The model in Why Only Us: Language and Evolution is pretty familiar if you’re used to Chomsky, though the grammar or syntax module is a lot simpler than it used to be: everything grammatical now unfolds from the single operation Merge, whose emergence was, according to Chomsky, the key moment in human evolution. The syntax machine interfaces on one hand with systems that realize sentences as spoken or signed language (the sensorimotor system for externalization) and on the other with “the conceptual system for inference, interpretation, planning, organization of action, and other elements of what is informally called ‘thought’.” This last system contains elements that have meaning (though, obviously, they can’t be words exactly), and it’s the part of Chomsky’s thinking that doesn’t compute in my conceptual system, because it beats me how word-like elements can have meanings absent motivated interaction with the world. One piece of evidence for this is the way words lose their meaning if they are repeated over and over again without a context, a fact exploited in many religious and mystical rites. Mantras don’t mean. That’s kinda the point. The thoughts of the brain in the bottle become a meaningless chant. Even the homunculus in Faust had to break the glass to become a living thing.

    Here’s the connection with the computer/brain issue. For all I know, there really are specific places in my head where memories are stored, just as the word “flummox” is on page 187 of a particular dictionary. What I got out of Epstein (or perhaps just wanted to find in his essay) was not so much that the computer metaphor is wrong, but that even if it happens to be apt at least to a certain extent, it is profoundly misleading. The word “flummox” can’t appear in the dictionary-in-itself, and the memories I have can’t be dissected out of my brain, because words and memories are what they are for us only because they are interpretable.

    By the way, welcome back.

  2. In this analysis, I think you miss at least two important things.

    First, what is meant by the claim “the brain is a computer”. Nobody understands by this that the brain is a modern digital computer made of silicon, so of course that is a metaphor. What is meant is that the brain computes: it takes inputs, processes them, and gives results. I stated that pretty explicitly in my rebuttal of Epstein, and that is the crucial point.

    Furthermore, the brain does what it does using systems that obey the laws of physics, and so (in principle) can be simulated by analogous systems, whether they are made of silicon or meat.

    I wonder if you have similar objections to claims like “the heart is a pump”, “the flagellum is a molecular machine”, and so forth. We use language like this all the time, and nobody except the densest would think they mean, for example, that the heart is made of metal and rubber. I fail to see why these claims are misleading or wrong.

    The second thing you missed is one of the deepest insights of Turing: all computers can be modeled by a Turing machine, and any sufficiently powerful computer can do all that a Turing machine can do. So when we say “the brain is a computer” we are also saying that we understand the kinds of computational powers the brain can have, and the limits of those powers.

    As a computer scientist, I may be biased, but I think these kinds of insights into the brain are far deeper and more useful than anything produced by philosophers (I am thinking, in particular, of the work of Dreyfus and Searle). Why do computational savants succeed in finding large primes, but not in factoring large numbers? That’s the kind of question I can answer with my view, but philosophers cannot.

    1. There are abstractions of pumps and machines in general, but it is pretty clear they abstract physical objects, so nobody is confused if you say that a physical object that pumps is a pump, etc. Well, not confused that much – the machine metaphor for flagella confuses creationists and intelligent designoids.

      But a “computer” is a logical object only. Macs, PCs, Raspberry Pis and the like are grouped together in virtue of their instantiating that logical conception. Brains do not instantiate it without a fair degree of abstraction – the workings of brains are not logical devices except in the very general sense that anything can be represented as a logical device (i.e., a Turing machine).

      Where brains and computers have been fruitfully compared is when computers (the logical kinds instantiated in hardware) are programmed to behave something like a brain, not the other way around. The computational metaphor hasn’t led to any new neuroscience, but neuroscience has led to new computational techniques (ANNs). That is a bald statement, and I am sure there are some slight exceptions, but that is very indicative.

      Philosophers who work on the brain/mind relationship are not restricted to the admittedly strange views of Searle (who is running a linguistic metaphor of mind rather than a computational one). The Churchlands take the neurology seriously, for example.

      It is not disputed that there are those who can operate according to algorithmic rules, such as savants (a term now out of favour, by the way). But the reality is not that their brains are like computers at base, or else I could do it too, but rather that they have a neurology that makes it possible for them to learn to do it in a manner that computers can describe. I can add simple numbers – but that doesn’t make my brain a computer. I learned to do it.

      One thing we do is learn rules, though, as in language, and so it doesn’t surprise me that some can apply the rules of mathematics (which, after all, humans invented) better than others.

    2. Well, it is not simply my brain that computes and processes things. *I* compute and process things. It is not a wholly mechanical, automatic, subconscious phenomenon. Otherwise how could we ever make a mistake when learning or practicing mathematics, for instance? Or ever be corrected? It’s this entanglement of my i) mind and ii) will that makes the human being an interesting and peculiar case. People theorizing about AI should carefully consider whether they can actually imbue a machine with free will: a machine that is neither absolutely deterministic in its outcomes on the one hand nor wholly random on the other, since neither of those is really free will. Nor would its being sometimes deterministic and sometimes random be an instance of free will, as that too would ultimately depend on something deterministic in the machine’s programming.

      In more classical philosophy, free will and intellect were often closely associated. Saint Thomas Aquinas in one place defined free will as intellectual desire, or intellectual appetite. This is perhaps partly why the concept of The Good and its nature or reality has always been significant in philosophy.

      Again, *I* can quite freely choose not to exercise my reasoning powers. A stubborn pupil may just not incline himself to learn that day’s lessons. While all the inputs are present, it is not necessary that he produce the desired or correct outputs, or even any relevant outputs at all. Unlike computers and machines, a human being’s computing and processing can sometimes be personal. We can start to talk about issues of character even being relevant. But to seriously suggest that my computer isn’t working properly because it’s being stubborn would be just silly.

  3. I guess I get hung up on that sentence, “the brain is not a computer”.

    The brain is a murkily understood phenomenon — it’s a collection of interacting subsystems that are biologically useful in regulating the behavior of an organism.

    To most people, a computer is a specific and well-defined device. In that case, it’s safe to say that we shouldn’t compare a complex mystery to a specific technology.

    But if you have a broader sense of what a “computer” is that isn’t tied to a specific instantiation, and if you’re trying to get a handle on the process of computation, then yes, a brain IS a computer. Babbage’s difference engine is a computer, even if it is using gears and cams and has a limited range of functions. My mac laptop is a computer even if it doesn’t use weighted ion fluxes in a distributed network of neurons. My brain is a computer even if it sucks at repetitive calculations and generating tables of numbers.

  4. I made a similar argument in “Computation and Early Chinese Thought” (Asian Philosophy 22:2, 2012). In it, I come up with this definition for computation: “a process in which the fact that one system is rule governed is used to make reliable correlations to another rule governed system.” From that definition it follows that the brain is not “doing computation” (being a computer), since it’s the thing that correlates, not the correlated system.

  5. Reading the comments, I would say the problem with the commenters who argue for a “broader” and more metaphorical sense of “the brain is a computer” is that they are not broad enough and miss the heart of the metaphor.

    How can we tell computers from non-computers? If the sand washing up on the beach corresponds to some arbitrary calculation, is the ocean a computer that is computing that calculation? Since any rule-governed system can be described by a Turing machine, it follows that we need a better definition of computer, or else EVERYTHING can be a computer, which is another way of saying that nothing is.

    A pump can be spoken of without reference to human activity. A pump is a physical thing which causes water to move by applying pressure to it and letting it go somewhere. Parts of computing (input, output, etc.) can be spoken of without reference to human beings, but “doing a computation” and “being a computer” cannot, because everything rule-governed can be described using computation.

    Here’s an analogy. Suppose someone argued, “the brain is a book.” Why? Well, a book is a written thing, and you can write about all the things going on in the brain! Yes, but on that definition, everything in the universe is a book! The brain being a computer because it can be described by a computer has the same conceptual problems.

  6. “Let us not get too tangled up in our metaphors.”

    It strikes my small and insignificant brain as very tangled, to the point of denying it’s a metaphor.

    I am reading a 17th century natural historian just now who I suspect believed the universe was contained in his glass of water and its occult properties could be discovered and categorised by taste.

    Movement from 17th century tongue to 21st century brain.

  7. “We use language like this all the time”

    “He is correct that metaphors regarding thinking tend to rely on the latest technology, but there is a better reason why I think brains are not computers.”

    I think rejecting Epstein’s historical approach is sensible. The generalisation about technology certainly works as far as I can see, but the examples he uses to suggest that the metaphor holds its form through time fall apart when you start to place them in context, I think.

  8. “forgotten the distinction between the signifier (the model) and the signified (the physical thing).” One does not have to be a Platonist to believe a physical “simulation” of a mathematical calculation is in fact realizing that calculation. In the same way, I don’t think my playing a Bach prelude on the piano is a simulation of Bach’s original performance – it is an interpretation; you may or may not think of it as an instance.

    As to pancomputationalism (or “dancing pixies”), I reckon the answer is in the thermodynamics of information and of life, e.g. Parrondo et al. 2015 and Faist et al. 2015.

  9. “One does not have to be a Platonist”

    The historical argument on the theory of properties is an interesting one. Both in terms of method and subject.

    “it developed in response to a variety of needs, and one mistake of modern attempts at interpretation is to seek a unique rationale of one notion or another. Each notion evolved continually, satisfying one need at one time and another at a later date, and often several conflicting needs at the same time.”

    http://plato.stanford.edu/entries/medieval-terms/

  10. My limited understanding of the subject suggests first and foremost that the brain is really numerous components, some of which may behave in some ways like a digital computer (e.g. the colour-opponent neurons found in the visual cortex), and other centers behaving much less like a traditional “computer” and more like feedback loops and the like. The brain’s evolutionary history is very complex: the earliest brains were probably little more than “switching stations” for traffic from the peripheral nervous system, which would be a rather “computer-like” function, but as some lineages evolved larger and more complex brains, the organ became, like so many organs found in living creatures, a hodgepodge of bits and pieces, with a good deal of co-opting and repurposing.

    So I can’t see how one could say certain parts of the brain aren’t “computers” on some rather loose definition, but obviously the organ as a whole is far too complex, with too much specialization in some regards and, oddly enough, some functions duplicated or shared between parts of the brain that may be separated by considerable physical distance, or in how they’re wired together.

    Like everything in evolution, it’s built from the parts that are available, with the more plastic elements making connections in often peculiar ways. No one would build a computer like a brain.

  11. //the machine metaphor for flagella confuses creationists and intelligent designoids.//

    Erm, this is not a metaphor. If all IDists had to do was make the point that this is a literal machine, then the argument would be over. No, the arguments are over how it came about: Darwin apologists insist it was a series of chance events; IDists say it was designed.
