There has lately been an ongoing debate over whether or not our brains are computers (against | for). This is an old debate, going back at least to Turing’s famous 1950 paper “Computing Machinery and Intelligence”, which opens by asking “Can machines think?”. To explain why I think that brains are not computers, contrary to my friend Jeff Shallit (second link above), let me consider a different case.
Before I do, though, let me say that I think Epstein’s article (first link) misses the point. He is correct that metaphors for thinking tend to rely on the latest technology, but I have a better reason for thinking brains are not computers than his assertion that brains do not store information the way a computer does, and of course it is philosophical rather than technological or scientific.
Consider a model of the solar system. Back before computers (way back in fact – the Antikythera mechanism dates to the second or third century BCE) astronomers and calendrists made physical models of the solar system as a way to calculate what the planets would do. In the post-Copernican era, beginning in the 18th century, these were in fact representational models known as orreries.
These devices and their successors were analogue calculators of the planetary orbits. When computers came along, virtual orreries became possible, and these days you can find high-precision ones (such as Celestia). These run on computers, and compute the state of the planets (and stars). We can say that Celestia is a computational algorithm. But can we say that of either the physical orreries or, for that matter, the universe itself?
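To make concrete what “computing the state of the planets” amounts to, here is a minimal sketch in the spirit of a virtual orrery: solving Kepler’s equation M = E − e·sin(E) for the eccentric anomaly E, and turning it into a position in the orbital plane. The orbital elements are merely illustrative (a Mars-like eccentricity), not taken from any real ephemeris.

```python
import math

# Sketch of one step a virtual orrery performs: solve Kepler's equation
# M = E - e*sin(E) for the eccentric anomaly E by fixed-point iteration
# (which converges for eccentricity e < 1), then convert E to a position
# in the orbital plane. The orbital elements below are illustrative only.

def eccentric_anomaly(mean_anomaly, eccentricity, tolerance=1e-10):
    """Iterate E <- M + e*sin(E) until it stops changing."""
    E = mean_anomaly
    while True:
        E_next = mean_anomaly + eccentricity * math.sin(E)
        if abs(E_next - E) < tolerance:
            return E_next
        E = E_next

# Mars-like orbit: semi-major axis a in AU, eccentricity e,
# evaluated at a mean anomaly of 1.0 radian.
a, e = 1.524, 0.0934
E = eccentric_anomaly(1.0, e)
x = a * (math.cos(E) - e)                    # position along the major axis
y = a * math.sqrt(1 - e ** 2) * math.sin(E)  # position along the minor axis
print(x, y)
```

The point of the sketch is only that the “state of a planet” here is a pair of numbers produced by an algorithm – which is exactly the sense in which Celestia computes, and the physical orrery and the universe arguably do not.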
Why not the universe? Doesn’t it calculate the state of the universe moment by moment? Well yes, in one sense – if you can “read off” the universe’s states, then it acts (for you) as a calculator of those states, but I think most people would rather call that observation, not calculation (although on Cicero’s dictum that there is nothing so absurd that some philosopher hasn’t said it, at least some think the universe is indeed a calculation). So how about the orrery of old?
To give another example, consider the story of World War 2 aeronautical engineers using a device made of springs to “calculate” the drag and lift profiles of wings, sparing themselves the labour of solving simultaneous equations. In this sense, the orrery is a calculator (as was the Antikythera mechanism). But is it a computer? This is where it gets tricky, or to put it another way, philosophical.
A computer is a logical device, or more exactly, a Turing machine. Physical computers approximate Turing machines well enough that we call them computers, but their computational role lies in the way we interpret the output. A Turing machine never suffers power outages, short circuits or component failure, and it never runs out of memory. It is useful for us to treat physical computers as Turing machines, until it isn’t. To call a physical computer a Turing machine is to say that Turing machines are a useful model of the physical device.
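The sense in which a Turing machine is a purely logical device can be seen in how little it takes to describe one: a state, a tape, and a transition table. Here is a minimal sketch; the example machine (a bit-flipper) and its transition table are my own illustration, not anything from the original literature.

```python
# A minimal Turing machine sketch: purely a logical object -- a state,
# a tape, and a transition table. Nothing in the abstract machine can
# short-circuit or lose power; only the hardware we run it on can.

def run_tm(transitions, tape, state="start", head=0, max_steps=1000):
    """Run until the machine enters the 'halt' state (or we give up)."""
    cells = dict(enumerate(tape))  # unbounded tape: missing cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# An illustrative machine that flips every bit, halting at the first blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(flipper, "1011"))  # -> "0100_"
```

Note that the `max_steps` cap is there only because the *simulation* runs on a physical machine with finite patience – the abstract Turing machine itself has no such limit, which is precisely the gap between model and device the paragraph above describes.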
This is where the philosophy comes in. We use mathematical models for all kinds of purposes to represent physical objects. They are computational models like the orrery. Anything can be modelled this way, from electrons to populations of organisms, but every model lacks some parameters the physical system does have (short of using the universe as its own model), and so the representation is never complete. It is only ever good enough for our purposes in modelling.
Brains, as Epstein noted, are one of the physical systems we have modelled using mechanical and latterly mathematical metaphors, and they are good as far as they go. But a brain is a physical system that involves chemistry, biology and embodied connections to the rest of the brain-bearer’s environment. To call a brain a “computer” is to say that in some of its behaviours it is useful to us to represent it that way, and this has been very fruitful, albeit not as fruitful as is sometimes claimed.
But consider neural networks. A physical neural network is a connected network of neurones whose processes are adjacent at synapses, and which conduct electrical signals to those synapses, where the “signal” is passed on by the release of neurotransmitters (triggered by an influx of calcium ions). They have a refractory period during which they cannot send that signal again until the ion gradients are restored. Neurones can also signal directly to other nerve cells, the bloodstream and other parts of the body.
Now consider an artificial neural net (ANN). This is a logical construct in which the “neurones” (which are actually algorithms in the program) have “weights” or “strengths” (propensities to signal forward to the next node, or “neurone”). In some respects an ANN “represents” some aspects of the brain’s behaviour, but it would be better to say that it is inspired by the way the brain behaves, because it leaves out most of what the brain actually does. We have learned a lot about the behaviours of different kinds of ANN, and from this we can make some partial inferences about the properties and capacities of actual neural nets (they are very good classifier systems on noisy data sets, for example). But they are not brains, nor parts of brains. They are programs utilising a logic.
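To see just how thin the logical construct is compared with the biology described above, here is a minimal sketch of an artificial “neurone” – a weighted sum passed through a threshold – wired into a tiny two-layer net. All the weights and the XOR example are illustrative choices of mine, not any standard architecture.

```python
# Minimal sketch of an artificial "neurone": a weighted sum of inputs
# passed through a threshold. No chemistry, no refractory period, no
# bloodstream -- just arithmetic. All weights below are illustrative.

def neurone(inputs, weights, bias):
    """Fire (return 1.0) if the weighted sum of inputs exceeds zero."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

def tiny_net(inputs):
    """A two-layer net: two hidden nodes feed one output node (XOR)."""
    hidden = [
        neurone(inputs, [1.0, 1.0], -1.5),  # fires only if both inputs fire (AND)
        neurone(inputs, [1.0, 1.0], -0.5),  # fires if either input fires (OR)
    ]
    # OR but not AND: fires exactly when the inputs differ.
    return neurone(hidden, [-1.0, 1.0], -0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, tiny_net(list(x)))
```

Everything a node “does” here is a couple of multiplications and a comparison – which is the sense in which an ANN is inspired by a brain rather than a representation of one.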
This conflation of the model with the physical object is both ubiquitous and old. In the 14th century, logicians such as Buridan (he of the famous ass) made a clear distinction between the sign and the signified, later taken up by Saussure. These days we have physicists arguing that the world is an algorithmic representation, or that electrons just are their mathematical properties, in both cases mistaking the mathematical representation for the thing represented. So it is unsurprising that the computational model of mind is so widespread – we have forgotten the distinction between the signifier (the model) and the signified (the physical thing). As it is often put, we mistake the map for the territory.
But orreries and algorithms lack the physical properties of the things they represent, or else my laptop would have roughly the mass of the solar system when I run Celestia, and would need a major influx of oxygen and glucose, among other nutrients, when I run an ANN. It doesn’t (fortunately, in both cases).
The brain is not a computer – it is a brain. The orrery is not a solar system – it is a mechanical or logical simulation. They both serve for us to compute the states of the physical thing, but the things themselves are something else.
Unless one takes a fully Platonist view of the world (that everything is calculation, a view Plato got via the Pythagoreans), to call something what it merely resembles for our purposes is a failure to distinguish between the map and the territory. Something is always left out. As I have said before, physical differences make a difference. Run a simulation of me in a computer, and I guarantee there will be crucial things left out, even if you get down to a Planck-scale simulation.
We tend to privilege our cognition as more real than the rest of the world – a piece of hubris humans have always displayed – and so we naturally think that what we find useful defines what we are trying to explain. But science is the process of applying models to real-world data, exploiting them, and finally abandoning them, and in the end the data always triumphs. Let us not get too tangled up in our metaphors.