Every so often somebody or other will assert that one day we will achieve immortality by downloading our brains into computers (this week it is Stephen Hawking). What happens to personal identity when tech support tells the sysadmins to reboot the computers is unclear, but I want to deal with a somewhat different issue. Can we, in fact, upload our brains? Or our minds?
Consider this: the solar system can be simulated in a computer – I have a really cool one on my iPad. However, there are some rather obvious differences between the solar system and the digital orrery I play with. For a start, the solar system is roughly one and a quarter light years across. My iPad is about 9″ long. So whatever is happening in my computer is not quite like the thing it simulates. This is an old observation: when John Locke held that we have in our heads pictures of the objects we see, he of course knew that the image of the red table in our minds is neither red nor the same size as the table itself. The reason is pretty obvious: the represented thing and the representation are very different entities. There may be some way of mapping one to the other, but any such mapping works only by treating some features as important and ignoring others. If you wish to put a cup of coffee on the table, you had better realise very quickly that the representation lacks some rather important physical features, or else your iPad will shortly cease working on account of being over-caffeinated.
So let’s think about what is going on in downloading a brain. Assume that we can nondestructively copy the neural structure of my brain to a computer program. Will it be the same sort of thing as my brain? Yes, and no.
For many years now it has been presumed that what matters in cognitive processing is the structure of the neuronal networks – their propensity to fire, their topology (the shape of the networks), and so on. All this can be simulated as a neural network in a computer, using a formalisation known as a “neuristor”: a node in a graph that sends a signal forward if the signals it receives reach some threshold. This generates “neurone-like behaviour”. Neural nets so formed are very useful in various fields of computing, especially in classifier systems dealing with noisy data. We have used these neural nets to uncover some of the properties of brains.
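The threshold rule is simple enough to sketch in a few lines of Python. This is purely illustrative – the function name, weights and threshold are my own choices, not taken from any particular neural-net library:

```python
def neuristor(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of incoming signals
    reaches the threshold; otherwise stay silent (return 0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With a threshold of 1.0 and weights of 0.6 each, this unit fires
# only when both of its two inputs are active – a logical AND:
print(neuristor([1, 1], [0.6, 0.6], 1.0))  # -> 1 (fires)
print(neuristor([1, 0], [0.6, 0.6], 1.0))  # -> 0 (stays silent)
```

Wire many such nodes into a graph, with the output of one feeding the inputs of others, and you have the kind of neural net used in those classifier systems.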
The “download a brain” approach, though, works in reverse. It assumes that what counts about a brain are just the formal network properties that neuristors capture when they map neural networks. That is the interpretation. And it has some serious problems: for a start, it is an abstraction. The actual messy brain I carry around in my head also has hormones, deficiencies of various neurotransmitters and superfluities of other chemicals like alcohol. There are rates of glial cell infusion of nutrients to the neurons (and we aren’t all that sure whether processing occurs in glial cells too) which must at least regulate or modulate neural processing, etc. And then, there’s the body it comes in. Brains do not sit in a vat; they receive a rich stream of data from the senses.
So the first approximation is incredibly simplistic. It will never be enough to just map neural networks. But okay, suppose we can model or simulate all these properties too. Won’t that put John Wilkins or Stephen Hawking in the box? Well, even that will be a simulation based on abstractions. The physical world is messy, noisy and subject to minute variations that can amplify and have serious large-scale effects in systems like this. So no matter how accurate or precise we make the simulation, there will be some features left out, unless we run the simulation at a subatomic level, in which case it will run so slowly that we would “exist” in a molasses-like world, as the real world zipped past. Or, we could just run a simulation of ourselves in an atom-for-atom copy of ourselves by some kind of Star Trek transporter duplicator. That would run at real world speed and have little to no abstraction, but then what would be the point? Maybe we should work hard at fixing the brains we do have?
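The point about minute variations amplifying can be illustrated with a standard toy chaotic system, the logistic map – not, of course, a model of a brain, just a demonstration of how a difference far below any plausible measurement precision becomes macroscopic:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a textbook chaotic system."""
    return r * x * (1 - x)

# Two states differing by one part in a billion – an error any
# brain-scanning procedure would be proud to achieve.
a, b = 0.4, 0.4 + 1e-9

max_diff = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_diff = max(max_diff, abs(a - b))

# After a hundred steps the trajectories have long since decorrelated:
# the initial gap of 0.000000001 has grown to macroscopic size.
print(max_diff)
```

If a simulated brain has any dynamics of this kind – and real neurons plausibly do – then whatever the abstraction leaves out below its resolution will eventually matter.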
Physical differences make a difference, and ahead of time we can’t predict what will have an effect on whether silico-me will behave the same way vivo-me does. While I am sure that we will manage to simulate “me” or somebody like me (more likely Stephen Hawking than me), the simulation won’t be “me” in any deep sense. It’s either going to be a representation of me that has some dynamical properties in common with me but not all, or it’s going to be a deep simulation of me that lives a day every so many months of realtime. And even then it won’t be exactly like me, unless you also simulate the body, the environmental affordances for that body that match the world I do inhabit, and every last possibly important biological function or process that might affect me. I look forward to hearing from the person who has to simulate my lower colon.
Abstract representations are not the things being represented, and a model is not the modelled thing – a point many philosophers, but not enough mathematicians, have noted before. So I finish with this SMBC comic: