
Will brains be downloaded? Of course not!

Last updated on 27 Sep 2013

Every so often somebody or other will assert that one day we will achieve immortality by downloading our brains into computers (this week it is Stephen Hawking). What happens when tech support tells the sysadmins to reboot the computers is unclear, from a perspective of personal identity, but I want to deal with a somewhat different issue. Can we, in fact, upload our brains? Or our minds?

Consider this: the solar system can be simulated in a computer – I have a really cool one on my iPad. However, there are some rather obvious differences between the solar system and the digital orrery I play with. For a start, the solar system is roughly one and a quarter light years across. My iPad is about 9″ long. So whatever is happening in my computer is not quite like the thing it simulates. This is an old observation: when John Locke held that we have in our heads pictures of the objects we see, he of course knew that the image of the red table in our minds is neither red nor the same size as the table itself. The reason is pretty obvious: the represented thing and the representation are very different entities. There may be some way of mapping one to the other, but this only occurs by taking some interpretations as being important, and not others. If you wish to put a cup of coffee on the table, you had better realise very quickly that the representation lacks some rather important physical features, or else your iPad will shortly cease working on account of being over-caffeinated.

So let’s think about what is going on in downloading a brain. Assume that we can nondestructively copy the neural structure of my brain to a computer program. Will it be the same sort of thing as my brain? Yes, and no.

For many years now it has been presumed that what matters in cognitive processing is the structure of the neuronal networks – their propensity to fire, their topology (the shape of the networks), and so on. All this can be simulated as a neural network in a computer, using a formalisation known as a “neuristor”, or a node in a graph that will send a signal forward if it receives some threshold of signals. This generates “neurone-like behaviour”. Neural nets so formed are very useful in various fields of computing, especially in classifier systems dealing with noisy data. We have used these neural nets to uncover some of the properties of brains.
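To make that concrete, here is a minimal sketch of the neuristor idea (illustrative only: the function names, weights and thresholds below are invented for the example, not taken from any particular neural-network library):

```python
# A "neuristor" as described above: a node that sends a signal forward
# only if its weighted inputs reach some threshold.
def neuristor(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A tiny two-layer net of such nodes. The hidden nodes fire on patterns
# in the input; the output node fires if either hidden node fires.
def tiny_net(inputs):
    hidden = [
        neuristor(inputs, [1, 1, 0], 2),  # fires if the first two inputs are on
        neuristor(inputs, [0, 1, 1], 2),  # fires if the last two inputs are on
    ]
    return neuristor(hidden, [1, 1], 1)

print(tiny_net([1, 1, 0]))  # 1: the first hidden node reaches its threshold
print(tiny_net([1, 0, 0]))  # 0: no hidden node fires
```

Note that nothing in this description mentions hormones, neurotransmitters or glial cells – it is pure network structure, which is precisely the abstraction at issue below.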

The “download a brain” approach, though, works in reverse. It assumes that what counts about a brain are just the formal network properties of neuristors mapping neural networks. That is the interpretation. And it has some serious problems: for a start, it is an abstraction. The actual messy brain I carry around in my head also has hormones, deficiencies of various neurotransmitters and superfluities of other chemicals like alcohol. There are rates of glial cell infusion of nutrients to the neurons (and we aren’t all that sure if processing occurs in them, either) which must at least regulate or modulate neural processing, etc. And then, there’s the body it comes in. Brains do not sit in a vat; they have a rich rate of data input.

So the first approximation is incredibly simplistic. It will never be enough to just map neural networks. But okay, suppose we can model or simulate all these properties too. Won’t that put John Wilkins or Stephen Hawking in the box? Well, even that will be a simulation based on abstractions. The physical world is messy, noisy and subject to minute variations that can amplify and have a serious large scale effect in systems like this. So no matter how accurate or precise we make the simulation, there will be some features left out, unless we run the simulation at a subatomic level, in which case we will run so slowly that we would “exist” in a molasses-like world, as the real world zipped past. Or, we could just run a simulation of ourselves in an atom-for-atom copy of ourselves by some kind of Star Trek transporter duplicator. That would run at real world speed and have little to no abstraction, but then what would be the point? Maybe we should work hard at fixing the brains we do have?

Physical differences make a difference, and ahead of time we can’t predict what will have an effect on whether silico-me will behave the same way vivo-me does. While I am sure that we will manage to simulate “me” or somebody like me (more likely Stephen Hawking than me), the simulation won’t be “me” in any deep sense. It’s either going to be a representation of me that has some dynamical properties in common with me but not all, or it’s going to be a deep simulation of me that lives a day every so many months of realtime. And even then it won’t be exactly like me, unless you also simulate the body, the environmental affordances for that body that match the world I do inhabit, and every last possibly important biological function or process that might affect me. I look forward to hearing from the person who has to simulate my lower colon.

Abstract representations are not the things being represented, and a model is not the modelled thing, a point many philosophers but not enough mathematicians have noted before. So I finish with this SMBC comic:

41 Comments

  1. “The reason is pretty obvious: the represented thing and the representation are very different entities.”

    Yes. In the case of the solar system the difference is obvious. Ditto for simulating atomic explosions. No one insists that when you do that on a computer you must put the computer in a cave a mile under the ground because everyone knows that the computer doesn’t actually explode.

But the upload crowd pretty much assumes that the simulation of a brain is, effectively, a brain. And that, as you argue, is, at best, a deeply problematic assumption.

People ought to read John von Neumann, The Computer and the Brain (1958). Yeah, it’s old; computers have changed a lot; and we know more about brains. But the man was a profound thinker, and that little book is a profound meditation on what it means to implement a computational scheme in physical stuff. It doesn’t address the download/upload problem because, well, no one was advancing that fantasy in those days, but there’s probably something in there that speaks to the problem. Like the difference between digital and analog computers.

    Finally, my teacher and colleague, the late David Hays, once made a comment that put this upload business in its place. He asked a question: Once Marvin Minsky, for example, had uploaded his brain to a computer, what would be the first thing the upload would say to the “real” Minsky? “I’m immortal and you’re not.”

  2. DiscoveredJoys

    Yes, but… actually I find you have already mentioned all the structural difficulties I was going to bang on about.

    Plus there is the almost certain impossibility of ‘snapshotting’ enough data to construct a mathematical model of the brain. You would need enormous bandwidth to do so – probably greater than any we could feasibly organise, ever.

    And then there is the divergence issue. You miraculously make a copy of your brain then go to sleep. When you wake up you want eggs and bacon but your copy fancies toast and marmalade – because your life experiences and brain events, even in sleep, are different. Which then is the truer copy of your pre-sleep brain?

    On the other hand an artificial intelligence which shares a similar personality may be much more practical…

  3. bwana

    And how are you sure you’re not already a simulation running on something’s supercomputer!? Or that the human body is not simply the “mechanical housing” created to maintain your disembodied brain?

  4. Hilary Putnam identified a basic flaw in the whole brain-in-a-jar fantasy: a closed system of representations can’t represent. In the absence of a world outside the jar, ideas cannot identify their referents. I think the mystics give a practical demonstration of the problem. If you decouple yourself from the environment, your thoughts will gradually become meaningless—you can get something of the same effect by simply repeating a word over and over to yourself. Since meditation is a cheaper route to nirvana than reincarnation in silico, maybe we should leave it at that.

  5. There’s a story that a man complained to Picasso that his paintings looked nothing like real life. Picasso asked him if he had a picture of his girlfriend. The man brought out his wallet and showed him the picture, and Picasso asked the man, “Is she really that small and flat?”

  6. Dr. Atheism

    Theoretically, given an inexhaustibly powerful computer, it should be possible to simulate all physics down to the structure of quantum field theory. It would not be required to have, say, a space the size of a planet to simulate a planet; quantum computation could allow us to carry out extremely powerful simulations using relatively few resources. In fact, a quantum computer the size of a laptop could, in theory, carry out 10^45 calculations per second.
    If the basic structure of quantum fields is simulated, there is no further level of complexity.

    • Also, in theory Archimedes could move the earth with “a lever and a place to stand.” Some things just aren’t gonna happen.

      • bwana

        And why don’t you think it is already happening? Ever watched the movie The Thirteenth Floor? We may already be in a computer simulation and not even realize it…

  7. I think the distinction between represented and representation is mostly a semantic confusion. If you reword things to say that you are “duplicating” the essential aspects of yourself in silico, those functional aspects of yourself that make you you, then we sidestep the representation/represented roadblock to some extent. The “duplicated” you may not be you in every aspect, but then you are not you in every aspect (only every essential aspect), each moment, all of the time. You are not drunk or groggy or aneurotypical all the time, moment by moment. The uploaded you might be bright and clear and lucid at every moment, and never need sleep. In a way, it may be more you than you, an experience similar to that recounted by many people introduced to antidepressants. That may not be you…but it’s kind of you, in a way. If tomorrow, inexplicably, you became that way, a medical miracle, nobody, not even you, would contend that you were anyone else.

    Viewing brains in metaphor as massively parallel computational machines is yet another byproduct of living in the age of digital computers. What we really seek to say about them is that they are cellular and chemical processes. However, brains may not be the ideal medium to house consciousness. The very fact that consciousness exists is mysterious, as many philosophers have noted. Why are we conscious and not just zombies that “learn” (store and process information from their environment) and react? Consciousness must have some evolutionary advantage, probably concomitant to socialization. Whatever the reason for its existence, it may turn out that other media are much better suited to implementing it. The situation may be analogous to the hypothetical case where nature set out, by some odd arrangement, to perfect a tic-tac-toe playing organism, and so built up a structure of neurons, hormones, neurotransmitters, axons, dendrites. Would we then say that no machine could ever duplicate its operation because it would never be exactly the same thing? One day the organism consistently draws an X in only 300 ms, on another it tends to take 500, and on another day we can get it drunk, and it doesn’t even win when given the first move. Perhaps all those idiosyncrasies are a part of it, but they aren’t what defines its primary function. What is more, our computer program is better, more efficient, and is hosted on a more natural platform to perform the functions of tic-tac-toe.

    • bwana

      I think you’ve summed it up nicely.

    • Hunt, you are rather epically missing the point. It is no more than an article of faith that we can identify the “essential” or “functional” elements of anything. John’s argument is that there are no knowable extraneous elements when it comes to making something “as it is.” All simulations are by nature representations, not duplications. This is simply undeniable. If it were not, you would not be discussing the project of reducing the human brain to its “essentials.”

      Your tic tac toe analogy illustrates the fallacy beautifully, by begging the question that we can even identify what humans are “for.” Any essentialist or functionalist analysis must start here. In the meanwhile, saying that uploaded brains are better than meat brains in the same way that a computer is better than a person at playing tic tac toe, well, I wish your public relations team all the best of luck with that.

      • bwana

        At some point, call it the Singularity if you wish, machine intelligence will attain consciousness and go far beyond the capabilities of what you refer to as the “meat brain”. It is yet another step in the evolution of the human species…

        • Well, if that’s what your holy scripture tells you (The 13th Floor?) then it must be true, despite all reason to the contrary.

        • bwana

          I don’t need The 13th Floor to tell me this (but it is a good example). I only have to look at the advances in technology over my lifetime. It will happen (or may have already happened)!?

    • I’m not suggesting that a computer could not be functionally conscious. I’m merely suggesting that it could never be me, and you seem to concede that point.

      • bwana

        Could a clone be you? I think not BUT a computer with a functional copy of your “consciousness” could be a better representation of you than your clone.

      • I think the question would need to be put to the in silico you. If you were functionally duplicated and then “awoke” within a computer system, even with very limited sensory input, as if you were put in a sensory deprivation chamber, then by some means asked who you were, “you” would by definition say “John Wilkins,” since you are functionally the same. (If you suspend yourself within a sensory deprivation chamber, you don’t cease being yourself just because your environment has been radically altered.) This can lead into all kinds of digressions about identity, like the transporter problem, that are pretty much a waste of time, in my opinion. I’m personally satisfied that whatever thinks it’s you, has your memories, and, while not resorting to behaviorism, reacts cognitively and emotionally like you, is you.

        You concede that artificial functional consciousness may be possible, and I don’t see why it could not be made to function like you, have your memories, etc. There’s nothing special about your own functional identity, or anyone else’s, so unless what you contend is that nothing could ever be made to function like you, have your memories, etc., you would be in the position of arguing that the machine is not who it thinks it is.

        • But that is my point: “like” me, not me. And what counts as “functional”? For example, I like certain songs (currently Ólafur Arnalds’ work): if the simulated me did not, is that functionally equivalent? What about my love for chocolate? Is the lack of that functionally equivalent? How about my constant pain from an old motorcycle injury? And so on. What counts as “functional” depends upon facts not part of me but of the observer/designer of the system.

          Since the original issue was whether we could achieve immortality by downloading our brain into a computer, I think we can answer that question in the negative. We can, however, record a functional representation, in theory.

        • bwana

          If the upload did not include the memories of the original brain, it could not be you or even a good representation; thus, a good copy of your brain would like the same songs and chocolate, and remember the good times and bad. It would know your friends (and enemies), etc. It would not have constant pain from an old injury because that piece of its anatomy would probably be in fine condition. It would, however, remember the pain of the old injury and be thankful for the new “body”…

          We’re not talking about simply a functional brain. We’re talking about a brain that thinks it is you, even though you could not readily accept that fact.

          With the proper sensory devices, probably beyond the capabilities of our limited organic ones, the new “you” could see a wider spectrum of light, hear a wider frequency of sound, smell with enhanced range and sensitivity, walk/run faster/longer, etc. You could also upgrade your body as new features became available. You could be the ultimate explorer/adventurer with almost no limitations… All you would require is a reliable power source and an environment that doesn’t destroy you.

          If humans don’t destroy themselves in the next hundred or so years, this fiction of today will be fact in the future.

        • What would you say to the “functional representation” when it insists it is the person? How would you convince it?

          This reminds me of the Star Trek episode (original series; “What Are Little Girls Made Of?”) when Kirk visits a planet with a scientist who has been missing for a decade, only to learn that he had actually died but made an android representation of himself before doing so. His efforts to convince Kirk that he really is Roger Korby are quite pathetic and indicative of our time, when we still really do not accept that we are elaborate machines. That’s basically what this is all about. Since Wöhler’s synthesis of urea, we’ve grudgingly abandoned the idea of élan vital, that there is something special or spiritual about matter, but we can’t quite make that final step about ourselves, our minds. We recoil from the idea that we might be made into something with a power switch, like a stereo system, or something that can be rebooted. That is still bizarre, or even horrifying. But is this just us being apes frightened by fire?

          Of course, the Star Trek drama poisons the well by really making Roger Korby sound like a machine desperately playing human, but the point is there. If you happen to find yourself as a machine convinced that you’re the same person who used to be human (and you will, you know, because that is what functional equivalence means), how do you argue with people who are determined that you aren’t?

        • Jeb

          The sticking point for me in your argument is not that I hold some spiritual belief about matter, but that mind is seriously related to, formed, and maintained by its environment, and for humans in particular by culture. Remove me from that niche and I become something else and function differently (with all the issues of being a stranger in a strange land). I am open to being corrected, as I study the way this issue is represented imaginatively in art and culture.

          I could not understand why you used the example of sensory deprivation, as my understanding is that it leads to measurable and significant cognitive change. The effects on an infant mind are perhaps more dramatic, but they are also measurable on an adult mind.

          I found I had to work rather hard when I moved in the past to live in a different country, the differences were often small and seemingly inconsequential but I found it confusing and very disconcerting. Things did not behave in the way I expected or was used to. The external world did not match my internal expectations.

          I don’t think we deal with environmental change very well; such a dramatic environmental change would create significant issues and differences in how I functioned.

          To remove me from my environmental niche would be to change me utterly, I feel. With such a major environmental shift, as my memory became part of such a fundamentally different environment, I think the issue for the system administrator would be dealing with the significant cognitive changes such memories would be undergoing, and attempting to deal with the serious levels of anxiety, shock, overwhelming sense of loss, etc. Such change would not just result in fundamental difference; I would suspect significant dysfunction and cognitive changes with a physical basis, as my brain would have to alter to deal with the change.

        • Your last paragraph gets into issues that I began making a long comment about last night but then considered out of place. I think a technology advanced enough to upload a mind and also somehow implement it on computer hardware (or whatever “computer hardware” means at that time) would also have the capacity to modulate emotional reaction to whatever extent necessary to adapt the new mind to its situation. Flip a switch and you would be drunk, another and you would be heavily sedated, another and you would be deliriously happy. The natural question is whether this would be tantamount to changing your identity. Actually, I don’t think so. This is already what we consult psychiatrists specifically to accomplish. I myself have undergone antidepressant therapy for anxiety (anxiety and depression are paradoxically related). I can specifically remember being in situations that I knew would have been fearful or terrifying without the effects of the drug. Never did I consider that I wasn’t me. There is something more fundamental to personal identity than specific emotional reaction.

        • Richard Wein

          @Hunt

          I agree with you about functional equivalence. But not with your jump from “functionally equivalent to you” to “is you”. But neither do I say the functionally equivalent system isn’t you. I say that, once we’ve granted functional equivalence, there’s nothing more to say. It’s the urge to say more that leads us into awkward questions, such as, “Which of ten identical copies would really be me?” “Which one should I care about?”

          I suggest going back to the classic example of Theseus’s ship. (Taking consciousness out of consideration removes one source of confusion.) After many years of repairs, every part of Theseus’s ship has been replaced by similar parts. Is it still the same ship? I say that this is a case of what Wittgenstein called language “going on holiday” or “idling”. Philosophers often take words out of their ordinary contexts–where they’re useful–and put them into strange contexts (thought experiments) where they’re deprived of their ordinary use, and cease to do any useful work. The question is meaningless or incoherent, because there is no real matter to be settled. We’ve already been told everything that matters. It just feels like there’s some matter to be settled, because we’re used to employing such language in contexts where there is, e.g. “Is that the same ship I saw yesterday?”

          I then say that the same is true when we ask the same question about a person. Once we’ve granted functional equivalence, there’s no further fact to be settled. (Like you and Bwana, I’m taking functional equivalence as referring to more than just external behaviour. It also refers to internal states and processes, at an appropriate level of abstraction.) Of course, for someone who believes in a dualistic personal essence or soul there may still be a question: which copy has that essence? But I assume we’re all physicalists here.

          It suits our ordinary, everyday purposes to model people as continuing entities, just as we model inanimate objects as continuing entities. A considerable degree of reification is necessary if we are to usefully model reality. But as well as epistemic value, reification also has an instrumental value. Reifying an object and myself allows me to make such instrumentally useful statements as: “That apple is mine. Keep your hands off!” Of course it goes a lot deeper than this: our sense of ourselves as continuing entities has deep psychological significance. Reification is generally very useful, but sometimes it leads us astray, and I say that asking whether Theseus’s ship is still the same ship is one of those cases.

          Let’s call the organism that’s typing this comment Richard1, the similar organism of an hour from now Richard2, and the (less) similar organism of 10 years from now Richard3. Richard1 cares about the fate of Richard2 and Richard3 because evolution has programmed him to care. (It’s programmed him to care rather more about the fate of Richard2 than of Richard3.) And one way that it’s programmed him to care is by having him see himself as a continuing entity that encompasses all 3 Richards and all the intermediate states (and past Richards).

          Now let’s call the Richard who comes out of a Star Trek transporter Richard4. I may not feel quite the same sense of continuity towards Richard4 as I do towards Richard2 and Richard3, because being transported is outside of my experience, and the thought of being dematerialised seems rather like dying. Such a lack of sense of continuity might be exacerbated if Richard4 is only going to be constructed after the data have been stored in a transporter buffer for 100 years (like Scotty in TNG), or if multiple copies will be created. Of course, granted functional equivalence, Richard4 will feel just the same as the dematerialised Richard. But even knowing that, my sense of continuity may be weak or non-existent, and I may just not care about him. It makes no sense to say that I _should_ care. As Hume pointed out, our deepest cares are just what they are; they can’t be judged as rational or irrational. Similarly, it makes no sense to say that I _should_ have a sense of continuity towards any of the future Richards. Having such a sense improves my survival prospects. But if I don’t care about surviving, what’s that to me? It’s simply a fact that mostly we do care about surviving.

          People often try to support the claim that Richard4 is really me by saying that the situation of Richard4 is not fundamentally different from that of Richard2 and Richard3. I say, yes, but it’s also meaningless to insist that Richard2 and Richard3 are really me! That’s not to say that they’re not me. But the fact that you feel the need to emphasise that they’re me, even after we’ve agreed that they’re functionally equivalent (apart from having aged), suggests that you are over-reifying. In ordinary language this problem doesn’t arise. If you show me an old photograph of myself and say, “That’s you”, then I may be happy to agree. In that situation what’s at issue is whether the photo is of young Richard or of someone else. It’s not whether the person in the photo and I share some essential quality of continuing Richardness, or whether I should care about that person.

        • I think I agree with what you’re saying, if I understand correctly. As I commented before, I often find the transporter-type scenarios more confounding than not, but in this case it does bring home clearly that there are really two sides to this equation, before and after, and as you say, nobody can force you to care about what comes out at the other end. It follows that John Wilkins has every right to reject the notion of uploaded immortality. However, to take this another step, and as I’ve begun to emphasize in previous comments, this leaves out the other side of the equation, the transported duplicate(s) and perhaps uploaded minds too. John Wilkins has the right to not call his in silico counterpart himself, but he doesn’t have the right to tell his in silico self that it is not John Wilkins (except perhaps in a court of law). By the same token, when I go to sleep tonight, what do I have invested in the person who (presumably) wakes up tomorrow morning? Really, only that he will wake up and remember that he is me. If he doesn’t do that, then I might as well put a bullet in my head instead of taking SLEEP-EASE (a sleep aid). If there happen to be a hundred Hunts awaking, if I were cloned a hundred times in silico, or replicated by the transporter, it’s hard to see how I would specify my preference for any of them. The fact that this turns almost everything we usually think about continuity of personal identity on its head is why I usually opt to avoid transporter problems. Perhaps the best thing to do is not invest in anyone except ourselves in the here and now. The Buddhists kind of beat us to that one.
          In summary, who are the winners of the immortality lottery here? The uploaded minds. Because they are immortal, and they really do think they are Hunt, John Wilkins, etc.

        • Richard Wein

          Hi Hunt. Thanks very much for your thoughtful reply.

          “As I commented before, I often find the transporter type scenarios more confounding than not…”

          In my view consideration of such scenarios is useful, precisely because they’re so unsettling. They suggest (correctly) that there’s something wrong with our usual approach to questions of personal continuity. Our usual approach–as with much philosophy–is to put too much trust in our intuitions.

          “John Wilkins has the right to not call his in silico counterpart himself, but he doesn’t have the right to tell his in silico self that it is not John Wilkins (except perhaps in a court of law).”

          I would avoid talking about rights here. My point is that there is no fact of the matter as to whether the in silico counterpart is John or not. That’s not a meaningful question. (To be more precise, it’s not a meaningful question once we’ve accepted functional equivalence, in the appropriate sense. If John’s denial that the counterpart is him is a proxy for saying that they’re not functionally equivalent, then there’s a question to be settled about whether they’re functionally equivalent.)

          Your point about the law raises interesting questions. The law is a social construction; the legal position on in-silico counterparts (should they ever exist) will be whatever legislators decide it to be. The current law is indeterminate in this respect. It implicitly adopts the familiar reification of people as entities with continuing personhood. That reification usually suits our human purposes very well, but it becomes problematic when we want to decide about such an unfamiliar case as an in-silico counterpart. On the basis of current law, I wouldn’t say a judge would be making a factual mistake if s/he decided that the in-silico counterpart is to be legally considered John, or decided the opposite. I would say that s/he is making new law, by fixing a point that the law previously left indeterminate. (But I might find fault with any justification the judge gives, e.g. if s/he makes the arguments that John has made here.) The possibility of both John and his counterpart existing simultaneously, or his having multiple counterparts, is a practical problem that might reasonably influence the judge. The law as it stands would probably become ambiguous or indeterminate in many serious respects if it recognised multiple people as being legally John Wilkins!

        • Jeb

          I just think of the examples of people who wake up and feel they are someone else, or are unsure what they are. Someone I know had a spinal injury after a fall; it had a dramatic effect on his own sense of being and on everyone who knew him before the accident. It was a difficult thing for everyone involved to deal with.

          The woman from the South of England woke up after suffering a blinding headache and now speaks with a ‘Chinese accent’. She feels her self has died.

          To say that people have a right to make firm identity claims and cannot be challenged or asked to question such feelings and emotions seems unhelpful, and potentially detrimental to well-being, I suspect.

          To make the claim that a ‘simia naturae’ would in all cases feel that it is in every way the original is, I think, far from certain. Who knows how such feelings and emotions would play out when placed in such a dramatically different environment? Giving such a being the right to hold an emotion which could potentially be detrimental to proper cognitive functioning could be highly problematic.

        • Jeb

          P.S. @Hunt: despite a rather crude comment above from me, the power of the law, in terms of what it can do to identity, is breathtaking and seriously interesting, I think.

          These issues are vastly under-explored. The definition you are using (a thing is what it feels itself to be) is common in relation to ethnicity, but a full working definition of ethnicity has yet to appear. The work needed on these issues is significant.

          You may find this example interesting (or not; the subject may be very unfamiliar). You may want to skip the first half until it gets on to the legal aspects, p. 127 onward.

          If nothing else it demonstrates how carefully you need to tread here; the power of the law to utterly alter something that is so often viewed as unchanging I find breathtaking.

          This was the breakthrough paper on the issue; it’s more widespread than the example given. A standard European legal move affecting many different areas related to identity, altering status or achieving full-on eradication of identity on a large scale.

          http://www.st-andrews.ac.uk/history/staff/alexwoolf/Apartheidandeconomics.pdf

  8. Jeb

    I think you can say that we would be emotionally and culturally disposed to view such a thing as real and will use a range of cultural moves to imaginatively maintain continuity with what we perceive as original and authentic.

    It seems to be the default way we view many rituals and beliefs.

    The Burry Man, a Wild Man, walks the streets of South Queensferry every year in a civic ritual. The default description is to note that the ritual is first recorded in the late 17th century “but is thought to be much older.” Downloaded intact and without interruption from the Neolithic period. Without such continuity it’s distasteful and not authentic.

    Science culture would go to great lengths to maintain such continuity with Stephen Hawking’s brain in a jar, I suspect. Particularly as it is strongly disinclined to reflect on its own culture and its emotive use of such things.

    Very strong cultural and emotional motivation to do so and maintain a fictive relationship with origin. Would require a significant shift in culture to view it otherwise.

    • Jeb

      Or the argument is highly likely to maintain high noise, low fidelity over time as the social cultural uses of maintaining such continuity are culturally significant and vital.

  9. “Destructive uploading” is the usual acid test proposed here. If people go in for destructive uploading, they believe the copy is good enough and contains all the most important attributes. Destructive uploading is likely to come before any other sort of uploading is practical. Of course you could argue that no one will be convinced – or that the convinced are deluded, but that’s really up to them, I figure. Note that people who sign up for cryonics are mostly already shelling out for destructive uploading – before it is even technically practical.

  10. Richard Wein

    I think you’re conflating two issues: the (alleged) practical impossibility of simulating a brain in sufficient detail and in real time; and the idea that a simulation is not the same as the real thing.

    I’m not so certain as you of the former. I’m not sure that any significant features would have to be left out from a simulation above the atomic level. For example, even if it’s necessary to simulate chemical reactions, it doesn’t follow that we would have to simulate every molecule individually. Perhaps we would need to, but I don’t think you’ve made a case for it. Of course a simulation at any level would be an approximation. At the very least there are bound to be continuous variables (not just discretely-valued ones) involved, so it could never be absolutely equivalent to the original. But I don’t see why that matters, providing it’s a good enough approximation. I see no reason so far to draw the line between good enough and not good enough at the atomic level.

    In your final paragraph you seem to depart from the question of practical possibility, and make a different point:

    “Abstract representations are not the things being represented, and a model is not the modelled thing…”

    Well, first let’s note that a computer simulation is not a model in the sense that a scientific theory is a model. A computer simulation is a dynamic system in itself, and not just a representation of a system. (The software isn’t the same thing as the running system.)

    Of course a computer running software isn’t physically identical to a human brain. But they’re both dynamic systems, and you’ve made no argument (apart from practical impossibility) why they can’t in principle be equivalent in everything that matters to the download question.

    In discussion with Bwana you make a further claim that a simulation could be like you, but wouldn’t be you. Setting aside the question of practical possibility, I agree with Bwana that a sufficiently detailed simulation of you would have the same memories, personality, preferences, etc, as you. But I think it’s a mistake to then say either “It would be you” or “It wouldn’t be you”. Once we’ve agreed that the simulation is the same as you in all those respects, the further distinction (it is or isn’t you) becomes incoherent, unless we have in mind some dualistic essence of you that might or might not be transferred.

    If it’s the “would it be me?” issue that concerns you, it’s probably better to talk about exact physical copies made by Star Trek transporters. Do you think such a copy of you would be you? (I say the question is incoherent.)

    • Richard Wein

      P.S. On re-reading the OP I see that the second point I addressed (the difference between a simulation and the thing being simulated) wasn’t just mentioned in your final paragraph. It was made at greater length in the second paragraph. I’ll respond at greater length.

      It’s not clear how your analogy with red tables is supposed to work. What plays the role of the red table in the case of a computer simulation? Are you establishing an analogy between the computer simulation modelling the brain and our mental processes modelling red tables? In that case the analogy is misguided. In simulating the brain we’re not concerned with whether the brain or its constituents are red or hard, or with any other physical properties of the brain. We’re concerned with more abstract properties and processes, which could better be considered “logical” properties and processes (in the computing sense of “logical”). And logical properties and processes are substrate-independent. For example, a simulation of a brain doing addition will also be doing addition. Ditto for playing chess. Ditto for the system’s knowledge of the chess board, including the colours of the pieces. It doesn’t matter what colour the brain is! It’s substrate-independence which allows us to run a program on completely different computer hardware, by using emulation software.
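      To make the substrate-independence point concrete, here is a minimal sketch (the toy instruction set and names below are invented for illustration): the same addition performed directly and performed under a tiny emulator. Both runs are genuinely doing addition; only the substrate differs.

      ```python
      # Substrate-independence in miniature: addition run "natively" and
      # run under a toy interpreter for a made-up stack machine.
      def add_native(a, b):
          return a + b

      def run_vm(program, env):
          """Interpret a toy instruction set: ("load", name) pushes a
          variable's value; ("add",) pops two values and pushes their sum."""
          stack = []
          for instr in program:
              if instr[0] == "load":
                  stack.append(env[instr[1]])
              elif instr[0] == "add":
                  b, a = stack.pop(), stack.pop()
                  stack.append(a + b)
          return stack.pop()

      program = [("load", "a"), ("load", "b"), ("add",)]
      print(add_native(2, 3))                   # 5
      print(run_vm(program, {"a": 2, "b": 3}))  # 5: the emulated run also adds
      ```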

      Alternatively, perhaps the red table in your analogy is supposed to correspond to a red table in the case of the computer simulation. Your point could be that the human brain has the wherewithal to model a red table and to make sense of that model. But a computer simulation of a brain can only model the table, without making sense of that model. But why should we think that? If we’re simulating all the processes of the brain, those will include the processes which interpret models and make other judgements.

      Maybe I’m wrong, but I can’t help feeling that views like yours are generally held by people who don’t have much knowledge of computer science. A thorough understanding of just how computers work, of emulation software, and so on, seems very helpful here.

      • Richard Wein

        P.P.S. Apologies for that last paragraph. It was an ill-advised spur-of-the-moment thing. You probably have a perfectly adequate knowledge of computers.

        • As I have a computing degree, and have taught cognitive science, it might be said I have a better than lay appreciation of the topic. Moreover, I have used virtualisation software, simulation software, and a number of GOFAI style and NFAI style programs.

          As I said, the issue is not whether a computer can be a functionally conscious system – I agree that it can. The issue is whether the functional system can ever fully instantiate me. I argue that it cannot, and that no system that says it is me under simulation can be thought by this me as a continuation of my self.

        • Richard Wein

          Thanks for your reply, John. Two points.

          1. It’s unclear what point you are making in your second and final paragraphs. They seem to confuse the issue, suggesting that you have a second argument in mind. (They sound rather like one of John Searle’s arguments against AI consciousness.)

          2. Ignoring those paragraphs, the sticking point between you and me seems to be the word “fully” (or “exactly” as you put it in the OP). How fully does the simulation have to capture the relevant properties, like beliefs and preferences? Do you think that nothing less than perfect equivalence will do? That would be a strange position, because none of us remains perfectly the same from one second to the next, let alone throughout our lives. If the me of one second ago doesn’t have to be perfectly identical to the me of now to be considered me, then why does a simulation have to be perfectly equivalent to be considered me? If, on the other hand, you accept that equivalence is a matter of degree, then it’s much less clear that the simulation must be at the atomic level (or below) to achieve a sufficient degree of equivalence.

  11. On simulation, from a recent post:

    http://languagelog.ldc.upenn.edu/nll/?p=7300

    What, I hear someone objecting, aren’t you assuming a lot when you assert computational mechanisms? Yes, I am, but just what I’m assuming, that’s not at all obvious. For the nature of computation itself is by no means clear. Maybe we’re talking about a dynamical system of the sort modeled by Berkeley neurobiologist Walter Freeman, and many others, or maybe it’s symbolic computation implemented in a dynamical system. Maybe we’ve got a three-level system, as argued by Peter Gärdenfors (Conceptual Spaces 2000, p. 253):

    On the symbolic level, searching, matching of symbol strings, and rule following are central. On the subconceptual level, pattern recognition, pattern transformation, and dynamic adaptation of values are some examples of typical computational processes. And on the intermediate conceptual level, vector calculations, coordinate transformations, as well as other geometrical operations are in focus. Of course, one type of calculation can be simulated by one of the others (for example, by symbolic methods on a Turing machine). A point that is often forgotten, however, is that the simulations will, in general, be computationally more complex than the process that is simulated.

    Who knows?

    • A point that is often forgotten, however, is that the simulations will, in general, be computationally more complex than the process that is simulated.

      This is a phenomenon well known to translators, who often find it takes more words in the target language to convey the meaning of the original. And despite the added bulk, a great amount of nuance from the original is invariably lost.

      This doesn’t pose too huge an obstacle when, say, translating the instructions for checking out of a hotel, but for any kind of text that transcends mere function–which is to say, literature–it’s an age-old problem built into the nature of language itself. That which is multiply referential cannot be fully translated.

      Which is to say that a poem is not at all “substrate independent.” It would be odd, then, if the mind that created it were.

  12. Glen

    Speaking as a physicist, I disagree with you. For a start, the universe is only ever an observation, be it seen by a camera limited to a number of pixels and shades of colour, or by the eye and brain, limited to a number of cones and rods and a limited number of colours (in my colour-blind state, fewer than other people’s). Touch, smell, and taste are all limited models; only mathematics has any real chance of coming close, and even then humans have no way of handling such huge numbers, so some simplification has to be made.

    When downloading your brain you will not be the same person you are as you read this, but nor will you be in a week’s time; your brain is in a constant state of reinvention, trying to puzzle out the happenings of each day. It’s adaptable. Would it be as adaptable on a computer? Not today, but that’s what Hawking means when he says the technology isn’t there yet. Would quantum computing allow the chaos into a machine? Early evidence suggests so. Is it possible to make a model of an ever-changing brain? I think so, yes.

    So assuming such a computer exists, what would living in a computer be like? Well, personally I don’t think you will be able to tell, not if you wanted it to be the same as life; a virtual you would need the computer to dumb down all your senses to human standards. Heck, disabled people could stay disabled in a virtual world if it were to aid the switch from ‘real’ world to virtual. But eventually you will want to tweak the sliders on your senses and see what X-rays look like, or feel the solar wind, and why not? A virtual body need not stay shackled to the earth; it could have data downloaded from deep space satellites or Mars rovers, or probes launched deep into Saturn. Would you be human any more? No, but you would be as much you as you are now; you will change with time just as you do between a Monday morning and a Friday afternoon.

    Our thoughts are not our own; I can guarantee someone somewhere is sharing your thoughts at any given time, there are too many people in the universe for that not to be the truth, and in my opinion time is a construct anyway, so at some point in all space and time every thought you have had, every feeling, every taste, every sunset has been experienced already. Ask a 16-year-old what is more interesting, the physics of the universe or the physics of a computer game, and they will say the latter; so while adults may have difficulty adjusting to life in the machine, kids will go straight to leaping tall buildings and shooting each other with fire vision.

    In the end the old views are dying, and they are doing so in the developed world so quickly that as soon as more segregated countries open their borders their youth will follow suit. Heaven can’t be a cloud and harps anymore; it has to have the level of excitement the Xbox One and PS4 will offer. So when it comes to downloading our brains it may be difficult for the old, but it will be easy for the young.

    All in all, does it matter if the mind on the disk is you or not? You will die, as will the virtual you; all this allows is that some of what makes you you will go on after your death.

    Finally I will add that brain spikes are making brain-technology interfaces better every day. Instead of being used just to control robotic limbs, I’m sure this will allow people in the future to interface with a virtual you. Why wait till you’re at death’s door? I see a future where a person will grow old with a virtual counterpart, syncing information, allowing us to view satellite data directly in our brains, to have robot bodies driving across the surface of Mars, to make online gaming more immersive; one day you will not be able to tell the difference between the real and computer versions of yourself, so when you die you will not really miss life, as your downloaded brain will have the memory of breathing and taste and life to fool you for as long as it exists.

  13. I never understood the implication that I could upload *me* into a computer by uploading my *brain*, and thereby become immortal (even dismissing, for the sake of argument, all the problems discussed in the blog post). My consciousness would not be continuous with the switch-over. It could be “identical” to “me” in every respect (again, setting aside the arguments in the post), down to memories even–but I would not know it; I would not be experiencing it. I would either be dead or experiencing consciousness here in my meat bag instead, still plagued with its attendant mortality. The only work-around I can envision is a gradual, piecemeal replacement of all physical components with virtual and/or bionic facsimiles over time, preserving the continuity of my singular consciousness throughout the process. Anyway, I think a more promising answer to the problem of mortality lies in senescence research.

  14. Although this post and the following discussion are about a year old, I would like to point out that I recently released a book on this topic and many of the questions it addresses, if anyone is interested. I definitely fall into the multiple realizability, pattern identity camp, i.e., my book presents a metaphysical model by which we can conclude that upload procedures are a viable means to “transfer” personal identity. It isn’t a popular stance, but I present it as fairly as possible. I also offer a taxonomy of mind-uploading thought experiments that I think people will find very interesting.

    Cheers!

    http://www.amazon.com/dp/0692279849

