Last updated on 1 Jul 2020
I’m avoiding work and jotting down notes that might one day issue forth in a paper.
The issue of what consciousness is, is a vexed problem in philosophy of mind. Partly this is because it is so ill-defined, as any coined term based on an impressionistic philosophy like Descartes’ can be expected to be, but mostly because there are supposed to be special states or properties of consciousness, known as qualia, which are, in the famous title of a famous paper, what it is like to be something, in this case a bat. A similar argument, the so-called Mary Problem, argues that no matter how much we know objectively about some phenomenon, we do not thereby know what it is like to experience it. In short, experience, and hence consciousness, is irreducible.
I suspect this is because we are trading upon intuitions rather than hard information, and upon linguistic practices rather than fact. That is, we have intuitive categories based upon the uses of words like “like”, which we, quite correctly, cannot simply reduce to facts about the world. Instead of concluding that the concepts and categories are flawed, however, we conclude that the qualia are somehow ontologically distinct from the empirical world. And so we adopt, or bolster our belief in, aspect dualisms of various kinds.
I think this is wrong, as you may have guessed. I’m not original in this: Dennett, among many others, is a qualia eliminativist, but it is a hard problem to dislodge this way of thinking. So I’d like to fly a simple argument and a Sorites in defence of eliminativism. In short, rather than arguing that we could not distinguish P-Zombies from P-Angels (those persons without, and those with, consciousness), I want to argue that we are all, in the end, P-Zombies anyway, and what does it matter? I call this, for reasons that will be obvious, perspectivism, not to be confused with perspectivism in epistemology.
Asking, as Nagel did, what it is like to be a bat or even another person (the inverted spectrum argument, for example) is already too hard. We have intuitions: the qualia for bats, with their sonar sensorium, are qualitatively different from our own sensoria. We can’t imagine what it would be like to be a bat and then evaluate batitude as if we were a human, because to know what being a bat is like, you’d need to be a bat, and bats can’t think like humans.
So let’s take something similar but simpler so our intuitions do not get in the way: a digital camera. We know well how these work: light is focused upon a charge-coupled device (CCD), which converts the intensity and wavelength of the light at each receptor into an electrical signal. The array of such signals composes the final picture.
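The capture just described can be put as a toy sketch. Everything here is illustrative pseudodata, not any real camera’s API: each receptor maps incident light to one number, and the grid of numbers just is the picture.

```python
# Toy model of the capture described above: each CCD receptor converts the
# (intensity, wavelength) of the light falling on it into one electrical
# signal, and the array of signals composes the picture. All names and
# numbers are illustrative assumptions, not a real camera API.

def receptor_signal(intensity, wavelength_nm):
    """One receptor: light in, a single numeric signal out.
    (A real CCD is far subtler; only the shape of the idea matters.)"""
    return round(intensity * wavelength_nm, 3)

def capture(scene):
    """scene: a 2D grid of (intensity, wavelength_nm) pairs.
    Returns the 2D grid of signals, i.e. the 'photograph'."""
    return [[receptor_signal(i, w) for (i, w) in row] for row in scene]

scene = [[(0.5, 550.0), (0.2, 470.0)],
         [(0.9, 610.0), (0.0, 550.0)]]
photo = capture(scene)
```

The point of the sketch is only that nothing in the capture outruns the formal story: the “picture” is exhausted by the array of signals.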
Suppose I drop my camera in the corner of my living room, and it takes a picture of the room. What is it like to have that experience, for the camera?
Well, one answer is to say cameras do not have qualia, because they are the wrong kind of thing, but that is question begging. Let’s see how far we can run the line that if we have qualia as physical things, so too do cameras. Let us ask the question, “what is seeing my living room like, for the camera?” I can readily imagine what it is like to see the room from that perspective. I see things from perspectives all the time, and while the camera might not know what it is like for me to see the world, I can certainly know what it is like to see the world like that camera. How? Well, I can look at the resulting photograph. Or I can get down on my knees (slowly, now I am older) and close one eye to see from that place.
Suppose, though, I am asked to give a formal description, as Mary is asked to give a formal objective account of seeing red before she has seen it. How might I do that? Well, I can do a CAD rendition, to any arbitrary precision and accuracy, of lighting, surfaces, geometry, and so on, until the rendition, done using the latest ray tracing techniques, is indistinguishable from the photograph. CGI in films does this all the time. “What is it like to see my living room from that corner? See file johnwilkinsloungroom.cad using camera 6 and full resolution rendering.” If I read that file, of course, I do not “experience” the perspective, but if I process it in the right way, then I do. What it is like to be my camera in my living room is exactly, and without remainder, specified in that formal description, and the gap between my reading the file and the “experience” is one solely of processing technique and bandwidth.
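The “without remainder” claim can be put as a flat identity: process the formal description in the right way and you get back exactly the camera’s pixels, nothing left over. A minimal sketch, with a made-up description standing in for the CAD file (the format and all values are hypothetical, not any real CAD or renderer):

```python
# The 'no remainder' claim as an identity: rendering the formal description
# yields exactly the pixels the camera recorded. The description format and
# every value below are invented for illustration only.

SCENE_DESCRIPTION = {
    "camera": 6,  # 'using camera 6', as in the post
    "pixels": {(0, 0): 275.0, (0, 1): 94.0,
               (1, 0): 549.0, (1, 1): 0.0},
}

def render(description):
    """'Processing the file in the right way': turn the formal description
    back into the grid of signals a capture would have produced."""
    pix = description["pixels"]
    rows = 1 + max(r for r, _ in pix)
    cols = 1 + max(c for _, c in pix)
    return [[pix[(r, c)] for c in range(cols)] for r in range(rows)]

photograph = [[275.0, 94.0], [549.0, 0.0]]  # what the camera recorded
rendered = render(SCENE_DESCRIPTION)
```

Merely reading `SCENE_DESCRIPTION` is not running `render`; the difference between the two is, as the post says, one of processing technique, not of any further ingredient.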
So, if there is no remainder of inexplicable qualia for my camera, why, apart from complexity and usage, should I think there are any for me? I am, after all, just another physical environmental recording system (albeit of a rather different substrate; I am leaving to one side whether or not we can treat any physical system as computable in the same manner as the camera). Of course, I have much higher resolution and bandwidth (the Mary example gets a lot of traction from the difference between the rate of processing and number of bits processed from a verbal or even mathematical description and the subsequent neurological rendering). I have sensory receptors that the camera does not have. But I do nothing, qualitatively speaking, different from what the camera does. This is the Sorites.
So if I do not need qualia to explain the experience of experiencing, I can dispose of qualia, and in doing so dispose of the major support for consciousness as an ontologically distinct property. Where would this leave us?
If experiences are merely (!) physical states and linguistic habits, would this mean we are all unconscious? Are we all P-Zombies? I think we are. If there’s nothing irreducibly conscious, and we can’t tell the difference, it’s P-Angels that are suspect, not P-Zombies. Abandoning the idea of consciousness as a real state, as opposed to a verbal convention, leaves us none the worse.
I still won’t go along with Swampman, though.
Just to lob another set of concepts in…
My digital camera (and most others to a greater or lesser degree) carries out a huge amount of digital processing in camera before the ‘image’ is finalised in memory. It straightens barrel distortion at wide angles, it tidies up noise, it selectively compresses or expands the colour and contrast, and compresses the image into a smaller JPG file. Hell, I can even ‘prime’ the processing by twiddling the ‘scenes’ dial.
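That chain of in-camera corrections can be sketched as a pipeline of functions applied before the image is ‘finalised’. The stage names and thresholds below are invented stand-ins for real firmware, purely to show the shape of the processing:

```python
from functools import reduce

# Toy in-camera pipeline: raw signals pass through a chain of corrections
# before being 'finalised' in memory. Stages and numbers are invented
# stand-ins for the real firmware.

def denoise(img):
    """Clamp tiny signals to zero (a crude stand-in for noise reduction)."""
    return [[0.0 if abs(v) < 0.05 else v for v in row] for row in img]

def expand_contrast(img):
    """Selectively expand contrast (toy version: square each value, so
    mid-tones fall away faster than highlights)."""
    return [[v * v for v in row] for row in img]

PIPELINE = [denoise, expand_contrast]  # order matters, as in a real camera

def finalise(raw):
    """Apply each stage in turn to the raw capture."""
    return reduce(lambda im, stage: stage(im), PIPELINE, raw)

raw = [[0.01, 0.5], [0.9, 0.02]]
final = finalise(raw)
```

Twiddling the ‘scenes’ dial would, on this picture, just swap in a different `PIPELINE`: the priming happens before anything is ‘seen’.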
This is rather like our unconscious processing of stimuli before our conscious self becomes aware. What your camera ‘sees’ and what your conscious self ‘sees’ is augmented reality: not necessarily accurate, but ‘truer’. The ‘truer’ operator means that salience is boosted, but it may be extra true or extra false.
As the stimuli wend their way through the many neural associations of the brain they also trigger other associated memories and feelings. See an apple (we’ll not discuss visual processing, life’s too short) and by the time you are consciously aware there are associated echoes of how other apples have tasted, how your muscles would feel if you picked it up, Biblical myths (if you are that way inclined), fear of biting into a worm (if you are phobic) and so on. Even these echoes are composed of many triggered sub-echoes along the way.
And that, I think, is what it is like to be a human.
One thing I never got about ‘Mary in the black and white room’ is that, whilst humans cannot experience a colour given only a complete physical description, this is a biological contingency; it is possible that, out on the planet Zargon Prime, there lives a species that would experience red if given a complete physical description of it.
I gotta say that for me, the subject of consciousness is one on which you’ve never made the slightest bit of sense. Surely what it’s like to be your camera is, at least, the formal description plus a camera-relative mapping from description to qualia, the latter being the remainder whose existence you underhandedly deny.
(In other words, I don’t get it.)
I’m afraid that you need a bit of background to be able to understand how far off base I really am. I suggest you read this book:
"Philosophy of Mind: A Contemporary Introduction (Routledge Contemporary Introductions to Philosophy)" (John Heil)
and then this one:
"Philosophy of Mind and Cognition: An Introduction" (David Braddon-Mitchell, Frank Jackson).
I ordered this the other day:
From what I understand you’d deny whatever it is Chalmers is selling. He thinks consciousness is ontologically real doesn’t he? Like a part of the fabric of the universe? You could have posted this a few days ago so that I could’ve ordered better introductory books. 😉
David’s book is an excellent introduction to the issues, and he is very entertaining and clear. He’s just wrong, that’s all…
P-Zombies? Are you having a pop at Prof. Myers again?
It seems to me that Chalmers and others think of consciousness as some special kind of mental ‘bucket’ that carries a special kind of information, qualia, or perhaps it’s a ‘conduit’ that transmits that information. That’s just wrong. I have no problem with thinking of consciousness as real, and not epiphenomenal, but it doesn’t work like THAT. As for how it does work, alas, though I like the account that William Powers gave in Behavior: The Control of Perception (Aldine 2003), that’s not something I can easily summarize here. But it’s not a bucket.
The double aspect theory says that there is something about consciousness that is physical but not reducible to physics. I think it’s a kind of Whiteheadian process theory, myself.
The “information” in qualia is of a curious kind: it is inexpressible information. How something can be inexpressible and information is unclear to me.
In the unlikely event that the camera has experience, it is surely only experience of formal data (binary digits). It won’t have experience of the external world. At least that’s my current opinion. The camera is designed so that the picture emerges from the formal data. But it is an emergent phenomenon for us, not for the camera.
For people, I’m a qualia denier, but not an experience denier. The distinction is that I see experience as an activity, not as a thing.
I can’t say that I’ve given all that much thought to qualia & I’m inclined to think that the Mary problem is a mistake. But I’ve given quite a lot of thought to color, a so-called secondary quality. And some of that thinking is about the very concrete business of producing the ‘right’ color balance for my digital photos. The problem is that it is physically impossible to get the ‘real’ colors onto the photograph. Some of that is a bandwidth problem, but some of it is due to perceptual mechanisms of so-called color constancy. Here’s a post, with illustrations, about this issue.
Neutral monism is sexier. Unless you wanna supervaluate the threshold for experience, what’s the diff?
I cannot make sense of neutral monism. If physical things do not exist, then I have no way to make sense of “exist”. But maybe that’s just my failure of imagination.
Another option (which of course you won’t like) is that experience is irreducible, unexplainable, and merely correlated with physical observations. If one asks what it is like to be a camera, one can also ask what it is like to be a bat, or a tree, or a molecule, or one of the cells in your body, such as a single neuron. But you are not one of those. That is not what is being observed. Where is the dividing line between what is conscious and what isn’t? There is a unity to conscious experience that somehow integrates multiple observations and experiences into one unified whole. If I were a P-Zombie, it would be equivalent to there being nothing rather than something. But that is not what is being observed. I do feel pain, sometimes even when I read Dennett 😉
P-Zombies do not fail to feel pain. They fail to have pain qualia. And I am convinced that the claim of the unity of consciousness is itself an illusion. Give me that old-time Society of Mind!
I think that consciousness is a Bad Predicate (“Bad predicate! Look what you’ve done!”); not a natural category. So the mere fact that there are phenomena we will stereotypically call “consciousness” doesn’t imply that there is some qualitative break. Just the accrual of systems that react to their environment. To be typically conscious is to have enough of those systems that you react in complex ways.
This was pretty much my answer to the p-Zombies question on the PhilPapers survey. I wonder how many philosophers of science think Chalmers’ Hard Problem is a pseudoproblem*?
I think it’s useful to borrow a move from John Dewey. Dewey wholeheartedly rejected subjectivist accounts of experience — the veil of perception, qualia, sense data, basic ideas, &c. My experience of, say, the taste of my curry pancakes (with apple chutney) isn’t a mental state. It’s also not a brain state, a state of my body, or a state of me (whatever, ultimately and all-inclusively, I am). Rather, it’s an ongoing interaction between me (whatever, ultimately and all-inclusively, I am) and my curry pancakes (with apple chutney) in a certain environmental context (relevant features of which are things like the temperature, humidity, and aerosol content of the air and the local EM field, to the extent that it’s localized).
The digital camera may not have qualia, or anything else that’s necessary for having subjective experience. But it can certainly interact with its environment, and in this sense it clearly does have experience. Just like me.
* In the sense that it presupposes one or more dubious assumptions, not in the sense that it’s meaningless according to some ridiculous semantic criteria.
It’s not that Chalmers’ Hard Problem is not hard; it’s that it is not Hard (with the capital H). The hardness is one of facts and ultimately of successful natural/physicalistic explanation. My view (and perhaps yours) is that the Hard irreducibility argument fails simply because it is question-beggingly set up (which is possibly my Inner Wittgenstein coming out again).
It’s not going to be easy to eliminate subjectivity/qualitative consciousness from the furniture of the world. The hard problem may well be a pseudoproblem. But before plumping for dissolution, we need an account of what assumptions underlie the problem, and a reasonably persuasive explanation of why they are dubious.
Of course. At best this is one section of an actual paper. Which I don’t have time to write now. Or possibly ever.
Any potential coauthors out there?
John, I assumed your embrace of Uexkull would inoculate you against qualia eliminativism. 😉
I don’t think I get the objection to aspect dualism, which seems pretty win-win to me. I find it hard to understand why anyone who thinks strenuously about the matter would reserve skepticism toward the existence of the one thing anyone knows for sure, and through which all subsequent knowledge must be transported. If experience is not reducible, it’s not reducible. Nobody said life was fair. Not to mention that it may well be that this irreducibility is precisely what enables us to reduce all other knowledge into concise, formal models.
I can certainly know what it is like to see the world like that camera. How? Well, I can look at the resulting photograph.
We have no reason to surmise the photograph represents the camera’s “experience,” as such, and many reasons to suspect it does not. The Chinese Room argument, for one. A photograph is a product of a human mind, which a camera lacks. Imagine writing your name on a chalkboard with your fingertip, and then again with a piece of chalk. Is there any reason in the second instance to propose that the chalk has the “experience” of writing? Why are we inclined to talk of a camera as “seeing” but not a tape (or digital) recorder as hearing? These are mental, not mechanical, categories, and I think that is part of what is throwing you off. You are not “just another physical environmental recording system.” You are (at a minimum) that recording system, plus an interpretive, meaning-generating mind.
Dennett’s big mistake, which I think you repeat here, is to construe reductionism as reversible. Having encoded experience into a formal model, we can then turn around and regenerate that experience from the model. Because only that which is encodable is real, nothing of consequence is lost in the reduction, qualia being nothing but semantic residuum from Cartesian dualism. This kind of verificationism has its uses, but it is clearly not equipped for philosophy of mind, which is more than just neurology-on-steroids.
For example, Dennett writes, in “Quining Qualia,” that his hypothetical wine tasting machines would “type out a chemical assay, along with commentary: ‘a flamboyant and velvety Pinot, though lacking in stamina’–or words to such effect.” This constitutes an enormous leap, from objective to subjective language, that he nowhere bothers to try to justify. On the plausibility scale, this has got to be somewhere near fairies at the bottom of the garden.
The Mary Problem is self-defeating, as Dennett shows, but he doesn’t acknowledge that its main flaw is that it’s not logically possible to know everything about color perception or any other subject, which also defeats his RoboMary response. This is partly a consequence of there being an infinite (or functionally infinite) number of facts on any topic, coupled with basic epistemological limitations. (Which is why I thought your reverence for the concept of Umwelt would lead you down a different path.) Descriptions are always metonyms for experience. Everything is “at least what it is given as in experience,” in Sterling Lamprecht’s words. This is not to say that qualia are “ontologically distinct entities,” necessarily. (I’m not entirely sure what rides on that question.) But we shouldn’t be intimidated into considering qualitative descriptions as less accurate than quantitative ones, just because some philosophers are afraid we’ll smuggle ghosts back into the machine–which would have real consequences for the amount of attention and presence we embody in the world.
Hey, philosophy guy. Go look up something called “Turing complete”. It allows one computational system to simulate another. (Technically it allows one computing system to compute all others.) A computing system, like your brain, need only be partially complete to process the experience of another being. It would be similar to running a virtual machine on your computer. Considering bats are reasonably close to humans in the basic construction of their brains, and the human brain is so much larger, it would theoretically be possible for a human brain to simulate being a bat. It would merely require specialized training and/or specialized surgery to accomplish. I am reminded of an SMBC comic where engineers were banned from philosophy conferences. Most of these “complex” philosophical problems can be collapsed into fairly simple mathematical modeling problems/technical experiments.
Gosh, what a suggestion! Philosophy doesn’t know anything about Turing computability, since it’s only been teaching that topic my entire life. Hey, computing guy, read the post: “I am leaving to one side whether or not we can treat any physical system as computable in the same manner as the camera”, and stop you guys trying to reinvent things that philosophy has dealt with for fifty or more years.