Last updated on 1 Jul 2020
I’m avoiding work and jotting down notes that might one day issue forth in a paper.
What consciousness is remains a vexed problem in philosophy of mind. Partly this is because the term is so ill-defined, as any term coined out of an impressionistic philosophy like Descartes’ can be expected to be, but mostly it is because there are supposed to be special states or properties of consciousness, known as qualia, which are, in the famous title of Nagel’s famous paper, what it is like to be something, in his case a bat. A related argument, the so-called Mary Problem (or knowledge argument), holds that no matter how much we know objectively about some phenomenon, we do not thereby know what it is like to experience it. In short, experience, and hence consciousness, is irreducible.
I suspect this is because we are trading upon intuitions rather than hard information, and upon linguistic practices rather than fact. That is, we have intuitive categories based upon the uses of words like “like”, which we, quite correctly, cannot simply reduce to facts about the world. Instead of concluding that the concepts and categories are flawed, however, we conclude that the qualia are somehow ontologically distinct from the empirical world, and so we arrive at, or bolster belief in, aspect dualisms of various kinds.
I think this is wrong, as you may have guessed. I’m not original in this: Dennett, among many others, is a qualia eliminativist, but this way of thinking is hard to dislodge. So I’d like to float a simple argument, and a Sorites, in defence of eliminativism. In short, rather than arguing that we could not distinguish P-Zombies from P-Angels (those persons without, and those with, consciousness), I want to argue that we are all, in the end, P-Zombies anyway, and what does it matter? I call this, for reasons that will be obvious, perspectivism, not to be confused with perspectivism in epistemology.
Asking, as Nagel did, what it is like to be a bat, or even another person (the inverted spectrum argument, for example), is already too hard. We have intuitions: the qualia of bats, with their sonar sensorium, are qualitatively different from those of our own sensoria. We can’t imagine what it would be like to be a bat and still evaluate batitude as if we were human, because to know what being a bat is like, you’d need to be a bat, and bats can’t think like humans.
So let’s take something similar but simpler, so our intuitions do not get in the way: a digital camera. We know well how these work: light is focused onto a charge-coupled device (CCD), an array of receptors each of which converts the intensity of the light falling on it (filtered by wavelength, so that colour can be recovered) into an electrical signal. The array of such signals composes the final picture.
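The point that the camera’s “view” is nothing over and above an array of numbers can be made concrete. Here is a minimal sketch — a toy sensor model, not a real CCD driver, and the scene function is entirely made up:

```python
# Toy model of a camera sensor: each receptor reports one intensity value.
# The "picture" just is the array of those values; there is nothing left over.

def capture(scene, width, height):
    """Sample a scene function at each receptor position."""
    return [[scene(x, y) for x in range(width)] for y in range(height)]

# A hypothetical scene: brightness falls off with distance from a corner lamp.
lamp = lambda x, y: max(0, 255 - 10 * (x + y))

picture = capture(lamp, 4, 3)  # 3 rows of 4 intensity values
```

Everything the camera “experienced” of that scene is in `picture`; asking what more it was like, for the camera, has no further answer.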
Suppose I drop my camera in the corner of my living room, and it takes a picture of the room. What is it like to have that experience, for the camera?
Well, one answer is to say that cameras do not have qualia because they are the wrong kind of thing, but that is question-begging. Let’s see how far we can run the line that if we have qualia as physical things, so too do cameras. Let us ask the question, “what is seeing my living room like, for the camera?” I can readily imagine what it is like to see the room from that perspective. I see things from perspectives all the time, and while the camera might not know what it is like for me to see the world, I can certainly know what it is like to see the world like that camera. How? Well, I can look at the resulting photograph. Or I can get down on my knees (slowly, now that I am older) and close one eye to see from that place.
Suppose, though, I am asked to give a formal description, as Mary is asked to give a formal objective account of seeing red before she has seen it. How might I do that? Well, I can do a CAD rendition, to any arbitrary precision and accuracy, of lighting, surfaces, geometry, and so on, until the rendition, done using the latest ray-tracing techniques, is indistinguishable from the photograph. CGI in films does this all the time. “What is it like to see my living room from that corner? See file johnwilkinsloungroom.cad using camera 6 and full resolution rendering.” If I read that file, of course, I do not “experience” the perspective, but if I process it in the right way, then I do. What it is like to be my camera in my living room is exactly, and without remainder, specified in that formal description, and the gap between my reading the file and the “experience” is solely one of processing technique and bandwidth.
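The “without remainder” claim can be put as a toy equality: if the formal description fixes every receptor value, then rendering the description and taking the photograph yield the same array, bit for bit. A sketch under that assumption — the dictionary stands in for the CAD file, and the “ray tracer” and “camera” here are invented stand-ins, not real graphics code:

```python
# A formal "scene file" is just data; rendering it is deterministic processing.
scene_file = {"lamp": (0, 0), "falloff": 10, "max": 255}

def render(desc, width, height):
    """Render the formal description to a pixel array (toy ray tracer)."""
    lx, ly = desc["lamp"]
    return [[max(0, desc["max"] - desc["falloff"] * (abs(x - lx) + abs(y - ly)))
             for x in range(width)] for y in range(height)]

def photograph(width, height):
    """What the camera actually records of the same (simulated) room."""
    return [[max(0, 255 - 10 * (x + y)) for x in range(width)]
            for y in range(height)]

# The description specifies the "experience" exactly, without remainder:
assert render(scene_file, 4, 3) == photograph(4, 3)
```

Reading `scene_file` with my eyes is slow, serial processing of the same data; running `render` on it is fast, parallel processing. On this picture, that difference in processing is all the difference there is.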
So, if there is no remainder of inexplicable qualia for my camera, why, apart from complexity and usage, should I think there are any for me, which is, after all, just another physical environmental recording system (albeit one with a rather different substrate; I am leaving to one side whether we can treat any physical system as computable in the same manner as the camera)? Of course, I have much higher resolution and bandwidth (the Mary example gets a lot of traction from the difference between the rate of processing and number of bits processed from a verbal or even mathematical description and those of the subsequent neurological rendering). I have sensory receptors that the camera does not have. But I do nothing, qualitatively speaking, different from what the camera does. This is the Sorites.
So if I do not need qualia to explain the experience of experiencing, I can dispose of qualia, and in doing so dispose of the major support for consciousness as an ontologically distinct property. Where would this leave us?
If experiences are merely (!) physical states and linguistic habits, would this mean we are all unconscious? Are we all P-Zombies? I think we are. If there’s nothing irreducibly conscious, and we can’t tell the difference, it’s P-Angels that are suspect, not P-Zombies. Abandoning the idea of consciousness as a real state, as opposed to a verbal convention, leaves us none the worse.
I still won’t go along with Swampman, though.