Human reasoning
27 Oct 2010

Human reasoning is not simple, neat, and impeccable. It is not akin to a proof in logic. Instead, it draws no clear distinction between deduction, induction, and abduction, because it tends to exploit what we know. Reasoning is more a simulation of the world fleshed out with all our relevant knowledge than a formal manipulation of the logical skeletons of sentences. We build mental models, which represent distinct possibilities, or which unfold in time in a kinematic sequence, and we base our conclusions on them.

This is the conclusion of a paper just published in PNAS, which is open access (i.e., free), entitled "Mental models and human reasoning" by Philip N. Johnson-Laird at Princeton. I find it interesting that it effectively revives the "picture theory" of meaning and thinking (see the Stanford entry on mental visualisation). Philosophers have tended, under the "linguistic turn", to treat reasoning as a process of manipulation of symbols and sentences, and it is on this basis that modern logic is founded (for that matter, ancient logic too: "predicate" is a word). The debates over the past few decades about artificial intelligence have led me to conclude that humans are not very like computers at all. We aren't even much like massively parallel distributed systems. This reinforces my prejudices…
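To make the contrast with formal proof concrete, here is a minimal Python sketch in the spirit of Johnson-Laird's mental models theory (my illustration, not code from the paper): enumerate the possibilities consistent with the premises, and count a conclusion as necessary only if it holds in every one of them.

```python
# A minimal sketch of model-based reasoning in the spirit of Johnson-Laird
# (an illustration, not his implementation): build the set of possibilities
# consistent with the premises, then test a conclusion against all of them.
from itertools import product

def models(variables, premises):
    """Yield every assignment of truth values that satisfies all the premises."""
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(premise(world) for premise in premises):
            yield world

# Premises: "if it rained, the grass is wet" and "it rained".
premises = [
    lambda w: (not w["rain"]) or w["wet"],
    lambda w: w["rain"],
]
possibilities = list(models(["rain", "wet"], premises))

# A conclusion is necessary if it holds in every model,
# and merely possible if it holds in at least one.
conclusion = lambda w: w["wet"]
print(all(conclusion(w) for w in possibilities))  # True: "the grass is wet" follows
```

On the mental models account, reasoning errors arise naturally: people who fail to flesh out all the possibilities miss the models in which a tempting conclusion is false.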
That looks like an interesting paper. Thus far I have browsed through it, but it will take more time to read it thoroughly. Thanks for bringing it to our attention. It has always been obvious to me that reasoning was not formal logic. It has long puzzled me that many people seem to get that wrong. Yes, I think we do some sort of mental modeling, but probably not in the way that Johnson-Laird suggests. For example, he says "The principal data in the construction of mental models are the meanings of premises." I am quite doubtful that meanings are anything at all like data. I'll read the paper in more detail before I decide whether to comment more.
Quite right about modelling. The mistake here is to think that we do this by manipulation of symbols of any kind at all. The subsymbolic hypothesis in philosophy of mind/AI suggests that we should treat symbolic claims as the output of a generalised model rather than the elements of it, and I agree with this. Neural nets classify not by manipulating mental objects in a homomorphic relation to the objects they represent, but in any way that works. I doubt I have a “dog” mental object when I say “dogs bark”; instead I have a bunch of nets that fire in the right way to reliably produce that output. This is (in my view, anyway) a kind of reformed behaviourist functionalism.
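A toy sketch of that subsymbolic point (mine, and deliberately crude): the only place a "dog" symbol appears below is at the readout; everything upstream is distributed numerical activity. The weights are random stand-ins for whatever a trained network would learn.

```python
# Toy illustration of subsymbolic classification: no internal "dog" object,
# just distributed activity; the symbol appears only when we read the output.
# Weights are random placeholders for what training would normally supply.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input features -> hidden units
W2 = rng.normal(size=(8, 2))   # hidden units  -> two output units

def classify(features):
    hidden = np.tanh(features @ W1)   # subsymbolic: just a pattern of activation
    output = hidden @ W2
    return ["dog", "cat"][int(np.argmax(output))]  # symbol only at the readout

print(classify(np.array([1.0, 0.2, 0.0, 0.7])))
```

Nothing in W1 or W2 stands in a homomorphic relation to dogs; after training (omitted here), the net produces the right output in whatever way works.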
Parallel processing is definitely not the defining characteristic of mind. If you have carefully watched yourself think, you will notice that you are only ever fully conscious of one concept (a picture, if you will) at a time. If you try to reduce it, it changes into the reduced concept. Yes, you can be subconsciously aware of many things (such as background voices or music), but if your attention is directed to one of them, it will displace the previous concept and become the focus of consciousness, and this can oscillate back and forth very rapidly.

If you could actually be fully conscious of two things at the same time, you would be two separate people or minds, and then the why-am-I-me question becomes entirely relevant: from which one of those perspectives would reality be perceived? Both is inconceivable, just as it is inconceivable for you to be two separate people at the same time.

The problem then is reconciling the observed multiplicity of things correlated with mind (brain, neurons and neural nets, physical events, etc.) with the unity of consciousness. If you are a materialist, one question is: what line is there in the brain (and elsewhere) that divides that which is observed from that which observes?
"Parallel processing is definitely not the defining characteristic of mind." But your argument only supports the claim that parallel processing is not a characteristic of consciousness. If you think of 'mind' as accomplishing other things beyond consciousness, then the distinction matters. Your "what line is there in the brain?" question remains, but it is a question about consciousness, not about mind.
Your argument on parallel processing doesn’t really work. It’s an argument against using parallel processing to do many different things at the one time. It does not address the possibility of parallel processing where the parallel threads are all dealing with various aspects of the single problem that is central to your thoughts.
But I have no argument with parallel processing as a concept. My argument is that it is not sufficient to explain mind, in a reductionist sense.
That rather depends on your project. If you are, as I am, a physicalist, then at best "mind" is a functional category that can be arrived at in a number of ways, one of which is parallel processing. At worst, it is a noncategory; an illusion brought about by our language (a Wittgensteinian eliminativism). To assume that "mind" must be (i) singular and (ii) linear is to beg the question badly. Parallelism can bring about singular outputs; in fact, that's the whole point of neural nets. Interrogating one's stream of consciousness will not provide evidence for or against it.

My objection is rather different. I agree that, for any functionally defined property of "mind", one can simulate it in a massively parallel computer system. But it is, and remains, a simulation; it is no more a mind than an orrery is a solar system. At some point a simulation approaches, in physical terms, instantiating the sort of thing it simulates (like Lewis Carroll's map in Sylvie and Bruno Concluded, where the country is its own best map), but (and this is the point) unless it is physically identical, or as near as dammit, it isn't actually a mind. Since mind as we know it is wet and nonlinear and nonalgorithmic, any simulation that approaches it closely enough must likely be wet, nonlinear and nonalgorithmic too. Hence I reject the idea that we will one day download ourselves. If only Sheldon had talked to me first.
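As a trivial illustration of the claim that parallelism can yield singular outputs (an analogy only, not a model of mind): many processes can run at once and still deliver one verdict.

```python
# Parallel evaluation, singular output: several workers score the options
# concurrently, and a single answer emerges at the end.
from concurrent.futures import ThreadPoolExecutor

candidates = ["tea", "coffee", "water", "juice"]

def score(option):
    # stand-in for whatever each parallel process computes
    return len(option)

with ThreadPoolExecutor() as pool:
    scores = list(pool.map(score, candidates))

print(candidates[scores.index(max(scores))])  # one verdict: "coffee"
```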
"At best "mind" is a functional category that can be arrived at in a number of ways, one of which is parallel processing. At worst, it is a noncategory; an illusion brought about by our language (a Wittgensteinian eliminativism)."

Not to be sophomoric, but really: if it is an illusion, then isn't everything we ascribe to it, and all that flows through it, also an illusion and not to be trusted? (Such as the concept of language, the physical, "objective" verification, your opinion, etc. 😉)

Yes, observable parallelism can bring about singular observable outputs, but, as you pointed out, that parallelism is objective, and the apparent veil of a subjective stream of consciousness cannot be penetrated (at least not physically). And yet everything you can conceive of, such as parallelism, is a subjective picture in your mind.

The map can never become the territory, because the map is what it is, and the territory is what it is: two separate objects in space. They will never become the same thing, even if one is an exact copy of the other. It's an identity problem. In the case of mind, I would maintain that it is an even deeper identity problem, due to the unity of conscious thought, the perspective and the self that I've already touched on, but I know you would reject this concept of identity.
"If you have carefully watched yourself think, you will notice that you are only ever fully conscious of one concept (picture, if you will) at a time." You obviously don't suffer from ADD!
Oh yes I have! On and off all kinds of drugs. The oscillations are bewildering, in waking life as well as dreams. But at any given point in my subjective time, I am only conscious of one overall concept, although there may be many things begging for attention at the periphery.

John says that parallel processes can have singular outcomes, and he is right. But in the case of mind, those singular outcomes are not objectively observable. No scientist can know what it is like to be Glenn Beck. Maybe that's a good thing, I don't know.
It doesn't look to me like Johnson-Laird gives a picture theory of meaning, which is a theory of what it is that makes one thing represent another at all, and of what it is that it represents; that is, a theory of what constitutes meaning. It looks to me like J-L simply takes for granted that certain mental states represent. (I would quibble with the linked definition of "picture theory", but it gets the preceding point right.) Although I'm not at all a devotee of Wittgenstein's Philosophical Investigations, I do think he was correct about what was wrong with the picture theory that he gave in his earlier work, the Tractatus Logico-Philosophicus. That simple picture theory is untenable, although there have been attempts, such as Ruth Millikan's, to avoid its problems. This doesn't seem to be one of J-L's goals, nor need it be.
Ludwig's original picture theory was, of course, a theory of logical pictures, and such pictures cannot be homomorphic with the world; but that doesn't mean that no picture can be similar enough to the world to represent it. So I think that J-L's view, if supported, will give us a non-language-of-thought style picture theory, just as anti-Fodorians think it might.
Is that really what J-L is trying to do? Similarity to X by itself doesn't distinguish between different things which are similar to X. (Surely there are other, causally related patterns in the brain which are similar to X because of their causal connections to X; why aren't those what are represented, rather than something outside the brain?) A bit of sophistication about the details is also needed for representation that's sufficiently fine-grained for thought. The system needs to be able to represent the difference between Jo being outside the house in an unspecified location, Chris being outside the same house in an unspecified location, and either one being in a particular location that's not in the house. These are not new problems. I'm not saying that some kind of isomorphism can't be part of the story, but there has to be more.
Emotion: people with damage to their emotional centres find it very difficult to make any decisions, even of the "tea or coffee?" kind. We make most of our decisions in a "gut" way, and then the rationality is post hoc justification. I suspect scientists (and perhaps philosophers too) also make progress in the same informal way. But the important thing in these disciplines is that the argumentation for an assertion has to be formal and rational, even if the original idea came in a spliff-induced dream about purple rabbits.

Associativity: Temple Grandin describes the pictorial and associative thought processes in her autistic brain, and she is sure that her way of thinking is much closer to that of non-human mammals, such as horses, dogs and cattle. She reckons animals use the principle "A and B happened at the same time, so if A then B", which I suppose is the basis of classical conditioning; a lot of apparent intelligence comes from the power of association (a toy sketch follows this comment).

Parallel or not? Consciousness and the reasoning bits of the human brain are only a thin veneer on our animal brain. Overall we parallel-process like mad: you don't stop breathing and wet yourself when you try to solve a differential equation. OK, that's the autonomic nervous system, but you can scratch an itchy nose while contemplating John's insights. But is philosophy interested in the thinking behind scratching an itchy nose? Hands up, who scratched something as a result of reading something about scratching itches?
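Here is a toy version of the co-occurrence rule Grandin describes (my sketch, not her model): repeated pairing strengthens an association, and the strongest association above a threshold drives the expectation, much as in classical conditioning.

```python
# "A and B happened at the same time, so if A then B": a crude associative
# learner in which co-occurrence counts stand in for conditioned links.
from collections import Counter

pairings = Counter()

def observe(a, b):
    pairings[(a, b)] += 1          # events experienced together strengthen the link

def expect(a, threshold=3):
    """Return the event most strongly associated with a, if the link is strong enough."""
    linked = {b: n for (x, b), n in pairings.items() if x == a}
    if not linked:
        return None
    best = max(linked, key=linked.get)
    return best if linked[best] >= threshold else None

for _ in range(5):
    observe("bell", "food")        # repeated pairing, as in Pavlov's dogs
observe("bell", "footsteps")       # a one-off coincidence

print(expect("bell"))              # "food": the strongest association wins
```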
I think folk beliefs also have to have a post hoc justification which appeals to reason, and they match elite knowledge claims up until the 19th century at least, when the lines of communication seem to become disrupted. But that's my bias. Looked at out of context, they look like the spliff-induced dream of someone who is batshit crazy. It is a form of argument that belongs to the folk rather than being the preserve of one particular group. But it makes me uncharacteristically optimistic about the ability of humans to engage with reason.