Last updated on 18 Sep 2017
Sometimes, as a philosopher, one forgets that not everyone has been forced to undergo a logic class. This is a problem, both because logic is taught as the second most boring subject after calculus, and because, like calculus, it is enormously relevant to everything we do. Most especially it is relevant to scientists. Now, I do not mean to imply that scientists in general misunderstand or misuse logic, but it is worthwhile occasionally revisiting the basics, especially where the nature of classification and inference in science is concerned.
Last time I wrote about natural classification, I discussed the use of clades as a straight rule for induction. An induction, for those who do not recall their introductory philosophy of science, is an inference from a limited number of particular observations to a general conclusion: all the swans I have seen are white, so swans are white. Inductions can be wrong. Deductions move from the generalisation (“All swans are white”) to the particular case (“this is a swan, so it is white”). Deductions cannot be wrong if the premises (the generalisation itself, and the claim this is a swan) are true. Now, the most widely known philosophy of science, that of Karl Popper, is based upon a logical deduction – if the general claim (the “law”) says that all As are Bs, and this B is not an A, then the law is false. He called this “falsification”. It is based on what we call the modus tollens, and is bandied about all the time by philosophers and scientists alike. It seems to me that not everybody understands what is at issue here. So, a simple introduction follows below the fold.
One standard, and possibly the most widely used, form of inference or argument is called modus ponens, a Latin term we inherited from the marvellous logicians of the middle ages. It works like this: suppose I have a conditional statement, which we sometimes call an “if-then” statement. If something is true then something else is true; IF A THEN B. [We represent this with an arrow in logic: A → B.] This is a whole statement. It tells us something about the world it represents, that As always imply Bs. As a whole statement it is either true or false.
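As a rough sketch (the Python here is my own illustration, not anything in the original post), the truth conditions of the conditional can be tabulated: A → B comes out false in exactly one case, where A is true and B is false.

```python
# The conditional "IF A THEN B" is itself a statement, true or false.
# In classical logic it is false only when A is true and B is false.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

for a in (True, False):
    for b in (True, False):
        print(f"A={a!s:5}  B={b!s:5}  A -> B is {implies(a, b)}")
```

Running this prints the familiar four-row truth table of the material conditional.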
An argument is a series of statements that have a logical conclusion (cue Monty Python and the Argument Clinic sketch: “An argument is a series of propositions intended to establish a conclusion. It is not merely the automatic gainsaying of everything the other person says”). We usually lay this out like a sum:
A → B (Premise 1)
A (Premise 2)
B (Conclusion)
This is a modus ponens argument. If the conditional statement is true, and the second premise is true, then the conclusion must be true. Now, suppose the conditional is true, but we know that the conclusion, B, is false; what then?
A → B (Premise 1)
¬ B (Premise 2)
¬ A (Conclusion)
Where “¬” means “NOT”. [In basic or first-order logic, we use simple operators like NOT, AND, and OR to make up our logical “equations” the way we use PLUS, MINUS, MULTIPLY and DIVIDE in arithmetic.] Here the falsity of the second part of the conditional statement (the “consequent”, shown as the letter B here) shows that the first part of the conditional (the “antecedent”) is false. Stay with me here, because this is the machinery of the point of this post. This is called modus tollens.
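Since A and B between them admit only four truth-value assignments, the validity of both forms can be checked by brute force. A small sketch of my own (not from the post): an argument form is valid just when no assignment makes every premise true while the conclusion is false.

```python
from itertools import product

def implies(a, b):
    return (not a) or b  # the material conditional A -> B

def valid(premises, conclusion):
    """Valid iff no truth-value assignment makes every premise
    true while the conclusion is false."""
    for a, b in product((True, False), repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False  # counterexample found
    return True

# Modus ponens: A -> B, A, therefore B
print(valid([implies, lambda a, b: a], lambda a, b: b))          # True
# Modus tollens: A -> B, not B, therefore not A
print(valid([implies, lambda a, b: not b], lambda a, b: not a))  # True
```

Both checks come back True: there is no way for the premises of either form to hold while the conclusion fails.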
Modus ponens and modus tollens (we always italicise them, since they are used as Latin names or terms), are the foundations of inference. And moreover, modus tollens is the foundation for Karl Popper’s philosophy of science. Let me review this briefly.
Popper was concerned about what is known as the “problem of induction”: no matter how many observations we make of swans, there is always the possibility that the next one we observe (in Perth) might be black. So how can we justify the use of induction? This problem goes back to David Hume (who, by the way, never used the term induction), but was raised most strongly by John Stuart Mill in the mid-19th century. It was a standard philosophical staple in the 1930s, when the Vienna Circle was arguing about the logic of science.
Popper noted that the problem of induction was insoluble, contrary to philosophers like Hempel and Carnap who thought we might be able to formalise it, and came up, instead, with the principle of falsification: if you can disprove a theory or generalisation by modus tollens, then it is dead and buried, but you can never prove the generalisation. Inductive inference did not, in Popper’s world, exist. This leads to a problem: how does science actually get the generalisations in the first place? Popper did not care: guess, conjecture, dream or cast dice. In a book the English title of which was The Logic of Scientific Discovery, Popper had no logic of scientific discovery.
Why is this relevant to natural classification? Largely, because systematists adopted Popper as their philosopher of science, despite his treating classification as something that could be dispensed with. Systematists recast their classifications as “hypotheses” and “models”, and treated their data as potential falsifiers of these models, rather than building up the classifications inductively. This sea change was dramatic. In the “traditional” systematics, the whole point of doing classification was to provide inductive generalisations (“All mammals have character A”) which then called for explanation (and lest anyone assert this is a pre-Darwinian or ancient metaphysics or epistemology, note that people were publishing this in the 1960s, as well as in the 1860s; that it was commonly understood before the arrival of numerical taxonomy and cladistics is sometimes forgotten, or deliberately ignored).
But let us return to how logic works, and look at a common fallacy used in science. I’m going to argue that it is a fallacy, but that it is sometimes a justified fallacy. It is called the Fallacy of Affirming the Consequent, and it works like this. Recall our argument form of modus ponens: If A, then B, A, therefore B. A common mistake, made by children and adults alike, is to argue this way: If A, then B (our conditional), B (the consequent of the conditional), therefore A. Here’s an example:
P1. If the theory of evolution is true, then we should see lots of convergence (that is, organisms should “solve” the same problems the same way)
P2. We do see lots of convergence
C. Evolutionary theory is true
Why is this a fallacy? Because convergence could arise in many ways: two of the most obvious are the creationist claim (God made them the same way because He liked that “solution”) and the Lamarckian claim (species go through a predetermined sequence of stages or grades of evolution, so we will observe similarities of function and similarities of form). In fact, there are an infinitely large number of possible explanations, so finding convergences does not bolster the generalisation of evolutionary theory. This is why, in my view, convergence (called “analogy” or “homoplasy” in systematics) is not informative about evolutionary history, and does not form part of classification; something I have talked about before.
Now, this means that we cannot use convergence as evidence for evolution. But we do. This is in fact commonly the case in science: we find things are as they were expected to be according to the theory and so we say the theory is “confirmed”, in defiance of all logical strictures. The Fallacy of Affirming the Consequent is the inversion of modus ponens (a similar fallacy is the fallacy of inverting modus tollens, called the Fallacy of Denying the Antecedent). We can show this as a table:
Valid forms:

Modus ponens
If A then B
A
Therefore B

Modus tollens
If A then B
Not B
Therefore not A

Fallacious forms:

Affirming the consequent
If A then B
B
Therefore A

Denying the antecedent
If A then B
Not A
Therefore not B
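The fallacious forms can be exposed mechanically too. In this sketch of my own (not from the post), we list the truth-value assignments that make every premise true while the conclusion is false; the valid forms have none, but each fallacy has exactly one.

```python
from itertools import product

def implies(a, b):
    return (not a) or b  # the material conditional A -> B

def counterexamples(premises, conclusion):
    """Assignments making every premise true and the conclusion false."""
    return [(a, b) for a, b in product((True, False), repeat=2)
            if all(p(a, b) for p in premises) and not conclusion(a, b)]

# Modus ponens: A -> B, A, therefore B
print(counterexamples([implies, lambda a, b: a], lambda a, b: b))
# -> []: no counterexample, the form is valid

# Affirming the consequent: A -> B, B, therefore A
print(counterexamples([implies, lambda a, b: b], lambda a, b: a))
# -> [(False, True)]: B can be true for reasons other than A

# Denying the antecedent: A -> B, not A, therefore not B
print(counterexamples([implies, lambda a, b: not a], lambda a, b: not b))
# -> [(False, True)]: B can still be true even though A is false
```

The single counterexample in each fallacious case (A false, B true) is exactly the convergence situation in the evolution example: the consequent holds, but for some other reason.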
Why is it that science relies upon affirming the consequent? It has to do with the rather complex topic of Confirmation Theory, and so typically philosophers of science will appeal to Bayesian logic:
a hypothesis is confirmed just to the extent that the likelihood of the hypothesis given the evidence is greater than the likelihood that the evidence would occur otherwise; very (very) roughly, a hypothesis is confirmed to some extent just in case the probability that it is true is greater in light of the evidence than otherwise.

Because science needs to affirm the consequent in a world where we can never make simple deductive inferences from known true laws, we needed a logic that allows us to commit this formal fallacy with justification. But this has ridden roughshod over some thorny ground, in my opinion, ignoring the fact that we do other things in science than simply address theories, hypotheses, conjectures and models.
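A toy calculation (the numbers are invented purely for illustration) makes the Bayesian sense of “confirms” concrete: evidence E confirms hypothesis H just when the posterior P(H|E) exceeds the prior P(H), which happens exactly when E is more likely given H than it is overall.

```python
# Invented illustrative numbers, not anything from the post.
p_h = 0.2               # prior probability of hypothesis H
p_e_given_h = 0.9       # probability of evidence E if H is true
p_e_given_not_h = 0.3   # probability of E if H is false

# Total probability of the evidence, then Bayes' theorem.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))   # 0.429, up from a prior of 0.2
print(p_h_given_e > p_h)       # True: E confirms H to some extent
```

Note that confirmation here is a matter of degree, not proof: the posterior rises, but never reaches 1 while rival explanations of E retain any probability.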
Classification is something of a forbidden topic in philosophy of science as practised by the post-Vienna analytic tradition. It’s what librarians do for convenience, or what psychological tendencies cause. It’s anthropomorphic and anthropogenic. Subjectivism, conventionalism, psychologism and conceptualism are terms thrown about in the debate. But classification, I think, can explain why we affirm the consequent. It has to do with limiting the scope of the argument.
Suppose I have a bag of marbles of various colours, and I want to make inferences about what colours there are (because the black ones are more valuable and I am about to trade my bag for a bicycle with Tommy, but I don’t want to give away the black ones and I don’t want to count them in front of Tommy, which would undercut my bargaining position for arcane reasons any ten year old boy will recognise). Now if there were an indefinitely large population of marbles, my hypothesis that black ones are rare might be neither confirmed nor disconfirmed, no matter how many bags of marbles I had previously observed. I might have been in the area of a shop that happened to sell the brand that had few black marbles, whereas in general, only yellow ones are rare and the black ones are worthless. But suppose instead I am trying to make inferences only about my neighbourhood. There are many fewer marbles there, and the frequency of black marbles is more likely to represent that population, and hence the worth of the colours.
An inductively-based classification is more representative of a limited population, so in that case, the unbounded ignorance forced upon us by indefinitely large possible cases is trimmed away substantially. We can affirm the consequent here if the scope is small, because observations and actual frequencies converge. More than simple logic rules here; we aren’t so ignorant to begin with. Because we have classified our domain and iteratively refined it, we can confirm our hypotheses. It’s defeasible (that is, we might be wrong in a given case), but it’s the best we can do under uncertainty about the natural world. And it doesn’t need to be subjective or any of those other Bad Words. Yes, classification is a human cognitive activity, but all science is, and if we’re waiting for a God’s Eye View we may be waiting some time.
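The marble example can be simulated (the population counts below are invented for the purpose): in a small, bounded domain, sampling without replacement forces the observed frequency toward the true one, and inspecting the whole domain settles the question exactly.

```python
import random

random.seed(1)  # reproducible illustration

# A hypothetical neighbourhood stock of marbles (invented counts).
population = ["black"] * 10 + ["yellow"] * 40 + ["red"] * 150
true_freq = population.count("black") / len(population)  # 0.05

# Sampling without replacement from a finite domain: as the sample
# grows it exhausts the population, so the estimate must converge.
for n in (20, 100, len(population)):
    sample = random.sample(population, n)
    print(n, round(sample.count("black") / n, 3))
# At n == len(population) the sample IS the population, and the
# observed frequency equals the true frequency exactly (0.05).
```

With an indefinitely large population no finite sample gives this guarantee, which is precisely the contrast the marble story draws.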
Classification restricts the inferential domain the arguments work in, and so the absence of a consequent does undercut the antecedent of the law or generality, if the scope is countable. I can confirm my claim there are no elephants in my left pocket by observing all the things in my left pocket (this goes to the claim we cannot prove a negative, which I do with respect to elephants in my pockets all the time). By setting the domain up (“my actual left pocket”) the formal point of the fallacy, which applies when the domain is all possible worlds, becomes unnecessary.
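The pocket example is the limiting case: when the domain is small and fully enumerable, a universal negative claim can be settled by checking every member. A minimal sketch, with invented pocket contents:

```python
# Invented contents; the point is only that the domain is exhaustively listed.
left_pocket = ["house key", "bus ticket", "marble"]

# "There are no elephants in my left pocket" is a universal claim over a
# countable domain, so it can be established by exhaustive inspection.
no_elephants = all(item != "elephant" for item in left_pocket)
print(no_elephants)  # True: the negative is proved, not merely unfalsified
```

Over an unbounded domain `all(...)` could never terminate, which is the formal point of the fallacy; over "my actual left pocket" it does, and the inference is safe.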
So sometimes the fallacy of Affirming the Consequent is a fallacy, and sometimes it isn’t, depending on what is a live and viable question. Since we do not know that ahead of time, we have to make inferences from where we begin.
Couple of questions John.
As I understand it (and surely I understand it wrongly), those medieval logicians worked with Aristotelian logic, which was basically the syllogism.
All A’s are B’s. Some A’s are C’s. Therefore some B’s are C’s. Or some-such.
What I don’t get is that modus ponendo ponens (the way that affirms by affirming) and modus tollendo tollens (the way that denies by denying) seem to be useful solely on the conditional, which isn’t a form of argument one sees in Aristotelian logic. Not unless it’s recast as predicate logic:
1. if for each x, if x is an A then x is a B (All A’s are B’s)
2. if y is an A, then y is a B (universal instantiation)
3. y is an A
4. therefore, y is a B (2, 3, modus ponens)
But of course, before Frege, Russell, et al. in the late 19th early 20th century this would have been impossible. What method did the medievals use to combine syllogism with modus ponens and modus tollens?
Another question: scientific reasoning which uses affirming the consequent has also been termed abductive reasoning, or argument to the best explanation, hasn’t it?
Abductive arguments are “inferences to the best explanation.” They typically recognize some fact, point out that it is entailed by a certain hypothesis, and conclude that the hypothesis is true. Taken at face value, abductive arguments seem to be instances of the fallacy of “affirming the consequent,” but they still play a central role in medical, scientific and legal reasoning.
D’oh! that was meant to be:
1. For each x, if x is an A then x is a B (All A’s are B’s)
not:
1. if for each x, if x is an A then x is a B (All A’s are B’s)
Yes to both. I am not concerned with the history of logic, which is largely of interest only to specialists. The fact is that scientists now need to understand these basic features of argument, if they do not already. Most, of course, do. I am presuming that a reasonable translation of the square of opposition into predicate logic is as a conditional – at least, that is what works when I have taught this stuff to non-philosophers.
As to abduction and IBE, that is later… this is an opening gambit in a longer argument. And, as I said before, this blog is where I work things out…
that a reasonable translation of the square of opposition into predicate logic is as a conditional
I don’t understand this. I presume it’s ignorance on my part.
this is an opening gambit in a longer argument. And, as I said before, this blog is where I work things out…
Apologies. Your post fired up a few neurons that synapsed and brought forth those questions and vague memories of logic and the term abductive reasoning. I didn’t mean to interfere with the end game of a series of posts that would be well worth the read.
Traditional logic (All As and Bs, No A is a B, etc.) uses the Square of Opposition to diagram logical syllogisms. I am converting it roughly into a conditional, which, as you say, was not a major part of the old logic.
Thanks John. Was the square of opposition the template for Kant’s categories?
Gack! I don’t know. I don’t do Kant if I can avoid it. Possibly.
On second thoughts. I’m derailing the thread. Beg pardon.
As far as I am aware, Kant doesn’t make a big deal about the square of opposition (he knows about it, of course). But his own version of the theory of logical judgments (categorical vs. hypothetical, for instance) is absolutely foundational. That’s where he gets his categories from.
Thanks Brandon. It must have been the superficial similarity between the diagram that John has in his post and the memory I have of a table of the categories that I have seen in the Critique of Pure Reason that led me to ask.
Aristotle (who started discussion of the syllogisms you are talking about, but didn’t finish), the Stoics, and the medievals (who took it quite far) built up their theory of hypothetical syllogisms (as they were called) in the same way they built up their theory of categorical syllogisms — by building it from the ground up, slowly but rigorously working through the forms of argument to find which ones were valid. They were also helped, though, by the fact that, in principle (practice is trickier), every categorical syllogism can be turned into a hypothetical syllogism and vice versa, because there is a close logical relationship between a universal affirmative proposition and a conditional proposition. (If there weren’t, predicate calculus wouldn’t be able to handle categorical syllogisms.) The logical relationship is not purely straightforward, but it’s manageable.
So it actually wasn’t impossible — the medievals in particular would have found it very easy. They often (but not always) preferred working with categorical propositions rather than conditionals, though, for reasons having more to do with the Aristotelian theory of knowledge than with their ability to handle them. So perhaps that’s where the idea comes from that they had difficulty with them.
Thanks Brandon. I started reading your reply, and once I saw the term hypothetical syllogism, another bit of synaptical awakening occurred. I have seen the hypothetical syllogism in my attempts to study logic.
If A is B and B is C then A is C.
I think that’s how it goes.
I wasn’t aware of how much the Stoics and medievals contributed to the development of logic, as most introductory books just mention in passing that Aristotle first systematized logic with his theory of syllogisms, which was enlarged by the Stoics and given a fourth mood? (I think that was the term for things like Barbara, Darii and so on) by the medievals. Thanks for that.
Can you tell me, if you’re around, if what was termed ‘Port Royal’ logic (which I believe is the logic that Hume, Malebranche, and Kant would have learnt) was the sum of Aristotelian, Stoic, and medieval/scholastic logic? Or was it something else?
The Port-Royal Logic, by Antoine Arnauld and Pierre Nicole, was the first attempt to build a logic on purely Cartesian principles. It teaches a (very, very) simplified version of Aristotelian logic and has a few things on hypothetical syllogism, but because it’s Cartesian its only interest in logic is as a way to find clear and distinct ideas and preserve their clarity and distinctness while reasoning. Pretty much from the fifteenth century to the nineteenth century, with the exceptions of Leibniz and some of the Spanish Scholastics and maybe some scattered others, is a logical dark age — the field is in a state of collapse. Some of it gets preserved, through people like Aldrich or Arnauld or the Wolffians, but for the most part it’s extremely simplistic compared to what was done in the days of Ockham and Buridan. It was a point of pride, in fact: it showed that you weren’t a logic-chopping schoolman if you stuck to a few simple logical rules. It’s not until the twentieth century that logic as a field is again as sophisticated as it was with the medievals.
(This is all crude, of course: in reality there were ups and downs. Overall the early modern period up until Boole and De Morgan was a low point for logic as a field. It’s important to keep in mind despite this, however, that there were still advances on some fronts: for instance, the Port-Royal Logic has one of the early attempts to try to use probability theory to describe reasoning.)
I have to disagree. Logic was extensively developed through the nineteenth century (if you ignore Hegel, who did more to obfuscate than any ten politicians). Archbishop Richard Whately’s Elements of Logic, first published in 1826, is a marvel of clarity and set Bentham (both uncle and nephew), Whewell, Mill, Venn, Boole and a host of others going. We tend to read modern logic as beginning with Frege. That is simply wrong.
Thanks heaps Brandon. I wouldn’t know how to even begin to find out that information. It seems to be quite arcane or esoteric.
Cartesianism always struck me as odd. It seems to rest on what an individual can imagine clearly. It seems to place too much emphasis on one’s intuition, as if intuition is axiomatic or taps into ‘reality’. Without coming across as sciencefictionalist (your coinage?), scientific results do seem to often be counter-intuitive and place doubt on what we clearly and distinctly grasp as true.
When I read the Meditations and Descartes would say something like ‘by the light of nature I clearly and distinctly perceive that this is false’, I felt he was basically making an argument from ignorance. From this there seems to have been derived a principle (Hume declares it a maxim of metaphysics in his Treatise, I think) that what is conceivable is possible, or some such. Which I think Hume used in his argument for uncaused effects. Which I’ve employed more than once when arguing on the intertubes against Lane Craig’s cosmological argument, which rests seemingly on the principle of sufficient reason and its reliance on causal explanations. If effects can be uncaused, then the premise that everything that begins to exist has a cause is false.
Anyway, I’m waffling. Thanks again.
John, my discrete mathematics book basically gives the history of logic as Aristotle, the Stoics, the scholastics, Venn, Boole, Frege, Russell, and so on.
Exactly the historical foreshortening that pisses me off!
Well, it was a discrete mathematics book that only included predicate logic to serve as a foundation/tool for mathematical proofs. I was surprised that it had information boxes about logicians at all.
Well, Whately’s nice, but that’s because he’s almost the first British logician since Aldrich to go beyond Aldrich’s very simple logical textbook in sophistication; Whately is, in other words, the first beginning of the upturn. But Whately’s still doing little more than presenting an elementary textbook; it’s the interest he sparks in logical questions in people like Boole and De Morgan that really starts things back on the road to recovery. I can see why one would want to mention him, but I would classify his role as primarily just making people realize that there were logical discoveries to be made: it’s still a long way forward from Whately to reclaim what had been lost. But very much in agreement about logic not beginning with Frege — the algebraic logicians did splendid work.
That’s not chopped liver! There’s a special place in Plato’s heaven for those who set up the conditions for a successful research program. Moreover, you ignore people like Jevons, who came up with both algorithmic processing of logic, and also quantifiers (yes, I know that’s an old debate, but I plump for Jevons). Basically logic was active throughout the nineteenth century, and as a result we got both formal logic and philosophy of science, so what’s not to love?
Who decided that logic was better taught with derivations instead of algebraically? It seems that in philosophy, at least, one will find derivations to prove the validity of some argument, while in mathematics books one finds algebraic substitutions to prove some argument. Is it just that the latter is useful in maths, and not so much in philosophy?
Agreed, it’s certainly not chopped liver (I actually take the trouble to mention Whately myself when I occasionally do a history of logic spiel for my intro course). I tend to think of Jevons (explicitly writing on logic about 1860ish with Boole as one of his starting points) as after Boole (1850ish), when we are considering the history of logic, but you’re right that he shouldn’t be ignored.
Despite the fuzziness of such things, I’m going to hold fast to Boole as the key landmark here. There are logicians before him who do say interesting things, and there are things in political economy and the like that could be seen as setting up for later logical work, but mostly it’s things that had already been discovered in the thirteenth or fourteenth century and been forgotten, and it’s only with Boole and De Morgan that we get the really systematic re-thinking that allows for extensive discovery. Actually, I think that if it weren’t for Boole most of the logical work from the nineteenth century would look like Bentham’s lists of fallacies, Mill’s System of Logic or Newman’s Grammar of Assent, which indeed is where most of the work was post-Whately, pre-Boole: not that these aren’t interesting, but I don’t think it can really be said that they provide a good context for systematic logical discovery.
I’m very much a fan of the nineteenth century (seventeenth and eighteenth, too), and especially in philosophy of science; but I think it’s very difficult to make a plausible case that there’s any sort of real research program in logic, as opposed to some scattered people saying interesting things about scattered logic topics, before the second half of the century. Lots of great philosophical work is done in the first half of the nineteenth century; I just don’t see that much of it was done in logic. Certainly the Benthams and Mill, for instance, despite coming up with some interesting ideas here and there, aren’t usually much of an improvement beyond Watts, who comes up with some interesting ideas here and there, but is in turn not much of an improvement beyond Port-Royal. But starting with Boole we have an explosion: Boole, Jevons, Venn, Carroll, Keynes, Peirce, Ladd-Franklin, etc., etc. And I would say that it’s really with this movement that logic as a field of study first begins again to have the sophistication that can really be said to rival the age of Ockham and Buridan. Prior to that, ‘active’ seems less plausible to me as a description of the research than ‘occasional’.
I haven’t looked at the topic myself, but I imagine single most important factor was the influence of Bertrand Russell. Venn and most other logicians of the day seem to have considered Frege’s Begriffsschrift an unpromising step backwards (Venn’s review of it was quite brutal), but Russell, who shared Frege’s interest in philosophy of mathematics, argued that when you looked past the cumbersome notation it was very insightful; and because of his advocacy a better notation was found, and thus the predicate calculus in the form we usually think of it, and people kept using it because that’s what they were taught. I’m sure there are other factors I don’t know about, but I’d be surprised if Russell wasn’t the single most important one.
I read somewhere that Russell was the founder of, and his shadow still presides over, analytic philosophy. Which is interesting, because he was surpassed in philosophy by his pupil, Wittgenstein, and others of course. It seems that his attitude, or whatever you want to call it, held the day. I may have that all wrong.
I first encountered logic because I read The God Delusion, and then read many knowledgeable types say that the Dawk had really no idea philosophically speaking. So I started trying to learn philosophy, which led to learning logic from books by Tarski, and then Kalish et al. (I really should have gotten a modern book on logic in the first place, but as I had no idea, I bought whatever someone told me online when discussing the subject.) Anyway, to cut to the chase, all the logic books of a philosophical bent seem to favor derivations. I’d never encountered the algebra of logic until I had to learn mathematical logic. There were no derivations in my math subjects.
It seems somewhat ironic that philosophy favours derivations in place of algebra, because of the influence of Russell, when Russell was a mathematician (or philosopher of mathematics) as much as he was a philosopher. At least, it seems that way to me.
@Brian, John & Brandon, it seems your little discourse on the history of logic took place whilst I was enjoying my well earned sleep but if I may add a footnote or three…
I agree totally with Brandon that Boole is the point where modern logic ignites in the 19th century, although it turned out to have a slow fuse: Boolean logic didn’t really take off until long after Boole’s death.
I would still however break a lance for Richard Whately. Until he published his Elements of Logic, logic had been effectively dead in the English university system for more than a century. Whately’s logic revival had repercussions that directly affected Boole and the genesis of his logic. Without Whately there would not have been the dispute on the quantification of the predicate between Hamilton and De Morgan that led Boole to develop his system of algebraic logic.
Having said that, there is a second stream of influence that all too often gets ignored in the historical discussion, and that is the creation of abstract algebras by the Cambridge-Dublin axis, starting with Woodhouse and Peacock and continuing through the other Hamilton, Duncan Gregory, De Morgan, Boole, Cayley etc., all the way up to Whitehead with his Universal Algebra.
On the question as to why philosophical logic is of the axiomatic-deductive variety rather than the Leibniz/Boole algebraic type: this is a very interesting question that either gets ignored or answered wrongly. It is often claimed, or just simply assumed, that the Frege/Russell/Peano logic became dominant as soon as it emerged at the beginning of the 20th century. However, algebraic logic was still in contention, if not dominant, up to about 1930, with Löwenheim, Skolem and Tarski all working in the algebraic logic tradition of Schröder, i.e. Boolean logic with quantification. The turning point was Hilbert and Ackermann’s Grundzüge der theoretischen Logik (1928), which adopted the logic of Principia.
Thanks for the comment, especially for the heads-up on the importance of Hilbert & Ackermann.
I think I am being outvoted by Smart People on the significance of Whately; but, given that I actually do like Whately, I’m happy to concede that there are probably things about his work and his influence that I’m not properly taking into account!
I made an inductive inference that Thorny would have the low down on this subject.
1) Reality is made up of bits far smaller than animals. It’s not necessarily true that a crude model of it can be as accurate, consistent, and useful as we might wish it to be.
2) You severely undersell Bayesian inference. “Now, this means that we cannot use convergence as evidence for evolution.” We cannot deduce evolution’s truth from convergence. We can use it as evidence.
Although we do need to calculate the posterior probability of convergence under other models too.
Classification restricts the inferential domain the arguments work in, and so the absence of a consequent does undercut the antecedent of the law or generality, if the scope is countable.
Very glad to see someone recognizing this. The fact that induction is crucially affected by the domain is one of those things that’s really important that even philosophers sometimes forget (too much dealing with the domain of all possible worlds, which is often not the best domain for working in) — it leads occasionally to funny business with interpreting counterexamples, leads people to overlook the relationship between inductions and eliminative arguments, and leads people to overlook the role of criteria for relevance (which are related to the domain) in inductions, just to give some examples.
No, it does not mean that at all. It only means that evolution cannot be the result of a logical inference from the evidence. And we really should say that the evidence supports the theory, not that it confirms the theory.
Generally speaking, a scientific theory is an attempt to solve a “goodness of fit” problem, rather than a logical inference problem. Often much of the data is theory-laden, which rules out any possibility of such a logical inference. It’s a bit like solving a crossword puzzle. If you answer all of the horizontal clues, and then you observe that the vertical clues are then automatically solved, that adds a lot of support to your solution for the horizontal clues (because everything fits so well).
>>Logic is taught as the second most boring subject after calculus
Well it may be taught as that but it isn’t. I think logic is very interesting.
Brandon >> They often (but not always) preferred working with categorical propositions rather than conditionals
Hmm. They talked a lot about ‘consequences’, which are essentially conditionals, and pretty much every logical term we use to talk about such inferences dates from at least the 13th century. Probably earlier, but we don’t have many surviving sources.
>> if the general claim (the “law”) says that all As are Bs, and this B is not an A, then the law is false
Shouldn’t that be ‘this A is not a B’? If the law is that ‘every man is an animal’, and this animal (a giraffe) is not a man, that doesn’t disprove the law.
>> modus ponens, a Latin term we inherited from the marvellous logicians of the middle ages.
Actually I have never come across a medieval logician who used that term (although you are right that most of our logical terms originate with the Latin schoolmen). It is probably later than the 13th/early 14th century. You can use my amazing and celebrated and highly-praised-by-some logic site searcher here http://www.logicmuseum.com/latinsearcher.htm to verify this. (Includes all of Aquinas, most of Ockham, and many other scholastic logicians and philosophers).
I suppose I should have said that “every medieval logical text I have looked at fails to contain the term ‘modus ponens’” and then argue inductively to “every medieval logical text whatever fails to contain the term ‘modus ponens’”, although I think that would be a bad argument.
>>This problem goes back to David Hume
What you perhaps should have said was that every source you had looked at says that the problem goes back to David Hume, and then made the inductive inference to ‘every source says that the problem goes back to David Hume’, but in this case the inference would be false, because long before that Avicenna discusses a problem about the Sudanese: “Were we to imagine that there were no people but Sudanese, and that only black people were repeatedly perceived, then would that not necessarily produce a conviction that all people are black?” There is an excellent paper on the history of it here http://stephanhartmann.org/HHL10_Milton.pdf .
Note also (as the paper notes) that the medieval writers used the term ‘induction’ differently. More like ‘proof by example’. I strongly recommend John Longeway’s translation of Ockham’s commentary on the Posterior Analytics, with a great 100-page introduction. I discuss it here http://ocham.blogspot.com/search/label/longeway
Brandon recommended this post by the way.
I’m not sure I understand your contrast between a “goodness of fit problem” and a “logical inference problem.” It seems to me that science is chock full of logical inferences, strict derivations from theoretical postulates. Of course, every such inference is accompanied by a ceteris paribus clause, but that doesn’t change the nature of the inferences involved.
Sure. But in that case the theory is a premise, rather than what is inferred.
I have not had any formal training in logic, but have read some of Popper. I think hypotheses cannot be confirmed, only supported or rejected. Is this a reasonable way to look at it?
That is what I am rejecting; and I think most philosophers of scientific methodology would also. Hypotheses cannot be deductively confirmed, no, but why think that is the only game in town?
Some random thoughts:
While everyone makes a big deal about Cygnus atratus, I feel sorry for poor Cygnus melanocoryphus. How would a logician confront that bird? Does it falsify the claim that all swans are white?
I don’t think your statement about Popper and systematics is quite right. Some cladists (Farris, Kluge, etc.) adopted Popper as patron saint, but hardly all. There are, even today, arguments about who is the true Popperian, but most systematists don’t care now, and I don’t think they cared then either. Trying to shoehorn the principle of parsimony into Popper’s falsification language can be an amusing exercise in cognitive dissonance, though.
Also, the idea of classification as inductive predictor was certainly quite alive as late as the mid-1970s, and played a prominent part in the Cladist Wars. Many trees were killed in arguments over which system of classification was the best summary of character data, and hence the most stable. The entire justification for phenetics, in its ultimate form as abandonment of any quest for phylogeny, was just such a claim. What finally, I think, won out was a realization by most that efficient summary wasn’t the point of classification after all, and that we really should be trying to represent phylogeny, even if its groups were not the most efficient representations of similarity and didn’t allow the greatest number of correct inductive conclusions.
Probably most scientists in any field do not care about philosophy; those that did at the time chose Popper. And the predictive induction claim is something I got from (you guessed it) Gareth Nelson, in a paper published around 1979. I am not trying to be original here, and neither was he at the time.
Yeah, he was one of the major makers of such arguments. He was arguing for cladistic classifications as best summarizer of information. It was a fine, three-way battle among cladists, “evolutionary” systematists, and pheneticists. And I do think “who cares?” did eventually emerge as the winner.
But you didn’t answer the swan question.
I think “who cares?” was a retreat from battle, not a winner. In fact I think that the approach I call “statistical phylogenetics” (what Joe Felsenstein calls the “It Doesn’t Matter Very Much” “school”) is in fact an admission of frustration, not a solution.
If it isn’t white it isn’t a swan, OK?
Now, now. It’s not “who cares about classification?” but “the proper principle of classification is phylogeny, but not because it’s the most efficient summary of information”.
And how much white? Every swan has some non-white bits. No swan is 100% white or black. I’m asking where you draw the line. Hey, even Cygnus atratus has some white on it.
Same thing, really.
A swan that is not black in every part is not a black swan. Aristotle said so, I’m sure.
Ah, well. At least there are still green vases.
No, the vases are grue.
Filled with bleen water, or at any rate something that closely resembles water on Twin Earth.
I was “forced to undergo a logic class.” I almost failed the course (scraping by with a D), but I did learn “it is enormously relevant to everything we do,” so thank you for the “simple introduction” or refresher.
I hesitate to post this for fear of making a fool of myself, as I am no expert on these matters, but this doesn’t seem to me to make sense even as a rough explanation. You seem to be saying that, according to the view under consideration, a hypothesis is confirmed just in case (“to the extent that”?) p(h|e) > p(e|~h) – or, if you are using “likelihood” in the technical sense, just in case p(e|h) > p(~h|e). Neither one makes sense to me. Surely the relevant comparison has to be either p(e|h) > p(e|~h) or p(h|e) > p(~h|e) – or perhaps p(h|e) > p(h). In any case, it can’t be between the “likelihood” (probability?) of the hypothesis and that of the evidence, as you say.
John did write “very roughly”.
(there you go, John, I’m writing your replies for you)
You are probably right. I find Bayesian logic confusing and unhelpful. But another role for this blog is for me to make careless statements in public so I can be corrected by Those Who Know.
Fortunately, we have The Stanford Encyclopedia of Philosophy to set the record straight:
So what John should have written is this:
Note: (1) The pertinent conjunctive phrase is “confirmed to some extent just in case,” not “confirmed to the extent that.” (2) “Otherwise” here means “otherwise than in light of the evidence,” i.e., “without regard to the occurrence of the evidence,” not “in light of the non-occurrence of the evidence.” (The original passage does not make this mistake, but I thought it worth adverting to in case somebody does.)
This may indeed be “unhelpful,” as John says, but it does not take account of the work that Bayes’s theorem does. This is the point at which a phrase that occurs in the original passage, “the likelihood that the evidence would occur otherwise,” becomes relevant. Setting aside the potentially confusing term “likelihood,” this touches on the idea that one can compute the probability of a hypothesis in light of certain evidence from the probability of the evidence in light of the hypothesis and certain other values. But my command of these matters is too shaky for me to venture to expound them further.
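To make the comparison concrete, here is a small illustrative sketch (the numbers are invented, not from any commenter) of how Bayes’s theorem computes p(h|e) from the prior p(h) and the two likelihoods p(e|h) and p(e|~h); the evidence confirms the hypothesis, i.e. raises its probability above the prior, exactly when p(e|h) > p(e|~h):

```python
# Illustrative sketch of Bayesian confirmation; all numbers are invented.
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Compute p(h|e) via Bayes's theorem, using the law of total
    probability for p(e) = p(e|h)p(h) + p(e|~h)p(~h)."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# A hypothesis with prior 0.3; the evidence is likelier under h than under ~h.
p_h = 0.3
p_post = posterior(p_h, p_e_given_h=0.9, p_e_given_not_h=0.2)
print(round(p_post, 3))  # 0.27 / (0.27 + 0.14) = 0.659
print(p_post > p_h)      # True: e confirms h because p(e|h) > p(e|~h)
```

The point of the sketch is only that the comparison runs likelihood against likelihood (p(e|h) versus p(e|~h)), never the probability of the hypothesis against that of the evidence.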
MKR, many thanks. I accept your revision, and will amend the article.
Regarding science I think you can help by expanding the claims a bit. A scientific model does not simply say “if A then B”, it says “If A, then B and not not-B”. That is, it limits the predictions. The problem with ID/Creationism is that it allows, but does not predict, results. So the ID/Creationist model actually says “If A then B or not B”. We use evidence to eliminate models that predicted some other result. There may be an infinite number of possible models that predict B rather than not B, but there are only a limited number of actual models that do this. And being limited beings we can only deal with what actually exists and not what could exist.
(I have a troublesome feeling I went down a familiar path here rather than the right one.)
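The contrast between “If A then B” and “If A then B or not B” can be put in toy truth-table terms (my own sketch, with invented helper names): a prediction is informative only if some possible outcome would falsify it, and “B or not B” is satisfied by every outcome, so no observation can eliminate the model that makes it:

```python
# Toy sketch: a prediction rules something out only if some outcome falsifies it.
def informative(prediction):
    """A prediction (a boolean function of the outcome B) is informative
    iff at least one possible outcome would falsify it."""
    return any(not prediction(b) for b in (True, False))

predicts_b = lambda b: b                  # "If A then B": falsified when B is false
predicts_anything = lambda b: b or not b  # "If A then B or not B": never falsified

print(informative(predicts_b))         # True  - observing not-B eliminates the model
print(informative(predicts_anything))  # False - no observation can eliminate it
```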
Comments are closed.