Last updated on 1 Mar 2019
- Conceptual confusion
- The economics of cultural categories
- What are phenomena?
- What counts as sociocultural?
- Constructing phenomena
- Explanations and phenomena
Anyone who has ever had a child knows the problem with indefinite “why?” questions. The interrogator asks “why?” of every answer given, until the responder gets tired or emotional. And this is not just a problem with preschool-age children, but also with scientists. It is for this reason – an indefinitely long chain of questions – that science is divided into investigatory domains: once one has reached the base level of explanation in a domain, to continue to ask why is to hand the problem off to another expert group. This prevents scientists from getting emotional, to a degree.
Science is divided in practice into smaller, bite-sized questions: why does a peacock have that ridiculous tail? Why does Mercury’s precession not match predictions? What causes X? Once that question is resolved, experts move on to another tractable issue in the “field”. Scientific fields, however, are often merely institutional divisions, and crossing those divisions negatively impacts the ability of a researcher or research group to gain funding and respect (one reason why interdisciplinary studies are looked at askance). So the limitations of investigation and explanation are often fairly arbitrary and artificial.
And yet… when a science finds that its division of labour is no longer progressing, such institutional divisions will often evaporate and be redrawn. Molecular biology – an amalgam of molecular chemistry, genetics, cytology and (increasingly) developmental biology, among others – is a good instance of this. In this way, sciences adapt to a twofold set of pressures, one social (or, as I have called it in this series, “constructed”) and the other natural (“unconstructed”), to produce a set of problems, explanatory resources, techniques and categories that are at once natural and artificial.
In the early 1980s, two writers, Bas van Fraassen (1980) and Alan Garfinkel (1981), independently came up with an account of explanation which has come to be known as “contrastive explanation”. Prior to this, explanation was generally regarded as the process of deriving the observed outcome from laws and initial conditions, the so-called deductive-nomological model. Van Fraassen and Garfinkel, however, argued that to explain is to select the best solution from a contrast-class of alternatives for that problem. In short, to explain is to give a relevant answer to a well-defined question.
In van Fraassen’s contrastive model (1980, 142ff), there are three factors in an explanation:
- A Topic; that is, a fact within an investigatory subject.
- A set of contrasts. Lipton (1991) calls these “foils”.
- A relevance relation (to exclude answers that are not part of that topic).
A why-question is thus a three-place relation: Question = <Fact, Foils, Relevance>, and an answer to the Question is of the form:
Fact, in contrast to all the alternative foils, because of Answer.
Or, to put it in ordinary English: this fact, rather than any of the alternatives, obtains because something makes it so. This is pretty obvious, but it points up what an explanation must do – exclude all the other possible alternative facts, in a way that expresses an answer to that question.
This is all abstract (and I have avoided van Fraassen’s notation to try to make it less so), so let’s use Garfinkel’s unfortunately apocryphal example of Willie Sutton. Sutton was a notorious bank robber in the 1920s and 1930s. Once, so the anecdote goes, Sutton was asked why he robbed banks (the version Garfinkel uses has a priest asking; other versions have reporters and gaolers). Sutton replied “That’s where the money is”. The interrogator intended, from context, to ask why Sutton robbed rather than making an honest living; Sutton, however, had a different contrast-class: robbing banks versus robbing corner stores, for example. He explains his choice of target, rather than his choice of activity, by noting the greater return from robbing banks. Much strife in the history of science has been caused by a failure of competing researchers to set up the same contrast-class, and often scientists call their opponents (in the social sense) “unscientific” due to a lack of shared contrasts. But that is for another discourse. For now, I want to focus on how contrast classes cause phenomena to be noted.
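The Sutton anecdote can be put concretely. Here is a minimal sketch (my own illustration, not van Fraassen’s formalism, and with entirely hypothetical string-matching predicates standing in for relevance relations) of the three-place question: the same surface words pose two different questions, because the foils and the relevance relation differ.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WhyQuestion:
    """A contrastive why-question: <Fact, Foils, Relevance>."""
    fact: str
    foils: list[str]                  # the contrast-class of alternatives
    relevant: Callable[[str], bool]   # admits answers bearing on THIS contrast

# The same surface question, two different contrast-classes:
priest_q = WhyQuestion(
    fact="Sutton robs banks",
    foils=["Sutton makes an honest living"],
    relevant=lambda ans: "chose crime" in ans,   # wants the choice of activity explained
)
sutton_q = WhyQuestion(
    fact="Sutton robs banks",
    foils=["Sutton robs corner stores"],
    relevant=lambda ans: "money is" in ans,      # explains the choice of target
)

answer = "because that's where the money is"
print(priest_q.relevant(answer))   # False: does not address the priest's foils
print(sutton_q.relevant(answer))   # True: addresses Sutton's contrast-class
```

The toy predicates are stand-ins only; in practice the relevance relation is contextual and tacit, which is exactly why interlocutors can talk past one another.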
Instead of a contrast-class, I want to make use of the physics notion of a phase space (or a Hilbert space, for mathematicians)*. Such a space is defined by a number of axes (plural of axis, not the thing scientists sometimes wish to bury in their opponents’ heads), each of which is an independent variable of the issues or ideas in play, and the space can have any number of dimensions. Assuming each axis is a set of real numbers, there are indefinitely many possible coordinates within that space, each of which represents, in our case, a potential answer to a topic question. To answer a question is thus to assert that a coordinate in that space is correct.
Now any scientific field, at a given time and place, has an existing set of alternatives within a phase space. And in that field, there is a subset of viable alternative explanations/foils. The smaller the viable subset, the more consensus there is in that field. In the case of a field that has a single coordinate as its explanation, there is 100% consensus. Generally, though, there is not just one alternative in play. Sometimes the viability set depends upon measurement error, sometimes on variant theoretical terms, and so on.
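As a toy illustration (with made-up axes, a discretised space, and a crude consensus index of my own devising, none of which is in van Fraassen), one can sketch a phase space of candidate answers and the field’s viability filter over it:

```python
import itertools

# A toy discretised "phase space": each axis is an independent variable,
# and each coordinate is one candidate answer to the topic question.
axes = {
    "parameter_value": [-0.2, 0.0, 0.1, 0.2],
    "posited_mechanism": ["A", "B", "C"],
}
space = list(itertools.product(*axes.values()))

# The field's current viability constraint on candidate answers:
def viable(coord):
    value, mechanism = coord
    return value > 0.0      # e.g. negative or null values are off the table

viable_set = [c for c in space if viable(c)]

# One crude index: a lone viable coordinate gives consensus 1,
# a fully open space gives consensus 0.
consensus = 1 - (len(viable_set) - 1) / (len(space) - 1)
print(len(viable_set), "of", len(space), "coordinates remain viable")
```

The point of the sketch is only that consensus tracks how small the viable subset is relative to the whole space of alternatives.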
Now, suppose a field that has a high consensus (to pluck a figure out of the air, 97%). This means that the experts in the field have a bounded set of expectations for any new observation. Nobody in climate science, for instance, expects that the next set of measurements will show a massive cooling, or even a stable set of temperatures. The explanation for the facts observed over the past 100 years is that CO2 is causing the retention of heat. An observation of cooling would be anomalous given the current state of explanation (that is, it would fall outside the viability space). That, in fact, is what would make it a phenomenon: it would call for an explanation not currently in play.
This is not the only way that phenomena are recognised, of course. If a set of prior measurements (say, of animal sizes) fell within a distribution curve, a novel observation of sizes outside that range may also trigger recognition of a phenomenon that calls for explanation. Likewise, an entirely novel set of observations (of a new species, for instance) may be a phenomenon that calls for numerous explanations to be tested. This is how the Starting Problem is overcome. Observers never start tabula rasa: prior experience sets the limits of expectation, and thus identifies phenomena, so even a prescientific culture can recognise phenomena when there are no prior explanations to be had. Even a prescientific observer (a hunter, say) has prior expectations, based on cultural inheritance and personal experience.
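That recognition process can be sketched minimally, with hypothetical numbers: prior observations bound the expected range, and a reading outside it is flagged as a phenomenon calling for explanation.

```python
# Prior measurements (hypothetical animal sizes) set the expectations.
prior_sizes = [12.1, 13.4, 11.8, 12.9, 13.0, 12.5]

lo, hi = min(prior_sizes), max(prior_sizes)
margin = 0.5 * (hi - lo)   # a crude tolerance around past experience

def is_phenomenon(observation):
    """An observation outside the expected range stands out."""
    return not (lo - margin <= observation <= hi + margin)

print(is_phenomenon(12.7))   # False: within expectations, nothing to explain
print(is_phenomenon(25.0))   # True: outside the range, calls for explanation
```

The tolerance rule is arbitrary, of course; the point is only that expectations derived from prior experience, not theory alone, do the flagging.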
To return to the phase space idea, what makes answers viable in that space? There is no principled answer to that, I think, but one point is that science iteratively refines its viability spaces over time, based largely on empirical data. If your expectations are, for instance, that orbits will be the shortest geodesics in the gravity well, then an orbit permitted by an explanatory model that is not the shortest geodesic lies outside the viability space. As van Fraassen noted, a theory must be empirically adequate†. It must also force that coordinate, within measurement error, to be that sort of outcome, and so on. But theory is not the only thing that sets empirical expectations.
So phenomena are things that stand out. What does this mean for natural categories? Stay tuned.
* Gärdenfors (2000) calls this the “geometrical approach to the structure of concepts”; see especially his section 4.4.3. On phase spaces, see Wikipedia.
† There are numerous “theoretical virtues”. See the forthcoming paper by Michael Keas (2017).
Gärdenfors, Peter. 2000. Conceptual spaces: the geometry of thought. Cambridge, Mass.: MIT Press.
Garfinkel, Alan. 1981. Forms of explanation: rethinking the questions in social theory. New Haven, Conn: Yale University Press.
Keas, Michael N. 2017. “Systematizing the theoretical virtues.” Synthese.
Lipton, Peter. 1991. Inference to the best explanation. London: Routledge.
Van Fraassen, Bas C. 1980. The scientific image. Oxford: Clarendon Press.
The Sutton joke is frequently repeated, but I don’t see it saying much other than that causation is complex, especially in psychology. The contrastive “insight” only says something about the practice of human problem solving, not about the form of the final explanation.
No, it doesn’t, because the form of explanation is context-dependent in each field. What it does do is indicate that explanation is question-dependent: the “erotetic logic” of explanation. That is, what question is at issue in each case.
See: Prior, Mary, and Arthur Prior. 1955. “Erotetic Logic.” The Philosophical Review 64 (1):43-59.
Hi John. I know where you’re coming from, and it’s attractive to some extent for describing theory formation and testing, and even the giving and receiving of explanations at an individual level, but where does holism then fit in? What about effective theories, where there is an excellent data fit but the mechanistic explanation is actually below the level of coarse-graining chosen? The nice thing about reductionism is that the different levels of explanation all interlock.
…take the example of someone explaining a mathematical theorem. Is this different from scientific explanation at the level you are talking about? [I am wondering where semantics slots in too]. One person wishes for another to instantiate a particular conceptual structure (à la Gärdenfors). The success of this can be tested by presenting novel data to the student to be processed using this structure and obtaining identical results. We accept that there are a large number of equivalent proofs, short and long, and a good teacher chooses an explanation accessible given the student’s prior knowledge and computational capacities. This doesn’t say anything about the explanation that might be given to an ideal listener.
Anyway, carry on to pt 7!