- Conceptual confusion
- The economics of cultural categories
- What are phenomena?
- What counts as sociocultural?
- Constructing phenomena
- Explanations and phenomena
There is a naive empiricist view, held by nobody on close inspection, that phenomena merely present themselves to the observer and call for explanation. At least since Kant, such a view has been untenable, as Michela Massimi has shown. As she notes, it is well understood that phenomena are underdetermined by observational data, and she plumps instead for the following view:
whenever we have prima facie rival potential causes for the same phenomena, in order to distinguish between them and to determine which entity-with-causal-power has actually produced the observed effect, we must in the end rely on a description of what causal powers/capacities/dispositions an entity is to have so as to produce the observed effects. This description is given by a scientific theory. [Quoted in Massimi 2011, from her 2004]
On the one hand, I agree that we need a prior understanding of the causal powers that produce our observations of phenomena; on the other, I do not agree that this understanding is necessarily described by a scientific theory. To make sense of that view, and to support my whole pragmatist view of phenomena and explanation, I must needs* do a bit of work.
Let us begin with the naive empiricist (NE). He says that in observing the world, certain phenomena are ready made and call for explanation. But the Kantian replies that the NE must choose what patterns in the data to include in the phenomenon, and what to exclude as irrelevant or noisy. Imagine trying to program an AI to select the “right” patterns to call explanatorily relevant phenomena out of all possible data sources, for instance. Hence, she will say, the NE has no access to phenomena until he has a theory of causality, relevance, and explanation in that (the phenomena’s) domain.
But this leaves us with a Starting Problem. Bayesian logic deals with it by an iterative process: prior probabilities are refined in the light of new data, (asymptotically?) approaching the correct patterns. But the naive first investigator (not necessarily a naive empiricist) faces a field that has no scientific theory on which to draw, and hence no prior probabilities for that domain. How is she to commence? What should she pay attention to? Bayes suggests an answer, and it has to do with how we extrapolate from the general knowledge we have of the world to specialised domains.
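The iterative refinement just described can be made concrete with a minimal sketch (my own illustration, not drawn from any source discussed here; the scenario and all names are hypothetical): an investigator starts from a flat prior over hypotheses about a coin’s bias and updates it with each observation, so that the posterior converges on the true pattern as data accumulate.

```python
# Illustrative sketch: iterative Bayesian updating of a flat prior
# over a coin's bias. The "Starting Problem" corresponds to the flat
# prior; repeated observation (asymptotically) picks out the pattern.
import random

random.seed(42)

# Discretise the hypothesis space: candidate biases for the coin.
hypotheses = [i / 10 for i in range(11)]            # 0.0, 0.1, ..., 1.0
prior = [1 / len(hypotheses)] * len(hypotheses)     # flat "naive" prior

true_bias = 0.7  # unknown to the investigator

def update(prior, heads):
    """One Bayesian step: multiply prior by likelihood, renormalise."""
    likelihoods = [h if heads else (1 - h) for h in hypotheses]
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

for _ in range(500):
    heads = random.random() < true_bias
    prior = update(prior, heads)

best = hypotheses[prior.index(max(prior))]
print(best)  # the posterior mode should sit near the true bias
```

The point of the sketch is only the structure of the process: each round of data reshapes the distribution that was carried into it, which is exactly why the very first round, with no domain-specific prior at all, is the problematic one.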
Consider a phenomenon: the precession of Mercury. This is where the perihelion of Mercury’s orbit itself slowly rotates around the sun. It is as clear a phenomenon as you can find in science, but it would not have been a phenomenon to the Ptolemaic astronomers, for the simple reason that they could not have recognised it as such without previously having adopted both a heliocentric (or perhaps Keplerian) model of the solar system, and Newtonian physics (as opposed to, say, Descartes’ vortex physics). Yet the observation of the precession of Mercury’s perihelion could be done without very much in the way of theoretical knowledge, using measuring instruments that in no way depended upon either theory. It simply was not an anomaly worth noting until the Newtonian/Copernican model had been adopted, and it deviated from the expectations of that model. And even then, it took around 150 years to show up as an anomaly.
With this (admittedly violently oversimplified) statement of the history of this issue, let us draw some tentative inferences.
1. Astronomers had an account of the causal powers that resulted in the observations – the transmission of light via optical lenses, along with assorted geometrical and mathematical techniques. None of this required very much theory. As Hacking noted, optical techniques were testable without these theories, and the theory of the propagation of light was not finalised until after this.
2. Measurement is more important here than theoretical descriptions of these causal powers, contra Massimi.
3. Measurement is something that is independent of the phenomenon, and the domain of explanation, that sets up the anomaly.
4. The phenomenon calls for an explanation (and possibly, a new theory).
It seems from this example, and many others, that in order to identify a phenomenon, one does need prior assumptions about the “normal” or expected dynamics of the domain under investigation. How do we acquire these expectations, in order to be able to construct the phenomena? The usual explanation is, as Massimi says, that we rely on prior theory. And this is often, and in modern science usually, the case. There’s a lot of theory in play in nearly all domains of scientific exploration. Nobody starts an investigation nowadays without a slew of ancillary theory and techniques. So we can concede that much to the Kantian.
But the Starting Problem generalises beyond individual cases of novel investigations. How did science itself get going? In the absence of prior theory of any real utility, there has to be a process. I was taught, as an undergraduate, that there was always a prior theory – Aristotle, Ptolemy, Galen, and so forth. But at some point as we move back in time, the meaning of “theory”, as a set of models, techniques and predictive results, fades away into religious and metaphysical speculation, superstition, and cultural practices (literally: cultic behaviours like consulting oracles). Did science bootstrap itself into existence? If it did, that is a counterexample to the claim that we use theory to construct phenomena.
There are two approaches to this issue:
1. Science evolved from quasi-scientific etiological accounts (origin stories of the gods and the creation of the cosmos); and
2. We have “theory” from our evolution as cognitively competent organisms.
Neither is all that compelling. Etiologies (like Hesiod’s Theogony) have a completely different function to natural philosophical investigation: they are moralising narratives, not empirical tests and studies of the nonsocial world. And to call our evolved cognitive predispositions “theory” is to beggar the meaning of “theory” so completely that anything is theory, a move I do not like at all. If “theory” has any meaning in science, it must not be watered down to include our disposition to notice mesoscale phenomena that we might eat, navigate, fear or copulate with.
And yet, that set of sensory dispositions is what does underpin scientific investigations. Most of our measuring tools are ways to represent at mesoscale what we cannot otherwise see or notice. A telescope and a microscope both present phenomena in ways we can use our evolved sensory apparatus to observe. However, as anyone who has used either of these devices knows, some experience is required to interpret what is seen, and the more a measuring device abstracts the microscale or macroscale, the more training it takes to be able to interpret what is measured. Such things as molecular assays, statistical analyses, cloud chambers, and x-rays all require more than naive sensory processing [see note 5 in Bogen and Woodward 1988].
This is why I have said that a phenomenon is something recognised by observation undertaken by a trained and experienced specialist†: it’s not just the observing (as Bogen and Woodward said) and it’s not just based on theoretical description (as Massimi says). Phenomena are things that contrast with our prior expectations, and so call for explanation. Those prior commitments may include theory, to be sure, but often they do not. That is to say, they need not involve theories in the domain under investigation. We all have theories of this or that which set our expectations, or something that might, with sufficient effort, be cast as theories, but most of what we expect comes from the exigencies of interacting with the environment, both social and extrasocial, in order to make a living. That is to say, from trial and error, leading to success or failure in some motivated goal.
So in order to understand why phenomena are things that call for explanations, we have to understand why explanations are called for, and for that I turn to the Contrastive Theory of Explanation [Van Fraassen 1980, Garfinkel 1981].
Bogen, James, and James Woodward. 1988. “Saving the phenomena.” The Philosophical Review 97 (3):303–352.
Garfinkel, Alan. 1981. Forms of explanation: rethinking the questions in social theory. New Haven, Conn: Yale University Press.
Massimi, M. 2004. “Non-defensible middle ground for experimental realism: Why we are justified to believe in colored quarks.” Philosophy of Science 71 (1):36–60.
——. 2008. “Why there are no ready-made phenomena: what philosophers of science should learn from Kant.” Kant and Philosophy of Science Today, Royal Institute of Philosophy Supplement 63:1–35.
——. 2011. “From data to phenomena: a Kantian stance.” Synthese 182 (1):101–116.
Van Fraassen, Bas C. 1980. The scientific image. Oxford: Clarendon Press.
Woodward, J. 2000. “Data, phenomena, and reliability.” Philosophy of Science 67 (Supplement):S163–S179.
*Must needs is one of those wonderful archaic English phrases that seems totally unnecessary but has just the right flavour for academic pretentiousness. I am using it wrongly here, of course.
† Woodward defines phenomena thus: “Phenomena are stable, repeatable effects or processes that are potential objects of prediction and systematic explanation by general theories and which can serve as evidence for such theories” [Woodward 2000]. I do not deny this definition either.