Explanation

It occurred to me as I was chatting to a friend (KiwiInOz) that I actually have a philosophical method. This came as a surprise. I thought I just meandered along, but as I yet again drew a semantic space diagram to outline the issues (in this case in the biodiversity measures that my friend and I are working on), it hit me that this is my method: the analysis of issues in terms of axes determined by the active variables in a given situation, discourse or debate.

This led me to think about why it is my method, though. And the answer has to do with the nature of explanation.

My first paper (1998) came out of my masters thesis, in which I tried to give an account of science as an evolutionary process, following David Hull's lead, and I invented (or rather reinvented, as we shall see) what I came to call a "state space" model to show how scientific theories evolved. There were, so far as I could tell, two ways: either differing positions were taken (coordinates in the conceptual or semantic state space), or different variables (conceptual axes) were taken up by adding or eliding issues.

A scientific explanation, I thought, effectively takes a coordinate in the overall space on all of the contested issues/axes. In the physical sciences, this is more than a metaphor. The model of a theoretical explanation is in fact an equation with bound variables, and the explanation is twofold: first, to interpret the variables in physical terms; second, to show that the model accurately represents the empirical phenomena. When it does measurably better than competing explanations, a degree of explanation has been achieved that wasn't there before. The difference between the empirical data and the model's explanation is grist for a revision, or elaboration, of the model; in short, our present models and explanations tell us what we need to explain next. Only if repeated attempts to refine a model fail for long enough (a relative quantity of time) do we abandon our best models and start again, and even then they can come back to haunt us in the form of a novel application of the older model.
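The "coordinate in a semantic state space" picture can be put in concrete terms with a toy sketch. Everything here is illustrative: the axis names, the theories, and the numerical positions are my own inventions, not anything from the 1998 paper. The point is only that once contested issues are treated as axes, a theory becomes a coordinate and theories become comparable.

```python
# Toy sketch: each theory is a coordinate on a set of contested
# conceptual axes. Axis names and positions are invented for illustration.
from math import sqrt

AXES = ("gradualism", "selectionism", "adaptationism")  # hypothetical axes

def as_point(theory):
    """Read a theory's stance on each axis (0.0-1.0) as a coordinate."""
    return tuple(theory[a] for a in AXES)

def distance(t1, t2):
    """Euclidean distance between two theories in the semantic space."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(as_point(t1), as_point(t2))))

# Invented positions for two caricatured positions in evolutionary theory
darwin = {"gradualism": 0.9, "selectionism": 0.9, "adaptationism": 0.8}
kimura = {"gradualism": 0.8, "selectionism": 0.2, "adaptationism": 0.3}

print(round(distance(darwin, kimura), 3))
```

Taking up or eliding an issue, in this sketch, is just adding or removing an entry from `AXES`, which changes the dimensionality of the space the theories occupy.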

But state space models in the physical sciences tend to provide not a single coordinate but a surface over which the system or phenomenon under explanation can range: so long as the observed trajectory of the system is on or near the surface described by the model, there is, so far, an explanation of the facts. A full explanation would, I think, involve showing in a suitably rich state space that only the actual states are possible, in the order they are observed (which turns out to be a single coordinate in a state space of the dynamical model after all).
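The "on or near the surface" criterion can be stated as a small check. This is a minimal sketch under invented assumptions: the model surface, the tolerance, and the observed states below are all hypothetical stand-ins, chosen only to make the structure of the criterion explicit.

```python
# Sketch of "explanation as confinement": a model describes a surface in
# state space, and a trajectory counts as explained (so far) when every
# observed state lies within a tolerance of that surface.

def model_surface(x, y):
    """A hypothetical dynamical model: the z the model predicts from x, y."""
    return x * y

def explains(trajectory, tol=0.1):
    """True if every observed (x, y, z) state is within tol of the surface."""
    return all(abs(z - model_surface(x, y)) <= tol for x, y, z in trajectory)

# Invented observations that hug the surface z = x * y
observed = [(1.0, 2.0, 2.05), (2.0, 0.5, 0.98), (3.0, 1.0, 3.02)]
print(explains(observed))
```

A state far off the surface would make `explains` return `False`; on this picture that residual is exactly the "grist for revision" that tells us what to explain next.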

The traditional views of explanation, of which the nomological-deductive model of Hempel and the functional model that preceded it are the best known, have recently been superseded, at least in fashion, by what has come to be known as the causal-mechanical model of Salmon and Dowe. Each of these takes some exemplary case of explanation to which all other methods must conform. The ND model had a law statement, a set of boundary conditions, and the observed phenomena deduced from them. It was elaborated to include statistical inductive models, but the structure remained the same. The CM model relies on a theory of causation as a "conserved property, quantity or mark". I don't know which to think is best, or even whether any of them captures the ways science really reasons. But I do think that explanation has to limit the cases that are possible, so that the empirical cases are closely confined within a narrow set of possible states, whether the model is a lawlike generalisation, a causal account, or a statistical inferential heuristic.

This goes some way to explaining why abstract models like natural selection are explanations: suitably elaborated and interpreted (for instance, by identifying what the fitness carrier is in a given case, such as a trait, a gene, or an overall phenotype), the model does exactly what I think it should. Likewise it explains the conceptual role of the purely theoretical simulations that we find in theoretical and computational biology. If the abstract model has a suitable physical interpretation we can say it is explanatory, because it must confine the outcomes to a field close to the observed results. If it cannot be interpreted, it is only a partial, or even a subjective, explanation sketch. That is, we look at the dynamical trajectories predicted by the model and say that they look like they might be similar sorts of phenomena to something we actually see. But until they have been fleshed out, they are at best mere tools in the armory.

I recently published a paper on speciation (2007) in which I used my conceptual space method, which really isn’t mine, to elaborate differing modes of speciation relative to each other. A conceptual space model allows us to figure out what the relationships between different models may be. Without any particular statistical data or analysis, I gave the following model of speciation models:

[Figure: Speciation modes]

Here the conceptual model is based on three active variables: the amount of gene flow m per generation between individuals (or populations); whether selection is imposed by the non-specific environment that is extrinsic to the species, or is intrinsic in the form of competition such as sexual selection; and how much stochastic factors like drift and contingency of distribution contribute to the final speciation. Putting it this way enables us to overcome the dichotomies of the debate (selection v. drift; allopatry v. sympatry) and to see that the field is rather more elaborate than we realise (unless we are specialists who can keep all this in our heads).
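The three-axis space just described can be sketched as a data structure. To be clear about what is assumed: the mode names are standard, but the coordinates assigned to each mode below are illustrative guesses of mine, not values from the 2007 paper or its figure.

```python
# Hedged sketch of the three-axis speciation space: gene flow m per
# generation, source of selection (extrinsic vs intrinsic), and the
# contribution of stochastic factors (0.0-1.0). Coordinates are
# illustrative, not taken from Wilkins (2007).

speciation_space = {
    # mode: (gene_flow_m, selection_source, stochasticity)
    "classical allopatric": (0.0, "extrinsic", 0.7),
    "peripatric":           (0.0, "extrinsic", 0.9),
    "parapatric":           (0.3, "extrinsic", 0.4),
    "sympatric (sexual)":   (0.5, "intrinsic", 0.2),
}

def modes_with_gene_flow(space, m_min):
    """Modes compatible with at least m_min migrants per generation."""
    return sorted(mode for mode, (m, _, _) in space.items() if m >= m_min)

print(modes_with_gene_flow(speciation_space, 0.3))
```

Queries like this are the payoff of replacing the allopatry/sympatry dichotomy with an axis: modes fall at positions along m rather than into two boxes.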

What have I explained here? Very little. I explain that these are the active issues. I explain that the positions taken are in such and so a relation, but I don’t give a meta-level account of why speciation modes must be related in this way. To do that, I’d need to have a solid mathematical theory of speciation, like Sergey Gavrilets’. I hope he comes up with one.

But it isn’t philosophy’s place to do this for science. Instead we do the conceptual placement, in the hope that it will improve our knowledge of the ways of science for philosophy, and that it will improve the debate within the sciences. But my method – which comes initially via Forms of Explanation: Rethinking the Questions in Social Theory by Alan Garfinkel (I read it as an undergraduate not long after it had been published), and is being elaborated by Peter Gärdenfors – adds to our understanding of explanation, I think, even if we are not always able to offer one in every application of it.

References

Wilkins, John S. 1998. The evolutionary structure of scientific theories. Biology and Philosophy 13 (4):479–504.

———. 2007. The dimensions, modes and definitions of species and speciation. Biology and Philosophy 22 (2):247–266.

8 thoughts on "Explanation"

  1. If you’re referring to the figure, no, the journal’s favourite colour is black ink. Hence all figures have to be in one colour. I wasn’t going to redraw it for this article, so I used it as it was.

  2. I wasn’t actually. It just seemed a good metaphor for the underlying theme of the posting.
    Dang, and I thought it was quite funny too.
    Bob

  3. This is the reason I visit (no not Bob O’H’s recondite attempts at humour) articles that activate my grey matter.
