For those who know this stuff, this will be quite banal, but I recently gave it to a non-philosopher science grad, and she found it useful, so some of you might too.
What is it to know something in a fallible world? A world where nobody can know things with certainty (nobody who is talking to us, anyway; and even if someone who could – God, say – told us, we still wouldn’t know whether to trust them). How can we say that we do know things? I have a mental picture I’d like to share with you.
Many people these days talk about “semantic spaces”. A recent rather elegant paper even mapped brain responses in terms of semantic spaces, visualising the resulting brain activities as a WordNet diagram. But what is a semantic space? I think of it as a literal space – a Cartesian graph of n dimensions, one for each semantic variable. Each dimension is an axis along which commitments vary on some scale. It might be simple: yes/no. It might be quantitative in some range. It might be qualitative. Or it might even be a function of the credence one puts in some claim. So long as it can vary, it can form a space.
If these semantic variables are independent of each other, then they form a volume. Suppose we have three variables – colour, size and shape. Colour is the standard spectrum. Size is, well, size. Shape might be along some taxonomy of shapes ranging from smooth to rough. The details don’t matter.
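To make this concrete, here is a minimal sketch in Python of such a three-variable space. The axes and their coarse discretisations are my own invention for illustration, not taken from any particular theory:

```python
from itertools import product

# A toy semantic space: each axis is a semantic variable with a
# (coarsely discretised) range of possible values. The particular
# values here are invented purely for illustration.
axes = {
    "colour": ["red", "orange", "yellow", "green", "blue", "violet"],
    "size":   ["tiny", "small", "medium", "large", "huge"],
    "shape":  ["smooth", "wavy", "bumpy", "jagged", "rough"],
}

# Because the variables are independent, the space is the Cartesian
# product of the axes: every combination is a possible state of O.
space = set(product(*axes.values()))
print(len(space))  # 6 * 5 * 5 = 150 possible states
```

The point of the sketch is just that independence of the axes makes the space a volume: the number of possible states is the product of the sizes of the ranges.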
Now I ask, “what do I know about this object O?” I might not have a clear belief about the exact shade, but I know it’s blueish, so I exclude all parts of the spectrum scale that aren’t blueish. I have achieved an increase in specificity – I am more certain of the colour of this object O than I was to begin with. All regions of the space that aren’t blueish are excluded. I have learned something.
Notice that most of the possible states of the semantic space are now excluded. Now suppose I note that the object O is rough, not smooth. I reduce the possible states even further. I can say I know more as more of the space is excluded.
Finally, I realise the size of the object, more or less. I have now excluded most of the space, and have a restricted range of semantic coordinates – properties – for O.
I now know a lot about O compared to when I began. If I can find precise semantic properties for O, then I can reduce that space down to a single coordinate.
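The whole narrowing process can be sketched as successive eliminations over a toy space like the one above. Again, the particular axes, values and observations are invented for illustration:

```python
from itertools import product

# The same toy semantic space: three independent axes of invented values.
axes = {
    "colour": ["red", "orange", "yellow", "green", "blue", "violet"],
    "size":   ["tiny", "small", "medium", "large", "huge"],
    "shape":  ["smooth", "wavy", "bumpy", "jagged", "rough"],
}
space = set(product(*axes.values()))  # 150 candidate states for O

def exclude(space, axis_index, keep):
    """Keep only the states whose value on one axis passes the test;
    everything else is eliminated from the space."""
    return {state for state in space if state[axis_index] in keep}

# Each observation excludes a region of the space:
space = exclude(space, 0, {"blue", "violet"})   # O is blueish: 50 states left
space = exclude(space, 2, {"jagged", "rough"})  # O is rough:   20 states left
space = exclude(space, 1, {"small"})            # O is smallish: 4 states left
print(len(space))  # 4

# Fully precise observations shrink the space to a single coordinate:
space = exclude(space, 0, {"blue"})
space = exclude(space, 2, {"rough"})
print(space)  # only one coordinate left: ('blue', 'small', 'rough')
```

Knowing more about O just is having a smaller remaining region; perfect knowledge would be a space of exactly one point.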
Now I know O very well indeed.
This is of course a simplification. For a start, there is always some uncertainty, either because the metric is continuous rather than discrete (that is, because it is impossible to be perfectly precise; consider the problem of Heisenberg uncertainty, for example) or because the possibility of error and other cognitive biases leaves me unable to assert pure certainty about the properties of O, no matter how strongly I feel that I have nailed it.
Science is like this. We learn about the world by various means. One is measurement and observation (assuming that observation is not just a form of measurement). The more precise our observations, the smaller our uncertainty about the thing observed. Another is somewhat less direct: theoretical models reduce the possible states of affairs we represent in our semantic spaces. However, a theory that precisifies our knowledge claims is in the final analysis about measurements (apart from a priori sciences like logic and mathematics).
However, there remains another question: how do we construct the axes of our semantic spaces in the first place? They are given neither by God, nor by logic, nor by evolution. The answer is to step up a level and treat the axes themselves as coordinates in a higher space of possible semantic spaces. We eliminate regions of those spaces by finding out what fails to work. Metaspaces (sets of dimensions of semantic variables) are narrowed down in much the same way as the spaces themselves are: by trial and error. This is why we have different notions of the elements than the Greeks or the Indus Valley sages had, for example, and why we no longer seek to anthropomorphise nature. Those ideas did not deliver success at knowledge acquisition.
Scientific theories themselves evolve as we find out what contrasts, what dimensions, fail to capture the way the world is. Instead of a semantic ascent, where in order to assert the truth of a statement P, we need to have a metalanguage in which “P” (the sentence that asserts that P is true) can be said to be true, we begin with a conceptual metaspace and evolve it to better precision. It is semantic descent. We step down, revise and reframe, until we have something that works and matches the observed world.
This permits us to say also that we know something better than we did before – as when we say that we knew more when we thought the sun, not the earth, was the centre of the universe, even though the sun isn’t either. To put the sun at the centre of the universe eliminates all the states of affairs forced on us by the geocentric universe, even if Copernicus retained epicycles and an absolute universe. We later modified this to eliminate circular orbits, epicycles and eventually an absolute space. Each step precisifies our semantic coordinates (or corrects them) in line with what is observed.
So when next you say you know something, consider what is being eliminated from your conceptual, that is to say, semantic, space of possibilities.