I found an amazing (or maybe just interesting) book while hunting around for material for my intellectual property project. It's called Science Bought and Sold: Essays in the Economics of Science, edited by Philip Mirowski and Esther-Mirjam Sent.
The weird thing about it is that while it claims to be about the "economics of science," and contains some essays on the topic (like the one by Kenneth Arrow that led me to the book in the first place), the other essays (picked at random) touch mostly on the philosophy and sociology of science.
Here's the skinny on what I got out of it:
First, I should give Charles Sanders Peirce more credit. He wrote a "Note on the Theory of the Economy of Research" which presents a quantitative analysis of how to decide between different scientific experiments based on their cost and how much they reduce error.
This is cool because: (a) it has a clearly defined idea of scientific progress as reducing error in predictions and (b) it recognizes the limitation of resources in approaching that knowledge--which is key.
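To make the idea concrete (this is my own toy sketch, not Peirce's actual formalism or anything from the essay): with a fixed budget and diminishing returns, you keep handing the next dollar to whichever experiment currently buys the biggest error reduction per dollar.

```python
# A toy version of "economy of research": greedily fund whichever experiment
# currently promises the most error reduction per dollar. The diminishing-returns
# curve and the experiment names are invented for illustration.

def marginal_gain(spent, scale):
    """Hypothetical diminishing-returns curve: error reduced by the next dollar."""
    return scale / (1.0 + spent)

def allocate(budget, experiments, step=1.0):
    """Split `budget` into `step`-sized grants across experiments, greedily."""
    spending = {name: 0.0 for name in experiments}
    while budget >= step:
        # Fund the experiment whose next grant buys the most error reduction.
        best = max(experiments, key=lambda n: marginal_gain(spending[n], experiments[n]))
        spending[best] += step
        budget -= step
    return spending

# Two invented experiments: a cheap survey and a promising-but-costly particle run.
print(allocate(100.0, {"survey": 2.0, "particle_run": 5.0}))
```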
I was pondering a lot around the time I was reading this essay, and a bunch of things occurred to me. When I talk about Truth nowadays, I'm implicitly defining it in terms of information. Information. It has a nice ring to it. I'm not entirely sure what I mean by it, to be honest, but it seems to wrap up intuitions about theory and description that I always mince around when pressed. So: information.
Information can either be accurate or inaccurate (? - note assumptions) depending on its meaning, although the idea of meaning immediately screws over any coherence in the definition.... But saying that Truth must be representable as information means we have a concrete handle on its properties: its complexity, its size, its communicability. There's a lot of solid, mathy work done on information, its properties, its ability to be stored or compressed. I need to fill myself in on this stuff.
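A taste of what that concrete handle looks like (my sketch, leaning on Shannon's entropy, which I assume is the canonical mathy work here; it isn't discussed in the book): the empirical entropy of a message bounds how compactly it can be encoded, which is one way "size" and "compressibility" become measurable properties.

```python
# Empirical Shannon entropy of a string: a lower bound, in bits per symbol,
# on how compactly the message can be encoded.
from collections import Counter
from math import log2

def entropy_per_symbol(text):
    """Treat each character as a symbol and compute -sum(p * log2(p))."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(entropy_per_symbol("aaaaaaab"))  # low: almost all 'a', little surprise
print(entropy_per_symbol("abcdefgh"))  # high: every character is news
```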
Because really, for something like a description of the world to be communicated, it probably has to be in the form of information, right?
I suppose you could define Truth as a particular feeling--I think a lot of philosophical and religious thought does. But I'm not sure what that gets you, or how you tell Truth from Falsehood. (Hegel, I think, says that the Truth is when your consciousness stops needing to go any farther...but there are no arguments about whether this state exists, or is unique, or is even desirable.)
Really, "Truth" as a term is loaded with too much emotional value. It ought to be split in two so that people stop arguing the semantics about who gets the bragging rights associated with "Truth" and start actually getting where they are trying to go.
Which means that I'm sliding from a sort of embarrassed and apologetic positivism to a grittier pragmatism.
Which kind of sucks, because pragmatism, I've always thought, ends up reducing the epistemic problem to a moral one, which is probably much harder.
Harrumph.
The second thing this book showed me was the thinking behind what it called the "sociology of scientific knowledge," which it condensed handily into the acronym SSK. In its radical form, SSK is an attack on the classical philosophy of science, and therefore on the appeal to epistemic standards, and therefore on beliefs of mine that have somehow become important to me in the past year. So it's good to get a handle on this stuff.
Here's the history of SSK in a nutshell, according to D. Wade Hands in his essay "The Sociology of Scientific Knowledge":
Robert Merton writes a doctoral dissertation in 1935 describing the social factors that contribute to the development of the modern scientific method. He attributes most of it to the Protestant/Puritan ethic (wooo!). This is the sociology of science, in that it attempts to find social factors that matter to science. It maintains the classical claim, however, that science and its method are a good way of learning facts about objective reality, and that the content of scientific theories is independent of society.
Everything is chill in my casa, at this point.
Next, Thomas Kuhn in 1962 gets a bit more social and a bit more skeptical of science. He claims that a scientific community is in the grip of a socially constructed paradigm:
"In 'holding' a certain paradigm what scientists 'see,' or do not 'see,' is determined by the paradigm. Observations are not independent and 'theory free,' but rather are a product of the paradigm and are 'theory laden.'... On this view there are no theory-neutral empirical observations by which scientific theories can be independently judged. Rather it is the scientific theory itself, or more properly the scientific paradigm itself, that actually determines the observations within its domain. Two different paradigms are thus fundamentally 'incommensurable'; they constitute two incomparable ways of viewing the world."
Here I get a little antsy. On the one hand, the bit about the paradigm affecting empirical observation is important and exciting and probably true. At this point I revel in the destruction of another intellectual cathedral. No more hierarchy of thought! The scientific paradigm is not "meta-"; it exists on an equal and competitive plane with the rest of belief, and is just as messy as all the others as far as its influence on observation goes.
On the other hand, this particular formulation of the problem is more destructive than it has to be. It doesn't turn the hierarchy into a fluid network; it just severs off huge hunks of hierarchy and lets them drift around uselessly. There are two reasons why Kuhn's formulation fails us:
- Wade Hands notes, "Notice how this Kuhnian view of science introduces an irrevocably social element into science." The paradigms, to Kuhn, are necessarily social constructions that are shared by the scientists. But society doesn't exist like this! How could it? A society can't construct beliefs, a society can't hold a paradigm, a society can't observe phenomena--these are the abilities of individuals alone. There is no superorganism brooding over the empirical data here. So while a paradigm might be common to many scientists in a community, it can't be socially constructed. It is constructed for each individual by that individual's experience and nature. It may well be encouraged and reinforced by experiences with other members of the community, but the paradigm is communicated, if at all, person-to-person, not person-to-SOCIETY-to-person.
- Kuhn says that different paradigms, or theories produced under them, are incomparable. This is frightening, because it destroys any possibility of communication, and therefore fact-checking, and therefore consensus, and therefore any approachable, absolute objectivity of knowledge. If paradigms are social, as described above, then this means cultural relativism. Since paradigms are individual, this means total relativism, which devolves almost immediately into skepticism or nihilism.
Thankfully, we don't have to go this far. The very reason it was tempting to lump scientists together so as to give them collectively held paradigms is that people do in fact communicate theories and compare them all the time. But if paradigms vary across individuals, and individuals find ways to communicate theories and compare empirical evidence, then scientific knowledge developed under different paradigms must sometimes be comparable.
The problem is, once again, that these people are thinking in binary. Either evidence, influenced by paradigms, can be compared, or it can't. No! Things can be more or less comparable, and where there is the slightest bit of comparability, there is the hope of pressing that edge to work towards consensus.
There's more to say, but I think I managed to break the back of the problem. The rest is picking up the pieces, postulating something constant to phenomenology despite the influence of paradigms on observation, guessing how one might update one's paradigm in the face of new empirical data. Maybe there's a good argument for eventual convergence?
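One way to make that convergence hunch concrete (my sketch, nothing from the book): treat paradigm-updating as Bayesian updating. Two observers who start with wildly different priors about the same coin, but see the same flips, end up with nearly identical estimates once enough shared evidence piles up.

```python
# Two observers with opposite priors about a coin's bias update on the same flips.
# Beta-Bernoulli updating: the prior is just pseudo-counts of heads and tails.
import random

def posterior_mean(prior_heads, prior_tails, flips):
    """Posterior mean estimate of P(heads) after seeing `flips` (1 = heads)."""
    heads = sum(flips)
    return (prior_heads + heads) / (prior_heads + prior_tails + len(flips))

random.seed(0)
flips = [1 if random.random() < 0.7 else 0 for _ in range(1000)]  # true bias 0.7

print(posterior_mean(1, 9, flips))  # skeptic: started out expecting mostly tails
print(posterior_mean(9, 1, flips))  # believer: started out expecting mostly heads
# Both land near 0.7 once the shared flips swamp the priors.
```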
Too much work to do here. But I can have hope.