Skepticism on Expertise, Part II

Nov 07, 2014 17:05

As mentioned in the previous entry, in the context of the discussion on modern vs. alternative medicine, a common tactic is to bring up flaws and biases in scientific research. I'm going to use 'scientific literature' as a partial replacement for 'scientific research', because the former refers specifically to scientific work that has been published. The importance of this distinction will become clear below, since publication is a significant part of the discussion.

One point I want to start with, before getting into specifics, is that those who use this tactic against mainstream modern science, especially against consensus therein, tend to bring up these problems with the scientific literature as if the scientific community were completely unaware of them. It's as if they think they are making some kind of exposé. But literally every single problem they bring up is already well-known throughout the scientific community. These are problems that the scientific community openly acknowledges rather than tries to hide, precisely because there is a widespread and constant effort to avoid and/or fix them wherever possible. And for those that either haven't yet been or can't be avoided or fixed, the effort is to keep everyone aware of them so as to minimize their effects. This, too, is part of the very nature of science, and thus another contributing factor to its inherent epistemic trustworthiness; for it is part of what significantly helps to make science self-correcting.
It is, frankly, naïve to think that the scientific community on the larger scale doesn't take these problems into consideration when it comes to things like acceptance, agreement, and consensus. In addition, scientists have been trained to be skeptical of the work of other scientists when even the slightest thing in that work seems fishy. That doesn't, of course, mean that every single scientist out there is very good at employing the skeptical skills they were trained to develop; but taken as a whole, the scientific community is pretty damn good at it. (If it weren't, y'all wouldn't have those pretty iPhones in your pockets, and hundreds of thousands, if not millions, of women would be dying from breast cancer yearly.)
The point is, when the scientific community - or more specifically, the relevant scientific community for some issue - reaches some degree of consensus on something, that consensus doesn't occur without taking into consideration all of these problems with the scientific literature and scientific research. Reaching consensus is not a simple process; it often involves a lot of time, a lot of work, and a lot of conversation between many scientists, with at least some, if not most, of that conversation making it into the scientific literature.

There is a second point to mention before getting into the specifics. The whole "sphere", so to speak, of scientific literature and publishing scientific work is really a very complex, complicated, and context-dependent "thing". I would argue that it is more complex and complicated than people outside the scientific community realize - even more than some inside the scientific community realize! - and so those who use this tactic against modern science don't actually know enough to know what they are talking about. It's worth pointing out that, while advocating distrust of the experts, they are claiming (wrongly) to be experts on this whole aspect of science, i.e., scientific literature and the publishing of scientific work. So the point I want to make is this: the situation is actually worse than they realize, but it is also much better than they realize. I think that if they really knew all the details and nuances of the situation, they would probably be more horrified than they already are. But that's not the end of the story - and that's the problem with their thinking: they see a bunch of problems and then stop right there, without bothering to look any further.

To make an analogy, consider mountain climbing: it can be an extremely dangerous thing to do, and some people have died doing it; but that's precisely why training is necessary and why so many measures are taken to make it much, much safer. That doesn't, of course, guarantee that no one will ever get hurt doing it, or even die, but it does significantly decrease the chances of injuries and fatalities - so that, so long as you know what you're doing, you are much less likely to get hurt or die. I mean, you're still hanging off the side of a fucking mountain, for Christ's sake, but it's a safe bet that you're probably going to be fine.

Now, I can't possibly go through all of the problems that have to do with the scientific literature: I am confident that there are problems I am not even aware of, and I know that I don't have a full understanding of some, perhaps many, of them, since a full understanding requires a lot of technical and specialized knowledge. Additionally, many problems are context-dependent, and so cannot be discussed in general terms; they instead require discussing particular cases, because the fine-grained details of those cases are needed to explain what the problems actually are. And I don't think I have nearly enough background knowledge off the cuff to go into such cases and such details.

One category of problems is publication bias. (I'll speak in terms of journal articles here, but this all applies to books as well.) I say 'category' because 'publication bias' can refer to several distinct kinds of problems, for each of which the term is an accurate descriptor. What all types of publication bias have in common is the fact that every journal has a selection process for choosing which submissions will actually get published. There are, of course, very good reasons why journals should be selective about what they are willing to publish: they want to publish good work instead of crap. The question is, what defines or counts as good work, and what defines or counts as crap? And how do they tell the difference? Journals also want to select based on appropriateness. A journal devoted to microbiology, for example, isn't going to publish a paper on psychology; a geological journal isn't going to publish a paper on dermatology; a physics journal isn't going to publish a paper on nutrition. Some journals are intended to have a wider scope, and will likely be motivated to select less specialized papers over highly specialized ones; a journal intended to have a very narrow scope will of course favour highly specialized papers (in the right field) over less specialized ones. These sorts of biases are clearly needed and easily justifiable.

But the selection process gets very messy very quickly, because journals can't possibly publish, due to limited space, every submission that meets the criteria for being both good work and appropriate. So they must employ other selective pressures.
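To make one well-known form of publication bias concrete - the so-called 'file drawer' effect, in which null results quietly go unpublished - here is a little toy simulation in Python. To be clear, this is just an illustrative sketch of mine, not a model of any real journal: every number in it (the sample sizes, the 90% and 5% publication odds) is made up for the sake of the example.

    import random
    import statistics

    # Toy "file drawer" simulation: 1000 hypothetical studies all measure
    # an effect whose true size is zero, but "exciting" (statistically
    # significant) results are far more likely to be published than nulls.
    # All numbers here are invented for illustration.

    random.seed(42)

    TRUE_EFFECT = 0.0    # the real effect size being studied
    N_STUDIES = 1000     # hypothetical independent studies
    SAMPLE_SIZE = 30     # observations per study

    all_results, published = [], []
    for _ in range(N_STUDIES):
        sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(SAMPLE_SIZE)]
        estimate = statistics.mean(sample)
        std_err = statistics.stdev(sample) / SAMPLE_SIZE ** 0.5

        all_results.append(estimate)

        # Crude significance check: estimate more than ~2 standard errors from 0.
        significant = abs(estimate) > 2 * std_err
        # Assumed publication odds: 90% for "exciting" results, 5% for nulls.
        if random.random() < (0.9 if significant else 0.05):
            published.append(estimate)

    print(f"published {len(published)} of {N_STUDIES} studies")
    print(f"mean |effect|, all studies:       "
          f"{statistics.mean(abs(e) for e in all_results):.3f}")
    print(f"mean |effect|, published studies: "
          f"{statistics.mean(abs(e) for e in published):.3f}")

Run it, and the average apparent effect in the 'published' pile should come out noticeably larger than in the full set of studies, even though the true effect is exactly zero. That, in miniature, is why this category of problems is worth taking seriously - and also, as I said above, exactly the sort of distortion the scientific community already knows to watch for.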

It is important to understand the motivation for journals to be selective, some more so than others: a journal's reputation is on the line with every article it chooses to publish. Some journals are not that concerned about their reputations; others take their reputations very seriously. The latter, of course, will be highly selective, while the former are free to be as selective, or as unselective, as they feel like.

Before going on, it needs to be pointed out that journal reputation is, in fact, a contributing factor to the epistemic trustworthiness of science. All scientific work builds on and extends out from previous work in some way or other - science cannot be done without this - which is to say that all scientific work relies on previous work. Nothing makes this plainer than flipping through the scientific literature and looking at, for each article, the citations and references to other published scientific work. But reliance on previous scientific literature is a liability: the credibility and epistemic trustworthiness of one's own work is vulnerable to that of the previous works on which one has relied. As a scientist, how does one choose, out of all the scientific literature out there, those works that carry the least risk and the highest degree of trustworthiness for one's own research? The source of the article, i.e., the journal in which it was published, is one significant factor: you don't want to end up relying on an article that was published in a journal with a reputation for publishing unreliable, low-quality work (presuming you don't already have good scientific reasons to be confident in that article's claims), and you do want to rely on articles published in journals with reputations for publishing high-quality, reliable work. As always, there are no guarantees: a journal with an excellent reputation can still end up publishing bad work, although that is much less probable than for a journal with a bad reputation; and vice versa. That these things do sometimes happen does not "prove" that mainstream modern science, especially in regards to consensus, is "broken", unreliable, and untrustworthy. I dare say that if it "proves" anything, it is that good science is really hard. Which only legitimates, by the way, the vested interest so many journals have in their reputations.

In the selection process, there may be, and often is, selective pressure for being interesting, or exciting. Now, it is very easy at this point to come up with criticisms, many of which are legitimate, that give reasons to look upon the scientific literature with a skeptical eye of some kind: whether an article is interesting or exciting seems a frivolous and subjective criterion; good scientific work that is boring and unexciting is still good science, and still might be important.

There is no doubt that a bias toward publishing articles on research deemed interesting and exciting should give us cause for concern and criticism, but before discussing some of these legitimate criticisms, I want to make the case that it is not an unfounded practice in the selection process. We are finite beings, with a finite amount of energy and finite resources for the sorts of things we do in day-to-day life; and in a significant way, the scope of the world we live in, the world as we perceive and experience it, is finite. As a whole, the scientific enterprise is inherently worth pursuing. But at any one time, there is much more out there to be pursued than we are practically capable of pursuing. We have to make choices that create limits, because we can't do everything. On what basis are we to make such choices?

The following example isn't about publication bias, but it's one I can think of off the cuff that has something relevant about it. The Superconducting Super Collider project was barely begun when Congress decided to pull the plug. There are many reasons why, but some of them had to do with, not surprisingly, money: it was an extremely expensive project and funding was coming up short. There was another huge science project proposed around the same time, and that one got the go-ahead and was completed: the Human Genome Project. No one would have denied that both of these projects were absolutely worth pursuing. But a decision had to be made between them, because of practical limitations. There is no right or wrong answer for this sort of thing; it is rather a decision about which values one is willing to give priority or precedence at that time. As I said, there were many reasons for the choices that were made, but at the end of the day, the HGP had more widespread appeal because its potential benefits to humanity were likely to arrive in the near future, whereas the benefits of the SSC were likely to be much further off. So there were ethical considerations being weighed, and precedence was given to the potential medical advancements that might open many new doors to treating and curing diseases and conditions for which we either had no treatments or cures, or had only treatments that were far from medically ideal. It is in this way that I think it makes perfect sense to say that the HGP was more interesting and more exciting than the SSC. That is not an objective, absolute assessment: it is relative to a context that must be taken into consideration; it is, to some degree, subjective, yes, but it is not arbitrary, and that is what makes the difference. (Just to point it out: there are some very good, but very sophisticated, epistemological and metaphysical arguments against the notion of 'objectivity' as normally understood. The concept may very well be incoherent; but even if it were coherent, it may not be something we ought to generally pursue, and we may really need subjectivity - one could argue that some degree of subjectivity is needed in order to make logical and metaphysical sense of values. But I won't go any further into that topic, just because… well, it's the sort of topic that requires a book in and of itself.)

I think one key to understanding the positive case for 'being interesting or exciting' is the realization that 'interesting' and 'exciting' mean something different for scientists, generally speaking, than they do for the general public. Something might be interesting to scientists because it might be a potential springboard for research into something that has not previously been investigated, perhaps because it is something no one has previously seen or even known about. Or because it might change the face of a specific scientific field by demonstrating that a significant theory is not quite right, or incomplete, or just wrong about something. Or because it might tell us something new about ourselves, something we weren't previously aware of or didn't understand. Something might be exciting to scientists because it may lead to technological advancements - especially exciting if it might lead to long-anticipated technological advancements, or even whole new technologies. Or because it might open the way to a whole new approach to a problem we haven't yet solved. Or because it might make possible something we thought was impossible (at least in a practical sense).
I mean, when scientists get excited about, say, a successfully created novel genetic mutation in fruit flies, while the rest of the general public are scratching their heads about what's so great and exciting about fruit flies… well, it at least tells you that we need to stop and think about what exactly 'interesting' and 'exciting' mean in the world of science, so that we can take a more careful look at what might be going on when journals select articles partially based on how interesting or exciting they are. Because it just might turn out that a scientist's take on 'interesting' and 'exciting' is, in principle at least, based on legitimate scientific values, and would thus turn out to be a legitimately justifiable criterion in the selection process.

To be continued…

phil of science, logic, epistemology, skepticism
