Neuropseudoscience

Jan 22, 2009 17:42

If you vaguely follow neuroscience (i vaguely, vaguely, vaguely do), or if you read a lot of regular newspapers/magazines, you're probably familiar with the idea that we can correlate specific areas of the brain with specific emotions or mental activities.

A handful of researchers at MIT and UCSD have an article in press at Perspectives on Psychological Science (it will probably appear in the first issue of 2009), in which they say they became suspicious of the number of really high correlations being reported between someone thinking about a particular task and a region of their brain lighting up with activity as they do so.

They have statistical reasons to be suspicious; basically, the confidence you can have in a correlation between two measurements is limited by how reliably you can measure each of them, no matter how strongly the underlying quantities are actually related.
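To make that concrete (this is my own back-of-the-envelope sketch, not anything taken from the paper): if i've understood the argument, the correlation you can observe between two noisy measures is capped at roughly the square root of the product of their reliabilities, even when the underlying quantities are perfectly correlated. Here's a quick Python simulation; the reliability figures of 0.7 and 0.8 are made up purely for illustration.

# Sketch: noisy measurements cap the observable correlation at roughly
# sqrt(reliability_1 * reliability_2), even for a perfect underlying relationship.
# The reliability values below are illustrative assumptions, not real fMRI numbers.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_signal = rng.normal(size=n)  # the "real" quantity shared by both measures

def noisy_measure(signal, reliability):
    # reliability = fraction of the measure's variance that is signal rather than noise
    noise_var = (1.0 - reliability) / reliability
    return signal + rng.normal(scale=np.sqrt(noise_var), size=signal.size)

brain = noisy_measure(true_signal, 0.7)      # e.g. an fMRI-style measure
behaviour = noisy_measure(true_signal, 0.8)  # e.g. a questionnaire-style measure

print(np.corrcoef(brain, behaviour)[0, 1])   # comes out near 0.75...
print(np.sqrt(0.7 * 0.8))                    # ...which is sqrt(0.7 * 0.8)

So with measures of that sort of reliability, a reported brain-behaviour correlation of 0.8 or 0.9 already looks odd, before you even ask how it was computed.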

If somebody is measuring something (say, the average length of a kind of leaf) with a ruler that only measures in centimetres, and they produce a report on these leaves that says "the average leaf length is 15 centimetres, give or take 3.948 millimetres", then you'd start to get a little bit suspicious.

This is the sort of thing that has now been noticed in neuroscience studies: correlations have been reported that should be mathematically impossible to obtain, and that led the authors of the Perspectives on Psychological Science paper (Edward Vul, Christine Harris, Piotr Winkielman, & Harold Pashler) to uncover systematic flaws in the way some neuroscience is being carried out and reported.

What i find more disturbing is that these researchers needed to contact the original authors of the studies they surveyed to figure out what their exact methodology was. The reason was that, according to Vul, Harris, Winkielman & Pashler, "the exact methods were simply not made clear in the typically brief and sometimes opaque method sections."

This seems like an even bigger problem than the discovery of systematic flaws in accepted neuroscience methods. The "Methods" section of a scientific paper is supposed to be proof against exactly this sort of thing! We trust the methodology a researcher has used because that researcher has to make their methodology clear; that way, even if a poor methodology gets published, it's easy to follow up on. Instead, Vul, Harris, Winkielman & Pashler had to contact the researchers themselves to follow up on it.

What's even stupider is that, according to New Scientist (which led me to the "in press" *.pdf of this paper), some of the researchers being criticised defended themselves by saying that Vul et al. didn't ask them the right questions about their methodology.

They said this to defend their papers. Surely the fact that someone has to contact you personally, and then "ask the correct questions", before they can assess your methodology is a serious problem in itself, and far worse than having a problematic methodology in the first place?

(cross-posted, with modifications, to the _scientists_ community)

Tags: statistics, neuroscience, epistemology, science
