I had run into some of the thoughts posited in this essay recently, mostly thanks to the amazing and tenacious jamiam.
I was at the gym today, and I picked up an issue of the New Yorker (much more entertaining than Parents or Elle) and lo and behold, it was the one with that essay in it.
OMG, I was happy as a pig finding a trove of acorns. (I won't say
(I'm sorry I'm so tenacious! Honestly, I felt very on-the-spectrum arguing about the numbers like that... I was actually going to ask to be excused from the dinner table, before I stuck my foot any farther down my throat.)
But then that other link popped up on my Twitter feed, and it was a pretty good essay.
1. Statistical significance is usually set at p <= 0.05 (one in twenty). So a result reported as statistically significant only means that, if there were no real effect at all, a result at least that extreme would still turn up by chance less than one time in twenty. It's not a particularly high benchmark (see the sketch after this list).
2. The article cites, as an example, a study on ESP. Seriously?!
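To make point 1 concrete, here is a minimal simulation sketch (my own illustration, not from the essay or the article): run many experiments in which the null hypothesis is true, and count how often an ordinary t-test still clears p <= 0.05. The group sizes and the use of scipy's ttest_ind are just illustrative assumptions.

# Even with NO real effect, roughly one experiment in twenty will still
# come out "statistically significant" at the 0.05 threshold.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Two groups drawn from the SAME distribution, i.e. the null hypothesis is true.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = ttest_ind(a, b)
    if p <= 0.05:
        false_positives += 1

print(f"'Significant' results with no real effect: {false_positives / n_experiments:.1%}")
# Prints roughly 5%, which is exactly the bar the threshold sets.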
And again, at about the same time, my colleague Bryan Kolb discovered that batches of rats ordered from Quebec breeding farms produced strange behavioral results, an effect he was later able to attribute to the plane ride. This is most likely what happened to the Edmonton mice: they had had a similarly unfortunate ride and so were now hyperresponsive to a dose of cocaine that only moderately affected the mice tested in the two other laboratories. So it was not a matter of bad stats or poor experimental methods; that is simply the way it is in the life sciences. There are often unanticipated complexities.
From this article, and the comments are worth reading too.
This article highlights a number of cases where particular statistical models have failed to be useful. But the real scientific failure occurs when one substitutes statistical thinking for scientific thinking.