Michael Pettit, The Science of Deception: Psychology and Commerce in America: A lot of interesting observations about the development of psychology as a science and the concept of deception as it mediated psychology’s interactions with the world of commerce, but I’m not sure I got a coherent through-line in the end. His story begins in the Gilded Age, when building a national transportation infrastructure (railroads) was inextricably intertwined with defrauding small investors; commerce and fraud were both expanding. “What people branded as innovation in certain realms (such as the manufacturing of guns or bicycles), they labeled fraudulent adulteration when it came to the production of food, drink, or medication”: innovation was truly happening, but so was deception. Worse, confidence men represented an aggressive form of masculinity, imposing their will on others and feminizing them, even as liberal ideology required individuals to exercise more choice and control. “Moreover, the success of nationally recognized brands soon led to their counterfeit in the form of trademark infringement from rival companies.”
Scientists responded by trying to find new ways of truth-telling: “the visible production of the boundaries between the credulous and the trustworthy.” This was also a means for psychologists to assert professional expertise about the mind as against spiritualists, religious folks, etc. They tried to intervene in trademark infringement cases, but judges initially rejected their expertise while still adopting the model of the easily deceivable consumer that the psychologists endorsed. Pettit also makes the point that the courts adopted this view of consumers as easily fooled about trademarks (a solicitude that caveat emptor generally foreclosed in other areas of the law) in order to protect the interests of established businesses, not to protect the consumers themselves. Although gendered understandings of the consumer played a role, Pettit argues that they weren’t key; indeed, “[l]awmakers invoked the unwary purchaser in order to cut off empirical investigations involving differences in perception and psychological ability among consumers.” Consumers had subjectivity, but not agency. Eventually this understanding among judges legitimated more general consumer protection laws such as those enforced by the FTC.
Psychologists also tried to debunk spiritualists, but at the same time adopted the method of deceiving subjects about what an investigation was really testing, in the name of finding a greater truth: a method that persists today in controlled experiments as well as in some reporting, though no longer in sociology. “These demonstrations led to a gendered understanding of deception as male psychologists demanded full transparency from their female subjects while priding themselves on their masculine guile in besting their opponents.” And from the 1920s onward, “psychologists increasingly understood deception as an unavoidable, perhaps even a necessary and beneficial element of everyday life.” Rather than identifying untrustworthy types, they turned to searching for individual reactions: “the changing physiology of individuals as they momentarily deceived,” as with the lie detector. There was continuity with older attempts to make the body tell the truth that the speaking person didn’t, but also a change in the idea of deceit, from fixed characteristic to specific activity. One comparison I found interesting: in terms of “pathological” liars, Europeans were more worried about class passing (poor folks pretending to have high-class backgrounds) than Americans, who were more concerned with juvenile delinquency, gender, and sexuality. Of course, a study of “pathological liars falsifying cases of sexual abuse,” which the authors emphasized was relatively rare, was treated by the leading legal authority on evidence as showing that all women’s testimony was inherently unreliable in rape cases.
Pettit also discusses Catherine Lutz’s concept of “the epistemology of the bunker,” a concept I like quite a lot: “the ways in which the suspiciousness endemic to the political culture of the Cold War informed how people understood the self.” The “deceivable” self was a threat to democracy, which depended on “an individualism that was not wholly trusted.” Psychologists were employed in creating personality tests, working to expose potentially deceitful employees but also policing the boundaries of the profession, sometimes through deception. The acceptance of deception, Pettit says, played a big role in therapeutic practice, such as adoptions where the children were selected to look plausible as biological children of their adoptive parents “in the hopes of making the human intervention in the family invisible.” Likewise with the medical management of intersexuality, where children were not told the truth because knowing that truth “would impede the intended socialization.”
Richard Harris, Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions: Ulp. About the crisis of reproducibility in medicine, which appears to be at least as bad as the crisis of reproducibility in behavioral psychology. I learned about “Eroom’s Law,” Moore’s Law in reverse: drug development has been slowing exponentially since 1950, with the number of new drugs per dollar of R&D spending falling at a steady rate; if the trend holds, we’ll be done entirely by 2040. Lack of rigor in biomedical research is an important culprit.
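(The book doesn’t reduce this to a formula, but the original statement of Eroom’s Law, from Scannell et al., is that new drugs approved per billion inflation-adjusted R&D dollars have halved roughly every nine years since 1950. A toy extrapolation of that curve; the nine-year halving time is an assumption from that paper, not a number from Harris:

```python
# Toy extrapolation of Eroom's Law. The nine-year halving time comes from
# Scannell et al.'s formulation (new drugs approved per inflation-adjusted
# billion dollars of R&D), not from Harris's book.
def eroom_efficiency(year: int, halving_years: float = 9.0,
                     start: int = 1950) -> float:
    """Drug-R&D efficiency relative to 1950 (1950 = 1.0)."""
    return 0.5 ** ((year - start) / halving_years)

for year in (1950, 1980, 2010, 2040):
    print(year, f"{eroom_efficiency(year):.4f}")
# By 2040, ten halvings past 1950, efficiency is down to about 0.1% of the
# 1950 level: effectively "done.")
```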
Even if mice are good models (which they often aren’t), it turns out that cage position can affect the outcome of an experiment, given mice’s distaste for bright lights and open spaces. Harris quotes a scientist: “As you move from the bottom of the rack to the top of the rack, the animals are more anxious, more stressed-out, and more immune suppressed.” Also, “Mice are so afraid of [human] males that it actually induces analgesia,” numbing pain and screwing up studies. So mouse study results can vary hugely from lab to lab. But the bigger problem may be testing in mice at all, or testing only in one strain of animal. If you tested a new drug on white women aged 35 who all lived in one town with identical homes, husbands, diets, thermostats, and grandfathers, “that would instantly be recognized as a terrible experiment, ‘but that’s exactly how we do mouse work.’”
Harris is only moderately optimistic about small-molecule innovations. He quotes a scientist who argues that “evolution has created so many redundant systems that targeting a single pathway in a complex network will rarely work…. ‘We have evolved seventeen different biological mechanisms to avoid starving to death. Drugging one of those mechanisms isn’t going to do anything!’”
Cell experiments are troubling too, even when the cells are properly identified. “The very act of propagating cells in the laboratory changes them profoundly,” and atmospheric oxygen in particular is really important, because a lot of the regulatory factors that affect tumor growth are oxygen-regulated. “In fact, cell lines derived from all sorts of cancers end up looking much more like one another than they do the original tumors from which they came… ‘Some people say that HeLa is a new species,’ [a scientist] told me. ‘… The chromosomes are all rearranged… [I]t has made all these changes to adapt’ to the environment where it now makes its home.” Precision medicine can’t be developed until we deal with the fact that even molecules in a living body change when surgeons cut off the blood supply to the tissue they’re going to remove.
Here are a couple of statistical twists I hadn’t thought about. If your result comes in right at the p = 0.05 significance threshold, then there’s roughly a 50% chance that running the experiment again would give you a higher p-value (and therefore be deemed insignificant), and roughly a 50% chance that you’d get a lower one. To have a 95% chance that an experiment run a second time would still be statistically significant, a p-value of 0.005 would be required. That can often be achieved, if the phenomenon at issue is real, by increasing the sample size by 60%: expensive, but Harris argues pretty persuasively that it would be worth the cost. Another point: scientists too often confuse exploratory research with confirmatory research. Statistical tests used to confirm or disconfirm a hypothesis don’t work if you don’t have a hypothesis and are just fishing around for anything interesting or unexpected in the data. “It’s fine to report those findings as unexpected and exciting, but it’s just plain wrong to recast your results as a new hypothesis backed by evidence.”
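(Neither twist comes with a demonstration in the book, but both are easy to check with a minimal numpy sketch. For the replication half it assumes, hypothetically, that the true effect is exactly the size the original study happened to observe, i.e., a z-score of 1.96, the two-sided p = 0.05 point; the fishing half just tests 100 pure-noise outcomes:

```python
import numpy as np

rng = np.random.default_rng(42)
Z_CRIT = 1.96  # two-sided p = 0.05 threshold for a z-test

# Twist 1: a result that just barely clears p = 0.05 replicates about half
# the time. If the true effect equals the observed one, a replication's
# z-statistic is distributed N(1.96, 1).
z_rep = rng.normal(loc=Z_CRIT, scale=1.0, size=1_000_000)
print("replications still significant:", (np.abs(z_rep) > Z_CRIT).mean())
# -> roughly 0.50: a coin flip

# Twist 2: fishing in pure noise "finds" effects at the false-positive rate.
# 100 outcome measures, 30 subjects per arm, no real effect anywhere.
n, k = 30, 100
treated = rng.normal(size=(n, k))
control = rng.normal(size=(n, k))
diff = treated.mean(axis=0) - control.mean(axis=0)
se = np.sqrt(treated.var(axis=0, ddof=1) / n +
             control.var(axis=0, ddof=1) / n)
print("spurious 'discoveries':", int((np.abs(diff / se) > Z_CRIT).sum()))
# -> about 5 of 100, exactly what a 5% false-positive rate buys you
```

The second half is the fishing problem in miniature: run enough tests on noise and something will always look “interesting,” which is why confirmatory statistics can’t be run on hypotheses found in the same data.)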
All is not lost. A federal law requiring scientists doing drug studies to declare their endpoints in advance seems to have had significant effects: of 30 big studies done before the law, 57% showed a benefit, but of those done after, only 8% confirmed the preannounced hypothesis.
Reproducibility is the key. Some responses to the crisis point out that failed attempts to reproduce certain results may arise because the original lab did important things differently, but that’s part of the point: “if any tiny detail can derail an experiment, just how robust is the result? Nobody cares about an experiment that … requires conditions so exquisite that only the lab where it originated can repeat it.” Harris advocates (1) blinding (amazingly, not universal); (2) repeating basic experiments; (3) presenting all results rather than cherry-picking; (4) using positive and negative controls, i.e., running experiments that should succeed and fail, respectively, if the hypothesis is correct; (5) carefully validating the ingredients (a much bigger problem than I knew; did you know that lots of cell lines labeled otherwise are actually HeLa, which is very good at taking over, and that somewhere between 18% and 36% of cell experiments used misidentified cell lines?); and (6) using the right statistical tests.