Yesterday I got a response to a post I did a little over a year ago about research that suggested fundamental differences in firing patterns in the brains of liberals and conservatives. The study, headed by Darren Schreiber of the University of Exeter, used fMRI to look at brain activity in people of different political leanings, and found that liberals show greater responsiveness in parts of the brain associated with risk-seeking, and conservatives in areas connected with anxiety and risk aversion.
The response, however, was as pointed as it was short. It said, "I'm surprised you weren't more skeptical of this study," and provided a link to a criticism of Schreiber's work by Dan Kahan over at the Cultural Cognition Project. Kahan is highly doubtful of the partisan-brain study, and says so in no uncertain terms:
Before 2009, many fMRI researchers engaged in analyses equivalent to what Vul [a researcher who is critical of the method Schreiber used] describes. That is, they searched around within unconstrained regions of the brain for correlations with their outcome measures, formed tight “fitting” regressions to the observations, and then sold the results as proof of the mind-blowingly high “predictive” power of their models—without ever testing the models to see if they could in fact predict anything.
Schreiber et al. did this, too. As explained, they selected observations of activating “voxels” in the amygdala of Republican subjects precisely because those voxels—as opposed to others that Schreiber et al. then ignored in “further analysis”—were “activating” in the manner that they were searching for in a large expanse of the brain. They then reported the resulting high correlation between these observed voxel activations and Republican party self-identification as a test for “predicting” subjects’ party affiliations—one that “significantly out-performs the longstanding parental model, correctly predicting 82.9% of the observed choices of party.”
This is bogus. Unless one “use[s] an independent dataset” to validate the predictive power of “the selected . . . voxels” detected in this way, Kriegeskorte et al. explain in their Nature Neuroscience paper, no valid inferences can be drawn. None.

So it appears that Schreiber et al. were guilty of what James Burke calls "designing an experiment to find the kind of data you reckon you're going to find." It would be hard to recognize that from the original paper itself without being a neuroscientist, of course. I fell for Schreiber's research largely because I'm a generalist, making me unqualified to spot errors in highly specific, technical fields.
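To get a feel for why that matters, here's a minimal sketch (in Python, using purely synthetic noise data, not anything from Schreiber's study) of the pitfall Kahan and Vul are describing: if you pick your voxels because they correlate with the labels, and then "test" the model on those same subjects, you can get dazzling accuracy out of pure noise. Validate on held-out data, and the effect evaporates.

```python
# Sketch of circular analysis: 60 fake subjects, 5,000 fake "voxels", coin-flip labels.
# Any "predictive" accuracy here is an artifact of the analysis, not a real signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n_subjects, n_voxels, k_best = 60, 5000, 10

X = rng.standard_normal((n_subjects, n_voxels))   # fake voxel activations (pure noise)
y = rng.integers(0, 2, n_subjects)                # fake party labels (coin flips)

def top_k_voxels(X, y, k):
    """Pick the k voxels most correlated with the labels."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(corr)[-k:]

# Circular analysis: select voxels on ALL the data, then "predict" those same subjects.
voxels = top_k_voxels(X, y, k_best)
model = LogisticRegression().fit(X[:, voxels], y)
print("circular 'accuracy':", model.score(X[:, voxels], y))   # typically well above chance (often 0.9+)

# Proper validation: selection and fitting happen inside each training fold only.
scores = []
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    voxels = top_k_voxels(X[train], y[train], k_best)
    model = LogisticRegression().fit(X[train][:, voxels], y[train])
    scores.append(model.score(X[test][:, voxels], y[test]))
print("cross-validated accuracy:", np.mean(scores))           # hovers around 0.5 (chance)
```

The first number can look every bit as impressive as "82.9% of the observed choices of party," even though there is literally nothing in the data to find; the second is what an honest test on independent data reports.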
Interestingly, this comment came hard on the heels of a paper by Monya Baker that appeared last week in Nature called "1,500 Scientists Lift the Lid on Reproducibility." Baker writes:
More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research...
Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology and cancer biology, found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence.
The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. “At the current time there is no consensus on what reproducibility is or should be.”

The causes were many and varied. According to the respondents, failures to reproduce results stemmed from problems ranging from low statistical power to unavailable methods to poor experimental design; worse still, all too often no one even bothers to try to reproduce results, because the pressure is to publish one's own work, not to check someone else's. As a result, slipshod research -- and sometimes outright fraud -- gets into print.
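The low-statistical-power point is easy to demonstrate. Here's a small simulation (the effect size, sample sizes, and significance threshold are my own illustrative choices, not figures from Baker's survey): with a modest real effect and small samples, most studies miss it, and most exact replications of the "successful" studies fail too, with no fraud or sloppiness required.

```python
# Sketch: how low statistical power alone produces "failed replications."
# Assumes a modest true effect (Cohen's d = 0.3) and small groups (n = 20 each);
# these numbers are illustrative, not taken from Baker's survey.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, n, trials = 0.3, 20, 10_000

def significant():
    a = rng.normal(0.0, 1.0, n)      # control group
    b = rng.normal(d,   1.0, n)      # treatment group with a small real effect
    return stats.ttest_ind(a, b).pvalue < 0.05

originals = np.array([significant() for _ in range(trials)])
print(f"power at n={n} per group: {originals.mean():.2f}")     # roughly 0.15 -- badly underpowered

# Of the studies that DID detect the effect, how many exact replications also detect it?
replications = np.array([significant() for _ in range(trials)])
print("replication rate:", replications[originals].mean())     # also ~0.15, so most "fail"
```

In other words, a dismal replication rate can arise purely from underpowered designs, which is part of why the respondents flagged statistical power as a cause.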
How dire is this? Two heartening findings in Baker's paper: just about all of the scientists polled want more stringent guidelines for reproducibility, and high-visibility work is far more likely to be checked and verified prior to publication. (Sorry, climate change deniers -- you can't use this paper to support your views.)
And despite all of this, science is still by far our best tool for understanding. It's not free from error, nor from the completely human failings of duplicity and carelessness. But compared to other ways of moving toward the truth, it's pretty much the only game there is.