Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, August 6, 2016

Bring on the documentaries

I was in my junior year of college when Carl Sagan's Cosmos first aired.  I, and several of my friends, were absolutely riveted.  After each episode we'd eagerly discuss what we'd learned, what amazing stuff about the universe Dr. Sagan had expounded upon.  I was blown away both by the visual artistry (although it looks antiquated today, back in 1980 it was seriously impressive), and by the music, which was and is absolutely stunning.

[image courtesy of the Wikimedia Commons]

I also remember, however, the backlash Sagan himself received from other scientists.  He was derided as a "popularizer," scorned as someone who presented pretty pictures and watered down, common-language analogies rather than actual hard science.

I thought this was pretty mean-spirited, but I didn't realize how common that perception was in the scientific world.  Four years later, as a graduate student in oceanography at the University of Washington, I found out that there was really only one pair of words that was considered so vulgar that no one was allowed to utter it: "Jacques Cousteau."  Cousteau was an object of derision, not a "real scientist" at all, just a guy who spoke in a cheesy French accent and liked to get filmed while scuba diving.  In fact, my adviser once told me that he made a point of never accepting a graduate student who mentioned Cousteau's name in their interview.

So this irritation with people who make science accessible to the layperson runs deep, although I have to hope that this is changing, with a few truly first-rate scientists writing books to bring the latest research to the masses (Stephen Hawking, Sean Carroll, Brian Greene, Kip Thorne, Roger Penrose, Lee Smolin, and Lawrence Krauss come to mind).

It's a good thing.  Because to judge from a piece of research published this week in Advances in Political Psychology, there's more to be gained from popularizing science than just encouraging children to pursue science as a career; fostering a fundamental curiosity about nature is essential to eradicating biased thinking across the board.

Called "Science Curiosity and Political Information Processing," the paper, written by Dan Kahan of Yale University et al., looks at how best to move people from leaning on their own preconceived notions to evaluating the strength of claims based on evidence.  The research looked at how watching science documentaries engenders a curiosity about how the world works, and correlates with a lower likelihood of biases in arguments on subjects like anthropogenic climate change.

Kahan spoke with Chris Mooney, science writer over at The Washington Post, and explained what the research by his team had shown.  "It just so happened that, when we looked at the characteristics of [people who watch science documentaries], they seemed to be distinct politically," Kahan said.  "They stood out by being, as a group, less likely to feed the current polarization of political opinion on scientific matters such as climate change.  The data we’ve collected furnish a strong basis for viewing science curiosity as an important individual difference in cognitive style that interacts in a distinctive way with political information processing."

The most fascinating part of the research is that the difference doesn't seem to be related to scientific training, but scientific curiosity.  Having established a scale for measuring curiosity, Kahan et al. looked at both liberals and conservatives and assessed them for biased thinking.  Mooney writes:
Armed with the scientific curiosity scale, Kahan’s new study first demonstrated that while liberal Democrats and conservative Republicans with higher levels of proficiency in scientific thinking (which he calls “ordinary science intelligence”) tend to become more polarized and divided over the scientifically supported risks involved in both climate change and fracking, Democrats and Republicans with higher levels of science curiosity don’t.  Rather, for both groups, the more curious they are, the more their perceptions of the risks tend to increase...  [T]he study also contained an experiment, demonstrating that being possessed of heightened levels of scientific curiosity appeared to make political partisans more likely to read scientific information that went against their predilections.
The final statement is, to me, the most important.  An absolutely critical feature of the scientific view of the world is the ability to continually question one's base assumptions, and to look at the data with a skeptical eye.  And I am not using the word "skeptical" to mean "doubting," the way you hear people talk about "climate change skeptics" (a phrase that makes my skin crawl; no actual skeptic could consider the evidence about climate change questionable).  I am using "skeptical" in its literal sense, which means giving a rigorous look at the data from every angle, considering what it's telling you and examining the meaning of any trends that you happen to observe.  Which, of course, means entertaining the possibility that your prior understanding may be incorrect.  To me, there is no better indication of a truly scientific mind than when someone says, "Well, after examining the evidence, turns out I was wrong about that after all."

So we should be thankful for the popularizers, who follow in a long tradition of work by such greats as Sagan and Richard Feynman.  Children need to have their curiosity about the universe piqued early, and the flames fanned further by watching cool science shows that open their eyes to what a fascinating place we live in.  Think about what it would be like if we had a nation full of people who were committed to looking at the world through the lenses of evidence, logic, and critical thinking instead of prejudice and stubborn adherence to their own biases.

It's a nice possibility to think about, isn't it?

Tuesday, May 31, 2016

Doubt, experiment, and reproducibility

Yesterday I got a response to a post I did a little over a year ago about research suggesting fundamental differences in firing patterns in the brains of liberals and conservatives.  The study, headed by Darren Schreiber of the University of Exeter, used fMRI to compare brain activity in people of different political leanings, and found that liberals show greater responsiveness in parts of the brain associated with risk-seeking, and conservatives in areas connected with anxiety and risk aversion.

The response, however, was as pointed as it was short.  It said, "I'm surprised you weren't more skeptical of this study," and provided a link to a criticism of Schreiber's work by Dan Kahan over at the Cultural Cognition Project.  Kahan is highly doubtful of the partisan-brain study, and says so in no uncertain terms:
Before 2009, many fMRI researchers engaged in analyses equivalent to what Vul [a researcher who is critical of the method Schreiber used] describes.  That is, they searched around within unconstrained regions of the brain for correlations with their outcome measures, formed tight “fitting” regressions to the observations, and then sold the results as proof of the mind-blowingly high “predictive” power of their models—without ever testing the models to see if they could in fact predict anything. 
Schreiber et al. did this, too.  As explained, they selected observations of activating “voxels” in the amygdala of Republican subjects precisely because those voxels—as opposed to others that Schreiber et al. then ignored in “further analysis”—were “activating” in the manner that they were searching for in a large expanse of the brain.  They then reported the resulting high correlation between these observed voxel activations and Republican party self-identification as a test for “predicting” subjects’ party affiliations—one that “significantly out-performs the longstanding parental model, correctly predicting 82.9% of the observed choices of party.” 
This is bogus.  Unless one “use[s] an independent dataset” to validate the predictive power of “the selected . . .voxels” detected in this way, Kriegeskorte et al. explain in their Nature Neuroscience paper, no valid inferences can be drawn.  None.
So it appears that Schreiber et al. were guilty of what James Burke calls "designing an experiment to find the kind of data you reckon you're going to find."  It would be hard to recognize that from the original paper itself without being a neuroscientist, of course.  I fell for Schreiber's research largely because I'm a generalist, which leaves me unqualified to spot errors in highly specialized, technical fields.
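To see why that kind of circular voxel selection is such a problem, here's a minimal sketch -- not Schreiber's actual pipeline or data, just simulated noise with made-up numbers -- showing how picking features and scoring the model on the same observations can conjure an impressive-looking "predictive" accuracy out of nothing, while validation on an independent holdout drops it right back to chance:

```python
# A minimal sketch of circular analysis: all data here are pure noise, and every
# number is invented for illustration (this is not Schreiber et al.'s analysis).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_voxels = 80, 5000

# Simulated "voxel activations" with no relationship whatsoever to party label.
X = rng.normal(size=(n_subjects, n_voxels))
y = rng.integers(0, 2, size=n_subjects)   # 0/1 party labels, assigned at random

def top_correlated_voxels(X, y, k=10):
    """Pick the k voxels most correlated with the label -- the 'search' step."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(corr)[-k:]

# Circular analysis: select voxels AND evaluate the model on the same data.
sel = top_correlated_voxels(X, y)
model = LogisticRegression().fit(X[:, sel], y)
print("in-sample 'accuracy':", model.score(X[:, sel], y))   # looks impressive

# Proper validation: select and fit on one half, test on an independent half.
train, test = np.arange(0, 40), np.arange(40, 80)
sel = top_correlated_voxels(X[train], y[train])
model = LogisticRegression().fit(X[train][:, sel], y[train])
print("held-out accuracy:", model.score(X[test][:, sel], y[test]))  # near chance
```

The second number hovering around 0.5 is the whole point of Kriegeskorte's objection: without an independent dataset, the "prediction" is just the noise you went looking for.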

Interestingly, this comment came hard on the heels of a paper by Monya Baker that appeared last week in Nature called "1,500 Scientists Lift the Lid on Reproducibility."  Baker writes:
More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments.  Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research... 
Data on how much of the scientific literature is reproducible are rare and generally bleak.  The best-known analyses, from psychology and cancer biology, found rates of around 40% and 10%, respectively.  Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence. 
The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland.  “At the current time there is no consensus on what reproducibility is or should be.”
The causes were many and varied.  According to the respondents, failures to reproduce results stemmed from problems ranging from low statistical power to unavailable methods to poor experimental design; worse still, all too often no one even bothers to try to reproduce results, because the pressure is to publish one's own work, not to check someone else's.  As a result, slipshod research -- and sometimes outright fraud -- gets into print.
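To give a sense of how much damage the low-statistical-power issue alone can do, here's a rough simulation -- the effect size, sample sizes, and significance threshold are arbitrary assumptions of mine, not anything drawn from Baker's survey -- showing that when studies are underpowered, even a perfectly real effect gets "confirmed" by an exact replication only a small fraction of the time:

```python
# A rough simulation of the low-power problem: a small true effect, small samples,
# and the usual 0.05 threshold. All parameters here are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n, alpha, trials = 0.3, 20, 0.05, 5000   # small effect, 20 per group

def study_is_significant():
    treatment = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    return stats.ttest_ind(treatment, control).pvalue < alpha

originals = [study_is_significant() for _ in range(trials)]
power = np.mean(originals)

# For every original study that "worked," run one exact replication of the design.
replications = [study_is_significant() for hit in originals if hit]

print(f"power of a single study:                    {power:.2f}")
print(f"significant originals that also replicated: {np.mean(replications):.2f}")
# Both numbers come out around 0.15 here: an exact replication succeeds no more
# often than the original design's power allows, however real the effect is.
```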

How dire is this?  Baker's paper describes two heartening findings: just about all of the scientists polled want more stringent guidelines for reproducibility, and work of high visibility is far more likely to be checked and verified prior to publication.  (Sorry, climate change deniers -- you can't use this paper to support your views.)

[image courtesy of the Wikimedia Commons]

What it means, of course, is that science bloggers who aren't scientists themselves -- including, obviously, myself -- have to be careful about cross-checking and verifying what they write, lest they end up spreading around bogus information.  I'm still not completely convinced that Schreiber et al. were as careless as Kahan claims; at the moment, all we have is Kahan's criticism that they were guilty of the multitude of failings described in his article.  But it does reinforce our need to think critically and question what we read -- even if it's in a scientific journal.

And despite all of this, science is still by far our best tool for understanding.  It's not free from error, nor from the completely human failings of duplicity and carelessness.  But compared to other ways of moving toward the truth, it's pretty much the only game there is.

Thursday, September 19, 2013

Magnets, politics, and preconceived notions

Two stories showed up just in the last couple of days that are interesting primarily in juxtaposition.

First, we had a scholarly paper published in PLOS One, entitled "Copper Bracelets and Magnetic Wrist Straps for Rheumatoid Arthritis – Analgesic and Anti-Inflammatory Effects: A Randomised Double-Blind Placebo Controlled Crossover Trial."  In it, we find out what most skeptics suspected from the get-go -- that magnetic and copper bracelets and anklets and necklaces and shoe-sole inserts and so on are a complete non-starter when it comes to treating disease.

These claims have been around for years, and usually rely on pseudoscientific bosh of the kind you find on this site, which offers the following "explanation":
Life developed under the influence of the earth's geomagnetic field.  We are surrounded by a sea of magnetism.  The human body, its individual organs and each of the millions of cells making up the organs and the body bathed by this sea are magnetically charged.  Cell regulation, tissue function and life itself are controlled by internal electromagnetic currents.  In disease states, these electromagnetic potentials are altered but fortunately can be favorably influenced by the external application of magnetics...  Used correctly, Electro-Magnetic Energy Fields are a proven therapeutic modality.  Research and clinical experience has established that the very gentle, EULF, low power pulsed magnetic energy improves the repair of damaged tissue and reduction of pain, improved oxygen transport in the red blood cells, increased nutrient and oxygen uptake at the cellular level.  Greater elasticity of blood vessels, changes in acid/alkaline balance, altering of enzyme and hormone activity, all play an important role in the return to good health...  Negative magnetic fields oxygenate and alkalize by aiding the body's defense against bacteria, fungi, and parasites, all of which thrive in an acid medium.  In degenerative diseases, calcium is found deposited around inflamed joints, bruised areas on the hell, and in bones and kidney stones.  Infections occur because they function well in an acidic, oxygen deficient state.
Which, in my opinion, should win some kind of award for packing the most bullshit into a single paragraph.

So the whole copper-and-magnet thing never did make much sense.  But don't take my word for it; here's what Richardson, Gunadasa, Bland, and MacPherson said, after having run a double-blind efficacy test on magnetic bracelets:
The results of this study may be understood in a number of ways. The most obvious interpretation is that they demonstrate that magnetic wrist straps, and also copper bracelets, have little if any specific therapeutic effects (i.e. beyond those of a placebo) on pain, inflammation, or disease activity in rheumatoid arthritis...  The fact that we were unable to demonstrate... a difference for the primary outcome measure on its own, nor indeed any of the other core measures employed, strongly suggests that wearing magnetic wrists straps, or copper bracelets, in order to minimise disease progression and alleviate symptoms of rheumatoid arthritis is a practice which lacks clinical efficacy.
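For anyone curious what the core comparison in a trial like this boils down to, here's a hedged sketch with simulated data -- not Richardson et al.'s dataset or their exact statistical model -- in which each participant is measured under both the "real" device and a placebo device, and the paired within-person differences are tested:

```python
# A hedged sketch of the basic comparison behind a placebo-controlled crossover
# trial. The pain scale and all numbers below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 60                                      # participants, each tried both devices
baseline = rng.normal(60.0, 10.0, n)        # pain scores on an arbitrary 0-100 scale

# Under the "it's all placebo" conclusion the trial points to, both conditions
# produce the same modest improvement plus noise.
pain_magnet  = baseline - 5.0 + rng.normal(0.0, 8.0, n)
pain_placebo = baseline - 5.0 + rng.normal(0.0, 8.0, n)

diff = pain_magnet - pain_placebo           # within-person difference
result = stats.ttest_rel(pain_magnet, pain_placebo)
print(f"mean difference (magnet - placebo): {diff.mean():.2f}")
print(f"paired t-test p-value: {result.pvalue:.3f}")
# With no specific effect, the mean difference hovers near zero and the p-value
# is typically unremarkable -- consistent with the kind of null result the
# authors describe.
```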
But as I said, this is hardly a surprise to skeptics, who doubted the whole thing pretty much from the outset.

The second story at first seems to connect to the first only tangentially, at best.  Chris Mooney, a skeptical writer of well-deserved high reputation, wrote about it this week in Grist in a piece called "Science Confirms: Politics Wrecks Your Ability to do Math."  In Mooney's article we hear about a study by Dan Kahan and his colleagues at Yale Law School, in which two groups of people were asked to solve the same (rather difficult) mathematical problem -- but one group was told the problem was about "the effectiveness of a new skin cream for rashes," and the other that it was about "the effectiveness of a new law banning private citizens from carrying concealed handguns in public."

What Kahan's study found was that when the problem involved the relatively emotionally-neutral context of a skin cream, your ability to solve the problem correctly depended upon only one thing -- your skill at math.  In other words, both Democrats and Republicans scored well on the problem if they were good at math, and both scored poorly if they were bad at math.  But when the problem involved handguns, a different pattern emerged.  Here's how Mooney explains the results:
So how did people fare on the handgun version of the problem? They performed quite differently than on the skin cream version, and strong political patterns emerged in the results — especially among people who are good at mathematical reasoning. Most strikingly, highly numerate liberal Democrats did almost perfectly when the right answer was that the concealed weapons ban does indeed work to decrease crime...  an outcome that favors their pro-gun-control predilections. But they did much worse when the correct answer was that crime increases in cities that enact the ban... 
The opposite was true for highly numerate conservative Republicans: They did just great when the right answer was that the ban didn't work... but poorly when the right answer was that it did. 
Put simply: when our emotions and preconceived notions are involved, data and logic have very little impact on our brains.
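To make the arithmetic concrete, here's a small worked example -- the numbers are illustrative, not necessarily the exact figures Kahan's team used -- of the kind of 2x2 problem involved, where the intuitive move of comparing raw counts gives the wrong answer and the right move is to compare rates within each group:

```python
# Illustrative counts for the skin-cream version of a 2x2 problem like the one
# described above (made-up figures, not the study's stimulus).
used_cream = {"improved": 223, "worsened": 75}
no_cream   = {"improved": 107, "worsened": 21}

def improvement_rate(group):
    return group["improved"] / (group["improved"] + group["worsened"])

print(f"improved with the cream:    {improvement_rate(used_cream):.0%}")  # about 75%
print(f"improved without the cream: {improvement_rate(no_cream):.0%}")    # about 84%
# Raw counts (223 vs. 107 improved) make the cream look helpful; the rates show
# that people who skipped it actually did better.
```

Getting that right takes a deliberate ratio comparison, which is exactly the step that motivated reasoning seems to short-circuit when the topic is gun control rather than rashes.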

This is a profoundly unsettling conclusion, especially for people like me.  Every day I get up and write about how people should be more logical and rational and data-driven, and here Kahan et al. show me that all of the double-blind studies in the world aren't going to convince people that their magnet-studded copper bracelets aren't helping their arthritis pain if they already thought that they worked.

It does leave me with a sort of bleak feeling.  I mean, why test wacko claims, if the only people who will believe the results are the ones who already agreed with the result of the experiment beforehand?  Maybe this justifies the fact that I spend as much time making fun of woo-woos as I do arguing logically against them.  Appeal to people's emotions, and you're much more likely to get a result.

On the other hand, this feels to me way too much like sinking to their level.  I live in hope that the people who are convinced by what I write -- and maybe there have been a few -- have been swayed more by my logic than by my sarcasm.

But given human nature -- and Kahan's experiment -- maybe that's a losing proposition.