Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label replicability.

Wednesday, August 25, 2021

The honesty researcher

One of the things I pride myself on is honesty.

I'm not trying to say I'm some kind of paragon of virtue, but I do try to tell the truth in a direct fashion.  I hope it's counterbalanced by kindness -- that I don't broadcast a hurtful opinion and excuse it by saying "I'm just being honest" -- but if someone wants to know what I think, I'll tell 'em.

As the wonderful poet and teacher Taylor Mali put it, "I have a policy about honesty and ass-kicking.  Which is: if you ask for it, I have to let you have it."  (And if you haven't heard his wonderful piece "What Teachers Make," from which that quote was taken -- sit for three minutes right now and watch it.)


I think it's that commitment to the truth that first attracted me to science.  I was well aware from quite a young age that there was no reason to equate an idea making me happy and an idea being the truth.  It was as hard for me to give up magical thinking as the next guy -- I spent a good percentage of my teenage years noodling around with Tarot cards and Ouija boards and the like -- but eventually I had to admit to myself that it was all a bunch of nonsense.

In science, honesty is absolutely paramount.  It's about data and evidence, not about what you'd dearly love to be true.  As the eminent science fiction author Philip K. Dick put it, "Reality is that which, when you stop believing in it, doesn't go away."

Or perhaps I should put it, "it should be about data and evidence."  Scientists are human, and are subject to the same temptations the rest of us are -- but they damn well better be above-average at resisting them.  Because once you've let go of that touchstone, it not only calls into question your own veracity, it casts a harsh light on the scientific enterprise as a whole.

And to me, that's damn near unforgivable.  Especially given the anti-science attitude that is currently so prevalent in the United States.  We don't need anyone or anything giving more ammunition to the people who think the scientists are lying to us for their own malign purposes -- the people who, to quote the great Isaac Asimov, think "my ignorance is as good as your knowledge."

Which brings me to Dan Ariely.

Ariely is a psychological researcher at Duke University, and made a name for himself studying the issue of honesty.  I was really impressed with him and his research, which looked at how our awareness of the honor of truth-telling affects our behavior, and the role of group identification and tribalism in how much we're willing to bend our own personal morality.  I used to show his TED Talk, "Our Buggy Moral Code," to my Critical Thinking classes at the beginning of the unit on ethics; his conclusions seemed to be a fascinating lens on the whole issue of honesty and when we decide to abandon it.

Which is more than a little ironic, because the data Ariely used to support these conclusions appear to have been faked -- possibly by Ariely himself.

[Image licensed under the Creative Commons Yael Zur, for Tel Aviv University Alumni Organization, Dan Ariely January 2019, CC BY-SA 4.0]

Ariely has not admitted any wrongdoing, but has agreed to retract the seminal paper on the topic, which appeared in the prestigious journal Proceedings of the National Academy of Sciences back in 2012.  "I can see why it is tempting to think that I had something to do with creating the data in a fraudulent way," Ariely said, in a statement to BuzzFeed News.  "I can see why it would be tempting to jump to that conclusion, but I didn’t...  If I knew that the data was fraudulent, I would have never posted it."

His contention is that the insurance company that provided the data, The Hartford, might have given him fabricated (or at least error-filled) data, although what their motivation for doing so could be is uncertain at best.  There's also the problem that the discrepancies in the 2012 paper led analysts to sift through his other publications, where they found a troubling pattern of sloppy data handling, failures to replicate results, misleading claims about sources, and more possible outright falsification.  (Check out the link I posted above for a detailed overview of the issues with Ariely's work.)

Seems like the one common thread running through all of these allegations is Ariely.

It can be very difficult to prove scientific fraud.  If a researcher deliberately fabricated data to support his/her claims, how can you prove that it was deliberate, and not either (1) an honest mistake, or (2) simply bad experimental design (which isn't anything to brag about, but is still in a separate class of sins from outright lying)?  Every once in a while, an accused scientist will actually admit it -- one example that jumps to mind is Korean stem-cell researcher Hwang Woo-Suk, whose spectacular fall from grace reads like a Shakespearean tragedy -- but like many politicians who are accused of malfeasance, a lot of times the accused scientist just decides to double down, deny everything, and soldier on, figuring that the storm will eventually blow over.

And, sadly, it usually does.  Even in Hwang's case -- not only did he admit fraud, he was fired by Seoul National University and tried and found guilty of embezzlement -- he's back doing stem-cell research, and since his conviction has published a number of papers, including ones indexed in PubMed.

I don't know what's going to come of Ariely's case.  Much is being made of the fact that a researcher in honesty and morality has been accused of being dishonest and immoral.  Ironic as this is, the larger problem is that this sort of thing scuffs the reputation of the scientific endeavor as a whole.  The specific results of Ariely's research aren't that important; what is much more critical is that incidents like this make laypeople cast a jaundiced eye on the entire enterprise.

And that, to me, is absolutely inexcusable.

*********************************************

I've been interested for a long while in creativity -- where it comes from, why different people choose different sorts of creative outlets, and where we find our inspiration.  Like a lot of people who are creative, I find my creative output -- and my confidence -- ebbs and flows.  I'll have periods where I'm writing every day and the ideas are coming hard and fast, and times when it seems like even opening up my work-in-progress is a depressing prospect.

Naturally, most of us would love to enhance the former and minimize the latter.  This is the topic of the wonderful book Think Like an Artist, by British author (and former director of the Tate Gallery) Will Gompertz.  He draws his examples mostly from the visual arts -- his main area of expertise -- but overtly states that the same principles of creativity apply equally well to musicians, writers, dancers, and all of the other kinds of creative humans out there. 

And he also makes a powerful point that all of us are creative humans, provided we can get out of our own way.  People who (for example) would love to be able to draw but say they can't do it, Gompertz claims, need not to change their goals but to change their approach.

It's an inspiring book, and one which I will certainly return to the next time I'm in one of those creative dry spells.  And I highly recommend it to all of you who aspire to express yourself creatively -- even if you feel like you don't know how.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Monday, September 10, 2018

Science bias

I come on pretty strongly in favor of science most of the time.  While I try to temper my obvious pro-science stance with an admission that the scientists are only human and therefore fallible, I've been known to use phrases like "the only game in town" with respect to science as a pathway to knowledge.

It may be, however, that I'll have to tone it down a little, considering a study that appeared in eNeuro last week.  Entitled "Why Is It So Hard to Do Good Science?" and written by Emory University professor of pharmacology Ray Dingledine, the paper posits that science is inherently susceptible to confirmation bias -- the very thing the scientific method was developed to control.

[Image is in the Public Domain]

Dingledine's claim is that scientists introduce bias into experiments inadvertently because their preconceived notions about what they think they're going to find alter how they approach the question -- all the way down to the level of what equipment they use.  James Burke pointed this out, in his wonderful series The Day the Universe Changed.  "At this stage, you're looking for data to support your theory, so you design instruments to find the kind of data you reckon you're going to find," Burke says.  "The whole argument comes full circle when you get the raw data itself.  Because it isn't raw data.  It's what you planned to find from the start."

A lot of scientists bristle at this kind of criticism, and point out examples of scientists finding data that didn't fit the existing model, resulting in the model being overhauled or thrown out entirely.  Burke does as well; he cites plate tectonics, a model that arose from magnetometer data from the ocean floor that couldn't be explained with the understanding of geology at the time.

But the thing is, those instances stand out precisely because they're so uncommon.  Major revisions of the model are actually really infrequent -- which a lot of us rah-rah-science types have celebrated as a vindication that the scientific approach works, because it's given us rock-solid theories that have withstood decades, in some cases centuries, of empirical work.

Dingledine poniards that idea neatly.  He writes:
“Good science” means answering important questions convincingly, a challenging endeavor under the best of circumstances.  Our inability to replicate many biomedical studies has been the subject of numerous commentaries both in the scientific and lay press.  In response, statistics has re-emerged as a necessary tool to improve the objectivity of study conclusions. However, psychological aspects of decision–making introduce preconceived preferences into scientific judgment that cannot be eliminated by any statistical method.

It's possible to counter this tendency, Dingledine says, but not in any sense easy:
The findings reinforce the roles that two inherent intuitions play in scientific decision-making: our drive to create a coherent narrative from new data regardless of its quality or relevance, and our inclination to seek patterns in data whether they exist or not.  Moreover, we do not always consider how likely a result is regardless of its P-value.  Low statistical power and inattention to principles underpinning Bayesian statistics reduce experimental rigor, but mitigating skills can be learned.  Overcoming our natural human tendency to make quick decisions and jump to conclusions is a deeper obstacle to doing good science; this too can be learned.
Which just shows that bias runs deeper, and is harder to expunge, than most of us want to admit.
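To put rough numbers on that last point -- this is my own back-of-the-envelope illustration, not anything from Dingledine's paper -- here's a quick Python sketch of the Bayesian arithmetic he's gesturing at: how often a "statistically significant" result is actually a real effect when power is low and most of the hypotheses being tested are false.

alpha = 0.05           # significance threshold (p < 0.05)
power = 0.30           # chance a real effect reaches significance; low, but not unusual
prior = 0.10           # assumed fraction of tested hypotheses that are actually true

true_positives  = prior * power          # real effects that come out "significant"
false_positives = (1 - prior) * alpha    # null effects that come out "significant" anyway

ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))   # ~0.4 -- under these assumptions, most "significant" findings are false

Rerun it with power = 0.80 and the answer only climbs to about 0.64.  Low power doesn't just miss real effects; it quietly degrades the meaning of the significant ones, which is exactly the kind of thing a p-value alone won't tell you.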

Now, I'm not suggesting anyone switch from scientific experimentation to Divine Inspiration or whatnot.  Nor am I saying that any of the Big Ideas -- the aforementioned plate tectonics, the Newtonian/Einsteinian model of physics, quantum mechanics, molecular genetics, evolution by natural selection -- are wrong in any kind of substantive way.  It's more that we can't afford to get cocky.  What happens when you get cocky is you miss things, including the effect your preconceived notions have on your outlook.

So all of us could use a dose of humility, not to mention self-awareness.  The take-home message here is that we shouldn't accept ideas as true out of hand, and should be especially wary of the ones that agree with what we already thought was true.  We're all prone to confirmation bias -- and that includes the smartest amongst us.

**************************

This week's Skeptophilia book recommendation is a charming inquiry into a realm that scares a lot of people -- mathematics.  In The Universe and the Teacup, K. C. Cole investigates the beauty and wonder of that most abstract of disciplines, and even for -- especially for -- non-mathematical types, gives a window into a subject that is too often taught as an arbitrary set of rules for manipulating symbols.  Cole, in a lyrical and not-too-technical way, demonstrates brilliantly the truth of the words of Galileo -- "Mathematics is the language with which God has written the universe."





Tuesday, August 28, 2018

What we've got here is a failure to replicate

I frequently post about new scientific discoveries, and having a fascination for neuroscience and psychology, a good many of them have to do with how the human brain works.  Connecting behavior to the underlying brain structure is not easy -- but with the advent of the fMRI, we've begun to make some forays into trying to elucidate how the brain's architecture is connected to neural function, and how neural function is connected to higher-order phenomena like memory, learning, instinct, language, and socialization.

Whenever I post about science I try my hardest to use sources that are from reputable journals such as Science and Nature -- and flag the ones that aren't as speculative.  The reason those gold-standard journals are considered so reliable is because of a rigorous process of peer review, wherein scientists in the field sift through papers with a fine-toothed comb, demanding revisions on anything questionable -- or sometimes rejecting the paper out of hand if it doesn't meet the benchmark.

[Image is in the Public Domain]

That's why a paper published in -- you guessed it -- Nature had me picking my jaw up off the floor.  A team of psychologists and social scientists, led by Colin Camerer of Caltech, took 21 psychological studies that had been published either in Nature or in Science and didn't just review them carefully, but tried to replicate their results.

Only 13 of them turned out to be replicable.

This is a serious problem.  I know that scientists are fallible just like the rest of us, but this to me doesn't sound like ordinary fallibility, it sounds like outright sloppiness, both on the part of the researchers and on the part of the reviewers.  I mean, if you can't trust Nature and Science, who can you trust?

Anna Dreber, of the Stockholm School of Economics, who co-authored the study, was unequivocal about its import.  "A false positive result can make other researchers, and the original researcher, spend lots of time and energy and money on results that turn out not to hold," she said.  "And that's kind of wasteful for resources and inefficient, so the sooner we find out that a result doesn't hold, the better."

Brian Nosek, of the University of Virginia, was also part of the team that did the study, and he thought that the pattern they found went beyond the "publish-or-perish" attitude that a lot of institutions have.  "Some people have hypothesized that, because they're the most prominent outlets they'd have the highest rigor," Nosek said.  "Others have hypothesized that the most prestigious outlets are also the ones that are most likely to select for very 'sexy' findings, and so may be actually less reproducible."

One heartening thing is that as part of the study, the researchers asked four hundred scientists in the field who were not involved with the study to take a look at the 21 papers in question, and make their best assessment as to whether each would pass replication or not.  And the scientists' guesses were usually correct.

So why, then, did eight flawed, non-replicable studies get past the review boards of the two most prestigious science journals in the world?  "The likelihood that a finding will replicate or not is one part of what a reviewer would consider," Nosek said.  "But other things might influence the decision to publish.  It may be that this finding isn't likely to be true, but if it is true, it is super important, so we do want to publish it because we want to get it into the conversation."

Well, okay, but how often are these questionably-correct but "super important" findings labeled as such?  It's rare to find a paper where there's any degree of doubt expressed for the main gist (although many of them do have sections on the limitations of the research, or questions that are still unanswered).  And it's understandable why.  If I were on a review board, I'd definitely look askance at a paper that made a claim and then admitted the results of the research might well be a fluke.

So this is kind of troubling.  It's encouraging that at least the inquiry is being made; identifying that a process is flawed is the first step toward fixing it.  As for me, I'm going to have to be a little more careful with my immediate trust of psychological research just because it was published in Nature or Science.

"The way to get ahead and get a job and get tenure is to publish lots and lots of papers," said Will Gervais of the University of Kentucky, who was one of the researchers whose study failed replication.  "And it's hard to do that if you are able run fewer studies, but in the end I think that's the way to go — to slow down our science and be more rigorous up front."

******************************************

This week's Skeptophilia book recommendation is from one of my favorite thinkers -- Irish science historian James Burke.  Burke has made several documentaries, including Connections, The Day the Universe Changed, and After the Warming -- the last-mentioned an absolutely prescient investigation into climate change that came out in 1991 and predicted damn near everything that would happen, climate-wise, in the twenty-seven years since then.

I'm going to go back to Burke's first really popular book, the one that was the genesis of the TV series of the same name -- Connections.  In this book, he looks at how one invention, one happenstance occurrence, one accidental discovery, leads to another, and finally results in something earthshattering.  (One of my favorites is how the technology of hand-weaving led to the invention of the computer.)  It's simply great fun to watch how Burke's mind works -- each of his little filigrees is only a few pages long, but you'll learn some fascinating ins and outs of history as he takes you on these journeys.  It's an absolutely delightful read.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]




Thursday, September 29, 2016

I smell a rat

I think I've made my position on GMOs plain enough, but let me just be up front about it right out of the starting gate.

There is nothing intrinsically dangerous about genetic modification.  Since each GMO involves messing with a different genetic substructure, the results will be different each time -- and therefore will require separate testing for safety.  The vast majority of GMOs have been extensively tested for deleterious human health effects, and almost all of those have proven safe (the ones that weren't never reached market).

So GMOs are, overall, as safe as any other agricultural practice -- i.e. not 100% foolproof, but with appropriate study, not something that deserves the automatic stigma the term has accrued.

There are a great many people who don't see it that way.  One of the most vocal is Gilles-Éric Séralini, who made headlines back in 2007 with a study alleging that rats fed genetically modified corn showed blood and liver abnormalities.  After the study was published, other scientists attempted to replicate it (and failed), and the results of Séralini's study were attributed to "normal biological variation (for the species in question)."

[image courtesy of the Wikimedia Commons]

Undeterred, Séralini went on in 2012 to publish a paper in Food and Chemical Toxicology about the long-term toxicity of glyphosate (Roundup) that is still the go-to research for the anti-Monsanto crowd.  He claimed that rats dosed with glyphosate developed large tumors and other abnormalities.  But that study, too, failed attempts at replication, and it was retracted by FCT, with the editor-in-chief stating that the results were "inconclusive."

So if you smell a rat with respect to Séralini and his alleged studies, you're not alone.

But there's no damage to your reputation that can't be made worse, and Séralini took that dubious path last week -- with a "study" that claims that a homeopathic remedy can protect you from the negative effects of RoundUp.

So, to put it bluntly: a sugar pill can help you fight off the health problems caused by something that probably doesn't cause health problems, at least in the dosages that most of us would ordinarily be exposed to.

Being that such research -- if I can dignify it by that name -- would never pass peer review, Séralini went right to a pay-to-play open-access alt-med journal called BMC Complementary and Alternative Medicine.  Steven Savage, a plant pathologist, had the following to say about the study:
The dose is absurd.  They gave the animals the equivalent of what could be in the spray tank including the surfactants and the a.i. (active ingredients).  If glyphosate or its AMPA metabolite ever end up in a food it is at extremely low concentrations and never with the surfactant.  Unless you were a farmer or gardener who routinely drinks from the spray tank over eight days, this study is meaningless.
Furthermore, Andrew Porterfield, who wrote the scathing critique of Séralini I linked above, pointed out an additional problem:
Scientists have been sharply critical of the study’s methodology and conclusions... the paper has no discussion on the natural variability in locomotion or physiological parameters, making it impossible to tell if anything was truly wrong with any of the animals.
And if that weren't bad enough, Séralini proposes to counteract these most-likely-nonexistent health effects with pills that have been diluted past Avogadro's Limit -- i.e., past the point where there's likely to be even a single molecule of the original substance left.  There have been dozens of controlled studies of the efficacy of homeopathy, and none of them -- not one -- has shown that it has any effect at all except as a placebo.
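If you want to see why "past Avogadro's Limit" is such a damning phrase, the arithmetic is short.  Here's a minimal Python sketch -- my own illustration, using the standard "30C" potency that homeopaths commonly sell, not necessarily the exact dilution Séralini's group tested:

AVOGADRO = 6.022e23           # molecules in one mole of any substance
dilution_factor = 100 ** 30   # a "30C" remedy: diluted 1:100, thirty times over

# Start generously with an entire mole of active ingredient, then dilute:
expected_molecules = AVOGADRO / dilution_factor
print(expected_molecules)     # ~6e-37 -- effectively zero molecules of the original substance

The crossover comes at roughly 12C (a dilution factor of 10^24): beyond that, the expected number of molecules remaining drops below one, and what you're buying is, with overwhelming probability, just water and sugar.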

So we have doubtful health problems in animals that were not evaluated beforehand for health problems being treated by worthless "remedies" that have been shown to have zero effect in controlled studies.

Of course, considering how powerful confirmation bias is, I'm not expecting this to convince anyone who wasn't already convinced.  I will say, however, that we'd be in a lot better shape as a species if we relied more on reason, logic, and evidence -- and less on our preconceived notions of how we'd like the world to be.