It may be, however, that I'll have to tone it down a little, considering a study that appeared in eNeuro last week. Entitled "Why Is It So Hard to Do Good Science?" and written by Emory University pharmacology professor Ray Dingledine, this paper posits that science is inherently susceptible to confirmation bias -- the very thing the scientific method was developed to control.
[Image is in the Public Domain]
A lot of scientists bristle at this kind of criticism, and point out examples of scientists finding data that didn't fit the existing model, resulting in the model being overhauled or thrown out entirely. Burke does as well; he cites plate tectonics, a model that arose from magnetometer data from the ocean floor that couldn't be explained by the understanding of geology at the time.
But the thing is, those instances stand out precisely because they're so uncommon. Major revisions of the model are actually really infrequent -- which a lot of us rah-rah-science types have celebrated as a vindication that the scientific approach works, because it's given us rock-solid theories that have withstood decades, in some cases centuries, of empirical work.
Dingledine poniards that idea neatly. He writes:
“Good science” means answering important questions convincingly, a challenging endeavor under the best of circumstances. Our inability to replicate many biomedical studies has been the subject of numerous commentaries both in the scientific and lay press. In response, statistics has re-emerged as a necessary tool to improve the objectivity of study conclusions. However, psychological aspects of decision-making introduce preconceived preferences into scientific judgment that cannot be eliminated by any statistical method.
It's possible to counter this tendency, Dingledine says, but not in any sense easy:
The findings reinforce the roles that two inherent intuitions play in scientific decision-making: our drive to create a coherent narrative from new data regardless of its quality or relevance, and our inclination to seek patterns in data whether they exist or not. Moreover, we do not always consider how likely a result is regardless of its P-value. Low statistical power and inattention to principles underpinning Bayesian statistics reduce experimental rigor, but mitigating skills can be learned. Overcoming our natural human tendency to make quick decisions and jump to conclusions is a deeper obstacle to doing good science; this too can be learned.

Which just shows that bias runs deeper, and is harder to expunge, than most of us want to admit.
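That point about considering "how likely a result is regardless of its P-value" can be made concrete with a little arithmetic. Here's my own illustrative sketch (not anything from Dingledine's paper) of a standard Bayesian-style calculation: if the hypotheses being tested are mostly unlikely to begin with, a healthy fraction of "statistically significant" results are false positives, even with everything done by the book.

```python
# Illustrative arithmetic: what fraction of P < alpha results are false
# positives, given the prior probability that a tested hypothesis is true?
# (My own sketch; the numbers chosen are assumptions for illustration.)

def false_positive_risk(prior, alpha=0.05, power=0.8):
    """Fraction of 'significant' results that are actually false positives."""
    true_hits = prior * power          # real effects correctly detected
    false_hits = (1 - prior) * alpha   # null effects that cross P < alpha
    return false_hits / (true_hits + false_hits)

# If only 10% of the hypotheses a field tests are true, then more than a
# third of its significant results are wrong -- far worse than the "5%"
# that the P-value intuition suggests.
print(round(false_positive_risk(0.10), 2))  # 0.36
```

The "5% error rate" intuition only holds when the hypothesis was a coin-flip to begin with; the more surprising the claim, the less a bare P-value tells you.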
Now, I'm not suggesting anyone switch from scientific experimentation to Divine Inspiration or whatnot. Nor am I saying that any of the Big Ideas -- the aforementioned plate tectonics, the Newtonian/Einsteinian model of physics, quantum mechanics, molecular genetics, evolution by natural selection -- are wrong in any substantive way. It's more that we can't afford to get cocky. When you get cocky, you miss things, including the effect your preconceived notions have on your outlook.
So all of us could use a dose of humility, not to mention self-awareness. The take-home message here is that we shouldn't accept ideas as true out of hand, and should be especially wary of the ones that agree with what we already thought was true. We're all prone to confirmation bias -- and that includes the smartest among us.
This week's Skeptophilia book recommendation is a charming inquiry into a realm that scares a lot of people -- mathematics. In The Universe and the Teacup, K. C. Cole investigates the beauty and wonder of that most abstract of disciplines, and even for -- especially for -- non-mathematical types, gives a window into a subject that is too often taught as an arbitrary set of rules for manipulating symbols. Cole, in a lyrical and not-too-technical way, demonstrates brilliantly the truth of the words of Galileo -- "Mathematics is the language with which God has written the universe."