Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label Dunning-Kruger effect.

Saturday, October 3, 2020

The illusion of understanding

I've written before about the Dunning-Kruger effect, the cognitive bias that explains why nearly everyone you ask will swear to being an above-average driver.  We all have the sense of being competent -- and as studies of Dunning-Kruger have shown, we generally think we're more competent than we really are.

I just ran into a paper from a long while ago that I'd never seen before, and that seems to put an even finer lens on this whole phenomenon.  It explains, I think, why people settle for simplistic explanations for phenomena -- and promptly cease to question their understanding at all.  So even though this is hardly a new study, it was new to me, and (I hope) will be new to my readers.

Called "The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth," the paper was written by Leonid Rozenblit and Frank Keil of Yale University and appeared in the journal Cognitive Science.  Its results illustrate, I believe, why trying to disabuse people of poor understanding of science can be such an intensely frustrating occupation.

The idea of the paper is a simple one -- to test the degree to which people trust and rely on what the authors call "lay theories:"
Intuitive or lay theories are thought to influence almost every facet of everyday cognition.  People appeal to explanatory relations to guide their inferences in categorization, diagnosis, induction, and many other cognitive tasks, and across such diverse areas as biology, physical mechanics, and psychology.  Individuals will, for example, discount high correlations that do not conform to an intuitive causal model but overemphasize weak correlations that do.  Theories seem to tell us what features to emphasize in learning new concepts as well as highlighting the relevant dimensions of similarity... 
The incompleteness of everyday theories should not surprise most scientists.  We frequently discover that a theory that seems crystal clear and complete in our head suddenly develops gaping holes and inconsistencies when we try to set it down on paper.  
Folk theories, we claim, are even more fragmentary and skeletal, but laypeople, unlike some scientists, usually remain unaware of the incompleteness of their theories.  Laypeople rarely have to offer full explanations for most of the phenomena that they think they understand.  Unlike many teachers, writers, and other professional “explainers,” laypeople rarely have cause to doubt their naïve intuitions.  They believe that they can explain the world they live in fairly well.
Rozenblit and Keil proceeded to test this phenomenon, and they did so in a clever way.  They were able to demonstrate this illusory sense that we know what's going on around us by (for example) asking volunteers to rate their understanding of how common everyday objects work -- things like zippers, piano keys, speedometers, flush toilets, cylinder locks, and helicopters.  They were then (1) asked to write out explanations of how the objects worked; (2) given explanations of how they actually do work; and (3) asked to re-rate their understanding.

Just about everyone ranked their understanding as lower after they saw the correct explanation.

You read that right.  People, across the board, think they understand things better before they actually learn about them.  On one level, that makes sense; all of us are prone to thinking things are simpler than they actually are, and can relate to being surprised at how complicated some common objects turn out to be.  (Ever seen the inside of a wind-up clock, for example?)  But what is amazing about this is how confident we are in our shallow, incomplete knowledge -- until someone sets out to knock that perception askew.

It was such a robust result that Rozenblit and Keil decided to push it a little, and see if they could make the illusion of explanatory depth go away.  They tried it with a less-educated test group (the initial test group had been Yale students).  Nope -- even people with less education still think they understand everything just fine.  They tried it with younger subjects.  Still no change.  They even told the test subjects ahead of time that they were going to be asked to explain how the objects worked -- thinking, perhaps, that people might be ashamed to admit to some smart-guy Yale researchers that they didn't know how their own zippers worked, and were bullshitting to save face.

The drop was less when such explicit instructions were given, but it was still there.  As Rozenblit and Keil write, "Offering an explicit warning about future testing reduced the drop from initial to subsequent ratings.  Importantly, the drop was still significant—the illusion held."

So does the drop in self-rating occur with purely factual knowledge?  They tested this by doing the same protocol, but instead of asking people for explanations of mechanisms, they asked them to do a task that required nothing but pure recall, such as naming the capitals of various countries.  Here, the drop in self-rating still occurred, but it was far smaller than with explanatory or process-based knowledge.  We are, it seems, much more likely to admit we don't know facts than to admit we don't understand processes.

The conclusion that Rozenblit and Keil reach is a troubling one:
Since it is impossible in most cases to fully grasp the causal chains that are responsible for, and exhaustively explain, the world around us, we have to learn to use much sparser representations of causal relations that are good enough to give us the necessary insights: insights that go beyond associative similarity but which at the same time are not overwhelming in terms of cognitive load.  It may therefore be quite adaptive to have the illusion that we know more than we do so that we settle for what is enough.  The illusion might be an essential governor on our drive to search for explanatory underpinnings; it terminates potentially inexhaustible searches for ever-deeper understanding by satiating the drive for more knowledge once some skeletal level of causal comprehension is reached.
Put simply, when we get to "I understand this well enough," we stop thinking.  And for most of us, that point is reached far, far too soon.

And while it really isn't that critical to understand how zippers work as long as it doesn't stop you from zipping up your pants, the illusion of explanatory depth in other areas can come back to bite us pretty hard when we start making decisions on how to vote.  If most of us truly understand far less than we think we do about such issues as the safety of GMOs and vaccines, the processes involved in climate and climate change, the scientific and ethical issues surrounding embryonic stem cells, and even issues like air and water pollution, how can we possibly make informed decisions regarding the regulations governing them?

All the more reason, I think, that we should be putting more time, money, effort, and support into education.  While education doesn't make the illusion of explanatory depth go away, at least the educated are starting from a higher baseline.  We still might overestimate our own understanding, but I'd bet that the understanding itself is higher -- and that's bound to lead us to make better decisions.

I'll end with a quote by author and blogger John Green that I think is particularly apt here:


*******************************

To the layperson, there's something odd about physicists' search for (amongst many other things) a Grand Unified Theory that would unite the four fundamental forces into one elegant model.

Why do they think that there is such a theory?  Strange as it sounds, a lot of them say it's because having one force of the four (gravitation) not accounted for by the model, and requiring its own separate equations to explain, is "messy."  Or "inelegant."  Or -- most tellingly -- "ugly."

So, put simply: why do physicists tend to think that for a theory to be true, it has to be elegant and beautiful?  Couldn't the universe just be chaotic and weird, with different facets of it obeying their own unrelated laws, with no unifying explanation to account for it all?

This is the question physicist Sabine Hossenfelder addresses in her wonderful book Lost in Math: How Beauty Leads Physicists Astray.  She makes a bold claim: that this search for beauty and elegance in mathematical models has diverted theoretical physics into untestable, unverifiable cul-de-sacs, blinding researchers to the reality -- the experimental evidence.

Whatever you think about whether the universe should obey aesthetically pleasing rules, or whether you're okay with weirdness and messiness, Hossenfelder's book will challenge your perception of how science is done.  It's a fascinating, fun, and enlightening read for anyone interested in learning about the arcane reaches of physics.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Tuesday, April 17, 2018

Superior ignorance

I've written before on the topic of the Dunning-Kruger effect, the idea that we all tend to overestimate our own knowledge of a subject (parodied brilliantly by Garrison Keillor in his "News from Lake Wobegon" segment on A Prairie Home Companion -- where "all of the children are above average").


A study released last week in the Journal of Experimental Social Psychology gives us another window into this unfortunate tendency of the human brain.  In the paper "Is Belief Superiority Justified by Superior Knowledge?", by Michael P. Hall and Kaitlin T. Raimi, we find out the rather frustrating corollary to the Dunning-Kruger effect: that the people who believe their opinions are superior actually tend to know less about the topic than the people who have a more modest view of their own correctness.

The authors write:
Individuals expressing belief superiority—the belief that one's views are superior to other viewpoints—perceive themselves as better informed about that topic, but no research has verified whether this perception is justified.  The present research examined whether people expressing belief superiority on four political issues demonstrated superior knowledge or superior knowledge-seeking behavior.  Despite perceiving themselves as more knowledgeable, knowledge assessments revealed that the belief superior exhibited the greatest gaps between their perceived and actual knowledge.  
The problem, of course, is that if you think your beliefs are superior, you're much more likely to go around trying to talk everyone into believing like you do.  If you really are more knowledgeable, that's at least justifiable; but the idea that the less informed you are, the more likely you are to proselytize, is alarming to say the least.

There is at least a somewhat encouraging piece to this study, which indicated that this tendency may be remediable:
When given the opportunity to pursue additional information in that domain, belief-superior individuals frequently favored agreeable over disagreeable information, but also indicated awareness of this bias.  Lastly, experimentally manipulated feedback about one's knowledge had some success in affecting belief superiority and resulting information-seeking behavior.  Specifically, when belief superiority is lowered, people attend to information they may have previously regarded as inferior.  Implications of unjustified belief superiority and biased information pursuit for political discourse are discussed.
So belief-superior people are more likely to fall for confirmation bias (which you'd expect), but if you can somehow punch a hole in the self-congratulation, those people will be more willing to listen to contrary viewpoints.

The problem remains of how to get people to admit that their beliefs are open to challenge.  I'm thinking in particular of Ken Ham, who, in the infamous Ken Ham/Bill Nye debate on evolution and creationism, was asked what, if anything, could change his mind.  Nye had answered that a single piece of incontrovertible evidence would be all it took; Ham, on the other hand, said that nothing, nothing whatsoever, could alter his beliefs.

Which highlights brilliantly the difference between the scientific and religious view of the world.

So the difficulty is that counterfactual viewpoints are often well insulated from challenge, and the people who hold them resistant to considering even the slightest insinuation that they could be wrong.  I wrote last week about Donald Trump's unwillingness to admit he's wrong about anything, ever, even when presented with unarguable facts and data.  If that doesn't encapsulate the Dunning-Kruger attitude, and the Hall-Raimi corollary to it, I don't know what does.

Doesn't mean we shouldn't try, of course.  After all, if I thought it was hopeless, I wouldn't be here on Skeptophilia six days a week.  The interesting part of the study by Hall and Raimi, however, is the suggestion that we might be going about it all wrong.  The way to fix wrong-headed thinking may not be to present the person with evidence, but to get someone to see that they could, in fact, be wrong in a more global sense.  This could open them up to considering other viewpoints, and ultimately, looking at the facts in a more skeptical, open-minded manner.

On the other hand, I still don't think there's much we can do about Ken Ham and Donald Trump.

*********************
This week's Featured Book on Skeptophilia:

This week I'm featuring a classic: Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark.  Sagan, famous for his work on the series Cosmos, here addresses the topics of pseudoscience, skepticism, credulity, and why it matters -- even to laypeople.  Lucid, sometimes funny, always fascinating.




Friday, January 19, 2018

Climbing Mount Stupid

So the long-awaited "Fake News Awards," intended to highlight the "most DISHONEST and CORRUPT members of the media," were announced yesterday.

Or at least, Donald Trump attempted to announce them.  Less than a minute after the announcement was made, the site crashed, and last I checked, it hadn't been fixed.  But a screen capture done before the site went down lets us know who the winners were.  They seem to fall into two categories:
  1. Simple factual misreporting, 100% of which were corrected by the news agency at fault after more accurate information was brought forth.
  2. Anyone who dared to criticize Donald Trump.
Unsurprisingly, this included CNN, The Washington Post, and The New York Times.  The tweetstorm from Trump hee-hawing about how he'd really shown the press a thing or two by calling them all mean nasty poopyhead fakers ended with his mantra "THERE IS NO COLLUSION," which is more than ever seeming like "Pay no attention to the man behind the curtain."

So far, this is unremarkable, given that accusing everyone who disagrees with him of lying, while simultaneously claiming that he is always right, has been part of Trump's playbook ever since he jumped into politics.  But just last week a study, authored by S. Mo Jang and Joon K. Kim of the University of South Carolina School of Journalism and Mass Communications, brought the whole "fake news" thing into sharper focus.  Because their research has shown that people are perfectly willing to accept that fake, corrupt news media exist...

... but that people of the other political party are the only ones who are falling for it.

The study, which appeared in Computers in Human Behavior, was titled, "Third Person Effects of Fake News: Fake News Regulation and Media Literacy Interventions."  The authors write:
Although the actual effect of fake news online on voters’ decisions is still unknown, concerns over the perceived effect of fake news online have prevailed in the US and other countries.  Based on an analysis of survey responses from national samples (n = 1299) in the US, we found a strong tendency of the third-person perception.  That is, individuals believed that fake news would have greater effects on out-group members than themselves or in-group members.  Additionally, we proposed a theoretical path model, identifying the antecedents and consequences of the third-person perception.  The results showed that partisan identity, social undesirability of content, and external political efficacy were positive predictors of the third-person perception.  Interestingly, our findings revealed that third-person perception led to different ways of combating fake news online.  Those with a greater level of third-person perception were more likely to support the media literacy approach but less likely to support the media regulation approach.
Put more simply, people tended to think they were immune to the effects of fake news themselves -- i.e., they "saw through it."  The other folks, though, were clearly being fooled.

Probably the only reasonable explanation of why everyone doesn't agree with me, right?

Of course right.

It's just the Dunning-Kruger effect again, isn't it?  Everyone thinks they're smarter than average.


All this amounts to is another way we insulate ourselves from even considering the possibility that we might be wrong.  Sure, there are wrong people out there, but it can't be us.

Or as a friend of mine put it, "The first rule of Dunning-Kruger Club is that you don't know you belong to Dunning-Kruger Club."

Jang and Kim focused on American test subjects, but it'd be interesting to see how much this carries over across cultures.  As I've observed before, a lot of the American cultural identity revolves around how much better we are than everyone else.  This attitude of American exceptionalism -- the "'Murika, Fuck Yeah!" approach -- not only stops us from considering other possible answers to the problems we face, but prevents any challenge to the path we are taking.

It'd be nice to think that studies like this would pull people up short and make them reconsider, but I'm guessing it won't.  We have far too much invested in our worldviews to examine them closely because of a couple of ivory-tower scientists.

And anyway, even if they are right, and people are getting suckered by claims of fake news when it fits their preconceived notions to accept them, they can't mean me, right?  I'm too smart to get fooled by that.

I'm significantly above average, in fact.

Monday, July 25, 2016

Fooling the experts

Today we consider what happens when you blend Appeal to Authority with the Dunning-Kruger Effect.

Appeal to Authority, you probably know, is when someone uses credentials, titles, or educational background -- and no other evidence -- to support a claim.  Put simply, it is the idea that if Stephen Hawking said it, it must be true, regardless of whether the claim has anything to do with Hawking's particular area of expertise.  The Dunning-Kruger Effect, on the other hand, is the idea that people tend to wildly overestimate their abilities, even in the face of evidence to the contrary, which is why we all think we're above average drivers.

Well, David Dunning (of the aforementioned Dunning-Kruger Effect) has teamed up with Cornell University researchers Stav Atir and Emily Rosenzweig, and come up with the love child of Dunning-Kruger and Appeal to Authority.  And what this new phenomenon -- dubbed, predictably, the Atir-Rosenzweig-Dunning Effect -- shows us is that people who are experts in a particular field tend to think that expertise holds true even for disciplines far outside their chosen area of study.

[image courtesy of the Wikimedia Commons]

In one experiment, the three researchers asked people to rate their own knowledge in various academic areas, then asked them to rank their level of understanding of various finance-related terms, such as "pre-rated stocks, fixed-rate deduction and annualized credit."  The problem is, those three finance-related terms actually don't exist -- i.e., they were made up by the researchers to sound plausible.

The test subjects who had the highest confidence level in their own fields were most likely to get suckered.  Simon Oxenham, who described the experiments in Big Think, says it's only natural.  "A possible explanation for this finding," Oxenham writes, "is that the participants with a greater vocabulary in a particular domain were more prone to falsely feeling familiar with nonsense terms in that domain because of the fact that they had simply come across more similar-sounding terms in their lives, providing more material for potential confusion."

Interestingly, subsequent experiments showed that the correlation holds true even if you take away the factor of self-ranking.  Presumably, someone who is cocky and arrogant and ranks his/her ability higher than is justified in one area would be likely to do it in others.  But when they tested the subjects' knowledge of terms from their own field -- i.e., actually measured their expertise -- high scores still correlated with overestimating their knowledge in other areas.

And telling the subjects ahead of time that some of the terms might be made up didn't change the results.  "[E]ven when participants were warned that some of the statements were false, the 'experts' were just as likely as before to claim to know the nonsense statements, while most of the other participants became more likely in this scenario to admit they’d never heard of them," Oxenham writes.

I have a bit of anecdotal evidence supporting this result from my experience in the classroom.  On multiple-choice tests, I have to concoct plausible-sounding wrong answers as distractors.  Every once in a while, I run out of good wrong answers, and just make something up.  (On one AP Biology quiz on plant biochemistry, I threw in the term "photoglycolysis," which sounds pretty fancy until you realize that it doesn't exist.)  What I find is that it's the average to upper-average students who are the most likely to be taken in by the ruse.  The top students don't get fooled because they know what the correct answer is; the lowest students are equally likely to pick any of the wrong answers, because they don't understand the material well.  The mid-range students see something that sounds technical and vaguely familiar -- and figure that if they aren't sure, it must be that they missed learning that particular term.

It's also the mid-range students who are most likely to miss questions where the actual answer seems too simple.  Another botanical question I like to throw at them is "What do all non-vascular land plants have in common?"  There are three wrong answers with appropriately technical-sounding jargon.

The actual answer is, "They're small."

Interestingly, the reason non-vascular land plants are small isn't simple at all.  But the answer itself just looks too easy to merit being the correct choice on an AP Biology quiz.

So Atir, Rosenzweig, and Dunning have given us yet another mental pitfall to watch out for -- our tendency to use our knowledge in one field to overestimate our knowledge in others.  But I really should run along, and make sure that the annualized credit on my pre-rated stocks exceeds the recommended fixed-rate deduction.  I'm sure you can appreciate how important that is.

Wednesday, March 30, 2016

The cult of ignorance -- and "do-it-yourself braces"

In his wonderful essay "The Cult of Ignorance," Isaac Asimov wrote something that still resonates, 36 years after it appeared in Newsweek: "There is a cult of ignorance in the United States, and there always has been.  The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"

This tendency on the part of Americans to assume that democracy means that all ideas are equal, and everyone's utterances equally valid, often drives us to do ridiculous things.  We discount the conclusions of scientists, preferring instead the evidence-free declarations of politicians, actors, and athletes.  The criticism "they don't understand the common, working-class people" is frequently lobbed in the direction of the intelligent.   This sense that the well-educated are either impractical or else actively evil leads us to elect the unqualified because they "seem like regular folks," even though you'd think we'd all want our leaders to be selected from the best and smartest we have.  But somehow there's this sense that being smart, well-educated, and thoroughly trained gives the experts an ivory-tower insulation from the rest of us slobs, and probably leaves them immoral as well.

What it actually accomplishes, though, is to give your average guy the idea that he knows a great deal more than he really does.  Called the Dunning-Kruger effect, it makes the unskilled overestimate their knowledge and underestimate their ineptitude.  And just yesterday, I ran into a new phenomenon that illustrates that amazingly well: do-it-yourself orthodonture.

I'm not making this up.  Put off by the high costs of orthodontic treatment, the inconvenience of having braces (sometimes for years), and the mystery of how a few brackets and wires could straighten out crooked teeth, people have said, "Hey, I could do that."  Now that 3-D printing is easy and cheap, people print themselves out resin brackets, affix them to their teeth with glue, and start yanking.  Lured by testimonials that such a course of treatment could reduce the cost to under $100 and shorten the duration of brace-wearing from three years to as little as sixteen weeks, people have embraced the idea, and it has been spreading like wildfire.

[image courtesy of photographer Jason Regan and the Wikimedia Commons]

But folks -- orthodontists have to go through extensive training for a reason.  It's not enough to peer in a friend's mouth and try to replicate the friend's orthodontic hardware on your own teeth.  One of the DIY-ers, a fellow named Amos Dudley, has posted pictures of his changed smile on the internet, an alteration that took only four months.  But orthodontist Stephen Belli says that looks can deceive:
I’d like to see an X-ray, because he’s probably caused some irreparable harm.  He moved these teeth in only 16 weeks.  You can cause a lot of problems with that.  If you move a tooth too fast, you can actually cause damage to the bone and gums.  And if you don’t put the tooth in the right position, you could throw off your bite.
And why did Dudley take on his own orthodontic work?  "Because," he said, "I wanted to stick it to the dental appliance industry."

A stance that apparently gave him the impression that he knew enough to start shoving around his own teeth.

I find this attitude impossible to understand.  I consider myself reasonably intelligent, but I know I'm not smart enough to be responsible for my own medical care.  Similarly, I know I don't have the training to understand the latest scientific findings in most fields, nor to come up with solutions to the nation's economic and foreign-policy problems.  This is why we have experts.

Which is why it really pisses me off when someone comes up with the latest pseudo-clever meme about how to fix everything, such as this one:


When I first saw this one -- and it's been posted far and wide -- my first thought was, "Can't you do simple arithmetic?"  If you don't see what I mean, try this: let's raise the salaries of the soldiers and seniors to, say, $50,000 a year.  Accepting the numbers they've quoted here, this would require an extra $12,000 for each soldier and an extra $38,000 for each senior.  Multiply each of these by the number of active-duty soldiers (1,388,000) and seniors on Social Security (59,000,000) in the United States, respectively, to see how much money we'd need.

Then, using a similar calculation, figure out how much we'd save yearly by cutting the wages of retired presidents (of which there are currently four -- Carter, Bush Sr., Clinton, and Bush Jr.  You can even be generous and throw in Obama if you like), the House and Senate members (535 of them), the Speaker of the House (1), and the Majority and Minority leaders of each house of Congress (4), down to $50,000 each.  See how much you save.
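If you want to run the numbers yourself, here's a minimal back-of-the-envelope sketch in Python.  The soldier and senior counts and base pay come from the meme itself; the officials' salary and pension figures are rough 2016-era approximations I'm assuming purely for illustration, so treat the output as an order-of-magnitude estimate, not an exact accounting.

```python
# Back-of-the-envelope check of the meme's proposal.
# Counts and base pay are taken from the meme; the officials'
# salary/pension figures below are rough assumptions for illustration.

soldiers = 1_388_000          # active-duty soldiers (from the meme)
seniors = 59_000_000          # seniors on Social Security (from the meme)

extra_per_soldier = 50_000 - 38_000   # raise each soldier to $50k
extra_per_senior = 50_000 - 12_000    # raise each senior to $50k

extra_cost = soldiers * extra_per_soldier + seniors * extra_per_senior

# Savings from cutting the listed officials to $50k each
# (count, assumed current salary or pension)
officials = [
    (4,   205_700),   # retired presidents' pensions (approximate)
    (535, 174_000),   # House and Senate members (approximate)
    (1,   223_500),   # Speaker of the House (approximate)
    (4,   193_400),   # majority and minority leaders (approximate)
]
savings = sum(count * (salary - 50_000) for count, salary in officials)

print(f"Extra cost per year: ${extra_cost:,.0f}")   # roughly $2.26 trillion
print(f"Savings per year:    ${savings:,.0f}")      # roughly $68 million
print(f"Shortfall:           ${extra_cost - savings:,.0f}")
```

Even with generous assumptions about what those officials earn, the savings cover something on the order of three-thousandths of one percent of the added cost.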

Falls a little short, doesn't it?

I mean, for cryin' in the sink, people.  If it was that easy, don't you think someone would have thought of this by now?

Which may seem a long way from do-it-yourself braces, but it's all the same thing, really; the attitude that a completely untrained individual is just as good as an expert at solving complex problems.  The cult of ignorance is still thriving here in the United States.  But despite our desire to think that everyone's ideas are on equal footing, ignorance will never be as good as knowledge.

Saturday, July 11, 2015

The illusion of understanding

I've written before about the Dunning-Kruger effect, the cognitive bias that explains why nearly everyone you ask will swear to being an above-average driver.  We all have the sense of being competent -- and as studies of Dunning-Kruger have shown, we generally think we're more competent than we really are.

I just ran into a paper from about thirteen years ago that I'd never seen before, and that seems to put an even finer lens on this whole phenomenon.  It explains, I think, why people settle for simplistic explanations for phenomena -- and promptly cease to question their understanding at all.  So even though this is hardly a new study, it was new to me, and (I hope) will be new to my readers.

Called "The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth," the paper was written by Leonid Rozenblit and Frank Keil of Yale University and appeared in the journal Cognitive Science.  Its results illustrate, I believe, why trying to disabuse people of poor understanding of science can be such an intensely frustrating occupation.

The idea of the paper is a simple one -- to test the degree to which people trust and rely on what the authors call "lay theories:"
Intuitive or lay theories are thought to influence almost every facet of everyday cognition.  People appeal to explanatory relations to guide their inferences in categorization, diagnosis, induction, and many other cognitive tasks, and across such diverse areas as biology, physical mechanics, and psychology.  Individuals will, for example, discount high correlations that do not conform to an intuitive causal model but overemphasize weak correlations that do.  Theories seem to tell us what features to emphasize in learning new concepts as well as highlighting the relevant dimensions of similarity...   
The incompleteness of everyday theories should not surprise most scientists.  We frequently discover that a theory that seems crystal clear and complete in our head suddenly develops gaping holes and inconsistencies when we try to set it down on paper. 
Folk theories, we claim, are even more fragmentary and skeletal, but laypeople, unlike some scientists, usually remain unaware of the incompleteness of their theories.  Laypeople rarely have to offer full explanations for most of the phenomena that they think they understand.  Unlike many teachers, writers, and other professional “explainers,” laypeople rarely have cause to doubt their naïve intuitions.  They believe that they can explain the world they live in fairly well.
Rozenblit and Keil proceeded to test this phenomenon, and they did so in a clever way.  They were able to demonstrate this illusory sense that we know what's going on around us by (for example) asking volunteers to rate their understanding of how common everyday objects work -- things like zippers, piano keys, speedometers, flush toilets, cylinder locks, and helicopters.  They were then (1) asked to write out explanations of how the objects worked; (2) given explanations of how they actually do work; and (3) asked to re-rate their understanding.

Just about everyone ranked their understanding as lower after they saw the correct explanation.

You read that right.  People, across the board, think they understand things better before they actually learn about them.  On one level, that makes sense; all of us are prone to thinking things are simpler than they actually are, and can relate to being surprised at how complicated some common objects turn out to be.  (Ever seen the inside of a wind-up clock, for example?)  But what is amazing about this is how confident we are in our shallow, incomplete knowledge -- until someone sets out to knock that perception askew.

It was such a robust result that Rozenblit and Keil decided to push it a little, and see if they could make the illusion of explanatory depth go away.  They tried it with a less-educated test group (the initial test group had been Yale students).  Nope -- even people with less education still think they understand everything just fine.  They tried it with younger subjects.  Still no change.  They even told the test subjects ahead of time that they were going to be asked to explain how the objects worked -- thinking, perhaps, that people might be ashamed to admit to some smart-guy Yale researchers that they didn't know how their own zippers worked, and were bullshitting to save face.

The drop was less when such explicit instructions were given, but it was still there.  As Rozenblit and Keil write, "Offering an explicit warning about future testing reduced the drop from initial to subsequent ratings. Importantly, the drop was still significant—the illusion held."

So does the drop in self-rating occur with purely factual knowledge?  They tested this by doing the same protocol, but instead of asking people for explanations of mechanisms, they asked them to do a task that required nothing but pure recall -- such as naming the capitals of various countries.  Here, the drop in self-rating still occurred, but it was far smaller than with explanatory or process-based knowledge.  We are, it seems, much more likely to admit we don't know facts than to admit we don't understand processes.

The conclusion that Rozenblit and Keil reach is a troubling one:
Since it is impossible in most cases to fully grasp the causal chains that are responsible for, and exhaustively explain, the world around us, we have to learn to use much sparser representations of causal relations that are good enough to give us the necessary insights: insights that go beyond associative similarity but which at the same time are not overwhelming in terms of cognitive load.  It may therefore be quite adaptive to have the illusion that we know more than we do so that we settle for what is enough.  The illusion might be an essential governor on our drive to search for explanatory underpinnings; it terminates potentially inexhaustible searches for ever-deeper understanding by satiating the drive for more knowledge once some skeletal level of causal comprehension is reached.
Put simply, when we get to "I understand this well enough," we stop thinking.  And for most of us, that point is reached far, far too soon.

And while it really isn't that critical to understand how zippers work as long as it doesn't stop you from zipping up your pants, the illusion of explanatory depth in other areas can come back to bite us pretty hard when we start making decisions on how to vote.  If most of us truly understand far less than we think we do about such issues as the safety of GMOs and vaccines, the processes involved in climate and climate change, the scientific and ethical issues surrounding embryonic stem cells, and even issues like air and water pollution, how can we possibly make informed decisions regarding the regulations governing them?

All the more reason, I think, that we should be putting more time, money, effort, and support into education.  While education doesn't make the illusion of explanatory depth go away, at least the educated are starting from a higher baseline.  We still might overestimate our own understanding, but I'd still bet that the understanding itself is higher -- and that's bound to lead us to make better decisions.

I'll end with a quote by John Green that I think is particularly apt here:


Monday, March 16, 2015

Science-friendly illogic

I usually don't blog about what other people put in their blogs.  This kind of thing can rapidly devolve into a bunch of shouted opinions, rather than a reasoned set of arguments that are actually based upon evidence.

But just yesterday I ran into a blog that (1) cited real research, and (2) drew conclusions from that research that were so off the rails that I had to comment.  I'm referring to the piece over at Religion News Service by Cathy Lynn Grossman entitled, "God Knows, Evangelicals Are More Science-Friendly Than You Think."  Grossman was part of a panel at the American Association for the Advancement of Science's yearly Dialogue on Science, Ethics, and Religion, and commented upon research presented at that event by Elaine Howard Ecklund, sociologist at Rice University.

Ecklund's research concerned evangelicals' attitudes toward science.  She described the following data from her study:
  • 48% of the evangelicals in her study viewed science and religion as complementary.
  • 21% saw the two worldviews as entirely independent of one another (which I am interpreting to be a version of Stephen Jay Gould's "non-overlapping magisteria" idea).
  • A little over 30% saw the two views as in opposition to each other.
84% of evangelicals, Grossman said, "say modern science is going good [sic] in the world."  And she interprets this as meaning that evangelicals are actually, contrary to appearances, "science friendly."  Grossman writes:
Now, the myth that bites the data dust, is one that proclaims evangelicals are a monolithic group of young-earth creationists opposed to theories of human evolution... 
(M)edia... sometimes incorrectly conflate the conservative evangelical view with all Christians’ views under the general “religion” terminology. 
I said this may allow a small subset to dictate the terms of the national science-and-religion conversation although they are not representative in numbers -– or point of view. This could lead to a great deal of energy devoted to winning the approval of the shrinking group and aging group that believes the Bible trumps science on critical issues.
Well, here's the problem with all of this.

This seems to me to be the inherent bias that makes everyone think they're an above-average driver.  Called the Dunning-Kruger effect, it is described by psychologist David Dunning, whose team first identified the phenomenon, as follows:
Incompetent people do not recognize—scratch that, cannot recognize—just how incompetent they are...  What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge. 
An ignorant mind is precisely not a spotless, empty vessel, but one that’s filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge.
Now, allow me to say right away that I'm not calling evangelicals incompetent and/or ignorant as a group.  I have a friend who is a diehard evangelical, and he's one of the best-read, most thoughtful (in both senses of the word) people I know.  But what I am pointing out is that people are poor judges of their own understanding and attitudes -- and on that level, Dunning's second paragraph is referring to all of us.

So Ecklund's data, and Grossman's conclusions from it, are not so much wrong as they are irrelevant. It doesn't matter if evangelicals think they're supportive of science, just like my opinion of my own driving ability isn't necessarily reflective of reality.  I'm much more likely to take the evangelicals' wholesale rejection of evolution and climate science as an indication of their lack of support and/or understanding of science than I would their opinions regarding their own attitudes toward it.

And, of course, there's that troubling 30% of evangelicals who do see religion and science as opposed, a group that Grossman glides right past.  She does, however, admit that scientists would probably find it "troubling" that 60% of evangelicals say that "scientists should be open to considering miracles in their theories."

Troubling doesn't begin to describe it, lady.


That doesn't stop Grossman from painting the Religious Right as one big happy science-loving family, and she can't resist ending by giving us secular rationalists a little cautionary kick in the ass:
[S]cientists who want to write off evangelical views as inconsequential may not want to celebrate those trends [that young people are leaving the church in record numbers]. The trend to emphasize personal experience and individualized spirituality over the authority of Scripture or religious denominational theology is part of a larger cultural trend toward rejecting authority. 
The next group to fall victim to that trend could well be the voices of science.
Which may be the most obvious evidence of all that Grossman herself doesn't understand science.  Science doesn't proceed by authority; it proceeds by hard evidence.  Stephen Hawking, one of the most widely respected authorities in physics, famously reversed his position on information loss in black holes -- conceding a long-standing bet with fellow physicist John Preskill -- once the theoretical arguments convinced him he had been wrong.  Significantly, no one -- including Hawking himself -- said, "you have to listen to me, I'm an authority."

If anything, the trend of rejection of authority and "personal experience" works entirely in science's favor.  The less personal bias a scientist has, the less dependence on the word of authority, the more (s)he can think critically about how the world works.

So all in all, I'd like to thank Grossman and Ecklund for the good news, even if it arrived in odd packaging.  Given my own set of biases, I'm not likely to see the data they so lauded in anything but an optimistic light.

Just like I do my own ability to drive.  Because whatever else you might say about me, I have mad driving skills.

Thursday, May 2, 2013

Foxes, hedgehogs, and extreme politics

As if we needed anything to make us less confident about what goes on inside our skulls, an article in e! Science News appeared on Monday, entitled, "Extreme Political Attitudes May Stem From an Illusion of Understanding."

The study's principal author, Philip Fernbach of the University of Colorado, explained that the study came out of an observation that people who loudly expressed views on politics often seemed not to have much in the way of factual knowledge about the topic upon which they were expounding.

"We wanted to know how it's possible that people can maintain such strong positions on issues that are so complex -- such as macroeconomics, health care, foreign relations -- and yet seem to be so ill-informed about those issues,"  Fernbach said.

What the study did was ask a group of test subjects to rate how well they understood six different political issues, including instituting merit pay for teachers, raising the Social Security retirement age, and enacting a flat tax.  The subjects were then asked to explain two of the policies, including their own position and why they held it, and were questioned by the researchers on their understanding of the facts of each policy.  Afterwards, they were asked to re-rate their level of comprehension.

Across the board, self-assessment scores went down on the subjects they were asked to explain.  More importantly, their positions shifted -- there was a distinct movement toward the center that occurred regardless of the political affiliation of the participant.  Further, the worse the person's explanation had been -- i.e., the more their ignorance of the facts had been uncovered -- the further toward the center they shifted.

This seems to be further evidence for the Dunning-Kruger effect -- a bias in which people nearly always tend to overestimate their own knowledge and skill.  (It also brings to mind Dave Barry's comment, "Everyone thinks they're an above-average driver.")

I'm also reminded of Philip Tetlock's brilliant work Expert Political Judgment, which is summarized here but which anyone who is a student of politics or sociology should read in its entirety.  In the research for his book, he analyzed the political pronouncements of hundreds of individuals, comparing the predictions of experts in a variety of fields to the actual outcomes in the real world, and used this information to draw some fascinating conclusions about human social behavior.  The relevant part of his argument, for our purposes here, is that humans exhibit two basic "cognitive styles," which he calls "the fox and the hedgehog" (a metaphor that goes back to the ancient Greek poet Archilochus).

Foxes, Tetlock says, tend to be able to see multiple viewpoints, and have a high tolerance for ambiguity (in the interest of conciseness, quotes are taken from the summary, not from the original book):
Experts who think in the 'Fox' cognitive style are suspicious of a commitment to any one way of seeing the issue, and prefer a loose insight that is nonetheless calibrated from many different perspectives.  They use quantification of uncertain events more as calibration, as a metaphor, than as a prediction.  They are tolerant of dissonance within a model - for example, that an 'enemy' regime might have redeeming qualities - and relatively ready to recalibrate their view when unexpected events cast doubt on what they had previously believed to be true.
Hedgehogs, on the other hand, like certainty, closure, and definite answers:
In contrast to this, Hedgehogs work hard to exclude dissonance from their models. They prefer to treat events which contradict their expectations as exceptions, and to re-interpret events in such a way as to allocate exceptions to external events. For example, positive aspects of an enemy regime may be assigned to propaganda, either on the part of the regime or through its sympathizers...  Hedgehogs tend to flourish and excel in environments in which uncertainty and ambiguity have been excluded, either by actual or artificial means. The mantra of "targets and accountability" was made by and for Hedgehogs.
The differences, Tetlock said, are irrespective of political leaning; there are conservative and liberal foxes, and conservative and liberal hedgehogs.  But, most importantly, the foxes' tolerance of many viewpoints, and awareness of their own ignorance, gives them the appearance of knowing less than they actually do, and lessens their influence on policy and society; and the hedgehogs' certainty, and clear, concise answers to complex problems, gives them the appearance of knowing more than they actually do, and increases their influence.

Hedgehogs, Tetlock found, were more often wrong in their assessment of political situations, but their views achieved wide impact.  Foxes were more often right -- but no one listened.

So, anyway, I read all of this with a vague sense of unease.  Having a blog, after all, implies some level of arrogance -- that you believe your views to be important, intelligent, and interesting enough that people, many of them total strangers, will want to read what you have to say.  Given Fernbach's study, not to mention the Dunning-Kruger effect and the conclusions of Tetlock's research, it does leave me with a bit of a chill.  Would my views on topics become less extreme if I were forced to reconsider the facts of the situation?  Do I really think I'm more knowledgeable than I actually am?  Worst of all (for a blogger), am I a simplistic thinker who is often wrong but whose views have wide social impact, or a complex thinker no one pays attention to?

Oy.  I'm not sure I, um, want to reevaluate all this.  I think I'll just go have breakfast.  That sounds like a definitive solution to the problem, right?

Of course right.