Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, January 28, 2023

The roots of conspiracy

It's all too easy to dismiss conspiracy theorists as just being dumb, and heaven knows I've fallen into that often enough myself.

Part of the problem is that if you know any science, so many conspiracy theories just seem... idiotic.  That 5G cell towers cause COVID.  That eating food heated up in a microwave causes cancer.  As we just saw last week, that Satan's throne is located in Geneva and that's why the physicists at CERN are up to no good.

And sure, there's a measure of ignorance implicit in most conspiracy theories.  To believe that Buffalo Bills player Damar Hamlin's on-field collapse was caused by the COVID vaccine -- as both Charlie Kirk and Tucker Carlson stated -- you have to be profoundly ignorant about how vaccines work.  (This claim led to a rash of people on Twitter who demanded that anything with mRNA in it be officially banned, apparently without realizing that mRNA is in every living cell and is a vital part of your protein-production machinery.  And, therefore, it is not only everywhere in your body, it's present in every meat or vegetable you've ever consumed.)

But simple ignorance by itself doesn't explain it.  After all, we're all ignorant about a lot of stuff; you can't be an expert in everything.  I, for example, know fuck-all about business and economics, which is why it's a subject I never touch here at Skeptophilia (or anywhere else, for that matter).  I'm fully aware of my own lack of knowledge on the topic, and therefore anything I could say about it would have no relevance whatsoever.

Scientists have been trying for years to figure out why some people fall for conspiracies and others don't.  One theory which at least partially explains it is that conspiracy theorists tend to score higher than average on the "dark triad" of personality traits -- narcissism, Machiavellianism, and psychopathy -- but that isn't the whole answer, because there are plenty of people who score high on those assessments who don't espouse crazy ideas.

But now a psychologist at the University of Regina, Gordon Pennycook, thinks he has the right answer.

The defining characteristic of a conspiracy theorist isn't ignorance, narcissism, or sociopathy; it's overconfidence.

Pennycook designed a clever test to suss out people's confidence levels when given little to nothing to go on.  He showed volunteers photographs that were blurred beyond recognition, and asked them to identify what the subject of the photo was.  ("I don't know" wasn't an option; they had to choose.)  Then, afterward, they were asked to estimate the percentage of their guesses they thought they'd gotten right.

That self-assessment correlated beautifully with belief in conspiracy theories.

"Sometimes you're right to be confident," Pennycook said.  "In this case, there was no reason for people to be confident...  This is something that's kind of fundamental.  If you have an actual, underlying, generalized overconfidence, that will impact the way you evaluate things in the world."

The danger, apparently, is not in simple ignorance, but in ignorance coupled with "of course I understand this."  It reminds me of the wonderful study done by Leonid Rozenblit and Frank Keil about a phenomenon called the illusion of explanatory depth -- that many of us have the impression we understand stuff when we actually have no idea.  (Rozenblit and Keil's examples were common things like the mechanisms of a cylinder lock and a flush toilet, how helicopters fly and maneuver, and how a zipper works.)  Most of us could probably venture a guess about those things, but would add, "... I think" or "... but I could be wrong." 

The people predisposed to belief in conspiracy theories, Pennycook says, are the ones who would never think of adding the disclaimer.

That kind of overconfidence, often crossing the line into actual arrogance, seems to be awfully common.  I was just chatting a couple of weeks ago with my athletic trainer about that -- he told me that all too often he runs into people who walk into his gym and proceed to tell him, "Here's what I think I should be doing."  I find that attitude baffling, and so does he.  I said to him, "Dude, I'm hiring you because you are the expert.  Why the hell would I pay you money if I already knew exactly how to get the results I want?"

He said, "No idea.  But you'd be surprised at how often people come in with that attitude."  He shook his head.  "They never last long here."

The open question, of course, is how you inculcate in people a realistic self-assessment of what they do know, and an awareness that there's lots of stuff about which they might not be right.  In other words, a sense of intellectual humility.  To some extent, I think the answer is in somehow getting them to do some actual research (i.e. not just a quick Google search to find Some Guy's Website that confirms what they already believed).  For example, reading scientific papers, finding out what the actual experts have discovered.  Failing that -- and admittedly, a lot of scientific papers are tough going for non-specialists -- at least reading a damn Wikipedia page on the topic.  Yeah, Wikipedia isn't perfect, but the quality has improved dramatically since it was founded in 2001; if you want a quick overview of (for example) the Big Bang theory, then just read the first few paragraphs of the Wikipedia page on the topic, wherein you will very quickly find that it does not mean what the creationists are so fond of saying, that "nothing exploded and made everything."

Speaking of being overconfident on a topic about which they clearly know next to nothing.

In any case, I'll just exhort my readers -- and I'm reminding myself of this as well -- always to keep in mind the phrase "I could be wrong."  And yes, that applies even to your most dearly held beliefs.  It doesn't mean actively doubting everything; I'm not trying to turn you into wishy-washy wafflers or, worse, outright cynics.  But periodically holding our own beliefs up to the cold light of evidence is never a bad thing.

As prominent skeptic (and professional stage magician) Penn Jillette so trenchantly put it: "Don't believe everything you think."

****************************************


Monday, June 13, 2022

The google trap

The eminent physicist Stephen Hawking said, "The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge."

Somewhat more prosaically, my dad once said, "Ignorance can be cured.  We're all ignorant about some things.  Stupid, on the other hand, goes all the way to the bone."

Both of these sayings capture an unsettling idea: that often it's more dangerous to think you understand something than it is to admit you don't.  This idea was illustrated -- albeit using an innocuous example -- in a 2002 paper called "The Illusion of Explanatory Depth" by Leonid Rozenblit and Frank Keil, of Yale University.  What they did was ask people to rate their level of understanding of a simple, everyday object (for example, how a zipper works) on a scale of zero to ten.  Then they asked each participant to write down an explanation of how zippers work in as much detail as they could.  Afterward, they asked the volunteers to re-rate their level of understanding.

Across the board, people rated themselves lower the second time, after a single question -- "Okay, then explain it to me" -- shone a spotlight on how little they actually knew.
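
(The measurement itself is about as simple as it gets.  Here's a minimal Python sketch, using ratings I made up for illustration -- the real study had more objects, more participants, and proper statistics -- showing the before-and-after comparison.)

# Toy illustration of the illusion-of-explanatory-depth measurement:
# self-rated understanding (0-10) before and after trying to write out
# an explanation.  The ratings below are invented for illustration.
before = [7, 8, 6, 9, 7, 8, 5, 7]   # "How well do you understand zippers?"
after = [4, 5, 3, 6, 4, 5, 3, 4]    # same question, after the explanation attempt

drops = [b - a for b, a in zip(before, after)]

print(f"mean rating before: {sum(before) / len(before):.1f}")
print(f"mean rating after:  {sum(after) / len(after):.1f}")
print(f"mean drop:          {sum(drops) / len(drops):.1f}")
# A consistently positive drop is the signature Rozenblit and Keil found.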

The problem is, unless you're in school, usually no one asks the question.  You can claim you understand something, you can even have a firmly-held opinion about it, and there's no guarantee that your stance is even within hailing distance of reality.

And very rarely does anyone challenge you to explain yourself in detail.


If that's not bad enough, a recent paper by Adrian Ward (of the University of Texas at Austin) showed that not only do we understand way less than we think we do, we also fold what we learn from other sources into our own experiential knowledge, regardless of where that information came from.  Worse still, the incorporation is so rapid and smooth that afterward, we aren't even aware of where our information (right or wrong) comes from.

Ward writes:

People frequently search the internet for information.  Eight experiments provide evidence that when people “Google” for online information, they fail to accurately distinguish between knowledge stored internally—in their own memories—and knowledge stored externally—on the internet.  Relative to those using only their own knowledge, people who use Google to answer general knowledge questions are not only more confident in their ability to access external information; they are also more confident in their own ability to think and remember.  Moreover, those who use Google predict that they will know more in the future without the help of the internet, an erroneous belief that both indicates misattribution of prior knowledge and highlights a practically important consequence of this misattribution: overconfidence when the internet is no longer available.  Although humans have long relied on external knowledge, the misattribution of online knowledge to the self may be facilitated by the swift and seamless interface between internal thought and external information that characterizes online search.  Online search is often faster than internal memory search, preventing people from fully recognizing the limitations of their own knowledge.  The internet delivers information seamlessly, dovetailing with internal cognitive processes and offering minimal physical cues that might draw attention to its contributions.  As a result, people may lose sight of where their own knowledge ends and where the internet’s knowledge begins.  Thinking with Google may cause people to mistake the internet’s knowledge for their own.

I recall vividly trying, with minimal success, to fight this in the classroom.  Presented with a question, many students don't stop to try to work it out themselves; they immediately jump to looking it up on their phones.  (That's one of many reasons I had a rule against having phones out during class -- another exercise in frustration, given how clever teenagers are at hiding what they're doing.)  I tried to make the point over and over that there's a huge difference between looking up a fact (such as the average number of cells in the human body) and looking up an explanation (such as how RNA works).  I use Google and/or Wikipedia for the former all the time.  The latter, on the other hand, makes it all too easy simply to copy down what you find online, allowing you to fill in the blank with an answer irrespective of whether you have the least idea what any of it means.

Even Albert Einstein, pre-internet though he was, saw the difference, and the potential problem therein.  Once asked how many feet were in a mile, the great physicist replied, "I don't know.  Why should I fill my brain with facts I can find in two minutes in any standard reference book?"

In the decades since Einstein said this, that two minutes has shrunk to about ten seconds, as long as you have internet access.  And unlike the standard reference books he mentioned, you have little assurance that the information you find online is even close to right.

Don't get me wrong; I think that our rapid, and virtually unlimited, access to human knowledge is a good thing.  But like most good things, it comes at a cost, and that cost is that we have to be doubly cautious to keep our brains engaged.  Not only is there information out there that is simply wrong, there are people who are (for various reasons) very eager to convince you they're telling the truth when they're not.  This has always been true, of course; it's just that now, there are few barriers to having that erroneous information bombard us all day long -- and Ward's paper shows just how quickly we can fall for it.

The cure is to keep our rational faculties online.  Find out if the information is coming from somewhere reputable and reliable.  Compare what you're being told with what you know to be true from your own experience.  Listen to or read multiple sources of information -- not only the ones you're inclined to agree with automatically.  It might be reassuring to live in the echo chamber of people and media which always concur with our own preconceived notions, but it also means that if something is wrong, you probably won't realize it.

Like I said in Saturday's post, finding out you're wrong is no fun.  More than once I've posted stuff here at Skeptophilia and gotten pulled up by the short hairs when someone who knows better tells me I've gotten it dead wrong.  Embarrassing as it is, I've always posted retractions, and often taken the original post down.  (There's enough bullshit out on the internet without my adding to it.)

So we all need to be on our guard whenever we're surfing the web or listening to the news or reading a magazine.  Our tendency to absorb information without question, regardless of its provenance -- especially when it seems to confirm what we want to believe -- is a trap we can all fall into, and Ward's paper shows that once inside, it can be remarkably difficult to extricate ourselves.

**************************************

Saturday, October 3, 2020

The illusion of understanding

I've written before about the Dunning-Kruger effect, the cognitive bias that explains why just about everyone you ask will assure you they're an above-average driver.  We all have the sense of being competent -- and as studies of Dunning-Kruger have shown, we generally think we're more competent than we really are.

I just ran into a paper from a long while ago that I'd never seen before, and that seems to put an even finer lens on this whole phenomenon.  It explains, I think, why people settle for simplistic explanations for phenomena -- and promptly cease to question their understanding at all.  So even though this is hardly a new study, it was new to me, and (I hope) will be new to my readers.

Called "The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth," the paper was written by Leonid Rozenblit and Frank Keil of Yale University and appeared in the journal Cognitive Science.  Its results illustrate, I believe, why trying to disabuse people of poor understanding of science can be such an intensely frustrating occupation.

The idea of the paper is a simple one -- to test the degree to which people trust and rely on what the authors call "lay theories:"
Intuitive or lay theories are thought to influence almost every facet of everyday cognition.  People appeal to explanatory relations to guide their inferences in categorization, diagnosis, induction, and many other cognitive tasks, and across such diverse areas as biology, physical mechanics, and psychology.  Individuals will, for example, discount high correlations that do not conform to an intuitive causal model but overemphasize weak correlations that do.  Theories seem to tell us what features to emphasize in learning new concepts as well as highlighting the relevant dimensions of similarity... 
The incompleteness of everyday theories should not surprise most scientists.  We frequently discover that a theory that seems crystal clear and complete in our head suddenly develops gaping holes and inconsistencies when we try to set it down on paper.  
Folk theories, we claim, are even more fragmentary and skeletal, but laypeople, unlike some scientists, usually remain unaware of the incompleteness of their theories.  Laypeople rarely have to offer full explanations for most of the phenomena that they think they understand.  Unlike many teachers, writers, and other professional “explainers,” laypeople rarely have cause to doubt their naïve intuitions.  They believe that they can explain the world they live in fairly well.
Rozenblit and Keil proceeded to test this phenomenon, and they did so in a clever way.  They were able to demonstrate this illusory sense that we know what's going on around us by (for example) asking volunteers to rate their understanding of how common everyday objects work -- things like zippers, piano keys, speedometers, flush toilets, cylinder locks, and helicopters.  They were then (1) asked to write out explanations of how the objects worked; (2) given explanations of how they actually do work; and (3) asked to re-rate their understanding.

Just about everyone ranked their understanding as lower after they saw the correct explanation.

You read that right.  People, across the board, think they understand things better before they actually learn about them.  On one level, that makes sense; all of us are prone to thinking things are simpler than they actually are, and can relate to being surprised at how complicated some common objects turn out to be.  (Ever seen the inside of a wind-up clock, for example?)  But what is amazing about this is how confident we are in our shallow, incomplete knowledge -- until someone sets out to knock that perception askew.

It was such a robust result that Rozenblit and Keil decided to push it a little, and see if they could make the illusion of explanatory depth go away.  They tried it with a less-educated test group (the initial test group had been Yale students).  Nope -- even people with less education still think they understand everything just fine.  They tried it with younger subjects.  Still no change.  They even told the test subjects ahead of time that they were going to be asked to explain how the objects worked -- thinking, perhaps, that people might be ashamed to admit to some smart-guy Yale researchers that they didn't know how their own zippers worked, and were bullshitting to save face.

The drop was less when such explicit instructions were given, but it was still there.  As Rozenblit and Keil write, "Offering an explicit warning about future testing reduced the drop from initial to subsequent ratings.  Importantly, the drop was still significant—the illusion held."

So does the drop in self-rating occur with purely factual knowledge?  They tested this by doing the same protocol, but instead of asking people for explanations of mechanisms, they asked them to do a task that required nothing but pure recall, such as naming the capitals of various countries.  Here, the drop in self-rating still occurred, but it was far smaller than with explanatory or process-based knowledge.  We are, it seems, much more likely to admit we don't know facts than to admit we don't understand processes.
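
(To make that comparison concrete, here's a toy Python sketch with invented ratings -- not the paper's data -- contrasting the size of the drop for explanatory knowledge with the drop for factual recall.)

# Toy comparison of self-rating drops for explanatory vs. purely factual
# knowledge.  All ratings (0-10) are invented for illustration only.
def mean_drop(before, after):
    """Average drop in self-rated knowledge from first to second rating."""
    return sum(b - a for b, a in zip(before, after)) / len(before)

# "How well do you understand how a cylinder lock works?"
explanatory_before = [7, 8, 6, 9, 7]
explanatory_after = [3, 4, 3, 5, 4]

# "How well do you know the capitals of these countries?"
factual_before = [6, 7, 5, 8, 6]
factual_after = [5, 6, 5, 7, 5]

print(f"explanatory drop: {mean_drop(explanatory_before, explanatory_after):.1f}")
print(f"factual drop:     {mean_drop(factual_before, factual_after):.1f}")
# Both drops are positive, but the explanatory one is far larger -- we
# overrate our grasp of processes more than our command of facts.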

The conclusion that Rozenblit and Keil reach is a troubling one:
Since it is impossible in most cases to fully grasp the causal chains that are responsible for, and exhaustively explain, the world around us, we have to learn to use much sparser representations of causal relations that are good enough to give us the necessary insights: insights that go beyond associative similarity but which at the same time are not overwhelming in terms of cognitive load.  It may therefore be quite adaptive to have the illusion that we know more than we do so that we settle for what is enough.  The illusion might be an essential governor on our drive to search for explanatory underpinnings; it terminates potentially inexhaustible searches for ever-deeper understanding by satiating the drive for more knowledge once some skeletal level of causal comprehension is reached.
Put simply, when we get to "I understand this well enough," we stop thinking.  And for most of us, that point is reached far, far too soon.

And while it really isn't that critical to understand how zippers work as long as it doesn't stop you from zipping up your pants, the illusion of explanatory depth in other areas can come back to bite us pretty hard when we start making decisions on how to vote.  If most of us truly understand far less than we think we do about such issues as the safety of GMOs and vaccines, the processes involved in climate and climate change, the scientific and ethical issues surrounding embryonic stem cells, and even issues like air and water pollution, how can we possibly make informed decisions regarding the regulations governing them?

All the more reason, I think, that we should be putting more time, money, effort, and support into education.  While education doesn't make the illusion of explanatory depth go away, at least the educated are starting from a higher baseline.  We still might overestimate our own understanding, but I'd bet that the understanding itself is higher -- and that's bound to lead us to make better decisions.

I'll end with a quote by author and blogger John Green that I think is particularly apt, here:


*******************************

To the layperson, there's something odd about physicists' search for (among many other things) a "theory of everything" -- a single elegant model that would unite all four fundamental forces.

Why do they think that there is such a theory?  Strange as it sounds, a lot of them say it's because having one force of the four (gravitation) not accounted for by the model, and requiring its own separate equations to explain, is "messy."  Or "inelegant."  Or -- most tellingly -- "ugly."

So, put simply: why do physicists tend to think that for a theory to be true, it has to be elegant and beautiful?  Couldn't the universe just be chaotic and weird, with different facets of it obeying their own unrelated laws, with no unifying explanation to account for it all?

This is the question that physicist Sabine Hossenfelder addresses in her wonderful book Lost in Math: How Beauty Leads Physicists Astray.  She makes a bold claim: that this search for beauty and elegance in the mathematical models has diverted theoretical physics into untestable, unverifiable cul-de-sacs, blinding researchers to the reality -- the experimental evidence.

Whatever you think about whether the universe should obey aesthetically pleasing rules, or whether you're okay with weirdness and messiness, Hossenfelder's book will challenge your perception of how science is done.  It's a fascinating, fun, and enlightening read for anyone interested in learning about the arcane reaches of physics.


