I've written before about the Dunning-Kruger effect, the cognitive bias that explains why just about everyone you ask will assure you they're an above-average driver. We all have the sense of being competent -- and as studies of Dunning-Kruger have shown, we generally think we're more competent than we really are.
I just ran into a paper from about thirteen years ago that I'd never seen before, and that seems to put an even finer lens on this whole phenomenon. It explains, I think, why people settle for simplistic explanations for phenomena -- and promptly cease to question their understanding at all. So even though this is hardly a new study, it was new to me, and (I hope) will be new to my readers.
Called "The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth," the paper was written by Leonid Rozenblit and Frank Keil of Yale University and appeared in the journal Cognitive Science. Its results illustrate, I believe, why trying to disabuse people of poor understanding of science can be such an intensely frustrating occupation.
The idea of the paper is a simple one -- to test the degree to which people trust and rely on what the authors call "lay theories":
Intuitive or lay theories are thought to influence almost every facet of everyday cognition. People appeal to explanatory relations to guide their inferences in categorization, diagnosis, induction, and many other cognitive tasks, and across such diverse areas as biology, physical mechanics, and psychology. Individuals will, for example, discount high correlations that do not conform to an intuitive causal model but overemphasize weak correlations that do. Theories seem to tell us what features to emphasize in learning new concepts as well as highlighting the relevant dimensions of similarity...
The incompleteness of everyday theories should not surprise most scientists. We frequently discover that a theory that seems crystal clear and complete in our head suddenly develops gaping holes and inconsistencies when we try to set it down on paper.
Folk theories, we claim, are even more fragmentary and skeletal, but laypeople, unlike some scientists, usually remain unaware of the incompleteness of their theories. Laypeople rarely have to offer full explanations for most of the phenomena that they think they understand. Unlike many teachers, writers, and other professional “explainers,” laypeople rarely have cause to doubt their naïve intuitions. They believe that they can explain the world they live in fairly well.

Rozenblit and Keil proceeded to test this phenomenon, and they did so in a clever way. They demonstrated this illusory sense that we know what's going on around us by (for example) asking volunteers to rate their understanding of how common everyday objects work -- things like zippers, piano keys, speedometers, flush toilets, cylinder locks, and helicopters. The volunteers were then (1) asked to write out explanations of how the objects worked; (2) given explanations of how they actually do work; and (3) asked to re-rate their understanding.
Just about everyone rated their understanding as lower after they saw the correct explanation.
You read that right. People, across the board, think they understand things better before they actually learn about them. On one level, that makes sense; all of us are prone to thinking things are simpler than they actually are, and can relate to being surprised at how complicated some common objects turn out to be. (Ever seen the inside of a wind-up clock, for example?) But what is amazing about this is how confident we are in our shallow, incomplete knowledge -- until someone sets out to knock that perception askew.
It was such a robust result that Rozenblit and Keil decided to push it a little, and see if they could make the illusion of explanatory depth go away. They tried it with a less-educated test group (the initial test group had been Yale students). Nope -- even people with less education still think they understand everything just fine. They tried it with younger subjects. Still no change. They even told the test subjects ahead of time that they were going to be asked to explain how the objects worked -- thinking, perhaps, that people might simply have been ashamed to admit to some smart-guy Yale researchers that they didn't know how their own zippers worked, and were bullshitting to save face.
The drop was less when such explicit instructions were given, but it was still there. As Rozenblit and Keil write, "Offering an explicit warning about future testing reduced the drop from initial to subsequent ratings. Importantly, the drop was still significant—the illusion held."
So does the drop in self-rating occur with purely factual knowledge? They tested this by doing the same protocol, but instead of asking people for explanations of mechanisms, they asked them to do a task that required nothing but pure recall -- such as naming the capitals of various countries. Here, the drop in self-rating still occurred, but it was far smaller than with explanatory or process-based knowledge. We are, it seems, much more likely to admit we don't know facts than to admit we don't understand processes.
The conclusion that Rozenblit and Keil reach is a troubling one:
Since it is impossible in most cases to fully grasp the causal chains that are responsible for, and exhaustively explain, the world around us, we have to learn to use much sparser representations of causal relations that are good enough to give us the necessary insights: insights that go beyond associative similarity but which at the same time are not overwhelming in terms of cognitive load. It may therefore be quite adaptive to have the illusion that we know more than we do so that we settle for what is enough. The illusion might be an essential governor on our drive to search for explanatory underpinnings; it terminates potentially inexhaustible searches for ever-deeper understanding by satiating the drive for more knowledge once some skeletal level of causal comprehension is reached.

Put simply, when we get to "I understand this well enough," we stop thinking. And for most of us, that point is reached far, far too soon.
And while it really isn't that critical to understand how zippers work as long as it doesn't stop you from zipping up your pants, the illusion of explanatory depth in other areas can come back to bite us pretty hard when we start making decisions on how to vote. If most of us truly understand far less than we think we do about such issues as the safety of GMOs and vaccines, the processes involved in climate and climate change, the scientific and ethical issues surrounding embryonic stem cells, and even issues like air and water pollution, how can we possibly make informed decisions regarding the regulations governing them?
All the more reason, I think, that we should be putting more time, money, effort, and support into education. While education doesn't make the illusion of explanatory depth go away, at least the educated are starting from a higher baseline. We might still overestimate our own understanding, but I'd bet that the understanding itself is higher -- and that's bound to lead to better decisions.
I'll end with a quote by John Green that I think is particularly apt here: