Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
A point I've made here more than once is that my doubting many claims of the paranormal isn't because I think such things are necessarily impossible, but because our sensory-interpretive systems are so fundamentally flawed.
I mean, they work well enough, for most of us most of the time. But not only do we have the capacity to miss a great deal of what's going on around us -- as the famous experiment in which a great many test subjects failed to notice a guy in a gorilla suit showed -- what we do sense is all too easy to misinterpret or remember incorrectly. This is why if someone comes to me with a claim of some supernatural occurrence or other, I'm going to ask for some kind of hard, scientifically-admissible evidence. To quote astrophysicist Neil deGrasse Tyson, "I need more than 'you saw it.'" Neither of us is accusing anyone of lying or perpetrating a hoax; the problem is that eyewitness testimony is notoriously unreliable, even if you mean well and are trying your hardest to be honest.
[Image is in the Public Domain]
To throw another monkey wrench into the situation, consider the recent paper by psychologist Rodney Schmaltz of MacEwan University. Schmaltz became interested in the possible role of subsonic vibrations in claims of haunting; there was a case in England where a medical research building was claimed by several workers to be haunted, in one case by a "gray form that materialized, floated across the room, and vanished." More than one person saw the apparition, and several described a sensation of chill, as if they were being watched.
The culprit turned out not to be a ghost, but a furnace fan that had set up a subsonic standing wave in the basement. The frequency of the wave was around twelve Hertz -- so below the range humans can hear -- but created resonant vibrations in our eyes and ears that could be sensed by the brain. The result: eerie hallucinations, altered perception, and feelings of unease.
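The physics behind the haunted-lab case is simple enough to sketch. For a standing wave in a space closed at both ends, the fundamental frequency is f = v / 2L, where v is the speed of sound and L is the length of the space. The snippet below is an illustrative back-of-the-envelope calculation (the specific dimensions are my assumption, not from the original case report):

```python
# Illustrative sketch: fundamental standing-wave frequency in a space
# closed at both ends is f = v / (2 * L).

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def fundamental_frequency(length_m: float) -> float:
    """Fundamental standing-wave frequency (Hz) for a space of given length."""
    return SPEED_OF_SOUND / (2.0 * length_m)

def resonant_length(freq_hz: float) -> float:
    """Length (m) of a space whose fundamental matches the given frequency."""
    return SPEED_OF_SOUND / (2.0 * freq_hz)

# A ~12 Hz wave, like the one blamed in the haunted-lab case, corresponds
# to a space roughly 14 meters long -- plausible for a basement or duct run.
print(round(resonant_length(12.0), 1))   # ~14.3 meters
print(fundamental_frequency(14.3) < 20)  # below the ~20 Hz hearing threshold
```

The point of the arithmetic is just that ordinary architectural dimensions land squarely in the infrasonic range, which is why a mundane furnace fan can drive a wave nobody consciously hears.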
What Schmaltz did was try to see if there was a way to measure the human response to infrasound, by setting up test subjects to listen to recordings of music through headphones. Half the test subjects listened to calm instrumental music, and the other half eerie recordings that could have been the soundtracks of horror movies. What the subjects didn't know, though, was that half of each of the audio tracks had been altered to include infrasound.
The results were incontrovertible. The subjects exposed to infrasound weren't aware of it consciously, but responded to it regardless. Also, it didn't matter what the audible component was. If they were exposed to infrasound, they reported feeling unsettled and unhappy, and -- most strikingly -- a saliva test showed elevated levels of the stress hormone cortisol.
“Whether they were listening to calming instrumental music or something more unsettling, the infrasound shifted their mood and their stress response in a negative direction,” Schmaltz said. “In plain terms, you cannot hear infrasound, but your body and your mood appear to respond to it anyway, and the response tends to be unpleasant.”
Schmaltz suggests that a lot of the reports of ghosts in old buildings might be nothing more than infrasound coming from antiquated boilers, furnaces, and plumbing -- aided, of course, by the fact that we're already primed to expect something paranormal from such places by a hundred years of scary movies set in run-down mansions.
Even knowing all of this, though, probably wouldn't make a whit of difference to our actual responses in such a situation. Because that's the other part of the problem, isn't it? Our emotional reaction to a particular set of circumstances has a way of derailing our higher brain functions, especially when that reaction is "OMG a ghost, run!"
And unfortunately, that applies not just to those Crazy Kids and Their Stupid Dog, but to skeptical rationalists.
I was making dinner last week, and the recipe called for soy sauce. I knew we had a bottle of it -- and I was pretty sure it was somewhere in the door shelves of the fridge, amongst the various salad dressings, jellies, jams, sauces, and marinades we'd collected. But I could not find the damn thing, and was becoming increasingly frustrated.
So instead of a quick scan -- usually sufficient to find what I'm looking for -- I decided on a one-at-a-time, bottle-by-bottle search, and as you've probably already guessed, I found the soy sauce in under thirty seconds. I realized immediately what the problem was; in my mind I pictured it as having a red cap, and our bottle had a green cap.
You'd think that wouldn't make a difference, given that everything else about it was exactly like what I was picturing, up to and including being full of soy sauce and having a big label on the front that said, "SOY SAUCE." But one piece of the search parameter was off, and that made me scan right past it, not once but several times.
This is far from the first time this sort of thing has happened to me, and it amazes me how subtle the error can be and still derail my efforts. It doesn't have to be anything nearly as egregious as the hilarious anecdote Dave Barry tells about his mother, who, with groceries in a cart and two small children in tow, spent an hour trying to find her car in the store parking lot. She looked so pathetic that several kind shoppers pitched in to try to help her. "It's a black Chevrolet," she said, over and over. It was only after the search had gone on for a ridiculous length of time, up and down the parking lot lanes, that she remembered that the previous week they'd traded in their old car for a new one. "Wait! I just realized, it's not a black Chevrolet, it's a yellow Ford!" she told the helpers, wearing a forced smile in a desperate attempt to convince them that she was not, in fact, insane.
The helpers apparently were not amused, and his mom spent the rest of her life trying to live down the embarrassment.
So we can be confounded by our brain's preconceived notions of what we're looking for, from the subtle to the (should be) obvious. And some researchers at Johns Hopkins University have found that the importance of having the right search parameters even extends to characteristics we can't see.
This puzzling result came out of a series of experiments that were the subject of a paper in the Journal of Experimental Psychology. The team, led by cognitive neuroscientist Li Guo, timed how long it took test subjects to isolate a target object from clutter, and they found that knowing characteristics of the object that aren't apparent to the eye -- like hardness or fragility -- significantly improved the speed with which subjects could find the object in question. The authors write:
Our interactions with the world are guided by our understanding of objects’ physical properties. When packing groceries, we place fragile items on top of more durable ones and position sharp corners so they will not puncture the bags. However, physical properties are not always readily observable, and we often must rely on our knowledge of attributes such as weight, hardness, and slipperiness to guide our actions on familiar objects. Here, we asked whether our knowledge of physical properties not only shapes our actions but also guides our attention to the visual world. In a series of four visual search experiments, participants viewed arrays of everyday objects and were tasked with locating a specified object. The target was sometimes differentiated from the distractors based on its hardness, while a host of other visual and semantic attributes were controlled. We found that observers implicitly used the hardness distinction to locate the target more quickly, even though none reported being aware that hardness was relevant. This benefit arose from fixating fewer distractors overall and spending less time interrogating each distractor when the target was distinguished by hardness. Progressively more stringent stimulus controls showed that surface properties and curvature cues to hardness were not necessary for the benefit. Our findings show that observers implicitly recruit their knowledge of objects’ physical properties to guide how they attend to and engage with visual scenes.
What I find most curious about the results of this experiment is this: if the characteristic you're given can't be seen, how does it help your brain locate the object you're searching for? "What makes the finding particularly striking from a vision science standpoint is that simply knowing the latent physical properties of objects is enough to help guide your attention to them," said study senior author Jason Fischer. "It's surprising because nearly all prior research in this area has focused on a host of visual properties that can facilitate search, but we find that what you know about objects can be as important as what you actually see... To me what this says is that in the back of our minds, we are always evaluating the physical content of a scene to decide what to do next. Our mental intuitive physics engines are constantly at work to guide not only how we interact with things in our environment, but how we distribute our attention among them as well."
So it may be that we're approaching our search from a set theory perspective; searching through "the set of all things in my living room" is more efficient if I can eliminate "the subset of things in my living room that are rigid, heavy, stand upright," etc., so eventually my brain can whittle it down to "the couch throw-pillow my puppy dragged behind the recliner."
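That set-winnowing idea can be made concrete with a toy model. This is purely my illustration (the objects and attributes are made up, and this is not how the Hopkins team modeled anything): filtering candidates by known properties, including non-visual ones, shrinks the search space before any "visual" inspection happens.

```python
# Hypothetical toy model of attribute-guided search: prune every candidate
# whose known properties (visual or not) conflict with the target's.
living_room = [
    {"name": "coffee table", "rigid": True,  "heavy": True},
    {"name": "floor lamp",   "rigid": True,  "heavy": False},
    {"name": "bookcase",     "rigid": True,  "heavy": True},
    {"name": "throw pillow", "rigid": False, "heavy": False},
]

def narrow(objects, **known_properties):
    """Keep only objects consistent with everything we know about the target."""
    return [o for o in objects
            if all(o.get(k) == v for k, v in known_properties.items())]

# Knowing the target is soft and light eliminates most of the scene at once,
# leaving only the puppy's favorite theft victim to inspect visually.
candidates = narrow(living_room, rigid=False, heavy=False)
print([o["name"] for o in candidates])  # ['throw pillow']
```

The interesting part of the actual study, of course, is that subjects did this pruning implicitly, without being told hardness was relevant.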
It's still puzzling to me how our brains actually accomplish this, because it means some kind of interaction is occurring between our visual interpretive systems and our non-visual memories (of such things as texture, durability, and so on). It'd be interesting to have people perform this task while in an fMRI machine -- and see how their brain firing patterns differ while performing this task as compared to performing a task that simply requires memory retrieval.
So that's today's look at the fascinating world of cognitive neuroscience. It doesn't explain, however, the weird phenomenon that happens to me while I'm doing home repair projects, wherein I spend 5% of the time doing actual home repair and 95% stomping around swearing and looking for the tool that was just in my damn hand five seconds ago. That one's a mystery.
I've long been fascinated by the phenomenon of priming, where our interpretation of a sensory stimulus is altered by what we expected to see or hear. An excellent example of priming is this famous image:
If you've never seen this before, it's hard to see anything but black blotches. Once you realize it contains a Dalmatian dog -- his head and dark ear are right dead-center in the image -- you'll always see it. You can't go back to your previous state of blissful ignorance.
It works in the auditory realm, too. My wife and I are absolutely addicted to the wonderful British series The Great Pottery Throwdown, where a group of twelve amateur potters participate in a series of challenges and ultimately are whittled down to three finalists and a single winner. Carol and I are both potters -- I won't speak for her, but I can say with confidence that if I were on Throwdown I would be eliminated in the first round -- and it's astonishing what these artists can create given the demands and time constraints. (I also really enjoy how kind they are to each other. Although it's a competition, they help each other, and everyone seems genuinely heartbroken every time one of them gets sent home.) Well, we're re-watching one of the early seasons, and there's a young woman on the show with a pronounced Welsh accent. Even though I'm usually pretty good at understanding people from the UK, I'm baffled by something like half of what she says...
... until we turn on captioning. Then I have no problem. And it's not just that I'm reading along (although I certainly am) -- it really seems like her voice is much more understandable with that little bit of help.
The reason this comes up is a recent study by Cambridge University engineer Václav Volhejn, who is working with sine-wave speech, a voice simulation using a mixture of pure tones (sine waves). The result sounds like someone trying to imitate human speech using a slide whistle. (You can read how he creates the audio here.) If I close my eyes, I can barely get anything from it -- maybe a word here or there. But once I get the cues of what I was supposed to hear, suddenly it seems obvious. The effect lasts, too. If I turn off captioning and go back and listen to the audio again, I can still understand it nearly perfectly.
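Sine-wave speech itself is simple enough to sketch in code. The snippet below is a generic illustration, not Volhejn's actual pipeline: real sine-wave speech replaces the first few formant contours of a recording with frequency-swept pure tones, and here the "formant tracks" are made-up linear sweeps rather than ones extracted from a real voice.

```python
import math

SAMPLE_RATE = 8000  # Hz; telephone-quality is plenty for this demo

def sine_wave_speech(formant_tracks, duration_s=0.5):
    """Sum one pure tone per formant track. Each track is a (start_hz, end_hz)
    pair swept linearly over the utterance, standing in for a formant contour."""
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        t = i / n  # fraction of the way through the utterance, 0..1
        sample = 0.0
        for start_hz, end_hz in formant_tracks:
            # phase is the integral of the swept frequency over elapsed time
            phase = (2 * math.pi * duration_s
                     * (start_hz * t + 0.5 * (end_hz - start_hz) * t * t))
            sample += math.sin(phase)
        samples.append(sample / len(formant_tracks))  # normalize to [-1, 1]
    return samples

# Rough first-three-formant sweeps for a vowel-like sound (invented values)
audio = sine_wave_speech([(700, 500), (1200, 1700), (2500, 2400)])
print(len(audio), max(abs(s) for s in audio) <= 1.0)  # 4000 True
```

Played back, a signal like this sounds like the slide-whistle imitation described above: all the pitch movement of speech, none of the broadband texture that normally carries it, which is exactly why priming makes such a dramatic difference.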
How this all works is not understood, but probably has something to do with how our brain accomplishes recall. A 1994 study found that we're primed to recognize words faster if we have prior exposure to semantically-related words; shown the word dog, for example, we recognize the word wolf more quickly than if we're presented it without the prime. We're also primed to anticipate -- and therefore more quickly recognize -- words that are commonly found in association (lot would be primed by parking), or words that have similar sounds even if they're semantically unrelated (ground would be primed by round). That it has something to do with the brain's recall network is supported by research suggesting that priming effects vanish very early in the development of dementia; apparently even before significant cognitive impairment occurs, dementia patients lose their ability to make these kinds of efficient associations.
What's strangest, though, is that you can be primed two different ways with equal strength. This article from Stranger Dimensions contains an audio clip of sine-wave speech that can be primed to sound like either green needle or brainstorm -- which have almost nothing in common phonetically, and don't even have the same number of syllables. Which you hear depends on which text you're looking at, and if you're like me you can go back and forth indefinitely, from exactly the same audio input.
Then there's the McGurk effect, where what we see actually overrides what we hear so completely that it can cause us not to understand what's coming in through our ears. The two syllables ba and va sound a great deal alike, but they differ in how they're produced; /b/ is a voiced bilabial stop, /v/ a voiced labiodental fricative. But when we see someone's mouth moving in an audio/video clip that's been altered to make it look like he's saying va when he's actually saying ba, we hear va. It's absolutely convincing. Somehow, we're primed by seeing his mouth move -- explaining why it's always easier to understand someone face to face than on the telephone.
All of this is further evidence of a point I've made many times here at Skeptophilia; what you perceive is incomplete, inaccurate, and dependent on a great many external and internal conditions that can change from one moment to the next. "I know it happened that way, I saw it with my own eyes!" is fairly close to nonsense. Oh, sure; for most of us, our sensory-perceptual systems work well enough to get by on. But the idea that what we seem to perceive is some kind of perfect transcription of reality is simply wrong.
It's humbling and a little frightening how easily fooled we are, but the implications for how our brain retrieves stored information are absolutely fascinating. So even if we should be a little more careful about acting certain of the accuracy of our own perceptions and memories, it does open the window on how our brains make sense of the world we live in.
In Shirley Jackson's eerie gothic novel We Have Always Lived in the Castle, the main character -- an eighteen-year-old named Merricat Blackwood -- lives on the outskirts of an unnamed village in New England that contains echoes of H. P. Lovecraft's Arkham and Dunwich.
But if you're familiar with Jackson's better-known short story "The Lottery," you know that she was a past master at flipping the script when you least expect it, and about a third of the way through the book, you begin to suspect there's more to the story than meets the eye -- in particular, that there may be some justification to how the villagers see the Blackwoods. I won't spoil the end, but suffice it to say that the unsettling truth behind the relationship between the Blackwoods and the villagers shows once again that the world is a complex place, and very few of us have either purely good or purely evil motives.
Reading We Have Always Lived in the Castle left me thinking, though, that it's not just damaged individuals like Merricat, Constance, and Uncle Julian who are unreliable narrators of their own lives; we all are. We view our fellow humans through the lenses of our own experience, and reflect outward to them the parts of us we want them to see.
As Anaïs Nin put it, "We don't see the world as it is. We see the world as we are."
It doesn't always work, though. You can probably think of times that you discovered someone you thought you knew was hiding something you never dreamed of, or -- conversely -- that some part of you you'd preferred remained well-hidden suddenly came to light. But really, we shouldn't be surprised when this happens. Nearly all of us wear masks with others, showing a particular face at work, another with friends, another with strangers we meet in the market, yet another with our significant others.
To be fair, there's a large measure of this that isn't deliberate deception. When I was a teacher, my professional face in the classroom quite rightly took precedence over any turmoil I was experiencing in my private life. We often choose what to show and what to conceal for good reasons. But the problem is, hiding can become a habit, especially for people who (like myself) suffer from mental illness. When the mask slips for those of us with depression and anxiety, and we unexpectedly show others what we're going through, it's much less likely that we "suddenly went into a tailspin" than that we'd been pretending to be well for months or years.
Explaining why even our nearest and dearest will often say in shock, "I never realized."
The whole thing got me thinking about a conversation between two of my own characters -- the breezy, outgoing Seth Augustine and the introverted, deeply damaged telepath Callista Lee in Poison the Well:
Seth’s mind returned to his earlier thoughts, about Bethany and the few other people who had disliked him, instantly and almost instinctively. “It can be painful to find out the truth.”
“Not nearly as painful as finding out that no one actually knows what the truth is,” Callista said.
When Seth didn’t respond, she continued, with more animation than he’d heard in her voice yet. “Everyone’s just this bundle of desires and emotions and random thoughts, resentment and love and fear and sex and anger and compassion bubbling right beneath the surface—all in conflict, all of the time, only most people aren’t aware of it. They think things, and their mind looks at them and says ‘this is true’—and they don’t realize that they almost always decide that something is true because it soothes the unpleasant parts—the resentment and fear and anger. It’s not because it actually is true. People believe things because their belief makes the demons quieter.”
We're all unreliable narrators of our own lives, aren't we? And that includes those of us -- I count myself amongst them -- who try to be as truthful as we can. Our determination to be as clear-eyed as possible, not only about others but about ourselves, only goes so far. We're not all hiding a secret as dire as the Blackwoods, I hope. But it highlights how important it is to leave our little self-absorbed bubbles and check in on our friends, often.
It's a well-worn saw by now, but I still remember being told this by a family friend when I was something like six years old. It left me gobsmacked then, and I've never forgotten it. It seems as good a place as any to end this. "Always be kinder than you think you need to be, because everyone you meet is fighting a terrible battle that you know nothing about."
Lately, the political scene in the United States has been dominated by not just the single-cause fallacy (the tendency to attribute complex phenomena to one root cause), but the simple-cause fallacy. This is the KISS principle (Keep It Simple, Stupid) writ large; make everything the result of one, easy-to-understand origin, and you'll have a convenient scapegoat when things go to hell.
How many times have you heard our current government officials saying stuff like "(Some bad thing) is because of (pick one: illegal immigrants, Democrats, brown people doing bad stuff, socialism, LGBTQ+ people)." And unfortunately, this kind of thing has its appeal. Complexity is challenging. We often don't like to be confronted with difficult-to-solve problems, especially when solving those problems involves (1) working with people we disagree with, and (2) facing situations where the solution involves painful compromises.
It's why there was very little pushback a couple of days ago when J. D. Vance, somehow maintaining a straight face the entire time, said that high housing prices were due to illegal immigrants. Lest you think I'm making this up, here's his exact quote:
A lot of young people are saying, housing is way too expensive. Why is that? Because we flooded the country with thirty million illegal immigrants who were taking houses that ought by right go to American citizens. And at the same time we weren’t building enough new houses to begin with even for the population that we had.
This is in spite of the fact that, as of the latest data, the total number of illegal immigrants in the United States is less than half that -- and it ignores the awkward question of how illegal immigrants (all thirty million of them, apparently) would get bank loans to purchase homes without steady, good-paying jobs and Social Security Numbers. Despite this, the person interviewing him -- unsurprisingly, it was Sean Hannity -- nodded as if what Vance had just said made complete sense.
I came across a fascinating example of this tendency just yesterday, one I saw more than once appended to commentary to the effect of "Wow, people sure are stupid." It's a study in Nature by a team led by Francesco Pagnini, of the Università Cattolica del Sacro Cuore, in Milan, and is entitled, "Unexpected Events and Prosocial Behavior: The Batman Effect."
What the researchers did was send a volunteer who was visibly pregnant onto a train, and counted the number of people who offered her a seat. Then they did the same thing, but right after she boarded, a man dressed up as Batman boarded as well. The number of people who gave up their seat for her almost doubled -- from 38% to 67%. And the vast majority of the posters and commenters I've seen mention this study were snickering about how gullible people are. Did the passengers really think that was Batman, and he was going to go all Justice League on their asses if they didn't give up their seat for the pregnant lady? One even went into a long diatribe about how our current online culture has made it hard for people (especially young people, he says) to tell the difference between fiction and reality.
Well, okay, maybe that's one possibility; that being reminded of a character who stands for fair play makes people think they should Do The Right Thing, too. But I can easily think of two other reasons this might have happened -- one of which the authors go into, right in the damn paper. (Highlighting another unfortunate tendency, which is that people often comment on social media posts just from the tagline, and without even clicking the link. I can't even tell you the number of times I've had someone post a comment on a Skeptophilia link that left me thinking, "Bro, did you even read the fucking post?")
The explanation that the authors went into is that having something unusual happen -- like a guy showing up in costume -- makes people take notice. I don't know about you, but when I've ridden trains, I'm seldom giving a lot of attention to the other passengers. (I've usually got my nose in a book.) Unless, that is, one of them is doing something peculiar. It wouldn't have to be Batman, or anyone associated with Smiting Evildoers; all it would have to be is something odd. Then I'd look up -- and be more likely to notice other things, such as a pregnant lady standing there hanging onto the grab bar.
The other possible explanation, though, is one that definitely would have occurred to me; if there's a guy standing there nonchalantly, dressed like Batman, is this part of a stunt? If so, there'd certainly be others watching and waiting to see what the other passengers do -- and possibly filming it. That would cause me to look around. It might induce me toward more prosocial behavior, too; if I know I'm being filmed, I wouldn't want to end up enshrined forever on YouTube as the lazy bum who sat there while a pregnant woman was hanging on for dear life trying not to fall down when the train lurches.
The point here is that an interesting finding (people are more prosocial when somebody nearby is dressed as Batman) is not proof that the passengers think that Batman is real, and (by extension) that they don't know the difference between fact and fiction. That might be true, at least for a few of them. But in this case, the simple (and wryly amusing) explanation is a vast overconclusion.
The fact that it has shown up over and over, though, is yet another example of confirmation bias; the people who are claiming this interpretation of the experiment obviously already think that humanity is irredeemably stupid, and this was just another nail in the coffin. So instead of doing what we all should do -- thinking, "what are other possible explanations for this?" -- they stop there, sitting back with smug expressions, because after all if they see how dumb everyone else is, it must mean they're smart themselves.
Or maybe I'm just falling for the single-cause fallacy myself. It's why I wouldn't want to be a psychologist; people are way too complicated.
But one conclusion I will stand by is that this phenomenon only gets worse with people like J. D. Vance, who not only fall back on simple one-liner explanations, but make up the data as they go to support them.
So anyway. Despite what you may have heard, most people aren't acting nicer around Batman because they think he's real. My guess is people would have had exactly the same reaction if someone had shown up dressed as the Joker. It's always best to stop and question your assumptions and biases before jumping to a conclusion -- or commenting on a link just based on the tagline.
In general, I always cringe a little when I see that a scientific study has been called into question.
These days, especially in the United States (where being anti-science is considered a prerequisite for working in the federal government), the last thing the scientific endeavor needs is another black eye. It's bad enough when the scientists were trying their hardest to do things right, and simply misinterpreted the data at hand -- such as the recent study that might have invalidated the Nobel-Prize-winning research that demonstrated the accelerating expansion of the universe, and the existence of dark energy.
It's worse still when the researchers themselves apparently knew their work was bogus, and published it anyhow. It seems to validate everything Trump and his cronies are saying; the experts are all lying to you. The data is inaccurate or being misrepresented. Listen to us instead, we'd never lie.
Today, though, I came across an allegation that a very famous piece of research was based on what amounts to the researchers lying outright about what had happened in their study -- and if this debunking bears out, it will be about the best news we could have right now.
You ready?
You've probably all heard of the devastating book called When Prophecy Fails, published in 1956 by Leon Festinger, Henry Riecken, and Stanley Schachter. If you're a long-time follower of Skeptophilia, you might well have read about it here, because I've cited it more than once. The gist is that there was a UFO cult run by a woman named Dorothy Martin and a couple named Charles and Lillian Laughead. Martin claimed she was receiving telepathic communications from extraterrestrials, and attracted a group of people who were into her weird mix of UFOlogy and Christian End Times stuff. Well, after running this group for a time, she claimed she'd received word that there was going to be a catastrophic and deadly flood, but that the faithful were going to be picked up by spacecraft and rescued -- on December 21, 1954.
The 1950 McMinnville (Oregon) UFO [Image is in the Public Domain]
Festinger, Riecken, and Schachter, along with several other paid observers, infiltrated the cult, pretending to be true believers, and reported that when the 21st came and went, and -- surprise! -- no devastating flood and no flying saucers appeared, her followers' beliefs in her abilities were actually strengthened. She told them their faithfulness had persuaded God not to flood the place, so the failure of the prophecy was a point in her favor, not against.
The three psychologists came up with terms describing this apparent bass-ackwards response to what should have been a terrible blow to belief, terms which will be familiar to you all: cognitive dissonance and the backfire effect. Both refer to people's abilities to maintain their belief even in the face of evidence to the contrary -- and their tendency to double down when that edifice of faith is threatened.
Well, apparently that wasn't the actual way events played out.
A psychological researcher named Thomas Kelly has written a paper that basically debunks the entire study. Kelly became suspicious when he found that subsequent studies were unable to replicate the one done by Festinger, Riecken, and Schachter (whom Kelly calls "FRS"):
Inspired by FRS, several other scholars would later observe other religious groups that had predicted apocalypses. Generally, they failed to replicate the findings of FRS. Shortly after the publication of "When Prophecy Fails," Hardyck and Braden (1962) investigated an apocalyptic sect of Pentecostals to see if the failed apocalypse would result in enduring conviction and proselytization, but it did not. Balch, Farnsworth, and Wilkins (1983) investigated a Baha'i group that inaccurately predicted an apocalypse and found that the failed prediction undermined the size, conviction, and enthusiasm of the group. Zygmunt (1970) reviewed the proselytization efforts of the Jehovah’s Witnesses, a group which has predicted the apocalypse multiple times, and found that failed prediction led to reduced proselytization. Singelenberg (1989) also found that failed prophecies harmed proselytization efforts among the Jehovah’s Witnesses.
Kelly got access to Leon Festinger's files, including reams of notes that were unpublished, and found that not only did the Martin/Laughead cult not come together with strengthened faith in the way he and his co-authors had described, within six months the entire thing had collapsed and disbanded. In other words: the researchers seem to have lied about the facts of the case, not just their interpretation. Here's what Kelly has to say:
The authors of "When Prophecy Fails" had a theory that when faced with the utter disconfirmation of their religious beliefs, believers would soldier on, double down, and ramp up the proselytization. And the authors had ample resources to shape the cult’s behavior and beliefs. Brother Henry [Riecken's alias while he was a cult member] steered Martin and the others at pivotal meetings. The serendipitous, almost supernatural, arrival of Liz, Frank, and other paid observers buttressed the faith of the cultists. The sheer quantity of research observers in the small group gave them substantial influence. After the prophecy failed, Henry was able to prod Martin into writing the Christmas message and inspire belief in the supernatural by posing as the “earthly verifier,” an emissary of the "Space Brothers."
But even with all this influence, the study didn’t go as planned. The group collapsed; belief died. It did not persevere. What did persevere was FRS’s determination to publish their work and Festinger’s determination to use it to launch the theory of cognitive dissonance. Did any of Festinger, Riecken, or Schachter still believe at that point? History is silent.
The full scope and variety of the misrepresentations and misconduct of the researchers needed the unsealed archives of Festinger to emerge; the full story could not be written until now. But the reputation of "When Prophecy Fails" should not even have survived its first decade.
Now, Kelly's work is new enough that I'm fully expecting it to be challenged; Festinger et al.'s theory of cognitive dissonance is so much a part of modern psychological understanding that I doubt it'll be discarded without a fight. But if even a fraction of what Kelly claims is vindicated, the FRS backfire effect study will have to be completely reconsidered -- just as we've had to reconsider a number of other famous psychological studies that have been partially or completely called into question, such as the Stanford Prison Experiment, the "Little Albert" Experiment, and the Milgram Experiment.
My reason for being jubilant when I read this is not because I wish any kind of stain on the reputations of three famous psychological researchers. It's that if the FRS study in fact didn't demonstrate a backfire effect -- if even being infiltrated by fake cult members who pretended to be enthusiastic true believers, and who encouraged the (real) members into keeping the faith, still didn't buoy up their damaged beliefs -- well, it means that humans can learn from experience, doesn't it? That faced with evidence, even people in faith-based belief systems can change their minds.
And I, for one, find this tremendously encouraging.
It means, for example, that maybe -- just maybe -- there's a chance that the MAGA cult could be reached. The recent release of hundreds, maybe thousands, of horrifying emails between Jeffrey Epstein and his cronies, in which Donald Trump's name figures prominently, may finally wake people up to the monstrous reality of who Trump is, and always has been. (Even the few of these messages that have been made public are horrific enough to make my skin crawl.)
The FRS study has always seemed to me to promote despondency; why argue against people when all it's going to do is make them more certain they're right? But I had no reason to question their results.
Until now.
I'm sure there'll be more papers written on this topic, so I'll have to wait till the dust settles to find out what the final word is. But until then -- keep arguing for what is right, what is decent and honest, and what is supported by the evidence. Maybe it's not as futile as we'd been told.
A paper published this week in the journal Nature: Scientific Reports provided some interesting insights into how our memories of our own past might work -- but also raised a couple of troubling questions in my mind.
Our autobiographical memories reflect our personal experiences at specific times in our lives. All life events are experienced while we inhabit our body, raising the question of whether a representation of our bodily self is inherent in our memories. Here we explored this possibility by investigating if the retrieval of childhood autobiographical memories would be influenced by a body illusion that gives participants the experience of ownership for a ‘child version’ of their own face. Fifty neurologically healthy adults were tested in an online enfacement illusion study. Feelings of ownership and agency for the face were greater during conditions with visuo-motor synchrony than asynchronous conditions. Critically, participants who enfaced (embodied) their child-like face recollected more childhood episodic memory details than those who enfaced their adult face. No effects on autobiographical semantic memory recollection were found. This finding indicates that there is an interaction between the bodily self and autobiographical memory, showing that temporary changes to the representation and experience of the bodily self impacts access to memory.
Which is fascinating. Given the sensation of inhabiting our own (younger) body, we seem to unlock stored memories we previously could not access. It makes me wonder what's up there in our memory centers, you know? Assuming your brain is physiologically normal and uninjured, do you really have a record of everything that's happened to you in there somewhere, just waiting for the right trigger to release it?
"Our findings suggest that the bodily self and autobiographical memory are linked, as temporary changes to bodily experience can facilitate access to remote autobiographical memories," said study senior author Jane Aspell, in an interview with Science Daily. "These results are really exciting and suggest that further, more sophisticated body illusions could be used to unlock memories from different stages of our lives -- perhaps even from early infancy. In the future it may even be possible to adapt the illusion to create interventions that might aid memory recall in people with memory impairments."
Here's the thing, though.
How do they know the memories these volunteers reported are real?
[Image is in the Public Domain]
Let me give you an example from my own childhood.
When I was about four, my parents and I moved from a house in South Charleston, West Virginia to one in nearby Saint Albans. My dad worked at the Marine Corps Recruiting and Training Station at the time, and the move was basically to a nicer neighborhood. We'd lived in a rental next door to a big house I remember as "the green house" -- it was a blocky rectangular thing, two-story, painted light green, where a family with two older boys (at a guess, perhaps seven and nine) lived.
Well, on moving day, my parents were loading the last stuff in the car, and had told me to entertain myself for a half-hour or so while they were finishing up. I wandered into the yard in front of the green house, and the two boys who lived there asked me if I wanted to play. I said "sure," and we went inside, then upstairs -- where they thought it'd be funny to trap me, and convince me my parents were going to leave without me.
I looked down from the window, screaming and trying to alert my mother, but she didn't hear me. I was terrified of being left behind (not, realistically, that this would ever have happened). Eventually the two boys relented and let me go, and I rejoined my parents -- me still tearful and freaking out about my near miss, they wondering what the hell had upset me.
Here's the kicker, though: I have no idea if this actually happened.
I asked my mother about it some years later, and she had no memory of it -- she didn't recall my disappearing, even for a short time, on the day we moved, nor returning upset and scared. "Why would I have told you to run off and play when we were about to leave?" she asked, which I had to admit was a good question. I have zero other memories of the two boys next door (other than that they existed), and to my knowledge I never went inside their house, nor was invited by them to play, on any other occasion. I've always been prone to vivid dreams; I remember being somewhat older, perhaps eight or nine, and having flying dreams so realistic that upon awakening I was halfway convinced they'd really happened. I might be recalling an unusually detailed (and terrifying) dream; or maybe there were two neighbor boys who thought it'd be funny to scare the living shit out of a gullible little kid.
The problem is, there's no way to tell which is the truth.
So I have no doubt that the Gupta et al. study triggered the release of something in the minds of the volunteers, but I think it's a stretch to conclude that what they accessed were real and accurate memories. I've seen plenty of evidence -- both from scientific studies and from my own experience and that of my friends -- indicating that our memories are plastic, malleable, easily warped, and inaccurate. We all too readily conflate our recollections of what actually happened with (1) what we think happened, (2) what we were told happened, and (3) outright mental fabrications. A famous -- if unsettling -- study from the University of Portsmouth in 2008 looked at people's memories of the 2005 terrorist bombing of a double-decker bus in London, and found that many people recalled intricate and vivid detail from CCTV footage of the explosion, and made statements like, "The bus had just stopped to let people off when two women and a man got on" and "He placed a bag by his side, the woman sat down and as the bus left, there was an explosion" and "There was a severed leg on the floor" and "The bus had stopped at a traffic light when there was a bright light, a loud bang and the top flew off."
The problem? There is no CCTV footage of the explosion. None. Presented with that fact, people were astonished. That couldn't be true, they said; they knew they'd seen it, they could still picture it, still recall how upset they'd been watching it. One person, told that no video of the event existed, accused the researchers of lying.
So there you have it. Another reason not to trust your own recollections of past events, and a caution not to get your hopes up about accessing them by visualizing yourself as a child. Me, I'd just as soon not remember a lot of that stuff. Even if I was never kidnapped by the neighbors when I was four, I didn't exactly have a happy childhood. I'd just as soon remain in the present, thank you very much.
This was a little distressing to me, because I am terrible at this particular skill. When I'm in a bar or other loud, chaotic environment, I can often pick out a few words, but understanding entire sentences is tricky. I also run out of steam really quickly -- I can focus for a while, but suddenly the whole thing descends into a wall of noise.
The evidence, though, seems strong. "The relationship between cognitive ability and speech-perception performance transcended diagnostic categories," said Bonnie Lau, lead author on the paper. "That finding was consistent across all three groups studied [an autistic group, a group who had fetal alcohol syndrome, and a neurotypical control group]."
So. Yeah. Not a favorable result for yours truly. I mean, I get why it makes sense; focusing on one conversation when there are others going on is a complex task. "You have to segregate the streams of speech," Lau explained. "You have to figure out and selectively attend to the person that you're interested in, and part of that is suppressing the competing noise characteristics. Then you have to comprehend from a linguistic standpoint, coding each phoneme, discerning syllables and words. There are semantic and social skills, too -- we're smiling, we're nodding. All these factors increase the cognitive load of communicating when it is noisy."
While I'm not seriously concerned about the implications regarding my own intelligence, it does make me wonder about sensory synthesis and interpretation in general. A related phenomenon I've noticed is that if there is a song playing while there's noise going on -- in a restaurant, or on earphones at the gym -- I often have no idea what the song is, and can't understand a single word or pick up the beat or figure out the melody, until something clues me in to what the song is. Then, all of a sudden, I find I'm able to hear it clearly.
Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement’ in understanding speech.
What astonishes me about this is how quickly the brain is able to accomplish this -- although that is certainly matched by my own experience of suddenly being able to hear lyrics of a song once I recognize what's playing. As James Anderson put it, writing about the research in ReliaWire, "The findings... confirm hypotheses that neurons in the auditory cortex that pick out aspects of sound associated with language, the components of pitch, amplitude and timing that distinguish words or smaller sound bits called phonemes, continually tune themselves to pull meaning out of a noisy environment."
A related phenomenon is visual priming, which occurs when people are presented with a seemingly meaningless pattern of dots and blotches, such as the following:
Once you're told that the image is a cow, it's easy enough to find -- and after that, impossible to unsee.
Apparently, once the set of possibilities of what you're hearing (or seeing) is narrowed, your brain is much better at extracting meaning from noise. "Your brain tries to get around the problem of too much information by making assumptions about the world," co-author Christopher Holdgraf said. "It says, ‘I am going to restrict the many possible things I could pull out from an auditory stimulus so that I don’t have to do a lot of processing.’ By doing that, it is faster and expends less energy."
It makes me wonder about the University of Washington finding, though, and whether there might be an association between poor auditory discernment and attention-related disorders like ADHD. My own experience is that I can focus on what's being said in a noisy environment, it's just exhausting. Perhaps -- as with the song phenomenon, and things like visual priming -- chaotic brains like mine simply can't throw away extraneous information fast enough to retune. Eventually, the brain just gives up, and the whole world turns into white noise.
In any case, there's another fascinating, and mind-boggling, piece of how our brains make sense of the world. It's wonderful that evolution could shape such an amazingly adaptive device, although given the obvious survival advantage, perhaps it shouldn't be surprising. The faster you are at pulling a signal out of the noise, the more likely you are to make the right decisions about what it is that you're perceiving -- whether it's you talking to a friend in a crowded bar or a proto-hominid on the African savanna trying to figure out if that odd shape in the grass is a predator lying in wait.
Even if it means that I personally would probably have been a lion's afternoon snack.
A point I've made here at Skeptophilia more than once is that I don't automatically disbelieve in anyone's claim of having a paranormal or religious experience, it's just that I'm doubtful. The reason for my doubt is that having a decent background in neurobiology, I know for a fact that our brains are (in astrophysicist Neil deGrasse Tyson's pithy phrase) "poor data-taking devices." We are swayed by our own biases -- put simply, what we expect to see or hear -- and are often overwhelmed by our own emotions, especially when they're powerful ones like fear or excitement.
What's alarming about this is that it doesn't honestly matter whether you're a skeptic or not; we're all prone to this. I heard a loud noise downstairs one evening -- it was, unfortunately, shortly after I'd been watching an episode of The X Files -- and, as the Man of the House, I bravely volunteered to go investigate. I looked around for something with which to arm myself, and picked up a pair of fireplace tongs (prompting my wife to ask, "What're you gonna do, pinch the monster's belly fat?"). By the time I actually went downstairs, I had worked myself up into a lather imagining what fearful denizens of the netherworld might have invaded our basement.
Turned out our cat had jumped up on the counter and knocked a ceramic mug onto the floor. I did not, for the record, pinch her belly fat with the tongs, although I certainly felt like she deserved it.
The thing is, we're all suggestible, and our imaginations make us prone both to seeing things that aren't there and misinterpreting the things that are there. It's why we have science; scientific tools don't get freaked out and imagine they've seen a ghost.
When I taught Critical Thinking, one of my assignments was for students to use Photoshop (or an equivalent software) to create the best fake ghost, cryptid, or UFO photo they could. This was that year's winner. Pretty good, isn't it? [Image credit: Nathan Brewer, used with permission]
The reason this topic comes up is a pair of unrelated links I happened across within minutes of each other, which are mostly interesting in juxtaposition.
The first one is by "paranormal explorer, investigator, and researcher" Ashley Knibb. Knibb is a UK-based writer and ghost hunter who spends his time visiting sites of alleged hauntings with his team, then writing up their experiences. The one I stumbled across yesterday was about their recent investigation of Royal Gunpowder Mills, Waltham Abbey, Essex. The building, now a "Historical Site of Special Interest" maintained by the government, was (as you might guess from the name) originally an industrial complex for the manufacture of explosives. "Hundreds of lives had passed through these grounds; some of them cut short by the very materials that gave Britain its military edge," Knibb writes. "It’s no wonder the place has a reputation for being haunted... Nothing stirred, but there was an eerie sense that the building’s history had left an imprint. This was a place where weapons of war had been made, where accidents had claimed lives. Sometimes you don’t need voices; the atmosphere says enough."
The rest of the article, which is evocative and creepy, describes what Knibb and his assistants felt, saw, and heard during the night they spent in the Mills. One of them heard the name "Cooper" being spoken; another heard a faint "hello." They saw the sparkle of flashing lights that, upon arrival in the room where they seemed to originate, had no material source. More prosaically, one of their video camera's lights began to strobe on its own. There were areas where the visitors experienced chills, and one of them had a profound experience of vertigo and nausea at one point. (To Knibb's credit, he recounts hearing a loud thud, which turned out to be the movement of a very-much-living staff member retrieving something from an upper room. "Ruling out," Knibb observes correctly, "is as important as ruling in.")
The second link is a paper in The Journal of the International Association for the Psychology of Religion, and is called "Sensing the Darkness: Dark Therapy, Authority, and Spiritual Experience." The gist of the paper is that there is a new trend called "Dark Therapy" where volunteers agree to spend a given amount of time in complete darkness, in search of numinous or otherwise enlightening experiences. Other senses are allowed; in fact, one of the purposes of being in the dark, proponents say, is to heighten your other sensory experiences. Some of these episodes are guided, and others not. The paper recounts the experiences of twelve participants who agreed to spend a block of time between seven and fourteen days in a well-furnished room that was completely dark.
Their responses are intriguing. The researchers (to their credit) do not weigh in on whether the experiences of the participants reflected an external truth, or were simply artifacts of the sensory deprivation and the workings of their minds. I would encourage you to read the original paper, but just to give you the flavor, here's what one person said after her stay in the dark room:
For the first time [in the dark] there was a lot of fear. Somehow like manifestation of fear that was coming, well, differently and sometimes it was like... sometimes sounds, sometimes some images, (. . .) some demonic visions (. . .) were appearing and finally I understood that this is all me, my projection, but that you have to go through it, but it was such realistic experiences, very realistic. (. . .) sometimes I heard something, or I had the feeling that somebody is there with me, and I don’t like it, I don’t like it at all.
What strikes me here is the same question ghost hunting raises: how much of what you experience is what you expected to experience? I don't doubt that Dark Therapy might be an interesting way to learn about your own mind, and how you cope with being deprived of one of your senses, and it might even result in profound enlightenment. But there's a real danger of someone crossing over into believing that something like the "demonic visions" the volunteer experienced are manifestations of an external physical reality. We all come primed with our preconceived notions of what's out there; when we're in an unfamiliar situation where our emotions are ramped up, it'd be all too easy for those mental models to magnify into something that seems convincingly real.
Like I said, it's not that I'm saying I'm certain that Ashley Knibb's scary night at Royal Gunpowder Mills, or anyone else's experiences of the holy or the demonic or the supernatural, are one hundred percent imaginary. It's just that my generally skeptical outlook, and (especially) my training in neuroscience, makes me hesitant to accept personal anecdote as reality without any hard evidence. I'm convincible, but it takes more than "I saw it" (or, in the dark room, "I heard/felt it").
I might find your personal anecdote intriguing, or suggestive, or even worthy of further investigation. But to move from there into believing that some odd claim is true, I need more than that. The human mind is simply too frail, biased, and suggestible to trust without something more to back it up.
I'll end with a quote from John Adams, then a lawyer, later President of the United States: "Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence."