Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, November 4, 2025

Apocalypse already

When Vermont Senator George Aiken was asked in 1966 what should be done about the ongoing military debacle in Vietnam, he famously responded that we should simply declare victory and go home.

This approach -- which amounts to "say something counterfactual with confidence, and it will henceforth be true" -- is not unique to Aiken.  Look at Donald Trump's recent claims that he's ended eight wars, steamrolled over China's Xi Jinping with his masterful strategizing, is the most beloved president in the history of ever, his poll numbers are amazing believe me, ICE only arrests evil drug-dealing criminal illegal immigrant terrorists, grocery prices are way down, the Democrats are a hundred percent responsible for the government shutdown, and pay no attention to the man behind the curtain.

The problem, of course, occurs when people start to catch on and realize you've been talking out of your ass.


I've never seen this idea brought to such heights, though, as with a group of people I only found out about a couple of days ago.  They're called Full Preterists -- and they have an answer to all the scoffers who laugh about the fact that every time a preacher predicts some prophecy or another from the Bible will come true on such-and-such a date, it doesn't happen.

Scoff all you will, the Full Preterists say.  You wanna know why all those preachers got it wrong?

It's because all the prophecies in the Bible already happened.

Yup.  Everything.  Not only Jesus's words in the Olivet Discourse that "the Sun will be darkened, and the Moon will not give its light; the stars will fall from the sky, and the heavenly bodies will be shaken," but all the stuff from the Books of Isaiah, Ezekiel, Jeremiah, and Daniel, and the bad acid trip that is the Book of Revelation.

Full Preterism apparently got its start in the sixteenth century with the Jesuit theologian and mystic Luis del Alcázar.  Del Alcázar was a major figure in the Counter-Reformation, which was an attempt by the Catholics to prove to the Protestants that they were capable of cleaning their own house, thank you very much.  It generated some creditable attempts to rid the Vatican of corruption, but also spawned a resurgence of the Inquisition and a lot of loony philosophizing.  Del Alcázar very much belongs in that last-mentioned category.  His book Vestigatio Arcani Sensus in Apocalypsi (An Investigation into the Hidden Sense of the Apocalypse) concluded that everything but the very last bit of the Book of Revelation -- the part about Jesus returning and creating Paradise on Earth -- had already taken place, and in fact occurred before John of Patmos wrote it down in around 90 C.E.

So John was mostly writing history, not prophecy.

"But wait," you might be saying.  "What about stuff that's really specific?  Like the Star Wormwood thing in Revelation 8 that 'fell from heaven and poisoned a third of the fresh water on Earth and made it too bitter to drink'?  What about Revelation 6:12 where a giant earthquake rearranges all the continents, and the Sun turns black and the Moon red as blood?  What about the giant crowned locusts with iron armor, men's faces, women's hair, lions' teeth, and scorpions' stings, that come out of the Earth in Revelation 9?  It'd be kind of hard to miss all that."

Ha-ha, say the Full Preterists.  Of course you didn't miss that.  It's just that -- the inconvenient parts are symbolic.  You know, metaphors.  The Antichrist was Nero.  Or maybe Domitian.  Or was he the Beast?  Or are the Antichrist and the Beast the same?  The locusts are the armored soldiers of Rome (the sharp pointy objects they always carried are scorpions' stings).  The Tribulation was the persecution of Christians by Rome.  Or maybe the destruction of Jerusalem (and the Temple) in 70 C.E.  After all, in Matthew 24:34 Jesus himself says, "Truly I tell you, this generation will not pass away until all these things have taken place," which sounds pretty unequivocal, so somehow, all of it must be in the past, right?

Of course right.

Oddly enough, when del Alcázar said all this stuff, only a few people responded by saying, "Okay, now you're just making shit up."  I guess since the Counter-Reformation went hand-in-hand with the Inquisition, it's understandable that most people went along with him.  If someone says, "Hey, y'all, listen to this crazy claim I just now pulled out of my ass," then follows it up with, "... and if you don't believe it, I'll have you tortured and then burned alive," the vast majority of us would say, "Oh, yeah, brilliant idea, my man.  Keep 'em coming, you're on a roll."  Full Preterism jumped from the Catholics to the Protestants when Dutch theologian Hugo Grotius read del Alcázar's book, said, "Okay, that makes total sense," and wrote his own book called Commentary on Certain Texts That Deal With the Antichrist in 1640, elaborating even further.  John Donne, of Death Be Not Proud fame, quoted del Alcázar in a sermon, even though it was at a point when the Church of England was ascendant and "papism" was frowned upon, to put it mildly.  French writer and theologian Firmin Abauzit, whose accomplishments included proofreading Isaac Newton's Principia Mathematica, was an eighteenth-century Full Preterist who was highly influential in the church -- and in intellectual circles -- at the time.  The idea landed in America in 1845 with Robert Townley's The Second Advent of the Lord Jesus Christ: A Past Event, although apparently Townley later decided that the idea was silly and wrote a rebuttal of his own book.

Here's the problem, of course, and it's the same trap that closed on Harold Camping's ankle: if you make a highly detailed, extremely specific prediction, and it fails to come true, you're gonna lose credibility; but if you keep it vague and symbolic, people start asking awkward questions like "Why is this verse metaphorical, but that one is literally true?"  The Full Preterists seem to want to make the weirder prophecies in the Bible into metaphors and keep the pieces they like as the inerrant Word of God, which strikes me as mighty convenient.  At least the people who think it's all true, but the awkward bits simply haven't happened yet, are being consistent.

Me, I'm inclined to look at all of 'em with an expression like this:


But I guess that's no surprise to anyone.

Anyhow, I thought this was all interesting from a human psychology perspective.  Once we've decided on a worldview, anything that threatens it leaves us scrambling like mad to keep the whole thing from collapsing, however far-fetched some of those solutions end up being.  Of course, I'm probably as guilty of that as the next guy; I've often wondered what I'd do if my rationalist, science-based view of reality received a serious challenge.

Like if the Four Apocalyptic Horsepersons showed up, or something.  My guess is I'd be pretty alarmed.  Although considering the fact that I live in the hinterlands of upstate New York, at least it'd give me something more interesting to do than my usual occupation, which is avoiding working on my current novel by watching the cows in the field across the road.

****************************************


Monday, November 3, 2025

Searching for Rosalia

Remember how in old college math textbooks, they'd present some theorem or another, and then say, "Proving this is left as an exercise for the reader"?

Well, I'm gonna pull the same nasty trick on you today, only it has nothing to do with math (I can hear the sighs of relief), and I'll give you at least a little more to go on than the conclusion.

This particular odd topic came to me, as so many of them do, from my Twin Brother Separated At Birth Andrew Butters, whose Substack you should definitely subscribe to (and read his fiction, too, which is astonishingly good).  He sent me a link from the site EvidenceNetwork.ca entitled, "A Continent Is Splitting in Two, the Rift Is Already Visible, and a New Ocean Is Set to Form," by Rosalia Neve, along with the message, "What do you think of this?"

Well, usually when he (or anyone else) sends me a link with a question like that, they're looking for an evaluation of the content, so I scanned through the article.  It turned out to be about something that I'm deeply interested in, and in fact have written about before here at Skeptophilia -- the geology of the Great Rift Valley in east Africa.  A quick read turned up nothing that looked questionable, although I did notice that none of it was new or groundbreaking (pun intended); the information was all decades old.  In fact, there wasn't anything in the article that you couldn't get from Wikipedia, leading me to wonder why this website saw fit to publish a piece on it as if it were recent research.

I said so to Andrew, and he responded, "Look again.  Especially at the author."

Back to the article I went.  The writer, Rosalia Neve, had the following "About the Author" blurb:
Dr. Rosalia Neve is a sociologist and public policy researcher based in Montreal, Quebec.  She earned her Ph.D. in Sociology from McGill University, where her work explored the intersection of social inequality, youth development, and community resilience.  As a contributor to EvidenceNetwork.ca, Dr. Neve focuses on translating complex social research into clear, actionable insights that inform equitable policy decisions and strengthen community well-being.

Curious.  Why would a sociologist who studies social inequality, youth development, and community resilience be writing about an oddity of African geology?  If there'd been mention of the social and/or anthropological implications of a continent fracturing, okay, that'd at least make some sense.  But there's not a single mention of the human element in the entire article.

The image of Dr. Neve from the article

So I did a Google search for "Rosalia Neve Montreal."  The only hits were from EvidenceNetwork.ca.  Then I searched "Rosalia Neve sociology."  Same thing.  Mighty peculiar that a woman with a Ph.D. in sociology and public policy has not a single publication that shows up on an internet search.  At this point, I started to notice some other oddities; her headshot (shown above) is blurry, and the article is full of clickbait-y ads that have nothing to do with geology, science, or (for that matter) sociology and public policy.

At this point, the light bulb went off, and I said to Andrew, "You think this is AI-generated?"

His response: "Sure looks like it."

But how to prove it?  It seemed like the best way was to try to find the author.  As I said, nothing in the content looked spurious, or even controversial.  So Andrew did an image search on Dr. Neve's headshot... and came up with zero matches outside of EvidenceNetwork.ca.  This is in and of itself suspicious.  Just about any (real) photograph you put into a decent image-location app will turn up something, except in the unusual circumstance that the photo really doesn't appear online anywhere.
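If you want to try a (much cruder) version of this check yourself, a perceptual hash is one way to test whether two images are near-duplicates.  Here's a minimal sketch using Python's Pillow and ImageHash libraries; the filenames are hypothetical stand-ins, and a real reverse image search does far more than this:

# A minimal sketch: compare two images by perceptual hash.
# Requires: pip install pillow imagehash
# The filenames below are hypothetical stand-ins.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_headshot.jpg"))
candidate = imagehash.phash(Image.open("candidate_match.jpg"))

# Subtracting two hashes gives the Hamming distance between
# 64-bit fingerprints; small distances suggest near-duplicates.
distance = suspect - candidate
print(f"Hamming distance: {distance}")
if distance <= 10:
    print("Likely the same (or a lightly edited) image.")
else:
    print("Probably different images.")

This won't trawl the web for you -- you still need candidate images to compare against -- but it's something like the basic trick the search tools use to match pictures that have been resized or recompressed.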

Our conclusion: Rosalia Neve doesn't exist, and the article and her "photograph" were both completely AI-generated.

[Nota bene: if Rosalia Neve is actually a real person and reads this, I will humbly offer my apologies.  But I strongly suspect I'll never have to make good on that.]

It immediately brought to mind something a friend posted last Friday:

What's insidious about all this is that the red flags in this particular piece are actually rather subtle.  People do write articles outside the area of their formal education; the irony of my objecting to this is not lost on me.  The information in the article, although unremarkable, appears to be accurate enough.  Here's the thing, though.  This article is convincing precisely because it's so straightforward, and because the purported author is listed with significant academic credentials, albeit ones unrelated to the topic of the piece.  Undoubtedly, the entire point of it is garnering ad revenue for EvidenceNetwork.ca.  But given how slick this all is, how easy would it be for someone with more nefarious intentions to slip inaccurate, inflammatory, or outright dangerously false information into an AI-generated article credited to an imaginary person who, we're told, has amazing academic credentials?  And how many of us would realize it was happening?

More to the point, how many of us would simply swallow it whole?

This is yet another reason I am in the No Way, No How camp on AI.  Here in the United States the current regime has bought wholesale the fairy tale that regulations are unnecessary because corporations will Do The Right Thing and regulate themselves in an ethical fashion, despite there being 1,483,279 counterexamples in the history of capitalism.  We've gone completely hands-off with AI (and damn near everything else) -- with the result that very soon, there'll be way more questionable stuff flooding every sort of media there is.

Now, as I said above, it might be that Andrew and I are wrong, and Dr. Neve is a real sociologist who just turns out to be interested in geology, just as I'm a linguist who is, too.  What do y'all think?  While I hesitate to lead lots of people to click the article link -- this, of course, is exactly what EvidenceNetwork.ca is hoping for -- do you believe this is AI-generated?  Critically, how could you prove it?

We'd all better start practicing this skill, and get real good at it, real soon.

Detecting AI slop, I'm afraid, is soon going to be an exercise left for every responsible reader.

****************************************


Saturday, November 1, 2025

Weirdness one-upmanship

Thursday's post -- about a strange legend from England called the "fetch" and similar bits of odd folklore from Finland, Norway, and Tibet -- prompted several emails from loyal readers that can be placed under the heading of "You Think That's Wild, Wait'll You Hear This."

The first submission in the Weirdness One-Upmanship contest was about a Japanese legend called Kuchisake-onna (口裂け女), which translates to "the Slit-mouthed Woman."  The Kuchisake-onna appears to its victims as a tall, finely-dressed woman with long, lustrous straight black hair and the lower part of her face covered, carrying either a knife or a sharp pair of scissors.  She comes up and says, "Watashi wa kirei desu ka?" ("Am I pretty?")  This is also kind of a pun in Japanese, because kirei ("pretty") sounds a lot like kire ("cut").  In any case, by the time she asks the question you're kind of fucked regardless, because if you say no, she kills you with her knife.  If you say yes, she lowers her face covering to show that her mouth has been slit from ear to ear, and uses her sharp pointy object to do the same to you.

The only way out, apparently, is to tell her, "You're kind of average-looking."  At that point, the Kuchisake-onna is foiled.  It's a little like what happens if a vampire tries to gain access to the house of a grammar pedant:

Vampire: Can I enter your house?

Pedant:  I don't know, can you?

Vampire: *slinks away, humiliated*

So if you're ever confronted with a Kuchisake-onna, it will be the only time you'll ever come out ahead by telling someone "Eh, you're okay, I guess."

A man about to meet his fate at the hands of a Kuchisake-onna. The three women on the left don't seem especially concerned.  (From Ehon Sayoshigure by Hayami Shungyōsai, 1801)  [Image is in the Public Domain]

Perhaps unsurprisingly, the Kuchisake-onna has made multiple appearances in movies, anime, manga, video games, and at least one mockumentary that was taken seriously enough that people in Gifu Prefecture (where the film was set) were cautioned by one news source not to go outdoors after dark.

The second reader who contacted me asked me if I'd ever heard of the Panotti.  I speculated that it was some kind of Italian finger food that was a cross between pancetta and biscotti, but of course that turned out to be wrong.  The Panotti were a race of humanoids with extremely large ears who appeared in Pliny the Elder's book Natural History.  The reader even provided me with a picture:

A, um, Panottus as pictured in the Nuremberg Chronicle (1493)  [Image is in the Public Domain]

The Panotti, said Pliny, lived in a place called -- I shit you not -- the "All-Ears Islands" off the coast of Scythia.  The guy in the picture looks rather glum, though, doesn't he?  I guess I would, too, if I had twenty-kilogram weights hanging from the sides of my head.

A reader from Hawaii wrote to tell me about a legend called the Huakaʻi pō, which translates to "Nightmarchers."  This extremely creepy bit of folklore claims that dead warriors will sometimes arise from their graves and march their way to various sacred sites, chanting and blowing notes on conch shells.  Anyone who meets them will either be found dead the next morning, or will soon after die by violence.  The only way around this fate is to show the Huakaʻi pō the proper respect by lying face down on the ground until they pass; if you do that, they'll spare you.

That'd certainly save me, because if I was suddenly confronted at night by a bunch of dead Hawaiian warriors, I'd faint, because I'm just that brave.

The reader wrote:

People still sometimes plant rows of ti trees near their houses, because the ti is sacred in Hawaiian culture and the Nightmarchers can't walk through them.  Otherwise the Nightmarchers will walk right through your walls and suddenly appear in your house.  So without that protection, even staying indoors isn't enough.

Last, we have the Mapinguari, a cryptid from Brazil that I'd never heard of before.  The reader who clued me in on the Mapinguari commented that he would "rather meet a fetch, or even a tulpa, than one of these mofos," and when I looked into it I can't help but agree:

A statue of a Mapinguari in the Parque Ambiental Chico Mendes, Rio Branco, Brazil [Image credit: photographer Lalo Almeida]

These things -- which kind of look like the love child of Bigfoot and a cyclops -- also have an extra mouth where their belly button should be, because apparently one mouth isn't sufficient to devour their victims fast enough.  They're denizens of the Brazilian rain forest, and the name is thought to come from the Tupi-Guarani phrase mbaé-pi-guari (mbaé "that, the thing" + pi "foot" + guarî "crooked, twisted"), because in some versions of the legend their feet are attached to their legs backwards, so anyone seeing their footprints and trying to flee in the opposite direction will get caught and eaten.

So anyhow, thanks to the readers who responded to Thursday's post.  I guess we humans never run out of ways to use our creativity to scare the absolute shit out of each other.  Me, I'm just as glad to live in upstate New York, where I'm unlikely to run into Kuchisake-onna, Panotti, Huakaʻi pō, or Mapinguari.  Around here the main danger seems to be dying of boredom, which I suppose given my other choices doesn't seem like such a bad way to go.

****************************************


Friday, October 31, 2025

Signal out of noise

A paper this week out of the University of Washington describes research suggesting that intelligence is positively correlated with the ability to discern what someone is saying in a noisy room.

This was a little distressing to me, because I am terrible at this particular skill.  When I'm in a bar or other loud, chaotic environment, I can often pick out a few words, but understanding entire sentences is tricky.  I also run out of steam really quickly -- I can focus for a while, but suddenly the whole thing descends into a wall of noise.

The evidence, though, seems strong.  "The relationship between cognitive ability and speech-perception performance transcended diagnostic categories," said Bonnie Lau, lead author on the paper.  "That finding was consistent across all three groups studied [an autistic group, a group who had fetal alcohol syndrome, and a neurotypical control group]."

So.  Yeah.  Not a favorable result for yours truly.  I mean, I get why it makes sense; focusing on one conversation when there are others going on is a complex task.  "You have to segregate the streams of speech," Lau explained.  "You have to figure out and selectively attend to the person that you're interested in, and part of that is suppressing the competing noise characteristics.  Then you have to comprehend from a linguistic standpoint, coding each phoneme, discerning syllables and words.  There are semantic and social skills, too -- we're smiling, we're nodding.  All these factors increase the cognitive load of communicating when it is noisy."

While I'm not seriously concerned about the implications regarding my own intelligence, it does make me wonder about sensory synthesis and interpretation in general.  A related phenomenon I've noticed is that if there is a song playing while there's noise going on -- in a restaurant, or on earphones at the gym -- I often have no idea what the song is; I can't understand a single word, pick up the beat, or parse the melody, until something clues me in.  Then, all of a sudden, I find I'm able to hear it clearly.

A while back, some neuroscientists at the University of California, Berkeley elucidated what's happening in the brain that causes this oddity in auditory perception, and it provides an interesting contrast to this week's study.  A paper in Nature Communications in 2016, by Christopher R. Holdgraf, Wendy de Heer, Brian Pasley, Jochem Rieger, Nathan Crone, Jack J. Lin, Robert T. Knight, and Frédéric E. Theunissen, considered how the perception of garbled speech changes when subjects are told what's being said -- and found through a technique called spectrotemporal receptive field mapping that the brain is able to retune itself in less than a second.

The authors write:
Experience shapes our perception of the world on a moment-to-moment basis.  This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood.  Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models.  We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range.  This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement’ in understanding speech.
What astonishes me is how quickly the brain manages this retuning -- although that is certainly matched by my own experience of suddenly being able to hear the lyrics of a song once I recognize what's playing.  As James Anderson put it, writing about the research in ReliaWire, "The findings... confirm hypotheses that neurons in the auditory cortex that pick out aspects of sound associated with language, the components of pitch, amplitude and timing that distinguish words or smaller sound bits called phonemes, continually tune themselves to pull meaning out of a noisy environment."
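For the technically inclined: STRF mapping, stripped to its essentials, is a regression problem -- predict the neural response at each instant from the recent history of the stimulus spectrogram, and the fitted weights are the receptive field.  Here's a minimal numpy sketch of that general idea (my own illustration of ridge-regression STRF estimation, not the authors' actual pipeline; all the names are mine):

import numpy as np

def estimate_strf(spectrogram, response, n_lags, ridge=100.0):
    """Estimate a spectrotemporal receptive field by ridge regression.

    spectrogram : (n_freqs, n_times) stimulus power over time
    response    : (n_times,) neural response (e.g., high-gamma power)
    n_lags      : how many time bins of stimulus history to use
    """
    n_freqs, n_times = spectrogram.shape
    # Design matrix: each row is the flattened spectrogram history
    # over the preceding n_lags time bins.
    X = np.zeros((n_times - n_lags, n_freqs * n_lags))
    for t in range(n_lags, n_times):
        X[t - n_lags] = spectrogram[:, t - n_lags:t].ravel()
    y = response[n_lags:]
    # Ridge solution: w = (X'X + ridge*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    return w.reshape(n_freqs, n_lags)

The retuning the Berkeley group describes amounts to those fitted weights shifting toward speech-like spectrotemporal features -- in under a second.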

A related phenomenon is visual priming, which occurs when people are presented with a seemingly meaningless pattern of dots and blotches, such as the following:


Once you're told that the image is a cow, it's easy enough to find -- and after that, impossible to unsee.

"Something is changing in the auditory cortex to emphasize anything that might be speech-like, and increasing the gain for those features, so that I actually hear that sound in the noise," said study co-author Frédéric Theunissen.  "It’s not like I am generating those words in my head. I really have the feeling of hearing the words in the noise with this pop-out phenomenon.  It is such a mystery."

Apparently, once the set of possibilities of what you're hearing (or seeing) is narrowed, your brain is much better at extracting meaning from noise.  "Your brain tries to get around the problem of too much information by making assumptions about the world," co-author Christopher Holdgraf said.  "It says, ‘I am going to restrict the many possible things I could pull out from an auditory stimulus so that I don’t have to do a lot of processing.’  By doing that, it is faster and expends less energy."

It makes me wonder, though, whether the University of Washington finding points to an association between poor auditory discernment and attention-related disorders like ADHD.  My own experience is that I can focus on what's being said in a noisy environment; it's just exhausting.  Perhaps -- as with the song phenomenon and visual priming -- chaotic brains like mine simply can't throw away extraneous information fast enough to retune.  Eventually, the brain just gives up, and the whole world turns into white noise.

In any case, there's another fascinating, and mind-boggling, piece of how our brains make sense of the world.  It's wonderful that evolution could shape such an amazingly adaptive device, although the survival advantage is obvious.  The faster you are at pulling a signal out of the noise, the more likely you are to make the right decisions about what it is that you're perceiving -- whether it's you talking to a friend in a crowded bar or a proto-hominid on the African savanna trying to figure out if that odd shape in the grass is a predator lying in wait.  

Even if it means that I personally would probably have been a lion's afternoon snack.

****************************************


Thursday, October 30, 2025

Double take

I ended up going down a rabbit hole yesterday -- honestly, neither a surprising nor an infrequent occurrence -- when a friend of mine asked if I'd ever heard of an English legend called the "fetch."

I had, but only because I remembered it being mentioned in (once again, unsurprisingly) an episode of Doctor Who called "Image of the Fendahl," where it was treated as kind of the same thing as a doppelgänger, a supernatural double of a living person.  And just so I can't be accused of only citing Doctor Who references, the same idea was used in the extremely creepy episode of Kolchak: The Night Stalker called "Firefall," wherein an obnoxious and arrogant orchestra conductor ends up with a duplicate who also has the nasty habit of killing people and setting stuff on fire.  The scene where the actual conductor has figured out what is happening, leading him to take refuge in a church -- and the double has climbed up the outside wall and is peering in at him through the window -- freaked me right the hell out when I was twelve years old.


Anyhow, the fetch (in English folklore) is attested at least back to the sixteenth century, but it may derive from a much older legend, the Norse fylgjur.  A fylgja is a spirit that follows someone through their life -- the name comes from an Old Norse verb meaning "to accompany" -- and can take the form either of an animal or a woman (the latter, regardless of the sex of the person; a man's fylgja is never male).  This in turn may be related to the Old English concept of a mære, a malicious, usually female, spirit that visits you at night, and is the origin of our word nightmare.

I ended up looking for similar legends in other cultures, and it turns out there are a lot of them.  One example is the Finnish etiäinen, a double that can only be vaguely glimpsed on occasion, and frequently precedes a person in performing actions (s)he later does for real.  You might catch a glimpse of your significant other opening and then closing a cabinet door in the kitchen, then when you look again, there's no one there -- and you later find out that (s)he was in an entirely different part of the house at the time.  But twenty minutes later, (s)he goes into the kitchen, and opens and closes the same cabinet door.

Apparently, appearances of the etiäinen aren't considered especially ominous; there's usually no special significance to be extracted from what actions they perform.  It's just "something that happens sometimes."  Not so the tulpa, a being originally from Tibetan folklore that was eagerly adopted (and transformed) by western Spiritualists.  Originally, the tulpa was a ghostly stalker that would attach itself to a person and follow them around, generally causing trouble (the name seems to come from the Tibetan sprul pa སྤྲུལ་པ་, meaning "phantom").  But once the Spiritualists got a hold of it, it turned into something you could deliberately create.  A tulpa is a creature produced by the collective psychic energy of a group of people that then takes on a life of its own.  Prominent Spiritualist Alexandra David-Néel said, "Once the tulpa is endowed with enough vitality to be capable of playing the part of a real being, it tends to free itself from its maker's control," and related the experience of creating one that initially was benevolent (she described it as "a jolly, Friar-Tuck-type monk"), but eventually it developed independent thought, so she had to kill it.

Is it just me, or is this admission kind of... unsettling?

In any case, we once again have a television reference to fall back on, this time The X-Files, in the alternately hilarious and horrifying episode "Arcadia," in which Mulder and Scully have to pose as a happily married couple in order to investigate a series of murders (Mulder embraces the role enthusiastically, much to Scully's continuing annoyance), and the tulpa turns out to create itself out of garbage like coffee grounds and old banana peels.

And if you think that just plain tulpas are as weird as it gets, there are apparently people who are so addicted to My Little Pony that they have tried focused meditation and lucid dreaming techniques to bring to life characters like Pinkie Pie and Rainbow Dash.  This subset of the community of "bronies" calls themselves "tulpamancers" and apparently honestly believes that these characters have become real through their efforts.  I'm a big believer in the principle of "You Do You," but the whole brony subculture kind of pushes that to the limit.  Lest you think I'm making this up -- and let me say I understand why you might think that -- here's an excerpt from the Wikipedia article on "brony fandom":

The brony fandom has developed a fandom vernacular language known as bronyspeak, which heavily references the show's content.  Examples of bronyspeak terminology include ponysona (a personalized pony character representing the creator), ponification (transformation of non-pony entities into pony form), dubtrot (a brony version of dubstep), brohoof (a brony version of brofist), and brony itself.

The next obvious place to go was to look into the fact that apparently, a lot of "bronies" want the My Little Pony characters to be real so they can have sex with them, but I drew the line there, deciding that I'd better stop while I was (sort of) ahead.

Well, ahead of where I would have been, anyhow.  I'm shuddering when I think about the searches I already did, and the insanity they're going to trigger in the targeted ads on my social media feed.  I can only imagine the horror show that would have ensued if I'd researched imaginary friend brony sex.

I don't even like thinking about that.

It's a sacrifice, but I do it all for you, Dear Readers.

So anyhow, thanks just bunches to the friend who asked me about fetches.  You just never know where discussions with me are gonna lead.  I guess that's the risk you take in talking to a person who is (1) interested in just about everything, and (2) has the attention span of a fruit fly.  

You may frequently be baffled, but you'll never be bored.

****************************************


Wednesday, October 29, 2025

C'mon, you wanna live forever?

This morning I was casting about for topics for Skeptophilia and happened upon one that kind of made my brain explode.  Part of this was that I came across it prior to my first cup of coffee, but even now that I'm reasonably well caffeinated it still leaves me in a superposition of "Okay, I get it" and "... wait, what?"

I use the "superposition" metaphor deliberately because this, like yesterday's post about Quantum Weeping Angels, is about the weirdness of quantum physics.  To frame this, let's start with a refresher on two concepts that will be familiar to most of you -- Schrödinger's Cat and the Many-Worlds Interpretation.

Schrödinger's Cat -- a thought experiment dreamed up by the brilliant physicist Erwin Schrödinger -- looked at the bizarre prediction that with a quantum process, the phenomenon exists in the form of a wave function describing the probabilities of various outcomes.  Until observed or measured, the wave function is the reality; it's not that the outcome is already decided, and we simply don't know at the moment which option is true (as in a classical situation like flipping a coin, prior to looking to see whether heads or tails came up).  Here, the physics seemed to indicate that in a quantum process, the outcome exists in a superposition of all possible outcomes, but when it's observed, the wave function collapses into one of them, and the probabilities of the others drop to zero.

Schrödinger thought this couldn't possibly be correct, even though the mathematics was impeccable and agreed with all the experimental data (and, in fact, still stands today).  His thought experiment locked a cat in a box with a flask of poison; the flask could be broken by a remote-controlled hammer triggered by detection of a particle by a Geiger counter (particle decay and radioactivity are inherently quantum probabilistic processes).  So, is the cat dead or alive?  It was ridiculous to think it could be both (until you open the box), but that was the inevitable outcome of the quantum model.
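In standard textbook notation (nothing specific to Schrödinger's original paper), the unopened box contains the superposition

$$ |\psi\rangle = \tfrac{1}{\sqrt{2}}\,\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr), $$

and opening it yields each outcome with probability $|1/\sqrt{2}|^2 = 1/2$.  The crucial point is that before the measurement, that superposition is the complete physical description of the cat -- not shorthand for our ignorance about an already-settled fact.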


Not only did this seem like a nonsensical prediction, but a lot of physicists also objected to the role of the observer.  Why should looking at something (or measuring it) affect its physical state?  And besides, what do we mean by observer?  Does it have to be conscious, or is merely interacting enough?  If a photon hits a rock, is the rock somehow "observing" it and altering its quantum mechanical state?

As a way around this, another brilliant physicist, Hugh Everett, turned the whole thing on its head by saying maybe measurement or observation doesn't collapse the wave function, it splits it -- bifurcating the universe into two branches, one in which (for example) the cat dies, and the other in which it survives.  This idea -- which gave rise to hundreds of episodes on Star Trek alone, as well as my own novel Lock & Key -- pleased some people but massively pissed off others, because it results in staggering numbers of alternate universes which then are forever walled off from each other.  The Many-Worlds Interpretation, as it has come to be called, thus appears to be intrinsically unverifiable, and another example of Wolfgang Pauli's acerbic quip, "This isn't even wrong."

Okay, so far that's just background, and probably you already knew most or all of it.  But what the article I came across this morning did was to ask a simple question:

If Many-Worlds is correct, what is it like from the point of view of Schrödinger's Cat?

Or, since people might differ on whether a cat qualifies as an observer, suppose a human is inside the box, and within any given minute, the probability of surviving is exactly one-half.  According to Many-Worlds, at every moment there is a non-zero chance of surviving and a non-zero chance of dying.  What this implies is that in one branch of the universe, you survive every time.

In other words, the Many-Worlds Interpretation seems to guarantee immortality.

Peter Byrne, who wrote a biography of Hugh Everett, danced around the issue.  "It is unlikely, however, that Everett subscribed to this [quantum immortality] view," Byrne wrote, "as the only sure thing it guarantees is that the majority of your copies will die, hardly a rational goal."  Which may well be true, but the goal isn't the issue, is it?  The reality is the issue.  Philosopher David Lewis summed it up in a lecture, in a passage that if it doesn't give you the chills, you're made of sterner stuff than I am:

As all causes of death are ultimately quantum-mechanical in nature, on the Many-Worlds Interpretation, an observer should subjectively expect with certainty to go on forever surviving whatever dangers [he or she] may encounter, as there will always be possibilities of survival, no matter how unlikely; faced with branching events of survival and death, an observer should not equally expect to experience life and death, as there is no such thing as experiencing death, and should thus divide his or her expectations only among branches where they survive.

Which brings up a rather alarming question: if some version of me survives in at least one branch of the universe, whose consciousness does that "me" represent?  The usual approach is that the "me" in some other branch is unaware of the "me" in this branch, and goes on his merry way making different decisions than I'm making; but how can there be more than just a singular "me"?  If this is true, what does "me" even mean?

And the quantum immortality argument makes this infinitely worse.  Physicist and deep thinker Max Tegmark points out that while the overall probability of your being in the "surviving branch" drops by half every minute -- and therefore, eventually becomes a really small number -- from the point of view of the "you" that has survived every branch thus far, it will still always be fifty-fifty.
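To spell out the arithmetic (a back-of-the-envelope version of Tegmark's point, using the one-minute, fifty-fifty setup from above): the overall probability of surviving $n$ minutes is

$$ P(\text{survive } n \text{ minutes}) = \left(\tfrac{1}{2}\right)^{n}, $$

which plummets toward zero as $n$ grows; but the surviving "you" experiences the conditional probability

$$ P(\text{survive minute } n+1 \mid \text{survived the first } n) = \frac{(1/2)^{n+1}}{(1/2)^{n}} = \frac{1}{2}. $$

From the inside, every minute is a fresh coin flip, forever.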

Tegmark writes:

Quantum immortality posits that no one ever dies, they only appear to.  Whenever I might die, there will be another universe in which I still live, some quantum event (however remotely unlikely) which saves me from death.  Hence, it is argued, I will never actually experience my own death, but from my own perspective will live forever, even as countless others will witness me die countless times.  Life however will get very lonely, since everyone I know will eventually die (from my perspective), and it will seem I am the only one who is living forever — in fact, everyone else is living forever also, but in different universes from me.

It's not that I'm all that fond of the idea of kicking the bucket.  I'm like my dad, who was once asked by a family friend what he wanted written on his gravestone, and deadpanned back, "He's Not Here Yet."

But even so, can we all agree that this is a ghastly thought?

Tegmark agrees, although his objection to it -- based on the either/or nature of the thought experiment, as compared to the gradual process of many deaths -- strikes me as fairly weak.  "The fading of consciousness is a continuous process," he writes.  "Although I cannot experience a world line in which I am altogether absent, I can enter one in which my speed of thought is diminishing, my memories and other faculties fading...  I am confident that even if [a person] cannot die all at once, he can gently fade away."

All righty, but I still want to know why the physics demonstrates that this can't be true.

So that's our unsettling journey through the deep waters of quantum physics for today.  And you thought yesterday's post about "there's no such thing as local realism" was bad.  Me, I think I need to have another cup of coffee and then go play with my puppy.  He never worries about physics and philosophy.  He never worries about much of anything, far as I can tell.

What an enviable quantum state to be in.

****************************************


Tuesday, October 28, 2025

Quantum angels

One of the reasons I get so impatient with woo-woos is that science is plenty cool enough without making shit up.

But because quantum physics is already weird even without any embellishment or misinterpretation, it's been particularly prone to being co-opted by woo-woos in their search for explanations supporting (choose one or more of the following):
  • homeopathy
  • psychic abilities
  • astrology
  • "natural healing"
  • the soul
  • "chakras" and "qi"
  • auras
But you don't need to do any of this to make quantum physics cool, and I've got two good examples.  Let's start with an experiment regarding quantum entanglement -- the linking of two particles in a state describable by a single wave function.  While this might seem uninteresting at first, what it implies is that measuring the spin state of particle A would instantaneously determine the spin state of its entangled partner, particle B -- regardless of how far apart the two were.  It's almost as if the two were engaging in faster-than-light communication.  Most physicists, of course, do not believe that's what happens; they say it's more like separating a pair of gloves, each in its own sealed box, and sending one to Alpha Centauri.  Then you open the box that's still here on Earth, and find it contains the right-handed glove; at that point, you automatically know that the one on Alpha Centauri must contain the left-handed glove.  Information didn't travel anywhere; that knowledge is just a function of how the pairing works.
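For concreteness, the usual textbook example (not tied to any particular experiment) is the two-particle singlet state

$$ |\psi^-\rangle = \tfrac{1}{\sqrt{2}}\,\bigl(|{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B\bigr). $$

Measure particle A and you get spin-up or spin-down with probability 1/2 each; whichever it is, particle B is thereafter guaranteed to give the opposite answer, however far away it has been carried.  The glove analogy captures that correlation -- but, as we're about to see, not the whole story.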

However, entanglement is still one of those things that isn't fully explained, even that way.  There's a further twist on this, and that's where things get even more interesting.  Most physicists couple the entanglement phenomenon with the idea of "local realism" -- that the two particles' spin must have been pointing in some direction prior to measurement, even if we didn't know what it was.  Thus, the two entangled particles might have "agreed" (to use an admittedly anthropomorphic term) on what the spin direction would be prior to being separated, simulating communication where there was none, and preserving Einstein's idea that the theories of relativity prohibit faster-than-light communication.

Right?

Scientists at Delft University of Technology in the Netherlands seem to have closed that loophole.  Using extremely fast random number generators to choose their measurement settings, they measured the spin states of pairs of entangled particles separated by 1.3 kilometers -- a distance and timing that make any sub-light-speed communication between the two impossible.  This tosses out the idea of local realism; if the experiment's results hold -- and they certainly seem to be doing so -- the correlations between the particles can't be explained by any prearranged "agreement," and look for all the world like influence traveling faster than light, something that isn't supposed to be possible.  Einstein was so repelled by this idea that he called it "spooky action at a distance."
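The quantitative teeth here come from Bell's theorem, usually in its CHSH form.  If the outcomes were fixed in advance by local hidden variables -- the gloves-in-boxes picture -- then for measurement settings $a, a'$ on one side and $b, b'$ on the other, the correlations must obey

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2, $$

while quantum mechanics allows values of $|S|$ up to $2\sqrt{2} \approx 2.83$.  The Delft experiment measured a value significantly above 2 -- which is exactly what rules out the local-realist story.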

To quote the press release:
With the help of ICFO’s quantum random number generators, the Delft experiment gives a nearly perfect disproof of Einstein's world-view, in which "nothing travels faster than light" and “God does not play dice.”  At least one of these statements must be wrong.  The laws that govern the Universe may indeed be a throw of the dice.
If this wasn't weird and cool enough, a second experiment performed right here at Cornell University supported one of the weirdest results of quantum theory -- that a system cannot change while you're watching it.

Graduate students Yogesh Patil and Srivatsan K. Chakram cooled about a billion atoms of rubidium to a fraction of a degree above absolute zero, and suspended them between lasers.  Under such conditions, the atoms formed an orderly crystal lattice.  But because of an effect called "quantum tunneling," even though the atoms were cold -- and thus nearly motionless -- they could shift positions in the lattice, leading to the result that any given atom could be anywhere in the lattice at any time.

Patil and Chakram found that you can stop this effect simply by observing the atoms.

This is the best experimental verification yet of what's been nicknamed the Quantum Zeno effect, after the Greek philosopher who said that motion was impossible because anyone moving from Point A to Point B would have to cross half the distance, then half the remaining distance, then half again, and so on ad infinitum -- and thus would never arrive.  Motion, Zeno said, must therefore be an illusion.
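(For the record, the classical resolution of Zeno's paradox is just a convergent geometric series -- infinitely many half-steps add up to a finite distance:

$$ \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} = \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = 1, $$

so you do arrive after all.  The quantum version, though, is no illusion: measure a system frequently enough, and you really can freeze its evolution.)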

"This is the first observation of the Quantum Zeno effect by real space measurement of atomic motion," lab director Mukund Vengalattore said.  "Also, due to the high degree of control we've been able to demonstrate in our experiments, we can gradually 'tune' the manner in which we observe these atoms.  Using this tuning, we've also been able to demonstrate an effect called 'emergent classicality' in this quantum system."

Myself, I'm not reminded so much of Zeno as I am of another thing that doesn't move while you watch it.


See what I mean?  You don't need to add all sorts of woo-woo nonsense to this stuff to make it fascinating.  It's cool enough on its own.

Of course, the problem is, understanding it takes some serious effort.  Physics is cool, but it's not easy.  All of which supports a contention I've had for years: that woo-wooism is, at its heart, based in laziness.

Me, I'd rather work a little harder and understand reality as it is.  Even if it leaves me afraid to blink.

****************************************