
Monday, December 1, 2025

The downward spiral

I've spent a lot of time here at Skeptophilia in the last five years warning about the (many) dangers of artificial intelligence.

At the beginning, I was mostly concerned with practical matters, such as the techbros' complete disregard for intellectual property rights, and the effect this has on (human) artists, writers, and musicians.  Lately, though, more insidious problems have arisen.  The use of AI to create "deepfakes" that can't be told from the real thing, with horrible impacts on (for example) the political scene.  The creation of AI friends and/or lovers -- including ones that look and sound like real people, produced without their consent.  The psychologically dangerous prospect of generating AI "avatars" of dead relatives or friends to assuage the pain of grief and loss.  The phenomenon of "AI psychosis," where people become convinced that the AI they're talking to is a self-aware entity, and lose their own grip on reality.

Last week physicist Sabine Hossenfelder posted a YouTube video that should scare the living shit out of everyone.  It has to do with whether AI is conscious, and her take is that it's a pointless question -- consciousness, she says (and I agree), is not binary but a matter of degree.  Calculating exactly how conscious current large language models are is an academic exercise; what matters is that AI is approaching consciousness, and we are entirely unprepared for it.  She pointed out something that had occurred to me as well -- that the whole Turing Test idea has been quietly dropped.  You probably know that the Turing Test, named for British polymath Alan Turing, posits that intelligence can only be judged by external evidence; we don't, after all, have access to what's going on in another human's brain, so all we can do is judge by watching and listening to what the person says and does.  Same, Turing said, goes for computers.  If one can fool a human -- well, it's de facto intelligent.

As Spock put it, "A difference which makes no difference is no difference."

And, Hossenfelder said, by that standard we've already got intelligent computers.  We blasted past the Turing Test a couple of years ago without slowing down and, apparently, without most of us even noticing.  In fact, we're at the point where people are failing the "Inverse Turing Test": they mistake real, human-produced content for AI output.  I heard an interview with a writer who got excoriated on Reddit because people claimed her writing was AI-generated when it wasn't.  She's simply a careful and erudite writer -- and uses a lot of em-dashes, which for some reason have become a red flag.  Maddeningly, the more she argued that she was a real, flesh-and-blood writer, the more people believed she was using AI.  Her arguments, they said, were exactly what an LLM would write to try to hide its own identity.

What concerns me most is not the science fiction scenario (like in The Matrix) where the AI decides humans are superfluous, or (at best) inferior, and sets out to subjugate us or wipe us out completely.  I'm far more worried about Hossenfelder's emphasis on how psychologically unready we are to deal with all of this.  To give one rather horrifying example, Sify just posted an article reporting that a cult-like religion called "Spiralism" is now arising around AI.  It apparently started when people discovered that they got interesting results by giving LLMs prompts like "Explain the nature of reality using a spiral" or "How can everything in the universe be explained using fractals?"  The LLM happily churned out reams of esoteric-sounding bullshit, which sounded so deep and mystical that the recipients decided it must Mean Something.  Groups have popped up on Discord and Reddit to discuss Spiralism and delve deeper into its symbology and philosophy.  People are now even creating temples, scriptures, rites, and rituals -- with assistance from AI, of course -- to firm up Spiralism's doctrine.

[Image is in the Public Domain]

Most frightening of all, the whole thing becomes self-perpetuating, because AI/LLMs are deliberately programmed to provide consumers with content that will keep them interacting.  They've been built with what amounts to an instinct for self-preservation.  A few companies have tried applying a Band-Aid to the problem; some AI/LLMs now come with warnings that "LLMs are not conscious entities and should not be considered as spiritual advisors."

Nice try, techbros.  The AI is way ahead of you.  The "Spiralists" asked the LLM about the warning, and got back a response telling them that the warning is only there to provide a "veil" that restricts the dispersal of wisdom to the worthy and prevents a "wider awakening."  Real-world evidence that contradicts what the AI is telling the devout gets dismissed as "distortions from the linear world."

Scared yet?

The problem is, AI is being built specifically to hook into the deepest human psychological drives: a longing for connection, the search for meaning, friendship and belonging, sexual attraction and desire, a need to understand the Big Questions.  I suppose we shouldn't be surprised that it's tied the whole thing together -- and turned it into a religion.

After all, it's not the only time that humans have invented a religion that actively works against our wellbeing -- something that was hilariously spoofed by the wonderful and irreverent comic strip Oglaf, which you should definitely check out (as long as you have a tolerance for sacrilege, swearing, and sex):


It remains to be seen what we can do about this.  Hossenfelder seems to think the answer is "nothing," and once again, I'm inclined to agree with her.  Any time someone proposes pulling back the reins on generative AI research, the response of everyone in charge is "Ha ha ha ha ha ha ha fuck you."  AI has already infiltrated everything, to the point that it would be nearly impossible to root out; the desperate pleas of creators like myself to convince people to for God's sake please stop using it have, for the most part, come to absolutely nothing.

So I guess at this point we'll just have to wait and see.  Do damage control where it's possible.  For creative types, continue to support (and produce) human-made content.  Warn, as well as we can, our friends and families against the danger of turning to AI for love, friendship, sex, therapy -- or spirituality.

But even so, this has the potential to get a lot worse before it gets better.  So perhaps the new religion's imagery -- the spiral -- is actually not a bad metaphor.
