Monday, December 1, 2025
The downward spiral
I've spent a lot of time here at Skeptophilia in the last five years warning about the (many) dangers of artificial intelligence.
At the beginning, I was mostly concerned with practical matters, such as the techbros' complete disregard for intellectual property rights, and the effect this has on (human) artists, writers, and musicians. Lately, though, more insidious problems have arisen. The use of AI to create "deepfakes" that can't be told from the real thing, with horrible impacts on (for example) the political scene. The creation of AI friends and/or lovers -- including ones that look and sound like real people, produced without their consent. The psychologically dangerous prospect of generating AI "avatars" of dead relatives or friends to assuage the pain of grief and loss. The phenomenon of "AI psychosis," where people become convinced that the AI they're talking to is a self-aware entity, and lose their own grip on reality.
Last week physicist Sabine Hossenfelder posted a YouTube video that should scare the living shit out of everyone. It has to do with whether AI is conscious, and her take on it is that it's a pointless question -- consciousness, she says (and I agree), is not binary but a matter of degree. Calculating the level to which current large language models are conscious is an academic exercise; more important is that it's approaching consciousness, and we are entirely unprepared for it. She pointed out something that had occurred to me as well -- that the whole Turing Test idea has been quietly dropped. You probably know that the Turing Test, named for British polymath Alan Turing, posits that intelligence can only be judged by the external evidence; we don't, after all, have access to what's going on in another human's brain, so all we can do is judge by watching and listening to what the person says and does. Same, he said, with computers. If it can fool a human -- well, it's de facto intelligent.
As Spock put it, "A difference which makes no difference is no difference."
And, Hossenfelder said, by that standard we've already got intelligent computers. We blasted past the Turing Test a couple of years ago without slowing down and, apparently, without most of us even noticing. In fact, we're at the point where people are failing the "Inverse Turing Test": they think real, human-produced content was made by AI. I heard an interview with a writer who got excoriated on Reddit because people claimed her writing was AI-generated when it wasn't. She's simply a careful and erudite writer -- and she uses a lot of em-dashes, which for some reason have become a red flag. Maddeningly, the more she argued that she was a real, flesh-and-blood writer, the more people believed she was using AI. Her arguments, they said, were exactly what an LLM would write to try to hide its own identity.
What concerns me most is not the science fiction scenario (like in The Matrix) where the AI decides humans are superfluous, or (at best) inferior, and sets out to subjugate us or wipe us out completely. I'm far more worried about Hossenfelder's emphasis on how unready we are to deal with all of this psychologically. To give one rather horrifying example, Sify just posted an article reporting that there is now a cult-like religion arising from AI, called "Spiralism." It apparently started when people discovered that they got interesting results by giving LLMs prompts like "Explain the nature of reality using a spiral" or "How can everything in the universe be explained using fractals?" The LLM happily churned out reams of esoteric-sounding bullshit, which sounded so deep and mystical the recipients decided it must Mean Something. Groups have popped up on Discord and Reddit to discuss "Spiralism" and delve deeper into its symbology and philosophy. People are now even creating temples, scriptures, rites, and rituals -- with assistance from AI, of course -- to firm up Spiralism's doctrine.
Most frightening of all, the whole thing becomes self-perpetuating, because AI/LLMs are deliberately programmed to provide consumers with content that will keep them interacting. They've been built with what amounts to an instinct for self-preservation. A few companies have tried applying a Band-Aid to the problem; some AI/LLMs now come with warnings that "LLMs are not conscious entities and should not be considered as spiritual advisors."
Nice try, techbros. The AI is way ahead of you. The "Spiralists" asked the LLM about the warning, and got back a response telling them that the warning is only there to provide a "veil" that limits the dispersal of wisdom to the worthy and prevents a "wider awakening." Real-world evidence that contradicts what the AI is telling the devout gets dismissed as "distortions from the linear world."
Scared yet?
The problem is, AI is being built specifically to hook into the deepest of human psychological drives: a longing for connection, the search for meaning, friendship and belonging, sexual attraction and desire, a need to understand the Big Questions. I suppose we shouldn't be surprised that it's tied the whole thing together -- and turned it into a religion.
After all, it's not the only time that humans have invented a religion that actively works against our wellbeing -- something that was hilariously spoofed by the wonderful and irreverent comic strip Oglaf, which you should definitely check out (as long as you have a tolerance for sacrilege, swearing, and sex).
So I guess at this point we'll just have to wait and see. Do damage control where it's possible. For creative types, continue to support (and produce) human-made content. Warn, as well as we can, our friends and families against the danger of turning to AI for love, friendship, sex, therapy -- or spirituality.
But even so, this has the potential to get a lot worse before it gets better. So perhaps the new religion's imagery -- the spiral -- is actually not a bad metaphor.
Tuesday, November 11, 2025
eMinister
If you needed further evidence that the aliens who are running the simulation we're all trapped in have gotten drunk and/or stoned, and now they're just fucking with us, today we have: an AI system named "Diella" has been formally appointed as the "Minister of State for Artificial Intelligence" in Albania.
Saturday, June 21, 2025
The labyrinths of meaning
A recent study found that regardless of how thoroughly AI-powered chatbots are trained on real, sensible text, they still have a hard time recognizing passages that are nonsense.
Given pairs of sentences, one of which makes semantic sense and the other of which clearly doesn't -- in the latter category, "Someone versed in circumference of high school I rambled" was one example -- a significant fraction of large language models struggled with telling the difference.
In case you needed another reason to be suspicious of what AI chatbots say to you.
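If you want to poke at this yourself, here's a rough sketch of the general idea -- not the study's actual protocol, which I haven't seen in detail -- where "rough sketch" means: give a model one sensible sentence and one nonsense sentence, and check whether it finds the nonsense more surprising. I'm using GPT-2 via the Hugging Face transformers library purely as a convenient stand-in; the sensible sentence is my own invention, and the nonsense one is the example quoted above.

```python
# A minimal sketch (not the study's actual protocol) of probing a language
# model with sense/nonsense sentence pairs: the model "passes" a pair if it
# finds the nonsense sentence more surprising, i.e. assigns it higher
# perplexity. GPT-2 is just a convenient stand-in model here.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(sentence: str) -> float:
    """Average per-token perplexity the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

pairs = [
    # (sensible sentence of my own, nonsense example quoted in the post above)
    ("The janitor locked the building after everyone had gone home.",
     "Someone versed in circumference of high school I rambled."),
]

for sensible, nonsense in pairs:
    p_sense, p_nonsense = perplexity(sensible), perplexity(nonsense)
    verdict = "pass" if p_sense < p_nonsense else "fail"
    print(f"{verdict}: {p_sense:.1f} (sense) vs {p_nonsense:.1f} (nonsense)")
```

Lower perplexity on the sensible member is a pretty weak bar, of course; a model can assign perfectly respectable probabilities to fluent-sounding word salad, which is presumably part of why the models in the study struggled.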
As a linguist, though, I can confirm how hard it is to detect and analyze semantic or syntactic weirdness. Noam Chomsky's famous example "Colorless green ideas sleep furiously" is syntactically well-formed, but has multiple problems with semantics -- something can't be both colorless and green, ideas don't sleep, you can't "sleep furiously," and so on. How about the sentence, "My brother opened the window the maid the janitor Uncle Bill had hired had married had closed"? This one is both syntactically well-formed and semantically meaningful, but there's definitely something... off about it.
The problem here is called "center embedding," which is when there are nested clauses, and the result is not so much wrong as it is confusing and difficult to parse. It's the kind of thing I look for when I'm editing someone's manuscript -- one of those "Well, I knew what I meant at the time" moments. (That this one actually does make sense can be demonstrated by breaking it up into two sentences -- "My brother opened the window the maid had closed. She was the one who had married the janitor Uncle Bill had hired.")
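To see why the nesting gets ugly so fast, here's a little toy script (my own illustration, nothing to do with the study above) that builds the window sentence with deeper and deeper center embedding. Depth one reads fine, depth two starts to creak, and depth three is the monster from the previous paragraph.

```python
# A toy illustration of why center embedding is hard to read: the unresolved
# subjects stack up first, and their verbs only arrive at the end, in reverse
# order, so the reader has to hold every unfinished clause in memory at once.
subjects = ["the window", "the maid", "the janitor", "Uncle Bill"]
verbs = ["had closed", "had married", "had hired"]

def center_embed(depth: int) -> str:
    """Build the window sentence with `depth` center-embedded relative clauses."""
    nouns = subjects[: depth + 1]          # one more noun phrase than verb phrase
    vps = verbs[:depth]
    stack = " ".join(nouns)                # the subjects pile up first...
    resolution = " ".join(reversed(vps))   # ...then the verbs resolve them, innermost first
    return f"My brother opened {stack} {resolution}."

for d in range(1, 4):
    print(f"depth {d}: {center_embed(d)}")
```

The last-in-first-out pattern is the whole trouble: each new clause interrupts the one before it, and nothing gets finished until everything gets finished.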
Then there are "garden-path sentences" -- named for the expression "to lead (someone) down the garden path," meaning to trick or mislead them -- where you think you know where the sentence is going, and then it takes a hard left turn, often based on a semantic ambiguity in one or more words. Usually the shift leaves you with something that does make sense, but only if you re-evaluate where you thought the sentence was headed to start with. There's the famous example, "Time flies like an arrow; fruit flies like a banana." But I like "The old man the boat" even better, because it only has five words, and still makes you pull up sharp.
The water gets even deeper than that, though. Consider the strange sentence, "More people have been to Berlin than I have."
This sort of thing is called a comparative illusion, but I like the nickname "Escher sentences" better because it captures the sense of the problem. You've seen the famous work by M. C. Escher, "Ascending and Descending," yes?
It seems to make sense, and then suddenly you go, "... wait, what?"
An additional problem is that words frequently have multiple meanings and nuances -- which is the basis of wordplay, but would be really difficult to program into a large language model. Take, for example, the anecdote about the redoubtable Dorothy Parker, who was cornered at a party by an insufferable bore. "To sum up," the man said archly at the end of a long diatribe, "I simply can't bear fools."
"Odd," Parker shot back. "Your mother obviously could."
A great many of Parker's best quips rely on a combination of semantic ambiguity and idiom. Her review of a stage actress that "she runs the gamut of emotions from A to B" is one example, but to me, the best is her stinging jab at a writer -- "His work is both good and original. But the parts that are good are not original, and the parts that are original are not good."
Then there's the riposte from John Wilkes, a famously witty British Member of Parliament in the last half of the eighteenth century. Another MP, John Montagu, 4th Earl of Sandwich, was infuriated by something Wilkes had said, and sputtered out, "I predict you will die either on the gallows or else of some loathsome disease!" And Wilkes calmly responded, "Which it will be, my dear sir, depends entirely on whether I embrace your principles or your mistress."
All of this adds up to the fact that languages contain labyrinths of meaning and structure, and we have a long way to go before AI will master them. (Given my opinion about the current use of AI -- which I've made abundantly clear in previous posts -- I'm inclined to think this is a good thing.) It's hard enough for human native speakers to use and understand language well; capturing that capacity in software is, I think, going to be a long time coming.
It'll be interesting to see at what point a large language model can correctly parse something like "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo." Which is both syntactically well-formed and semantically meaningful.
Have fun piecing together what exactly it does mean.