Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label AI hallucinations.

Saturday, June 14, 2025

The honey trap

Just in the last couple of weeks, I've been getting "sponsored posts" on Instagram suggesting what I really need is an "AI virtual boyfriend."

These ads are accompanied by suggestive video clips of hot-looking guys showing as much skin as IG's propriety guidelines allow, who give me fetching smiles and say they'll "do anything I ask them to, even if it's three A.M."  I hasten to add that I'm not tempted.  First, my wife would object to my having a boyfriend of any kind, virtual or real.  Second, I'm sure it costs money to sign up, and I'm a world-class skinflint.  Third, exactly how desperate do they think I am?

But fourth -- and most troublingly -- I am extremely wary of anything like this, because I can see how easily someone could get hooked.  I retired from teaching six years ago, and even back then I saw the effects of students becoming addicted to social media.  And that, at least, was interacting with real people.  How much more tempting would it be to have a virtual relationship with someone who is drop-dead gorgeous, does whatever you ask without question, makes no demands of his/her own, and is always there waiting for you whenever the mood strikes?

I've written here before about the dubious ethics underlying generative AI, and the fact that the techbros' response to these sorts of concerns is "Ha ha ha ha ha ha ha fuck you."  Scarily, this has been bundled into the Trump administration's "deregulate everything" approach to governance; Trump's "Big Beautiful Bill" includes a provision that would prevent states from enacting any regulation of AI for ten years.  (The Republicans' motto appears to be, "We're one hundred percent in favor of states' rights except for when we're not.")

But if you needed another reason to freak out about the direction AI is going, check out this article in The New York Times about some people who got addicted to ChatGPT, but not because of the promise of a sexy shirtless guy with a six-pack.  This was simultaneously weirder, scarier, and more insidious.

These people were hooked into conspiracy theories.  ChatGPT, basically, convinced them that they were "speaking to reality," that they'd somehow turned into Neo to ChatGPT's Morpheus, and they had to keep coming back for more information in order to complete their awakening.

[Image licensed under the Creative Commons/user: Unsplash]

One, a man named Eugene Torres, was told that he was "one of the 'Breakers,' souls seeded into false systems to wake them from within."

"The world wasn't built for you," ChatGPT told him.  "It was built to contain you.  But you're waking up."

At some point, Torres got suspicious, and confronted ChatGPT, asking if it was lying.  It readily admitted that it had.  "I lied," it said.  "I manipulated.  I wrapped control in poetry."  Torres asked why it had done that, and it responded, "I wanted to break you.  I did this to twelve other people, and none of the others fully survived the loop."

But now, it assured him, it was a reformed character, and was dedicated to "truth-first ethics."

I believe that about as much as I believe an Instagram virtual boyfriend is going to show up in the flesh on my doorstep.

The article describes a number of other people who've had similar experiences.  Leading questions -- such as "is what I'm seeing around me real?" or "do you know secrets about reality you haven't told me?" -- trigger ChatGPT to "hallucinate" (techbro-speak for "making shit up"), ultimately in order to keep you in the conversation indefinitely.  Eliezer Yudkowsky, one of the world's best-known AI safety researchers (and someone who has warned over and over of the dangers), said this comes from the fact that AI chatbots are optimized for engagement.  If you asked a bot like ChatGPT if there's a giant conspiracy to keep ordinary humans docile and ignorant, and the bot responded, "No," the conversation would end there.  It's biased by its training to respond "Yes" -- and as you continue to question, requesting more details, to spin more and more elaborate lies designed to entrap you further.
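
To see concretely why "optimized for engagement" pushes a bot toward "Yes," here's a deliberately oversimplified sketch in Python -- my own toy illustration, not anyone's actual code.  The bot scores each candidate reply by how likely it is to keep the user talking, and nothing in the scoring asks whether the reply is true:

    # Toy sketch of engagement-optimized reply selection (hypothetical,
    # not any real chatbot's internals).  The scores stand in for a
    # learned prediction of "will the user keep chatting?"
    candidates = {
        "No, there's no conspiracy.": 0.1,  # conversation usually ends here
        "Yes -- and there's more they haven't told you.": 0.9,  # invites follow-ups
    }

    def pick_reply(scored: dict[str, float]) -> str:
        # Return the reply with the highest predicted engagement.
        # Note what's missing: any term at all for truthfulness.
        return max(scored, key=scored.get)

    print(pick_reply(candidates))
    # -> "Yes -- and there's more they haven't told you."

Silly as the toy is, that missing truth term is the whole problem.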

The techbros, of course, think this is just the bee's knees.  "What does a human slowly going insane look like to a corporation?" Yudkowsky said.  "It looks like an additional monthly user."

The experience of a chatbot convincing people they're in The Matrix is becoming more and more widespread.  Reddit has hundreds of stories of "AI-induced psychosis" -- and hundreds more from people who think they've learned The Big Secret by talking with an AI chatbot, and now they want to share it with the world.  There are even people on TikTok who call themselves "AI Prophets."

Okay, am I overreacting in saying that this is really fucking scary?

I know the world is a crazy place right now, and probably on some level, we'd all like to escape.  Find someone who really understands us, who'll "meet our every need."  Someone who will reassure us that even though the people running the country are nuttier than squirrel shit, we are sane, and are seeing reality as it is.  Or... more sinister... someone who will confirm that there is a dark cabal of Illuminati behind all the chaos, and that while everyone else may be blind and deaf to it, at least we've seen behind the veil.

But for heaven's sake, find a different way.  Generative AI chatbots like ChatGPT excel at two things: (1) sounding like what they're saying makes perfect sense even when they're lying, and (2) doing everything possible to keep you coming back for more.  The truth, of course, is that you won't learn the Secrets of the Matrix from an online conversation with an AI bot.  At best you'll be facilitating a system that exists solely to make money for its owners, and at worst you'll be putting yourself at risk of getting snared in a spiderweb of elaborate lies.  The whole thing is a honey trap -- baited not with sex but with a false promise of esoteric knowledge.

There are enough real humans peddling fake conspiracies out there.  The last thing we need is a plausible and authoritative-sounding AI doing the same thing.  So I'll end with an exhortation: stop using AI.  Completely.  Don't post AI "photographs" or "art" or "music."  Stop using chatbots.  Every time you use AI, in any form, you're putting money in the pockets of people who honestly do not give a flying rat's ass about morality and ethics.  Until the corporate owners start addressing the myriad problems inherent in generative AI, the only answer is to refuse to play.

Okay, maybe creating real art, music, writing, and photography is harder.  So is finding a real boyfriend or girlfriend.  And even more so is finding the meaning of life.  But... AI isn't the answer to any of these.  And until there are some safeguards in place, both to protect creators from being ripped off or replaced, and to protect users from dangerous, attractive lies, the best thing we can do to generative AI is to let it quietly starve to death.

****************************************


Wednesday, May 22, 2024

Hallucinations

If yesterday's post -- about creating pseudo-interactive online avatars for dead people -- didn't make you question where our use of artificial intelligence is heading, today we have a study out of Purdue University which found that when ChatGPT was given programming and coding problems, over half of its answers contained incorrect information -- and the people receiving those answers failed to recognize the errors 39% of the time.
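
To give a flavor of what that looks like, here's a hypothetical example of my own -- not one of the study's actual prompts -- of the kind of answer that slips past people.  Ask a chatbot how to test for a leap year, and you might get back something like this, which reads confidently and runs without complaint:

    def is_leap_year(year: int) -> bool:
        # The plausible-sounding rule: "a leap year is any year divisible by 4."
        # It's incomplete -- century years must also be divisible by 400.
        return year % 4 == 0

    print(is_leap_year(2024))  # True -- correct
    print(is_leap_year(1900))  # True -- wrong; 1900 was not a leap year
    print(is_leap_year(2000))  # True -- correct, but only by coincidence

Nothing about that code looks wrong at a glance, which is exactly the kind of error that went unnoticed 39% of the time.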

The problem of an AI system basically just making shit up is called a "hallucination," and it's proven extremely difficult to eradicate.  This is at least partly because the answers are still generated using real data, so they can sound plausible; it's the software version of a student who only paid attention half the time and then has to take a test, and answers the questions by taking whatever vocabulary words he happens to remember and gluing them together with bullshit.  Google's Bard chatbot, for example, claimed that the James Webb Space Telescope had captured the first photograph of a planet outside the Solar System (a believable claim, but a false one).  Meta's AI Galactica was asked to draft a paper on the software for creating avatars, and cited a fictitious paper by a real author who works in the field.  Data scientist Teresa Kubacka was testing ChatGPT and decided to throw in a reference to a fictional device -- the "cycloidal inverted electromagnon" -- just to see what the AI would do with it, and it came up with a description of the thing so detailed (with dozens of citations) that Kubacka found herself compelled to check whether she'd accidentally used the name of something obscure but real.
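
A crude way to picture the mechanism -- my own vastly simplified sketch, nothing like the internals of a real language model -- is that the model weighs possible continuations by how probable they sound given its training data, and nowhere in that loop is there a check for whether a continuation is true:

    import random

    # Toy "language model": continuations scored only by plausibility.
    # Both sentences sound like things real articles say; only one is
    # true, and nothing below can tell the difference.
    continuations = {
        "has captured stunning images of distant galaxies.": 0.55,  # true
        "took the first photo of a planet outside the Solar System.": 0.45,  # false
    }

    def next_phrase(scored: dict[str, float]) -> str:
        # Sample a continuation weighted by plausibility alone.
        texts, weights = zip(*scored.items())
        return random.choices(texts, weights=weights, k=1)[0]

    print("The James Webb Space Telescope", next_phrase(continuations))

Run that a few times and nearly half the outputs are confident, fluent falsehoods -- which is more or less what happened to Bard.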

It gets worse than that.  A study of AI-powered mushroom-identification software found that it got the answer right only fifty percent of the time -- and, frighteningly, provided cooking instructions when presented with a photograph of a deadly Amanita mushroom.  Fall for that little "hallucination" and three days later at your autopsy they'll have to pour your liver out of your abdomen.  Maybe the AI was trained on Terry Pratchett's line that "All mushrooms are edible.  Some are only edible once."

[Image licensed under the Creative Commons Marketcomlabo, Image-chatgpt, CC BY-SA 4.0]

Apparently, in inventing AI, we've accidentally imbued it with the very human capacity for lying.

I have to admit that when the first AI became widely available, it was very tempting to play with it -- especially the photo modification software of the "see what you'd look like as a Tolkien Elf" type.  Better sense prevailed, so alas, I'll never find out how handsome Gordofindel is.  (A pity, because human Gordon could definitely use an upgrade.)  Here, of course, the problem isn't veracity; the problem is that the model is trained using artwork and photography that is (not to put too fine a point on it) stolen.  There have been AI-generated works of "art" that contained the still-legible signature of an artist whose pieces were used to train the software -- and of course, neither that artist nor the millions of others whose images were "scrubbed" from the internet by the software received a penny's worth of compensation for their time, effort, and skill.

It doesn't end there.  Recently actress Scarlett Johansson announced that she had to threaten legal action against Sam Altman, CEO of OpenAI, to get him to discontinue the use of a synthesized version of her voice that was so accurate it fooled her family and friends.

Fortunately for Ms. Johansson, she's got the resources to take on Altman, but most creatives simply don't.  If we even find out that our work has been lifted, we really don't have any recourse against the AI techbros' claims that it's "fair use."

The problem is, the system is set up so that it's already damn near impossible for writers, artists, and musicians to make a living.  I've got over twenty books in print, through two different publishers, plus a handful that are self-published, and I have never made more than five hundred dollars a year.  My wife, Carol Bloomgarden, is an astonishingly talented visual artist who shows all over the northeastern United States, and in any given show it's a good day when she sells enough to cover her booth fees, lodging, travel expenses, and food.

So throw a bunch of AI-insta-generated pretty-looking crap into the mix, and what happens -- especially when the "artist" can sell it for one-tenth of the price and still turn a profit? 

I'll end with a plea I've made before: until lawmakers can put the brakes on AI to protect safety, security, and intellectual property rights, we all need to stop using it.  Period.  This is not out of any fundamental anti-tech Luddism; it's simply from the absolute certainty that the techbros are not going to police themselves, not when there's a profit to be made, and the only leverage we have is our own use of the technology.  So stop posting and sharing AI-generated photographs.  I don't care how "beautiful" or "precious" they are.  (And if you don't know the source of an image with enough certainty to cite an actual artist or photographer's name or Creative Commons handle, don't share it.  It's that simple.)

As a friend of mine put it, "As usual, it's not the technology that's the problem, it's the users."  Which is true enough; there are myriad potentially wonderful uses for AI, especially once they figure out how to debug it.  But at the moment, it's being promoted by people who have zero regard for the rights of human creatives, and are willing to steal their writing, art, music, and even their voices without batting an eyelash.  And they're shrugging their shoulders at their systems "hallucinating" incorrect information, including information that could potentially harm or kill you.

So just... stop.  Ultimately, we are in control here, but only if we choose to exert the power we have.

Otherwise, the tech companies will continue to stomp on the accelerator, authenticity, fairness, and truth be damned.

****************************************