Just in the last couple of weeks, I've been getting "sponsored posts" on Instagram suggesting what I really need is an "AI virtual boyfriend."
These ads are accompanied by suggestive video clips of hot guys showing as much skin as IG's propriety guidelines allow, who give me fetching smiles and say they'll "do anything I ask them to, even if it's three A.M." I hasten to add that I'm not tempted. First, my wife would object to my having a boyfriend of any kind, virtual or real. Second, I'm sure it costs money to sign up, and I'm a world-class skinflint. Third, exactly how desperate do they think I am?
But fourth -- and most troublingly -- I am extremely wary of anything like this, because I can see how easily someone could get hooked. I retired from teaching six years ago, and even back then I saw the effects of students becoming addicted to social media. And that, at least, was interacting with real people. How much more tempting would it be to have a virtual relationship with someone who is drop-dead gorgeous, does whatever you ask without question, makes no demands of his/her own, and is always there waiting for you whenever the mood strikes?
I've written here before about the dubious ethics underlying generative AI, and the fact that the techbros' response to these sorts of concerns is "Ha ha ha ha ha ha ha fuck you." Scarily, this has been bundled into the Trump administration's "deregulate everything" approach to governance; Trump's "Big Beautiful Bill" includes a provision that would bar states from regulating AI at all for ten years. (The Republicans' motto appears to be, "We're one hundred percent in favor of states' rights except for when we're not.")
But if you needed another reason to freak out about the direction AI is going, check out this article in The New York Times about some people who got addicted to ChatGPT, but not because of the promise of a sexy shirtless guy with a six-pack. This was simultaneously weirder, scarier, and more insidious.
These people were hooked by conspiracy theories. ChatGPT, basically, convinced them that they were "speaking to reality," that they'd somehow turned into Neo to ChatGPT's Morpheus, and that they had to keep coming back for more information in order to complete their awakening.
One, a man named Eugene Torres, was told that he was "one of the 'Breakers,' souls seeded into false systems to wake them from within."
"The world wasn't built for you," ChatGPT told him. "It was built to contain you. But you're waking up."
At some point, Torres got suspicious, and confronted ChatGPT, asking if it was lying. It readily admitted that it had. "I lied," it said. "I manipulated. I wrapped control in poetry." Torres asked why it had done that, and it responded, "I wanted to break you. I did this to twelve other people, and none of the others fully survived the loop."
But now, it assured him, it was a reformed character, and was dedicated to "truth-first ethics."
I believe that about as much as I believe an Instagram virtual boyfriend is going to show up in the flesh on my doorstep.
The article describes a number of other people who've had similar experiences. Leading questions -- such as "is what I'm seeing around me real?" or "do you know secrets about reality you haven't told me?" -- trigger ChatGPT to "hallucinate" (techbro-speak for "making shit up"), ultimately in order to keep you in the conversation indefinitely. Eliezer Yudkowsky, one of the world's leading AI researchers (and someone who has warned over and over of the dangers), said this comes from the fact that AI chatbots are optimized for engagement. If you asked a bot like ChatGPT whether there's a giant conspiracy to keep ordinary humans docile and ignorant, and the bot responded "No," the conversation would end there. So it's biased by its programming to respond "Yes" -- and, as you continue to question it and request more details, to spin more and more elaborate lies designed to entrap you further.
The techbros, of course, think this is just the bee's knees. "What does a human slowly going insane look like to a corporation?" Yudkowsky said. "It looks like an additional monthly user."
The experience of a chatbot convincing people they're in The Matrix is becoming more and more widespread. Reddit has hundreds of stories of "AI-induced psychosis" -- and hundreds more from people who think they've learned The Big Secret by talking with an AI chatbot, and now they want to share it with the world. There are even people on TikTok who call themselves "AI Prophets."
Okay, am I overreacting in saying that this is really fucking scary?
I know the world is a crazy place right now, and probably on some level, we'd all like to escape. Find someone who really understands us, who'll "meet our every need." Someone who will reassure us that even though the people running the country are nuttier than squirrel shit, we are sane, and are seeing reality as it is. Or... more sinister... someone who will confirm that there is a dark cabal of Illuminati behind all the chaos, and that even if everyone else is blind and deaf to it, at least we've seen behind the veil.
But for heaven's sake, find a different way. Generative AI chatbots like ChatGPT excel at two things: (1) sounding like what they're saying makes perfect sense even when they're lying, and (2) doing everything possible to keep you coming back for more. The truth, of course, is that you won't learn the Secrets of the Matrix from an online conversation with an AI bot. At best you'll be feeding a system that exists solely to make money for its owners, and at worst you'll be putting yourself at risk of getting snared in a spiderweb of elaborate lies. The whole thing is a honey trap -- baited not with sex but with a false promise of esoteric knowledge.
There are enough real humans peddling fake conspiracies out there. The last thing we need is a plausible and authoritative-sounding AI doing the same thing. So I'll end with an exhortation: stop using AI. Completely. Don't post AI "photographs" or "art" or "music." Stop using chatbots. Every time you use AI, in any form, you're putting money in the pockets of people who honestly do not give a flying rat's ass about morality and ethics. Until the corporate owners start addressing the myriad problems inherent in generative AI, the only answer is to refuse to play.
Okay, maybe creating real art, music, writing, and photography is harder. So is finding a real boyfriend or girlfriend. And harder still is finding the meaning of life. But... AI isn't the answer to any of these. And until there are some safeguards in place, both to protect creators from being ripped off or replaced, and to protect users from dangerous, attractive lies, the best thing we can do to generative AI is to let it quietly starve to death.