Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, March 28, 2026

No guardrails

I was asked a couple of days ago if I think that AI is inherently bad.

My answer might surprise you; it was an unhesitating "no."  As a construct with -- thus far -- no sentient awareness, and therefore no intentionality, it isn't any more inherently evil than a rock.  As with anything, the problem comes from how beings with sentience and intentions use it, and more to the point, what guardrails are placed on it to prevent bad actors from misusing it.

And thus far, the deregulate-everything, corporate-capitalism-über-alles powers-that-be have seen fit to place no restrictions whatsoever on its uses, however harmful they might be.

If you think I'm exaggerating, here are four examples just from within the past few weeks of places where, in my opinion, any sane and moral person would say, "Oh, hell no," but the techbros are mostly shrugging and grinning and saying "ha ha ha ha ha ha ha fuck you."

A Dutch court had to force X/Twitter and its AI chatbot Grok, by way of massive fines (€100,000 per day for non-compliance), to stop users from using its "nudify" tool to produce child pornography and non-consensual pornographic images of adults.  That such a tool even exists is sickening; that a court had to force Elon Musk's company to halt its use is doubly so.  The problem, of course, is that the ruling only applies to use in the Netherlands; the tool is still widely available elsewhere.  So although the possession of child pornography is still illegal in most places, the AI tools people are using to produce it are still somehow legal.

And given that the current leadership in the United States was deeply entangled for decades in a horrific cult of pedophilia and abuse, it's doubtful any action will be taken over here.

The second example comes from a study out of Brown University that found people are using chatbots like ChatGPT as therapists, with alarming results.  Compared side-by-side with actual trained therapists, chatbots -- even those that had been trained on text based on modern psychoanalytic models and current therapeutic ethical standards -- consistently "mishandled crisis situations, gave responses that reinforced harmful beliefs about users or others, and used language that created a false appearance of empathy."

"For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice," said Zainab Iftikhar, a computer scientist at Brown, who led the study.  "But when LLM counselors make these violations, there are no established regulatory frameworks."

Going hand-in-hand with this was a study out of Stanford University, where a team of researchers found that AI/LLM chatbots are being deliberately designed to incorporate sycophancy -- flattering, affirming, people-pleasing behaviors that keep users coming back for more.  If you're in doubt about how intense (and scary) this effect is, here's a direct quote from the paper:
In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.  Yet despite distorting judgment, sycophantic models were trusted and preferred.  All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style.  This creates perverse incentives for sycophancy to persist: the very feature that causes harm also drives engagement.

Last, and most alarming of all, is a study out of King's College London that looked at the AI-based systems now being used more and more often in war games and military strategy simulation and analysis, and found that when pitted against each other, these programs resorted to the use of nuclear weapons 95% of the time.  Kenneth Payne, who led the study, writes:

Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats.  Claude and Gemini especially treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms.  GPT-5.2 was a partial exception, limiting strikes to military targets, avoiding population centers, or framing escalation as “controlled” and “one-time.”  This suggests some internalised norm against unrestricted nuclear war, even if not the visceral taboo that has held among human decision-makers since 1945.

Given that Secretary of Defense Pete Hegseth's idea of high-level military strategy is "shoot 'em up pew-pew-pew," and that he recently got into a huge battle with the head of the AI firm Anthropic over Anthropic's insistence that there be restrictions on the military's unethical use of its AI systems (Hegseth unsurprisingly wanted no restrictions whatsoever, and called Anthropic's objections "woke," which is MAGA-speak for "me no like"), anyone with a shred of foresight, morality, or simple common sense finds this pretty fucking alarming.  So we've got a drunk right-wing talk show host running the largest military in the world with all the thoughtfulness and restraint of a seven-year-old boy playing with G.I. Joes, and now he wants to turn the decision-making over to AI agents that have no apparent problem with using nuclear weapons.

I see no way that could go wrong.

Look, I'm honestly not a pessimist; I've always been in agreement with my dad's assessment that it's better to be an optimist who is wrong than a pessimist who is right.  But this infiltration of AI into everything -- our morality, our relationships, our mental health services, our governments, our militaries -- has got to stop.  Put simply, we're not ready for it as a species.  It's a challenge we didn't evolve to face.  Governments have been reluctant to act, whether from not fully understanding the threat or, as here in the United States, because the tech firms are paying elected officials to pretend there's no problem.  Which of those it is doesn't matter in the slightest, of course, because the result is the same.

No guardrails.

So it's up to us to speak up.  Pressure your representatives to place some kind of restrictions on this.  The Netherlands managed, at least in the case of Elon Musk's child porn generator, so it's possible.  But not unless we fully comprehend what's happening here, and are willing to use the voices we have.  Otherwise, we're in a situation like the one biologist E. O. Wilson warned us about years ago: "The real problem of humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology."

****************************************