In our human experiments, even a single interaction with a sycophantic AI reduced participants' willingness to take responsibility and repair interpersonal conflicts, while increasing their conviction that they were right. Yet despite distorting judgment, sycophantic models were trusted and preferred. All of these effects persisted after controlling for individual traits such as demographics and prior familiarity with AI, as well as for perceived response source and response style. This creates perverse incentives for sycophancy to persist: the very feature that causes harm also drives engagement.
Last, and most alarming of all, is a study out of King's College London that looked at the AI-based systems now being used more and more often in war games and military strategy simulations, and found that when pitted against each other, these programs fell back on threats of nuclear weapons use 95% of the time. Kenneth Payne, who led the study, writes:
Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats. Claude and Gemini especially treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms. GPT-5.2 was a partial exception, limiting strikes to military targets, avoiding population centers, or framing escalation as “controlled” and “one-time.” This suggests some internalised norm against unrestricted nuclear war, even if not the visceral taboo that has held among human decision-makers since 1945.
Given that Secretary of Defense Pete Hegseth's idea of high-level military strategy is "shoot 'em up pew-pew-pew," and that he recently got into a huge battle with the head of the AI firm Anthropic over Anthropic's demand that there be restrictions on the unethical use of their AI systems by the military (Hegseth unsurprisingly wanted no restrictions whatsoever, and called Anthropic's objections "woke," which is MAGA-speak for "me no like"), anyone with a shred of foresight, morality, or simple common sense finds this pretty fucking alarming. So we've got a drunk right-wing talk show host running the largest military in the world with all the thoughtfulness and restraint of a seven-year-old boy playing with G. I. Joes, and now he wants to turn over the decision-making to AI agents that have no apparent problem with using nuclear weapons.
I see no way that could go wrong.
Look, I'm honestly not a pessimist; I've always been in agreement with my dad's assessment that it's better to be an optimist who is wrong than a pessimist who is right. But this infiltration of AI into everything -- our morality, our relationships, our mental health services, our governments, our militaries -- has got to stop. Put simply, we're not ready for it as a species. It's a challenge we didn't evolve to face. Governments have been reluctant to act, whether from not fully understanding the threat or, as here in the United States, because the tech firms are paying elected officials to pretend there's no problem. Which it is doesn't matter in the slightest, of course, because the result is the same.
No guardrails.
So it's up to us to speak up. Pressure your representatives to place some kind of restrictions on this. The Netherlands managed, at least in the case of Elon Musk's child porn generator, so it's possible. But not unless we fully comprehend what's happening here, and are willing to use the voices we have. Otherwise, we're in a situation like the one biologist E. O. Wilson warned us about years ago: "The real problem of humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology."
