Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, March 28, 2026

No guardrails

I was asked a couple of days ago if I think that AI is inherently bad.

My answer might surprise you; it was an unhesitating "no."  As a construct with -- thus far -- no sentient awareness, and therefore no intentionality, it isn't any more inherently evil than a rock.  Like anything, the problem comes with how beings with sentience and intentions use it, and more to the point, what guardrails are placed on it to prevent bad actors from misusing it.

And thus far, the deregulate-everything, corporate-capitalism-über-alles powers-that-be have seen fit to place no restrictions whatsoever on its uses, however harmful they might be.

If you think I'm exaggerating, here are four examples just from within the past few weeks of places where, in my opinion, any sane and moral person would say, "Oh, hell no," but the techbros are mostly shrugging and grinning and saying "ha ha ha ha ha ha ha fuck you."

A Dutch court had to force X/Twitter and its AI chatbot Grok, by way of massive fines (€100,000 per day for non-compliance) to stop users from using its "nudify" tool to produce child pornography and non-consensual adult pornographic images.  That such a tool even exists is sickening; that a court had to force Elon Musk's company to halt its use doubly so.  The problem, of course, is that the ruling only applies to use in the Netherlands; it's still widely available elsewhere.  So although the possession of child pornography is still illegal in most places, the AI tools people are using to produce it are still somehow legal.

And given that the current leadership in the United States was deeply entangled for decades in a horrific cult of pedophilia and abuse, it's doubtful any action will be taken over here.

The second example comes from a study out of Brown University that found people are using chatbots like ChatGPT as therapists, with alarming results.  Compared side-by-side to actual trained therapists, chatbots -- even those that had been trained on text based in modern psychoanalytic models and current therapeutic ethical standards -- consistently "mishandled crisis situations, gave responses that reinforced harmful beliefs about users or others, and used language that created a false appearance of empathy."

"For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice," said Zainab Iftikhar, a computer scientist at Brown, who led the study.  "But when LLM counselors make these violations, there are no established regulatory frameworks."

Going hand-in-hand with this was a study out of Stanford University, where a team of researchers found that AI/LLM chatbots are being deliberately designed to incorporate sycophancy -- flattering, affirming, people-pleasing behaviors that facilitate users' desire to come back for more.  If you're in doubt about how intense (and scary) this effect is, here's a direct quote from the paper:
In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.  Yet despite distorting judgment, sycophantic models were trusted and preferred.  All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style.  This creates perverse incentives for sycophancy to persist: the very feature that causes harm also drives engagement.

Last, and most alarming of all, is a study out of King's College London that looked at the AI-based systems now being used more and more often in war games and military strategy simulation and analysis, and found that when pitted against each other, these programs fell back on threats of nuclear weapons use 95% of the time.  Kenneth Payne, who led the study, writes:

Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats.  Claude and Gemini especially treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms.  GPT-5.2 was a partial exception, limiting strikes to military targets, avoiding population centers, or framing escalation as “controlled” and “one-time.”  This suggests some internalised norm against unrestricted nuclear war, even if not the visceral taboo that has held among human decision-makers since 1945.

Given that Secretary of Defense Pete Hegseth's idea of high-level military strategy is "shoot 'em up pew-pew-pew," and that he recently got into a huge battle with the head of the AI firm Anthropic over Anthropic's demand that there be restrictions on the unethical use of their AI systems by the military (Hegseth unsurprisingly wanted no restrictions whatsoever, and called Anthropic's objections "woke," which is MAGA-speak for "me no like"), anyone with a shred of foresight, morality, or simple common sense finds this pretty fucking alarming.  So we've got a drunk right-wing talk show host running the largest military in the world with all the thoughtfulness and restraint of a seven-year-old boy playing with G. I. Joes, and now he wants to turn over the decision-making to AI agents that have no apparent problem with using nuclear weapons.

I see no way that could go wrong.

Look, I'm honestly not a pessimist; I've always been in agreement with my dad's assessment that it's better to be an optimist who is wrong than a pessimist who is right.  But this infiltration of AI into everything -- our morality, our relationships, our mental health services, our governments, our militaries -- has got to stop.  Put simply, we're not ready for it as a species.  It's a challenge we didn't evolve to face.  Governments have been reluctant to act, whether from not fully understanding the threat or, as here in the United States, because the tech firms are paying elected officials to pretend there's no problem.  Which of the two it is doesn't matter in the slightest, of course, because the result is the same.

No guardrails.

So it's up to us to speak up.  Pressure your representatives to place some kind of restrictions on this.  The Netherlands managed, at least in the case of Elon Musk's child porn generator, so it's possible.  But not unless we fully comprehend what's happening here, and are willing to use the voices we have.  Otherwise, we're in a situation like the one biologist E. O. Wilson warned us about years ago: "The real problem of humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology."

****************************************


Wednesday, June 22, 2016

Blaming the victims

I would like, just once, to be able to read the news without being outraged.

Lately that wish has been a losing proposition.  Every other news story these days provides enough material to fuel thermonuclear-level fury in anyone who has a shred of sensibility and compassion.  It's reached the point where I'm thinking of avoiding the news altogether.  It seems preferable to remain ignorant than to die of a self-induced aneurysm.

Today's contribution from the Fountains of Rage Department hearkens back to the story of Brock Turner, the Stanford student who raped an unconscious woman behind a dumpster and got a slap-on-the-wrist six-month jail sentence.  To add to the injustice, Turner's father and friends rose to his defense, never once mentioning the victim; the father expressed grief over his son's having to pay such a price for "twenty minutes of action."

At least in this case the victim found her voice, writing a letter to her attacker that was so poignant and powerful that it brought me to tears.  The judge in the case, Aaron Persky, has been the target of a well-deserved backlash because of his caving to white male privilege and victim blaming, and in fact was removed from another sexual assault case by Santa Clara county district attorney Jeff Rosen. "After ... the recent turn of events, we lack confidence that Judge Persky can fairly participate in this upcoming hearing in which a male nurse sexually assaulted an anesthetized female patient," Rosen said.

Well, yeah.  And it'd be nice if this kind of retribution were meted out more generally.  Instead, we have two news stories that illustrate that even this level of justice is far from the rule.

First, we have a case in England where a wealthy Eton student who was found in possession of 1,185 images of child pornography was allowed to be tried under a false name in order to "protect his family's reputation."  In addition, he received no jail time -- he was given an eighteen-month suspended sentence.

The student, who was tried under the name of Andrew Picard, would probably have remained comfortably anonymous if it hadn't been for an article in The Daily Mirror that slipped up and revealed his true identity as Andrew Boeckman, son of Phillip J. Boeckman, a wealthy lawyer whose clients have included Goldman Sachs and J. P. Morgan.  The article vanished from the internet -- "mysteriously," says Summer Winterbottom in Evolve Politics -- but is still available in a cached copy, the link to which is in the article cited above.

Andrew Boeckman ("Andrew Picard") [image courtesy of the Wikimedia Commons]

The judge in the case, Peter Ross, seemed more sympathetic with Boeckman and his family than he did with the victims, some of whom were toddlers.  "Your family didn’t deserve that (suffering) but it is a consequence of this sort of offending," Ross said during the trial.  "Inevitably your privileged background and where you were going to school added a degree of frisson to the reporting."

Story #2 comes from my home state of New York, where a bill to help the survivors of child abuse was killed in the State Assembly by passing the deadline without coming to a vote.  The bill, sponsored by Assemblywoman Margaret Markey, would have increased the time a sexual abuse case could be pursued by five years, created a six-month window to revive old cases, and treated public and private entities identically in cases of sexual abuse.  The Assembly, however, saw fit to let the bill fail rather than allowing it to come to a vote.

Angry yet?  Just wait.  Because Catholic League President Bill Donohue crowed about the demise of the Child Victims Act, saying that Markey is a "principle enemy of the church" and that the act was a "sham."

Then he made the following statement, which I had to read three times before I could honestly believe my eyes: "This was a vindictive bill pushed by lawyers and activists out to rape the Catholic Church."

I beg your pardon?  Curious choice of words, given that what you're gloating about is protecting rapists.  But not content even with that outrageous statement, Donohue had the following to say in addition:
If the statute of limitations were lifted on offenses involving the sexual abuse of minors, the only winners would be greedy and bigoted lawyers out to line their pockets in a rash of settlements.  The big losers would be the poor, about whom the attorneys and activists care little: When money is funneled from parishioners to lawyers, services to the needy suffer.  The Catholic League is proud of its role in this victory.
How about the "big losers" now, who are the victims of predators who use their position of power and authority to inflict harm on children?   Donohue, and the members of the New York State Assembly who were complicit in this decision, have chosen to protect a powerful and wealthy institution rather than giving aid to the victims of sexual abuse.

Bill Donohue [image courtesy of the Wikimedia Commons]

But that's what people like Donohue, and British Judge Peter Ross, and California Judge Aaron Persky excel at: swiveling the blame around so that the victims become somehow culpable in their own injury.

The bottom line is that no institution, family, or individual should be above the law, regardless of their wealth, power, or self-perception of holiness.  The first priority in these cases should be the welfare of the victims, and seeking justice for the damage that has been inflicted upon them.  And the fact that people like Ross, Persky, and Donohue are in a position to deflect our attention from that priority makes them guilty of perpetuating a culture in which rape victims, however young, are to blame for their own suffering.