Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, January 23, 2026

The parasitic model

A couple of years ago, I posted a frustrated screed about the potential for AI-generated slop to supplant actual creativity.  My anger at the whole thing is based on the fact that I put a great deal of time, effort, and passion into my writing -- not only here, but in my fiction.  The idea that someone could use large language model software and a few well-chosen prompts to produce an eighty-thousand-word-long novel in a matter of minutes, while it takes me months (sometimes years) of steady hard work to create and refine something of equal length -- well, it's maddening.

Still, I've at least been encouraged by the fact that there are folks taking a stand about this, and not only writers like myself, but people in the publishing industry.  Software has been written to detect AI-generated prose, and while it's not flawless, it does at least an adequate job.  My friend J. Scott Coatsworth, an excellent writer in his own right, for several years ran a queer-themed flash fiction contest, and was dismayed and disheartened by the fact that during its last run, he used AI-detection software to check the submissions -- and disqualified ten of them (out of something like two hundred) on that basis.  

While this isn't a very high percentage, what strikes me here is how low the incentive was to cheat.  There was no cash prize; the winners got into an anthology and received a free copy of it, which was lovely, but hardly a bag full of gold.  And, most astonishingly, the maximum word count was three hundred words.  Now, mind you, I'm not saying it's easy to write a good story that short; but for fuck's sake, it's less than a page.

How lazy can you get?

AI is being sneakily inserted into everything.  Those of you with email through Google have probably noticed that now, if there's a back-and-forth chain of emails, you get an AI "summary of the conversation" whether you want it or not.  (There might be a way to opt out, which I'll look into if I get much more pissed off by it.)  Just a couple of days ago, I was part of a three-person electronic exchange with two people I work with, and was completely weirded out when I saw at the top of the thread, "You sympathized with (person 1) for being sick, and both you and (person 2) said it was no problem, that you'd both cover for her and make sure her work got done in her absence, and to get well soon."

Thanks, Google AI, but I don't need my sympathies summarized.  Nor anything else I've emailed people about.  This is way too close to a stranger reading my private correspondence for my comfort.

Not that anything is private on the internet.

The problem has extended into other realms of writing, too.  Wikipedia has become so infested with AI-written articles -- with their attendant problem of "hallucinations," which is tech-speak for "fabricated bullshit" -- that the people running it put together WikiProject AI Cleanup, a project dedicated to detecting AI/LLM-generated articles based on common patterns in their writing style.

There's the often-cited issue with AI's fondness for em-dashes, but there are lots of other giveaways, too.  AI-generated prose often uses fulsome adjectives like "breathtaking" and "foundational" and "pivotal."  It's also fond of participial phrases at the end of sentences -- "... symbolizing the region's commitment to innovation."
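Tells like these lend themselves to simple pattern-matching.  Here's a minimal, hypothetical Python sketch of what a crude rule-based check for them might look like -- the word list and regex patterns below are my own illustrative assumptions, not the WikiProject's actual criteria or tooling, and any real detector would use far longer, human-curated lists:

```python
import re

# Hypothetical list of "fulsome adjective" tells; a real checklist
# would be much longer and human-curated.
STOCK_ADJECTIVES = {"breathtaking", "foundational", "pivotal"}

def ai_style_flags(text: str) -> list[str]:
    """Return a list of crude stylistic red flags found in `text`."""
    flags = []
    words = set(re.findall(r"[a-z]+", text.lower()))
    hits = STOCK_ADJECTIVES & words
    if hits:
        flags.append(f"stock adjectives: {sorted(hits)}")
    # Two or more em-dashes (U+2014) is treated as a tell here.
    if text.count("\u2014") >= 2:
        flags.append("heavy em-dash use")
    # Sentence-final participial phrase: ", <word>ing ..." ending the text.
    if re.search(r",\s+\w+ing\b[^.]*\.$", text.strip()):
        flags.append("trailing participial phrase")
    return flags
```

Run on a sentence stuffed with the tells above, it flags all three; run on ordinary prose, it mostly stays quiet.  Which is exactly why such checks are easy to evade once the list is public -- you just instruct the model to avoid each item.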

[Image: Syntactic analysis of a simple sentence as done by a large language model.  Licensed under the Creative Commons: DancingPhilosopher, Multiple attention heads, CC BY-SA 4.0]

But now a tech entrepreneur named Siqi Chen has created an open-source plug-in for Anthropic's Claude Code AI assistant that uses the WikiProject's list of red flags as a starting point -- so that Claude Code can learn to write less like AI and more like a real person, and slip past the AI detectors.

Chen named his plug-in "The Humanizer."

What really torques me is how breezy Chen is about the whole thing.  "It’s really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X.  "So much so that you can just tell your LLM to … not do that."

Maybe Chen and his ilk wouldn't be so fucking flippant about it if they were among us writers struggling to get our quarterly royalty checks out of the double digits.  AI is trained on human-created writing -- without a dime's worth of compensation for the actual authors, and with tech companies fighting tooth and nail to make sure they can continue to rip us off for free -- even as AI-generated slop takes up a share of the already-narrow publishing market.  

Funny how these issues of morality and intellectual property rights never bother the techbros as long as their own bank accounts are fat and happy.  It's a parasitic model for business, and people like Chen are no more likely to put the brakes on than a tick is likely to ask a dog for permission to bite.

The whole thing has become an arms race.  Good-faith publishers and consumers of written work try to figure out how to detect AI-generated prose, so the techbros respond by springboarding off that to find newer and better ways to evade detection.  We find new ways to shut it off, they find new places to insert it into our lives.  Here in the United States, the situation is only going to get worse; the current regime has a "deregulate everything" approach, because we all know how well corporations self-limit out of ethical considerations.

*brief pause to stop rolling my eyes*

So I'll end this post the way I've ended damn near every post I've done on AI.  Until there are regulations in place to protect the intellectual property of creative people, and to protect consumers from potentially dangerous "hallucinated" content, stop using AI.  Yes, I know it can create pretty pictures that are fun to post on social media.  Yes, I know you can use it to generate cool artwork to hang on your wall -- or for the cover of your book.  Yes, I know it makes writing stuff quicker and easier.  But at the moment, the damage far outweighs the benefits, and as we've seen over and over, tech companies are not going to address the concerns unless they have no choice.

The only option is for consumers to strangle it at its source.

****************************************