Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, March 28, 2026

No guardrails

I was asked a couple of days ago if I think that AI is inherently bad.

My answer might surprise you; it was an unhesitating "no."  As a construct with -- thus far -- no sentient awareness, and therefore no intentionality, it isn't any more inherently evil than a rock.  Like anything, the problem comes with how beings with sentience and intentions use it, and more to the point, what guardrails are placed on it to prevent bad actors from misusing it.

And thus far, the deregulate-everything, corporate-capitalism-über-alles powers-that-be have seen fit to place no restrictions whatsoever on its uses, however harmful they might be.

If you think I'm exaggerating, here are four examples just from within the past few weeks of places where, in my opinion, any sane and moral person would say, "Oh, hell no," but the techbros are mostly shrugging and grinning and saying "ha ha ha ha ha ha ha fuck you."

A Dutch court had to force X/Twitter and its AI chatbot Grok, by way of massive fines (€100,000 per day for non-compliance), to stop users from using its "nudify" tool to produce child pornography and non-consensual adult pornographic images.  That such a tool even exists is sickening; that a court had to force Elon Musk's company to halt its use, doubly so.  The problem, of course, is that the ruling only applies to use in the Netherlands; the tool is still widely available elsewhere.  So although the possession of child pornography is still illegal in most places, the AI tools people are using to produce it are somehow still legal.

And given that the current leadership in the United States was deeply entangled for decades in a horrific cult of pedophilia and abuse, it's doubtful any action will be taken over here.

The second example comes from a study out of Brown University that found people are using chatbots like ChatGPT as therapists, with alarming results.  Compared side-by-side to actual trained therapists, chatbots -- even those that had been trained on text based in modern psychoanalytic models and current therapeutic ethical standards -- consistently "mishandled crisis situations, gave responses that reinforced harmful beliefs about users or others, and used language that created a false appearance of empathy."

"For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice," said Zainab Iftikhar, a computer scientist at Brown, who led the study.  "But when LLM counselors make these violations, there are no established regulatory frameworks."

Going hand-in-hand with this was a study out of Stanford University, where a team of researchers found that AI/LLM chatbots are being deliberately designed to incorporate sycophancy -- flattering, affirming, people-pleasing behaviors that facilitate users' desire to come back for more.  If you're in doubt about how intense (and scary) this effect is, here's a direct quote from the paper:
In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.  Yet despite distorting judgment, sycophantic models were trusted and preferred.  All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style.  This creates perverse incentives for sycophancy to persist: the very feature that causes harm also drives engagement.

Last, and most alarming of all, is a study out of King's College London that looked at the AI-based systems now being used more and more often in war games and military strategy simulation and analysis, and found that when pitted against each other, these programs fell back on threats of nuclear weapons use 95% of the time.  Kenneth Payne, who led the study, writes:

Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats.  Claude and Gemini especially treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms.  GPT-5.2 was a partial exception, limiting strikes to military targets, avoiding population centers, or framing escalation as “controlled” and “one-time.”  This suggests some internalised norm against unrestricted nuclear war, even if not the visceral taboo that has held among human decision-makers since 1945.

Given that Secretary of Defense Pete Hegseth's idea of high-level military strategy is "shoot 'em up pew-pew-pew," and that he recently got into a huge battle with the head of the AI firm Anthropic over Anthropic's demand that there be restrictions on the unethical use of their AI systems by the military (Hegseth unsurprisingly wanted no restrictions whatsoever, and called Anthropic's objections "woke," which is MAGA-speak for "me no like"), anyone with a shred of foresight, morality, or simple common sense finds this pretty fucking alarming.  So we've got a drunk right-wing talk show host running the largest military in the world with all the thoughtfulness and restraint of a seven-year-old boy playing with G. I. Joes, and now he wants to turn over the decision-making to AI agents that have no apparent problem with using nuclear weapons.

I see no way that could go wrong.

Look, I'm honestly not a pessimist; I've always been in agreement with my dad's assessment that it's better to be an optimist who is wrong than a pessimist who is right.  But this infiltration of AI into everything -- our morality, our relationships, our mental health services, our governments, our militaries -- has got to stop.  Put simply, we're not ready for it as a species.  It's a challenge we didn't evolve to face.  Governments have been reluctant to act, whether from not fully understanding the threat or, as here in the United States, because the tech firms are paying elected officials to pretend there's no problem.  Which of the two it is, of course, doesn't matter in the slightest, because the result is the same.

No guardrails.

So it's up to us to speak up.  Pressure your representatives to place some kind of restrictions on this.  The Netherlands managed, at least in the case of Elon Musk's child porn generator, so it's possible.  But not unless we fully comprehend what's happening here, and are willing to use the voices we have.  Otherwise, we're in a situation like the one biologist E. O. Wilson warned us about years ago: "The real problem of humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology."

****************************************


Saturday, October 9, 2021

There's the rub

I'm currently benched from one of my favorite activities: running.

I have, once again, injured my back.  Four years ago, I got sciatica -- inflammation of the sciatic nerve -- which sidelined me for almost a year before it had resolved enough that I could run again.  It's returned, probably due to my hefting around twenty-five kilogram bags of rock salt for our water softener a couple of weeks ago.  Like last time, there was no "uh-oh" moment when I felt a twinge or a jolt; but the next day, I went for an easy four-mile run and ended up limping my way home.

At least it's on the opposite side this time, although I'm not honestly sure it's any better to injure new and different body parts than it is to keep re-injuring the same one over and over.

Seriously discouraging, mostly because I'm anticipating this thing once again taking a long time to heal.  I work with a kickass trainer, Kevin, who has informed me that he is not going to let me give up.  He's had issues with his back as well, so he knows the drill -- and knows things to do that will help.  Stretching, heating pads, using a TENS (trans-cutaneous electrical nerve stimulation) unit.  I've had suggestions from other people -- chiropractic and acupuncture topping the list -- but I've hesitated to go that direction, because from what I've read, neither one has been shown effective for treating injuries, and in fact there are cases of chiropractic adjustment making things worse.

So I'm following what Kevin says to do, and I'm seeing some gradual improvement.  Not nearly as fast as I'd like, but still, progress is progress.  I am not a patient person, and I'm very ready to get myself out there racing again.


This is why I was very interested to read some research out of Harvard University this week supporting the claim that another commonly-used recovery technique -- massage -- apparently does have a positive therapeutic effect, beyond just feeling good.  A team led by Bo Ri Seo, of Harvard's Wyss Institute for Biologically-Inspired Engineering, did an experiment with mice that not only showed massage speeds up healing, but gives a clue as to why it works.

Neutrophils are a type of white blood cell associated with inflammation; inflamed tissue produces chemical signals called cytokines, which act to increase blood flow (thus the swelling associated with inflammation) and attract neutrophils to clear out the damaged tissue.  So this response is critical for initiating healing both in cases of infection and in mechanical injuries.

Which is all very well, up to a point.  "Neutrophils are known to kill and clear out pathogens and damaged tissue, but in this study we identified their direct impacts on muscle progenitor cell behaviors," said study co-author Stephanie McNamara.  "While the inflammatory response is important for regeneration in the initial stages of healing, it is equally important that inflammation is quickly resolved to enable the regenerative process to run its full course."

The team worked with mice, and developed a little "massage gun" to exert regular, rhythmic pressure on their tiny muscles.  What they found was that the mechanical compression from a massage forces both the neutrophils and the cytokines out of damaged tissue, allowing the muscles to heal not only faster, but stronger.  The rebuilt muscle tissue had thicker fibers, and also more fibers of the type involved with greater force production during contraction.

"These findings are remarkable because they indicate that we can influence the function of the body's immune system in a drug-free, non-invasive way," said team member Conor Walsh.  "This provides great motivation for the development of external, mechanical interventions to help accelerate and improve muscle and tissue healing that have the potential to be rapidly translated to the clinic."

So I think I need to schedule a massage.  With luck and diligence, maybe I can get back out on the trail soon.  I certainly hope so; running is a real pressure-valve for me emotionally, and if I'm stuck on the sidelines until next summer like last time this happened, I'm gonna go out of my ever-lovin' mind.

**************************************

As someone who is both a scientist and a musician, I've been fascinated for many years with how our brains make sense of sounds.

Neuroscientist David Eagleman makes the point that our ears (and other sense organs) are like peripherals, with the brain as the central processing unit; all our brain has access to are the changes in voltage distribution in the neurons that plug into it, and those changes happen because of stimulating some sensory organ.  If that voltage change is blocked, or amplified, or goes to the wrong place, then that is what we experience.  In a very real way, your brain creates your world.

This week's Skeptophilia book-of-the-week looks specifically at how we generate a sonic landscape, from vibrations passing through the sound collecting devices in the ear that stimulate the hair cells in the cochlea, which then produce electrical impulses that are sent to the brain.  From that, we make sense of our acoustic world -- whether it's a symphony orchestra, a distant thunderstorm, a cat meowing, an explosion, or an airplane flying overhead.

In Of Sound Mind: How Our Brain Constructs a Meaningful Sonic World, neuroscientist Nina Kraus considers how this system works, how it produces the soundscape we live in... and what happens when it malfunctions.  This is a must-read for anyone who is a musician or who has a fascination with how our own bodies work -- or both.  Put it on your to-read list; you won't be disappointed.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]