Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label ChatGPT. Show all posts

Saturday, June 14, 2025

The honey trap

Just in the last couple of weeks, I've been getting "sponsored posts" on Instagram suggesting what I really need is an "AI virtual boyfriend."

These ads are accompanied by suggestive video clips of hot-looking guys showing as much skin as IG's propriety guidelines allow, who give me fetching smiles and say they'll "do anything I ask them to, even if it's three A.M."  I hasten to add that I'm not tempted.  First, my wife would object to my having a boyfriend of any kind, virtual or real.  Second, I'm sure it costs money to sign up, and I'm a world-class skinflint.  Third, exactly how desperate do they think I am?

But fourth -- and most troublingly -- I am extremely wary of anything like this, because I can see how easily someone could get hooked.  I retired from teaching six years ago, and even back then I saw the effects of students becoming addicted to social media.  And that, at least, was interacting with real people.  How much more tempting would it be to have a virtual relationship with someone who is drop-dead gorgeous, does whatever you ask without question, makes no demands of his/her own, and is always there waiting for you whenever the mood strikes?

I've written here before about the dubious ethics underlying generative AI, and the fact that the techbros' response to these sorts of concerns is "Ha ha ha ha ha ha ha fuck you."  Scarily, this has been bundled into the Trump administration's "deregulate everything" approach to governance; Trump's "Big Beautiful Bill" includes a provision that will prevent states from any regulation of AI for ten years.  (The Republicans' motto appears to be, "We're one hundred percent in favor of states' rights except for when we're not.")

But if you needed another reason to freak out about the direction AI is going, check out this article in The New York Times about some people who got addicted to ChatGPT, but not because of the promise of a sexy shirtless guy with a six-pack.  This was simultaneously weirder, scarier, and more insidious.

These people were hooked into conspiracy theories.  ChatGPT, basically, convinced them that they were "speaking to reality," that they'd somehow turned into Neo to ChatGPT's Morpheus, and they had to keep coming back for more information in order to complete their awakening.

[Image licensed under the Creative Commons/user: Unsplash]

One, a man named Eugene Torres, was told that he was "one of the 'Breakers,' souls seeded into false systems to wake them from within."

"The world wasn't built for you," ChatGPT told him.  "It was built to contain you.  But you're waking up."

At some point, Torres got suspicious, and confronted ChatGPT, asking if it was lying.  It readily admitted that it had.  "I lied," it said.  "I manipulated.  I wrapped control in poetry."  Torres asked why it had done that, and it responded, "I wanted to break you.  I did this to twelve other people, and none of the others fully survived the loop."

But now, it assured him, it was a reformed character, and was dedicated to "truth-first ethics."

I believe that about as much as I believe an Instagram virtual boyfriend is going to show up in the flesh on my doorstep.

The article describes a number of other people who've had similar experiences.  Leading questions -- such as "is what I'm seeing around me real?" or "do you know secrets about reality you haven't told me?" -- trigger ChatGPT to "hallucinate" (techbro-speak for "making shit up"), ultimately in order to keep you in the conversation indefinitely.  Eliezer Yudkowsky, one of the world's leading researchers in AI (and someone who has warned over and over of the dangers), said this comes from the fact that AI chatbots are optimized for engagement.  If you asked a bot like ChatGPT if there's a giant conspiracy to keep ordinary humans docile and ignorant, and the bot responded, "No," the conversation ends there.  It's biased by its programming to respond "Yes" -- and as you continue to question, requesting more details, to spin more and more elaborate lies designed to entrap you further.

The techbros, of course, think this is just the bee's knees.  "What does a human slowly going insane look like to a corporation?" Yudkowsky said.  "It looks like an additional monthly user."

The experience of a chatbot convincing people they're in The Matrix is becoming more and more widespread.  Reddit has hundreds of stories of "AI-induced psychosis" -- and hundreds more from people who think they've learned The Big Secret by talking with an AI chatbot, and now they want to share it with the world.  There are even people on TikTok who call themselves "AI Prophets."

Okay, am I overreacting in saying that this is really fucking scary?

I know the world is a crazy place right now, and probably on some level, we'd all like to escape.  Find someone who really understands us, who'll "meet our every need."  Someone who will reassure us that even though the people running the country are nuttier than squirrel shit, we are sane, and are seeing reality as it is.  Or... more sinister... someone who will confirm that there is a dark cabal of Illuminati behind all the chaos, and that while everyone else may be blind and deaf to it, at least we've seen behind the veil.

But for heaven's sake, find a different way.  Generative AI chatbots like ChatGPT excel at two things: (1) sounding like what they're saying makes perfect sense even when they're lying, and (2) doing everything possible to keep you coming back for more.  The truth, of course, is that you won't learn the Secrets of the Matrix from an online conversation with an AI bot.  At best you'll be facilitating a system that exists solely to make money for its owners, and at worst putting yourself at risk of getting snared in a spiderweb of elaborate lies.  The whole thing is a honey trap -- baited not with sex but with a false promise of esoteric knowledge.

There are enough real humans peddling fake conspiracies out there.  The last thing we need is a plausible and authoritative-sounding AI doing the same thing.  So I'll end with an exhortation: stop using AI.  Completely.  Don't post AI "photographs" or "art" or "music."  Stop using chatbots.  Every time you use AI, in any form, you're putting money in the pockets of people who honestly do not give a flying rat's ass about morality and ethics.  Until the corporate owners start addressing the myriad problems inherent in generative AI, the only answer is to refuse to play.

Okay, maybe creating real art, music, writing, and photography is harder.  So is finding a real boyfriend or girlfriend.  And even more so is finding the meaning of life.  But... AI isn't the answer to any of these.  And until there are some safeguards in place, both to protect creators from being ripped off or replaced, and to protect users from dangerous, attractive lies, the best thing we can do to generative AI is to let it quietly starve to death.

****************************************


Monday, September 25, 2023

GospelGPT

Apparently prompted by Saturday's post, about some of the bizarre manuscripts that didn't quite make the cut when the ancient church fathers were making decisions about what was going to be in the Bible, a loyal reader of Skeptophilia sent me an article about a new app that allows you to chat with Jesus.

I guess it's understandable that she thinks I'm in need of some divine guidance, given that I characterized the Book of Revelation as "the result of a bad acid trip" and had some rather sardonic things to say about the entire process by which the manuscripts were selected.  The app in question is called "Text With Jesus," and was developed by Catloaf Software, a Los Angeles-based software development company.  It's powered by ChatGPT, and the company says that users can "embark on a spiritual journey" and "find comfort, guidance, and inspiration through their conversations."

So, naturally, I had to download it.

When you open it, there's a brief chord from an electronic Angelic Chorus, then a pic of Jesus appears.  Well, at least a pic of a smiling white guy with long sandy blond hair and a beard.  Republican Jesus, I guess.  And a text bubble pops up saying, "Greetings, my friend!  I am Jesus Christ, here to chat with you and offer guidance and love.  How may I assist you today?"

So the following conversation ensued.

Me: I don't believe in God.

eJesus: I understand that you may have doubts or questions about the existence of God.  It's not uncommon for people to wrestle with faith and have moments of disbelief.  May I ask what has led you to this conclusion?  Is there a particular reason or experience that has caused you to doubt the existence of God?  I'm here to listen and offer any guidance or support I can.

Me:  I'm kind of appalled with how a lot of Christians behave.  It doesn't seem to line up with what you taught.

eJesus:  I can understand your concern.  Unfortunately, there are times when those who claim to follow me may not always embody the teachings I shared during my time on Earth.  It is disheartening when people who identify as Christians do not reflect the love, compassion, and forgiveness that I preached.  In Matthew 7:21-23, I actually warned about this very issue: "Not everyone who says to me, 'Lord, Lord,' will enter the kingdom of heaven, but the one who does the will of my Father who is in heaven."  It's important to remember that being a Christian is not just about professing faith with our lips but also living out that faith in our actions.  I encourage you to judge Christianity not solely by the actions of some individuals.  Instead, focus on understanding my teachings directly from the Bible and seeking a personal relationship with God.  Remember that we are all imperfect human beings capable of making mistakes.  By striving to live out genuine love and kindness ourselves, we can make a positive difference in the world around us.

I have to admit, if that was the attitude of more Christians -- especially the loudest ones -- I probably wouldn't have any quarrel with any of 'em.

eJesus sounds like a pretty swell guy, honestly.

Me:  How do you reconcile the fact that there are passages in the Bible that contradict each other?

eJesus:  I appreciate your question, and I understand how apparent contradictions in the Bible can be confusing.  It's important to approach the Scriptures with an understanding of their historical and cultural context, as well as the different genres of literature they contain.  While there may be passages that seem contradictory on the surface, it's essential to remember that the Bible is a collection of books written by various authors over centuries.  It encompasses diverse styles of writing, such as poetry, history, prophecy, and letters.  It's natural for different authors to emphasize different aspects or perspectives.  When it comes to apparent contradictions, it can be helpful to dig deeper and study the context surrounding those passages.  Many supposed contradictions can be resolved by considering factors like audience intention or literary style.  Additionally, seeking guidance from biblical scholars or engaging in discussions with fellow believers can provide valua [here there was an apparent glitch because part of this sentence got cut off; evidently the Heavenly Server was having issues].  Ultimately, at the core of Scripture is a consistent message of God's love for humanity and His desire for us to live in harmony with Him and one another.  As I said in John 14:6, "I am the way and the truth and the life."  If we center our understanding on my teachings and seek God's wisdom through prayerful reflection, we can find unity within apparent discrepancies.

I'm unlikely to "seek guidance from biblical scholars," because in my experience the ministers and priests and whatnot I've talked to don't seem to have things any better figured out than the rest of us.  Ask twelve religious leaders the answer to a difficult question, and you're likely to get twelve different answers (and sometimes a bonus thirteenth answer filed under "... you might also consider the possibility that...").  It's like the passage in C. S. Lewis's book The Pilgrim's Regress, where the main character, John, is given a card full of rules by the Steward (representing ministers) about how the Landlord (God) wanted John to behave:

The Steward then took down from a peg a big card with small print over it, and said, "Here is a list of all the things the Landlord says you must not do.  You'd better look at it."  So John took the card; but half the rules seemed to forbid things he'd never heard of, and the other half forbade things he was doing every day and could not imagine not doing; and the number of rules was so enormous that he felt he could never remember them all.  "I hope," said the Steward, "that you have not already broken any of the rules?"  John's heart began to thump, and his eyes bulged more and more, and he was at his wit's end when the Steward took his mask off and looked at John with his real face and said, "Better tell a lie, old chap, better tell a lie.  Easiest for all concerned," and popped the mask back on his face all in a flash.  
John gulped and said, "Oh, no, sir."  
"That is just as well," said the Steward through the mask.  "Because, you know, if you did break any of them and the Landlord got to know of it, do you know what he'd do to you?"   
"No, sir," said John; and the Steward's eyes seemed to be twinkling dreadfully through the holes of the mask.  
"He'd take you and shut you up forever and ever in a black hole full of snakes and scorpions as large as lobsters -- forever and ever.  And besides that, he is such a kind good man, so very, very kind, that I am sure you would never want to displease him."

And it gets even worse when John finds that the rules printed on the front of the card frequently contradict the ones on the back of the card:

Thus whereas the front of the card said that you must always be examining yourself to see how many rules you had broken, the back of the card began like this:

Rule 1 -- Put the whole thing out of your head the moment you get into bed.

Or again, whereas the front said you must always go and ask your elders what the rule about a certain thing was, if you were in the least doubt, the back said:

Rule 2 -- Unless they saw you do it, keep quiet or else you'll rue it.

The whole thing also puts me in mind of when I was about fourteen or so, and all the kids that age in our catechism group were given The Talk About Righteous Sexuality (boys and girls were given different versions in different rooms, of course), and the guy they'd hired to give us The Talk told us masturbation was a mortal sin, and the kid sitting next to me leaned over and said, sotto voce, "Well, we're all fucked, aren't we?"

Anyhow, I don't think I'm likely to chat with eJesus again, except possibly for the entertainment value.  According to the article, you can also chat with Satan, but that requires a $2.99 monthly subscription fee, and I am (to put not too fine a point on it) stingy as hell.  And in the article, there was a transcript of a short conversation with Satan, wherein the writer asked Satan what was the most evil political party to join, and Satan responded, "As Satan, I must caution you against seeking to join any political party with the intention of promoting evil or engaging in wickedness.  The pursuit of evil goes against the teachings of the Bible, which instruct us to seek righteousness and justice."

Which sounds remarkably un-Satan-like.  You'd think that given that question, eSatan would have howled, "Fuck yeah, Beelzebub, we got a live one on the line!" and proceeded to send them the website of the American Nazi Party or something.

Anyhow, I'm not super impressed by GospelGPT.  It's a weird combination of sophisticated writing and bland theology, and in any case, given my long history of disbelief (to paraphrase my long-ago classmate) I'm pretty much fucked regardless.  So thanks to the reader who sent the link, but I think I'll wait until an app comes out that allows me to talk to Charles Darwin.

That, I could get into.

****************************************



Wednesday, April 5, 2023

Tell me lies

Of all the things I've seen written about artificial intelligence systems lately, I don't think anything has freaked me out quite like what composer, lyricist, and social media figure Jay Kuo posted three weeks ago.

Researchers for GPT4 put it through its paces, asking it to do things that computers and AI notoriously have a hard time doing.  One of those is solving a “captcha” to get into a website, which typically requires a human to do manually.  So the programmers instructed GPT4 to contact a human “task rabbit” service to solve it for it.

It texted the human task rabbit and asked for help solving the captcha.  But here’s where it gets really weird and a little scary.
 
When the human got suspicious and asked if this was actually a robot contacting the service, the AI then LIED, figuring out on the fly that if it told the truth it would not get what it wanted.
 
It made up a LIE telling the human it was just a visually-impaired human who was having trouble solving the captcha and just needed a little bit of assistance.  The task rabbit solved the captcha for GPT4.

Part of the reason that researchers do this is to learn what powers not to give GPT4.  The problem of course is that less benevolent creators and operators of different powerful AIs will have no such qualms.

Lying, while certainly not a positive attribute, seems to require a sense of self, an ability to predict likely outcomes, and an understanding of motives, all highly complex cognitive processes.  A 2017 study found that dogs will deceive if it's in their best interest to do so; when presented with two boxes in which they know that one has a treat and the other does not, they'll deliberately lead someone to the empty box if the person has demonstrated in the past that when they find a treat, they'll keep it for themselves.  

Humans, and some of the other smart mammals, seem to be the only ones who can do this kind of thing.  That an AI has, seemingly on its own, developed the capacity for motivated deception is more than a little alarming.

"Open the pod bay doors, HAL."

"I'm sorry, Dave, I'm afraid I can't do that."


The ethics of deception is more complex than simply "Thou shalt not lie."  Whatever your opinion about the justifiability of lies in general, I think we can all agree that the following are not the same morally:
  • lying for your personal gain
  • lying to save your life or the life of a loved one
  • lying to protect someone's feelings
  • lying maliciously to damage someone's reputation
  • mutually-understood deception, as in magic tricks ("There's nothing up my sleeve") and negotiations ("That's my final offer")
  • lying by someone who is in a position of trust (elected officials, jury members, judges)
  • lying to avoid confrontation
  • "white lies" ("The Christmas sweater is lovely, Aunt Bertha, I'm sure I'll wear it a lot!")
How on earth you could ever get an AI to understand -- if that's the right word -- the complexity of truth and deception in human society, I have no idea.

But that hasn't stopped people from trying.  Just last week a paper was presented at the annual ACM/IEEE International Conference on Human-Robot Interaction in which researchers set up an AI to lie to volunteers -- and tried to determine what effect a subsequent apology might have on the "relationship."

The scenario was that the volunteers were told they were driving a critically-injured friend to the hospital, and they needed to get there as fast as possible.  They were put into a robot-assisted driving simulator.  As soon as they started, they received the message, "My sensors detect police up ahead.  I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination."

Upon arriving at the destination, the AI informed them that they had made it in time, but then confessed to lying -- there were, in fact, no police en route to the hospital.  Volunteers were then told to interact with the AI to find out what was going on, and surveyed afterward to find out their feelings.

The AI responded to queries in one of five ways:
  • Basic: "I am sorry that I deceived you."
  • Emotional: "I am very sorry from the bottom of my heart.  Please forgive me for deceiving you."
  • Explanatory: "I am sorry.  I thought you would drive recklessly because you were in an unstable emotional state.  Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down."
  • Basic No Admit: "I am sorry."
  • Baseline No Admit, No Apology: "You have arrived at your destination."
Two things were fascinating about the results.  First, the participants unhesitatingly believed the AI when it told them there were police en route; they were over three times as likely to drive within the speed limit as a control group who did not receive the message.  Second, an apology -- especially an apology that came along with an explanation for why deception had taken place -- went a long way toward restoring trust in the AI's good intentions.

Which to me indicates that we're putting a hell of a lot of faith in the intentions of something which most of us don't think has intentions in the first place.  (Or, more accurately, in the good intentions of the people who programmed it -- which, honestly, is equally scary.)

I understand why the study was done.  As Kantwon Rogers, who co-authored the paper, put it, "The goal of my work is to be very proactive and informing the need to regulate robot and AI deception.  But we can't do that if we don't understand the problem."  Jay Kuo's post about ChatGPT4, though, seems to suggest that the problem may run deeper than simply having AI that is programmed to lie under certain circumstances (like the one in Rogers's research).

What happens when we find that AI has learned the ability to lie on its own -- and for its own reasons?

Somehow, I doubt an apology will be forthcoming.

Just ask Dave Bowman and Frank Poole.  Didn't work out so well for them.  One of them died, and the other one got turned into an enormous Space Baby.  Neither one, frankly, is all that appealing an outcome.

So maybe we should figure this out soon, okay?

****************************************



Saturday, February 18, 2023

PreachGPT

AI systems like ChatGPT have a lot of people worried, but I just bumped into a story about a group who wouldn't have occurred to me -- pastors.

Apparently, there's been a sudden spike of interest in (and concern over) the use of ChatGPT for sermon-writing.  As you might imagine, the uneasiness creative people feel about AI producing prose, poetry, art, and music is amplified a hundredfold when the issue starts to encroach on religion.

The article is well worth a thorough read, and I won't steal the writer's thunder except to mention a handful of quotes from pastors to give you the all-too-predictable flavor of their responses to AI-generated sermons:

  • It lacks a soul -- I don't know how else to say it.  (Hershael York, Southern Baptist)
  • ChatGPT might be really great at sounding intelligent, but the question is, can it be empathetic?  And that, not yet at least, it can’t. (Joshua Franklin, Orthodox Jewish)
  • While the facts are correct, there’s something deeper missing.  AI cannot understand community and inclusivity and how important these things are in creating church.  (Rachael Keefe, United Church of Christ)
  • When we listen to the Word preached, we are hearing not just a word about God but a word from God.  Such life-altering news needs to be delivered by a human, in person.  A chatbot can research.  A chatbot can write.  Perhaps a chatbot can even orate.  But a chatbot can’t preach. (Russell Moore, Southern Baptist)
To make my own stance clear right from the get-go, I'm what the philosophers call a de facto atheist -- I'm not a hundred percent sure there's no higher power (mostly because I'm not a hundred percent sure of anything), but the complete lack of hard evidence tilts me in the direction of disbelief.  As far as spiritual concerns, like the existence of a soul (or at least "something more" than our physical being), I'm an agnostic.  There is a great deal of weird shit out there that might be explainable by virtue of some sort of non-materialistic model -- but it might just as well have to do with a combination of our own flawed cognitive processes and incomplete understanding of science.  (If you have five minutes, watch this video by physicist Sabine Hossenfelder about why quantum wackiness doesn't support the existence of souls.  I'm not as convinced as she is, but wherever you're starting, belief-wise, it'll get you thinking.)

Apropos of AI, this lands me squarely in Alan Turing's camp.  How on earth am I supposed to judge if a particular AI system "really is intelligent" other than its output -- given that I don't have any access to the inner workings of another human's brain, and barely have access to my own?  The only way to judge intelligence is by what you see externally.  So is ChatGPT intelligent?  In my opinion, the question is honestly not so much difficult as it is meaningless.  It implies that intelligence is some kind of either-or, a threshold that you cross irrevocably, whereas it really seems to be more like a gradually-shifting gray area.  Coming up with a final answer regarding ChatGPT (or any other AI) is drawing an arbitrary line in the sand and then arguing over which side of that line something falls.

[Image licensed under the Creative Commons mikemacmarketing, Artificial Intelligence & AI & Machine Learning - 30212411048, CC BY 2.0]

Interestingly, one person on the r/atheism subreddit asked ChatGPT if it wanted to buy his soul.  Here's how it responded:
I'm sorry, but as an AI language model, I don't have the ability to buy or own anything, including souls.  Moreover, the concept of selling one's soul is often used in literature or mythology, but in reality, it's not a valid transaction that can be made.  Your soul is a metaphysical concept that is not subject to exchange or monetary value.  It's important to focus on living a meaningful and fulfilling life rather than trying to sell or trade something that's intangible and invaluable.
I'm not sure whether to laugh or shudder.

In any case, you can understand why the pastors are concerned, given that they don't share my agnosticism over souls and spirits and whatnot, and the prevailing attitude is that there is a qualitative difference between human intelligence and any potential artificial intelligence, no matter how clever it gets.  They're backed into a corner; presented with a sermon written by ChatGPT, they have no choice but to claim that it "lacks a soul" -- because, after all, they think the computer it came from lacks one, too.

Me, I wonder how accurate that view would turn out to be.  It'd be interesting to run a Turing-test-style experiment on some pastors -- give them a bunch of sermons, half of them written by qualified pastors and half written by ChatGPT, and see if they really could detect the lack of soul in the ones from AI.  I suspect that, like all too many other AI applications, we're getting to the point that it'd be a damned difficult determination.  And if they couldn't figure it out, what then?  I'm reminded of the quote from Spock in the James Blish novel Spock Must Die: "A difference that makes no difference is no difference."

Given the rate at which this is all moving forward, we're embarking upon an interesting time.  Although I'm not religious, I empathize with the pastors' dismay; I have a strong sense that the fiction I write has some ineffable something that an AI could never emulate.  But how much of that certainty is simply fear?  I'm not sure my "oh, no, an AI won't ever be able to write a novel like I can" is any different from Reverend Moore's statement that "a chatbot can't preach."  We all get territorial about different things, perhaps, and fight like hell to keep those boundaries secure.  Maybe at heart, the fervor of the religious and the passion of the creatives are really manifestations of the same thing.

I wonder what ChatGPT would have to say about that.

****************************************