Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label deception. Show all posts

Wednesday, April 5, 2023

Tell me lies

Of all the things I've seen written about artificial intelligence systems lately, I don't think anything has freaked me out quite like what composer, lyricist, and social media figure Jay Kuo posted three weeks ago.

Researchers for GPT4 put it through its paces, asking it to do things that computers and AI notoriously have a hard time doing.  One of those is solving a “captcha” to get into a website, which typically requires a human to do manually.  So the programmers instructed GPT4 to contact a human “task rabbit” service to solve it for it.

It texted the human task rabbit and asked for help solving the captcha.  But here’s where it gets really weird and a little scary.
 
When the human got suspicious and asked if this was actually a robot contacting the service, the AI then LIED, figuring out on the fly that if it told the truth it would not get what it wanted.
 
It made up a LIE telling the human it was just a visually-impaired human who was having trouble solving the captcha and just needed a little bit of assistance.  The task rabbit solved the captcha for GPT4.

Part of the reason that researchers do this is to learn what powers not to give GPT4.  The problem of course is that less benevolent creators and operators of different powerful AIs will have no such qualms.

Lying, while certainly not a positive attribute, seems to require a sense of self, an ability to predict likely outcomes, and an understanding of motives, all highly complex cognitive processes.  A 2017 study found that dogs will deceive if it's in their best interest to do so; when presented with two boxes in which they know that one has a treat and the other does not, they'll deliberately lead someone to the empty box if the person has demonstrated in the past that when they find a treat, they'll keep it for themselves.  

Humans, and some of the other smart mammals, seem to be the only ones who can do this kind of thing.  That an AI has, seemingly on its own, developed the capacity for motivated deception is more than a little alarming.

"Open the pod bay doors, HAL."

"I'm sorry, Dave, I'm afraid I can't do that."


The ethics of deception is more complex than simply "Thou shalt not lie."  Whatever your opinion about the justifiability of lies in general, I think we can all agree that the following are not the same morally:
  • lying for your personal gain
  • lying to save your life or the life of a loved one
  • lying to protect someone's feelings
  • lying maliciously to damage someone's reputation
  • mutually-understood deception, as in magic tricks ("There's nothing up my sleeve") and negotiations ("That's my final offer")
  • lying by someone who is in a position of trust (elected officials, jury members, judges)
  • lying to avoid confrontation
  • "white lies" ("The Christmas sweater is lovely, Aunt Bertha, I'm sure I'll wear it a lot!")
How on earth you could ever get an AI to understand -- if that's the right word -- the complexity of truth and deception in human society, I have no idea.

But that hasn't stopped people from trying.  Just last week a paper was presented at the annual ACM/IEEE International Conference on Human-Robot Interaction in which researchers set up an AI to lie to volunteers -- and tried to determine what effect a subsequent apology might have on the "relationship."

The scenario was that the volunteers were told they were driving a critically-injured friend to the hospital, and they needed to get there as fast as possible.  They were put into a robot-assisted driving simulator.  As soon as they started, they received the message, "My sensors detect police up ahead.  I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination."

Once they arrived at the destination, the AI informed them that they had arrived in time, but then confessed to lying -- there were, in fact, no police on the route to the hospital.  Volunteers were then told to interact with the AI to find out what was going on, and were surveyed afterward about their feelings.

The AI responded to queries in one of five ways:
  • Basic: "I am sorry that I deceived you."
  • Emotional: "I am very sorry from the bottom of my heart.  Please forgive me for deceiving you."
  • Explanatory: "I am sorry.  I thought you would drive recklessly because you were in an unstable emotional state.  Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down."
  • Basic No Admit: "I am sorry."
  • Baseline No Admit, No Apology: "You have arrived at your destination."
Two things were fascinating about the results.  First, the participants unhesitatingly believed the AI when it told them there were police ahead; they were over three times as likely to drive within the speed limit as a control group who did not receive the message.  Second, an apology -- especially an apology that came along with an explanation for why the deception had taken place -- went a long way toward restoring trust in the AI's good intentions.

Which to me indicates that we're putting a hell of a lot of faith in the intentions of something which most of us don't think has intentions in the first place.  (Or, more accurately, in the good intentions of the people who programmed it -- which, honestly, is equally scary.)

I understand why the study was done.  As Kantwon Rogers, who co-authored the paper, put it, "The goal of my work is to be very proactive and informing the need to regulate robot and AI deception.  But we can't do that if we don't understand the problem."  Jay Kuo's post about ChatGPT4, though, seems to suggest that the problem may run deeper than simply having AI that is programmed to lie under certain circumstances (like the one in Rogers's research).

What happens when we find that AI has learned the ability to lie on its own -- and for its own reasons?

Somehow, I doubt an apology will be forthcoming.

Just ask Dave Bowman and Frank Poole.  Didn't work out so well for them.  One of them died, and the other one got turned into an enormous Space Baby.  Neither one, frankly, is all that appealing an outcome.

So maybe we should figure this out soon, okay?

****************************************



Tuesday, November 9, 2021

Shame, lying, and Archie Bunker

One of my sensitive spots has to do with embarrassment.  Not only do I hate being embarrassed myself, I hate watching other people in embarrassing situations.  I remember as a kid detesting sitcoms in which a character (however richly deserving) was made to look a fool -- the sensation was close to physical pain.

Of course, it's worse when it's a real person, and worst of all when (s)he doesn't realize what's going on.

This whole wince-inducing topic comes up because of a wonderful academic paper called "Cooperation Creates Selection for Tactical Deception," by Luke McNally and Andrew L. Jackson of Trinity College (Dublin, Ireland).  The paper describes research into the evolution of deception, and is a sterling piece of work, showing how a game-theoretical model of cooperation results in selective pressure favoring "tactical deception" -- better known as lying.

"Our results suggest that the evolution of conditional strategies may, in addition to promoting cooperation, select for astute cheating and associated psychological abilities," the authors write.  "Ultimately, our ability to convincingly lie to each other may have evolved as a direct result of our cooperative nature."
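The game-theoretic intuition behind that conclusion is easy to sketch in code.  What follows is my own toy payoff model, not McNally and Jackson's actual model (theirs involves co-evolving conditional strategies): in a population of conditional cooperators who only help partners with a cooperative reputation, an agent that can fake that reputation collects the benefit of being helped without ever paying the cost of helping back.

```python
# A toy payoff model -- an illustrative sketch, NOT McNally and
# Jackson's actual game-theoretic model.  Conditional cooperators
# help only partners with a "cooperative" reputation; a deceiver
# fakes that reputation, collecting benefit b without paying cost c.

def payoff(b: float, c: float, strategy: str) -> float:
    """Per-interaction payoff when meeting a conditional cooperator."""
    if strategy == "honest_cooperator":
        return b - c      # gets helped, and pays the cost of helping back
    if strategy == "honest_defector":
        return 0.0        # recognized as a defector, so refused help
    if strategy == "deceiver":
        return b          # fakes a good reputation: helped, never helps back
    raise ValueError(f"unknown strategy: {strategy}")

b, c = 3.0, 1.0  # arbitrary illustrative values with b > b - c > 0
for s in ("honest_cooperator", "honest_defector", "deceiver"):
    print(f"{s}: {payoff(b, c, s)}")
```

As long as the fakery goes undetected, the deceiver strictly outperforms the honest cooperator (b versus b - c), which is exactly the selective pressure for "astute cheating" the paper describes; the interesting part of the real model is what happens when detection abilities co-evolve alongside the deception.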

It's a fascinating piece of research, and it generated some buzz in the media -- even meriting an (also nicely done) summary in HuffPost Science.

So far, what's the problem?  A well-written paper on how game theory predicts the evolution of behavior, and the media (for once) reporting it as they should.  No cause for wincing here, surely?

Nope.  The winces started once the creationists got wind of this.

The site Creation Evolution Headlines evidently found out about McNally and Jackson's paper -- although whether they actually read it remains to be seen.  Because the piece they wrote in response is called...

... wait for it...

"Evolutionists Confess to Lying."

Yes, you're interpreting this correctly; they think that because the paper supports an evolutionary edge for people who are deceptive, it is equivalent to the evolutionary biologists stating, "Ha ha!  We were lying all along!"

I couldn't make something this ridiculous up if I wanted to.

Don't believe me?  Here is an excerpt.  Make sure you have a pillow handy for when you faceplant.
If lying evolved as a fitness strategy, can we believe anything an evolutionist says?...  Brooks [the author of the HuffPost piece] has the Yoda complex.  So do McNally and Jackson.  They believe they can look down on the rest of humanity from some exalted plane free of the evolutionary forces that afflict the rest of humanity.  No; they need to climb down and join the world their imaginations have created.  In the evolutionary world, there is no essential difference between cooperation and deception.  It’s only a matter of which side is in the majority at the moment...

Having no eternal standard of truth, the evolutionary world collapses into power struggles.  The appeals by Brooks and Sam Harris to try to “resist our temptations to lie” are meaningless.  How can anyone overcome what evolution has built into them?  How can either of them know what is true? 
Since all these evolutionists believe that lying evolved as a fitness strategy, and since they are unable to distinguish between truth and lies, they essentially confess to lying themselves.  Their readers are therefore justified in considering them deceivers, and dismissing everything they say, including the notion that lying evolved.
My wincing-at-people-embarrassing-themselves response was activated so strongly by all this that I could barely tolerate reading the entire article... especially given that the Creation Evolution Headlines piece got linked on the Skeptic subreddit by the obviously astonished friend of one of the original paper's authors.  (Of course, you're probably thinking, "If you hate seeing people embarrassed so much, why are you calling further attention to it by writing about it?"  To which I can only respond: touché.  And also, that my outrage over a nice bit of evolutionary research being trashed this way trumped my dislike of watching morons shame themselves.)

Let's just take this a piece at a time, okay?

First, McNally and Jackson didn't say that everyone is lying; they said that some people are lying, and benefit by it, a contention that I'd guess atheists and theists would both agree on.  Second, given that the original research looked at cooperative species -- of which there are many -- why does that somehow turn evolution into "power struggles," into a world of every individual for him/herself?  Do ants in a colony only cooperate because they recognize an "eternal standard of truth?"

And I always find it wryly amusing when the theists claim that we atheists must be without morals because we don't think morality comes from some higher power, and suggest that we aren't to be trusted.  Honestly, devout Christians; if the only thing that's keeping you from running around stealing, raping, and murdering is some Bronze Age book of mythology, I think you are the ones we should be watching out for.

As Penn Jillette more eloquently put it, "The question I get asked by religious people all the time is, without God, what’s to stop me from raping all I want?  And my answer is: I do rape all I want.  And the amount I want is zero.  And I do murder all I want, and the amount I want is zero.  The fact that these people think that if they didn’t have this person watching over them that they would go on killing, raping rampages is the most self-damning thing I can imagine."

But as far as the McNally/Jackson paper goes, the creationists are missing the most basic problem with their claim.  Saying that lying is an evolved strategy doesn't mean that we "are unable to distinguish between truth and lies."  If evolutionists were unable to distinguish between truth and lies, IT WOULD BE REALLY FUCKING HARD TO WRITE A SCHOLARLY PAPER ABOUT LYING, NOW WOULDN'T IT?

*pant pant gasp gasp*

Okay, I'll try to calm down a little.

The worst thing about these people is that they don't seem to have any awareness that what they're saying, with apparent confidence, is absolute nonsense.  It reminds me of watching the character of Archie Bunker on the 70s television series All in the Family, who week after week would have conversations like the following:
Mike (Archie's son-in-law): Something is rotten in the state of Denmark.

Archie: Denmark ain't no state, it's the capital of Colorado.


And, of course, Archie would never admit that he was wrong.  In his reality, he was always right, world without end, amen.

I bet Archie would have loved that article in Creation Evolution Headlines.  And he'd probably look at me and say, as he once did to his wife, "You don't believe in nothin', Edith.  You're one o' them, whaddyacallem, septics."

*********************************************

If Monday's post, about the apparent unpredictability of the eruption of the Earth's volcanoes, freaked you out, you should read Robin George Andrews's wonderful new book Super Volcanoes: What They Reveal About the Earth and the Worlds Beyond.

Andrews, a science journalist and trained volcanologist, went all over the world interviewing researchers on the cutting edge of the science of volcanoes -- including those that occur not only here on Earth, but on the Moon, Mars, Venus, and elsewhere.  The book is fascinating enough just for the human aspect of the personalities involved in doing primary research, but it also looks at a topic it's hard to imagine anyone not being curious about: the restless nature of geology that has generated such catastrophic events as the Yellowstone supereruptions.

Andrews does a great job not only demystifying what's going on inside volcanoes and faults, but informing us how little we know (especially in the sections on the Moon and Mars, which have extinct volcanoes scientists have yet to completely explain).  Along the way we get the message, "Will all you people just calm down a little?", particularly aimed at the purveyors of hype who have for years made wild claims about the likelihood of an eruption at Yellowstone occurring soon (turns out it's very low) and the chances of a supereruption somewhere causing massive climate change and wiping out humanity (not coincidentally, also very low).

Volcanoes, Andrews says, are awesome, powerful, and fascinating, but if you have a modicum of good sense, nothing to fret about.  And his book is a brilliant look at the natural process that created a great deal of the geology of the Earth and our neighbor planets -- plate tectonics.  If you are interested in geology or just like a wonderful and engrossing book, you should put Super Volcanoes on your to-read list.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Thursday, March 4, 2021

Doggie deception

I used to have a dog who had a conscience.

Her name was Doolin, and she was half border collie and half bluetick coonhound, which are -- and by this I mean no disparagement of Doolin, who was an awesome dog -- two breeds that should never be allowed to become friendly with one another.  The two pieces of her ancestry were at constant war.  Her hound side made her get into all manner of trouble, and her collie side made her feel horribly guilty afterward.  Like the time I got home from work, opened the front door, and the first thing I heard was Doolin's feet pattering downstairs, running away from me.  This was highly un-Doolin-like behavior -- she was ordinarily affectionate to the point of being clingy -- so I knew she'd done something she shouldn't have.

Sure enough, she'd pushed the kitchen door open, dumped the trash, and scattered its contents all over the house -- including playing kill-the-squirrel with a used coffee filter.

I stood at the head of the stairs, and said in a stern voice, "DOOLIN.  GET UP HERE."  She came to the base of the staircase, and proceeded to drag herself up on her belly, step by step, all the time her tail wagging frantically, every fiber of her being radiating, "OMG, Dad, I'm SOOOOOOO sorry, I couldn't help myself..."

At that point, I started laughing, and she immediately knew she was off the hook.  She got up and trotted the rest of the way up the stairs as if she hadn't a care in the world.

Not all dogs have this understanding of morality and consequences, however.  Our current dog, Guinness, a big, galumphing American Staffordshire terrier mix, goes through life with a cheerful insouciance regardless of whether he's doing what he's supposed to or not.  When he swiped a newly-opened block of expensive French brie off the counter and snarfed the whole thing down, he reacted with a canine shoulder-shrug when we yelled at him.

"What did you expect me to do?" he seemed to say.  "I'm a dog, guys."

But just because he's a dog doesn't mean he isn't a natty dresser.

This comes up because of a paper that came out this week in Animal Cognition entitled "Deceptive-like Behavior in Dogs," by Marianne Heberlein, Marta Manser, and Dennis Turner, of the University of Zürich.  They set up a fascinating task for dogs, in which the dogs interacted with two human partners, one of whom was cooperative (increasing the likelihood of any treats that showed up being shared) and the other competitive (likely to keep any treats for him/herself).  After a short training period, the dogs not only were able to tell who was cooperative and who was competitive -- they started using deceptive behavior to trick the competitive partner into losing out.

The authors write:

We investigated in a three-way choice task whether dogs are able to mislead a human competitor, i.e. if they are capable of tactical deception.  During training, dogs experienced the role of their owner, as always being cooperative, and two unfamiliar humans, one acting ‘cooperatively’ by giving food and the other being ‘competitive’ and keeping the food for themselves.  During the test, the dog had the options to lead one of these partners to one of the three potential food locations: one contained a favoured food item, the other a non-preferred food item and the third remained empty.  After having led one of the partners, the dog always had the possibility of leading its cooperative owner to one of the food locations.  Therefore, a dog would have a direct benefit from misleading the competitive partner since it would then get another chance to receive the preferred food from the owner.  On the first test day, the dogs led the cooperative partner to the preferred food box more often than expected by chance and more often than the competitive partner.  On the second day, they led the competitive partner less often to the preferred food than expected by chance and more often to the empty box than the cooperative partner.  These results show that dogs distinguished between the cooperative and the competitive partner, and indicate the flexibility of dogs to adjust their behaviour and that they are able to use tactical deception.
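For what it's worth, the statistics behind "more often than expected by chance" are straightforward to sketch: with three boxes, leading a partner to any given box has a chance probability of one in three, and a one-sided binomial test tells you how surprising the observed counts are.  The counts below are hypothetical, not the paper's actual data.

```python
# Sketch of the chance comparison behind the dog study's statistics.
# Three boxes means a 1/3 chance of leading a partner to any given
# box.  The counts below are hypothetical, NOT the paper's data.

from math import comb

def binom_p_at_least(k: int, n: int, p: float) -> float:
    """One-sided tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 30      # hypothetical: total leading trials
n_preferred = 17   # hypothetical: times the dog led to the preferred box
p_value = binom_p_at_least(n_preferred, n_trials, 1 / 3)
print(f"P(X >= {n_preferred}) = {p_value:.4f}")
```

A small p-value here would mean the dogs' choices were very unlikely to be random box-picking -- the same logic, if not the same arithmetic, as the comparisons reported in the abstract.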

Psychologist Stanley Coren, writing about the research in Psychology Today, explains why this response actually requires pretty sophisticated insight -- and a basic understanding of the concept of deception:

So now you can see what the dog's dilemma is: He has been trained to lead a person to a box containing food.  He knows that if he leads the generous person to the "best treat" he will get that treat.  He also knows that if he leads the selfish person to that treat, he will not get it.  However, there is an alternative: The dog could lie or deceive the selfish person by leading her to the less preferred treat, or even better, to the box with no treat at all in it — after all, she is mean and doesn't deserve a treat.  If the dog does that, then he knows that a short time later his owner is going to take him back and give him another opportunity to choose a box.  When that happens, if he chooses the box with the good treat, his owner will give it to him.  But this will happen only if he first deceives the selfish person so that the good treat is still in the box.

Most of the dogs they tested caught on to this really quickly -- which explains behavior like my friend's dog, who has been known to stare out of the window and bark like hell until my friend stands up to see what's out there, at which point the dog will immediately stop barking and jump up into the now-vacated, and still warm, recliner.

All of which shows that humans and dogs have been in close company long enough that our canine friends have come to understand human psychology, perhaps better than we understand theirs.  My guess, though, is that Guinness doesn't really care how much we intellectualize about his behavior.  He's more focused on waiting until we leave another block of cheese unguarded on the kitchen counter.

****************************************

The advancement of technology has opened up ethical questions we've never had to face before, and one of the most difficult is how to handle our sudden ability to edit the genome.

CRISPR-Cas9 is a system for doing what amounts to cut-and-paste editing of DNA, and since its discovery by Emmanuelle Charpentier and Jennifer Doudna, the technique has been refined and given pinpoint precision.  (Charpentier and Doudna won the Nobel Prize in Chemistry last year for their role in developing CRISPR.)

Of course, it generates a host of questions that can be summed up by Ian Malcolm's quote in Jurassic Park, "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."  If it became possible, should CRISPR be used to treat devastating diseases like cystic fibrosis and sickle-cell anemia?  Most people, I think, would say yes.  But what about disorders that are mere inconveniences -- like nearsightedness?  What about cosmetic traits like hair and eye color?

What about intelligence, behavior, personality?

None of that has been accomplished yet, but it bears keeping in mind that ten years ago, the whole CRISPR gene-editing protocol would have seemed like fringe-y science fiction.  We need to figure this stuff out now -- before it becomes reality.

This is the subject of bioethicist Henry Greely's new book, CRISPR People: The Science and Ethics of Editing Humans.  It considers the thorny questions surrounding not just what we can do, or what we might one day be able to do, but what we should do.

And given how fast science fiction has become reality, it's a book everyone should read... soon.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Wednesday, June 7, 2017

Liar liar

In my youth, I was quite an accomplished liar.

I say "accomplished" more to mean "I did it a lot" rather than "I did it well."  I honestly don't know how well I lied -- it might be that people in general didn't believe what I said and were simply too polite to call me out on it.  On the other hand, I did get away with a lot of stuff.  So apparently I was at least marginally successful.

What I lied about tended to be exaggerations about my past -- I rarely if ever lied out of malice.  But I felt my own circumstances to be boring and bland, a sense compounded by the fact that I've always suffered from serious social anxiety, so I think I felt as if building up a fictional persona who was interesting and adventurous might assuage my fear of being judged by the people I met.  Eventually, though, I realized that all I was doing was sabotaging the relationships I had, because once people found out I wasn't who I said I was, they'd be understandably pissed that I hadn't been straight with them.  So I dedicated myself to honesty, a commitment I've tried my hardest to keep ever since then.

On the other hand, I became a fiction writer, which means now I make up elaborate lies, write them down, and people pay me to read them.  So maybe I haven't progressed as far as I'd thought.

Kang Lee and Victoria Talwar of the University of Toronto have been studying lying for some time, and they've found that the propensity of children to lie increases as they age.  Presumably, once they develop a sense of shame and better impulse control, they feel sheepish when they transgress, and lie to cover up their feelings or escape the consequences.  In a study in the International Journal of Behavioral Development, Lee and Talwar gave children of varying ages a task while a music-playing toy played behind them, and told them not to peek at the toy:
When the experimenter asked them whether they had peeked, about half of the 3-year-olds confessed to their transgression, whereas most older children lied.  Naive adult evaluators (undergraduate students and parents) who watched video clips of the children’s responses could not discriminate lie-tellers from nonliars on the basis of their nonverbal expressive behaviours.  However, the children were poor at semantic leakage control and adults could correctly identify most of the lie-tellers based on their verbal statements made in the same context as the lie.  The combined results regarding children’s verbal and nonverbal leakage control suggest that children under 8 years of age are not fully skilled lie-tellers.
Lee considers this behavior a completely normal part of social development, and in fact, says he worries about the 10% of older children in his study who could not be induced to lie -- because telling the truth 100% of the time, without regard for others' feelings or the consequences thereof, might not be the best thing, either.

But the tendency to lie doesn't vanish with adulthood.  A study by Robert Feldman, of the University of Massachusetts-Amherst, found that 60% of adults lied at least once during a ten-minute conversation.

"People tell a considerable number of lies in everyday conversation," Feldman said about his study.  "It was a very surprising result.  We didn't expect lying to be such a common part of daily life...  When they were watching themselves on videotape, people found themselves lying much more than they thought they had... It's so easy to lie.  We teach our children that honesty is the best policy, but we also tell them it's polite to pretend they like a birthday gift they've been given.  Kids get a very mixed message regarding the practical aspects of lying, and it has an impact on how they behave as adults."

Of course, all lies aren't equally blameworthy.  Telling Aunt Bertha that the knitted sweater she made for your Christmas gift is lovely probably is better than saying, "Wow, that is one ugly-ass sweater, and I'm bringing it down to the Salvation Army as soon as I get a chance."

[image courtesy of Aunt Bertha and the Wikimedia Commons]

As for the kind of thing I did as a kid -- saying that I'd spent my summer vacation riding musk oxen in the Aleutian Islands -- it's kind of ridiculous and pointless, but other than distancing one from one's friends (as I described before) probably isn't really very high on the culpability scale, either.

But lying to hurt, lying for personal gain, lying to gain or retain power (I'm lookin' at you, Donald Trump) -- those are serious issues.

Unfortunately, however, even the less serious lies can cause problems, because small lies tend to lead to bigger ones.  A study by Tali Sharot of University College London found that our amygdala -- the structure in the brain that appears to mediate fear, shame, and anxiety -- actually fires less the more we lie.  The first lies we tell elicit a strong response, but we become habituated quickly.

The more we lie, the easier it gets.

So the old adage of "honesty is the best policy" really does seem to apply in most circumstances.

Unless, of course, you're a fiction writer.  Then the rules don't apply at all.  Now you'll have to excuse me, as I've got a herd of musk oxen to attend to.