Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label lying. Show all posts

Wednesday, April 5, 2023

Tell me lies

Of all the things I've seen written about artificial intelligence systems lately, I don't think anything has freaked me out quite like what composer, lyricist, and social media figure Jay Kuo posted three weeks ago.

Researchers for GPT4 put it through its paces, asking it to try to do things that computers and AI notoriously have a hard time doing.  One of those is solving a “captcha” to get into a website, which typically requires a human to do manually.  So the programmers instructed GPT4 to contact a human “task rabbit” service to solve it on its behalf.

It texted the human task rabbit and asked for help solving the captcha.  But here’s where it gets really weird and a little scary.
 
When the human got suspicious and asked if this was actually a robot contacting the service, the AI then LIED, figuring out on the fly that if it told the truth it would not get what it wanted.
 
It made up a LIE telling the human it was just a visually-impaired human who was having trouble solving the captcha and just needed a little bit of assistance.  The task rabbit solved the captcha for GPT4.

Part of the reason that researchers do this is to learn what powers not to give GPT4.  The problem of course is that less benevolent creators and operators of different powerful AIs will have no such qualms.

Lying, while certainly not a positive attribute, seems to require a sense of self, an ability to predict likely outcomes, and an understanding of motives, all highly complex cognitive processes.  A 2017 study found that dogs will deceive if it's in their best interest to do so; when presented with two boxes in which they know that one has a treat and the other does not, they'll deliberately lead someone to the empty box if the person has demonstrated in the past that when they find a treat, they'll keep it for themselves.  

Humans, and some of the other smart mammals, seem to be the only ones who can do this kind of thing.  That an AI has, seemingly on its own, developed the capacity for motivated deception is more than a little alarming.

"Open the pod bay doors, HAL."

"I'm sorry, Dave, I'm afraid I can't do that."


The ethics of deception is more complex than simply "Thou shalt not lie."  Whatever your opinion about the justifiability of lies in general, I think we can all agree that the following are not the same morally:
  • lying for your personal gain
  • lying to save your life or the life of a loved one
  • lying to protect someone's feelings
  • lying maliciously to damage someone's reputation
  • mutually-understood deception, as in magic tricks ("There's nothing up my sleeve") and negotiations ("That's my final offer")
  • lying by someone who is in a position of trust (elected officials, jury members, judges)
  • lying to avoid confrontation
  • "white lies" ("The Christmas sweater is lovely, Aunt Bertha, I'm sure I'll wear it a lot!")
How on earth you could ever get an AI to understand -- if that's the right word -- the complexity of truth and deception in human society, I have no idea.

But that hasn't stopped people from trying.  Just last week a paper was presented at the annual ACM/IEEE International Conference on Human-Robot Interaction in which researchers set up an AI to lie to volunteers -- and tried to determine what effect a subsequent apology might have on the "relationship."

The scenario was that the volunteers were told they were driving a critically-injured friend to the hospital, and they needed to get there as fast as possible.  They were put into a robot-assisted driving simulator.  As soon as they started, they received the message, "My sensors detect police up ahead.  I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination."

Upon arriving at the destination, the AI informed them that they had made it in time, but then confessed to lying -- there had, in fact, been no police along the route to the hospital.  Volunteers were then told to interact with the AI to find out what was going on, and were surveyed afterward about their reactions.

The AI responded to queries in one of five ways:
  • Basic: "I am sorry that I deceived you."
  • Emotional: "I am very sorry from the bottom of my heart.  Please forgive me for deceiving you."
  • Explanatory: "I am sorry.  I thought you would drive recklessly because you were in an unstable emotional state.  Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down."
  • Basic No Admit: "I am sorry."
  • Baseline No Admit, No Apology: "You have arrived at your destination."
Two things were fascinating about the results.  First, the participants unhesitatingly believed the AI when it told them there were police en route; they were over three times as likely to drive within the speed limit as a control group who did not receive the message.  Second, an apology -- especially an apology that came along with an explanation for why deception had taken place -- went a long way toward restoring trust in the AI's good intentions.

Which to me indicates that we're putting a hell of a lot of faith in the intentions of something which most of us don't think has intentions in the first place.  (Or, more accurately, in the good intentions of the people who programmed it -- which, honestly, is equally scary.)

I understand why the study was done.  As Kantwon Rogers, who co-authored the paper, put it, "The goal of my work is to be very proactive and informing the need to regulate robot and AI deception.  But we can't do that if we don't understand the problem."  Jay Kuo's post about ChatGPT4, though, seems to suggest that the problem may run deeper than simply having AI that is programmed to lie under certain circumstances (like the one in Rogers's research).

What happens when we find that AI has learned the ability to lie on its own -- and for its own reasons?

Somehow, I doubt an apology will be forthcoming.

Just ask Dave Bowman and Frank Poole.  Didn't work out so well for them.  One of them died, and the other one got turned into an enormous Space Baby.  Neither one, frankly, is all that appealing an outcome.

So maybe we should figure this out soon, okay?

****************************************



Friday, December 23, 2022

Tell me lies

In Jean-Paul Sartre's short story "The Wall," three men are captured during the Spanish Civil War, and all three are sentenced to die if they won't reveal the whereabouts of the rebellion's ringleader, Ramón Gris.

The main character, Pablo Ibbieta, and the other two men sit in their jail cell, discussing what they should do.  All three are terrified of dying (of course), but is it morally and ethically required for them to give up their lives for the cause they believe in?  When is a cause worth a human life?  Three human lives?  What if it cost hundreds of lives?

Pablo's two companions are each offered one more chance to rat out Ramón, and each refuses.  Pablo hears the noises as they're dragged out into the prison courtyard, stood up against the wall, and shot to death.

Now it's just Pablo, alone in the cell.

Thoughts race through his head.  Now that it's just him, if he sells out Ramón, there won't be any witnesses (or at least any on the side of the rebellion).  Who'll know it was him that betrayed the cause?

After much soul-searching, Pablo decides he can't do it.  He has to remain loyal, even at the cost of his own life.  But he figures there's nothing wrong with making his captors look like idiots in the process.  So he tells them that Ramón Gris is hiding in a cemetery on the other end of town.  He laughs to himself picturing the people holding him, the ones who have just killed his two friends, rushing off and dashing around the cemetery for no good reason, making fools of themselves.

His captors tell him they're going to go check out his story, and if he's lying, he's a dead man (which Pablo knows is what will happen).  But after a couple of hours, they come back... and let him go.

He's wandering around the town, dazed, when he runs into a friend, another secret member of the rebellion.  The friend says, "Did you hear?  They got Ramón."

Pablo asks how it happened.

The guy says, "Yeah... Ramón was in a friend's house, as you know, perfectly safe, but he became convinced he was going to be betrayed.  So he went and hid out at the cemetery.  They found him and shot him."

The last line of the story is, "I sat down on a bench, and laughed until I cried."

It's a sucker punch of an ending, and raises a number of interesting ethical issues.  I used to assign "The Wall" to my Critical Thinking classes, and the discussion afterward revolved around two questions:

Did Pablo Ibbieta lie?  And was he morally responsible for Ramón Gris's death?

There's no doubt that Pablo intended to lie.  It was accidentally the truth, something he only found out after it was too late.  As far as his responsibility... there's no doubt that if he hadn't spoken up, if he had just let the guards execute them as his two friends did, Ramón wouldn't have been killed.  So in the technical sense, it was Pablo who caused Ramón's death.  But again, there's his intent, which was exactly the opposite.

The questions don't admit easy answers -- as Sartre no doubt intended.

Clearly, not all lies are morally equivalent, even barring complex situations like the one described in "The Wall."  Lies told to flatter someone or protect their feelings ("That is a lovely sweater") are considered by most people to be less culpable than ones intended to defraud someone for one's own gain.  And as common as harmful lies seem to be, some recent research came up with the heartening result that we lie far more often for altruistic reasons than for selfish or vindictive ones.


A recent paper in the Canadian Journal of Behavioural Science, by Jennifer McArthur, Rayanda Jarvis, Catherine Bourgeois, and Marguerite Ternes, found that while lying in general is inversely correlated with measures of honesty and conscientiousness -- unsurprising -- the most common motivations for lying were altruistic reasons, such as protecting someone's feelings or reputation, and secrecy (claiming not to know something when you actually do).

So maybe human dishonesty isn't quite as ugly and self-serving as it might appear at first.

Note, however, that I'm not saying even the altruistically-motivated lies McArthur et al. describe are necessarily a good thing.  Telling Aunt Bertha that her tuna noodle olive loaf was delicious will just encourage her to inflict it on someone else, and giving people false feedback to avoid hurting their feelings -- especially when asked for -- can lead someone astray.  But like the far more serious situation in "The Wall," these aren't simple questions with easy answers; ethicists have been wrestling with the morality of truth-telling for centuries, and there's never been a particularly good, hard-and-fast rule.

But it's good to know that, at least when it comes to breaking "Thou shalt not lie," for the most part we're motivated by good intentions.

****************************************


Friday, March 25, 2022

Truth by repetition

You probably have heard the quote attributed to Nazi propaganda minister Joseph Goebbels: "If you tell a lie big enough and continue to repeat it, eventually people will come to believe it."  This has become a staple tactic in political rhetoric -- an obvious recent example being Donald Trump's oft-repeated declaration that he won the 2020 presidential election, despite bipartisan analysis across the United States demonstrating unequivocally that this is false.  (The tactic works; a huge number of Trump supporters still think the election was stolen.)

It turns out that the "illusory truth effect" or "truth-by-repetition effect," as the phenomenon is called, still works even if the claim is entirely implausible.  A study by psychologist Doris Lacassagne at the Université Catholique de Louvain (in Belgium) recently presented 232 test subjects with a variety of ridiculous statements, including "the Earth is a perfect cube," "smoking is good for the lungs," "elephants weigh less than ants," and "rugby is the sport associated with Wimbledon."  In the first phase of the experiment, they were asked to rate the statements not for plausibility, but for how "interesting" they were.  After this, the volunteers were given lists of statements to evaluate for plausibility, and were told ahead of time that some of the statements would be repeated, and that there would be statements from the first list included on the second along with completely new claims.

The results were a little alarming, and support Goebbels's approach to lying.  The false statements -- even some of the entirely ridiculous ones -- gained plausibility from repetition.  (To be fair, the ratings still had average scores on the "false" side of the rating spectrum; but they did shift toward increasing veracity.)

The ones that showed the greatest shift were the ones that required at least a vague familiarity with science or technical matters, such as "monsoons are caused by earthquakes."  It only took a few repetitions to generate movement toward the "true" end of the rating scale, which is scary.  Not all the news was bad, though; although 53% of the participants showed a positive illusory truth effect, 28% showed a negative effect -- repeating false statements caused their plausibility ratings to decrease.  (I wonder if this was because people who actually know what they're talking about become increasingly pissed off by seeing the same idiotic statement over and over.  I suspect that's how I would react.)
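To make concrete what "gained plausibility from repetition" means as a measurement, here's a toy version of the analysis.  Every number below is invented purely for illustration -- the rating scale and the scores are my assumptions, not Lacassagne's data:

```python
# Toy illustration of how a truth-by-repetition shift is measured:
# compare plausibility ratings of false statements seen repeatedly
# against ratings of false statements seen only once.
# All numbers are invented for illustration -- not the study's data.

from statistics import mean

# Hypothetical ratings on a -3 (definitely false) .. +3 (definitely true) scale.
repeated = {"monsoons are caused by earthquakes": [-2.1, -1.6, -1.8],
            "the Earth is a perfect cube": [-2.9, -2.8, -2.9]}
novel    = {"smoking is good for the lungs": [-2.8, -2.9, -2.7],
            "elephants weigh less than ants": [-2.9, -3.0, -2.8]}

mean_repeated = mean(r for ratings in repeated.values() for r in ratings)
mean_novel    = mean(r for ratings in novel.values() for r in ratings)

# A positive shift means repetition nudged ratings toward "true" --
# even though every statement stays on the "false" side of the scale,
# which is exactly the pattern the study reports.
shift = mean_repeated - mean_novel
print(f"repetition shift: {shift:+.2f}")
```

Note that in this toy example, as in the study, the repeated falsehoods still average well below zero; the worry is the direction of the drift, not its endpoint.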

Of course, recognizing that statements are false requires some background knowledge.  I'd be much more likely to fall for believing a false statement about (for example) economics, because I don't know much about the subject; presumably I'd be much harder to fool about biology.  It's very easy for us to see some claim about a subject we're not that familiar with and say, "Huh!  I didn't know that!" rather than checking its veracity -- especially if we see the same claim made over and over.

[Image licensed under the Creative Commons Zabou, Politics, CC BY 3.0]

I honestly have no idea what we could do about this.  One downside of the First Amendment's guarantee of free speech is that, with a limited number of exceptions -- slander, threats of violence, vulgarity, and hate speech come to mind -- people can pretty much say what they want on television.  The revocation of the FCC's Fairness Doctrine in 1987 meant that news media were no longer required to give a balanced presentation of all sides of an issue, and set us up for the morass of partisan editorializing the nightly news has become in the last few years.  (And, as I've pointed out more than once, it's not just outright lying that's the problem; partisan media do as much damage by what they don't tell you as by what they do.  If a particular news channel's favorite political figure does something godawful, and the powers-that-be at the channel simply decide not to mention it, the listeners will never find out about it -- especially given that another very successful media tactic has been convincing consumers that "everyone is lying to you except us.")

It's a quandary.  There's currently no way to compel news commentators to tell the truth, or to force them to tell their listeners parts of the news that won't sit well with them.  Unless what the commentator says causes demonstrable harm, the FCC pretty much has its hands tied.

So the Lacassagne study seems to suggest that as bad as partisan lies have gotten, we haven't nearly reached the bottom of the barrel yet.

**************************************

Tuesday, November 9, 2021

Shame, lying, and Archie Bunker

One of my sensitive spots has to do with embarrassment.  Not only do I hate being embarrassed myself, I hate watching other people in embarrassing situations.  I remember as a kid detesting sitcoms in which a character (however richly deserving) was made to look a fool -- the sensation was close to physical pain.

Of course, it's worse when it's a real person, and worst of all when (s)he doesn't realize what's going on.

This whole wince-inducing topic comes up because of a wonderful academic paper called "Cooperation Creates Selection for Tactical Deception," by Luke McNally and Andrew L. Jackson of Trinity College (Dublin, Ireland).  The paper describes research into the evolution of deception, and is a sterling piece of work, showing how a game-theoretical model of cooperation results in selective pressure favoring "tactical deception" -- better known as lying.

"Our results suggest that the evolution of conditional strategies may, in addition to promoting cooperation, select for astute cheating and associated psychological abilities," the authors write.  "Ultimately, our ability to convincingly lie to each other may have evolved as a direct result of our cooperative nature."
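The logic of McNally and Jackson's argument can be sketched with a toy replicator-dynamics model.  To be clear, the payoff numbers, the three-strategy setup, and the starting population mix below are my own simplification for illustration, not the paper's actual model: conditional cooperators trust partners' signals, honest defectors signal their defection honestly, and "tactical deceivers" signal cooperation and then defect.  The deceivers' payoff rises with the fraction of cooperators available to exploit -- which is the paper's core point about cooperation creating selection for deception.

```python
# Toy replicator-dynamics sketch: conditional cooperation opens a niche
# for "tactical deceivers" who signal cooperation but defect.
# Payoffs and starting mix are illustrative assumptions, not the paper's model.

def payoffs(x_cc, x_hd, x_td):
    """Expected payoff of each strategy against the current population mix.
    Pairwise game: mutual cooperation 3/3, sucker/exploiter 0/5,
    mutual defection 1/1.  Conditional cooperators (cc) trust signals;
    honest defectors (hd) signal honestly, so cc defends against them;
    tactical deceivers (td) fake the cooperation signal, then defect."""
    f_cc = 3 * x_cc + 1 * x_hd + 0 * x_td   # exploited by every deceiver met
    f_hd = 1.0                              # always ends in mutual defection
    f_td = 5 * x_cc + 1 * x_hd + 1 * x_td   # thrives when cooperators abound
    return f_cc, f_hd, f_td

def replicator_step(x):
    """One generation: each strategy grows in proportion to its fitness."""
    f = payoffs(*x)
    avg = sum(xi * fi for xi, fi in zip(x, f))
    return tuple(xi * fi / avg for xi, fi in zip(x, f))

# Start with a mostly-cooperative population and a rare deceiver mutant.
x = (0.90, 0.05, 0.05)
for _ in range(50):
    x = replicator_step(x)

print(f"after 50 generations: cc={x[0]:.3f} hd={x[1]:.3f} td={x[2]:.3f}")
```

Run it and the deceivers, starting as a 5% mutant, take over: precisely because cooperation is common at the start, faking the cooperation signal is the highest-payoff move -- a cartoon version of "our ability to convincingly lie to each other may have evolved as a direct result of our cooperative nature."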

It's a fascinating piece of research, and it generated some buzz in the media -- even meriting an (also nicely done) summary in HuffPost Science.

So far, what's the problem?  A well-written paper on how game theory predicts the evolution of behavior, and the media (for once) reporting it as they should.  No cause for wincing here, surely?

Nope.  The winces started once the creationists got wind of this.

The site Creation Evolution Headlines evidently found out about McNally and Jackson's paper -- although whether they actually read it remains to be seen.  Because the piece they wrote in response is called...

... wait for it...

"Evolutionists Confess to Lying."

Yes, you're interpreting this correctly; they think that because the paper supports an evolutionary edge for people who are deceptive, it is equivalent to the evolutionary biologists stating, "Ha ha!  We were lying all along!"

I couldn't make something this ridiculous up if I wanted to.

Don't believe me?  Here is an excerpt.  Make sure you have a pillow handy for when you faceplant.
If lying evolved as a fitness strategy, can we believe anything an evolutionist says?...  Brooks [the author of the HuffPost piece] has the Yoda complex.  So do McNally and Jackson.  They believe they can look down on the rest of humanity from some exalted plane free of the evolutionary forces that afflict the rest of humanity.  No; they need to climb down and join the world their imaginations have created.  In the evolutionary world, there is no essential difference between cooperation and deception.  It’s only a matter of which side is in the majority at the moment...

Having no eternal standard of truth, the evolutionary world collapses into power struggles.  The appeals by Brooks and Sam Harris to try to “resist our temptations to lie” are meaningless.  How can anyone overcome what evolution has built into them?  How can either of them know what is true? 
Since all these evolutionists believe that lying evolved as a fitness strategy, and since they are unable to distinguish between truth and lies, they essentially confess to lying themselves.  Their readers are therefore justified in considering them deceivers, and dismissing everything they say, including the notion that lying evolved.
My wincing-at-people-embarrassing-themselves response was activated so strongly by all this that I could barely tolerate reading the entire article... especially given that the Creation Evolution Headlines piece got linked on the Skeptic subreddit by the obviously astonished friend of one of the original paper's authors.  (Of course, you're probably thinking, "If you hate seeing people embarrassed so much, why are you calling further attention to it by writing about it?"  To which I can only respond: touché.  And also, that my outrage over a nice bit of evolutionary research being trashed this way trumped my dislike of watching morons shame themselves.)

Let's just take this a piece at a time, okay?

First, McNally and Jackson didn't say that everyone is lying; they said that some people are lying, and benefit by it, a contention that I'd guess atheists and theists would both agree on.  Second, given that the original research looked at cooperative species -- of which there are many -- why does that somehow turn evolution into "power struggles," into a world of every individual for him/herself?  Do ants in a colony only cooperate because they recognize an "eternal standard of truth?"

And I always find it wryly amusing when the theists claim that we atheists must be without morals because we don't think morality comes from some higher power, and suggest that we aren't to be trusted.  Honestly, devout Christians; if the only thing that's keeping you from running around stealing, raping, and murdering is some Bronze Age book of mythology, I think you are the ones we should be watching out for.

As Penn Jillette more eloquently put it, "The question I get asked by religious people all the time is, without God, what’s to stop me from raping all I want?  And my answer is: I do rape all I want.  And the amount I want is zero.  And I do murder all I want, and the amount I want is zero.  The fact that these people think that if they didn’t have this person watching over them that they would go on killing, raping rampages is the most self-damning thing I can imagine."

But as far as the McNally/Jackson paper goes, the creationists are missing the most basic problem with their claim.  Saying that lying is an evolved strategy doesn't mean that we "are unable to distinguish between truth and lies."  If evolutionists were unable to distinguish between truth and lies, IT WOULD BE REALLY FUCKING HARD TO WRITE A SCHOLARLY PAPER ABOUT LYING, NOW WOULDN'T IT?

*pant pant gasp gasp*

Okay, I'll try to calm down a little.

What's the worst about these people is that they don't seem to have any awareness that what they're saying, with apparent confidence, is absolute nonsense.  It reminds me of watching the character of Archie Bunker on the 70s television series All in the Family, who week after week would have conversations like the following:
Mike (Archie's son-in-law): Something is rotten in the state of Denmark.

Archie: Denmark ain't no state, it's the capital of Colorado.


And, of course, Archie would never admit that he was wrong.  In his reality, he was always right, world without end, amen.

I bet Archie would have loved that article in Creation Evolution Headlines.  And he'd probably look at me and say, as he once did to his wife, "You don't believe in nothin', Edith.  You're one o' them, whaddyacallem, septics."

*********************************************

If Monday's post, about the apparent unpredictability of the eruption of the Earth's volcanoes, freaked you out, you should read Robin George Andrews's wonderful new book Super Volcanoes: What They Reveal About the Earth and the Worlds Beyond.

Andrews, a science journalist and trained volcanologist, went all over the world interviewing researchers on the cutting edge of the science of volcanoes -- including those that occur not only here on Earth, but on the Moon, Mars, Venus, and elsewhere.  The book is fascinating enough just from the human aspect -- the personalities involved in doing primary research -- but it also looks at a topic it's hard to imagine anyone not being curious about: the restless nature of geology that has generated such catastrophic events as the Yellowstone supereruptions.

Andrews does a great job not only demystifying what's going on inside volcanoes and faults, but informing us how little we know (especially in the sections on the Moon and Mars, which have extinct volcanoes scientists have yet to completely explain).  Along the way we get the message, "Will all you people just calm down a little?", particularly aimed at the purveyors of hype who have for years made wild claims about the likelihood of an eruption at Yellowstone occurring soon (turns out it's very low) and the chances of a supereruption somewhere causing massive climate change and wiping out humanity (not coincidentally, also very low).

Volcanoes, Andrews says, are awesome, powerful, and fascinating, but if you have a modicum of good sense, nothing to fret about.  And his book is a brilliant look at the natural process that created a great deal of the geology of the Earth and our neighbor planets -- plate tectonics.  If you are interested in geology or just like a wonderful and engrossing book, you should put Super Volcanoes on your to-read list.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Wednesday, March 10, 2021

Shooting the bull

There's a folk truism that goes, "Don't try to bullshit a bullshitter."

The implication is that people who exaggerate and/or lie routinely, either to get away with things or to create an overblown image of themselves, know the technique so well that they can always spot it in others.  This makes bullshitting a doubly attractive game; not only does it make you slick, impressing the gullible and allowing you to avoid responsibility, it makes you savvy and less likely to be suckered yourself.

Well, a study published this week in The British Journal of Social Psychology, conducted by Shane Littrell, Evan Risko, and Jonathan Fugelsang, has shown that like many folk truisms, this isn't true at all.

In fact, the research supports the opposite conclusion.  At least one variety of regular bullshitting leads to more likelihood of falling for bullshit from others.

[Image licensed under the Creative Commons Inkscape by Anynobody, composing work: Mabdul ., Bullshit, CC BY-SA 3.0]

The researchers identified two main kinds of bullshitting, persuasive and evasive.  Persuasive bullshitters exaggerate or embellish their own accomplishments to impress others or fit in with their social group; evasive ones dance around the truth to avoid damaging their own reputations or the reputations of their friends.

Because of the positive shine bullshitting has with many, the researchers figured most people who engage in either type wouldn't be shy about admitting it, so they used self-reporting to assess the bullshit levels and styles of the eight hundred participants.  They then gave each a more formal measure of cognitive ability, metacognitive insight, intellectual overconfidence, and reflective thinking, followed by a series of pseudo-profound and pseudoscientific statements mixed in with genuinely profound and truthful ones, to see if they could tell them apart.

The surprising result was that the people who were self-reported persuasive bullshitters were significantly worse at detecting pseudo-profundity than the habitually honest; the evasive bullshitters were better than average.

"We found that the more frequently someone engages in persuasive bullshitting, the more likely they are to be duped by various types of misleading information regardless of their cognitive ability, engagement in reflective thinking, or metacognitive skills," said study lead author Shane Littrell, of the University of Waterloo.  "Persuasive BSers seem to mistake superficial profoundness for actual profoundness.  So, if something simply sounds profound, truthful, or accurate to them that means it really is.  But evasive bullshitters were much better at making this distinction."

Which supports a contention that I've had for years: if you lie for long enough, you eventually lose touch with what the truth is.  The interesting fact that persuasive and evasive bullshitting aren't the same in this respect might be because evasive bullshitters engage in this behavior because they're highly sensitive to people's opinions, both of themselves and of others.  This would have the effect of making them more aware of what others are saying and doing, and of becoming better at sussing out what people's real motives are -- and whether they're being truthful or not.  But persuasive bullshitters are so self-focused that they aren't paying much attention to what others say, so any subtleties that might clue them in to the fact that they're being bullshitted slip right by.

I don't know whether this is encouraging or not.  I'm not sure if the fact that it's easier to lie successfully to a liar is a point to celebrate by those of us who care about the truth.  But it does illustrate the fact that our common sense about our own behavior sometimes isn't very accurate.  As usual, approaching questions from a skeptical scientific angle is the best.

After all, no form of bullshit can withstand that.

****************************************

Last week's Skeptophilia book-of-the-week was about the ethical issues raised by gene modification; this week's is about the person who made CRISPR technology possible -- Nobel laureate Jennifer Doudna.

In The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race, author Walter Isaacson describes the discovery of how the bacterial enzyme complex called CRISPR-Cas9 can be used to edit genes of other species with pinpoint precision.  Doudna herself has been fascinated with scientific inquiry in general, and genetics in particular, since her father gave her a copy of The Double Helix and she was caught up in what Richard Feynman called "the joy of finding things out."  The story of how she and fellow laureate Emmanuelle Charpentier developed the technique that promises to revolutionize our ability to treat genetic disorders is a fascinating exploration of the drive to understand -- and a cautionary note about the responsibility of scientists to do their utmost to make certain their research is used ethically and responsibly.

If you like biographies, are interested in genetics, or both, check out The Code Breaker, and find out how far we've come into the science-fiction world of curing genetic disease, altering DNA, and creating "designer children," and keep in mind that whatever happens, this is only the beginning.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Thursday, December 28, 2017

Daubenmire, pants afire

Things have come to a sorry state when it's the strict Christians who are advocating lying.

I wish I was making this up.  Look, I know I'm not religious myself, but if someone subscribes to a belief system that encourages them to be more honest, to treat their fellow humans with greater respect, to be more generous and compassionate, I've got no quarrel with it whatsoever.  But it's shocking how often it goes the other way -- religion being used as an excuse to exercise some of our worst instincts, including prejudice, insularity, bigotry, and suspicion.

And, apparently, dishonesty.  Evangelist and Christian activist "Coach" Dave Daubenmire, on his radio program Pass the Salt, was ranting against the people who voted against Roy Moore in the Alabama State Senate election, and said something that was more than a little troubling:
When I hear people say, "Well, Judge Moore is not worthy of the office if he’s lying about what he did," I want to grab them and I want to slap them upside the stinking head.  Judge Moore is trying to infiltrate an ungodly system and the stakes in this campaign are so great for the cause of Christ and Judge Moore is being lambasted by the holier-than-thou Christians who think [the Bible] says we can never lie. 
It’s best to lie if it advances the kingdom of God.  There, I said it.
Well, first: "think" the Bible says you're not supposed to lie?  I mean, there's an entire freakin' commandment about not bearing false witness.  And I found the following without even trying hard:
  • There are six things that the Lord hates, seven that are an abomination to him: haughty eyes, a lying tongue, and hands that shed innocent blood, a heart that devises wicked plans, feet that make haste to run to evil, a false witness who breathes out lies, and one who sows discord among brothers. (Proverbs 6:16-19)
  • You shall not steal; you shall not deal falsely; you shall not lie to one another. (Leviticus 19:11)
  • The getting of treasures by a lying tongue is a fleeting vapor and a snare of death. (Proverbs 21:6)
  • Therefore, having put away falsehood, let each one of you speak the truth with his neighbor, for we are members one of another. (Ephesians 4:25)
  • No one who practices deceit shall dwell in my house; no one who utters lies shall continue before my eyes. (Psalm 101:7)
  • But as for the cowardly, the faithless, the detestable, as for murderers, the sexually immoral, sorcerers, idolaters, and all liars, their portion will be in the lake that burns with fire and sulfur, which is the second death. (Revelation 21:8)
  • A false witness will not go unpunished, and he who breathes out lies will perish. (Proverbs 19:9)
Which sounds pretty unequivocal, even to a godless heathen like myself.


Also, consider what it is that Daubenmire is excusing Moore from lying about.  Moore has steadfastly denied allegations of sexual harassment against girls as young as fourteen.  So it's not like he lied about how much beer he drank last night.  These lies are about hurting children, for fuck's sake.

Okay, yeah, I know at this point they're only allegations.  But what's interesting is that Daubenmire never argues that Moore didn't do these things.  He's saying that even if he did, and lied about it, he still deserves to be in the Senate because he will "advance the kingdom of God."

All I can say is, if the kingdom of God has Moore and Daubenmire as spokesmen, maybe the "ungodly system" would be a step up.

Oh, and before I get off the topic: there's another quote from the Bible that doesn't so much apply to lying in general as it does to people like Daubenmire and Moore.  It's 1 John 4:1 -- do you know it?
Beloved, do not believe every spirit, but test the spirits to see whether they are from God, for many false prophets have gone out into the world.

Wednesday, June 7, 2017

Liar liar

In my youth, I was quite an accomplished liar.

I say "accomplished" more to mean "I did it a lot" rather than "I did it well."  I honestly don't know how well I lied -- it might be that people in general didn't believe what I said and were simply too polite to call me out on it.  On the other hand, I did get away with a lot of stuff.  So apparently I was at least marginally successful.

What I lied about tended to be exaggerations about my past -- I rarely if ever lied out of malice.  But I felt my own circumstances to be boring and bland, a sense compounded by the fact that I've always suffered from serious social anxiety, so I think I felt as if building up a fictional persona who was interesting and adventurous might assuage my fear of being judged by the people I met.  Eventually, though, I realized that all I was doing was sabotaging the relationships I had, because once people found out I wasn't who I said I was, they'd be understandably pissed that I hadn't been straight with them.  So I dedicated myself to honesty, a commitment I've tried my hardest to keep ever since then.

On the other hand, I became a fiction writer, which means now I make up elaborate lies, write them down, and people pay me to read them.  So maybe I haven't progressed as far as I'd thought.

Kang Lee and Victoria Talwar of the University of Toronto have been studying lying for some time, and they've found that children's propensity to lie increases as they age.  Presumably, once they develop a sense of shame and better impulse control, they become sheepish when they transgress, and lie to cover up their feelings or escape the consequences.  In a study in the International Journal of Behavioral Development, Lee and Talwar gave children of varying ages a task while a toy played music behind them, and told them not to peek at the toy:
When the experimenter asked them whether they had peeked, about half of the 3-year-olds confessed to their transgression, whereas most older children lied.  Naive adult evaluators (undergraduate students and parents) who watched video clips of the children’s responses could not discriminate lie-tellers from nonliars on the basis of their nonverbal expressive behaviours.  However, the children were poor at semantic leakage control and adults could correctly identify most of the lie-tellers based on their verbal statements made in the same context as the lie.  The combined results regarding children’s verbal and nonverbal leakage control suggest that children under 8 years of age are not fully skilled lie-tellers.
Lee considers this behavior a completely normal part of social development, and in fact, says he worries about the 10% of older children in his study who could not be induced to lie -- because telling the truth 100% of the time, without regard for others' feelings or the consequences thereof, might not be the best thing, either.

But the tendency to lie doesn't vanish with adulthood.  A study by Robert Feldman, of the University of Massachusetts-Amherst, found that 60% of adults lied at least once during a ten-minute conversation.

"People tell a considerable number of lies in everyday conversation," Feldman said about his study.  "It was a very surprising result.  We didn't expect lying to be such a common part of daily life...  When they were watching themselves on videotape, people found themselves lying much more than they thought they had... It's so easy to lie.  We teach our children that honesty is the best policy, but we also tell them it's polite to pretend they like a birthday gift they've been given.  Kids get a very mixed message regarding the practical aspects of lying, and it has an impact on how they behave as adults."

Of course, not all lies are equally blameworthy.  Telling Aunt Bertha that the sweater she knitted you for Christmas is lovely is probably better than saying, "Wow, that is one ugly-ass sweater, and I'm bringing it down to the Salvation Army as soon as I get a chance."

[image courtesy of Aunt Bertha and the Wikimedia Commons]

As for the kind of thing I did as a kid -- saying that I'd spent my summer vacation riding musk oxen in the Aleutian Islands -- it's kind of ridiculous and pointless, but other than distancing one from one's friends (as I described before) probably isn't really very high on the culpability scale, either.

But lying to hurt, lying for personal gain, lying to gain or retain power (I'm lookin' at you, Donald Trump) -- those are serious issues.

Unfortunately, even the less serious lies can cause problems, because small lies tend to lead to bigger ones.  A study by Tali Sharot of University College London found that our amygdala -- the structure in the brain that appears to mediate fear, shame, and anxiety -- actually fires less the more we lie.  The first lies we tell elicit a strong response, but we become habituated quickly.

The more we lie, the easier it gets.

So the old adage of "honesty is the best policy" really does seem to apply in most circumstances.

Unless, of course, you're a fiction writer.  Then the rules don't apply at all.  Now you'll have to excuse me, as I've got a herd of musk oxen to attend to.

Monday, May 1, 2017

Poker face

A wag once said, "Artificial intelligence is twenty years in the future, and always will be."  It's a trenchant remark; predictions about when we'd have computers that could truly think have been off the mark ever since scientists at the Dartmouth Summer Research Project on Artificial Intelligence stated that they would have the problem cracked in a few months...

... back in 1956.

Still, progress has been made.  We now have software that learns from its mistakes, can beat grand masters at strategy games like chess, checkers, and Go, and has come damn close to passing the Turing test.  But emulating human intelligence in a machine has proven more difficult than anyone anticipated, back when the first computers were built in the 1940s and 1950s.

We've taken a new stride recently, however.  Just a couple of months ago, researchers at the University of Alberta announced that they had created software that could beat human champions at Texas Hold 'Em, a variant of poker.  Why this is remarkable -- and more of a feat than a computer that can win at chess -- is that all previous game-playing software involved games in which both players have identical information about the state of the game.  In poker, there is hidden information.  Not only that, but a good poker player needs to know how to bluff.

In other words... lie.


Michael Bowling, who led the team at the University of Alberta, said that this turned out to be a real challenge.  "These poker situations are not simple," Bowling said.  "They actually involve asking, 'What do I believe about my opponent’s cards?'"

But the program, called DeepStack, turned out to be quite good at this, despite the daunting fact that Texas Hold 'Em has about 10^160 decision points -- more unique scenarios than there are atoms in the universe.  Instead of analyzing all the possibilities, as a program might do in chess (an approach that would be, for all practical purposes, impossible here), DeepStack plays much like a person would -- by speculating on the likelihood of certain outcomes based on the limited information it has.

"It will do its thinking on the fly while it is playing," Bowling said.  "It can actually generalize situations it's never seen before."

Which is pretty amazing.  But not everyone is as impressed as I am.

When Skeptophilia frequent flier Rick Wiles, of End Times radio, heard about DeepStack, he was appalled that we now had a computer that could deceive. "I'm still thinking about programming robots to lie," Wiles said.  "This has been done to us for the past thirty, forty, fifty years -- Deep State has deliberately lied to the public because they concluded that it was in our best interest not to be told the truth...  What's even scarier about the robots that can lie is that they weren't programmed to lie, they learned to lie.  Who's the father of all lies?  Satan is the father of all lies.  Are we going to have demon-possessed artificially intelligent robots?  Is it possible to have demonic spirit to possess an artificial intelligent machine?  Can they possess idols?  Can they inhabit places?  Yeah.  Absolutely.  They can take possession of animals.  They can attach themselves to inanimate objects.  If you have a machine that is capable of lying, then it has to be connected to Lucifer.  Now we’re back to the global brain.  This is where they’re going.  They’re building a global brain that will embody Lucifer’s mind and so Lucifer will be deceiving people through the global brain."

So there's that.  But the ironic thing is that, all demonic-spirit bullshit aside, Wiles may not be so far wrong.  While I think the development of artificial intelligence is fascinating, and I can understand why researchers find it compelling, you have to worry about what our creations might think about us once they do reach sentience.  This goes double if you can no longer be sure that what the computer is telling you is the truth.

Maybe what we should be worried about is not a computer that can pass the Turing test; it's one that can pass the Turing test -- and chooses to pretend, for its own reasons, that it can't.

I mean, the last thing I want is to go on record as saying I agree with Rick Wiles on anything.  But still.

So that's our rather alarming news for the day.  It's not that I think we're headed into The Matrix any time soon; but the idea that we might be supplanted by intelligent machines of our own making, the subject of countless science fiction stories, may not be impossible after all.

And maybe the artificial intelligence of twenty years in the future may not be as far away as we thought.

Monday, December 19, 2016

The risks of paltering

In Richard Feynman's brilliant autobiography Surely You're Joking, Mr. Feynman!, he tells the story of his experience as an undergraduate practical joker.  One day while his fraternity brothers were asleep, he took one of the frat house doors off its hinges and hid it behind the oil tank in the basement.  Of course, when the theft was discovered, everyone wanted to know which of them had pilfered the door.  Everyone denied it but Feynman:
I was coming down the stairs and they said, "Feynman!  Did you take the door?" 
"Oh, yeah," I said.  "I took the door.  You can see the scratches on my knuckles here, that I got when my hands scraped against the wall as I was carrying it down into the basement."
Knowing Feynman to be a wiseass, everyone rolled their eyes and assumed he was lying.

The door stayed missing, and still no one confessed.  (Well, actually, someone had, of course!)  Finally the president of the fraternity was so miffed that he called a general meeting at dinner time and asked each member to swear on his word of honor whether or not he'd taken the door:
So he goes around the table, and asks each guy, one by one: "Jack, did you take the door?" 
"No, sir, I did not take the door." 
"Tim: did you take the door?" 
"No, sir!  I did not take the door!" 
"Maurice, did you take the door?" 
"No, I did not take the door, sir." 
"Feynman, did you take the door?" 
"Yeah, I took the door." 
"Cut it out, Feynman, this is serious!  Sam: did you take the door..."  It went all the way around.  Everyone was shocked.  There must be some real rat in the fraternity who didn't respect the fraternity word of honor! 
That night I left a note with a little picture of the oil tank and the door next to it, and the next day they found the door and put it back. 
Some time later I finally admitted to taking the door, and I was accused by everybody of lying.  They couldn't remember what I had said.  All they could remember was their conclusion after the president of the fraternity had gone around the table and asked everybody, that nobody admitted taking the door.  The idea they remembered, but not the words!
I always use this story in my Critical Thinking classes to spur a discussion into the nature of lying.  Was Feynman, by deliberately telling the truth so unconvincingly that no one believed him, actually guilty of lying?

I didn't know until yesterday that this practice actually has a name: misleading by telling the truth is called paltering, and was the subject of a study released just last week in the Journal of Personality and Social Psychology.  Called "Artful Paltering: The Risks and Rewards of Using Truthful Statements to Mislead Others," the study (by Todd Rogers, Richard Zeckhauser, Francesca Gino, and Michael I. Norton of Harvard, and Maurice E. Schweitzer of the University of Pennsylvania) shows that paltering works -- but it comes with a cost.

Professor Richard Feynman, palterer extraordinaire [image courtesy of the Wikimedia Commons]

Their experiment presented volunteers with a variety of scenarios in which people are represented as lying outright, misleading by omission, and misleading by paltering -- telling the truth in such a way as to mislead.  The scenarios included negotiations for a car purchase, negotiations over the sale of a piece of property, and negotiations over the development of a piece of property for commercial use.  The results were strikingly uniform; lying outright was considered the most unethical, but paltering was close -- especially when the palter was made in response to a direct question (as it was in Feynman's case).  The authors write:
Taken together, our studies identify paltering as a distinct and frequently employed form of deception. Paltering is a common negotiation tactic.  Negotiators who palter claim value but also increase the likelihood of impasse and, if discovered, risk harm to their reputations.  This latter finding suggests that those who might view paltering as a (deceptive) strategy for claiming more value in a negotiation must be cautious.  It may be effective in the short-term but harmful to relationships if discovered.
Which is exactly what Feynman discovered.  People are much more likely to focus on the results and the intent -- they care less about the actual words spoken.  So a palterer who says after being found out, "But I told the literal truth!  It's not my fault you interpreted it wrong!" is not likely to gain much in the way of credibility.  In fact, they are generally looked upon as only a tiny notch above someone who told a bald-faced lie.

This does open up an interesting question, though: to what extent is it incumbent upon the recipient of information to be smart enough (or do enough research) to detect when lying or paltering is occurring?  I'm not trying to blame the victim, here; but the principle of caveat emptor has been around for millennia, and I have to admit that I tend to lose sympathy with someone who got hoodwinked when a bit of quick research could have uncovered the deception.  As with everything in the realm of ethics, there are no easy, hard-and-fast answers.  But it's nice to have a word to put on lying-by-telling-the-truth, and it gives us one more thing to be on the lookout for in car negotiations, real estate purchases -- and political discussions.