Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label fake news.

Friday, March 25, 2022

Truth by repetition

You probably have heard the quote attributed to Nazi propaganda minister Joseph Goebbels: "If you tell a lie big enough and continue to repeat it, eventually people will come to believe it."  This has become a staple tactic in political rhetoric -- an obvious recent example being Donald Trump's oft-repeated declaration that he won the 2020 presidential election, despite bipartisan analysis across the United States demonstrating unequivocally that this is false.  (The tactic works; a huge number of Trump supporters still think the election was stolen.)

It turns out that the "illusory truth effect" or "truth-by-repetition effect," as the phenomenon is called, still works even if the claim is entirely implausible.  In a recent study, psychologist Doris Lacassagne of the Université Catholique de Louvain (in Belgium) presented 232 test subjects with a variety of ridiculous statements, including "the Earth is a perfect cube," "smoking is good for the lungs," "elephants weigh less than ants," and "rugby is the sport associated with Wimbledon."  In the first phase of the experiment, they were asked to rate the statements not for plausibility, but for how "interesting" they were.  After this, the volunteers were given a second list of statements to evaluate for plausibility, having been told ahead of time that it would include statements repeated from the first list along with completely new claims.

The results were a little alarming, and support Goebbels's approach to lying.  The false statements -- even some of the entirely ridiculous ones -- gained plausibility from repetition.  (To be fair, the ratings still had average scores on the "false" side of the rating spectrum; but they did shift toward increasing veracity.)

The ones that showed the greatest shift were the ones that required at least a vague familiarity with science or technical matters, such as "monsoons are caused by earthquakes."  It only took a few repetitions to generate movement toward the "true" end of the rating scale, which is scary.  Not all the news was bad, though; although 53% of the participants showed a positive illusory truth effect, 28% showed a negative effect -- repeating false statements triggered their plausibility assessments to decrease.  (I wonder if this was because people who actually know what they're talking about become increasingly pissed off by seeing the same idiotic statement over and over.  I suspect that's how I would react.)

Of course, recognizing that statements are false requires some background knowledge.  I'd be much more likely to fall for believing a false statement about (for example) economics, because I don't know much about the subject; presumably I'd be much harder to fool about biology.  It's very easy for us to see some claim about a subject we're not that familiar with and say, "Huh!  I didn't know that!" rather than checking its veracity -- especially if we see the same claim made over and over.

[Image licensed under the Creative Commons Zabou, Politics, CC BY 3.0]

I honestly have no idea what we could do about this.  A downside of the First Amendment's guarantee of free speech is that, with a limited number of exceptions -- slander, threats of violence, vulgarity, and hate speech come to mind -- people can pretty much say what they want on television.  The revocation of the FCC's Fairness Doctrine in 1987 meant that news media were no longer required to give a balanced presentation of all sides of the issues, and set us up for the morass of partisan editorializing that the nightly news has become in the last few years.  (And, as I've pointed out more than once, it's not just outright lying that is the problem; partisan media do as much damage by what they don't tell you as by what they do.  If a particular news channel's favorite political figure does something godawful, and the powers-that-be at the channel simply decide not to mention it, the listeners will never find out about it -- especially given that another very successful media tactic has been convincing consumers that "everyone is lying to you except us.")

It's a quandary.  There's currently no way to compel news commentators to tell the truth, or to force them to tell their listeners parts of the news that won't sit well with them.  Unless what the commentator says causes demonstrable harm, the FCC pretty much has its hands tied.

So the Lacassagne study seems to suggest that as bad as partisan lies have gotten, we haven't nearly reached the bottom of the barrel yet.

**************************************

Tuesday, May 18, 2021

Tweets and backfires

Let me ask you a hypothetical question.

You're over on Twitter, and you post a link making a political claim of some sort.  Shortly thereafter, you get responses demonstrating that the claim your link made is completely false.  Would you...

  1. ... delete the tweet, apologize, and be more careful about what you post in the future?
  2. ... shrug, say "Meh, whatever," and continue posting at the same frequency/with the same degree of care?
  3. ... flip off the computer and afterward be more likely to post inflammatory and/or false claims?

I know this sounds like a setup, and it is, but seriously; why wouldn't everyone select answer #1?  As I discussed in a post just a few days ago, we all make mistakes, and we all hate the feeling of finding out we're in error.  So given that most animal species learn to avoid choices that lead to experiencing pain, why is the answer actually more commonly #3?


I'm not just making up a wild claim myself in order to have a topic to blog about.  The fact that most people increase their rate of promulgating disinformation after they've been caught at it is the subject of a paper that was presented last week at the CHI Conference on Human Factors in Computing Systems called, "Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment."  The title could pretty much function as the abstract; in an analysis of two thousand Twitter users who post political tweets, the researchers looked at the likelihood of posting false information after having errors pointed out online, and found, amazingly enough, a positive correlation.

"We find causal evidence that being corrected decreases the quality, and increases the partisan slant and language toxicity, of the users’ subsequent retweets," the authors write.  "This suggests that being publicly corrected by another user shifts one's attention away from accuracy -- presenting an important challenge for social correction approaches."

"Challenge" isn't the right word; it's more like "tendency that's so frustrating it makes anyone sensible want to punch a wall."  The researchers, Mohsen Mosleh (of the University of Exeter) and Cameron Martel, Dean Eckles, and David Rand (of the Massachusetts Institute of Technology), have identified the twenty-first century iteration of the backfire effect -- a well-studied phenomenon showing that being proven wrong makes you double down on whatever your claim was.  But here, it apparently makes you not only double down on that claim, but on every other unfounded opinion you have.

In what universe does being proven wrong make you more confident?

I swear, sometimes I don't understand human psychology at all.  Yeah, I guess you could explain it by saying that someone who has a dearly-held belief questioned is more motivated in subsequent behavior by the insecurity they're experiencing than by any commitment to the truth, but it still makes no sense to me.  The times I've been caught out in an error, either here at Skeptophilia or elsewhere, were profoundly humbling and (on occasion) outright humiliating, and the result was (1) I apologized for my error, and (2) I was a hell of a lot more careful what I posted thereafter.

What I didn't do was to say "damn the torpedoes, full speed ahead."

This does pose a quandary.  Faced with a false claim on social media, do we contradict it?  I don't have the energy to go after every piece of fake news I see; I usually limit myself to posts that are explicitly racist, sexist, or homophobic, because I can't in good conscience let that kind of bullshit go unchallenged.  But what if the outcome is said racist, sexist, or homophobe being more likely to post such claims in the future?

Not exactly the result I'm looking for, right there.

So that's our discouraging piece of research for today.  I honestly don't know what to do about a tendency that is so fundamentally irrational.  Despite all of our science and technology, a lot of our behavior still seems to be caveman-level.  "Ogg say bad thing about me.  Me bash Ogg with big rock."

***********************************

Too many people think of chemistry as nothing but arcane and difficult formulas and laws and symbols, and lose sight of the amazing reality it describes.  My younger son, who is the master glassblower for the chemistry department at the University of Houston, was telling me about what he's learned about the chemistry of glass -- why it's transparent, why different formulations have different properties, what causes glass to have the colors it does, or no color at all -- and I was astonished at not only the complexity, but how incredibly cool it is.

The world is filled with such coolness, and it's kind of sad how little we usually notice it.  Colors and shapes and patterns abound, and while some of them are still mysterious, there are others that can be explained in terms of the behavior of the constituent atoms and molecules.  This is the topic of the phenomenal new book The Beauty of Chemistry: Art, Wonder, and Science by Philip Ball and photographers Wenting Zhu and Yan Liang, which looks at the chemistry of the familiar, and illustrates the science with photographs of astonishing beauty.

Whether you're an aficionado of science or simply someone who is curious about the world around you, The Beauty of Chemistry is a book you will find fascinating.  You'll learn a bit about the chemistry of everything from snowflakes to champagne -- and be entranced by the sheer beauty of the ordinary.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]


Saturday, April 10, 2021

Bullshitometry

Having spent 32 years as a high school teacher, I developed a pretty sensitive bullshit detector.

It was a necessary skill.  Kids who have not taken the time to understand the topic being studied are notorious for bullshitting answers on essay questions, often padding their writing with vague but sciency-sounding words.  An example is the following, which is verbatim (near as I can recall) from an essay on how photosynthesis is, and is not, the reverse of aerobic cellular respiration:
From analyzing photosynthesis and the process of aerobic cellular respiration, you can see that certain features are reversed between the two reactions and certain things are not.  Aerobic respiration has the Krebs Cycle and photosynthesis has the Calvin Cycle, which are also opposites in some senses and not in others.  Therefore, the steps are not the same.  So if you ran them in reverse, those would not be the same, either.
I returned this essay with one comment: "What does this even mean?"  The student in question at least had the gumption to admit he'd gotten caught.  He grinned sheepishly and said, "You figured out that I had no idea what I was talking about, then?"  I said, "Yup."  He said, "Guess I better study next time."

I said, "Yup."

Developing a sensitive nose for bullshit is critical for more than just teachers, because there's a lot of it out there, and not just in academic circles.  Writer Scott Berkun addressed this in his wonderful piece, "How to Detect Bullshit," which gives some concrete suggestions about how to figure out what is USDA grade-A prime beef, and what is the cow's other, less pleasant output.  Some of the best are simply to ask the questions, "How do you know that?", "Who else has this opinion?", and "What is the counter-argument?"

You say your research will revolutionize the field?

Says who?  Based on what evidence?

He also says to be very careful whenever anyone says, "Studies show," because usually if studies did show what the writer claims, (s)he'd be specific about what those studies were.  Vague statements like "studies show" are often a red flag that the claim doesn't have much in its favor.

Remember Donald Trump's "People are telling me" and "I've heard from reliable sources" and "A person came up to me at my last rally and said"?

Those mean, "I just now pulled this claim out of my ass."

Using ten-dollar buzzwords is also a good way to cover up the fact that you're sailing close to the wind.  Berkun recommends asking, "Can you explain this in simpler terms?"  If the speaker can't give you a good idea of what (s)he's talking about without resorting to jargon, the fancy verbiage is fairly likely to be there to mislead.

This is the idea behind BlaBlaMeter, a website I discovered a while back, into which you can cut-and-paste text and get a score (from 0 to 1.0) for how much bullshit it contains.  I'm not sure what the algorithm does besides detecting vague filler words, but it's a clever idea.  It'd certainly be nice to have a rigorous way to detect when you're being bamboozled with words.
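I have no idea whether that's actually how BlaBlaMeter works, but the general approach isn't hard to imagine.  Here's a toy sketch in Python of a filler-word-density scorer -- the word list and the scaling factor are entirely made up, so treat it as an illustration of the idea rather than anyone's real algorithm:

import re

# A made-up list of vague filler-and-buzzword vocabulary; a real tool
# would presumably use a much larger, empirically chosen lexicon.
FILLER_WORDS = {
    "basically", "various", "certain", "aspects", "paradigm", "synergy",
    "leverage", "holistic", "robust", "impactful", "facilitate", "utilize",
}

def bullshit_score(text, scale=10.0):
    """Return a crude 0-to-1 score: the density of filler words, scaled."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    filler_count = sum(1 for w in words if w in FILLER_WORDS)
    return min(1.0, scale * filler_count / len(words))

print(bullshit_score("We leverage a robust, holistic paradigm to facilitate synergy."))
print(bullshit_score("Water boils at 100 degrees Celsius at sea level."))

Run a buzzword-laden sentence through it and you get something near 1.0; a plain factual sentence scores 0.0.  Crude as that is, it captures the spirit of what Berkun recommends we do in our heads.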



The importance of being able to detect fancy-sounding nonsense was highlighted by the acceptance of a paper for the International Conference on Atomic and Nuclear Physics -- when it turned out that the paper had been created by hitting iOS Autocomplete over and over.  The paper, written (sort of) by Christoph Bartneck, associate professor at the Human Interface Technology laboratory at the University of Canterbury in New Zealand, was titled "Atomic Energy Will Have Been Made Available to a Single Source" (the title was also generated by autocomplete), and contained passages such as:
The atoms of a better universe will have the right for the same as you are the way we shall have to be a great place for a great time to enjoy the day you are a wonderful person to your great time to take the fun and take a great time and enjoy the great day you will be a wonderful time for your parents and kids.
Which, of course, makes no sense at all.  In this case, I wonder if the reviewers simply didn't bother to read the paper -- or read a few sample sentences and found that they (unlike the above) made reasonable sense, and said, "Looks fine to me."

I'd like to think that, even considering my lack of expert status on atomic and nuclear physics, I'd have figured out that what I was looking at was ridiculous.
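For what it's worth, the trick that produced the paper is easy to imitate in spirit.  A predictive keyboard just keeps offering a statistically likely next word; accept the top suggestion over and over and you get grammatical-sounding mush.  Here's a toy version in Python -- a greedy bigram chain trained on a single sentence, nothing remotely like Apple's actual software -- that produces the same flavor of fluent nonsense:

import re
from collections import Counter, defaultdict

def build_bigrams(text):
    """Map each word to a Counter of the words that follow it in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def autocomplete_babble(table, start, length=20):
    """Repeatedly append the most common follower of the last word."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

training = "a great time to enjoy a great day and a great place for a great time"
print(autocomplete_babble(build_bigrams(training), "a"))
# -> "a great time to enjoy a great time to enjoy a great time ..."

The output scans locally but says nothing at all, which is presumably why skimming a sentence or two wasn't enough to tip off the reviewers.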

On a more serious note, there's a much more pressing reason that we all need to arm ourselves against bullshit, because so much of what's on the internet is outright false.  A team of political fact-checkers was hired by Buzzfeed News to sift through claims on politically partisan Facebook pages, and found that on average, a third of the claims made by partisan sites were outright false.  And lest you think one side was better than the other, the study found that both right and left were making a great many unsubstantiated, misleading, or wrong claims.  And we're not talking about fringe-y wingnut sites here; these were sites that if you're on Facebook you see reposts from on a daily basis -- Occupy Democrats, Breitbart, AlterNet, Fox News, The Blaze, The Other 98%, NewsMax, Addicting Info, Right Wing News, and U.S. Uncut.

What this means is that when you see posts from these sites, there is (overall) about a 2/3 chance that what you're seeing is true.  So if you frequent those pages -- or, more importantly, if you're in the habit of clicking "share" on every story that you find mildly appealing -- you damn well better be able to figure out which third is wrong.

The upshot of it is, we all need better bullshit filters.  Given that we are bombarded daily by hundreds of claims from the well-substantiated to the outrageous, it behooves us to find a way to determine which is which.

And, if you're curious, a 275-word passage from this Skeptophilia post was rated by BlaBlaMeter as having a bullshit rating of 0.13.  Which I find reassuring.  Not bad, considering the topic I was discussing.

**************************************

This week's Skeptophilia book-of-the-week is a bit of a departure from the usual science fare: podcaster and author Rose Eveleth's amazing Flash Forward: An Illustrated Guide to Possible (and Not-So-Possible) Tomorrows.

Eveleth looks at what might happen if twelve things that are currently in the realm of science fiction became real -- a pill becoming available that obviates the need for sleep, for example, or the development of a robot that can make art.  She then extrapolates from those, to look at how they might change our world, to consider ramifications (good and bad) from our suddenly having access to science or technology we currently only dream about.

Eveleth's book is highly entertaining not only for its content, but because it's in graphic novel format -- a number of extremely talented artists, including Matt Lubchansky, Sophie Goldstein, Ben Passmore, and Julia Gfrörer, illustrate her twelve new worlds, literally drawing what we might be facing in the future.  Her conclusions, and the artists' illustrations of them, are brilliant, funny, shocking, and most of all, memorable.

I love her visions even if I'm not sure I'd want to live in some of them.  The book certainly brings home the old adage of "Be careful what you wish for, you may get it."  But as long as they're in the realm of speculative fiction, they're great fun... especially in the hands of Eveleth and her wonderful illustrators.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]



Thursday, March 25, 2021

A tsunami of lies

One of the ways the last few years have changed me is that they've made me go into an apoplectic rage when I see people sharing false information on social media.

I'm not talking about the occasional goof; I've had times myself when I've gotten suckered by parody news accounts, and posted something I thought was true that turned out to be some wiseass trying to be funny.  What bothers me is the devastating flood of fake news on everything from vaccines to climate change to politics, exacerbated by "news" agencies like Fox and OAN that don't seem to give a shit about whether what they broadcast is true, only that it lines up with the agenda of their directors.

I've attributed this tsunami of lies to two reasons: partisanship and ignorance.  (And to the intersection of partisanship and ignorance, where lie the aforementioned biased media sources.)  If you're ignorant of the facts, of course you'll be prone to falling for an appealing falsehood; and partisanship in either direction makes you much more likely to agree unquestioningly with a headline that lines up with what you already believed to be true.

Turns out -- ironically -- the assumption that the people sharing fake news are partisan, ignorant, or both might itself be an appealing but inaccurate assessment of what's going on.  A study in Nature this week has generated some curious results showing that once again, reality turns out to be more complex than our favored black-and-white assessments of the situation.


[Image is in the Public Domain]

Ziv Epstein, Mohsen Mosleh, Antonio Arechar, Dean Eckles, and David Rand (of the Massachusetts Institute of Technology) and Gordon Pennycook (of the University of Regina) decided to see what was really motivating people to share false news stories online, and they found -- surprisingly -- that sheer carelessness played a bigger role than either partisanship or ignorance.  In "Shifting Attention to Accuracy Can Reduce Misinformation Online," the team describes a series of experiments involving over a thousand volunteers, leading to the heartening conclusion that there might be a better way to stem the flood of lies online than getting people to change their political beliefs or engaging in a massive education program.

The setup of the study was as simple as it was elegant.  They first tested the "ignorance" hypothesis by presenting test subjects with various headlines, some true and some false, and asking them to determine which were which.  It turns out people are quite good at this; there was a full 56-percentage-point gap between how often they judged the headlines correctly and how often they got them wrong.

Next, they tested the "partisanship" hypothesis.  The test subjects did worse on this task, but still the error rate wasn't as big as you might guess; people were still 10% less likely to rate true statements as false (or vice versa) even if those statements agreed with the majority stance of their political parties.  So partisanship plays a role in erroneous belief, but it's not the set of blinders many -- including myself -- would have guessed.

Last -- and this is the most interesting test -- they asked volunteers to assess how likely they would be to share the news stories online, based on their headlines.  Here, the difference between sharing true versus false stories dropped to only six percentage points.  Put differently, people who are quite good at discerning false information overall, and still pretty good at recognizing it even when it runs counter to their political beliefs, will often go ahead and share it anyhow.

What it seems to come down to is simple carelessness.  It's gotten so easy to share links that we do it without giving it much thought.  I know I've been a bit shame-faced when I've clicked "retweet" on a link on Twitter, and gotten the message, "Don't you want to read the article first?"  (In my own defense, it's usually been because the story in question is from a source like Nature or Science, and I've gotten so excited by whatever it was that I clicked "retweet" right away, fully intending to read the article afterward.  Another reason is the exasperating way Twitter auto-refreshes at seemingly random moments, so if you don't respond to a post right away, it might disappear forever.)

The rate at which people detected (and chose not to share) fake headlines turned out to be remarkably easy to improve.  The researchers found that reminding people of the importance of accuracy at the start of the experiment decreased the volunteers' willingness to share false information, as did asking them to assess the accuracy of the headline prior to making the decision about whether to share it.

It does make me wonder, though, about the role of pivotal "nodes" in the flow of misinformation -- a few highly-motivated people who start the ball of fake news rolling, with the rest of us spreading around the links (whatever our motivation for doing so) in a more piecemeal fashion.  A study by Zignal Labs, for example, found that the amount of deceptive or outright false political information on Twitter went down by a stunning 73% after Donald Trump's account was closed permanently.  (Think of what effect it might have had if Twitter had made this decision back in 2015.)

In any case, to wrap this up -- and to do my small part in addressing this problem -- just remember before you share anything that accuracy matters.  Truth matters.  It's very easy to click "share," but with that ease comes a responsibility to make sure that what we're sharing is true.  We ordinary folk can't dam the flow of bullshit singlehandedly, but each one of us has to take seriously our role in stopping up the leaks, small as they may seem.

******************************************

Last week's Skeptophilia book-of-the-week, Simon Singh's The Code Book, prompted a reader to respond, "Yes, but have you read his book on Fermat's Last Theorem?"

In this book, Singh turns his considerable writing skill toward the fascinating story of Pierre de Fermat, the seventeenth-century French mathematician who -- amongst many other contributions -- touched off over three hundred years of controversy by writing that there were no integer solutions for the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than 2, then adding, "I have discovered a truly marvelous proof of this, which this margin is too narrow to contain," and proceeding to die before elaborating on what this "marvelous proof" might be.

The attempts to recreate Fermat's proof -- or at least find an equivalent one -- began with his contemporaries Marin Mersenne, Blaise Pascal, and John Wallis, continued through later mathematicians such as Évariste Galois, and went on to stump the greatest minds in mathematics for the next three centuries.  Fermat's conjecture was finally proven correct by Andrew Wiles in 1994.

Singh's book Fermat's Last Theorem: The Story of a Riddle that Confounded the World's Greatest Minds for 350 Years describes the hunt for a solution and the tapestry of personalities that took on the search -- ending with a tour-de-force paper by soft-spoken British mathematician Andrew Wiles.  It's a fascinating journey, as enjoyable for a curious layperson as it is for the mathematically inclined -- and in Singh's hands, makes for a story you will thoroughly enjoy.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Tuesday, December 10, 2019

Misremembering the truth

There are two distinct, but similar-sounding, cognitive biases that I've written about many times here at Skeptophilia because they are such tenacious barriers to rational thinking.

The first, confirmation bias, is our tendency to uncritically accept claims when they fit with our preconceived notions.  It's why a lot of conservative viewers of Fox News and liberal viewers of MSNBC sit there watching and nodding enthusiastically without ever stopping and saying, "... wait a moment."

The other, dart-thrower's bias, is more built-in.  It's our tendency to notice outliers (because of their obvious evolutionary significance as danger signals) and ignore, or at least underestimate, the ordinary as background noise.  The name comes from the thought experiment of being in a bar while there's a darts game going on across the room.  You'll tend to notice the game only when there's an unusual throw -- a bullseye, or perhaps impaling the bartender in the forehead -- and not even be aware of it otherwise.

Well, we thought dart-thrower's bias was more built into our cognitive processing system and confirmation bias more "on the surface" -- and the latter therefore more culpable, conscious, and/or controllable.  Now, it appears that confirmation bias might be just as hard-wired into our brains as dart-thrower's bias is.

A paper appeared this week in Human Communication Research, describing research conducted by a team led by Jason Coronel of Ohio State University.  In "Investigating the Generation and Spread of Numerical Misinformation: A Combined Eye Movement Monitoring and Social Transmission Approach," Coronel, along with Shannon Poulsen and Matthew D. Sweitzer, did a fascinating series of experiments that showed we not only tend to accept information that agrees with our previous beliefs without question, we honestly misremember information that disagrees -- and we misremember it in such a way that in our memories, it further confirms our beliefs!

The location of memories (from Memory and Intellectual Improvement Applied to Self-Education and Juvenile Instruction, by Orson Squire Fowler, 1850) [Image is in the Public Domain]

What Coronel and his team did was to present 110 volunteers with passages containing true numerical information on social issues (such as support for same-sex marriage and rates of illegal immigration).  In some cases, the passages agreed with what (according to polls) most people believe to be true, such as that the majority of Americans support same-sex marriage.  In other cases, the passages contained information that (while true) is widely thought to be untrue -- such as the fact that illegal immigration across the Mexican border has been dropping for years and is now at its lowest rates since the mid-1990s.

Across the board, people tended to recall the information that aligned with the conventional wisdom correctly, and the information that didn't incorrectly.  Further -- and this is what makes the experiment even more fascinating -- when people read the unexpected information, data that contradicted the general opinion, eye-tracking monitors recorded that they hesitated while reading, as if they recognized that something was strange.  In the immigration passage, for example, they read that the number of immigrants had decreased from 12.8 million in 2007 to 11.7 million in 2014, and the readers' eyes bounced back and forth between the two numbers as if their brains were saying, "Wait, am I reading that right?"

So they spent longer on the passage that conflicted with what most people think -- and still tended to remember it incorrectly.  In fact, most of the people who misremembered recalled the two numbers themselves correctly -- 12.8 million and 11.7 million -- showing that they'd paid attention and hadn't just scoffed and glossed over the passage when they hit something they thought was wrong.  But when questioned afterward, they remembered the numbers backwards, as if the passage had actually supported what they'd believed prior to the experiment!

If that's not bad enough, Coronel's team then ran a second experiment, where the test subjects read the passage, then had to repeat the gist to another person, who then passed it to another, and so on.  (Remember the elementary school game of "Telephone?")  Not only did the data get flipped -- usually in the first transfer -- subsequently, the difference between the two numbers got greater and greater (thus bolstering the false, but popular, opinion even more strongly).  In the case of the immigration statistics, the gap between 2007 and 2014 not only changed direction, but by the end of the game it had widened from 1.1 million to 4.7 million.

This gives you an idea what we're up against in trying to counter disinformation campaigns.  And it also illustrates that I was wrong in one of my own preconceived notions: that people who fall for confirmation bias are somehow guilty of deliberately locking themselves into an echo chamber.  Apparently, both dart-thrower's bias and confirmation bias are somehow built into the way we process information.  We become so certain we're right that our brains subconsciously reject any evidence to the contrary.

Why our brains are built this way is a matter of conjecture.  I wonder if perhaps it might be our tribal heritage at work; that conforming to the norm, and therefore remaining a member of the tribe, has a greater survival value than being the maverick who sticks to his/her guns about a true but unpopular belief.  That's pure speculation, of course.  But what it illustrates is that once again, our very brains are working against us in fighting Fake News -- which these days is positively frightening, given how many powerful individuals and groups are, in a cold and calculated fashion, disseminating false information in an attempt to mislead us, frighten us, or anger us, and so maintain their positions of power.

***********************

This week's Skeptophilia book of the week is brand new; Brian Clegg's wonderful Dark Matter and Dark Energy: The Hidden 95% of the Universe.  In this book, Clegg outlines "the biggest puzzle science has ever faced" -- the evidence for the substances that provide the majority of the gravitational force holding the nearby universe together, while simultaneously making the universe as a whole fly apart -- and which has (thus far) completely resisted all attempts to ascertain its nature.

Clegg also gives us some of the cutting-edge explanations physicists are now proposing, and the experiments that are being done to test them.  The science is sure to change quickly -- every week we seem to hear about new data providing information on the dark 95% of what's around us -- but if you want the most recently-crafted lens on the subject, this is it.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]





Saturday, November 9, 2019

Poisoned by preconceived notions

If you needed something else to make you worry about our capacity to make decisions based on facts, go no further than a study that came out this week from the University of Texas at Austin.

Entitled "Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense At All," the study was conducted by Patricia L. Moravec, Randall K. Minas, and Alan R. Dennis of the McCombs School of Business.  And its results should be seriously disheartening for just about everyone.

What they did was a pair of experiments using students who were "social media literate" -- i.e., they should know social media's reputation for playing fast and loose with the truth -- first having them evaluate fifty headlines as true or false, and then giving them headlines with "Fake News" flags appended.  In each case, there was an even split -- in the first experiment, between true and false headlines, and in the second, between true and false headlines flagged as "Fake."

In both experiments, the subjects were hooked up to an electroencephalogram (EEG) machine, to monitor their brain activity as they performed the task.

In the first experiment, it was found -- perhaps unsurprisingly -- that people are pretty bad at telling truth from lies when presented only with a headline.  But the second one is the most interesting, and also the most discouraging.  Because what the researchers found is that when a true headline is flagged as false, and a false headline is flagged as true, this causes a huge spike in activity of the prefrontal cortex -- a sign of cognitive dissonance as the subject tries desperately to figure out how this can be so -- but only if the labeling of the headline as such disagrees with what they already believed.


[Image is in the Public Domain]

So we're perfectly ready to believe the truth is a lie, or a lie is the truth, if it fits our preconceived notions.  And worse still, the researchers saw that, in general, even though subjects showed an uncomfortable amount of cognitive processing when confronted with something that contradicted what they thought was true, it didn't have much influence on what they believed after the experiment.

In other words, you can label the truth a lie, or a lie the truth, but it won't change people's minds if they already believed the opposite.  Our ability to discern fact from fiction, and use that information to craft our view of the world, is poisoned by our preconceived notions of what we'd like to be true.

Before you start pointing fingers, the researchers also found that there was no good predictor of how well subjects did on this test.  They were all bad -- Democrats and Republicans, higher IQ and lower IQ, male and female.

"When we’re on social media, we’re passively pursuing pleasure and entertainment," said Patricia Moravec, who was lead author of the study, in an interview with UT News.  "We’re avoiding something else...  The fact that social media perpetuates and feeds this bias complicates people’s ability to make evidence-based decisions.  But if the facts that you do have are polluted by fake news that you truly believe, then the decisions you make are going to be much worse."

This is insidious because even if we are just going on social media to be entertained, the people posting political advertisements on social media aren't.  They're trying to change our minds.  And what the Moravec et al. study shows is that we're not only lousy at telling fact from fiction, we're very likely to get suckered by a plausible-sounding lie (or, conversely, to disbelieve an inconvenient truth) if it fits with our preexisting political beliefs.

Which makes it even more incumbent on the people who run social media platforms (yeah, I'm lookin' at you, Mark Zuckerberg) to have on-staff fact checkers who are empowered to reject ads on both sides of the political aisle that make false claims.  It's not enough to cite free speech rights as an excuse for abdicating your duty to protect people from immoral and ruthless politicians who will say or do anything to gain or retain power.  The people in charge of social media are under no obligation to run any ad someone's willing to pay for.  It's therefore their duty to establish criteria for which ads are going to show up -- and one of those criteria should surely be whether the ad is telling the truth.

The alternative is that our government will continue to be run by whoever has the cleverest, most attractive propaganda.  And as we've seen over the past three years, this is surely a recipe for disaster.

**********************************

This week's Skeptophilia book recommendation is a fun book about math.

Bet that's a phrase you've hardly ever heard uttered.

Jordan Ellenberg's amazing How Not to Be Wrong: The Power of Mathematical Thinking looks at how critical it is for people to have a basic understanding and appreciation for math -- and how misunderstandings can lead to profound errors in decision-making.  Ellenberg takes us on a fantastic trip through dozens of disparate realms -- baseball, crime and punishment, politics, psychology, artificial languages, and social media, to name a few -- and how in each, a comprehension of math leads you to a deeper understanding of the world.

As he puts it: math is "an atomic-powered prosthesis that you attach to your common sense, vastly multiplying its reach and strength."  Which is certainly something that is drastically needed lately.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]





Thursday, June 27, 2019

An inoculation against nonsense

In my Critical Thinking classes, I always included an assignment that was a contest to see who could create the most convincing fake photo or video clip of a ghost, Bigfoot, UFO, or some other paranormal phenomenon.  The students loved it, but I had a very definite reason for assigning it: to show them how easy digital manipulation is.  And some of the results were seriously creepy -- a few were so completely convincing that if I hadn't known better (and hadn't been a skeptic about such matters anyhow) I could easily have seen myself believing they were real.

My hope was that if they saw that high school students can generate plausible fakes, they'd be on guard about believing photographic "evidence" they see online and in the news.  In fact, to mislead people you don't even have to manipulate photos; all you have to do is mislabel them.  Look at the Oregon GOP's official site labeling a photograph of a protest by loggers as a group of armed "militia" threatening the Democrats who had insisted that Republican legislators come back and vote on climate change legislation rather than skipping town so the measure would fail for lack of a quorum.

The result, of course, was like throwing gasoline on a fire -- which is almost certainly what the GOP wanted.

The ghost photo assignment, though, shows that you can inoculate people against being fooled by (oh, how I hate this phrase) "fake news."  And my anecdotal evidence of the success of such a strategy got a boost in a piece of research out of the University of Cambridge that appeared this week in Palgrave Communications, called "Fake News Game Confers Psychological Resistance Against Online Misinformation," by Jon Roozenbeek and Sander van der Linden.  In their study, they gave volunteers a game called "Bad News" to play, which challenges them to create the most convincing fake news article they can.  Players get points for how many people in the game are convinced, and lose "credibility points" if their stories get rejected.

[Image licensed under the Creative Commons GDJ, FAKE NEWS, CC0 1.0]

What the researchers found was remarkable.  People who played the game rated fake news as 21% less reliable afterward -- but showed no such drop in their belief in real news.  So it didn't turn them into cynics, thinking that everything they see is false; it simply made them aware of what kinds of features are woven into fake news to make it more attractive.

"Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after-the-fact can be like fighting a losing battle," said Sander van der Linden, Director of the Cambridge Social Decision-Making Lab and co-author of the study.  "We wanted to see if we could pre-emptively debunk, or ‘pre-bunk’, fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived.  This is a version of what psychologists call ‘inoculation theory’, with our game working like a psychological vaccination...  We find that just fifteen minutes of gameplay has a moderate effect, but a practically meaningful one when scaled across thousands of people worldwide, if we think in terms of building societal resistance to fake news."

"We are shifting the target from ideas to tactics," added co-author Jon Roozenbeek.  "By doing this, we are hoping to create what you might call a general ‘vaccine’ against fake news, rather than trying to counter each specific conspiracy or falsehood."

All of which is a cheering thought.  There's always the risk, when you teach people critical thinking skills, that they'll slide from gullibility to cynicism, and not see that disbelieving everything out of hand is as lazy (and inaccurate) as believing everything out of hand.  What we have here is a strategy for giving people immunity to pseudoscience and conspiracy theories, which in today's world we sorely need.

Of course, there'll always be the ones who resist what you're trying to teach them -- the anti-vaxxers of critical thinking, so to speak.  But with luck, techniques like this might reduce their numbers to manageable proportions, and increase the likelihood of herd immunity for the rest of us.

***************************************

Richard Dawkins is a name that often sets people's teeth on edge.  However, the combative evolutionary biologist, whose no-holds-barred approach to young-Earth creationists has given him a well-deserved reputation for being unequivocally devoted to evidence-based science and an almost-as-well-deserved reputation for being hostile to religion in general, has written a number of books that are must-reads for anyone interested in the history of life on Earth -- The Blind Watchmaker, Unweaving the Rainbow, Climbing Mount Improbable, and (most of all) The Ancestor's Tale.

I recently read a series of essays by Dawkins, collectively called A Devil's Chaplain, and it's well worth checking out, whatever you think of the author's forthrightness.  From the title, I expected a bunch of anti-religious screeds, and I was pleased to see that they were more about science and education, and written in Dawkins's signature lucid, readable style.  They're all good, but a few are sheer brilliance -- his piece, "The Joy of Living Dangerously," about the right way to approach teaching, should be required reading in every teacher-education program in the world, and "The Information Challenge" is an eloquent answer to one of the most persistent claims of creationists and intelligent-design advocates -- that there's no way to "generate new information" in a genome, and thus no way organisms can evolve from less complex forms.

It's an engaging read, and I recommend it even if you don't necessarily agree with Dawkins all the time.  He'll challenge your notions of how science works, and best of all -- he'll make you think.

[If you purchase this book using the image/link below, part of the proceeds will go to support Skeptophilia!]





Monday, May 14, 2018

Fast forward

In today's contribution from the Unintentional Irony Department, we have: a study out of the University at Buffalo that examined the pervasiveness of false information on Twitter, which a Twitter user summarized incorrectly, then posted the inaccurate summary...  on Twitter.

The study, which appeared in the May 11 issue of the journal Natural Hazards, looked at the responses of people who interacted with tweeted false information following Hurricane Sandy and the Boston Marathon bombing.  What they found was interesting, if a little disheartening.  Of the people who chose to respond to the false tweets:
  • 86 to 91 percent of the users fostered the spread of the false news, either by retweeting or "liking" the original tweet;
  • 5 to 9 percent looked for confirmation, most often by retweeting and requesting anyone who had accurate information to respond; and
  • 1 to 9 percent were dubious right from the get-go, and said they had information indicating the original tweet was incorrect.
So it's kind of discouraging that given tweets the researchers knew were false, only around ten percent of the people who chose to respond even asked the question of whether the content of the tweet was factually correct.

[Image licensed under the Creative Commons Ibrahim.ID, Socialmedia-pm, CC BY-SA 4.0]

Jun Zhuang, lead author of the study, was up front about how alarming this is.  "These findings are important because they show how easily people are deceived during times when they are most vulnerable and the role social media platforms play in these deceptions," Zhuang said.  However, he also pointed out what was the first thing that occurred to me when I read the study.  "[However], it's possible that many people saw these tweets, decided they were inaccurate and chose not to engage."

Which, despite my frequently combative attitude here at Skeptophilia, is how I usually approach that sort of thing online.  I've found that posting rebuttals to total strangers seldom accomplishes anything, more often than not resulting in your being called a know-it-all or a deluded mouthpiece for the [fill in with your favorite political party] or simply a hopeless dunderhead.  So my guess is -- and it is just a surmise -- that the people who actually chose to interact with the tweets in question were (1) a minority, and (2) heavily skewed toward ones who already had a tendency to believe the claim in question.

In other words, yet another example of confirmation bias.

Which is what makes where I found out about this study even more wryly amusing.  Because I got the link to the Zhuang et al. study on Twitter -- from a tweet that said, "STUDY SHOWS THAT 90% OF WHAT YOU READ ON TWITTER IS FALSE!"

Well, as I hope I don't need to point out to loyal readers of Skeptophilia, that is actually not what the study said.  Not even remotely.  So a tweet saying that 90% of what's on Twitter is false was false itself.

And, for the record, I didn't respond to it, unless you consider this post a response, which I suppose it is.

What compounds this whole thing is the tendency of people to retweet (or repost) links after only having read the headline -- witness the Science Post article with the headline, "Study: 70% of Facebook Users Only Read the Headline of Science Stories Before Commenting," which was shared all over the place, despite the fact that the article contained no links to any studies, just repeated the claim in the headline, and followed up with several iterations of "Lorem Ipsum:"
Lorem ipsum dolor sit amet, consectetur adipiscing elit.  Nullam consectetur ipsum sit amet sem vestibulum eleifend.  Donec sed metus nisi.  Quisque ultricies nulla a risus facilisis vestibulum.  Ut luctus feugiat nisi, eget molestie magna faucibus vitae.  Morbi luctus orci eget semper fringilla.  Proin vestibulum neque a ultrices aliquet.  Fusce imperdiet purus in euismod accumsan.  Suspendisse potenti.  Nullam efficitur feugiat nibh, at pellentesque mauris.  Suspendisse potenti.  Maecenas efficitur urna velit, ut gravida enim vestibulum eu.  Nullam suscipit finibus tellus convallis lacinia.  Aenean ex nunc, posuere sit amet mauris ac, venenatis efficitur nulla.  Nam auctor eros eu libero rutrum, ac tristique nunc tincidunt.  Mauris eu turpis rutrum mi scelerisque volutpat.
I wonder how many people shared that article after only reading the headline.

Speaking of irony.

So anyway, I'll just beseech you once again to read the whole article before you evaluate it, and evaluate the whole article before you share it.  Ask questions.  Look for supporting information.  Consult such fact-checking sites as Snopes and PolitiFact.  Consider source bias -- and the natural tendency to confirmation bias we all have.

Because the last thing we need is more people blindly fast-forwarding fake news.

***********************

This week's recommended book is an obscure little tome that I first ran into in college.  It's about a scientific hoax -- some chemists who claimed to have discovered what they called "polywater," a polymerized form of water that was highly viscous and stayed liquid from -70 F to 500 F or above.  The book is a fascinating, and often funny, account of an incident that combines confirmation bias with wishful thinking with willful misrepresentation of the evidence.  Anyone who's interested in the history of science or simply in how easy it is to fool the overeager -- you should put Polywater by Felix Franks on your reading list.






Friday, January 19, 2018

Climbing Mount Stupid

So the long-awaited "Fake News Awards," intended to highlight the "most DISHONEST and CORRUPT members of the media," were announced yesterday.

Or at least, Donald Trump attempted to announce them.  Less than a minute after the announcement was made, the site crashed, and the last I checked, it hadn't been fixed.  But a screen capture made before the site went down lets us know who the winners were.  They seem to fall into two categories:
  1. Simple factual misreporting, all of which was corrected by the news agency at fault after more accurate information was brought forth.
  2. Anyone who dared to criticize Donald Trump.
Unsurprisingly, this included CNN, The Washington Post, and The New York Times.  The tweetstorm from Trump hee-hawing about how he'd really shown the press a thing or two by calling them all mean nasty poopyhead fakers ended with his mantra "THERE IS NO COLLUSION," which more than ever sounds like "Pay no attention to the man behind the curtain."

So far, this is unremarkable, given that accusing everyone who disagrees with him of lying, while simultaneously claiming that he is always right, has been part of Trump's playbook ever since he jumped into politics.  But just last week a study, authored by S. Mo Jang and Joon K. Kim of the University of South Carolina School of Journalism and Mass Communications, brought the whole "fake news" thing into sharper focus.  Their research has shown that people are perfectly willing to accept that fake, corrupt news media exist...

... but that people of the other political party are the only ones who are falling for it.

The study, which appeared in Computers in Human Behavior, was titled, "Third Person Effects of Fake News: Fake News Regulation and Media Literacy Interventions."  The authors write:
Although the actual effect of fake news online on voters’ decisions is still unknown, concerns over the perceived effect of fake news online have prevailed in the US and other countries.  Based on an analysis of survey responses from national samples (n = 1299) in the US, we found a strong tendency of the third-person perception.  That is, individuals believed that fake news would have greater effects on out-group members than themselves or in-group members.  Additionally, we proposed a theoretical path model, identifying the antecedents and consequences of the third-person perception.  The results showed that partisan identity, social undesirability of content, and external political efficacy were positive predictors of the third-person perception.  Interestingly, our findings revealed that third-person perception led to different ways of combating fake news online.  Those with a greater level of third-person perception were more likely to support the media literacy approach but less likely to support the media regulation approach.
Put more simply, people tended to think they were immune to the effects of fake news themselves -- i.e., they "saw through it."  The other folks, though, were clearly being fooled.

Probably the only reasonable explanation of why everyone doesn't agree with me, right?

Of course right.

It's just the Dunning-Kruger effect again, isn't it?  Everyone thinks they're smarter than average.


All this amounts to is another way we insulate ourselves from even considering the possibility that we might be wrong.  Sure, there are wrong people out there, but it can't be us.

Or as a friend of mine put it, "The first rule of Dunning-Kruger Club is that you don't know you belong to Dunning-Kruger Club."

Jang and Kim focused on American test subjects, but it'd be interesting to see how much this carried over across cultures.  As I've observed before, a lot of the American cultural identity revolves around how much better we are than everyone else.  This attitude of American exceptionalism -- the "'Murika, Fuck Yeah!" approach -- not only stops us from considering other possible answers to the problems we face, but prevents any challenge to the path we are taking.

It'd be nice to think that studies like this would pull people up short and make them reconsider, but I'm guessing it won't.  We have far too much invested in our worldviews to examine them closely because of a couple of ivory-tower scientists.

And anyway, even if they are right, and people are getting suckered by claims of fake news when it fits their preconceived notions to accept them, they can't mean me, right?  I'm too smart to get fooled by that.

I'm significantly above average, in fact.

Tuesday, November 7, 2017

Stopping the rumor machine

Twenty-six people are dead in yet another mass shooting, this one in a Baptist church in Sutherland Springs, a small community 21 miles from San Antonio, Texas.

The killer, Devin Patrick Kelley, died near the scene of the crime.  He had been fired upon by a local resident as he fled the church, and was later found in his car, dead of a gunshot wound.  It is at present undetermined if the bullet that killed him came from the resident's gun, or if it was a self-inflicted wound.

Devin Patrick Kelley

Wiser heads than mine have already taken up the issue of stricter gun control, especially in cases like Kelley's.  Kelley was court-martialed in 2012 for an assault on his wife and child, spent a year in military confinement, and received a bad-conduct discharge.  All I will say is that I find it a little hard to defend an assault rifle being in the hands of a man who had been convicted of... assault.

I also have to throw out there that the whole "thoughts and prayers" thing is getting a little old.  If thoughts and prayers worked, you'd think the attack wouldn't have happened in the first place, given that the victims were in a freakin' church when it occurred.

But that's not why I'm writing about Kelley and the Sutherland Springs attack.  What I'd like to address here is how, within twelve hours of the attack, there was an immediate attempt by damn near everybody to link Kelley to a variety of groups, in each case to conform to the claimant's personal bias about how the world works.

Here are just a few of the ones I've run into:
  • Someone made a fake Facebook page for Kelley in which there was a photograph of his weapon, a Ruger AR-556, with the caption, "She's a bad bitch."
  • Far-right-wing activists Mike Cernovich and Alex Jones immediately started broadcasting the claim that Kelley was a member of Antifa.  This was then picked up by various questionable "news" sources, including YourNewsWire.com, which trumpeted the headline, "Texas Church Shooter Was Antifa Member Who Vowed to Start Civil War."
  • Often using the Alex Jones article as evidence, Twitter erupted Sunday night with a flurry of claims that Kelley was a Democrat frustrated by Donald Trump's presidential win, and was determined to take revenge on a bunch of god-fearing Republicans.
  • An entirely different bunch of folks on Twitter started the story that Kelley was actually a Muslim convert named Samir al-Hajeeda.  Coincidentally, Samir al-Hajeeda was blamed by many of these same people for the Las Vegas shootings a month ago.  It's a little hard to fathom how anyone could believe that, given that both gunmen died at the scenes of their respective crimes.
  • Not to be outdone, the website Freedum Junkshun claimed that Kelley was an "avid atheist" named Raymond Peter Littlebury, who was "on the payroll of the DNC."
And so on and so forth.

Look, I've made the point before.  You can't stop this kind of thing from zinging at light speed around the interwebz.  Fake news agencies gonna fake news, crazies gonna craze, you know?  Some of these sources were obviously pseudo-satirical clickbait right from the get-go.  I mean, did anyone even look at the name of the site Freedum Junkshun and wonder why they spelled it that way?

And for heaven's sake, Mike Cernovich and Alex Jones?  At this point, if Cernovich and Jones said the grass was green, I'd want an independent source to corroborate the claim.

So it's not the existence of these ridiculous claims I want to address.  It's the people who hear them and unquestioningly believe them.

I know it's easy to fall into the confirmation bias trap -- accepting a claim because it's in line with what you already believed, be it that all conservatives are violent gun nuts, all liberals scheming slimeballs, all Muslims potential suicide bombers, all religious people starry-eyed fanatics, all atheists amoral agents of Satan himself.  It takes work to counter our tendency to swallow whole any evidence of what we already believed.

But you know what?  You have to do it.  Because otherwise you become prey to the aforementioned crazies and promoters of fake news clickbait.  If you don't corroborate what you post, you're not supporting your beliefs; you're playing right into the hands of people who are trying to use your single-minded adherence to your sense of correctness to achieve their own ends.

At the time of this writing, we know next to nothing about Devin Patrick Kelley other than his military record and jail time.  We don't know which, if any, political affiliation he had, whether or not he was religious, whether he was an activist or simply someone who wanted to kill people.  So all of this speculation, all of these specious claims, are entirely vacuous.

Presumably at some point we'll know more about Kelley.  At the moment, we don't.

So please please please stop auto-posting these stories.  At the very least, cross-check what you post against other sources, and check out a few sources from different viewpoints.  (Of course if you cross-check Breitbart against Fox News, or Raw Story against ThinkProgress, you're gonna get the same answer.  That's not cross-checking, that's slamming the echo chamber's door shut behind you.)

Otherwise you are not only falling for nonsense, you are directly contributing to the divisiveness that is currently ripping our nation apart.

As the brilliant physicist Richard Feynman put it: "You must be careful not to believe something simply because you want it to be true.  Nobody can fool you as easily as you can fool yourself."

Saturday, September 30, 2017

Fake news reflex

I have never been one to post political stuff on social media.

For one thing, I don't think it changes anyone's mind.  Besides the fact that most people simply rah-rah the stuff they already believed and ignore everything else, there's also the tendency of folks to read the headline only -- one study that compared clicks to shares found that 59% of the links shared to social media had not been opened by the person sharing them.

Then there's the fact that most of the time, I just don't want to get into it with people.  That may be surprising coming from someone who writes a blog that is sometimes controversial, occasionally downright incendiary.  But when I get on social media, I'm really not looking for a fight.  I'd much rather see funny memes and pictures of cute puppies than get into a snarling match over, for example, how, where, and how much we should respect the American flag.

Which is why it was ill-advised of me to post a story from Vice that appeared three days ago, describing a move by Trump administration officials from the Department of Justice to argue before the Second Circuit Court of Appeals that employers should be able to fire employees for being gay.  The case in question, Zarda v. Altitude Express, originated from an incident in 2010 when skydiving instructor Donald Zarda sued his former employer, alleging that his firing had been based solely on his sexual orientation.

[image courtesy of the Wikimedia Commons]

Predictably, I found this appalling, and in a moment of pique, I posted it to Twitter, which auto-posted it to my Facebook.  Most of the responses I got shared my anger at the situation, but one of them said, simply, "Fake news."

And no, she wasn't making a joke.  I know that she's fairly conservative, and this kind of heavy-handed federal government interference in the court system runs pretty counter to the Republican narrative of small government and a hands-off approach to local and state jurisprudence, so I'm guessing she realized that if the story were true, it would look pretty hypocritical.  (More and more, it's become clear that the current administration wants small government until it wants big government, and sees no contradiction at all in demanding both, practically at the same time.)

So she just called it "fake news" and forthwith dismissed it. 

It's not, in fact, fake news at all.  I know Vice is pretty strongly left-leaning, so it's reasonable to view what they post through that lens; but a five-minute Google search brought me to the amicus curiae brief filed by attorneys for the Department of Justice, and it's exactly what the Vice article described.  Beyond that, there were dozens of media sources -- left, center, and right -- that carried the story, and all said substantially the same thing.

(One hopeful note: given how badly DOJ attorney Hashim Mooppan's arguments crashed and burned in front of Second Circuit Judge Rosemary Pooler, it looks likely that the strategy may have backfired rather spectacularly, as an overview of the case in Slate describes.)

So it obviously wasn't "fake news," regardless of your political persuasion or your attitude toward LGBT individuals, discrimination cases, or Vice.  What on earth could prompt someone to say that?  I know the person who made the comment is quite intelligent, articulate, and well-spoken.  We don't agree on much politically, but we've always been pretty cordial to each other despite our differences.

It's a troubling impulse.  Confirmation bias, where you accept claims for which there is little to no evidence because they fit with what you already believed, is as illogical as rejecting claims because they run counter to the talking points from your political party.

In fact, the latter may well be worse, because that immediate, reflexive, knee-jerk rejection of what you very much want not to be true makes you ignore the facts that could signal you've made a mistake -- that you hold a belief which, in fact, is not correct.  It insulates you from catching your own errors in judgment, logic, or simple fact.

Which might well be comforting, but it doesn't lead to better understanding.  Me, I prefer to admit I'm wrong and correct the mistake.  As Carl Sagan put it, "It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring."

And this extends to political arguments which, although they often involve emotions and competing interests, should still be based on actual factual information.  I'll end with another quote, this one from Senator Daniel Patrick Moynihan: "You are entitled to your own opinions, but you are not entitled to your own facts."