Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Thursday, March 25, 2021

A tsunami of lies

One of the ways the last few years have changed me is that they've made me go into an apoplectic rage whenever I see people sharing false information on social media.

I'm not talking about the occasional goof; I've had times myself when I've gotten suckered by a parody news account and posted something I thought was true that turned out to be some wiseass trying to be funny.  What bothers me is the devastating flood of fake news on everything from vaccines to climate change to politics, exacerbated by "news" agencies like Fox and OAN that don't seem to give a shit about whether what they broadcast is true, only that it lines up with the agenda of their directors.

I've attributed this tsunami of lies to two causes: partisanship and ignorance.  (And to the intersection of partisanship and ignorance, where lie the aforementioned biased media sources.)  If you're ignorant of the facts, of course you'll be prone to falling for an appealing falsehood; and partisanship in either direction makes you much more likely to agree unquestioningly with a headline that lines up with what you already believe to be true.

It turns out -- ironically -- that the assumption that the people sharing fake news are partisan, ignorant, or both might itself be an appealing but inaccurate assessment of what's going on.  A study published in Nature this week has generated some curious results showing that, once again, reality is more complex than our favored black-and-white take on the situation.



A team made up of Ziv Epstein, Mohsen Mosleh, Antonio Arechar, Dean Eckles, and David Rand (of the Massachusetts Institute of Technology) and Gordon Pennycook (of the University of Regina) set out to see what really motivates people to share false news stories online, and found -- surprisingly -- that sheer carelessness plays a bigger role than either partisanship or ignorance.  In "Shifting Attention to Accuracy Can Reduce Misinformation Online," the team describes a series of experiments, involving over a thousand volunteers, that leads to the heartening conclusion that there might be a better way to stem the flood of lies online than changing people's political beliefs or mounting a massive education program.

The setup of the study was as simple as it was elegant.  The researchers first tested the "ignorance" hypothesis by presenting test subjects with various headlines, some true and some false, and asking them to determine which were which.  It turns out people are quite good at this: subjects' rate of correctly identifying true and false headlines exceeded their error rate by a full 56 percentage points.

Next, they tested the "partisanship" hypothesis.  The test subjects did worse on this task, but the effect still wasn't as big as you might guess; even when a statement lined up with the majority stance of their political party, people were still ten percentage points more likely to rate it correctly than to misjudge it (true as false, or vice versa).  So partisanship plays a role in erroneous belief, but it's not the set of blinders many -- including myself -- would have guessed.

Last -- and this is the most interesting test -- they asked volunteers to assess how likely they would be to share the news stories online, based on their headlines.  Here, the gap between sharing true versus false stories dropped to only six percentage points.  Put a different way: people who are quite good at discerning false information overall, and still pretty good at recognizing it even when it runs counter to their political beliefs, will often go ahead and share a false story anyhow.
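
To make those percentage-point figures concrete, here's a minimal sketch in Python -- with made-up numbers, not the study's actual data -- of how a "discernment gap" of this kind can be computed: the share of true headlines a subject endorses (rates as accurate, or says they'd share) minus the share of false ones.

```python
# Hypothetical illustration of the "discernment gap" described above.
# A judgment is a pair (headline_is_true, subject_said_yes), where "yes"
# means "rated it accurate" in the accuracy task, or "would share it"
# in the sharing task.

def discernment_gap(judgments):
    """Percentage of true headlines endorsed minus percentage of false ones."""
    true_yes = [yes for is_true, yes in judgments if is_true]
    false_yes = [yes for is_true, yes in judgments if not is_true]
    pct_true = 100 * sum(true_yes) / len(true_yes)
    pct_false = 100 * sum(false_yes) / len(false_yes)
    return pct_true - pct_false

# Invented responses for one volunteer: sharp on the accuracy task...
accuracy_task = ([(True, True)] * 78 + [(True, False)] * 22 +
                 [(False, True)] * 22 + [(False, False)] * 78)
# ...but barely discriminating when deciding what to share.
sharing_task = ([(True, True)] * 53 + [(True, False)] * 47 +
                [(False, True)] * 47 + [(False, False)] * 53)

print(discernment_gap(accuracy_task))  # 56.0 -- like the accuracy gap above
print(discernment_gap(sharing_task))   # 6.0 -- like the sharing gap above
```

The toy numbers just show how the same person can post a big gap on one task and a tiny one on the other -- which is exactly the pattern the study found.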

What it seems to come down to is simple carelessness.  It's gotten so easy to share links that we do it without giving it much thought.  I know I've been a bit shame-faced when I've clicked "retweet" on a link on Twitter and gotten the message, "Don't you want to read the article first?"  (In my own defense, it's usually because the story in question is from a source like Nature or Science, and I've gotten so excited by whatever it was that I clicked "retweet" right away, fully intending to read the article afterward.  Another reason is the exasperating way Twitter auto-refreshes at seemingly random moments, so if you don't respond to a post right away, it might disappear forever.)

The rate at which people detected (and chose not to share) fake headlines turned out to be remarkably easy to improve.  The researchers found that reminding people of the importance of accuracy at the start of the experiment decreased the volunteers' willingness to share false information, as did asking them to assess a headline's accuracy before deciding whether to share it.

It does make me wonder, though, about the role of pivotal "nodes" in the flow of misinformation -- a few highly motivated people who start the ball of fake news rolling, with the rest of us spreading the links around (whatever our motivation for doing so) in a more piecemeal fashion.  A study by Zignal Labs, for example, found that the amount of deceptive or outright false political information on Twitter went down by a stunning 73% after Donald Trump's account was permanently suspended.  (Think of what effect it might have had if Twitter had made this decision back in 2015.)

In any case, to wrap this up -- and to do my small part in addressing this problem -- just remember before you share anything that accuracy matters.  Truth matters.  It's very easy to click "share," but with that ease comes a responsibility to make sure that what we're sharing is true.  We ordinary folk can't dam the flow of bullshit singlehandedly, but each one of us has to take seriously our role in stopping up the leaks, small as they may seem.

******************************************

Last week's Skeptophilia book-of-the-week, Simon Singh's The Code Book, prompted a reader to respond, "Yes, but have you read his book on Fermat's Last Theorem?"

In this book, Singh turns his considerable writing skill toward the fascinating story of Pierre de Fermat, the seventeenth-century French mathematician who -- amongst many other contributions -- touched off over three hundred years of controversy by writing that there are no positive integer solutions to the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than 2, then adding, "I have discovered a truly marvelous proof of this, which this margin is too narrow to contain," and proceeding to die before elaborating on what this "marvelous proof" might be.
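
As a quick illustration of what Fermat was claiming -- a toy example of my own, not anything from Singh's book -- a brute-force search turns up plenty of positive-integer solutions when n = 2 (the familiar Pythagorean triples) and none at all once n reaches 3:

```python
# Brute-force illustration of Fermat's Last Theorem (toy example).
# For exponent n, find every (a, b, c) with 1 <= a <= b <= limit
# and a**n + b**n == c**n for some c <= limit.

def solutions(n, limit):
    powers = {c ** n: c for c in range(1, limit + 1)}  # c**n -> c lookup
    return [(a, b, powers[a ** n + b ** n])
            for a in range(1, limit + 1)
            for b in range(a, limit + 1)
            if a ** n + b ** n in powers]

print(solutions(2, 25))  # [(3, 4, 5), (5, 12, 13), (6, 8, 10), ...]
print(solutions(3, 25))  # [] -- and Wiles proved it stays empty for all n > 2
```

Of course, no finite search can prove the theorem -- that's what took 350 years -- but it shows how stark the difference between the n = 2 and n > 2 cases is.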

The attempts to recreate Fermat's proof -- or at least find an equivalent one -- began with Fermat's contemporaries Marin Mersenne, Blaise Pascal, and John Wallis, and continued for the next three centuries to stump the greatest minds in mathematics, Évariste Galois among them.  Fermat's conjecture was finally proven correct by Andrew Wiles in 1994.

Singh's book Fermat's Last Theorem: The Story of a Riddle That Confounded the World's Greatest Minds for 350 Years describes the hunt for a proof and the tapestry of personalities who took up the search -- ending with a tour-de-force paper by soft-spoken British mathematician Andrew Wiles.  It's a fascinating journey, as enjoyable for a curious layperson as for the mathematically inclined -- and in Singh's hands it makes for a story you will thoroughly enjoy.



