Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, March 19, 2025

Bootstraps

Yesterday's post, about the strange resurgence of a fifty-year-old claim that the Dogon tribe of west Africa found out about Sirius's invisible-to-the-naked-eye white dwarf companion star from space-traveling aliens, spurred a conversation with a friend about the nature of the internet.

As useful as it is -- many of us spend a significant fraction of our waking hours connected to it -- it has its downsides.  I had made the point in yesterday's post that stuff like "E.T. Visits the Dogon People" would never gain the traction, spread, and longevity that it does without the internet.  The web is a fantastic conduit for knowledge, an amazing repository for factual information -- and a dreadfully efficient facilitator for the distribution of bullshit.

My friend, though, went one step further.

"The way the internet is set up," he said, "it not only acts as a conductor for bullshit, but it actually creates it.  There's a self-referential quality to the internet that makes the generation of loony nonsense inevitable.  It's why I wasn't surprised when generative A.I. started 'hallucinating' -- basically, making shit up that sounded so plausible that people believed it, like the A.I. mushroom foraging guide that recommended eating Amanita mushrooms with your t-bone steak.  It takes almost nothing to get the ball rolling, and pretty soon you've got some serious craziness to deal with.  Then, once it starts, how do you get people to stop believing?  Their belief expands the craziness, and around and around it goes.  It's the snowball effect on steroids."

I asked him if he could give me some examples, and he said he'd send me some links.

The result sent me down a rabbit hole, which I'll share a bit of with you here.

One of the most persistent and long-lived examples of this phenomenon is one I had never heard of before.  It's called Markovian Parallax Denigrate, after the subject line of hundreds of messages posted to Usenet all the way back in 1996.  The message texts were a random list of words, such as the following real example:

jitterbugging McKinley Abe break Newtonian inferring caw update Cohen air collaborate rue sportswriting rococo invocate tousle shadflower Debby Stirling pathogenesis escritoire adventitious novo ITT most chairperson Dwight Hertzog different pinpoint dunk McKinley pendant firelight Uranus episodic medicine ditty craggy flogging variac brotherhood Webb impromptu file countenance inheritance cohesion refrigerate morphine napkin inland Janeiro nameable yearbook hark

Well, it's a seemingly random list.  *raises one eyebrow in a meaningful manner*  Even though most people believe that the MPD messages are nonsense and were either produced by an early experimental text generator or chatbot, or else someone trying to troll everyone and get their fifteen minutes of fame, there are people who are still trying to "decode" the messages and figure out what they "really mean."  After everyone got all stirred up, it seemed so damned anticlimactic to say they were just a list of words.  Interestingly, no one has ever claimed responsibility; an article on The Daily Dot called it "the internet's oldest and weirdest mystery."
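For what it's worth, producing that kind of word salad takes almost no machinery at all.  Here's a minimal sketch in Python of the sort of thing an early bot or a bored prankster might have run -- the vocabulary and message length are made up for illustration, and nobody actually knows how the originals were generated:

import random

# A tiny stand-in vocabulary.  Judging from the real messages, whatever produced
# them drew on a much larger word list mixing proper names, technical jargon,
# and ordinary English words.
VOCAB = [
    "jitterbugging", "McKinley", "Newtonian", "escritoire", "pathogenesis",
    "firelight", "Uranus", "morphine", "yearbook", "rococo", "tousle",
    "inheritance", "refrigerate", "hark", "adventitious", "chairperson",
]

def markovian_parallax_denigrate(n_words=50):
    """Return an MPD-style message: n_words drawn at random from VOCAB."""
    return " ".join(random.choice(VOCAB) for _ in range(n_words))

print(markovian_parallax_denigrate())

Run that a few hundred times, post the output to Usenet, and you've got yourself an enduring internet mystery.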

Then there's Cicada 3301, a set of seven puzzles posted between 2012 and 2014 on the weird, conspiracy-ish site 4chan.  The first two puzzles were solved; the others remain unsolved (and there are still people working on them today).  The stated purpose of the puzzles was to "recruit intelligent individuals," but for whom or what?  Various people suggested the source of the puzzles (and therefore the recruiting agency) could be the CIA, the NSA, MI6, Mossad, a free-agent mercenary group, or a "Masonic conspiracy." 

One person who successfully solved the first puzzle was invited to join a private forum, where he was questioned about his knowledge of cryptography and his attitudes toward online freedom and censorship.  He played along for a while, but eventually got spooked and quit the forum -- and later inquiries found that the site itself had been deleted.

To this day no one knows for sure who Cicada 3301 is or what the website's purpose was -- but there's still an online community of people discussing it, over a decade later.

The best example of something on the internet taking on a life of its own, though, is "This Man."  Back in 2008, a website popped up called "Ever Dreamed Of This Man?"  It was accompanied by a sketch:


Along with the image was a story about a "well-known New York City psychiatrist" whose patient reported seeing "This Man" repeatedly in his dreams; when a second patient came to him with a similar tale, the psychiatrist forwarded the sketch to colleagues, and found that a number of them had patients with recurring dreams about the guy -- some neutral, some sexual, some violent.  In some dreams he was the dreamer's father; in others, a teacher; in many of them, he was a stranger.  The one common thread was his appearance -- and the extreme vividness with which people recalled him.

Well, responses started pouring in.  Thousands of people reported dreaming about him, and posted lengthy descriptions of what they'd experienced.  How could this be -- how could people from all over the world suddenly find themselves dreaming about the same man?  Who was the mysterious man, and what could it mean that he was appearing in dreaming minds worldwide?

As you might already be suspecting, the whole tale had been a lie right from the get-go.  There was no "well-known New York City psychiatrist," and the entire set-up of the story was a hoax.  It had all been the brainchild of an Italian online marketer named Andrea Natella to get clicks on his website, and to drum up notoriety for a marketing campaign.  

The responses, however, were very real.  Even when Natella more or less got caught at his game and confessed in 2008, people kept saying they'd dreamed about This Man, and No he's real really he is.  Natella was interviewed by Vice in 2015, and described the whole thing -- how he'd gotten the idea, how he'd been found out, and so on -- and despite that, he is still receiving hundreds of emails and letters every week from people who claim to have dreamed about This Man -- or, weirder still, to have seen him in real life.  A few claim to know who he actually is.  (And we reach weirdvana with an Indian guru named Arud Kannan Ayya, who contacted Natella to tell him that he is This Man and that's why he has magical guru powers.)

So even shouting "HEY Y'ALL I ADMIT IT I MADE THE WHOLE THING UP" isn't enough to put the quietus on this phenomenon. Once it starts up, it's like these online claims lift themselves by their own bootstraps, and at that point they're unstoppable.  And I have to admit that my friend has a point; without the internet, it's hard to imagine how any of these could have gotten the traction they did.

In any case, we're pretty well stuck with the internet, for good or bad, at least until the next Miyake Event comes along and blows the whole thing to smithereens.  Myself, I'll put up with stuff like This Man, Cicada 3301, and Markovian Parallax Denigrate rather than having to deal with the aftermath of that.

****************************************


Monday, June 13, 2022

The google trap

The eminent physicist Stephen Hawking said, "The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge."

Somewhat more prosaically, my dad once said, "Ignorance can be cured.  We're all ignorant about some things.  Stupid, on the other hand, goes all the way to the bone."

Both of these sayings capture an unsettling idea: that often it's more dangerous to think you understand something than it is to admit you don't.  This idea was illustrated -- albeit using an innocuous example -- in a 2002 paper called "The Illusion of Explanatory Depth" by Leonid Rozenblit and Frank Keil, of Yale University.  What they did was ask people to rate their level of understanding of a simple, everyday object (for example, how a zipper works), on a scale of zero to ten.  Then, they asked each participant to write down an explanation of how zippers work in as much detail as they could.  Afterward, they asked the volunteers to re-rate their level of understanding.

Across the board, people rated themselves lower the second time, after a single question -- "Okay, then explain it to me" -- shone a spotlight on how little they actually knew.

The problem is, unless you're in school, usually no one asks the question.  You can claim you understand something, you can even have a firmly-held opinion about it, and there's no guarantee that your stance is even within hailing distance of reality.

And very rarely does anyone challenge you to explain yourself in detail.

[Image is in the Public Domain]

If that's not bad enough, a recent paper by Adrian Ward (of the University of Texas - Austin) showed that not only do we understand way less than we think we do, we fold what we learn from other sources into our own experiential knowledge, regardless of the source of that information.  Worse still, that incorporation is so rapid and smooth that afterward, we aren't even aware of where our information (right or wrong) comes from.

Ward writes:

People frequently search the internet for information.  Eight experiments provide evidence that when people “Google” for online information, they fail to accurately distinguish between knowledge stored internally—in their own memories—and knowledge stored externally—on the internet.  Relative to those using only their own knowledge, people who use Google to answer general knowledge questions are not only more confident in their ability to access external information; they are also more confident in their own ability to think and remember.  Moreover, those who use Google predict that they will know more in the future without the help of the internet, an erroneous belief that both indicates misattribution of prior knowledge and highlights a practically important consequence of this misattribution: overconfidence when the internet is no longer available.  Although humans have long relied on external knowledge, the misattribution of online knowledge to the self may be facilitated by the swift and seamless interface between internal thought and external information that characterizes online search.  Online search is often faster than internal memory search, preventing people from fully recognizing the limitations of their own knowledge.  The internet delivers information seamlessly, dovetailing with internal cognitive processes and offering minimal physical cues that might draw attention to its contributions.  As a result, people may lose sight of where their own knowledge ends and where the internet’s knowledge begins.  Thinking with Google may cause people to mistake the internet’s knowledge for their own.

I recall vividly trying, with minimal success, to fight this in the classroom.  Presented with a question, many students don't stop to try to work it out themselves; they immediately jump to looking it up on their phones.  (One of many reasons I had a rule against having phones out during class -- another exercise in frustration, given how clever teenagers are at hiding what they're doing.)  I tried to make the point over and over that there's a huge difference between looking up a fact (such as the average number of cells in the human body) and looking up an explanation (such as how RNA works).  I use Google and/or Wikipedia for the former all the time.  The latter, on the other hand, makes it all too easy simply to copy down what you find online, allowing you to have an answer to fill in the blank irrespective of whether you have the least idea what any of it means.

Even Albert Einstein, pre-internet though he was, saw the difference, and the potential problem therein.  Once asked how many feet were in a mile, the great physicist replied, "I don't know.  Why should I fill my brain with facts I can find in two minutes in any standard reference book?”

In the decades since Einstein said this, that two minutes has shrunk to about ten seconds, as long as you have internet access.  And unlike the standard reference books he mentioned, you have little assurance that the information you found online is even close to right.

Don't get me wrong; I think that our rapid, and virtually unlimited, access to human knowledge is a good thing.  But like most good things, it comes at a cost, and that cost is that we have to be doubly cautious to keep our brains engaged.  Not only is there information out there that is simply wrong, there are people who are (for various reasons) very eager to convince you they're telling the truth when they're not.  This has always been true, of course; it's just that now, there are few barriers to having that erroneous information bombard us all day long -- and Ward's paper shows just how quickly we can fall for it.

The cure is to keep our rational faculties online.  Find out if the information is coming from somewhere reputable and reliable.  Compare what you're being told with what you know to be true from your own experience.  Listen to or read multiple sources of information -- not only the ones you're inclined to agree with automatically.  It might be reassuring to live in the echo chamber of people and media which always concur with our own preconceived notions, but it also means that if something is wrong, you probably won't realize it.

Like I said in Saturday's post, finding out you're wrong is no fun.  More than once I've posted stuff here at Skeptophilia and gotten pulled up by the short hairs when someone who knows better tells me I've gotten it dead wrong.  Embarrassing as it is, I've always posted retractions, and often taken the original post down.  (There's enough bullshit out on the internet without my adding to it.)

So we all need to be on our guard whenever we're surfing the web or listening to the news or reading a magazine.  Our tendency to absorb information without question, regardless of its provenance -- especially when it seems to confirm what we want to believe -- is a trap we can all fall into, and Ward's paper shows that once inside, it can be remarkably difficult to extricate ourselves.

**************************************

Saturday, May 28, 2022

Social media dissociation

I suspect that many of my readers will resonate with my desire to fritter away less time on social media.

I don't mean the actual "social" part of social media.  I have friends whom I seldom if ever get to see, and especially since the pandemic started, visiting online is about my only opportunity.  I greatly value those conversations.  What I'm referring to is the aimless scrolling, looking for new content, any new content.  Trying to find a distraction even though I know that a dozen other things, from listening to some music, to playing with my dogs, to going for a run -- even weeding the garden -- will leave me feeling better.

But -- once again, as I'm sure many of you can attest -- it can be exceedingly hard to say "enough" and close the app.  It was one thing when your connectivity had to be via a desktop or laptop computer; but now that just about all of us (even me, Luddite though I am) are carrying around our social media addiction in our pockets, it's way too easy to say "just a few more minutes" and drop back into the world of scrolling.

One effect I've noticed it's had on me is a shortening of my attention span.  Something has to be absolutely immersive to keep my attention for over five minutes.  Two of my favorite YouTube science channels, the wonderful Veritasium and physicist Sabine Hossenfelder's awesome Science Without the Gobbledygook, have videos that average about ten to twelve minutes long, and man... sometimes that is a struggle, however fascinating the topic.

I don't like this trend.  I won't say I've ever had the best of focus -- distractions and my wandering mind have been issues since I was in grade school -- but social media have made it considerably worse.  Frequently I think about how addicted I am to scrolling, and it's a real cause of worry.

But then I start scrolling again and forget all about it.

That last bit was the subject of a study from the University of Washington that was presented last month at the CHI Conference on Human Factors in Computing Systems.  In "'I Don’t Even Remember What I Read': How Design Influences Dissociation on Social Media," a team led by Amanda Baughan looked at how social media apps are actually designed to have this exact effect -- and found that although we frequently call it an addiction, it is more accurately described as dissociation.

"Dissociation is defined by being completely absorbed in whatever it is you're doing," Baughan said, in an interview with Science Daily.  "But people only realize that they've dissociated in hindsight.  So once you exit dissociation there's sometimes this feeling of: 'How did I get here?'  It's like when people on social media realize: 'Oh my gosh, how did thirty minutes go by?  I just meant to check one notification.'"

Which is spot-on.  Even the title is a bullseye; after a half-hour on Twitter, I'd virtually always be hard-pressed to tell you the content of more than one or two of the tweets I looked at.  The time slips by, and it feels very much like I glance up at the clock, and three hours are gone without my having anything at all to show for it.

It always reminds me of a quote from C. S. Lewis's The Screwtape Letters.  While I (obviously) don't buy into the theology, his analysis of time-wasting by the arch-demon Screwtape is scarily accurate:
As this condition becomes more fully established, you will be gradually freed from the tiresome business of providing Pleasures as temptations.  As the uneasiness and his reluctance to face it cut him off more and more from all real happiness, and as habit renders the pleasures of vanity and excitement and flippancy at once less pleasant and harder to forgo (for that is what habit fortunately does to a pleasure) you will find that anything or nothing is sufficient to attract his wandering attention.  You no longer need a good book, which he really likes, to keep him from his prayers or his work or his sleep; a column of advertisements in yesterday’s paper will do.  You can make him waste his time not only in conversation he enjoys with people whom he likes, but in conversations with those he cares nothing about on subjects that bore him.  You can make him do nothing at all for long periods.  You can keep him up late at night, not roistering, but staring at a dead fire in a cold room.  All the healthy and outgoing activities which we want him to avoid can be inhibited and nothing given in return, so that at last he may say, as one of my own patients said on his arrival down here [in hell], "I now see that I spent most of my life in doing neither what I ought nor what I liked."

That last line, especially, is a fair knockout, and it kind of makes me suspicious that social media may have been developed down in hell after all.

Baughan, however, says maybe we shouldn't be so hard on ourselves.  "I think people experience a lot of shame around social media use," she said.  "One of the things I like about this framing of 'dissociation' rather than 'addiction' is that it changes the narrative.  Instead of: 'I should be able to have more self-control,' it's more like: 'We all naturally dissociate in many ways throughout our day -- whether it's daydreaming or scrolling through Instagram, we stop paying attention to what's happening around us.'"

Even so, for a lot of us, it gets kind of obsessive at times.  It's worse when I'm anxious or depressed, when I crave a distraction not only from unpleasant external circumstances but from the workings of my own brain.  And it's problematic that when that occurs, the combination of depression and social media creates a feedback loop that keeps me from seeking out activities -- which sometimes just means turning off the computer and doing something, anything, different -- that will actually shake me out of my low mood.

But she's right that shaming ourselves isn't productive, either.  Maybe a lot of us could benefit by some moderation in our screen time, but self-flagellation doesn't accomplish anything.  I'm not going to give up on social media entirely -- like I said, without it I would lose touch with too many contacts I value -- but setting myself some stricter time limits is probably a good idea.

And now that you've read this, maybe it's time for you to shut off the device, too.  What are you going to do instead?  I think I'll go for a run.

**************************************

Friday, May 14, 2021

The network of nonsense

I've long been fascinated with communication network theory -- the model that maps out the rules behind the spread of information (and its ugly cousin, disinformation).  Back in my day (you'll have to imagine me saying this in a creaky old-geezer voice) both moved a lot more slowly; communities devoted to conspiracies, for example, had to rely on such clunky modes of transmission as newsletters, magazines, and word-of-mouth.

Now?  The internet, and especially social media, have become rapid-transit networks for bullshit.  The phenomenon of a certain idea, video, meme, or link "going viral" has meant that virtually overnight, it can go from being essentially unknown to basically everyone who is online seeing it.  There was nothing even close to comparable forty years ago.

Communications network theory looks at connectedness between different communities and individuals, the role of nodes (people or groups who are multiply-connected to many other people and groups), and "tastemakers" -- individuals whose promotion of something virtually guarantees it gaining widespread notice.  The mathematics of this model is, unfortunately, over my head, but the concepts are fascinating.  Consider the paper that came out this week in the journal Social Media and Society, "From 'Nasa Lies' to 'Reptilian Eyes': Mapping Communication About 10 Conspiracy Theories, Their Communities, and Main Propagators on Twitter," by Daniela Mahl, Jing Zeng, and Mike Schäfer of the University of Zürich.

In this study, they looked at the communities that have grown up around ten different conspiracy theories:

  1. Agenda 21, which claims that the United Nations has a plan to strip nations of their sovereignty and launch a one-world government
  2. The anti-vaccination movement
  3. The Flat Earthers
  4. Chemtrails -- the idea we're being dosed with psychotropic chemicals via jet exhaust contrails
  5. Climate change deniers
  6. Directed energy weapons -- high-intensity beams are being used to kill people and start natural disasters like major forest fires
  7. The Illuminati
  8. Pizzagate -- the claim that the Democrats are running some kind of nationwide human trafficking/pedophilia ring
  9. The Reptilians -- many major world leaders are reptilian aliens in disguise, and you can sometimes catch a glimpse of their real appearance in video clips
  10. "9/11 was an inside job"

They also looked at connections to two non-conspiracy communities -- pro-vaccination and anti-flat-Earth.

The researchers analyzed thousands of different accounts and tens of thousands of tweets to see what kind of overlap there was between these twelve online communities, as based on hashtag use, retweets, and so on.
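The general approach is easy to sketch, even if the real analysis was far more sophisticated.  Here's a toy version in Python -- the account names and hashtags are invented for illustration, and networkx is just one convenient graph library; the idea is simply to link two accounts whenever they share a hashtag and then see which clusters fall out of the connectivity.

import itertools
import networkx as nx

# Invented example data: which accounts used which hashtags.
hashtag_use = {
    "@chemtrail_watch": {"#chemtrails", "#illuminati"},
    "@lizard_spotter":  {"#reptilians", "#illuminati"},
    "@flat_facts":      {"#flatearth"},
    "@provax_doc":      {"#vaccineswork"},
}

G = nx.Graph()
G.add_nodes_from(hashtag_use)

# Connect two accounts whenever they share at least one hashtag.
for a, b in itertools.combinations(hashtag_use, 2):
    shared = hashtag_use[a] & hashtag_use[b]
    if shared:
        G.add_edge(a, b, shared=sorted(shared))

# The clusters are just the connected components of the resulting graph.
for cluster in nx.connected_components(G):
    print(sorted(cluster))

In this toy example the two conspiracy accounts land in one cluster, while the flat-Earther and the pro-vaccination account sit off by themselves -- which, as it turns out, is pretty much the shape of the real result.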

What they found was that the communities studied formed eight tightly-networked clusters.  Here's a diagram of their results:


There are a couple of interesting features of this.

First, six of the communities are so entangled that they form two multiply-connected clusters: the chemtrail/Illuminati/Reptilians cluster, and the Pizzagate/9/11/climate change denial cluster.  Both make sense considering who is pushing each of them -- the first by such conspiracy loons as David Icke, and the second by far-right media like Fox, OAN, and Newsmax.

Note, however, that even though three of the other conspiracy theories -- the anti-vaxxers, Agenda 21, and directed energy weapons -- are distinct enough to form their own nodes, they still have strong connections to all the others.  The only one that stands out as essentially independent is the Flat Earthers.

Evidently the Flerfs are so batshit crazy that even the other crazies don't want to have anything to do with them.

This demonstrates something that I've long believed; that acceptance of one loony idea makes you more likely to fall for others.  Once you've jettisoned evidence-based science as your touchstone for deciding what is the truth, you'll believe damn near anything.

The other thing that jumps out at me is that the pro-vaccine and anti-flat-Earth groups have virtually no connections to any of the others.  They are effectively closed off from the groups they're trying to counter.  What this means is discouraging; that the people working to fight the network of nonsense by creating accounts dedicated to promoting the truth are sitting in an echo chamber, and their well-meant and fervent messages are not reaching the people whose minds need to be changed.

It's something that I've observed before; that it's all very well for people on Twitter and Facebook to post well-reasoned arguments about why Tucker Carlson, Tomi Lahren, Marjorie Taylor Greene, and Lauren Boebert are full of shit, but they're never going to be read by anyone who doesn't already agree.

It's why Fox News is so insidious.  Years ago, they and their spokespeople, commentators like Rush Limbaugh and Ann Coulter, started off by convincing their listeners that everyone else was lying.  Once you've decided that the only way to get the truth is to rely on one single source, you're at the mercy of the integrity and accuracy of that source.  In the case of Fox, you are vulnerable to being manipulated by a group of people whose representation of the news is so skewed it has run afoul of Great Britain's Office of Communications multiple times on the basis of inaccuracy, partiality, and inflammatory content.  (And in fact, last year Fox began an international streaming service in the UK, largely motivated by the fact that online content is outside the jurisdiction of the Office of Communications.)

Mahl et al. write:

Both anti-conspiracy theory communities, Anti-Flat Earth and Pro-Vaccination, are centered around scientists and medical practitioners.  Their use of pro-conspiracy theory hashtags likely is an attempt to directly engage and confront users who disseminate conspiracy theories.  Studies from social psychology have shown that cross-group communication can be an effective way to resolve misunderstandings, rumors, and misinformation.  By deliberately using pro-conspiracy hashtags, anti-conspiracy theory accounts inject their ideas into the conspiracists’ conversations.  However, our study suggests that this visibility does not translate into cross-group communication, that is, retweeting each other’s messages.  This, in turn, indicates that debunking efforts hardly traverse the two clusters.

I wish I had an answer to all this.  It's one thing if a group of misinformed people read arguments countering their beliefs and reject them; it's another thing entirely if the misinformed people are so isolated from the truth that they never even see it.  Twitter and Facebook have given at least a nod toward deplatforming the worst offenders -- one study found that the flow of political misinformation on Twitter dropped by 75% after Donald Trump's account was suspended -- but it's not dealing with the problem as a whole, because even if you delete the platforms of the people responsible for the wellspring of bullshit, there will always be others waiting in the wings to step in and take over.

However discouraging this is, it does mean that the skeptics and science types can't give up.  Okay, we're not as multiply-connected as the wackos are, so we have to be louder, more insistent, more persistent.  Saying "oh, well, nothing we can do about it" and throwing in the towel will have only one effect: making sure the disinformation platforms reach more people and poison more conduits of discourse.

And I, for one, am not ready to sit back and accept that as inevitable.

********************************

I have often been amazed and appalled at how the same evidence, the same occurrences, or the same situation can lead two equally-intelligent people to entirely different conclusions.  How often have you heard about people committing similar crimes and getting wildly different sentences, or identical symptoms in two different patients resulting in completely different diagnoses or treatments?

In Noise: A Flaw in Human Judgment, authors Daniel Kahneman (whose wonderful book Thinking, Fast and Slow was a previous Skeptophilia book-of-the-week), Olivier Sibony, and Cass Sunstein analyze the cause of this "noise" in human decision-making, and -- more importantly -- discuss how we can avoid its pitfalls.  Anything we can do to detect and expunge biases is a step in the right direction; even if the majority of us aren't judges or doctors, most of us are voters, and our decisions can make an enormous difference.  Those choices are critical, and it's incumbent upon us all to make them in the most clear-headed, evidence-based fashion we can manage.

Kahneman, Sibony, and Sunstein have written a book that should be required reading for anyone entering a voting booth -- and should also be a part of every high school curriculum in the world.  Read it.  It'll open your eyes to the obstacles we have to logical clarity, and show you the path to avoiding them.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Friday, July 3, 2020

Today's post -- retraction

Hi all,

Just as a heads-up:

I received several comments & emails (all polite, which I appreciate) setting me straight on the topic of today's post -- apparently there is something to the seizure- and migraine-inducing capacity of the Ravelry website.  So I (and Dr. Bartholomew of Psychology Today) were just plain wrong.

If you want more information, here's a website that gives more accurate information.

I've taken the post down.  My apologies for spreading misinformation, which is exactly the opposite of what I set out to do here at Skeptophilia.  But thanks to the readers who took the time to tell me to look deeper and reconsider what I'd written.

cheers,

Gordon

Tuesday, December 12, 2017

Wikipedia, accuracy, and the Swanson conversion

I'm of two minds about Wikipedia.

I think it's a great resource for quick lookups, and use it myself for that sort of thing.  A study by Thomas Chesney found that experts generally consider Wikipedia to be pretty accurate, although the same study admits that others have concluded that 13% of Wikipedia entries have errors (how serious those errors are is uncertain; an error in a single date is certainly more forgivable than one that gives erroneous information about a major world event).  Another study concluded that between one-third and one-half of deliberately inserted errors are corrected within 48 hours.

But still.  That means that between one-half and two-thirds of deliberately inserted errors weren't corrected within 48 hours, which is troubling.  Given the recent squabbles over "fake news," having a source that can become contaminated by bias or outright falsehood, and remain uncorrected, is a real problem.

Plus, there's the problem of errors sneaking in, as it were, through the back door.  Sometimes claims are posted on Wikipedia (and elsewhere) by people who honestly think what they're stating is correct, and once that happens, there tends to be a snake-swallowing-its-own-tail pattern of circular citations, and before you know it, what started as a false claim becomes enshrined as fact.

As an example of this, consider the strange case of the Swanson conversion.

The Swanson conversion, which sounds like the title of an episode of The Big Bang Theory but isn't, is a piece of the reaction of cellular respiration.  Without geeking out on this too extremely -- and my students will attest that I get way too excited about how cool cellular respiration is -- the background on this is as follows.

Cellular respiration, which is the set of reactions by which our cells burn glucose and release energy to power everything we do, has three major steps: glycolysis, the Krebs cycle, and the electron transport chain.  Each of those is made of dozens of sub-reactions, which I will refrain from describing (although like I said, they're extremely cool).  But there's one piece of it that doesn't have an official name, and that's the step that links glycolysis (the first step) to the Krebs cycle (the second step).

[image courtesy of the Wikimedia Commons, and the irony of the source of this image does not escape me]

Again, trying not to be too technical, here, but at the end of glycolysis, the original glucose molecule has been split in two (in fact, "glycolysis" is Greek for "sugar breaking").  The two halves are called pyruvate, and they're three-carbon compounds.  Before they can be thrown into the Krebs cycle, however, they have to lose one carbon (in the form of carbon dioxide), thus forming acetate, which can be introduced into the first step of Krebs.

So what's that carbon-losing step called?  Apparently, "the Swanson conversion."  It's in Wikipedia, not to mention many other websites describing the reactions of respiration.

The problem?  The name "Swanson conversion" was given to the linking step by a high school biology teacher named Swanson when his students asked him why that bit of the reaction didn't have a name, and he said, "hell, I dunno.  Let's call it 'the Swanson conversion.'"  And it stuck...

... especially when one of his students posted it to Wikipedia as the correct name.

When Swanson found out, he at first was annoyed, but after discussing it with his students, allowed it to remain as a test to see how quickly errors on Wikipedia were corrected.  And... it wasn't.  In fact, others who have wondered, as my students did, why this step doesn't have a name stumbled on this and thought, "Cool!  Now I know what to call it!" and posted it on their websites.  And now, this name that started out as an inside joke between a biology teacher and his students has become the semi-official name of the step.

Swanson, for his part, says he uses it as an example of how you can't trust what's online without checking your sources.  The problem is, how do you check the sources on something like this?  Once the aforementioned self-referential merry-go-round has been set in motion, it becomes damn near impossible to figure out what's correct.  Especially in cases like this, where the correct answer to "what is the name of ____?" is "There isn't one."  It's all too easy to say, "Well, I guess this one must be correct, since it's all over the place."

I realize this is a pretty unusual situation, and I'm not trying to impugn the accuracy of Wikipedia as a whole.  I still use it for looking up simple facts -- after all, I'm from the generation during whose childhood if you wanted to know what year Henry VIII was crowned King of England, and didn't have an encyclopedia at home, you had to get in your car and drive to the library to look it up.  I think Wikipedia, errors and all, is a pretty significant step upward.

However, it does mean that we need to keep our brains engaged when we read stuff on the internet -- and, as always, try to find independent corroboration.  Because otherwise, we'll have people believing that one of the reactions of photosynthesis is called "the Bonnet activation."  And heaven knows, we wouldn't want that.

Wednesday, November 16, 2016

Viral nonsense

One of the most frustrating things about social media is the tendency of a lot of people to post something (or respond to it) without reading any more than the headline.  I got blasted for my post two days ago asking conscientious Republicans to stand up and repudiate the people who are responsible for the upswing in hate crimes, who apparently think that the recent election gives them carte blanche to sink to their worst tendencies.  This caused one woman to shriek, "I am so sick and tired of nonsense like this!  I am GREATLY OFFENDED that you seem to think that all Republicans are racists!"

Which, if you read the post, is exactly the opposite of what I wrote.  My point was that I know most Republicans aren't racists, but it is now their obligation to condemn the ones who are.

Couple the mental laziness of assuming the headline tells you everything you need to know with the unfortunate tendency of people to forward things without checking on their veracity, and you have a real problem.  Of course, the latter is a phenomenon I've railed against so much here in Skeptophilia that I hardly need to mention it again.  But there's a more insidious force at work here -- the fact that people are now creating sensationalized, often incendiary, "fake news" designed for one reason and one reason only -- to score clicks, and therefore advertising revenue.

Let's start with a study called "Lies, Damn Lies, and Viral Content," led by Craig Silverman of Columbia University, which looked at the speed with which stories from these fake news sites can circulate through social media.  "Rather than acting as a source of accurate information, online media frequently promote misinformation in an attempt to drive traffic and social engagement," Silverman said.  "The extent to which a fake news article can get traction was surprising to me."

Max Read, editor of Gawker, put it more succinctly: "Already ankle-deep in smarmy bullshit and fake ‘viral’ garbage, we are now standing at the edge of a gurgling swamp of it."

Among the rather unsettling conclusions of Silverman's study is that not only are the consumers to blame, the mainstream media is often content to hit the fast-forward button themselves.  "Many news sites apply little or no basic verification to the claims they pass on," Silverman writes.  "Instead, they rely on linking-out to other media reports, which themselves often only cite other media reports as well."

What is wryly amusing about all of this is that I first heard about this study in none other than The Daily Mail, which published it without any apparent sense of irony.

The BBC, in a recent report, states that the problem is even worse than a simple lack of quality control.  There are now websites whose entire raison d'être is the creation of false stories that have the ring of truth, and which then do everything they can to make sure those stories get the maximum circulation possible.  Sites like The National Report call themselves "satire" -- but no one seems to be laughing.  Unlike The Onion, which is obviously tongue-in-cheek satire to anyone with a reasonable IQ, The National Report isn't trying to be funny.  They're trying to outrage, to scare, to whip up anger -- and to make money.

Site founder and owner Allen Montgomery is up front about this. "There are highs that you get from watching traffic spikes and kind of baiting people into the story," he says. "I just find it to be a lot of fun... There are times when it feels like a drug."

It's big business, too.  "Obviously the headline is key, and the domain name itself is very much a part of the formula -- you need to have a fake news site that looks legitimate as can be," Montgomery says.  "Beyond the headline and the first couple of paragraphs people totally stop reading, so as long as the first two or three paragraphs sound like legitimate news then you can do whatever you want at the end of the story and make it ridiculous...  We've had stories that have made $10,000.  When we really tap in to something and get it to go big then we're talking about in the thousands of dollars that are made per story."

And of course, social media plays right into the hands of people like Montgomery.  It only takes one click to forward a story to your Facebook friends or Twitter followers, and damn the consequences.  The frightening thing is that such garbage circulating around the internet reaches so many people so quickly that the contention it could affect elections is well within the realm of possibility.

Of course, far be it from anyone to take responsibility for any of this. Just a couple of days ago, Mark Zuckerberg, founder of Facebook, said that news stories (fake and otherwise) on social media "surely had no impact" on the election.

"More than 99% of content on Facebook is authentic," Zuckerberg said.  "Only a very small amount is fake news and hoaxes.  The hoaxes that do exist are not limited to one partisan view, or even to politics."

Which sounds like nothing but equivocation and denial of responsibility to me.  Not to mention complete bullshit.  99% accuracy of Facebook content, my ass.


As I've said before, it is incumbent upon consumers of all kinds of media to verify what they're reading, especially before they pass it along.  With sites like The National Report out there, and the increasing tendency of people not to think critically -- well, all I can say is, if you can't take five damn minutes to check Snopes, you're part of the problem.

Saturday, August 20, 2016

Memory offload

A couple of years ago, I had a student who had what seemed to me a weird approach to figuring things out.  When presented with a question he didn't know the answer to, his immediate response was to pull out his school-issued iPad and Google it.  Often, he didn't even give his brain a chance to wrestle with the question; if the answer wasn't immediately obvious, out came the electronics.

This became an even bigger obstacle when we were studying genetics.  Genetics is, more than anything else at the introductory-biology level, about learning a process.  There are a few important terms -- recessive, dominant, phenotype, allele, and so on -- but the point is to learn a systematic way of thinking about how genes work.

But given a problem -- a set of data that (for example) would allow you to determine whether the gene for Huntington's disease is recessive or dominant -- he would simply look it up.

"What have you learned by doing that?" I asked him, trying to keep the frustration out of my voice.

"I got the right answer," he said.

"But the answer isn't the point!"  Okay, at that point my frustration was pretty clear.

I think the issue I had with this student comes from two sources.  One is the education system's unfortunate emphasis on Getting The Right Answer -- that if you have The Right Answer on your paper, it doesn't matter how you got it, or whether you really understand how to get there.  But the other is our increasing reliance on what amounts to external memory -- usually in the form of the internet.  When we don't know something, the ease and accessibility of answers online makes us default to that, rather than taking the time to search our own memories for the answer.

[image courtesy of the Wikimedia Commons]

That latter phenomenon was the subject of a study that was published this week in the journal Memory.  Called "Cognitive Offloading: How the Internet is Increasingly Taking Over Human Memory," the study, by cognitive psychologists Benjamin Storm, Sean Stone, and Aaron Benjamin, looked at how people approach the recall of information, and found that once someone has started relying on the internet, it becomes the go-to source, superseding one's own memory:
The results revealed that participants who previously used the Internet to gain information were significantly more likely to revert to Google for subsequent questions than those who relied on memory.  Participants also spent less time consulting their own memory before reaching for the Internet; they were not only more likely to do it again, they were likely to do it much more quickly.  Remarkably, 30% of participants who previously consulted the Internet failed to even attempt to answer a single simple question from memory.
This certainly mirrors my experience with my students.  Not all of them are as hooked to their electronics as the young man in my earlier anecdote, but it is becoming more and more common for students to bypass thinking altogether and jump straight to Google.

"Memory is changing," lead author Storm said.  "Our research shows that as we use the Internet to support and extend our memory we become more reliant on it.  Whereas before we might have tried to recall something on our own, now we don't bother.  As more information becomes available via smartphones and other devices, we become progressively more reliant on it in our daily lives."

What concerns me is something that the researchers say was outside the scope of their research: what effect this might have on our own cognitive processes.  It's one thing if the internet becomes our default while our memories remain intact, there to fall back on should the Almighty Google not be available.  It's entirely another if our continual reliance on external "offloaded" memory ultimately weakens our own ability to process, store, and recall.  It's not as far-fetched as it sounds; there have been studies that suggest that mental activity can stave off or slow down dementia, so the "if you don't use it, you lose it" aphorism may work just as much for our brains as it does for our muscles.

In any case, I'm becoming more and more adamant about students putting away the electronics.  They don't question the benefits of doing calisthenics in P.E. (although they complain about it); it's equally important to do the mental calisthenics of processing and recalling without leaning on the crutch of the internet.  And from the research of Storm et al., it's sounding like the automatic jump to "let's Google it" is a habit a lot of us need to break.

Saturday, December 26, 2015

A reason to keep going

This week we are seeing the final installment of the wonderful Washington Post weekly column "What Was Fake On the Internet This Week?"  It's not that the writer, Caitlin Dewey, has run out of material, or that a wave of logic and skepticism has swept across the interwebz, rendering her job pointless.

Actually, it's the opposite.  After doing the column for a year and a half, Dewey is feeling defeated.

I understand her despondency, and I won't say I don't feel something of the same myself at times.  Dewey feels she's up against not only a rising tide of credulous idiocy, but also the inevitable money motive of the clickbait sites -- Now8News, The World News Daily Report, Before It's News, Above Top Secret, Infowars.  These all straddle the line between honest attempts to peddle a viewpoint, however crazy, and a completely pragmatic desire to devise headlines that get people to click the links and activate the advertising revenue that clicking brings.

Dewey writes:
Frankly, this column wasn’t designed to address the current environment.  This format doesn’t make sense.  I’ve spoken to several researchers and academics about this lately, because it’s started to feel a little pointless.  Walter Quattrociocchi, the head of the Laboratory of Computational Social Science at IMT Lucca in Italy, has spent several years studying how conspiracy theories and misinformation spread online, and he confirmed some of my fears: Essentially, he explained, institutional distrust is so high right now, and cognitive bias so strong always, that the people who fall for hoax news stories are frequently only interested in consuming information that conforms with their views — even when it’s demonstrably fake.
Pretty hard to argue that point.

I was talking to my son about the problem yesterday evening, and his initial response was to agree with Dewey.  What she -- and I -- are attempting to do largely amounts to what my dad used to call pissing in a rainstorm.  (Had a way with words, my dad.)  But on reconsideration, Nathan said, "Well, think of it this way.  Let's say that of the people who read your blog, 90% are already rationalists and skeptics, and are only reading it for the amusement value, or to validate their own opinions.  That's still 10% for whom the issues are still in play.  How many hits do you get a day?"

"About a thousand, give or take," I said.

"So, that's a hundred people you're reaching every day who still might be convinced.  It's like the swing states in an election.  They may be few in number, but they're the ones whose votes count the most."

Smart kid.  And as he put it, "Having a hundred swing voters a day read your posts isn't too damn bad, when you think about it."

[image courtesy of the Wikimedia Commons]

Couple that with an email I got yesterday from a loyal reader, who had the following to say:
Merry Christmas to you and yours, Gordon.  I want to take this opportunity to tell you how much I appreciate the time you put into being a voice of reason in the whirlwind of craziness.  I can't imagine how you keep plugging away at this, but dammit, someone needs to be saying these things.  Kudos to you, and I hope Skeptophilia is around for many more years.
It's not only the personal validation of Fighting The Good Fight that keeps me going; it's knowing that there are people who are still reading, thinking, and talking, and who might in some small way be inspired to keep it up by reading what I write.  Yes, the internet is full of sensationalist trash and clickbait sites; it's an awfully good conduit for bullshit.  But it also links minds from across the world, and I can't help but feel optimistic about that.  As ZestFinance CEO Douglas Merrill put it, "All of us is smarter than any of us."

So if you're reading this, thank you, whether you've come here because you're undecided, come to have your opinions validated, or come to scoff at someone you disagree with.  If you're still reading and thinking, you're doing what you need to do.  Even if there will always be people in the world who renounce logic and reason, there is nothing to be gained by the logical and reasonable amongst us staying silent.

Tuesday, April 23, 2013

Crowd funding for antigravity

I've commented before how the advent of the internet has changed information transfer -- both in good ways (such as the availability of databases for quick fact-checking) and bad (such as offering a rapid, and virtually unstoppable, conduit for bullshit).  What I want to look at today is the way that the internet has changed how we view ideas and innovation -- again, in good ways and bad.

In the past, informal groups of like-minded individuals generally coalesced in some kind of formal setting -- a school, a church, a community center.  Now, there are places like Reddit where people, most of whom have never met, come together to discuss everything from gaming to world news.   This is all to the good, of course; I check several "subreddits" daily, including the ones that specialize in stories on science and skepticism.

The problem is, of course, like-minded people are... like-minded.  Groups form that seem to have the sole purpose of reinforcing the opinions that the members already had.  (And for every group, there's an equal and opposite group.  Check out "Conspiracy" and "Conspiratard" for a pair of good examples.)

Reddit isn't the only place this happens.  The same devolution into self-reinforcing silliness can even infect groups that started with the best of intents.  Take "Kickstarter," for example.

Kickstarter started out as a way for people with great ideas in any field, but who lacked the funds to see them realized, to get small donations from large numbers of people.  From their front page, the organizers of Kickstarter say,
Kickstarter is a new way to fund creative projects.

We’re a home for everything from films, games, and music to art, design, and technology. Kickstarter is full of projects, big and small, that are brought to life through the direct support of people like you. Since our launch in 2009, more than 3.9 million people have pledged over $577 million, funding more than 39,000 creative projects. Thousands of creative projects are raising funds on Kickstarter right now.
There's no doubt it's a groundbreaking idea.   CNN called Kickstarter "paradigm-shifting" -- which is certainly apt.  But the problem is that once you've opened up the gates to anyone, you've opened up the gates to... anyone.  The field starts widening to include people who are, not to put too fine a point on it, wingnuts.  Take, for example, the Kickstarter proposal by Peter Fred, which has as its aim building an anti-gravity device:
The gravity theory that I am trying to promote has the fundamental hypothesis that gravitational phenomena is the result of transferred momentum produced by "stopped wind" a term which will be described. We already know a lot about momentum and the dynamics of wind.  Thus the fundamental idea of my theory is further interpretable in terms of familiar physics.
Of course, there's the obligatory declaration that everything we think we know about physics is wrong:
This lesson from the past seems to be lost on today's scientists who seem once again to be championing an unphysical idea that is supported by widespread observation.   The unthinking acceptance of the observationally supported "strange" and mysterious  idea that mass can warp space or that it can attract other mass has resulted in a preposterous universe where 95% of it is little understood.   This situation does not call for hordes of experimenters spending billions keeping the ancient basic mysterious hypothesis in place.  What it calls for is a lone self-financed theorists working for years the attic trying to come up with an idea that would replace the idea mass can attract other mass or warp space. 
He then goes on to explain what he believes to be the real mechanism causing the pull of gravity, which is that cool air is "attracted to" warm air, causing lift.  To illustrate this, he uses the following diagram, which apparently comes from a middle school Earth Science text, with some added clumsy application of Photoshop:


Note that he has simply blurred out the return arrows -- the ones showing the complete circulation of air in the convection cell -- so that it looks like some mysterious force is making the cool air over the ocean move toward the warm air on land.  (For those of you who haven't had any atmospheric science, what's actually happening is that the warm air is rising because of a change in density, and the cool air from the ocean is being pulled in to replace it; the reverse happens at high altitudes, creating an "overturning" of air between the land and the ocean.)
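If you want to see why the density explanation does all the work, a back-of-the-envelope calculation is enough.  Here's a minimal sketch using the ideal gas law -- the sea-level pressure and the ten-degree temperature difference are illustrative numbers, not measurements from anywhere in particular:

# Density of dry air from the ideal gas law: rho = P / (R_air * T)
P = 101325.0     # sea-level pressure, in pascals
R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)

def air_density(temp_celsius):
    """Return air density in kg/m^3 at pressure P and the given temperature."""
    return P / (R_AIR * (temp_celsius + 273.15))

rho_warm = air_density(30.0)   # warm air over the land
rho_cool = air_density(20.0)   # cooler air over the ocean

print(f"warm air: {rho_warm:.3f} kg/m^3")
print(f"cool air: {rho_cool:.3f} kg/m^3")
# The warm air comes out about 3% less dense, so it's buoyant and rises; the
# denser ocean air flows in underneath to replace it.  No mysterious attraction
# between cool air and warm air required.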

So, what's my problem with this?  A crank has a silly idea.  Big deal.

Well, the big deal is that it was launched two weeks ago and he's already raised almost $500, presumably donated by people whose training in physics ended in sixth grade.  As with the Conspiracy subreddit, wingnuts attract other wingnuts.  Now, I know that he's unlikely to reach his goal ($15,000), and by the policy of Kickstarter, if the goal isn't reached no one loses their money.  But my worry is twofold: first, dumb ideas gain credibility by appearing here; and second, Kickstarter itself loses credibility by hosting them.

Maybe I should temper this, however, with the admission that the same kind of democratized, anyone-can-contribute approach has changed the publishing industry -- and has allowed me to e-publish my own fiction (note the handsome lineup of book covers on the right of your screen, ready for your Kindle or Nook).  And while this change has, in some sense, opened the floodgates to people publishing garbage, it has offered an entrée to talented writers who became frustrated with the gatekeeping aspect of the old agent/publishing editor route.  And there's a Darwinian aspect to all this; lousy novels can't compete with good ones, and get lost in the morass of self-published manuscripts after having sold copies only to the writer's significant other, parents, and best friend.  Likewise, ideas such as the aforementioned antigravity device simply don't get enough money to launch.

So maybe the fact that the bad ideas on Kickstarter won't get funded is its saving grace.  I have to admit that there have been some cool projects that have succeeded; consider the 3-D Pen (you should definitely watch the video on this one), the workout shirt that changes color to show your muscle activity, and, of course the Cthulhu knitted ski mask:


Also comes in "Slime of R'lyeh green."

So there you are.  Look around at some of the Kickstarter projects -- there are some really interesting ones.  Whatever else you might say about it, Kickstarter is a unique approach to funding innovation, just as e-publishing is to writing, and Reddit is to the formation of intellectual communities.  And if each of those things comes with a downside, what doesn't?  It's just another feature of our technological evolution, an outcome of human intelligence that never fails to fascinate and surprise me.