Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label statistics.

Wednesday, December 13, 2023

Miraculous mathematics

I've blogged before about "miraculous thinking" -- the idea that an unlikely occurrence somehow has to be a miracle simply based on its improbability.  But yesterday I ran into a post on the wonderful site RationalWiki that showed, mathematically, why this is a silly stance.

Called "Littlewood's Law of Miracles," after British mathematician John Edensor Littlewood, the man who first codified it in this way, it goes something like this:
  • Let's say that a "miracle" is defined as something that has a likelihood of occurring of one in a million.
  • We are awake, aware, and engaged on the average about eight hours a day.
  • An event of some kind occurs about once a second.  During the eight hours we are awake, aware, and engaged, this works out to 28,800 events per day, or just shy of a million events in an average month.  (864,000, to be precise.)
  • The likelihood of observing a one-in-a-million event in a given month is therefore 1-(999,999/1,000,000)^1,000,000, or about 0.632.  In other words, we have better than 50/50 odds of observing a miracle next month!
Of course, this is some fairly goofy math, and makes some silly assumptions (one discrete event every second, for example, seems like a lot).  But Littlewood does make a wonderful point; given that we're only defining post hoc the unlikeliness of an event that has already occurred, we can declare anything we want to be a miracle just based on how surprised we are that it happened.  And, after all, if you want to throw statistics around, the likelihood of any event happening that has already happened is 100%.
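If you want to check Littlewood's arithmetic yourself, a few lines of Python (my own sketch, not Littlewood's) will do it:

```python
# Reproducing Littlewood's arithmetic: the chance of seeing at least one
# one-in-a-million event, given one event per waking second for a month.
p_miracle = 1 / 1_000_000
events_per_month = 8 * 60 * 60 * 30          # 8 h/day x 1 event/s x 30 days = 864,000

p_at_least_one = 1 - (1 - p_miracle) ** events_per_month
print(f"{p_at_least_one:.3f}")               # ~0.579 with 864,000 events

# With a round million events (the figure Littlewood's version uses),
# it rises to the 0.632 quoted above:
print(f"{1 - (1 - p_miracle) ** 1_000_000:.3f}")
```

Either way, the odds of a "miracle" in any given month come out better than even.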

So, like the Hallmark cards say, Miracles Do Happen. In fact, they're pretty much unavoidable.

Peter Paul Rubens, The Miracle of St. Ignatius (1617) [Image is in the Public Domain]

You hear this sort of thing all the time, though, don't you?  A quick perusal of sites like Miracle Stories will give you dozens of examples of people who survived automobile accidents without a scratch, made recoveries from life-threatening conditions, were just "in the right place at the right time," and so on.  And it's natural to sit up and take notice when these things happen; this is a built-in perceptual error called dart-thrower's bias.  This fallacy is named after a thought experiment of being in a pub while there's a darts game going on across the room, and simply asking the question: when do you notice the game?  When there's a bullseye, of course.  The rest is just background noise.  And when you think about it, it's very reasonable that we have this bias.  After all, what has the greater evolutionary cost -- noticing the outliers when they're irrelevant, or not noticing the outliers when they are relevant?  It's relatively obvious that if the unusual occurrence is a rustle in the grass, it's far better to pay attention to it when it's the wind than not to pay attention to it when it's a lion.

And of course, on the Miracle Stories webpage, no mention is made of all of the thousands of people who didn't seem to merit a miracle, and who died in the car crash, didn't recover from the illness, or were in the wrong place at the wrong time.  That sort of thing just forms the unfortunate and tragic background noise to our existence -- and it is inevitable that it doesn't register with us in the same way.

So, we should expect miracles, and we are hardwired to pay more attention to them than we do to the 999,999 other run-of-the-mill occurrences that happen in a month.  How do we escape from this perceptual error, then?

Well, the simple answer is that in some senses, we can't.  It's understandable to be surprised by an anomalous event or a seemingly unusual pattern.  Think, for example, how astonished you'd be if you flipped a coin and got ten heads in a row.  You'd probably think, "Wow, what's the likelihood?" -- but any other ordered pattern of heads and tails -- say, H-T-T-H-H-H-T-H-T-T -- has exactly the same probability of occurring.  It's just that the first one looks like a meaningful pattern, and the second one doesn't.

The solution, of course, is the same as the solution for just about everything; don't turn off your brain.  It's okay to think, at first, "That was absolutely amazing!  How can that be?", as long as afterwards we think, "Well, there are thousands of events going on around me right now that are of equally low probability, so honestly, it's not so weird after all."

All of this, by the way, is not meant to diminish your wonder at the complexity of the universe, just to direct that wonder at the right thing.  The universe is beautiful, mysterious, and awe-inspiring.  It is also, fortunately, understandable when viewed through the lens of science.  And I think that's pretty cool -- even if no miracles occur today.

****************************************



Tuesday, November 30, 2021

The law of small numbers

A few days ago, I had a perfectly dreadful day.

The events varied from the truly tragic (receiving news that a former student had died) to the bad but mundane (losing a ghostwriting job I'd been asked to do because the person I was working for turned out to be a lunatic, and had decided I was part of a conspiracy against him -- the irony of which has not escaped me) to the "I'll-probably-laugh-about-this-later-but-right-now-I'm-not" (my dog, Guinness, recovering from his recent illness, and feeling chipper enough to swipe and destroy my wife's favorite hat) to the completely banal (my computer demanding an operating system update when I was in the middle of working, tying it up for two and a half hours).

All of this brought to mind the idea of streaks of bad (or good) luck -- something people are so completely convinced is real that it's nearly impossible to talk them out of it.  We've all had days when everything seems to go wrong -- when we have what my dad used to call "the reverse Midas touch -- everything you touch turns to shit."  There are also days -- regrettably fewer of them -- when we seem to have inordinate good fortune.  My question of the day is: is there something to this?

Of course, regular readers of this blog are already anticipating that I'll answer "no."  There are actually three reasons to discount this phenomenon.  Two have already been the subjects of previous blog posts, so I'll only mention them in brief.

One is the fact that the human brain is wired to detect patterns.  We tend to take whatever we perceive and try to fit it into an understandable whole.  So when several things go wrong in a row -- even when, as with my experiences last week, they are entirely unrelated occurrences -- we try to make them into a pattern.

The second is confirmation bias -- the tendency of humans to use insignificant pieces of evidence to support what we already believe to be true, and to ignore much bigger pieces of evidence to the contrary.  I had four bad things, of varying degrees of unpleasantness, occur one day last week.  By mid-day I had already decided, "this is going to be a bad day."  So any further events -- the computer update, for example -- only reinforced my assessment that "this day is going to suck."  Good things -- like the fact that even though our dog is back to getting into trouble, he is recovering; like the fact that we've been enjoying the International Ceramics Congress workshops this weekend; like the fact that my lovely wife brought me a glass of red wine after dinner -- get submerged under the unshakable conviction that the day was a lost cause.

It's the third one I want to consider more carefully.

I call it the Law of Small Numbers.  Simply put: in any sufficiently small data sample, you will find anomalous, and completely meaningless, patterns.

To take a simple model: let's consider flipping a fair coin.  You would expect that if you flip said coin 1000 times, you will find somewhere near 500 heads and 500 tails. On the other hand, what if you look at any particular run of, say, six flips?

[Image licensed under the Creative Commons ICMA Photos, Coin Toss (3635981474), CC BY-SA 2.0]

In any six-flip run, the statisticians tell us, all possible combinations are equally likely; a pattern of HTTHTH has exactly the same likelihood of showing up as does HHHHHH -- namely, 1/64.  The problem is that the second looks like a pattern, and the first doesn't.  And so if the second sequence is the one that actually emerges, we become progressively more amazed as head after head turns up -- because somehow, it doesn't fit our concept of the way statistics should work.  In reality, if the second pattern amazes us, the first should as well -- when the fifth coin comes up tails, we should be shouting, "omigod, this is so weird" -- but of course, the human mind doesn't work that way, so it's only the second run that seems odd.

Another thing: in the second case -- the six-flip run of all heads -- what happens when it comes to the seventh flip?  It's hard for people to shake the conviction that after six heads, the seventh is bound to be tails, or at least that tails is more likely.  (This error is common enough to have its own name: the gambler's fallacy.)  In fact, the seventh flip has exactly the same likelihood of turning up heads as all the others -- 1/2.
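If you don't want to take the statisticians' word for it, a quick simulation (a toy sketch of my own) makes the point: the all-heads run turns up just as often as the random-looking one.

```python
import random

random.seed(1)

# Count how often each specific six-flip sequence shows up across many trials.
# Both should land near 1/64 (about 0.0156) -- the "pattern" HHHHHH is no
# rarer than the "random-looking" HTTHTH.
def sequence_rate(target, trials=200_000):
    """Fraction of trials in which six fair flips match `target` exactly."""
    hits = sum(
        "".join(random.choice("HT") for _ in range(6)) == target
        for _ in range(trials)
    )
    return hits / trials

print(sequence_rate("HHHHHH"))
print(sequence_rate("HTTHTH"))
```

Run it and the two rates come out statistically indistinguishable, which is exactly the point.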

All of this brings up how surprisingly hard it is for statisticians to model true randomness.  If a sequence of numbers (for example) is actually random, all possible combinations of two numbers, three numbers, four numbers, and so on should be equally likely.  So, if you have a truly random list of (say) ten million one-digit numbers, there is a small but real chance -- on the order of one in a thousand -- that somewhere on that list there are ten zeroes in a row.  It would look like a meaningful pattern -- but it isn't.

This is part of what makes it hard to create truly randomized multiple-choice tests.  As a former science teacher, I frequently gave my classes multiple-choice quizzes, and I tried to make sure that the correct answers were placed fairly randomly.  But apparently, there's a tendency for test writers to stick the correct answer in the middle of the list -- thus the high school student's rule of thumb, which is, "if you don't know the answer, guess 'c'."

Randomness, it would seem, is harder to detect (and create) than most people think.  And given our tendency to see patterns where there are none, we should be hesitant to decide that the stars are against us on certain days.  In fact, we should expect days where there are strings of bad (or unusually good) occurrences.  It's bound to happen.  It's just that we notice it when several bad things happen on the same day, and don't tend to notice when they're spread out, because that, somehow, "seems more random" -- when, in reality, both distributions are random.

I keep telling myself that.  But it is hard to quell what my mind keeps responding -- "thank heaven it's a new week - it's bound to be better than last week was."

Well, maybe.  I do agree with another thing my dad used to tell me: "I'd rather be an optimist who is wrong than a pessimist who is right."  I'm just hoping that the statisticians don't show up and burst my bubble.

***********************************

It's astonishing to see what the universe looks like on scales different from those we're used to.  The images of galaxies and quasars and (more recently) black holes are nothing short of awe-inspiring.  However, the microscopic realm is equally breathtaking -- which you'll find out as soon as you open the new book Micro Life: Miracles of the Microscopic World.

Assembled by a team at DK Publishers and the Smithsonian Institution, Micro Life is a compendium of photographs and artwork depicting the world of the very small, from single-celled organisms to individual fungus spores to nerve cells to the facets of a butterfly's eye.  Leafing through it generates a sense of wonder at the complexity of the microscopic, and its incredible beauty.  If you are a biology enthusiast -- or are looking for a gift for a friend who is -- this lovely book is a sure-fire winner.  You'll never look the same way at dust, pollen, algae, and a myriad of other things from the natural world that you thought you knew.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Monday, July 26, 2021

The odds of creation

In today's contribution from the Department of Specious Statistics, the owner of a biblical timeline business and self-proclaimed mathematician has stated that she has calculated the likelihood of the biblical creation story being wrong as "less than 1 in 479 million."

Margaret Hunter, who owns Bible Charts and Timelines of Duck, West Virginia, stated in an interview, "I realized the twelve items listed in the Genesis creation account are confirmed by scientists today as being in the correct order, starting with light being separated from darkness, plants coming before animals and ending with man.  Think of the problem like this.  Take a deck of cards.  Keep just one suit—let’s say hearts.  Toss out the ace.  Hand the remaining twelve cards to a one year old child.  Ask him/her to hand you the cards one at a time.  In order.  What are the chances said toddler will start with the two and give them all to you in order right up to the king?"

Not very high, Hunter correctly states.  "Being a mathematician, I like thinking about things like this," she says.  "Moses had less than one chance in 479 million of just correctly guessing [the sequence of the creation account].  To me, the simplest explanation is Moses got it straight from the Creator."

Righty-o.  This just brings up a few questions in my mind, to wit:
  1. Are you serious?
  2. Maybe you "like thinking about mathematics," but you seem to know fuck-all about science.
  3. There's a town called "Duck, West Virginia?" 
I have to give her one thing; she got the odds of toddler-mediated correct card-ordering right.  For any twelve distinct objects, the number of possible orderings is 12!  (For non-mathematicians, this isn't just me saying the previous sentence excitedly.  12!, read as "twelve factorial," is 12*11*10*9*8*7*6*5*4*3*2*1, or 479,001,600.)
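If you'd like to verify that count rather than take anyone's word for it, one line of Python does the trick:

```python
import math

# Twelve distinct items can be ordered in 12! different ways.
print(math.factorial(12))   # 479001600
```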
 
However, the major problem with this is that we can all take a look at the events in the biblical creation story, and see immediately that Moses didn't get them right.  Here, according to the site Christian Answers, is the order of creation:
  1. the Earth
  2. light
  3. day & night
  4. air
  5. water
  6. dry land
  7. seed-bearing plants with fruit
  8. the Sun, Moon, and stars
  9. water creatures
  10. birds
  11. land animals (presumably birds don't count)
  12. humans
One immediate problem I see is that there was day and night three days before the Sun was created, which seems problematic to me, as the following photograph illustrates:

[Image is in the Public Domain courtesy of NASA/JPL]

But of course, the problems don't end there.  Birds before the rest of "land animals?"  Plants before the Sun and Moon?  The plants are actually the ones on the list that are the most wildly out of order -- seed-bearing plants didn't evolve until the late Devonian, a long time after "water creatures" (the Devonian is sometimes called "the Age of Fish," after all), and an even longer time (over four billion years) after the formation of the Sun.  Humans do come in the correct place, right there at the end, but the rest of it seems like kind of a hash.

So even if we use Hunter's mathematics, we run up against the unfortunate snag that if putting the twelve events of creation in the right order has a 1 in 479,001,600 likelihood of happening by chance, then the likelihood of putting them in the wrong order by chance is 479,001,599 in 479,001,600.  Which is what happened.  Leading us to the inevitable conclusion, so well supported by the available hard evidence, that Moses was just making shit up.

You know, I really wish you creationists would stop even pretending that this nonsense is scientific.  Just stick with your "the Bible says it, I believe it, and that settles it" approach, because every time you dabble your toes in the Great Ocean of Science, you end up getting knocked over by a wave and eating a mouthful of sand.  And it's becoming kind of embarrassing to watch, frankly.  Thank you.

**************************************

One of the characteristics which is -- as far as we know -- unique to the human species is invention.

Given a problem, we will invent a tool to solve it.  We're not just tool users; lots of animal species, from crows to monkeys, do that.  We're tool innovators.  Not that all of these tools have been unequivocal successes -- the internal combustion engine comes to mind -- but our capacity for invention is still astonishing.

In The Alchemy of Us: How Humans and Matter Transformed One Another, author Ainissa Ramirez takes eight human inventions (clocks, steel rails, copper telegraph wires, photographic film, carbon filaments for light bulbs, hard disks, scientific labware, and silicon chips) and looks not only at how they were invented, but how those inventions changed the world.  (To take one example -- consider how clocks and artificial light changed our sleep and work schedules.)

Ramirez's book is a fascinating lens into how our capacity for innovation has reflected back and altered us in fundamental ways.  We are born inventors, and that ability has changed the world -- and, in the end, changed ourselves along with it.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Tuesday, May 4, 2021

Patterns out of noise

We all have intuition and common sense about how the world works, and it is fascinating how often that intuition is wrong.

Not that I like having my worldview called into question, mind you; but I have to admit there's a certain thrill in discovering that there are subtleties I had never considered.  Take, for example, Benford's Law, which I first heard about a while back while listening to the radio program Freakonomics.  In any reasonably unrestricted data set, what should be the relative frequencies of the first digit?  Put another way, if I were to take a set of numbers (like the populations of all of the incorporated villages, towns, and cities in the United States) and look only at the first digits, how many of them would be 1s, 2s, 3s, and so on?

On first glance, I saw no reason the distribution should be anything but equal.  That's what a set of random numbers means, right?  And how are the populations of municipalities ranging from ten people all the way up to several million anything other than a collection of random numbers?

Well, you've probably already guessed this isn't right.  Lining up the frequencies of 1s through 9s in order, you get a steadily declining, logarithmic distribution.  About 30% of the first digits are 1s, falling all the way down to under 5% being 9s.

Why is this?  Well, the simple answer is that the statisticians are still arguing about it.  But it does give a way to catch when a supposedly real data set has been altered or fudged; the real data set will conform to Benford's Law, and (very likely) the altered one won't.
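The distribution Benford's Law predicts does have a simple closed form -- the frequency of leading digit d is log10(1 + 1/d) -- and a few lines of Python can tabulate it:

```python
import math

# Benford's Law: the predicted frequency of leading digit d is log10(1 + 1/d).
# The nine frequencies sum to exactly 1, since the product of (d+1)/d for
# d = 1..9 telescopes to 10.
for d in range(1, 10):
    print(d, f"{math.log10(1 + 1 / d):.1%}")
# prints 30.1% for 1, down to 4.6% for 9
```

Comparing a suspect data set's leading-digit frequencies against this table is exactly how the fraud-detection trick works.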

Another interesting one, and in fact the reason why I was thinking about this topic, is Zipf's Law, named after American linguist George Kingsley Zipf, who first attempted a mathematical explanation of why it works.  Zipf's Law looks at the frequencies of different words in long passages of text, and finds that there's an inverse relationship, similar to what we saw with Benford's Law.  In English, the most commonly used word is "the."  The next most common ("of") has half that frequency.  The third ("and") has one-third the frequency.  And on down the line; the tenth most frequent word occurs at one-tenth the frequency of the most common one, and so forth.

Zipf's Law has been tested in dozens of different languages, including conlangs like Esperanto, and it always holds.  So does the related pattern called the Brevity Law (there's an inverse relationship between the length of a word and how commonly it's used), and -- to me the most fascinating -- the Law of Hapax Legomenon, which states that in long passages of text, about half of the distinct words will occur only once (the name comes from the Greek ἅπαξ λεγόμενον, meaning "being said once").
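As a toy illustration of the Zipfian prediction (my own sketch, not drawn from any of the studies mentioned), here's how you'd compare observed word counts against the rank-1 count divided by rank:

```python
from collections import Counter

def zipf_table(text, top=10):
    """Rank the words in `text` by frequency, pairing each observed count
    with the count Zipf's Law predicts from the rank-1 count alone."""
    ranked = Counter(text.lower().split()).most_common(top)
    count_1 = ranked[0][1]
    return [(rank, word, count, count_1 / rank)
            for rank, (word, count) in enumerate(ranked, start=1)]

# A tiny constructed sample that fits the law exactly (12, 12/2, 12/3);
# a real test needs a long passage of genuine text.
sample = "the " * 12 + "of " * 6 + "and " * 4
for rank, word, observed, predicted in zipf_table(sample):
    print(rank, word, observed, predicted)
```

Fed a real corpus instead of the toy sample, the observed and predicted columns track each other only roughly, but the 1/rank falloff is unmistakable.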

Where things get really interesting is that these three laws -- Zipf's Law, the Brevity Law, and the Law of Hapax Legomenon -- may have relevance to the search for extraterrestrial intelligence.  Say we pick up what seems like radio-wave-encoded language from another star system.  The difficulty is obvious; translating a passage from another language when we don't know the sound-to-meaning correspondence is mind-bogglingly difficult (although it has been accomplished, most famously Alice Kober's and Michael Ventris's decipherment of the Linear B script of Crete).  

The task seems even more hopeless for an alien language, which shares no genetic roots with any human language; the most useful tool we have -- noting similarities with known related languages -- is a non-starter.  Just like Dr. Ellie Arroway in Contact, we'd be faced first with the seemingly insurmountable problem of figuring out if it is an actual alien language, and not just noise or gibberish.


The three laws I mentioned may solve at least that much of the problem.  The fact that they've been shown to govern the frequency distribution of every language tested, including completely unrelated ones like Japanese and Swahili, suggests that they might represent a universal tendency.  Just as Benford's Law can help statisticians identify falsified data sets, the three laws of word frequency distribution might help us tell if what we've picked up is truly language.

It still leaves the linguists with the daunting task of figuring out what it all means, but at least they won't be working fruitlessly on something that turns out to be mere noise.

I find the whole thing fascinating, not only from the alien angle (which you'd probably predict I'd love) but because it once again demonstrates that our intuition about things can lead us astray.  Who would have guessed, for example, that half of the distinct words in a long passage of text would occur only once?  I love the way science, and scientific analysis, can correct our fallible "common sense" about how things work.

And, as with Zipf, Brevity, and Hapax Legomenon, open up doors to understanding things we never dreamed of.

****************************************

Ever get frustrated by scientists making statements like "It's not possible to emulate a human mind inside a computer" or "faster-than-light travel is fundamentally impossible" or "time travel into the past will never be achieved?"

Take a look at physicist Chiara Marletto's The Science of Can and Can't: A Physicist's Journey Through the Land of Counterfactuals.  In this ambitious, far-reaching new book, Marletto looks at the phrase "this isn't possible" as a challenge -- and perhaps, a way of opening up new realms of scientific endeavor.

Each chapter looks at a different open problem in physics, and considers what we currently know about it -- and, more importantly, what we don't know.  With each one, she looks into the future, speculating about how each might be resolved, and what those resolutions would imply for human knowledge.

It's a challenging, fascinating, often mind-boggling book, well worth a read for anyone interested in the edges of scientific knowledge.  Find out why eminent physicist Lee Smolin calls it "Hugely ambitious... essential reading for anyone concerned with the future of physics."

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]
 

Monday, April 5, 2021

Coincidence and meaning

A friend and loyal reader of Skeptophilia sent me a link to an interview with author Sharon Hewitt Rawlette about her recent book, The Source and Significance of Coincidences, along with a note saying, "Would love to hear your thoughts about this."

I'm usually loath to give my opinion about a claim after reading a summary, book review, or interview without reading the book itself, but considering that I had issues with just about everything in the interview, I can say with some confidence that it's unlikely the book would make me any less doubtful.  Rawlette's idea is that coincidences -- at least some of them -- "mean something."  Something, that is, other than two events coinciding, which is the definition of coincidence.  Here's how she defines it:

For me, a coincidence is something that is not blatantly supernatural. It could be just chance. But there’s part of you that says, "This seems more meaningful than that."  And maybe just seems a little too improbable to be explained as chance.  It seems too meaningful to you, personally, given where you are in your life.  It’s something that makes you wonder, "Is there something more?"

Coincidences can certainly be startling, I'll admit that.  I was on my way to an appointment a while back and was listening to Sirius XM Radio's classical station "Symphony Hall," and one of my favorite pieces came on -- Beethoven's Moonlight Sonata.  I was maybe two-thirds of the way through the first movement when I arrived, and I was short on time so regretfully had to turn the music off and get out of the car.

When I opened the door to the waiting room, there was music coming over the speakers.  Beethoven's Moonlight Sonata -- at almost precisely the same spot where I'd turned off the radio.

Immediately, I wondered if they were also listening to Sirius XM, but they weren't.  It was the usual selection of calming music you hear in doctors' offices everywhere.  It really had been... "just a coincidence."

[Image licensed under the Creative Commons Karry manessa, Coincidence with Smile, CC BY-SA 4.0]

But did it mean anything?  How would I know?  And if it did mean something... what?

Rawlette tells us what her criteria are:

I don’t think there’s a really cut and dry answer.  There are a variety of factors that I look at in my own life when I’m trying to figure out whether something is just a coincidence or something more.  One of those is how improbable it really is...  But I also think an important element is how you feel about it.  What is your intuition telling you?  How strongly do you feel about it?  And is it telling you something that really seems to help you emotionally?  Spiritually?  Is it providing you with guidance?

Here, we're moving onto some seriously shaky ground.

First of all, there's improbability.  How do you judge that?  I'd say that the probability of a random selection on a classical music station being the same as the selection playing in a doctor's office at the same time is pretty damn low, but that's just a hand-waving "seems that way to me" assessment.  Amongst the difficulties is that humans are kind of terrible at statistical reckoning.  For example, let's say you throw two coins twenty times each.  With the first coin, you get twenty heads in a row.  With the second coin, you get the following:

HTTHHHTHTHTTHHHTHTTH

Which one of those two occurrences is likelier?

It turns out that they have exactly the same probability: (1/2)^20 -- a very, very small number, about one in a million.  The reason most people pick the second as likelier is that it looks random, and comes close to the 50/50 distribution of heads and tails that we all learned was what came out of random coin-flips back in the seventh grade.  The first, on the other hand, looks like a pattern, and it seems weird and improbable.

The second problem is that here -- as with Rawlette's coincidences -- we're only assessing their probability after the fact.  In our coin flip patterns above, after they happen the probability that they happened is 100%.  I'll agree with her insofar as to say that in the first case (twenty heads in a row), I'd want to keep flipping the coin to see what would come up next, and if I keep getting heads, to see if I could figure out what was going on.  The second, corresponding much more to what I expected, wouldn't impel me to investigate further.

But the fact remains that as bizarre as it sounds, if you throw a (fair) coin a huge number of times -- say, a billion times -- the chance of there being twenty heads in a row somewhere in the array of throws is nearly 100%.  (Any statisticians in the studio audience could calculate for us what the actual probability is; suffice it to say it's pretty good.)
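For the record, the studio-audience calculation can be done exactly with a small Markov chain (a sketch of my own, assuming numpy): track how many consecutive heads you currently have, add an absorbing "saw the run" state, and raise the transition matrix to the n-th power.

```python
import numpy as np

def prob_run_of_heads(k, n):
    """Probability of at least one run of k consecutive heads in n fair
    flips, computed via a Markov chain raised to the n-th power (numpy's
    matrix_power uses repeated squaring, so n = 10**9 is cheap)."""
    # States 0..k-1: current trailing-heads count; state k: run achieved.
    T = np.zeros((k + 1, k + 1))
    for i in range(k):
        T[i, 0] = 0.5        # tails resets the streak
        T[i, i + 1] = 0.5    # heads extends it
    T[k, k] = 1.0            # absorbing: once seen, always seen
    # Start in state 0; return the probability mass absorbed after n flips.
    return np.linalg.matrix_power(T, n)[0, k]

print(prob_run_of_heads(6, 100))      # even 100 flips contain HHHHHH more often than not
print(prob_run_of_heads(20, 10**9))   # a billion flips: essentially certain
```

For twenty heads somewhere in a billion flips, the answer is so close to 1 that it rounds to certainty in floating point -- "pretty good" odds indeed.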

Third, of course, is that we run smack into our old friend dart-thrower's bias -- our hard-wired tendency to notice what seem to us to be outliers.  We don't pay any attention to all the times we walk into the doctor's office (or anywhere else) and the music playing isn't what we were just listening to, because it's so damn common.  The times the music is the same stand out -- and thus, we tend both to overcount them and weigh them more heavily in our attention and our memories.

Rawlette also doesn't seem to have any sort of criteria for telling the difference between random coincidence, meaningful coincidence, and something that is a deliberately targeted "sign" or "message" directed at you personally, other than how you feel about it:

I think the most impactful coincidences in people’s lives tend to be most improbable.  It’s very hard to explain them away.  But, the counterpart to that is that those coincidences also seem to have a very strong emotional impact on us.  They’re not only very improbable—very strange—but they carry a very strong emotional weight.  And we can’t escape that they’re significant somehow, even if we’re not exactly sure what the message is.  And, often, they do turn out to be life-changing.
So you are estimating how likely something is, assessing whether it was likely after the fact, deciding what the event's significance is, and deciding what the message (if any) consisted of.  It's putting a lot of confidence in our own abilities to perceive and understand the world correctly.  And if there's one thing I've learned from years of teaching neuroscience, it's that our sensory/perceptive and cognitive systems are (as Neil deGrasse Tyson put it) "poor data-taking devices... full of ways of getting it wrong."  I don't trust my own brain most of the time.  It's got a poor, highly-distractible attention span, an unreliable memory, and gets clogged up with emotions all too easily.  It's why I went into science; I learned really early that my personal interpretations of the world were all too often wrong, and I needed a more rigorous, reliable algorithm for determining what I believed to be true.

Now, I won't say I'm never prone to giving emotional weight to events after the fact.  As an example, I was quite close to my Aunt Pauline, my grandfather's youngest sister (youngest of twelve children!).  Pauline was a sweet person, childless and ten years a widow, when I was going to college at the University of Louisiana.  Every once in a while -- maybe every two or three months or so -- I'd stop by her house on the way home from school.  It wasn't far out of the way, and she was always thrilled to see me, and would bring out the coffee and a tray of cookies to share as we chatted.  One day, it occurred to me that it'd been a while since I'd seen her.  I don't know why she came to my mind; nothing I can think of reminded me.  I just suddenly thought, "I should stop by Aunt Pauline's and see how she's doing."

So I did.  She was cheerful as ever, and we had a lovely visit.

Two days later, she died of a heart attack at age 73.

I don't think I'd be human if the thought "how strange I was impelled to visit her!" didn't go through my mind.  But even back then, when I was twenty years old and much more prone to believe in unscientific explanations for things, it didn't quite sit right with me.  I visited with Aunt Pauline regularly anyhow; it certainly wasn't the first time I'd gotten in my car at the university and thought, "Hey, I should drop by."  I had lots of other older relatives who had died without my being at all inclined to visit immediately beforehand.  The "this is weird" reaction I had was understandable enough, but that by itself didn't mean there was anything supernatural going on.

I was really glad I'd gotten to see her, but I just didn't -- and don't -- think I was urged to visit her by God, the Holy Spirit, the collective unconscious, or whatnot.  It was simply a fortuitous, but circumstantial, coincidence.

Rawlette then encourages us not to passively wait around for meaningful coincidences to occur to us, but to seek them out actively:
I think one of the most important things, when you experience a coincidence, is to keep an open mind about where it’s coming from and what it might mean.  Because it’s very easy to try to fit a coincidence into the way of thinking about the world that we already have—whatever our worldview is.  And coincidences generally come into our lives to expand that worldview.  They generally won’t fit neatly into the boxes that we have.  We might try to shove them in there, so we can stop thinking about it and make them less mysterious, but they generally are going to make us question some things that we thought we knew about the world.
What this puts me in mind of is the odd pastime of being a "Randonaut" -- using a random number generator to produce a set of geographical coordinates near you, going there, and looking for something strange -- about which I wrote a couple of years ago.  People report finding all sorts of bizarre things, some of them quite disturbing, while doing this.  I won't deny that it's kind of a fun concept, and not intrinsically weirder than my wife's near-obsession with geocaching, but it suffers from the same problems we considered earlier if you try to ascribe too much meaning to what you find.  If you're told to go to a random location and look around until you find something odd, with no criteria and no limitations, you're putting an awful lot of confidence in your own definition of "odd."  And, as I pointed out in that post, in my experience Weird Shit is Everywhere.  Wherever you are, if you look hard enough, you can find something mysterious, something that seems like a coincidence or a message or (at least) a surprise; but all that means is that you had no real restrictions on what you were looking for, and that the world is an interesting place.
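For what it's worth, the "random coordinates near you" part takes only a few lines of Python to do yourself.  Here's a minimal sketch -- the starting coordinates and the five-kilometer radius are just illustrative assumptions, not anything the Randonauts prescribe:

```python
import math
import random

def random_point_near(lat, lon, radius_km=5.0):
    """Pick a uniformly distributed point within radius_km of (lat, lon).

    Uses a rough flat-earth approximation -- plenty accurate over a
    few kilometers, which is all an outing like this needs.
    """
    # sqrt() spreads the points uniformly over the disk's *area*,
    # rather than bunching them toward the center.
    r = radius_km * math.sqrt(random.random())
    theta = random.uniform(0, 2 * math.pi)

    dlat = (r * math.cos(theta)) / 111.32   # ~111.32 km per degree of latitude
    dlon = (r * math.sin(theta)) / (111.32 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

random.seed()  # genuine randomness -- that's the whole point
print(random_point_near(42.44, -76.50))  # a spot within 5 km of Ithaca, NY
```

Whether what you find when you get there counts as mysterious is, of course, entirely up to you.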

As an aside, this reminds me of my college friend's proof that all numbers are interesting:
  • Assume that there are some numbers that are uninteresting.
  • Let "x" be the first such number.
  • Since being the first uninteresting number is itself interesting, this contradicts our initial assumption, and there are no uninteresting numbers.
Anyhow, all this rambling is not meant to destroy your sense that the universe we live in is mysterious and beautiful.  It is both, and much more.  I am just exceedingly cautious about ascribing meaning to events without a hell of a lot more to go on than my faulty intuition.  I'd much rather rely on the tried-and-true methods of science to determine what's out there -- which, for me, uncovers plenty of stunningly bizarre stuff to occupy my mind indefinitely.

But as I said at the outset: I haven't read Rawlette's book, and if you have and I'm missing the point, please enlighten me in the comments section.  I don't want to commit the Straw Man fallacy, mischaracterizing her claim and then arguing against that mischaracterization.  But based on her interview alone, I'm not really buying it.

On the other hand, if the next few times I go from my car to an office, exactly the same music is playing again and again, I'll happily reconsider my stance -- all arguments about the statistics of flipping twenty heads in a row notwithstanding.

**************************************

This week's Skeptophilia book-of-the-week is a bit of a departure from the usual science fare: podcaster and author Rose Eveleth's amazing Flash Forward: An Illustrated Guide to the Possible (and Not-So-Possible) Tomorrows.

Eveleth looks at what might happen if twelve things that are currently in the realm of science fiction became real -- a pill becoming available that obviates the need for sleep, for example, or the development of a robot that can make art.  She then extrapolates from those, to look at how they might change our world, to consider ramifications (good and bad) from our suddenly having access to science or technology we currently only dream about.

Eveleth's book is highly entertaining not only for its content but for its format: it's a graphic novel, in which a number of extremely talented artists -- including Matt Lubchansky, Sophie Goldstein, Ben Passmore, and Julia Gfrörer -- illustrate her twelve new worlds, literally drawing what we might be facing in the future.  Her conclusions, and the artists' renderings of them, are brilliant, funny, shocking, and most of all, memorable.

I love her visions even if I'm not sure I'd want to live in some of them.  The book certainly brings home the old adage of "Be careful what you wish for, you may get it."  But as long as they're in the realm of speculative fiction, they're great fun... especially in the hands of Eveleth and her wonderful illustrators.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]



Wednesday, January 30, 2019

Miraculous mathematics

I've blogged before about "miraculous thinking" -- the idea that an unlikely occurrence somehow has to be a miracle simply based on its improbability.  But yesterday I ran into a post on the wonderful site RationalWiki that showed, mathematically, why this is a silly stance.

Called "Littlewood's Law of Miracles," after British mathematician John Edensor Littlewood, the man who first codified it in this way, it goes something like this:
  • Let's say that a "miracle" is defined as something that has a likelihood of occurring of one in a million.
  • We are awake, aware, and engaged on the average about eight hours a day.
  • An event of some kind occurs about once a second.  During the eight hours we are awake, aware, and engaged, this works out to 28,800 events per day, or just shy of a million events in an average month.  (864,000, to be precise.)
  • The likelihood of observing a one-in-a-million event in a given month is therefore 1 - (999,999/1,000,000)^1,000,000, or about 0.632.  In other words, we have better than 50/50 odds of observing a miracle next month!
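The arithmetic in those bullet points is easy to verify; here's a quick Python check, run both with the 864,000-events figure and with Littlewood's round million:

```python
# Littlewood's Law: probability of witnessing at least one
# one-in-a-million event, given n discrete events per month.
p_miracle = 1e-6

for n in (864_000, 1_000_000):
    p_at_least_one = 1 - (1 - p_miracle) ** n
    print(f"n = {n:>9,}: P(at least one miracle) = {p_at_least_one:.3f}")
```

With the full million events you get the 0.632 figure (no accident that this is close to 1 - 1/e); even with the more honest 864,000, the odds are still about 0.58 -- better than even.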
Of course, this is some fairly goofy math, and makes some silly assumptions (one discrete event every second, for example, seems like a lot).  But Littlewood does make a wonderful point; given that we're only defining post hoc the unlikeliness of an event that has already occurred, we can declare anything we want to be a miracle just based on how surprised we are that it happened.  And, after all, if you want to throw statistics around, the likelihood of any event happening that has already happened is 100%.

So, like the Hallmark cards say, Miracles Do Happen.  In fact, they're pretty much unavoidable.

Peter Paul Rubens, The Miracle of St. Ignatius (1617) [Image is in the Public Domain]

You hear this sort of thing all the time, though, don't you?  A quick perusal of sites like Miracle Stories will give you dozens of examples of people who survived automobile accidents without a scratch, made recoveries from life-threatening conditions, were just "in the right place at the right time," and so on.  And it's natural to sit up and take notice when these things happen; this is a built-in perceptual error called dart-thrower's bias.  The name comes from a thought experiment: you're in a pub while a darts game is going on across the room.  When do you notice the game?  When there's a bullseye, of course.  The rest is just background noise.  And when you think about it, it's very reasonable that we have this bias.  After all, which has the greater evolutionary cost -- noticing the outliers when they're irrelevant, or failing to notice them when they're relevant?  If the unusual occurrence is a rustle in the grass, it's far better to pay attention when it turns out to be the wind than to ignore it when it turns out to be a lion.

And of course, on the Miracle Stories webpage, no mention is made of all of the thousands of people who didn't seem to merit a miracle, and who died in the car crash, didn't recover from the illness, or were in the wrong place at the wrong time.  That sort of thing just forms the unfortunate and tragic background noise to our existence -- and it is inevitable that it doesn't register with us in the same way.

So, we should expect miracles, and we are hardwired to pay more attention to them than we do to the 999,999 other run-of-the-mill occurrences that happen in a month.  How do we escape from this perceptual error, then?

Well, the simple answer is that in some senses, we can't.  It's understandable to be surprised by an anomalous event or an unusual pattern.  (Think, for example, how astonished you'd be if you flipped a coin and got ten heads in a row.  You'd probably think, "Wow, what's the likelihood?" -- but any other specific pattern of heads and tails -- say, H-T-T-H-H-H-T-H-T-T -- has exactly the same probability of occurring.  It's just that the first looks like a meaningful pattern, and the second one doesn't.)  The solution, of course, is the same as the solution for just about everything: don't turn off your brain.  It's okay to think, at first, "That was absolutely amazing!  How can that be?", as long as afterwards we think, "Well, there are thousands of events going on around me right now that are of equally low probability, so honestly, it's not so weird after all."
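If you don't believe the coin-flip claim, it takes only a few lines of Python to check it, both analytically and empirically (the two sequences are just the ones from the example above):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Analytically: every specific sequence of 10 fair flips has
# probability (1/2)**10 = 1/1024, "special"-looking or not.
print((1 / 2) ** 10)  # 0.0009765625

# Empirically: all-heads and the "random-looking" sequence
# come up about equally often in simulated flips.
targets = {"HHHHHHHHHH": 0, "HTTHHHTHTT": 0}
trials = 200_000
for _ in range(trials):
    seq = "".join(random.choice("HT") for _ in range(10))
    if seq in targets:
        targets[seq] += 1

for seq, count in targets.items():
    print(f"{seq}: {count / trials:.5f}")  # both hover near 1/1024, i.e. 0.00098
```

Ten heads in a row feels miraculous; H-T-T-H-H-H-T-H-T-T feels like nothing at all.  The simulation doesn't care.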

All of this, by the way, is not meant to diminish your wonder at the complexity of the universe, just to direct that wonder at the right thing.  The universe is beautiful, mysterious, and awe-inspiring.  It is also, fortunately, understandable when viewed through the lens of science.  And I think that's pretty cool -- even if no miracles occur today.

**********************************

The horrific murders of two fifteen-year-old girls -- one in 1983, the other in 1986 -- shook the quiet countryside of Leicestershire, England.  Police investigations came up empty-handed, and in the interim, people who lived in the area feared there was a psychopath in their midst.

A young geneticist at the University of Leicester, Alec Jeffreys, stepped up with what he said could catch the murderer -- a new (at the time) technique called DNA fingerprinting.  He was able to extract a clear DNA signature from the bodies of the victims, but without a match -- without anyone else's DNA to compare it to -- there was no way to use it to catch the criminal.

The way police and geneticists teamed up to catch an insane child killer is the subject of Joseph Wambaugh's book The Blooding.  It is an Edgar Award nominee, and is impossible to put down.  This case led to the now-commonplace use of DNA fingerprinting in forensics labs -- and its first application in a criminal trial makes for fascinating reading.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]





Friday, January 4, 2019

Criticism bias

Following hard on the heels of yesterday's guardedly optimistic post about the potential malleability of people's views on such fraught topics as politics, today we have a recent and markedly less cheering study wherein we find that we don't apply the same moral lens to our own opinions as we do to other people's.

It may seem self-evident, but it's still kind of disappointing.  And the piece of research that showed this -- by Jack Cao, Max Kleiman-Weiner, and Mahzarin R. Banaji of Harvard University's Department of Psychology -- is as elegant as it is incontrovertible.

In "People Make the Same Bayesian Judgments They Criticize in Others," which appeared in November's issue of Psychological Science, we find out that people are quick to use dispassionate evidence and logic to make their own decisions, but don't like it when other people do the same thing.

What Cao et al. did was to present test subjects with a simple scenario.  For example, a surgeon walks into the operating room to perform a procedure.  Is the surgeon more likely to be male or female?  Another one said that you're being attended by a doctor and a nurse.  One is male and one is female.  Which is which?

Clearly, just by the statistics -- regardless of what you think of issues of gender equality -- surgeons and doctors are more likely to be male and nurses more likely to be female.  And, in fact, almost everyone applied that logic to their own choices.  But then the researchers turned the tables.  Instead of asking the subjects what they thought about the question, they presented the answers given by a fictional stranger.  Jim answered that the surgeon and the doctor were more likely to be male and the nurse more likely to be female.  How does Jim rank on scales of morality, intelligence, and respect for equal rights?

Based on that one piece of information, respondents were harsh.  Almost across the board, people criticized Jim, saying he was less moral, less intelligent, and less likely to support equal rights than someone who had answered the other way.   "People don't like it when someone uses group averages to make judgments about individuals from different social groups who are otherwise identical.  They perceive that person as not only lacking in goodness, but also lacking in intelligence," Cao said, in a press release/interview in EurekAlert.  "But when it comes to making judgments themselves, these people make the same type of judgment that they had so harshly criticized in others...  This is important because it suggests that the distance between our values and the people we are is greater than we might think.  Otherwise, people would not have made judgments in a way that they found to be morally bankrupt and incompetent in others."

[Image licensed under the Creative Commons Deval Kulshrestha, Statua Iustitiae, CC BY-SA 4.0]

This is troubling in a couple of respects.  One is that we tend to give ourselves far more slack than we do other people.  Part of this, of course, is that we know our own internal state (at least insofar as it is possible).  We know our own attitudes and morals, while we're only guessing about other people's.  So when we apply purely statistical arguments to a question like the ones posed by Cao et al., we can say, "Okay, I know this sounds biased, but I'm not, actually.  I'm just basing my answer on the numbers, not what I think should be the case."

The other, though, is even worse.  It's how willing we are to be severely critical of other people based upon virtually nothing in the way of evidence.  How often do we find out one thing about someone -- he's a Catholic, she's a Republican, he's a lawyer, she's a teenager -- and decide we know a great many other things about them without any further information?  Worse still, once those decisions are made, we base our moral judgments on what we think we know, and they become very resistant to change.

As a high school teacher, I can't tell you the number of times I've been asked questions like, "How do you handle dealing with being disrespected by surly teenagers every day?"  Well, the truth is, the vast majority of the kids in my classes aren't surly at all, and the last time I was seriously disrespected by a student was a very long time ago.  But that knee-jerk judgment that if a person is a teenager, (s)he must be a pain in the ass, is automatic, widespread, and pervasive -- and remarkably difficult to challenge.

I think what this demands is a little bit of humility about our own fallibility.  We can't help making judgments, but we need to step back and examine them for what they are before we simply accept them.  Eradicating this kind of on-the-fly evaluation is the key to eliminating racism, sexism, and various other forms of bigotry that are based not on any kind of empirical evidence, but on our tendency to use one or two facts to infer complex understanding.

As Oliver Wendell Holmes put it, "No generalization is worth a damn, including this one."  Or, to quote skeptic and writer Michael Shermer, "Don't believe everything you think."

****************************************

This week's Skeptophilia book recommendation is one of personal significance to me -- Michael Pollan's latest book, How to Change Your Mind.  Pollan's phenomenal writing in tours de force like The Omnivore's Dilemma and The Botany of Desire shines through here, where he takes on a controversial topic -- the use of psychedelic drugs to treat depression and anxiety.

Hallucinogens like DMT, LSD, ketamine, and psilocybin have long been classified as schedule-1 drugs -- chemicals which are off limits even for research except by a rigorous and time-consuming approval process that seldom results in a thumbs-up.  As a result, most researchers in mood disorders haven't even considered them, looking instead at more conventional antidepressants and anxiolytics.  It's only recently that there's been renewed interest, when it was found that one administration of drugs like ketamine, under controlled conditions, was enough to alleviate intractable depression, not just for hours or days but for months.

Pollan looks at the subject from all angles -- the history of psychedelics and why they've been taboo for so long, the psychopharmacology of the substances themselves, and the people whose lives have been changed by them.  It's a fascinating read -- and I hope it generates a sea change in our attitudes toward chemicals that could help literally millions of people deal with disorders that can rob their lives of pleasure, satisfaction, and motivation.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]




Thursday, November 1, 2018

Mental health priorities

What do you hear about more on the news, homicides in the United States, or suicides in the United States?

Unless you're watching drastically different news media than I do, you answered "homicide."  The media, and many people in the government, harp continuously on how dangerous our cities are, how we're all terribly vulnerable, and how you need to protect yourself.  This, of course, plays right into the narrative of groups like the NRA, whose bread and butter is convincing people they're unsafe.

Now, don't get me wrong; there are dangerous places in the United States and elsewhere.  And I'm not arguing against -- hell, I'm not even addressing -- the whole issue of gun ownership and a person's right to defend him or herself.  But the sense in this country that homicide is a huge problem and suicide is largely invisible reflects a fundamental untruth.

Because in the United States, suicide is roughly two and a half times as common as homicide.  The most recent statistic on homicide is 5.3 homicides per 100,000 people.  Not only is this lower than the global average (which in 2016 was 7.3 violent deaths per 100,000 people), it has been declining steadily since 1990.

Suicide, on the other hand?  The current rate is 13.0 suicides per 100,000 people, and unlike homicide, the rate has been steadily increasing.  Between 1999 and 2014, the suicide rate in the United States went up by 24%.

It's appalling that most Americans don't know this.  A study released this week by researchers at the University of Washington, Northeastern University, and Harvard University showed that the vast majority of United States citizens rank homicide as a far higher risk than suicide.

"This research indicates that in the scope of violent death, the majority of U.S. adults don't know how people are dying," said Erin Morgan, lead author and doctoral student in the Department of Epidemiology at the University of Washington School of Public Health.  "Knowing that the presence of a firearm increases the risk for suicide, and that firearm suicide is substantially more common than firearm homicide, may lead people to think twice about whether or not firearm ownership and their storage practices are really the safest options for them and their household...  The relative frequencies that respondents reported didn't match up with the state's data when we compared them to vital statistics.  The inconsistency between the true causes and what the public perceives to be frequent causes of death indicates a gap in knowledge."

[Image licensed under the Creative Commons Wildengamuld, Free Depression Stock Image, CC BY-SA 4.0]

This further highlights the absurdity of our abysmal track record for mental health care.  Political careers are made over stances on crime reduction.  How many politicians even mention mental health policy as part of their platform?

The result is that even a lot of people who have health insurance have lousy coverage for mental health services.  Medications like antipsychotics and anxiolytics are expensive, and often not covered or only partially covered.  I have a friend who has delayed getting on (much-needed) antidepressants for years -- mostly because of the difficulty of finding a qualified psychiatrist who can prescribe them, the fact that his health insurance has piss-poor mental health coverage, and the high co-pay on the medication itself.

No wonder the suicide rate is climbing.  Dealing with mental health is simply not a national priority.

It's time to turn this around.  Phone your local, state, and federal representatives.  My guess is that at least some part of the inaction is not deliberate; I'll bet that just as few of them know the statistics on suicide and homicide as the rest of the populace.

But once we know, it's time to act.  As study co-author Erin Morgan put it, "We know that this is a mixture of mass and individual communication, but what really leads people to draw the conclusions that they do?  If people think that the rate of homicide is really high because that's what is shown on the news and on fictional TV shows, then these are opportunities to start to portray a more realistic picture of what's happening."

*************************************

This week's Skeptophilia book recommendation is a wonderful read -- The Immortal Life of Henrietta Lacks by Rebecca Skloot.  Henrietta Lacks was the wife of a poor farmer who was diagnosed with cervical cancer in 1951, and underwent an operation to remove the tumor.  The operation was unsuccessful, and Lacks died later that year.

Her tumor cells are still alive.

The doctor who removed the tumor realized their potential for cancer research, and patented them, calling them HeLa cells.  It is no exaggeration to say they've been used in every medical research lab in the world.  The book not only puts a face on the woman whose cells were taken and used without her permission, but considers difficult questions about patient privacy and rights -- and it makes for a fascinating, sometimes disturbing, read.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]



Thursday, October 18, 2018

Statistical fudging

The last thing we need right now is for people to have another reason to lose their trust in scientists.

It's a crucial moment.  On the one hand, we have the Intergovernmental Panel on Climate Change, which just a week and a half ago released a study concluding that we have only twenty or so years left in which to take action to limit warming to an average of 1.5-2.0 C by 2050 -- and even that much warming will almost certainly increase the number of major storms, shift patterns of rainfall, cause a drastic rise in sea level, and increase the number of deadly heat waves.  And it bears mention that a lot of climate scientists think even this is underselling the point, giving politicians the sense that we can wait to take any action at all.  "It’s always five minutes to midnight, and that is highly problematic," said Oliver Geden, social scientist and visiting fellow at the Max Planck Institute for Meteorology in Hamburg, Germany.  "Policymakers get used to it, and they think there’s always a way out."

Then on the other hand we have our resident Stable Genius, Donald Trump, who claimed two days ago that he understands everything he needs to know about climate because he has "a natural instinct for science."  To bolster this claim, he made a statement that apparently sums up the grand total of his expertise in climatology, which is that "climate goes back and forth, back and forth."  He then added, "You have scientists on both sides of it.  My uncle was a great professor at MIT for many years, Dr. John Trump.  And I didn’t talk to him about this particular subject, but... I will say that you have scientists on both sides of the picture."

It bears mention that Dr. John Trump was an electrical engineer, not a climatologist.  And Donald Trump didn't even ask him for an opinion.

So we have scientists trying like hell to get the public to see that scientific results are reliable, and people like Trump and his cronies trying to portray them as engaging in nothing better than guesswork and speculation (and as having an agenda).  That's why I did a serious facepalm when I read the article sent to me a few days ago by a friend and frequent contributor to Skeptophilia, Andrew Butters, author and blogger over at Potato Chip Math (which you should all check out, because it's awesome).

This article, which appeared over at CBC, comes from a different realm of science -- medical research.  It references a paper authored by Min Qi Wang, Alice F. Yan, and Ralph V. Katz that appeared in Annals of Internal Medicine, titled, "Researcher Requests for Inappropriate Analysis and Reporting: A U.S. Survey of Consulting Biostatisticians."

If the title isn't alarming enough by itself, take a look at what Wang et al. found:
Inappropriate analysis and reporting of biomedical research remain a problem despite advances in statistical methods and efforts to educate researchers...  [Among] 522 consulting biostatisticians... (t)he 4 most frequently reported inappropriate requests rated as “most severe” by at least 20% of the respondents were, in order of frequency, removing or altering some data records to better support the research hypothesis; interpreting the statistical findings on the basis of expectation, not actual results; not reporting the presence of key missing data that might bias the results; and ignoring violations of assumptions that would change results from positive to negative.  These requests were reported most often by younger biostatisticians.
The good news is that a lot of the biostatisticians reported refusing the requests to alter the data.  (Of course, given that this is self-reporting, you have to wonder how many would voluntarily say, "Yeah, I do that all the time.")

"I feel like I've been asked to do quite a few of these at least once," said Andrew Althouse, biostatistician at the University of Pittsburgh.  "I do my best to stand my ground and I've never falsified data....  I was once pressured by a surgeon to provide data on 10-year survival rates after a particular surgical intervention.  The problem — the 10-year data didn't exist because the hospital hadn't been using the procedure long enough...  The surgeon argued with me that it was really important and pleaded with me to find some way to do this.  He eventually relented, but it was one of the most jarring examples I've experienced."

[Image is in the Public Domain]

McGill University bioethicist Jonathan Kimmelman is among those who are appalled by this finding.  "If statisticians are saying no, that's great," he said.  "But to me this is still a major concern...  Everyone has had papers that are turned down by journals because your results were not statistically significant.  Getting tenure, getting pay raises, all sorts of things depend on getting into those journals so there is really strong incentives for people to fudge or shape their findings in a way that it makes it more palatable for those journals.  And what that shows is that there are lots of instances where there is threat of adulteration of the evidence that we use."

It's not surprising that, being human, scientists are prone to the same foibles and pitfalls as the rest of us.  However, you'd think that if you go into science, it's because you have a powerful commitment to the truth.  As Kimmelman says, the stakes are high -- not only prestige, but grant money.  Still, one would hope ethics would win over expediency.

And this is a particularly pivotal moment, when we have an administration that is deeply in the pockets of the corporations, and has shown a complete disregard for scientific findings and the opinions of experts.  The last thing we need is to give them more ammunition for claiming that science is unreliable.

But it's still a good thing, really, that Wang et al. have done this study.  You can't fix a problem when you don't know anything about it.  (Which is a truism Trump could learn from.  "Climate goes back and forth, back and forth," my ass.)  It's to be hoped that this will lead to better oversight of statistical analysis and more stringent criteria during peer review.  Re-establishing the public trust in scientists is absolutely critical.  Our lives, and the long-term habitability of the Earth, could depend on it.

 ***********************************

This week's Skeptophilia book recommendation is something everyone should read.  Jonathan Haidt is an ethicist who has been studying the connections between morality and politics for twenty-five years, and whose contribution to our understanding of our own motives is second to none.  In The Righteous Mind: Why Good People are Divided by Politics, he looks at what motivates liberals and conservatives -- and how good, moral people can look at the same issues and come to opposite conclusions.

His extraordinarily deft touch for asking us to reconsider our own ethical foundations, without either being overtly partisan or accepting truly immoral stances and behaviors, is a needed breath of fresh air in these fractious times.  He is somehow able to walk that line of evaluating our own behavior clearly and dispassionately, and holding a mirror up to some of our most deep-seated drives.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]