Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, August 3, 2021

The voices of our ancestors

One of the (many) reasons I love science is that as a process, it opens up avenues to knowledge that were previously thought closed.  Couple that with the vast improvements in technological tools, and you have a powerful combination for exploring realms that once were not considered "science" at all.

Take, for example, historical linguistics, the discipline that studies the languages spoken by our ancestors.  It is a particular fascination of mine -- in fact, it is the field I studied for my MA.  (Yes, I know I spent 32 years teaching biology.  It's a long story.)  I can attest to the fact that it's a hard enough subject, even when you have a plethora of written records to work with, as I did (my thesis was on the effects of the Viking invasions on Old English and Old Gaelic).  When records are scanty, or worse yet, non-existent, the whole thing turns into a highly frustrating, and highly speculative, topic.

This is the field of "reconstructive linguistics" -- trying to infer the characteristics of the languages spoken by our distant ancestors, for the majority of which we have not a single written remnant.  If you look in an etymological dictionary, you will see a number of words that have starred ancestral root words, such as *tark, an inferred verb stem from Proto-Indo-European that means "to twist."  (A descendant word that has survived until today is torque.)  The asterisk means that the word is "unattested" -- i.e., there's no proof that this is what the word actually was, in the original ancestor language, because there are no written records of Proto-Indo-European.  And therein, of course, lies the problem.  Because it's an unattested word, no one can ever be sure if it's correct (which the linguists will tell you straight up; they're not trying to claim more than they should -- thus the asterisks). 

So if you think a particular Proto-Indo-European root reconstructs as *lug and your colleague thinks it's *wuk, you can argue about it till next Sunday and you still will never be certain who's right, as there are very few Proto-Indo-Europeans around these days who could tell you for sure.

Okay, then how do the linguists even come up with a speculative ancestral root?  The inferred words in etymological dictionaries come mainly from the application of one of the most fundamental rules of linguistics: Phonetic changes are regular.
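Just to make "regular" concrete, here's a toy sketch in Python -- emphatically not the real comparative method, just an illustration under simplified assumptions -- that applies a tiny table of sound correspondences (the same stop-to-fricative shifts discussed in the next paragraph) to a handful of familiar cognate pairs:

```python
# A toy illustration of "phonetic changes are regular": a sound shift applies
# across the board, not word by word.  The correspondence table is drastically
# simplified, and the cognate pairs are chosen purely for illustration.

SHIFTS = {"p": "f", "t": "th", "k": "h"}   # ancestral stop -> Germanic fricative (simplified)

def initial_correspondence_holds(conservative: str, germanic: str) -> bool:
    """Check whether the Germanic word begins with the expected reflex of the other word's first sound."""
    first = conservative[0]
    expected = SHIFTS.get(first, first)     # sounds not in the table are assumed unchanged
    return germanic.startswith(expected)

# Latin (which kept the old stops) vs. English (which shifted them); "cornu" respelled with k
cognates = [("pes", "foot"), ("tres", "three"), ("kornu", "horn")]
for latin, english in cognates:
    print(f"{latin:6} ~ {english:6} correspondence holds: {initial_correspondence_holds(latin, english)}")
```

Real reconstruction runs the logic in the other direction -- from many daughter languages back toward an unattested parent -- but it's the regularity that makes the inference possible at all.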




As a quick illustration of this -- and believe me, I could write about this stuff all day -- we have Grimm's Law, which describes how stops in Proto-Indo-European became fricatives in Germanic languages, but they remained stops in other surviving (non-Germanic) Indo-European languages.  One example is the shift of /p/ to /f/, which is why we have foot (English), fod (Norwegian), Fuss (German), fótur (Icelandic), and so on, but poús (Greek), pes (Latin), peda (Lithuanian), etc.  These sorts of sound correspondences allowed us to make guesses about what the original word sounded like.

Note the use of the past tense in the previous sentence.  Because now linguists have a tool that will take a bit of the guesswork out of reconstructive linguistics -- and shows promise of bringing it into the realm of a true science.

An article in Science World Report, entitled "Ancient Languages Reconstructed by Linguistic Computer Program," describes how a team of researchers at the University of British Columbia and the University of California - Berkeley has developed software that uses input lexicons to reconstruct languages.  (Read their original paper here.)  This tool automates a process that once took huge amounts of painstaking research, and even this first version has had tremendous success -- in its first run, using data from 637 Austronesian languages currently spoken in Asia and the South Pacific, the program generated proto-Austronesian roots, 85% of which matched the roots derived by experts in that language family to within one phoneme.

What I'm curious about, of course, is how good the software is at deriving root words for which we do have written records.  In other words, checking its results against something other than the unverifiable derivations that historical linguists were already doing.  For example, would the software be able to take lexicons from Spanish, French, Portuguese, Italian, Catalan, Provençal, and so on, and correctly infer the Latin stems?  To me, that would be the true test; to see what the shortcomings were, you have to have something real to check its results against. 
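If I ever got to run that test, I'd also want a way to score it.  The "within one phoneme" criterion from the Austronesian study could be crudely approximated with an edit distance over the spelled forms; here's a minimal sketch, where the "reconstructed" forms are invented for illustration and are not output from the actual program:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance -- a rough stand-in for 'phonemes of difference'."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

# Attested Latin stems vs. hypothetical machine reconstructions (invented for illustration)
tests = [("aqua", "akwa"), ("nocte", "noite"), ("pane", "pan")]
hits = sum(edit_distance(latin, guess) <= 1 for latin, guess in tests)
print(f"{hits}/{len(tests)} reconstructions within one edit of the attested Latin form")
```

A real evaluation would compare phoneme sequences rather than spellings, but the principle -- score the guesses against something attested -- is the same.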

But even so, it's a pretty nifty new tool.  Just the idea that we can make some guesses at what language our ancestors spoke six-thousand-odd years ago is stunning, and the fact that someone has written software that reduces the effort to accomplish this is cool enough to set my little Language Nerd Heart fluttering.  It is nice to see reconstructive linguistics using the tools of science, thus bringing together two of my favorite things.  Why, exactly, I find it so exciting to know that *swey may have meant "to whistle" to someone six millennia ago, I'm not sure.  But the fact that we now have a computer program that can check our guesses is pretty damn cool.

**********************************************

Author and biochemist Camilla Pang was diagnosed with autism spectrum disorder at age eight, and spent most of her childhood baffled by the complexities and subtleties of human interactions.  She once asked her mother if there was an instruction manual on being human that she could read to make it easier.

Her mom said no, there was no instruction manual.

So years later, Pang recalled the incident and decided to write one.

The result, Explaining Humans: What Science Can Teach Us About Life, Love, and Relationships, is the best analysis of human behavior from a biological perspective since Desmond Morris's classic The Naked Ape.  If you're like me, you'll read Pang's book with a stunned smile on your face -- as she navigates through common, everyday behaviors we all engage in, but few of us stop to think about.

If you're interested in behavior or biology or simply agree with the Greek maxim "gnothi seauton" ("know yourself"), you need to put this book on your reading list.  It's absolutely outstanding.

[Note:  if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Monday, August 2, 2021

Sponges, bird brains, and ugly plants

There's a story about Socrates, who was asked what he thought about his reputation for being the smartest man in the world.

The great philosopher thought for a moment, and responded, "If I am, it is only because I alone realize how little I know."

I think there's something to this.  Ignorance confers a kind of cockiness sometimes; another great thinker, Bertrand Russell, once said, "The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts."  It's inevitable that learning generates some level of humility, because one is always reminded of how much there is left to learn.

This is probably why I was so damn cocky as a college freshman.  Once I got to be a junior, I realized how foolish that was, as I got an inkling of how much I didn't know.  (Of course, nearly failing Classical Mechanics also had a dampening effect on my ego.  That was the moment I realized I didn't have the brains to be a physicist.)

Whenever I start perusing scientific journals -- a common occupation, as I'm looking for topics for Skeptophilia -- I'm amazed at what we've uncovered about the world we live in, and also how much there is left to learn.  That was one of my main takeaways from three scientific papers I came across last week; that, and a sense of wonder at how cool science is.

The first was a link sent to me by my buddy (and fellow writer) Gil Miller.  A paper in Nature by Elizabeth Turner, paleontologist at the Harquail School of Earth Sciences at Laurentian University, describes a find at a dig site in northwestern Canada that seems to contain fossils of one of the earliest and simplest animal groups -- sponges.

What's mind-boggling about this discovery is that the rocks of the Stone Knife Formation, where the fossils were discovered, are about 890 million years old.  So if confirmed, this would predate the next-oldest undisputed sponge fossils by 350 million years.  This might just get a shoulder shrug, because most people -- myself included, unless I force myself to stop and think about it -- get lost when the numbers get large, so a 350 million year gap falls into the "it's big, but I can't visualize how big" category.  Let me put this number in perspective for you: if you went back 350 million years from today, you'd be in a world where there were no dinosaurs -- the earliest dinosaurs wouldn't appear for another 90 million years or so.

That's how far back Turner's discovery pushes the earliest animals.

If confirmed, this would place the origin of animals prior to the Cryogenian Period (also called the "Snowball Earth") of between 720 and 635 million years ago, one of the most massive worldwide glaciation events known.

The second paper, in Science Advances, is about the evolution of modern dinosaurs -- or, as we usually call them, "birds."  It's striking that the ancestors of today's birds survived a catastrophic bottleneck at the end of the Cretaceous Period 66 million years ago, caused by the double whammy of a massive meteorite collision and a near-simultaneous flood basalt eruption in what is now India.  (Scientists have yet to determine if the two events were connected -- if, perhaps, the collision destabilized the crust and caused the eruption.)

The paper centers on the discovery of a fantastically well-preserved fossil of Ichthyornis, an aquatic bird species of about 70 million years ago.  Picture a  gull with teeth, and you have a pretty good idea of what Ichthyornis looked like.  

Reconstruction of Ichthyornis dispar [Image licensed under the Creative Commons El fosilmaníaco, Ichthyornis restoration, CC BY-SA 4.0]

What is remarkable about this fossil is the preservation of the skull, which gives the researchers a good look at the structure of the brain it once enclosed.  What they found is that whether a bird lineage survived the bottleneck came down largely to one thing -- brain size.  Put simply, when the extinction came, the big dumb species tended to die out, and the small smart species survived.

"Living birds have brains more complex than any known animals except mammals," said study lead investigator Christopher Torres, of the University of Texas and Ohio University.  "This new fossil finally lets us test the idea that those brains played a major role in their survival...  If a feature of the brain affected survivorship, we would expect it to be present in the survivors but absent in the casualties, like Ichthyornis.  That's exactly what we see here."

The third paper, in Nature, is about one of the world's weirdest plants -- Welwitschia mirabilis, of the deserts of Namibia.  The bizarre features of this plant are too many to list, but include:
  • The plant can live thousands of years, but only ever has two leaves.  (The Afrikaans name for the plant, tweeblaarkanniedood, means "two leaves, doesn't die.")  The leaves are strap-like and can eventually grow to four meters in length.  They eventually get shredded by the wind into what looks like a giant pile of seaweed.
  • The root is also about four meters in length, and looks like a giant carrot.
  • Despite its appearance, its closest familiar relatives are conifers, like pines, spruces, and firs.
To me it falls into the "ugly but fascinating" category.

[Image licensed under the Creative Commons Muriel Gottrop, Welwitschia at Ugab River basin, CC BY-SA 3.0]

The current paper is about the Welwitschia genome, which has a number of interesting features.  First, it seems to have originated when there was a spontaneous duplication of the DNA about 85 million years ago that led to its being genetically isolated from its near relatives, after which it continued to develop along its own lines.  Duplication of the genome has an advantage -- providing extra copies of vital genes, so if mutation knocks out a copy, there's still a functional one available -- but it has the disadvantage of overproduction of gene products (too much of a protein can be as bad as not enough; this is why chromosomal duplications, as in Down syndrome, lead to developmental problems).

Welwitschia solved the disadvantage by a process called methylation, which chemically ties up and shuts down genes.  This is done during normal development in many species, where turning genes on and off at the right times is absolutely critical, and it also knocks out genetic parasites called transposons (a transposon is a segment of DNA that is able to copy itself and splice those copies elsewhere in the DNA -- a sort of copy-and-paste function gone haywire).  So Welwitschia ended up with a huge genome, of which a lot -- the researchers found about 55% -- is composed of shut-down transposons and other methylated (i.e. non-functional) sequences.

Also very weird is the balance between the different nitrogenous bases in Welwitschia's DNA.  You probably know that the "alphabet" of DNA is made up of four bases -- adenine, thymine, cytosine, and guanine -- and that they pair together like puzzle pieces, A with T, C with G.  So in normal DNA, there will always be exactly as much A as T and exactly as much C as G.

But the other ratios -- A to C, for example -- vary by species.  Still, the number of A/T pairs and C/G pairs is usually fairly close.  Unsurprisingly, this plant, which is an exception to so many rules, is an exception to this one as well -- only 29% of its DNA is made up of C/G pairs.
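For the curious, measuring that skew from a raw sequence is about a three-line job.  Here's a minimal sketch; the sequence below is made up for illustration and is not Welwitschia data:

```python
def gc_fraction(seq: str) -> float:
    """Fraction of bases that are G or C -- the usual 'GC content' measure."""
    seq = seq.upper()
    return sum(base in "GC" for base in seq) / len(seq) if seq else 0.0

# Made-up, deliberately AT-rich sequence -- not from the Welwitschia genome
sample = "ATATTAGCATTAAATGCATATATTAGCATTA"
print(f"GC content: {gc_fraction(sample):.1%}")   # well below the ~50% you'd get if the four bases were evenly balanced
```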

The upshot: this paper shows that an ugly but fascinating plant is even more interesting than we'd realized.

All of this, published just in the last week.  Which brings me back to Socrates.  I'm not claiming to be anywhere near as smart as he was, but I do share one belief with him.

So much to learn, so little time.

**********************************************



Saturday, July 31, 2021

Fast modules, slow modules, and ghost photographs

Yesterday, a friend of mine sent me a YouTube video link about the frightening paranormal legends from the Superstition Mountains in Arizona.  The video doesn't provide much in the way of evidence but I have to admit it was pretty atmospheric.  Well, one thing led to another, and soon I was looking at photographs of alleged ghosts, and completely creeping myself out.

Just so I can share the experience with you, here are a few that I found especially shiver-inducing.

First, from a security camera in a library in Evansville, Indiana, comes this image of a hunched, shadowy creature creeping across the floor... of the Children's Reading Room:


Or how about this one, an old photograph from the 1940s that shows a screaming ghost reaching out towards an unsuspecting young couple:


Or this shot of a stern man standing behind an elderly woman -- a man who supposedly wasn't there when the photograph was taken:


Or the shadow in the kitchen -- a shadow cast by no object visible in the photograph.  This one immediately reminded me of the episode "Identity Crisis" from Star Trek: The Next Generation -- one of the flat-out scariest episodes they ever did.  If you've seen it, you probably recall the moment Geordi is in the Holodeck, one by one removing the shadows of all of the individuals in the simulation he's standing in -- and ending up with one shadow left over:


So, anyway, there I am, getting more and more weirded out (and still, for some reason, not simply switching to a website with cute pictures of puppies, or something).  And I thought, "Why am I freaking out about all of this?  Not only have I never had a single experience of anything supernatural, I don't even believe in any of this stuff.  I am morally certain that all of these photographs were either deliberate hoaxes, or were camera malfunctions/artifacts, or are examples of pareidolia -- some completely natural explanation must be responsible.  So why am I scared?"

And my mind returned to a book that was a Skeptophilia book-of-the-week a while back, Thinking, Fast and Slow by Daniel Kahneman, the psychologist who won the Nobel Prize in economics in 2002.  Kahneman's specialty is why humans make irrational decisions; his research into how that applies to economic decision-making is why he won the Nobel.  More interesting to me, though, is the facet of his research that shows that human thinking is split into two discrete modules -- a fast module and a slow one.  And those two modules are frequently at odds with one another.

The fast module is what allows us to take quick stock of what's around us.  It is, for example, what allows us to do an immediate assessment of the following photograph:


No "rational thinking" is needed to come to the conclusion that this woman is angry.  On the other hand, the slow module is invoked when doing a math problem, like what is 223 x 1,174?  The vast majority of us could solve that problem, but it would take time and concentration.  (The fact that there are savants who can solve problems like that nearly instantaneously makes me wonder if their brains are somehow wired to do math with the fast module of the brain; merely a speculation, but it's suggestive.)

As an example of how the two modules can be at odds, consider the "Linda Problem."  Participants in a study were told a story about Linda, a single woman, intelligent and outspoken, who was very concerned with issues of social justice.  The participants were then asked which of the following possibilities was more likely: (1) Linda is a bank teller; or (2) Linda is a bank teller and is active in the feminist movement.  By a vast majority, participants chose option 2.  (Did you?)

The problem is, option 2 is wrong.  Not just maybe wrong, it's completely wrong, as in impossible.  How could the likelihood of Linda's being a feminist bank teller exceed the likelihood of her being a bank teller?  All feminist bank tellers are bank tellers; adding an extra detail to the description can never increase the probability -- it can only decrease it or leave it unchanged.  (To make this clearer, how can there be more brown dogs than there are dogs?)  But the fast module's quick assessment of the situation was that from the information given, she was very likely to be a feminist; the likelihood that she was a bank teller was equal in both possibilities; so it jumped to the (incorrect) conclusion that the combined probability was higher.
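If you want the arithmetic spelled out, here's a two-line sanity check.  The numbers are invented purely for illustration; the inequality holds no matter what you plug in:

```python
# Invented probabilities, purely for illustration -- the point is the inequality.
p_teller = 0.05                          # P(Linda is a bank teller)
p_feminist_given_teller = 0.95           # P(feminist | bank teller): even if it's very high...
p_both = p_teller * p_feminist_given_teller   # ...the conjunction can't exceed P(bank teller)

print(f"P(bank teller)              = {p_teller:.4f}")
print(f"P(bank teller AND feminist) = {p_both:.4f}")
assert p_both <= p_teller                # P(A and B) <= P(A), always
```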

So, you can see how the fast module, however useful it is in making the snap judgments that are essential in getting us through the day, is not, at its basis, rational.  It is primed by previous experience, and is inherently biased toward finding the quickest answer possible, even if that answer is completely contrary to rationality.

And that, I think, explains why a diehard skeptic can still be completely weirded out by ghost pictures.  The slow module in my brain thinks, "Okay, pareidolia.  Or the photo was doctored.  No way is this real."  My fast module, on the other hand, is thinking, "Good lord, that's terrifying!  Time to dump a liter or two of adrenaline into my bloodstream!"  And no amount of soothing talk from my slow module seems to make any difference.

Especially the photo with the creeping thing in the library.  That one is freakin' scary.

**************************************

One of the characteristics which is -- as far as we know -- unique to the human species is invention.

Given a problem, we will invent a tool to solve it.  We're not just tool users; lots of animal species, from crows to monkeys, do that.  We're tool innovators.  Not that all of these tools have been unequivocal successes -- the internal combustion engine comes to mind -- but our capacity for invention is still astonishing.

In The Alchemy of Us: How Humans and Matter Transformed One Another, author Ainissa Ramirez takes eight human inventions (clocks, steel rails, copper telegraph wires, photographic film, carbon filaments for light bulbs, hard disks, scientific labware, and silicon chips) and looks not only at how they were invented, but how those inventions changed the world.  (To take one example -- consider how clocks and artificial light changed our sleep and work schedules.)

Ramirez's book is a fascinating lens into how our capacity for innovation has reflected back and altered us in fundamental ways.  We are born inventors, and that ability has changed the world -- and, in the end, changed ourselves along with it.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Friday, July 30, 2021

Working titles

An author friend of mine recently posted a dilemma; she had come up with a killer title for her work-in-progress only to find out that another author had grabbed it first.  What to do?

Well, except for very famous, high-monetary-value stories -- such as the ones owned by the Mouse Who Shall Not Be Named -- few titles are actually trademarked, which means that legally, you can publish a book under a title that's already been used.  In terms of common courtesy, however, the best answer comes from Wile E. Coyote: "Back to the old fiasco hatchery."

Myself, I think titles are critical.  They're one of the first things a potential reader sees (the first is most likely the cover illustration).  I find it intriguing to consider what people choose for titles, especially in cases where the choice is highly unmemorable.  Consider the formulaic approach, used most commonly in spaceship-and-alien science fiction: "The" + "alien-sounding word" + one of the following words: "Maneuver, Gambit, Strategy, Solution, Encounter, Factor, Machine, Incident, Syndrome."  The Sqr'll'nutz Factor.  The Bäbu'shkä Maneuver.  That sort of thing.

This book isn't real, but it definitely should be, because I would read the hell out of it.  (For other amazing examples, visit the page "Fake Book Titles Extravaganza!"  Do not try to drink anything while looking at this website.  You have been warned.)

The problem is, formulaic titles are often so ridiculously uncreative that they will promptly blend in with all of the other Encounters and Gambits and Maneuvers you've read about, and as a writer, that's definitely not the impression you want to create.  Memorable titles are short, pithy, and intriguing.  I tend to like metaphorical titles -- ones which provoke curiosity ("What on earth could that be referring to?") coupled with an "Aha!" moment when you read the story and actually figure it out.

As some examples, here are some of my favorite titles I've run across:
  • All Hallows' Eve (Charles Williams)
  • A Murder is Announced (Agatha Christie)
  • Closet Full of Bones (A. J. Aalto)
  • The Lathe of Heaven (Ursula K. Le Guin)
  • The Eyes of the Amaryllis (Natalie Babbitt)
  • Among the Dolls (William Sleator)
  • The Ocean at the End of the Lane (Neil Gaiman)
  • Everything is Illuminated (Jonathan Safran Foer) - and interestingly, I didn't particularly like this book.  But the title is awesome.
  • Something Wicked This Way Comes (Ray Bradbury)
  • Midnight in the Garden of Good and Evil (John Berendt)
  • Things Fall Apart (Chinua Achebe)
  • Their Eyes Were Watching God (Zora Neale Hurston)
  • The Girl Who Loved Tom Gordon (Stephen King)
  • The Stupidest Angel (Christopher Moore)
  • The Fifth Elephant (Terry Pratchett)
  • The Wolves in the Walls (Neil Gaiman)
And a few that I think don't work so well:
  • "O, Whistle and I'll Come To You, My Lad" (M. R. James) - a brilliant, and terrifying, short story with a title that's way too long and cumbersome.
  • A Wind in the Door (Madeleine L'Engle) - an interesting title, but what the hell is the relevance?  At the end of the story, a door blows shut for no apparent reason, and I presume we're supposed to raise an eyebrow and say, "Ahhhh, now I see"?
  • Dorothy Sayers's novels are kind of a mixed bag.  Busman's Honeymoon is really clever and intriguing, but Unnatural Death is generic and boring (aren't all murder mysteries about unnatural deaths?).  Interestingly, the latter started out as The Dawson Pedigree -- a much better title, in my opinion -- then for some reason she chose to go with the bland.
  • Brandy of the Damned (Colin Wilson) - oh, come on.  I doubt the damned will get brandy, frankly.
  • Postern of Fate (Agatha Christie) - my opinion may be colored by the fact that I think this is far and away the worst book she ever wrote -- rambling, incoherent, with long passages of supposed-to-be-witty repartee, and after reading it I still have no clue why the title is relevant to the plot.
  • The Island of the Sequined Love Nun (Christopher Moore) - okay, I love Moore's novels and I know he was trying to give it a campy title.  Actually it's an awesome book -- but the title is just goofy.
So, anyway, that gives you an idea of what I shoot for, with titles.  Here are a few titles I've come up with that I think work pretty well.  I'll leave it to you to decide if you think they're intriguing or dreadful.
  • The Dead Letter Office
  • Slings & Arrows
  • The Shambles
  • We All Fall Down (novella)
  • Whistling in the Dark
  • Kári the Lucky
  • Descent into Ulthoa
  • "The Pool of Ink" (short story)
  • "The Germ Theory of Disease" (short story)
**************************************



Thursday, July 29, 2021

The cost of personal courage

I have been following, from some distance, the hue-and-cry over Simone Biles's removing herself from competition on the U.S. Olympic gymnastics team.  Biles was completely up-front about why.  "You have to be there 100%," she told reporters.  "If not, you get hurt.  Today has been really stressful.  I was shaking.  I couldn't nap.  I have never felt like this going into a competition, and I tried to go out and have fun.  But once I came out, I was like, 'No.  My mental is not there.'  It's been a long year, and I think we are too stressed out.  We should be out here having fun.  Sometimes that's not the case."

Well, immediately the pundits started weighing in.  Charlie Kirk called her a "selfish sociopath" and bemoaned the fact that "we are raising a generation of weak people like Simone Biles."  Clay Travis suggested she be removed from future competition because she couldn't be relied on.  Piers Morgan was perhaps the worst -- not surprising given his ugly commentary in the past.  "Are 'mental health issues' now the go-to excuse for any poor performance in elite sport?  What a joke...  Sorry Simone Biles, but there's nothing heroic or brave about quitting because you're not having 'fun' – you let down your team-mates, your fans and your country."

And so on.  The criticism came fast and furious.  There were voices who spoke up in support of her decision, but it seemed to me the nastiness was a lot louder.

[Image licensed under the Creative Commons Agência Brasil Fotografias, Simone Biles Rio 2016e, CC BY 2.0]

Or maybe I'm just sensitive.  Other writers have spoken with more authority about the rigors of Olympic training and gymnastics in particular, not only the physical aspects but the mental, topics which I am unqualified to discuss.  But whatever the context, there is one thing I'm dead certain about.

If someone says they're struggling mentally and/or emotionally, you fucking well believe them.

I have fought mental illness all my life.  I've been open about this here before; I have come to realize it is no more shameful than any other chronic condition.  I do know, however, first-hand how debilitating anxiety can be.  I've also suffered from moderate-to-severe depression, fortunately now ameliorated by medications and a family who is understanding and supportive.  So at present, I'm doing okay.

But it hasn't always been that way.  For much of my life, I was in a situation where "suck it up and deal" and "be tough, be a man" and "you should be thankful for what you have" were the consistent messages.  Therapy was for the weak; psychiatric care (and meds) were for people who were crazy.  There's nothing wrong with you, I was told.  You just spend too much time feeling sorry for yourself and worrying about things you can't control.

The result?  Twice I was suicidal, once at age seventeen and once at age twenty, to the point that I had a plan and a method and was ready to go for it.  That I didn't -- fortunately -- is really only due to one thing; I was scared.  I spent a good bit of my first marriage haunted by suicidal ideation, and there the only thing that kept me alive was my commitment to my students, and later, to my children.

But I thought about it.  Every.  Single.  Damn.  Day.

That a bunch of self-appointed arbiters of proper behavior have told this remarkable young woman "No, I don't care how you feel or what you're going through, get back in there and keep performing for us" is somewhere beyond reprehensible.  I don't even have a word strong enough for it.  If you haven't experienced the hell of anxiety, panic attacks, and depression, you have zero right to criticize someone else, especially when she's doing what people in a bad mental space should be doing -- advocating for herself, setting her limits, and admitting when she can't manage to do something.

I wish I had known how to do that when I was twenty-four (Simone Biles's age).  But I was still a good fifteen years from understanding the mental illness I have and seeking out help -- and unashamedly establishing my own personal boundaries.

So to all the critics out there who think they know what Simone Biles should do better than she does -- shut the fuck up.  I presume you wouldn't go up to a person with a serious physical illness and have the temerity to tell them what they can and can't do, and to pass judgment on them if they don't meet your standards.  This is no different.  We have a mental health crisis in this country; skyrocketing incidence of diagnosed mental illnesses and uncounted numbers who go undiagnosed and unaided, and a health care system that is unable (or unwilling) to address these problems effectively.  What Simone Biles did was an act of bravery, and she deserves unequivocal support for it.  The cost of personal courage shouldn't be nasty invective from a bunch of self-appointed authorities who have never set foot on the road she has walked.

And those who can't understand that should at least have the good grace to keep their damn opinions to themselves.

**************************************



Wednesday, July 28, 2021

Bad news from the future

My current work-in-progress (well, work-in-extremely-slow-progress) is a fall-of-civilization novel called In the Midst of Lions that I swear wasn't inspired by the events of the last year and a half.  Set in 2035, it chronicles the struggles of five completely ordinary people to survive in a hellscape created by an all-too-successful rebellion and war -- what one character aptly calls "burning down the house you're locked in," because the resulting chaos is as deadly to the rebels as to the people they're rebelling against.

I suppose it's natural enough to assume the future is gonna be pretty bad.  I mean, look around.  The United States is gearing up for another catastrophic heat wave, we're in the middle of a pandemic, and so much of the western U.S. is on fire that the smoke is making it difficult to breathe here in upstate New York.

I try to stay optimistic, but being an inveterate worrier, it's hard at times.

Albert Goodwin, Apocalypse (1903) [Image is in the Public Domain]

If the current news isn't bad enough, just yesterday I ran into not one but two people who claim to be time travelers from the future who have come back somehow to let us know that we're in for a bad time.

The first, who calls himself Javier, goes by the moniker @UnicoSobreviviente ("only survivor") and posts videos allegedly from the year 2027 on TikTok.  "I just woke up in a hospital and I don’t know what happened," he says.  "Today is February 13, 2027 and I am alone in the city."

How he's posting on TikTok in 2021 if he's stuck in 2027, he never explains.

However, I must admit the videos are a little on the creepy side.  They do appear to show a city devoid of human life.  On the other hand, everything looks like it's in pretty good shape.  One theme I've had to deal with in my own novel is how fast stuff would fall apart/stop working if we were to stop maintaining it -- the answer, in most cases, seems to be "pretty damn fast."  (If you are looking for a somewhat depressing but brilliantly interesting read, check out the book The World Without Us by Alan Weisman, which considers this question in detail.)

So either Javier showed up immediately after the rest of humanity vanished, or else his videos are just an example of a cleverly-edited hoax.

I know which I think is more likely.

The other alleged time traveler goes by the rather uncreative name @FutureTimeTraveler, and also posts on TikTok (apparently this is the preferred mode by which time travelers communicate with the present).  And he says our comeuppance is gonna be a lot sooner than 2027.  He says it will come at the hands of seven-foot-four-inch aliens with "long, distorted skulls" who will land on Earth on May 24, 2022.  They're called Nirons, he says, and come in peace, but humans (whose habit of fucking up alien encounters has been the subject of countless movies and television shows) decide it's an invasion and fire on them.  This initiates a war.

So we've got an alien race who can cross interstellar space fighting a species who thinks it's impressive when a billionaire launches himself for a few minutes aboard what appears to be a giant metal dick.

Guess who wins.

Interestingly, this is not the first case of an alleged time traveler talking about future attacks by Nirons.  Another TikTok user, @ThatOneTimeTraveler, says the Nirons come from Saturn and we're going to get our asses handed to us.

So, corroboration, amirite?  Must be true!

I figure I'm doing my civic duty by letting everyone know that they should get themselves ready for a rough ride.  We've got the Nirons coming next year, then everyone vanishes five years after that, and if that's not bad enough, in 2035 there's a massive rebellion that takes down civilization entirely.  (Yes, I know that (1) it's impossible to have a rebellion if everyone disappeared eight years earlier, and (2) the rebellion itself is part of a novel I made up myself.  Stop asking questions.)

Anyhow, I figure knowing all this will take our minds off the fact that we seem to be doing our level best to destroy ourselves right here in the present.  I'm hoping I at least live long enough to meet the Nirons.  Sounds like they'll probably blast me with their laser guns immediately afterward, but you know how I am about aliens.  If I'm gonna die anyway, that's a fitting end.

**************************************



Tuesday, July 27, 2021

Untruth and consequences

In Dorothy Sayers' novel Gaudy Night, set (and written) in 1930s England, a group of Oxford University dons are the targets of an increasingly vicious series of threats and violence by a deranged individual.  The motive of the perpetrator turns out to be that one of the dons had, years earlier, caught the perpetrator's spouse in academic dishonesty, and the spouse had been dismissed from his position, and ultimately committed suicide.

Near the end of the novel, the main character, Harriet Vane, experiences a great deal of conflict over the resolution of the mystery.  Which individual was really at fault?  Was it the woman who made the threats, a widow whose grief drove her to threaten those she felt were smug, ivory-tower intellectuals who cared nothing for the love and devotion of a wife for her husband?  Or was it the don who had exposed the husband's "crime" -- which was withholding evidence contrary to his thesis in an academic paper?  Is that a sin that's worth the destruction of one life and the ruining of another?

The perpetrator, when found out, snarls at the dons, "... (C)ouldn't you leave my man alone?  He told a lie about somebody who was dead and dust hundreds of years ago.  Nobody was the worse for that.  Was a dirty bit of paper more important than all our lives and happiness?  You broke him and killed him -- all for nothing."  The don whose words led to the man's dismissal, and ultimately his suicide, says, "I knew nothing of (his suicide) until now...  I had no choice in the matter.  I could not foresee the consequences... but even if I had..."  She trails off, making it clear that in her view, her words had to be spoken, that academic integrity was a mandate -- even if that stance left a human being in ruins.

It's not, really, a very happy novel.  One is left feeling at the end that the incident left only losers, no winners.

The central theme of the book -- that words have consequences -- is one that seems to escape a lot of today's political pundits here in the United States.  Or, more accurately, they seem to feel that the fact that words sometimes have unforeseen consequences absolves them of any responsibility for the results.  A particularly egregious example is Fox News's Tucker Carlson, who considers himself blameless in the recent surge of Delta-Variant COVID-19 -- a surge that is virtually entirely amongst the unvaccinated, and significantly higher in the highly conservative Fox-watching states of Missouri, Arkansas, and Louisiana.  Carlson told his viewers on the air that they should accost people wearing masks in public, saying that mask-wearers are "zealots and neurotics" who are "the true aggressors, here."  Anyone seeing a child wearing a mask should "call 911 or Child Protection Services immediately" -- that if you see masked children you are "morally obligated to do something."

Then, as if to drive home his stance that you should be entitled to say anything you want, free of consequence (as long as what you're saying conforms to the Trump-GOP party line, of course), he was outraged when a couple of days ago he was confronted by an angry guy in a fly-fishing store in Montana, who called Carlson "the worst human being in the world" for his anti-vaxx stance.

So, Mr. Carlson, let me get this straight: after telling your viewers they're morally obligated to accost people who disagree with them, you object to the fact that someone accosted you because he disagrees with you?

[Image licensed under the Creative Commons Gage Skidmore from Surprise, AZ, United States of America, Tucker Carlson (50752390162), CC BY-SA 2.0]

Not only does this give new meaning to the words "sanctimonious hypocrite," it also shows a fundamental lack of understanding of what the principle of free speech means.  Yes, you're entitled to say what you want; but you are not entitled to be free of the consequences of those words.  To use the hackneyed example, you can shout "Fire" in a crowded theater, but if there's a stampede and someone gets hurt or killed, you will (rightly) be held responsible.  You can call your boss an idiotic asshole, but if you get fired, no judge in the world will advocate for your reinstatement on the basis of free speech.

You said what you wanted, then got the consequences.  End of story.

So the Trump-GOP members are now trying to figure out how to spin the surge of Delta-Variant COVID-19 amongst the unvaccinated after having played the most serious public health crisis we've seen in fifty years as a political stunt, and efforts to mitigate its spread as the Left trying to destroy fundamental American liberties.  Even Governor Ron DeSantis of Florida, long one of the most vocal anti-mask, anti-vaxx elected officials -- just a few weeks ago his website had for sale merchandise with the slogan "Don't Fauci My Florida" printed on it -- has made an about-face, and is urging people to get vaccinated.

The result?  Conservatives in Florida are furious with DeSantis for "selling out," some even suggesting he had taken bribes from vaccine manufacturers to change his message.  What the fuck did he expect?  He's spent the past year and a half claiming that the pandemic is overblown and any attempt to push vaccines is a conspiracy against freedom by the Democrats.  Did he think that the people who swallowed his lies hook, line, and sinker would simply forget what he'd said, and go, "Oh, okay, I'll run right out and get vaccinated now"?

Another mealy-mouthed too-little, too-late message came from Governor Kay Ivey of Alabama, the state with the overall lowest vaccination rate (39.6%) in the country.  Alarmed by the dramatic upsurge in new cases in her state, she said, "It's time to start blaming the unvaccinated folks."

So, Governor Ivey, let's just go one step backward in the causal chain, shall we?  Why exactly are so many Americans unvaccinated, when the vaccine is available for free whether or not you have health insurance?  Why is it that if you drew up a map of Trump voters, a map of Fox News watchers, and a map of the incidence of new cases of COVID-19, the three maps would show a remarkable similarity?

You can say what you want, but you can't expect to be free of the consequences of what you say.

I'm appalled not just because political hacks like Tucker Carlson have callously used this tragedy to sledgehammer their own views into an increasingly polarized citizenry, nor because re-election-minded governors like Ivey and DeSantis jumped on the anti-vaxx bandwagon because they didn't want to alienate the Trump-worshipers who form a significant proportion of their base.  The most appalling thing is that they have done this, blind to the end results of their words, just like the Oxford don in Gaudy Night whose dedication to the nth degree of academic integrity made her blind to the human cost of her actions.  Words are tools, and these hypocrites have used them with as much thought and responsibility as a five-year-old with a chainsaw.

And now they are expecting us to hold them faultless when the people who trusted them are, literally, dying by the thousands.

I suppose I should be glad that even DeSantis and Ivey are pivoting.  Carlson, of course, hasn't, and probably never will; his motto seems to be "Death Before Admitting Error."  Perhaps a few lives will be saved from a horrible and painful death because some conservative leaders are now changing their tunes.

But honestly; it's far too late.  A study released in February in The Lancet ascribed forty percent of the 610,000 COVID deaths in the United States directly to Trump's policies.  "Instead of galvanizing the U.S. populace to fight the pandemic," the authors state, "President Trump publicly dismissed its threat."

And unless there is a concerted effort to hold accountable the ones who caused this catastrophe -- legally, if possible, or at least at the ballot box -- we are allowing them to get away with saying, "I had no choice in the matter, I could not foresee the consequences" and doing nothing while every public health expert in the world was begging them to take action.

**************************************
