Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, March 16, 2021

Isolation and anxiety

Last September I took a job working half-time, providing companion care for a senior gentleman who lives in a full-care facility about twenty minutes' drive from where I live.  The work was easy -- mostly what he wanted to do was go for long walks -- and it helped replace a little bit of the income I lost when I retired from teaching.  It also got me out of the house, and (in my wife's words) kept me from turning into a complete recluse.

Then in November, I was furloughed because of the pandemic.

I was in the fortunate position that the financial hit of not working wasn't the dire situation it is for many.  The loss of my weekly paycheck didn't mean we would go without food or miss our mortgage payment.  What it did mean -- both for my client and me -- was that since then, we've been pretty well totally isolated.  My client still sees the nursing staff at the facility; and, to be clear about this, they are stupendous, doing their best to see not only to the physical but to the mental and emotional health of their residents.  For me, it's meant that other than occasional quick trips to the grocery store, the only person I see is my wife.

That's been the situation since the first week of November.

I honestly thought it would be easier for me to deal with isolation.  I'm an introvert by nature, and pretty shy and socially awkward at the best of times.  But the last few months have been dismal, with the fact of it being the middle of an upstate New York winter not helping matters.  I've been fighting bouts of depression and anxiety -- something I've dealt with all my life, but lately it's seemed a lot worse than my usual baseline.

A couple of weeks ago, I was contacted by the director of the facility.  Because I'd been vaccinated against COVID, and the residents were receiving their vaccines as well, the facility was reopening to non-essential visits, and my client was eager to resume our daily time together.  Yesterday was my first day back at work after being stuck at home, pretty much continuously, for four months.

This is where things get weird.  Because instead of relieving my anxiety, the news made it spike higher.  I'm talking nearly panic-attack levels.

In case this isn't clear enough, there is nothing rational about this reaction.  My client has some developmental disabilities, and frequently needs a lot of help and encouragement, but he's kind, funny, and a pleasure to be with.  The job itself is the opposite of stressful; the worst part of it is having to keep track of the paperwork required by the agency and the state.  Being stuck home made my anxiety worse; if anything made sense about this, you'd think being given the green light to work again would assuage it.

Edvard Munch, Anxiety (1894) [Image is in the Public Domain]

Apparently, though, I'm not alone in this rather counterintuitive reaction.  A paper last week in the journal Brain Sciences found that the social isolation a lot of us have experienced over the past year has caused a measurable spike in the levels of a stress-associated hormone called cortisol.  Cortisol is a multi-purpose chemical; it has a role in carbohydrate metabolism, behavior, resilience to emotional stressors, and the control of inflammation (cortisone, used for treating arthritis and joint injury, and topically for relieving skin irritations, is a close chemical cousin of cortisol).  This last function is thought to be why long-term stress has a role in many inflammatory diseases, such as ulcers, acid reflux, and atherosclerosis; just as overconsumption of sugar can lead to the body losing its sensitivity to the hormone that regulates blood sugar (insulin), continuous stress seems to lower our sensitivity to cortisol, leading to increased inflammation.

Apropos of its role in emotional stress, the authors write:

There are important individual differences in adaptation and reactivity to stressful challenges.  Being subjected to strict social confinement is a distressful psychological experience leading to reduced emotional well-being, but it is not known how it can affect the cognitive and empathic tendencies of different individuals.  Cortisol, a key glucocorticoid in humans, is a strong modulator of brain function, behavior, and cognition, and the diurnal cortisol rhythm has been postulated to interact with environmental stressors to predict stress adaptation.  The present study investigates in 45 young adults (21.09 years old, SD = 6.42) whether pre-pandemic diurnal cortisol indices, overall diurnal cortisol secretion (AUCg) and cortisol awakening response (CAR) can predict individuals’ differential susceptibility to the impact of strict social confinement during the Coronavirus Disease 2019 (COVID-19) pandemic on working memory, empathy, and perceived stress.  We observed that, following long-term home confinement, there was an increase in subjects’ perceived stress and cognitive empathy scores, as well as an improvement in visuospatial working memory.  Moreover, during confinement, resilient coping moderated the relationship between perceived stress scores and pre-pandemic AUCg and CAR.
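(A quick aside for the data-minded: the two "diurnal cortisol indices" in that abstract, AUCg and CAR, are just summary numbers computed from a handful of saliva samples taken over the course of a day -- total output, and the rise right after waking.  Here's a minimal sketch of the standard calculations; the sample times and cortisol values below are invented for illustration, and the study's actual sampling protocol may well differ.)

```python
# Toy illustration of the two diurnal cortisol indices mentioned in the abstract.
# Sample values (nmol/L) and times are invented for the example; the actual
# study protocol may use different sampling points.

def auc_ground(times_h, levels):
    """Area under the curve with respect to ground (AUCg), via the trapezoidal rule."""
    total = 0.0
    for i in range(len(levels) - 1):
        dt = times_h[i + 1] - times_h[i]
        total += (levels[i] + levels[i + 1]) / 2 * dt
    return total

# Hypothetical diurnal profile: waking, +30 min, noon, evening.
times = [7.0, 7.5, 12.0, 21.0]        # hours after midnight
cortisol = [12.0, 18.0, 7.0, 3.0]     # nmol/L

aucg = auc_ground(times, cortisol)
car = cortisol[1] - cortisol[0]       # a simple awakening-response measure:
                                      # the rise from waking to +30 minutes

print(f"AUCg ≈ {aucg:.1f} nmol·h/L, CAR ≈ {car:.1f} nmol/L")
```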

I thought it was pretty interesting that heightened cortisol has the effect of improving visuospatial working memory, but it makes sense if you think about it.  When a person is in a stressful situation, there's a benefit to being on guard, to keeping constant tabs on what's around you.  The downside, of course, is that such perpetual wariness is downright exhausting.

The last bit is also fascinating, if hardly surprising.  People who were capable of resilient coping with stress beforehand were less affected by the new emotional impact of being isolated; people like myself who were already struggling fared more poorly.  And interestingly, this was a pronounced enough response that it had a measurable effect on the levels of stress hormones in the blood.

This may explain my odd reaction to being taken off furlough.  Cortisol can be thought of as a sort of "adrenaline for the long haul."  Adrenaline allows a fight-or-flight reaction in sudden emergencies, and has a rapid effect and equally rapid decline once the emergency is over.  Cortisol handles our response to long-duration stress -- and its effects are much slower to go away once the situation improves.  For people like myself who suffer from anxiety, it's like our brains still can't quite believe that we're no longer teetering over the edge of the cliff.  Even though things have improved, we still feel like we're one step from total ruin, and the added stressor of jumping back into a work situation when we've been safe at home for months certainly doesn't help.

In any case, yesterday's work day went fine.  As they always do.  I'm hoping that after a couple of weeks, my errant brain will finally begin to calm down once it realizes it doesn't have to keep me ramped up to red alert constantly.  It helps knowing I'm not alone in this reaction, and that there's a biochemical basis for it; that I'm not just making this up (something I was accused of pretty much every time I had an anxiety attack when I was a kid).

But it would also be nice if my brain would just think for a change.

***************************************

I've always been in awe of cryptographers.  I love puzzles, but code decipherment has seemed to me to be a little like magic.  I've read about such feats as the breaking of the "Enigma" code during World War II by a team led by British computer scientist Alan Turing, and the stunning decipherment of Linear B -- a writing system for which (at first) we knew neither the sound-to-symbol correspondence nor even the language it represented -- by Alice Kober and Michael Ventris.

My reaction each time has been, "I am not nearly smart enough to figure something like this out."

Possibly because it's so unfathomable to me, I've been fascinated with tales of codebreaking ever since I can remember.  This is why I was thrilled to read Simon Singh's The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography, which describes some of the most amazing examples of people's attempts to design codes that were uncrackable -- and the ones who were able to crack them.

If you're at all interested in the science of covert communications, or just like to read about fascinating achievements by incredibly talented people, you definitely need to read The Code Book.  Even after I finished it, I still know I'm not smart enough to decipher complex codes, but it sure is fun to read about how others have accomplished it.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Monday, March 15, 2021

In your right mind

There's a peculiarity of the human brain called lateralization -- the tendency to have a dominant side.  It's most clearly reflected in hand dominance; because of the brain's cross-wiring, people who are right-handed tend to be left brain dominant, and vice versa.  (There's more to it than that, as some people who are right-handed are, for example, left eye dominant, but handedness is the most familiar manifestation of brain lateralization.)

It bears mention at this juncture that the common folk wisdom that brain lateralization has an influence on your personality -- that, for instance, left brain dominant people are sequential, mathematical, and logical, and right brain dominant people are creative, artistic, and holistic -- is complete nonsense.  That myth has been around for a long while, and has been roundly debunked, but still persists for some reason.

I was first introduced to the concept of brain dominance in eighth grade.  I was having some difficulty reading, and my English teacher, Mrs. Gates, told me she thought I was mixed-brain dominant -- that I didn't have a strongly lateralized brain -- and that this often leads to processing disorders like dyslexia.  (She was right, but they still don't know why that connection exists.)  It made sense.  When I was in kindergarten, I switched back and forth between writing with my right and left hand about five times, until my teacher got fed up and told me to simmer down and pick one.  I picked my right hand, and have stuck with it ever since, but I still have a lot of lefty characteristics.  I tend to pick up a drinking glass with my left hand, and I'm strongly left eye dominant, for example.

Anyhow, Mrs. Gates identified my mixed-brain dominance and its effect on my reading, but she also told me that there was one thing mixed-brain people can learn faster than anyone else.  Because of our nearly equal control from both sides of the brain, we can do a cool thing, which Mrs. Gates taught me and which I learned in fifteen seconds flat: I can write in cursive, forward with my right hand, while writing the same thing backwards with my left.  (Because it's me, both are pretty illegible, but it's still kind of a fun party trick.)


[Image licensed under the Creative Commons Evan-Amos, Human-Hands-Front-Back, CC BY-SA 3.0]

Fast forward to today.  It's been known for years that lots of animals are lateralized, so it stands to reason that it must confer some kind of evolutionary advantage, but what that might be was unclear until recently.

Research by a team led by Onur Güntürkün, of the Institute of Cognitive Neuroscience at Ruhr-University Bochum, in Germany, has looked at lateralization in animals from cockatoos to zebra fish to humans, and has described the possible evolutionary rationale for having a dominant side of the brain.

"What you do with your hands is a miracle of biological evolution," Güntürkün says. " We are the master of our hands, and by funneling this training to one hemisphere of our brains, we can become more proficient at that kind of dexterity.  Natural selection likely provided an advantage that resulted in a proportion of the population -- about 10% -- favoring the opposite hand.  The thing that connects the two is parallel processing, which enables us to do two things that use different parts of the brain at the same time."

Additionally, Güntürkün says, our perceptual systems have also evolved that kind of division of labor.  Both left and right brain have visual recognition centers, but in humans the one on the right side is more devoted to image recognition, and the one on the left to word and symbol recognition.  And this is apparently a very old evolutionary innovation, long predating our use of language; even pigeons have a split perceptual function between the two sides of the brain (and therefore between their eyes).  They tend to tilt their heads so their left eye is scanning the ground for food while their right one scans the sky for predators.

So what might seem to be a bad idea -- ceding more control to one side of the brain than the other, making one hand more nimble than the other -- turns out to have a distinct advantage.  And if you'll indulge me in a little bit of linguistics geekery, even our word "dexterous" reflects this phenomenon.  "Dexter" is Latin for "right," and the word's positive sense comes from the prevalence of right-handers, who were considered the more skillful.  (And when you find out that the Latin word for "left" is "sinister," you get a rather unfortunate lens into attitudes toward southpaws.)

Anyhow, there you have it; another interesting feature of our brain physiology explained, and one that has a lot of potential for increasing our understanding of neural development.  "Studying asymmetry can provide the most basic blueprints for how the brain is organized," Güntürkün says.  "It gives us an unprecedented window into the wiring of the early, developing brain that ultimately determines the fate of the adult brain.  Because asymmetry is not limited to human brains, a number of animal models have emerged that can help unravel both the genetic and epigenetic foundations for the phenomenon of lateralization."

***************************************




Saturday, March 13, 2021

The eyes have it

A friend of mine has characterized the teaching of science in elementary school, middle school, high school, and college as follows:

  1. Elementary school: Here's how it works!  There are a couple of simple rules.
  2. Middle school: Okay, it's not quite that simple.  Here are a few exceptions to the simple rules.
  3. High school: Those exceptions aren't actually exceptions, it's just that there are a bunch more rules.
  4. College: Here are papers written studying each of those "rules," and it turns out some are probably wrong, and analysis of the others has raised dozens of other questions.

This is pretty close to spot-on.  The universe is a complicated place, and it's inevitable that to introduce children to science you have to simplify it considerably.  A seventh grader could probably understand and apply F = ma, but you wouldn't get very far if you started out with the equations of quantum electrodynamics.

But there are good ways to do this and bad ways.  Simplifying concepts and omitting messy complications is one thing; telling students something that is out-and-out false because it's familiar and sounds reasonable is quite another.  And there is no example of this that pisses me off more than the intro-to-genetics standard that brown eye color in humans is a Mendelian dominant allele, and the blue-eyed allele is recessive.

How many of you had your first introduction to Mendel's laws from a diagram like this one?
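(The diagram won't reproduce here, but it's presumably the familiar single-locus Punnett square -- brown allele B dominant, blue allele b recessive.  For reference, here's a minimal sketch of exactly what that model predicts for a Bb × Bb cross:)

```python
# The classic (and, as argued below, misleading) single-locus model for eye color:
# a Bb x Bb cross, where B = "brown" is dominant and b = "blue" is recessive.
# This just enumerates the Punnett square the usual middle-school diagram shows.

from itertools import product
from collections import Counter

parent1 = ["B", "b"]
parent2 = ["B", "b"]

offspring = Counter(
    "".join(sorted(alleles)) for alleles in product(parent1, parent2)
)

for genotype, count in sorted(offspring.items()):
    phenotype = "brown" if "B" in genotype else "blue"
    print(f"{genotype}: {count}/4 of offspring, phenotype {phenotype}")
# Output: BB 1/4 and Bb 2/4 (brown), bb 1/4 (blue) -- the familiar 3:1 ratio.
```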


This is one of those ideas that isn't so much an oversimplification as it is ridiculously wrong.  Any reasonably intelligent seventh-grader would see this and immediately realize that not only do different people's brown and blue eyes vary in hue and darkness, but there are also hazel eyes, green eyes, gray eyes, and various combos -- hazel eyes with green flecks, for example.  Then there's heterochromia -- far more common in dogs than in humans -- where the iris of one eye is a dramatically different color from the other.

[Image licensed under the Creative Commons AWeith, Sled dog on Svalbard with heterochromia, CC BY-SA 4.0]

When I taught genetics, I found that the first thing I usually had to get my students to do was to unlearn the things they'd been taught wrong, with eye color inheritance at the top of the list.  (Others were that right-handedness is dominant -- in fact, we have no idea how handedness is inherited; that red hair is caused by a recessive allele; and that dark skin color is dominant.)  In fact, even some traits that sorta-kinda-almost follow a Mendelian pattern, such as hitchhiker's thumb, cleft chin, and attached earlobes, aren't as simple as they might seem.

But there's nowhere that the typical middle-school approach to genetics misses the mark quite as badly as it does with eye color.  While it's clearly genetic in origin -- most physical traits are -- the actual mechanism should rightly be put in that unfortunate catch-all stuffed away in the science attic:

"Complex and poorly understood."

The good news, though, and what prompted me to write this, is a paper this week in Science Advances that might at least deal with some of the "poorly understood" part.  A broad-ranging study of people from across Europe and Asia found that eye color in the people studied was associated with no fewer than sixty-one distinct genomic regions.  Each of these appears to influence some part of pigment production or deposition -- or the structure of the iris itself -- and variation at these loci from person to person and population to population is why the variation in eye appearance seems virtually infinite.

The authors write:

Human eye color is highly heritable, but its genetic architecture is not yet fully understood.   We report the results of the largest genome-wide association study for eye color to date, involving up to 192,986 European participants from 10 populations.  We identify 124 independent associations arising from 61 discrete genomic regions, including 50 previously unidentified.  We find evidence for genes involved in melanin pigmentation, but we also find associations with genes involved in iris morphology and structure.  Further analyses in 1636 Asian participants from two populations suggest that iris pigmentation variation in Asians is genetically similar to Europeans, albeit with smaller effect sizes.  Our findings collectively explain 53.2% (95% confidence interval, 45.4 to 61.0%) of eye color variation using common single-nucleotide polymorphisms.  Overall, our study outcomes demonstrate that the genetic complexity of human eye color considerably exceeds previous knowledge and expectations, highlighting eye color as a genetically highly complex human trait.
And note that even this analysis only explained a little more than half of the observed variation in human eye color.
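To get an intuitive feel for why dozens of loci produce a smooth continuum of eye colors rather than two tidy categories, here's a toy additive model.  The number of loci comes from the paper, but the effect sizes and allele frequencies below are invented for illustration -- they are not the study's estimates, and real eye-color genetics isn't purely additive:

```python
# A toy polygenic model of why a many-locus trait looks continuous rather than
# Mendelian. Effect sizes and allele frequencies here are invented for
# illustration -- they are NOT the estimates from the Science Advances paper.

import random

random.seed(1)

N_LOCI = 61            # number of contributing regions, per the paper's count
effects = [random.uniform(0.0, 1.0) for _ in range(N_LOCI)]   # hypothetical per-allele effects
freqs = [random.uniform(0.1, 0.9) for _ in range(N_LOCI)]     # hypothetical allele frequencies

def pigmentation_score():
    """Sum of effect sizes over the 0, 1, or 2 copies carried at each locus."""
    return sum(
        effect * sum(random.random() < f for _ in range(2))
        for effect, f in zip(effects, freqs)
    )

scores = [pigmentation_score() for _ in range(10)]
print([round(s, 1) for s in scores])
# Ten simulated individuals spread across a range of "pigmentation scores" --
# nothing like the two discrete classes a single dominant/recessive locus gives.
```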

Like I said, it's not that middle-school teachers should start their students off with a paper from Science Advances.  I usually began with a few easily observable traits from the sorta-kinda-Mendelian list, like tongue rolling and hitchhiker's thumb.  These aren't quite as simple as they're usually portrayed, but at least calling them Mendelian isn't so ridiculously wrong that when students finally learn the correct model -- most often in college -- they could accuse their teachers of lying outright.

Eye color, though.  That one isn't even Mendelian on a superficial level.  Teaching it that way is a little akin to teaching elementary students that 2+2=5 and figuring that's close enough for now and can be refined later.  So to teachers who still use brown vs. blue eye color as their canonical example of a dominant and recessive allele:

Please find a different one.

****************************************

Last week's Skeptophilia book-of-the-week was about the ethical issues raised by gene modification; this week's is about the person who made CRISPR technology possible -- Nobel laureate Jennifer Doudna.

In The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race, author Walter Isaacson describes the discovery of how the bacterial enzyme complex called CRISPR-Cas9 can be used to edit genes of other species with pinpoint precision.  Doudna herself has been fascinated with scientific inquiry in general, and genetics in particular, since her father gave her a copy of The Double Helix and she was caught up in what Richard Feynman called "the joy of finding things out."  The story of how she and fellow laureate Emmanuelle Charpentier developed the technique that promises to revolutionize our ability to treat genetic disorders is a fascinating exploration of the drive to understand -- and a cautionary note about the responsibility of scientists to do their utmost to make certain their research is used ethically and responsibly.

If you like biographies, are interested in genetics, or both, check out The Code Breaker, and find out how far we've come into the science-fiction world of curing genetic disease, altering DNA, and creating "designer children," and keep in mind that whatever happens, this is only the beginning.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Friday, March 12, 2021

Worlds without end

Earlier this week, I dealt with the rather unsettling idea that when AI software capabilities improve just a little more, we may be able to simulate someone so effectively that their interactions with us will be nearly identical to the real thing.  At that point, we may have to redefine what death means -- if someone's physical body has died, but their personality lives on, emulated within a computer, are they really gone?

Well, according to a couple of recent papers, the rabbit hole may go a hell of a lot deeper than that.

Let's start with Russian self-styled "transhumanist" Alexey Turchin.  Turchin has suggested that in order to build a convincing simulated reality, we need not only much more sophisticated hardware and software but also a far larger energy source to run it than is now available.  Emulating one person semi-convincingly, with an obviously fake animated avatar, doesn't take much; as we saw in my earlier post, we can more or less already do that.

But to emulate millions of people, so well that they really are indistinguishable from the people they're copied from, is a great deal harder.  Turchin proposes that one way to harvest that kind of energy is to create a "Dyson sphere" around the Sun, effectively capturing all of that valuable light and heat that otherwise is simply radiated into space.
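To get a rough sense of why you'd even contemplate something as outlandish as a Dyson sphere, it helps to compare the Sun's total output with humanity's current energy use.  The numbers below are back-of-the-envelope approximations, not anything from Turchin's paper:

```python
# Rough numbers on why a Dyson sphere comes up at all: how the Sun's total
# output compares with humanity's current rate of energy use. Both figures
# are approximate, order-of-magnitude values.

SOLAR_LUMINOSITY_W = 3.8e26   # total power radiated by the Sun, in watts
WORLD_POWER_W = 2e13          # rough average rate of human primary energy use, in watts

ratio = SOLAR_LUMINOSITY_W / WORLD_POWER_W
print(f"A full Dyson sphere would capture roughly {ratio:.0e} times "
      f"humanity's current power consumption.")
# On the order of ten trillion times -- the kind of scale-up Turchin is gesturing at.
```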

Now, I must say that the whole Dyson sphere idea isn't what grabbed me about Turchin's paper, as wonderful as the concept is in science fiction (Star Trek aficionados will no doubt recall the TNG episode "Relics," in which the Enterprise almost got trapped inside one permanently).  The engineering challenges of building a stable Dyson sphere seem to me nearly insurmountable.  What raised my eyebrows was his claim that once we've achieved a sufficient level of software and hardware sophistication -- wherever we get the energy to run it -- the beings (can you call them that?) within the simulation would proceed to interact with each other as if it were a real world.

And might not even know they were within a simulation.

"If a copy is sufficiently similar to its original to the extent that we are unable to distinguish one from the other," Turchin asks, "is the copy equal to the original?"

If that's not bad enough, there's the even more unsettling idea that not only is it possible we could eventually emulate ourselves within a computer, it's possible that it's already been done.

And we're it.

Work by Nick Bostrom (of the University of Oxford) and David Kipping (of Columbia University) has looked at the question from a statistical standpoint.  Way back in 2003, Bostrom considered the issue a trilemma.  There are three possibilities, he says:
  • Intelligent species always go extinct before they become technologically capable of creating simulated realities that sophisticated.
  • Intelligent species don't necessarily go extinct, but even when they reach the state where they'd be technologically capable of it, none of them become interested in simulating realities.
  • Intelligent species eventually become able to simulate reality, and go ahead and do it.
Kipping recently extended Bostrom's analysis using Bayesian statistical techniques.  The details of the mathematics are a bit beyond my ken, but the gist of it is to consider what it would be like if choice #3 has even a small possibility of being true.  Let's say some intelligent civilizations eventually become capable of creating simulations of reality.  Within that reality, the denizens themselves evolve -- we're talking about AI that is capable of learning, here -- and some of them eventually become capable of simulating their reality with a reality-within-a-reality.

Kipping calls such a universe "multiparous" -- meaning "giving birth to many."  Because as soon as this ball gets rolling, it will inevitably give rise to a nearly infinite number of nested universes.  Some of them will fall apart, or their sentient species will go extinct, just as (on a far simpler level) your character in a computer game can die and disappear from the "world" it lives in.  But as long as some of them survive, the recursive process continues indefinitely, generating an unlimited number of matryoshka-doll universes, one inside the other.

[Image licensed under the Creative Commons Stephen Edmonds from Melbourne, Australia, Matryoshka dolls (3671820040) (2), CC BY-SA 2.0]

Then Kipping asks the question that blows my mind: if this is true, then what is the chance of our being in the one and only "base" (i.e. original) universe, as opposed to one of the uncounted trillions of copies?

Very close to zero.
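The intuition behind that "very close to zero" is really just counting.  Here's a toy version of the argument -- the branching factor and depth are arbitrary, and this is emphatically not Kipping's actual Bayesian calculation, just an illustration of how fast nested copies swamp the one base reality:

```python
# The counting intuition behind "very close to zero": if simulated universes can
# themselves spawn simulations, base realities get swamped. A toy recursion --
# the branching factor and depth are arbitrary, not Kipping's actual model.

def count_universes(depth, sims_per_universe):
    """Total universes in the tree: this universe plus all of its nested simulations."""
    if depth == 0:
        return 1
    return 1 + sims_per_universe * count_universes(depth - 1, sims_per_universe)

total = count_universes(depth=6, sims_per_universe=10)
print(f"Universes in the tree: {total:,}")
print(f"Chance of being the one base reality: {1 / total:.2e}")
# Even with modest branching, the odds of being in the original universe
# collapse toward zero.
```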

"If humans create a simulation with conscious beings inside it, such an event would change the chances that we previously assigned to the physical hypothesis," Kipping said.  "You can just exclude that [hypothesis] right off the bat.  Then you are only left with the simulation hypothesis.  The day we invent that technology, it flips the odds from a little bit better than 50–50 that we are real to almost certainly we are not real, according to these calculations.  It’d be a very strange celebration of our genius that day."

The whole thing reminded me of a conversation near the end of my novel Sephirot, between the main character, Duncan Kyle, and the fascinating and enigmatic Sphinx:
"How much of what I experienced was real?" Duncan asked.

"This point really bothers you, doesn't it?"

"Of course. It's kind of critical, you know?"

"Why?" Her basso profundo voice dropped even lower, making his innards vibrate.  "Everyone else goes about their lives without worrying much about it."

"Even so, I'd like to know."

She considered for a moment.  "I could answer you, but I think you're asking the wrong question."

"What question should I be asking?"

"Well, if you're wondering whether what you're seeing is real or not, the first thing to establish is whether or not you are real.  Because if you're not real, then it rather makes everyone else's reality status a moot point, don't you think?"

He opened his mouth, stared at her for a moment, and then closed it again.

"Surely you have some kind of clever response meant to dismiss what I have said entirely," she said.  "You can't come this far, meeting me again after such a long journey, only to find out you've run out of words."

"I'm not sure what to say."

The Sphinx gave a snort, and a shower of rock dust floated down onto his head and shoulders.  "Well, say something.  I mean, I'm not going anywhere, but at some point you'll undoubtedly want to."

"Okay, let's start with this.  How can I not be real?  That question doesn't even make sense.  If I'm not real, then who is asking the question?"

"And you say you're not a philosopher," the Sphinx said, her voice shuddering a little with a deep laugh.

"No, but really.  Answer my question."

"I cannot answer it, because you don't really know what you're asking.  You looked into the mirrors of Da'at, and saw reflections of yourself, over and over, finally vanishing into the glass, yes?  Millions of Duncan Kyles, all looking this way and that, each one complete and whole and wearing the charming befuddled expression you excel at."

"Yes."

"Had you asked one of those reflections, 'Which is the real Duncan Kyle, and which the copies?' what do you think he would have said?"

"I see what you're saying.  But still… all of the reflections, even if they'd insisted that they were the real one, they'd have been wrong.  I'm the original, they're the copies."

"You're so sure?... A man who cannot prove that he isn't a reflection of a reflection, who doesn't know whether he is flesh and blood or a character in someone else's tale, sets himself up to determine what is real."  She chuckled.  "That's rich."
So yeah.  When I wrote that, I wasn't ready for it to be turned on me personally.

Anyhow, that's our unsettling science/philosophy for this morning.  Right now it's probably better to go along with Duncan's attitude of "I sure feel real to me," and get on with life.  But if perchance I am in a simulation, I'd like to appeal to whoever's running it to let me sleep better at night.

And allow me to add that the analysis by Bostrom and Kipping is not helping much.

****************************************




Thursday, March 11, 2021

The monster in the mist

I thought that after writing this blog for ten years, I'd have run into every cryptid out there.  But just yesterday a loyal reader of Skeptophilia sent me a link about one I'd never heard of, which is especially interesting given that the thing supposedly lives in Scotland.

I've had something of a fascination with Scotland and all things Scottish for a long time, partly because of the fact that my dad's family is half Scottish (he used to describe his kin as "French enough to like to drink and Scottish enough not to know when to stop").  My grandma, whose Hamilton and Lyell ancestry came from near Glasgow, knew lots of cheerful Scottish stories and folk songs, 95% of which were about a guy named Johnny who was smitten with a girl named Jenny, but she spurned him, so he had no choice but to stab her to death with his wee pen-knife.

Big believers in happy endings, the Scots.

Anyhow, none of my grandma's stories were about the "Am Fear Liath Mòr," which roughly translates to "Big Gray Dude," who supposedly lopes about in the Cairngorms, the massive mountain range in the eastern Highlands.  He is described as extremely tall and covered with gray hair, and his presence is said to "create uneasy feelings."  Which seems to me to be putting it mildly.  If I was hiking through some lonely, rock-strewn mountains and came upon a huge hair-covered proto-hominid, my uneasy feelings would include pissing my pants and then having a stroke.  But maybe the Scots are tougher-spirited than that, and upon seeing the Am Fear Liath Mòr simply report feeling a little unsettled about the whole thing.

A couple of Scottish hikers being made to feel uneasy

The Big Gray Dude has been seen by a number of people, most notably the famous mountain climber J. Norman Collie, who in 1925 reported the following encounter on the summit of Ben MacDhui, the highest peak in the Cairngorms:
I was returning from the cairn on the summit in the mist when I began to think I heard something else than merely the noise of my own footsteps.  For every few steps I took I heard a crunch, and then another crunch as if someone was walking after me but taking steps three or four times the length of my own.  I said to myself, this is all nonsense.  I listened and heard it again, but could see nothing in the mist.  As I walked on and the eerie crunch, crunch, sounded behind me, I was seized with terror and took to my heels, staggering blindly among the boulders for four or five miles nearly down to Rothiemurchus Forest.  Whatever you make of it I do not know, but there is something very queer about the top of Ben MacDhui and I will not go back there myself I know.
Collie's not the only one who's had an encounter.  Mountain climber Alexander Tewnion says he was on the Coire Etchachan path on Ben MacDhui, and the thing actually "loomed up out of the mist and then charged."  Tewnion fired his revolver at it, but whether he hit it or not he couldn't say.  In any case, it didn't harm him, although it did give him a serious scare.

Periodic sightings still occur today, mostly from hikers who catch a glimpse of it or find large footprints that don't seem human.  Many report feelings of "morbidity, menace, and depression" when the Am Fear Liath Mòr is nearby -- one reports suddenly being "overwhelmed by either a feeling of utter panic or a downward turning of my thoughts which made me incredibly depressed."  Scariest of all, one person driving through the Cairngorms toward Aberdeen said that the creature chased their car, keeping up with it on the twisty roads until finally they hit a straight bit and were able to speed up sufficiently to lose it.  After it gave up the chase, they said, "it stood there in the middle of the road watching us as we drove away."

So that's our cryptozoological inquiry for today.  I've been to Scotland once, but never made it out of Edinburgh -- I hope to go back and visit the ancestral turf some day.  When I do, I'll be sure to get up into the Cairngorms and see if I can catch a glimpse of the Big Gray Dude.  I'll report back on how uneasy I feel afterwards.

****************************************




Wednesday, March 10, 2021

Shooting the bull

There's a folk truism that goes, "Don't try to bullshit a bullshitter."

The implication is that people who exaggerate and/or lie routinely, either to get away with things or to create an overblown image of themselves, know the technique so well that they can always spot it in others.  This makes bullshitting a doubly attractive game; not only does it make you slick, impressing the gullible and allowing you to avoid responsibility, but it also makes you savvy and less likely to be suckered yourself.

Well, a study published this week in The British Journal of Social Psychology, conducted by Shane Littrell, Evan Risko, and Jonathan Fugelsang, has shown that like many folk truisms, this isn't true at all.

In fact, the research supports the opposite conclusion.  At least one variety of regular bullshitting leads to more likelihood of falling for bullshit from others.

[Image licensed under the Creative Commons Inkscape by Anynobody, composing work: Mabdul ., Bullshit, CC BY-SA 3.0]

The researchers identified two main kinds of bullshitting, persuasive and evasive.  Persuasive bullshitters exaggerate or embellish their own accomplishments to impress others or fit in with their social group; evasive ones dance around the truth to avoid damaging their own reputations or the reputations of their friends.

Because of the positive shine bullshitting has for many people, the researchers figured that most people who engage in either type wouldn't be shy about admitting it, so they used self-reporting to assess the bullshit levels and styles of the eight hundred participants.  They then gave each participant formal measures of cognitive ability, metacognitive insight, intellectual overconfidence, and reflective thinking, followed by a series of pseudo-profound and pseudoscientific statements mixed in with genuinely profound and truthful ones, to see whether they could tell them apart.

The surprising result was that the people who were self-reported persuasive bullshitters were significantly worse at detecting pseudo-profundity than the habitually honest; the evasive bullshitters were better than average.

"We found that the more frequently someone engages in persuasive bullshitting, the more likely they are to be duped by various types of misleading information regardless of their cognitive ability, engagement in reflective thinking, or metacognitive skills," said study lead author Shane Littrell, of the University of Waterloo.  "Persuasive BSers seem to mistake superficial profoundness for actual profoundness.  So, if something simply sounds profound, truthful, or accurate to them that means it really is.  But evasive bullshitters were much better at making this distinction."

Which supports a contention I've had for years: if you lie for long enough, you eventually lose touch with what the truth is.  The interesting difference between persuasive and evasive bullshitting in this respect might arise because evasive bullshitters engage in the behavior out of a heightened sensitivity to people's opinions, both of themselves and of others.  This would have the effect of making them more aware of what others are saying and doing, and better at sussing out what people's real motives are -- and whether they're being truthful or not.  But persuasive bullshitters are so self-focused that they aren't paying much attention to what others say, so any subtleties that might clue them in to the fact that they're being bullshitted slip right by.

I don't know whether this is encouraging or not.  I'm not sure the fact that it's easier to lie successfully to a liar is something for those of us who care about the truth to celebrate.  But it does illustrate that our common sense about our own behavior sometimes isn't very accurate.  As usual, approaching questions from a skeptical, scientific angle is the best strategy.

After all, no form of bullshit can withstand that.

****************************************




Tuesday, March 9, 2021

Memento mori

A man is discussing his fears about dying with his parish priest.

"Father," he says, "I'd be able to relax a little if I knew more about what heaven's like.  I mean, I love baseball... do you think there's baseball in heaven?"

The priest says, "Let me pray on the matter, my son."

So at their next meeting, the priest says, "I have good news and bad news...  The good news is, there is baseball in heaven."

The man gave him a relieved smile.  "So, what's the bad news?"

"You're playing shortstop on Friday."

*rimshot*

The vast majority of us aren't in any particular rush to die, and would go to significant lengths to postpone the event.  Even people who believe in a pleasant afterlife -- with or without baseball -- are usually just fine waiting as long as possible to get there.

And beyond our own fears about dying, there's the pain of grief and loss to our loved ones.  The idea that we're well and truly gone -- either off in some version of heaven, or else gone completely -- is understandably devastating to the people who care about us.

Well, with a machine-learning chatbot-based piece of software from Microsoft, maybe gone isn't forever, after all.

Carstian Luyckx, Memento Mori (ca. 1650) [Image is in the Public Domain]

What this piece of software does is go through your emails, text messages, and social media posts and pull out what you might call "elements of style" -- typical word choice, sentence structure, use of figurative language, use of humor, and so on.  Given enough data, it can then "converse" with your friends and family in a way that is damn near indistinguishable from the real you, which in my case would probably involve being unapologetically nerdy, having a seriously warped sense of humor, and saying "fuck" a lot.
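Just to make the "elements of style" idea concrete, here's a toy sketch of what extracting a crude stylistic fingerprint from someone's messages might look like.  This is purely illustrative -- the actual Microsoft system is far more sophisticated (and not public) than counting favorite words and sentence lengths:

```python
# A toy version of the "elements of style" idea: pull a few crude stylistic
# features out of a text sample. Illustrative only -- not Microsoft's method.

import re
from collections import Counter

def style_profile(texts):
    """Very rough stylistic fingerprint: favorite words, sentence length, exclamation habit."""
    words = []
    sentence_lengths = []
    exclamations = 0
    for text in texts:
        for sentence in re.split(r"[.!?]+", text):
            tokens = re.findall(r"[a-z']+", sentence.lower())
            if tokens:
                sentence_lengths.append(len(tokens))
                words.extend(tokens)
        exclamations += text.count("!")
    return {
        "favorite_words": Counter(words).most_common(5),
        "avg_sentence_length": sum(sentence_lengths) / max(len(sentence_lengths), 1),
        "exclamations_per_message": exclamations / max(len(texts), 1),
    }

messages = [
    "Seriously, that is the nerdiest thing I've ever seen. I love it!",
    "Okay, okay. I'll admit the pun was pretty good.",
]
print(style_profile(messages))
```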

If you find this idea kind of repellent, you're not alone.  Once I'm gone, I really don't want anyone digitally reincarnating me, because, after all, it isn't me you'd be talking to.  The conscious part of me isn't there; it's just a convincing mimic, taking input from what you say, cranking it through an algorithm, and producing an appropriate output based on the patterns of speech it learned.

But.

This brings up the time-honored question of what consciousness actually is, something that has been debated endlessly by far wiser heads than mine.  In what way are our brains not doing the same thing?  When you say, "Hi, Gordon, how's it going?", aren't my neural firing patterns zinging about in a purely mechanistic fashion until I come up with, "Just fine, how are you?"  Even a lot of us who don't explicitly believe in a "soul" or a "spirit," something that has an independent existence outside of our physical bodies, get a little twitchy about our own conscious experience.

So if an AI could mimic my responses perfectly -- and admittedly, the Microsoft chatbot is still fairly rudimentary -- how is that AI not me?

*brief pause to give my teddy bear a hug*

Myself, I wouldn't find a chatbot version of a deceased loved one at all comforting, however convincing it sounded.  Apparently there's even been some work on having the software scan through your photographs and create an animated avatar to go along with the verbal responses, which I find even worse.  As hard as it is to lose someone you care about, it seems to me better to accept that death is part of the human condition, to grieve and honor your loved one in whatever way seems appropriate, and then get on with your own lives.

So please: once I'm gone, leave me to Rest In Peace.  No digital resuscitation, thanks.  To me, the Vikings had the right idea.  When I die, put my body on a boat, set fire to it, and push it out into the ocean.  Then afterward, have a wild party on the beach in my honor, with plenty of wine, music, dancing, and drunken debauchery.  This is probably illegal, but I can't think of a better sendoff.

After that, just remember me fondly, read what I wrote, recall all the good times, and get on with living.  Maybe there's an afterlife and maybe there isn't, but there's one thing just about all of us would agree on: the life we have right now is too precious to waste.

****************************************
