Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, March 13, 2021

The eyes have it

A friend of mine has characterized the teaching of science in elementary school, middle school, high school, and college as follows:

  1. Elementary school: Here's how it works!  There are a couple of simple rules.
  2. Middle school: Okay, it's not quite that simple.  Here are a few exceptions to the simple rules.
  3. High school: Those exceptions aren't actually exceptions, it's just that there are a bunch more rules.
  4. College: Here are papers written studying each of those "rules," and it turns out some are probably wrong, and analysis of the others has raised dozens of other questions.

This is pretty close to spot-on.  The universe is a complicated place, and it's inevitable that to introduce children to science you have to simplify it considerably.  A seventh grader could probably understand and apply F = ma, but you wouldn't get very far if you started out with the equations of quantum electrodynamics.

But there are good ways to do this and bad ways.  Simplifying concepts and omitting messy complications is one thing; telling students something that is out-and-out false because it's familiar and sounds reasonable is quite another.  And no example of this pisses me off more than the intro-to-genetics standard that brown eye color in humans is caused by a dominant allele and blue eye color by a recessive one.

How many of you had your first introduction to Mendel's laws from a diagram like this one?


This is one of those ideas that isn't so much an oversimplification as it is ridiculously wrong.  Any reasonably intelligent seventh grader would see this and immediately realize that not only do different people's brown and blue eyes vary in hue and darkness, but there are also hazel eyes, green eyes, gray eyes, and various combos -- hazel eyes with green flecks, for example.  Then there's heterochromia -- far more common in dogs than in humans -- in which the iris of the right eye is a dramatically different color from that of the left.

[Image licensed under the Creative Commons AWeith, Sled dog on Svalbard with heterochromia, CC BY-SA 4.0]
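Just to make the textbook model concrete, here's a minimal sketch (purely illustrative) of the single-locus story those diagrams tell: two alleles, brown dominant, and a Bb x Bb cross yielding the tidy 3:1 ratio -- with no room at all for hazel, green, or gray.

```python
from itertools import product
from collections import Counter

def punnett(parent1, parent2):
    """Cross two single-locus genotypes (e.g. 'Bb' x 'Bb') and tally
    the offspring, sorting so that 'Bb' and 'bB' count together."""
    return Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

def phenotype(genotype):
    # The textbook rule: a single 'B' (brown) allele masks 'b' (blue).
    return "brown" if "B" in genotype else "blue"

for geno, count in punnett("Bb", "Bb").items():
    print(f"{geno}: {count}/4 -> {phenotype(geno)}")
# Output: the classic 3:1 brown-to-blue ratio, and exactly two possible
# phenotypes -- which is precisely what real eye color refuses to do.
```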

When I taught genetics, I found that the first thing I usually had to get my students to do was to unlearn the things they'd been taught wrong, with eye color inheritance at the top of the list.  (Others were that right-handedness is dominant -- in fact, we have no idea how handedness is inherited; that red hair is caused by a recessive allele; and that dark skin color is dominant.)  In fact, even some traits that sorta-kinda-almost follow a Mendelian pattern, such as hitchhiker's thumb, cleft chin, and attached earlobes, aren't as simple as they might seem.

But there's nowhere that the typical middle-school approach to genetics misses the mark quite as badly as it does with eye color.  While it's clearly genetic in origin -- most physical traits are -- the actual mechanism should rightly be put in that unfortunate catch-all stuffed away in the science attic:

"Complex and poorly understood."

The good news, though, and what prompted me to write this, is a paper this week in Science Advances that might at least address some of the "poorly understood" part.  A broad-ranging study of people from across Europe and Asia found that eye color is associated with no fewer than sixty-one distinct genomic regions.  Each of these affects some part of pigment production or deposition (or the structure of the iris itself), and variation at these loci from population to population is why the variation in eye appearance seems virtually infinite.

The authors write:

Human eye color is highly heritable, but its genetic architecture is not yet fully understood.   We report the results of the largest genome-wide association study for eye color to date, involving up to 192,986 European participants from 10 populations.  We identify 124 independent associations arising from 61 discrete genomic regions, including 50 previously unidentified.  We find evidence for genes involved in melanin pigmentation, but we also find associations with genes involved in iris morphology and structure.  Further analyses in 1636 Asian participants from two populations suggest that iris pigmentation variation in Asians is genetically similar to Europeans, albeit with smaller effect sizes.  Our findings collectively explain 53.2% (95% confidence interval, 45.4 to 61.0%) of eye color variation using common single-nucleotide polymorphisms.  Overall, our study outcomes demonstrate that the genetic complexity of human eye color considerably exceeds previous knowledge and expectations, highlighting eye color as a genetically highly complex human trait.
And note that even this analysis only explained a little more than half of the observed variation in human eye color.
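For contrast with the one-gene cartoon, here's a minimal sketch of how a polygenic trait like this is actually modeled: each locus nudges the outcome a little, and a weighted sum across all sixty-one regions yields a continuous score.  The effect sizes and genotypes below are invented for illustration; the real ones are in the paper's supplementary data.

```python
import random

# Hypothetical per-locus effect sizes (invented for illustration only).
effect_sizes = [random.gauss(0, 0.1) for _ in range(61)]

def pigmentation_score(genotype):
    """genotype: 61 allele counts (0, 1, or 2 copies of the effect
    allele at each locus).  The score is just a weighted sum."""
    return sum(beta * count for beta, count in zip(effect_sizes, genotype))

person = [random.choice([0, 1, 2]) for _ in range(61)]
print(f"pigmentation score: {pigmentation_score(person):+.3f}")
# Sixty-one small nudges rather than one on/off switch -- which is why
# the observed spectrum of eye colors is effectively continuous.
```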

Like I said, it's not that middle-school teachers should start their students off with a paper from Science Advances.  I usually began with a few easily observable traits from the sorta-kinda-Mendelian list, like tongue rolling and hitchhiker's thumb.  These aren't quite as simple as they're usually portrayed, but at least calling them Mendelian isn't so ridiculously wrong that when students learn the correct model -- most often in college -- they could accuse their teachers of lying outright.

Eye color, though.  That one isn't even Mendelian on a superficial level.  Teaching it that way is a little akin to teaching elementary students that 2+2=5 and figuring that's close enough for now and can be refined later.  So to teachers who still use brown vs. blue eye color as their canonical example of a dominant and recessive allele:

Please find a different one.

****************************************

Last week's Skeptophilia book-of-the-week was about the ethical issues raised by gene modification; this week's is about the person who made CRISPR technology possible -- Nobel laureate Jennifer Doudna.

In The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race, author Walter Isaacson describes the discovery of how the bacterial enzyme complex called CRISPR-Cas9 can be used to edit genes of other species with pinpoint precision.  Doudna herself has been fascinated with scientific inquiry in general, and genetics in particular, since her father gave her a copy of The Double Helix and she was caught up in what Richard Feynman called "the joy of finding things out."  The story of how she and fellow laureate Emmanuelle Charpentier developed the technique that promises to revolutionize our ability to treat genetic disorders is a fascinating exploration of the drive to understand -- and a cautionary note about the responsibility of scientists to do their utmost to make certain their research is used ethically and responsibly.

If you like biographies, are interested in genetics, or both, check out The Code Breaker, and find out how far we've come into the science-fiction world of curing genetic disease, altering DNA, and creating "designer children," and keep in mind that whatever happens, this is only the beginning.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Friday, March 12, 2021

Worlds without end

Earlier this week, I dealt with the rather unsettling idea that when AI software capabilities improve just a little more, we may be able to simulate someone so effectively that their interactions with us will be nearly identical to the real thing.  At that point, we may have to redefine what death means -- if someone's physical body has died, but their personality lives on, emulated within a computer, are they really gone?

Well, according to a couple of recent papers, the rabbit hole may go a hell of a lot deeper than that.

Let's start with Russian self-styled "transhumanist" Alexey Turchin.  Turchin has suggested that in order to build a convincing simulated reality, we need not only much more sophisticated hardware and software but also a much larger energy source to run it than is now available.  Emulating one person, semi-convincingly, with an obviously fake animated avatar doesn't take much; as we saw in my earlier post, we can more or less already do that.

But to emulate millions of people, so well that they really are indistinguishable from the people they're copied from, is a great deal harder.  Turchin proposes that one way to harvest that kind of energy is to create a "Dyson sphere" around the Sun, effectively capturing all of that valuable light and heat that otherwise is simply radiated into space.
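To get a sense of the scale Turchin is talking about, here's a back-of-the-envelope comparison, using only the standard textbook figures for the Sun's total power output and humanity's current average energy draw:

```python
SOLAR_LUMINOSITY = 3.8e26   # watts -- the Sun's total radiated power
WORLD_POWER_USE = 1.8e13    # watts -- humanity's rough current average draw

ratio = SOLAR_LUMINOSITY / WORLD_POWER_USE
print(f"A complete Dyson sphere captures ~{ratio:.0e}x current world usage")
# ~2e13: roughly twenty trillion times our present energy budget, which
# is the kind of headroom Turchin has in mind for planet-scale emulation.
```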

Now, I must say that the whole Dyson sphere idea isn't what grabbed me about Turchin's paper, as wonderful as the concept is in science fiction (Star Trek aficionados will no doubt recall the TNG episode "Relics," in which the Enterprise almost got trapped inside one permanently).  The technological hurdles to building a stable Dyson sphere seem to me nearly insurmountable.  What raised my eyebrows was his claim that once we've achieved a sufficient level of software and hardware sophistication -- wherever we get the energy to run it -- the beings (can you call them that?) within the simulation would proceed to interact with each other as if it were a real world.

And might not even know they were within a simulation.

"If a copy is sufficiently similar to its original to the extent that we are unable to distinguish one from the other," Turchin asks, "is the copy equal to the original?"

If that's not bad enough, there's the even more unsettling idea that not only is it possible we could eventually emulate ourselves within a computer, it's possible that it's already been done.

And we're it.

Work by Nick Bostrom (of the University of Oxford) and David Kipping (of Columbia University) has looked at the question from a statistical standpoint.  Way back in 2003, Bostrom considered the issue a trilemma.  There are three possibilities, he says:
  • Intelligent species always go extinct before they become technologically capable of creating simulated realities that sophisticated.
  • Intelligent species don't necessarily go extinct, but even when they reach the state where they'd be technologically capable of it, none of them become interested in simulating realities.
  • Intelligent species eventually become able to simulate reality, and go ahead and do it.
Kipping recently extended Bostrom's analysis using Bayesian statistical techniques.  The details of the mathematics are a bit beyond my ken, but the gist of it is to consider what it would be like if choice #3 has even a small possibility of being true.  Let's say some intelligent civilizations eventually become capable of creating simulations of reality.  Within that reality, the denizens themselves evolve -- we're talking about AI that is capable of learning, here -- and some of them eventually become capable of simulating their reality with a reality-within-a-reality.

Kipping calls such a universe "multiparous" -- meaning "giving birth to many."  Because as soon as this ball gets rolling, it will inevitably give rise to a nearly infinite number of nested universes.  Some of them will fall apart, or their sentient species will go extinct, just as (on a far simpler level) your character in a computer game can die and disappear from the "world" it lives in.  But as long as some of them survive, the recursive process continues indefinitely, generating an unlimited number of matryoshka-doll universes, one inside the other.

[Image licensed under the Creative Commons Stephen Edmonds from Melbourne, Australia, Matryoshka dolls (3671820040) (2), CC BY-SA 2.0]

Then Kipping asks the question that blows my mind: if this is true, then what is the chance of our being in the one and only "base" (i.e. original) universe, as opposed to one of the uncounted trillions of copies?

Very close to zero.
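The arithmetic behind that "very close to zero" is straightforward -- what follows is just the simple indifference-principle calculation, not Kipping's full Bayesian machinery.  With one base universe and N indistinguishable simulated ones, your chance of being in the base universe is 1/(N+1):

```python
def probability_base_universe(n_simulations):
    """One base universe plus n indistinguishable simulated ones:
    the chance you're in the base universe is 1 / (n + 1)."""
    return 1 / (n_simulations + 1)

for n in [0, 1, 1_000, 1_000_000_000_000]:
    print(f"{n:>16,} simulations -> P(base) = {probability_base_universe(n):.2e}")
# With trillions of nested copies, P(base) is effectively zero -- though,
# as Kipping notes below, the odds stay near 50-50 until the day such a
# simulation is actually built.
```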

"If humans create a simulation with conscious beings inside it, such an event would change the chances that we previously assigned to the physical hypothesis," Kipping said.  "You can just exclude that [hypothesis] right off the bat.  Then you are only left with the simulation hypothesis.  The day we invent that technology, it flips the odds from a little bit better than 50–50 that we are real to almost certainly we are not real, according to these calculations.  It’d be a very strange celebration of our genius that day."

The whole thing reminded me of a conversation in my novel Sephirot between the main character, Duncan Kyle, and the fascinating and enigmatic Sphinx, that occurs near the end of the book:
"How much of what I experienced was real?" Duncan asked.

"This point really bothers you, doesn't it?"

"Of course. It's kind of critical, you know?"

"Why?" Her basso profundo voice dropped even lower, making his innards vibrate.  "Everyone else goes about their lives without worrying much about it."

"Even so, I'd like to know."

She considered for a moment.  "I could answer you, but I think you're asking the wrong question."

"What question should I be asking?"

"Well, if you're wondering whether what you're seeing is real or not, the first thing to establish is whether or not you are real.  Because if you're not real, then it rather makes everyone else's reality status a moot point, don't you think?"

He opened his mouth, stared at her for a moment, and then closed it again.

"Surely you have some kind of clever response meant to dismiss what I have said entirely," she said.  "You can't come this far, meeting me again after such a long journey, only to find out you've run out of words."

"I'm not sure what to say."

The Sphinx gave a snort, and a shower of rock dust floated down onto his head and shoulders.  "Well, say something.  I mean, I'm not going anywhere, but at some point you'll undoubtedly want to."

"Okay, let's start with this.  How can I not be real?  That question doesn't even make sense.  If I'm not real, then who is asking the question?"

"And you say you're not a philosopher," the Sphinx said, her voice shuddering a little with a deep laugh.

"No, but really.  Answer my question."

"I cannot answer it, because you don't really know what you're asking.  You looked into the mirrors of Da'at, and saw reflections of yourself, over and over, finally vanishing into the glass, yes?  Millions of Duncan Kyles, all looking this way and that, each one complete and whole and wearing the charming befuddled expression you excel at."

"Yes."

"Had you asked one of those reflections, 'Which is the real Duncan Kyle, and which the copies?' what do you think he would have said?"

"I see what you're saying.  But still… all of the reflections, even if they'd insisted that they were the real one, they'd have been wrong.  I'm the original, they're the copies."

"You're so sure?... A man who cannot prove that he isn't a reflection of a reflection, who doesn't know whether he is flesh and blood or a character in someone else's tale, sets himself up to determine what is real."  She chuckled.  "That's rich."
So yeah.  When I wrote that, I wasn't ready for it to be turned on me personally.

Anyhow, that's our unsettling science/philosophy for this morning.  Right now it's probably better to go along with Duncan's attitude of "I sure feel real to me," and get on with life.  But if perchance I am in a simulation, I'd like to appeal to whoever's running it to let me sleep better at night.

And allow me to add that the analysis by Bostrom and Kipping is not helping much.

****************************************




Thursday, March 11, 2021

The monster in the mist

I thought that after writing this blog for ten years, I'd have run into every cryptid out there.  But just yesterday a loyal reader of Skeptophilia sent me a link about one I'd never heard of, which is especially interesting given that the thing supposedly lives in Scotland.

I've had something of a fascination with Scotland and all things Scottish for a long time, partly because of the fact that my dad's family is half Scottish (he used to describe his kin as "French enough to like to drink and Scottish enough not to know when to stop").  My grandma, whose Hamilton and Lyell ancestry came from near Glasgow, knew lots of cheerful Scottish stories and folk songs, 95% of which were about a guy named Johnny who was smitten with a girl named Jenny, but she spurned him, so he had no choice but to stab her to death with his wee pen-knife.

Big believers in happy endings, the Scots.

Anyhow, none of my grandma's stories were about the "Am Fear Liath Mòr," which roughly translates to "Big Gray Dude," who supposedly lopes about in the Cairngorms, the massive mountain range in the eastern Highlands.  He is described as extremely tall and covered with gray hair, and his presence is said to "create uneasy feelings."  Which seems to me to be putting it mildly.  If I were hiking through some lonely, rock-strewn mountains and came upon a huge hair-covered proto-hominid, my uneasy feelings would include pissing my pants and then having a stroke.  But maybe the Scots are tougher-spirited than that, and upon seeing the Am Fear Liath Mòr simply report feeling a little unsettled about the whole thing.

A couple of Scottish hikers being made to feel uneasy

The Big Gray Dude has been seen by a number of people, most notably the famous mountain climber J. Norman Collie, who in 1925 reported the following encounter on the summit of Ben MacDhui, the highest peak in the Cairngorms:
I was returning from the cairn on the summit in the mist when I began to think I heard something else than merely the noise of my own footsteps.  For every few steps I took I heard a crunch, and then another crunch as if someone was walking after me but taking steps three or four times the length of my own.  I said to myself, this is all nonsense.  I listened and heard it again, but could see nothing in the mist.  As I walked on and the eerie crunch, crunch, sounded behind me, I was seized with terror and took to my heels, staggering blindly among the boulders for four or five miles nearly down to Rothiemurchus Forest.  Whatever you make of it I do not know, but there is something very queer about the top of Ben MacDhui and I will not go back there myself I know.
Collie's not the only one who's had an encounter.  Mountain climber Alexander Tewnion said he was on the Coire Etchachan path on Ben MacDhui when the thing "loomed up out of the mist and then charged."  Tewnion fired his revolver at it, but whether he hit it or not he couldn't say.  In any case, it didn't harm him, although it did give him a serious scare.

Periodic sightings still occur today, mostly from hikers who catch a glimpse of the creature or find large footprints that don't seem human.  Many report feelings of "morbidity, menace, and depression" when the Am Fear Liath Mòr is nearby -- one reports suddenly being "overwhelmed by either a feeling of utter panic or a downward turning of my thoughts which made me incredibly depressed."  Scariest of all, one person driving through the Cairngorms toward Aberdeen said that the creature chased their car, keeping up with it on the twisty roads until they finally hit a straight stretch and were able to speed up enough to lose it.  After it gave up the chase, they said, "it stood there in the middle of the road watching us as we drove away."

So that's our cryptozoological inquiry for today.  I've been to Scotland once, but never made it out of Edinburgh -- I hope to go back and visit the ancestral turf some day.  When I do, I'll be sure to get up into the Cairngorms and see if I can catch a glimpse of the Big Gray Dude.  I'll report back on how uneasy I feel afterwards.

****************************************




Wednesday, March 10, 2021

Shooting the bull

There's a folk truism that goes, "Don't try to bullshit a bullshitter."

The implication is that people who exaggerate and/or lie routinely, either to get away with things or to create an overblown image of themselves, know the technique so well that they can always spot it in others.  This makes bullshitting a doubly attractive game: not only does it make you slick, impressing the gullible and letting you dodge responsibility, it supposedly makes you savvy and less likely to be suckered yourself.

Well, a study published this week in The British Journal of Social Psychology, conducted by Shane Littrell, Evan Risko, and Jonathan Fugelsang, has shown that, like many folk truisms, this one isn't true at all.

In fact, the research supports the opposite conclusion.  At least one variety of regular bullshitting leads to more likelihood of falling for bullshit from others.

[Image licensed under the Creative Commons Inkscape by Anynobody, composing work: Mabdul ., Bullshit, CC BY-SA 3.0]

The researchers identified two main kinds of bullshitting, persuasive and evasive.  Persuasive bullshitters exaggerate or embellish their own accomplishments to impress others or fit in with their social group; evasive ones dance around the truth to avoid damaging their own reputations or the reputations of their friends.

Because bullshitting carries a positive shine for many people, the researchers figured that most people who engage in either type wouldn't be shy about admitting it, so they used self-reporting to assess the bullshit levels and styles of the eight hundred participants.  Each participant then completed formal measures of cognitive ability, metacognitive insight, intellectual overconfidence, and reflective thinking, followed by a series of pseudo-profound and pseudoscientific statements mixed in with genuinely profound and truthful ones, to see if they could tell them apart.

The surprising result was that self-reported persuasive bullshitters were significantly worse at detecting pseudo-profundity than the habitually honest, while the evasive bullshitters were better than average.

"We found that the more frequently someone engages in persuasive bullshitting, the more likely they are to be duped by various types of misleading information regardless of their cognitive ability, engagement in reflective thinking, or metacognitive skills," said study lead author Shane Littrell, of the University of Waterloo.  "Persuasive BSers seem to mistake superficial profoundness for actual profoundness.  So, if something simply sounds profound, truthful, or accurate to them that means it really is.  But evasive bullshitters were much better at making this distinction."

Which supports a contention I've had for years: if you lie for long enough, you eventually lose touch with what the truth is.  That persuasive and evasive bullshitting differ in this respect might be because evasive bullshitters engage in the behavior out of a high sensitivity to people's opinions, both of themselves and of others.  That sensitivity would make them more attentive to what others are saying and doing, and better at sussing out people's real motives -- and whether they're being truthful.  Persuasive bullshitters, on the other hand, are so self-focused that they pay little attention to what others say, so any subtleties that might clue them in to the fact that they're being bullshitted slip right by.
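At bottom, the study's headline finding is a correlation between two scores per participant: how much you bullshit, and how well you detect it.  Here's a toy sketch of that shape of analysis -- the numbers are entirely invented for illustration and are not the study's data:

```python
import statistics

# Invented pairs: (self-reported persuasive-bullshitting score,
# accuracy at flagging pseudo-profound statements).
participants = [(1, 0.90), (2, 0.85), (3, 0.70), (4, 0.60), (5, 0.55),
                (2, 0.80), (4, 0.65), (5, 0.50), (1, 0.95), (3, 0.75)]

bs_scores = [bs for bs, _ in participants]
accuracy = [acc for _, acc in participants]

r = statistics.correlation(bs_scores, accuracy)  # Pearson's r; Python 3.10+
print(f"r = {r:.2f}")
# Strongly negative with these made-up numbers: more persuasive
# bullshitting, worse detection -- the direction the study reports.
```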

I don't know whether this is encouraging or not.  I'm not sure that it being easier to lie successfully to a liar is a point for celebration by those of us who care about the truth.  But it does illustrate that our common sense about our own behavior often isn't very accurate.  As usual, approaching questions from a skeptical, scientific angle is best.

After all, no form of bullshit can withstand that.

****************************************




Tuesday, March 9, 2021

Memento mori

A man is discussing his fears about dying with his parish priest.

"Father," he says, "I'd be able to relax a little if I knew more about what heaven's like.  I mean, I love baseball... do you think there's baseball in heaven?"

The priest says, "Let me pray on the matter, my son."

So at their next meeting, the priest says, "I have good news and bad news...  The good news is, there is baseball in heaven."

The man gives him a relieved smile.  "So, what's the bad news?"

"You're playing shortstop on Friday."

*rimshot*

The vast majority of us aren't in any particular rush to die, and would go to significant lengths to postpone the event.  Even people who believe in a pleasant afterlife -- with or without baseball -- are usually just fine waiting as long as possible to get there.

And beyond our own fears about dying, there's the pain of grief and loss to our loved ones.  The idea that we're well and truly gone -- either off in some version of heaven, or else gone completely -- is understandably devastating to the people who care about us.

Well, with a machine-learning chatbot-based piece of software from Microsoft, maybe gone isn't forever, after all.

Carstian Luyckx, Memento Mori (ca. 1650) [Image is in the Public Domain]

What this piece of software does is go through your emails, text messages, and social media posts and pull out what you might call "elements of style" -- typical word choice, sentence structure, use of figurative language, use of humor, and so on.  Once it has been fed sufficient data, it can "converse" with your friends and family in a way that is damn near indistinguishable from the real you, which in my case would probably involve being unapologetically nerdy, having a seriously warped sense of humor, and saying "fuck" a lot.
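For a feel for the underlying idea, here's the crudest possible version: a Markov-chain mimic that learns word-to-word transitions from a sample of someone's writing and then babbles in a statistically similar style.  (Microsoft's system is vastly more sophisticated; this is just the toy edition of "learn the patterns, then generate.")

```python
import random
from collections import defaultdict

def train(corpus):
    """Record which words follow which in the sample text."""
    transitions = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def babble(transitions, start, length=12):
    """Generate text by randomly walking the learned transitions."""
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

sample = ("the universe is a complicated place and the universe "
          "does not care what we believe about the universe")
print(babble(train(sample), "the"))
```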

If you find this idea kind of repellent, you're not alone.  Once I'm gone, I really don't want anyone digitally reincarnating me, because, after all, it isn't me you'd be talking to.  The conscious part of me isn't there; it's just a convincing mimic, taking input from what you say, cranking it through an algorithm, and producing an appropriate output based on the patterns of speech it learned.

But.

This brings up the time-honored question of what consciousness actually is, something that has been debated endlessly by far wiser heads than mine.  In what way are our brains not doing the same thing?  When you say, "Hi, Gordon, how's it going?", aren't my neural firing patterns zinging about in a purely mechanistic fashion until I come up with, "Just fine, how are you?"  Even a lot of us who don't explicitly believe in a "soul" or a "spirit," something that has an independent existence outside of our physical bodies, get a little twitchy about our own conscious experience.

So if an AI could mimic my responses perfectly -- and admittedly, the Microsoft chatbot is still fairly rudimentary -- how is that AI not me?

*brief pause to give my teddy bear a hug*

Myself, I wouldn't find a chatbot version of my deceased loved one at all comforting, however convincing it sounded.  Apparently there's even been some work on having the software scan through your photographs and create an animated avatar to go along with the verbal responses, which I find even worse.  As hard as it is to lose someone you care about, it seems to me better to accept that death is part of the human condition, to grieve and honor your loved one in whatever way seems appropriate, and then get on with your own lives.

So please: once I'm gone, leave me to Rest In Peace.  No digital resuscitation, thanks.  To me, the Vikings had the right idea.  When I die, put my body on a boat, set fire to it, and push it out into the ocean.  Then afterward, have a wild party on the beach in my honor, with plenty of wine, music, dancing, and drunken debauchery.  This is probably illegal, but I can't think of a better sendoff.

After that, just remember me fondly, read what I wrote, recall all the good times, and get on with living.  Maybe there's an afterlife and maybe there isn't, but there's one thing just about all of us would agree on: the life we have right now is too precious to waste.

****************************************




Monday, March 8, 2021

Music on the brain

It is a source of tremendous curiosity to me why music is as powerful an influence as it is.  Music has been hugely important in my own life, and remains so to this day.  I remember my parents telling me stories about my early childhood, including how, when I couldn't have been more than about four years old, I clamored to be allowed to use the record player myself.  At first they were reluctant, but my insistence finally won the day.  They showed me how to handle the records carefully, operate the buttons to drop the needle onto the record, and put everything away when I was done.  There were records I played over and over again (that I wasn't discouraged is a testimony to my parents' patience and forbearance) -- and I never damaged a single one.  They were simply too important to me to handle roughly.

The transformative experience of music is universal to the human species.  A carved bone found in Slovenia, dated at about 43,000 years old, is thought by many to be one of the earliest musical instruments -- if that contention is correct, our drive to make music must be very old indeed.


The neurological underpinnings of our musical experience, however, have not been easy to elucidate.  There has long been speculation that our affinity for music has something to do with the tonal expression of emotion in language, but that remains just that -- speculation.  Recently, however, three scientists in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology showed that we have a dedicated module in our brains for experiencing and responding to music.

A team led by Sam Norman-Haignere did fMRIs of individuals listening to music, and of others listening to a variety of other familiar sounds (including human speech).  They then compared the type of sound to the responses of small three-dimensional volumes of brain tissue -- what the scientists call voxels -- to see if they could find correlations between them.

The relationship turned out to be unmistakable.  They found that there were distinct firing patterns in regions of the brain that occurred only when the subject was listening to music -- and that it didn't matter what the style of music was.  Norman-Haignere said, "The sound of a solo drummer, whistling, pop songs, rap, almost everything that had a musical quality to it, melodic or rhythmic, would activate it.  That's one reason the results surprised us."

The research team writes:
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels.  To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex...  Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex...  [This research] identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech.
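Those "weighted combinations" amount to a matrix factorization: the measured voxel-by-sound response matrix is decomposed into a handful of component response profiles plus per-voxel weights.  Here's a minimal sketch of that kind of decomposition, using off-the-shelf non-negative matrix factorization on random stand-in data (the study used its own decomposition method, and real fMRI responses, of course):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Stand-in data: 1,000 voxels x 165 sounds (the study's stimulus count).
responses = rng.random((1000, 165))

# Factor into 6 components, so that responses ~= W @ H.
model = NMF(n_components=6, init="random", random_state=0, max_iter=500)
W = model.fit_transform(responses)  # (1000, 6): each voxel's component weights
H = model.components_               # (6, 165): each component's response profile

print(W.shape, H.shape)
# In the actual study, one recovered component responded selectively to
# music and another to speech -- regardless of genre or language.
```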
This study opens up a whole new approach to understanding why our auditory centers are structured the way they are, although it does still leave open the question of why music is so tremendously important across cultures. "Why do we have music?" study senior author Nancy Kanwisher said in an interview with the New York Times.  "Why do we enjoy it so much and want to dance when we hear it?  How early in development can we see this sensitivity to music, and is it tunable with experience?  These are the really cool first-order questions we can begin to address."

What I find the most curious about this is that the same region of the brain is firing in response to incredibly dissimilar inputs.  Consider, for example, the differences between a sitar solo, a Rossini aria, a Greydon Square rap, and a Bach harpsichord sonata.  Isn't it fascinating that we all have a part of the auditory cortex that responds to all of those -- regardless of our cultural background or musical preferences?

I find the whole thing tremendously interesting, and can only hope that the MIT team will continue their investigations.  I'm fascinated not only with the universality of musical appreciation, but the peculiar differences -- why, for example, I love Bach, Stravinsky, Shostakovich, and Vaughan Williams, but Chopin, Brahms, Mahler, and Schumann leave me completely cold.  Must be something about my voxels, I suppose -- but wouldn't it be cool to find out what it is?

****************************************




Saturday, March 6, 2021

Complexity vs. bigotry

By now, most of you have probably heard that Marjorie Taylor Greene, the Republican representative from Georgia who narrowly edged out both Matt Gaetz and Lauren Boebert as the biggest asshole in Congress, thought it'd be a fun stunt to taunt Representative Marie Newman (D-Ill.) about having a transgender daughter by putting up the following sign:


This sign illustrates a general rule of thumb, to wit: do not append "Trust the Science" to your ignorant, bigoted opinion and expect it to go unchallenged when there's someone in the room who actually understands science.

The whole "anything that's not cis-het-binary sexuality is unnatural" claim starts to fall apart as soon as you look at it carefully.  Beginning with the fact that to date, homosexual behavior has been observed and documented in 450 animal species besides humans.  That's a few too many to explain away, as one Kenyan official did regarding a video of coupling between two male lions, that the animals were "influenced by gays who have gone to the national parks and behaved badly."

Although I have to say that any couple, gay or otherwise, who is brave enough to fuck outdoors while lions are watching has my utmost admiration.

Since "unnatural" means "not found in nature," we're off to a bad start.  Things only get worse when you look not at who's mating with whom, but what the sexes of individuals themselves are.  Over five hundred species of fish have been identified that change sex -- often when a dominant individual of one sex dies, and the strongest remaining individual switches sex to take his/her place.  Some species, such as many types of gobies, can actually change back and forth, actually altering their anatomy to become reproductively mature females or males as needed dependent on the makeup of the rest of the population.

The complications don't end there, because there's the difficulty of specifying what exactly we mean when we say "male" and "female."  There are at least five different ways you could define "sex": what genitals you have, which gender(s) you're attracted to, what sex chromosomes you have, which hormones are present in your bloodstream, and your brain wiring (i.e., what gender you see yourself as).  And despite what Marjorie Taylor Greene and others of her ilk would have you believe, all too commonly these don't line up.

We dealt with attraction and genitalia; what about chromosomes?  In mammals, maleness is conferred by a gene complex called SRY that's carried on the Y chromosome, so generally an individual with a matched set of sex chromosomes (XX) is female, while someone with an unmatched set (XY) is male.  It's wryly amusing that the euphemism for explaining sex is "the birds and the bees," because birds and bees both do this differently.  In birds, it's the males that have the matched set (ZZ) while the females have the unmatched set (ZW), which is why sex-linked inheritance runs in the opposite direction in birds from the way it does in mammals.  Bees are haplodiploid, meaning that males have half the number of chromosomes that females do -- fertilized eggs give rise to females, and unfertilized ones to males.  (If you're thinking, "So that means male bees have a mother but no father?", you're exactly right.)
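Since the logic differs across all three systems, a compact sketch may help.  In each case the rule is just which parent contributes the variable chromosome -- or, for bees, whether the egg was fertilized at all.  (The fifty-fifty coin flips below are a simplification for illustration, not real-world sex ratios.)

```python
import random

def offspring_sex(system):
    """Pick a random offspring's sex under each determination system."""
    if system == "XY":  # mammals: the father's gamete (X or Y) decides
        return "XX (female)" if random.random() < 0.5 else "XY (male)"
    if system == "ZW":  # birds: the mother's gamete (Z or W) decides
        return "ZZ (male)" if random.random() < 0.5 else "ZW (female)"
    if system == "haplodiploid":  # bees: fertilization decides
        return "diploid (female)" if random.random() < 0.5 else "haploid (male)"

for system in ("XY", "ZW", "haplodiploid"):
    print(f"{system:>13}: {offspring_sex(system)}")
# In mammals the variable gamete comes from the father; in birds, from the
# mother -- which is why sex-linked inheritance runs in opposite directions.
```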

Okay, so let's limit it to humans.  Makes it simple, right?  If that's your guess, you've kind of lost the plot.  Humans follow the XX/XY pattern -- most of the time.  In embryonic development, female anatomy is more or less the default condition; if an embryo lacks a working SRY, it develops into a female.  One of the drivers of male anatomical development is an enzyme called 5-alpha-reductase, which comes in two versions encoded by separate genes.  One is active embryonically -- which is why a prenatal ultrasound can often tell a woman whether she's going to have a boy -- and the other switches on around age twelve or thirteen, generating the changes in a boy's body that happen at puberty.

But there's a mutation called 5-alpha-reductase deficiency, which knocks out the first version but not the second.  So the baby is born looking like an ordinary female infant.  Then at around age twelve, the second gene switches on, and within a few months the child develops male anatomy -- the gonads descend, the penis develops, and so on.

Then there are the kids who have X-SRY: the SRY complex has moved, during a process called crossing over, onto the X chromosome, so the child karyotypes as female but is anatomically male.  And there's XY androgen insensitivity, which is sort of the opposite; an alteration in a hormone receptor leaves male hormones unable to lock onto the appropriate cells, so even with an XY karyotype and the level of testosterone in the bloodstream usually seen in a typical male, the person is anatomically female.

And then there's the most complex thing of all: the neural wiring that gives rise to the sense of self.  Most adults have a sense of their gender that goes beyond what their plumbing looks like, and sometimes it doesn't line up with the genitalia, the chromosome makeup, or both.  A 2019 paper in Nature shows beyond any reasonable doubt that transgender people are not, as Marjorie Taylor Greene would claim, either "unnatural" or "making it up"; they have measurable differences in neurology and hormone/receptor interactions from those of cisgender people.  We still don't fully understand what causes the transgender condition, but one thing it definitely isn't is some kind of invented pseudo-condition.

Nor is any of this a choice.  I'm reminded of what a trans student of mine said a couple of years ago: "A choice?  Why would I choose this?  To face prejudice on a daily basis?  To have to fight continuously for people simply to acknowledge that I am who I say I am?  Give me a break."  Then there was the gay student who shut up the "it's a choice" bigots by saying that if homosexual attraction is a choice, straight people should be able to choose, at least temporarily, to be attracted to the same sex.  "Try it!" he'd tell them cheerfully.  "Look at the body of someone the same sex as you, and choose to be attracted!"

After the bigot is stunned into silence, he usually adds, "Until you can do that, shut the fuck up."

Unfortunately, a lot of non-cis-hetero-binary people aren't in the position where they can be that determined not to give an inch; they still face ostracism from family and friends, ridicule and violence, and in some countries, imprisonment, torture, or execution.  Just for being who they are, just for loving who they love, just for wanting to have society acknowledge that sexuality and gender are complex -- and therefore as long as it's between consenting adults, every person has the right to be open about expressing those things in whatever way they experience them.

But with so many people bound and determined to fit the whole world into a neat, tidy, binary box, is it any wonder that LGBTQ+ people want to find a descriptor for every possible combination and gradation?  I sometimes hear snickering over "adding another letter to the acronym," but society has been so dismissive for so long that it's no wonder people want a label to hold up and say, "This is who I am."  (If you're wondering, I'm male and bisexual, but "queer" is also fine with me.)  Sexuality, both in humans and in other species, is so complex and multifaceted that there may not be enough letters in the alphabet to slice it finely enough to give each person's experience a unique descriptor.  But with clods like Marjorie Taylor Greene posting signs on office doors claiming they have the God-given, black-and-white truth and that all the scientists agree, you can hardly fault them for trying.

So to wrap this up: not only is Greene's sign simple bigotry, it's outright false.  The universe is a complicated place, and either you should take the time to learn what science actually has uncovered about it, or else keep your damn mouth shut.

And if you're too lazy, ignorant, and opinionated to do that, you have no place in our government crafting policy for people smarter than you.

****************************************

The advancement of technology has opened up ethical questions we've never had to face before, and one of the most difficult is how to handle our sudden ability to edit the genome.

CRISPR-Cas9 is a system for doing what amounts to cut-and-paste editing of DNA, and since its discovery by Emmanuelle Charpentier and Jennifer Doudna, the technique has been refined and given pinpoint precision.  (Charpentier and Doudna won the Nobel Prize in Chemistry last year for their role in developing CRISPR.)

Of course, it generates a host of questions that can be summed up by Ian Malcolm's quote in Jurassic Park, "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."  If it became possible, should CRISPR be used to treat devastating diseases like cystic fibrosis and sickle-cell anemia?  Most people, I think, would say yes.  But what about disorders that are mere inconveniences -- like nearsightedness?  What about cosmetic traits like hair and eye color?

What about intelligence, behavior, personality?

None of that has been accomplished yet, but it bears keeping in mind that ten years ago, the whole CRISPR gene-editing protocol would have seemed like fringe-y science fiction.  We need to figure this stuff out now -- before it becomes reality.

This is the subject of bioethicist Henry Greely's new book, CRISPR People: The Science and Ethics of Editing Humans.  It considers the thorny questions surrounding not just what we can do, or what we might one day be able to do, but what we should do.

And given how fast science fiction has become reality, it's a book everyone should read... soon.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]