Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, February 27, 2021

Halting the conveyor

The Irish science historian James Burke, best known for his series Connections and The Day the Universe Changed, did a less-well-known two-part documentary in 1991 called After the Warming which -- like all of his productions -- approached the issue at hand from a novel angle.

The subject was anthropogenic climate change, something that back then was hardly the everyday topic of discussion it is now.  Burke has a bit of a theatrical bent, and in After the Warming he takes the point of view of a scientist in the year 2050, looking back to see how humanity ended up where it was by the mid-21st century.

Watching this documentary now, I have to keep reminding myself that everything he says happened after 1991 was a prediction, not a recounting of actual history.  Some of his scenarios were downright prescient, more than one of them down to the year they occurred.  The Iraq War, the catastrophic Atlantic hurricane barrage in 2005, droughts and heat waves in India, East Africa, and Australia -- and the repeated failure of the United States to believe the damn scientists and get on board with addressing the issue.  He was spot-on that the last thing the climatologists themselves would be able to figure out was the effect of climate change on the deep ocean.  He had a few misses -- the drought he predicted for the North American Midwest never happened, nor did the violent repulsion of refugees from Southeast Asia by Australia.  But his batting average still is pretty remarkable.

One feature of climate science he went into in detail -- something your average layperson probably wouldn't have known at the time -- was the Atlantic Conveyor, known to scientists as AMOC, the Atlantic Meridional Overturning Circulation.  The Atlantic Conveyor works more or less as follows:

The Gulf Stream, a huge surface current of warm water moving northward along the east coast of North America, evaporates as it moves, and that evaporation does two things: it cools the water, and it makes it more saline.  Both have the effect of increasing its density, and just south of Iceland it reaches the point that it becomes dense enough to sink.  This sinking is what keeps the Gulf Stream moving, drawing up more warm water from the south, and that northward transport of heat energy is why eastern Canada, western Europe, and Iceland itself are as temperate as they are.  (Consider, for example, that Oslo, Norway and Okhotsk, Siberia are at the same latitude -- 60 degrees North.)
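The two effects can be sketched numerically.  Here's a minimal illustration using a linearized equation of state for seawater; the coefficients below are typical textbook values, not drawn from any particular dataset:

```python
# Simplified linear equation of state (illustrative only):
#   rho ≈ rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
# Cooling (lower T) and salinification (higher S) both raise density.
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # reference density (kg/m^3), temp (°C), salinity (psu)
ALPHA, BETA = 2e-4, 8e-4            # thermal expansion and haline contraction coefficients

def density(T, S):
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

# Warm, moderately salty water leaving the tropics...
print(round(density(20.0, 36.0), 2))  # → 1025.77
# ...after evaporative cooling en route: colder AND saltier, hence denser
print(round(density(5.0, 36.5), 2))   # → 1029.26
```

Once the surface water is denser than the water beneath it, down it goes -- and that's the pump driving the whole conveyor.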

[Image is in the Public Domain courtesy of NASA/Goddard Space Flight Center]

Just about any high school kid, though, has heard about the Gulf Stream, usually in the context of the paths of sailing ships during the European Age of Exploration.  What many people don't know, however, is that if things warm up enough to melt the Greenland Ice Sheet, the meltwater will cause a drastic drop in salinity at the north end of the Gulf Stream, making that blob of water too fresh to sink.

The result: the entire Atlantic Conveyor stops in its tracks.  No more transport of heat energy northward, putting eastern Canada and northwestern Europe into the deep freeze.  The heat doesn't just go away, though -- that would break the First Law of Thermodynamics, which is strictly forbidden in most jurisdictions -- it would just cause the south Atlantic to heat up more, boosting temperatures in the southeastern United States and northern South America, and fueling hurricanes the likes of which we've never seen before.

Back in 1991, this was all speculative, based on geological records from the last time something like that happened, on the order of thirteen thousand years ago.  The possibility was far from common knowledge; in fact, I think After the Warming was the first place I ever heard about it.

Well, score yet another one for James Burke.

A paper this week in Proceedings of the National Academy of Sciences describes research by Johannes Lohmann and Peter Ditlevsen of the University of Copenhagen indicating that, based on current freshwater output from the melting of Arctic ice sheets, the tipping point from "saline-enough-to-sink" to "not" might be too near to do anything about.  "These tipping points have been shown previously in climate models, where meltwater is very slowly introduced into the ocean," Lohmann said in an interview with Gizmodo.  "In reality, increases in meltwater from Greenland are accelerating and cannot be considered slow."

The authors write -- and despite the usual careful word choice for scientific accuracy's sake, you can't help picking up the urgency behind the words:

Central elements of the climate system are at risk for crossing critical thresholds (so-called tipping points) due to future greenhouse gas emissions, leading to an abrupt transition to a qualitatively different climate with potentially catastrophic consequences...  Using a global ocean model subject to freshwater forcing, we show that a collapse of the Atlantic Meridional Overturning Circulation can indeed be induced even by small-amplitude changes in the forcing, if the rate of change is fast enough.  Identifying the location of critical thresholds in climate subsystems by slowly changing system parameters has been a core focus in assessing risks of abrupt climate change...  The results show that the safe operating space of elements of the Earth system with respect to future emissions might be smaller than previously thought.
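The paper's central idea -- that the *rate* of change matters, not just the amount -- is what's called rate-induced tipping, and it can be illustrated with a toy model that has nothing to do with ocean physics.  This is just a sketch of the general mechanism, not the authors' model: a system tracking a stable state can lose it when a forcing parameter ramps up quickly, even though the same total change applied slowly would be harmless.

```python
# Toy rate-induced tipping: in dx/dt = (x + L(t))^2 - 1 there is a
# stable state near x = -L - 1 that the system can follow if the
# forcing L ramps up slowly.  Ramp the SAME total amount quickly and
# the system fails to keep up and runs away -- it "tips" -- even
# though no fixed threshold in L was ever crossed.

def tips(ramp_rate, total_ramp=6.0, dt=0.001, t_end=60.0):
    x, L, t = -1.0, 0.0, 0.0   # start on the stable branch (L = 0)
    while t < t_end:
        if L < total_ramp:
            L = min(L + ramp_rate * dt, total_ramp)
        x += ((x + L) ** 2 - 1.0) * dt   # forward-Euler step
        if x > 10.0:                     # lost the stable branch: tipped
            return True
        t += dt
    return False

print(tips(0.2))   # slow ramp: the system tracks the stable state
print(tips(1.5))   # same total forcing, applied fast: collapse
```

In this caricature, `tips(0.2)` comes back `False` and `tips(1.5)` comes back `True` -- which is the qualitative point of the PNAS result: how fast the meltwater arrives matters as much as how much arrives.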

The Lohmann and Ditlevsen paper is hardly the first to sound the alarm.  Five years ago, a paper in Nature described a drop in temperature in the north Atlantic that is precisely what Burke warned about.  In that paper, written by a team led by Stefan Rahmstorf of the Potsdam Institute for Climate Impact Research, the authors write, "Using a multi-proxy temperature reconstruction for the AMOC index suggests that the AMOC weakness after 1975 is an unprecedented event in the past millennium (p > 0.99).  Further melting of Greenland in the coming decades could contribute to further weakening of the AMOC."

Once again, the sense of dismay is obvious despite being couched in deliberately cautious science-speak.

Even though the current administration in the United States explicitly says that addressing climate change is one of their top priorities, they're facing an uphill battle.  Baffling though it is to me, we are still engaged in fighting with people who don't even believe climate change exists, who understand science so little they're still at the "it was cold today, so climate change isn't happening" level of understanding.  (To quote Stephen Colbert, "And in other good news, I just ate dinner, so there's no such thing as world hunger.")  Besides outright stupidity (and apparent inability to read and comprehend scientific research), there's the added problem of elected officials being in the pockets of the fossil fuel industry, the money from which gives them a significant incentive for keeping the voting public ignorant about the issues.

Until we hit the tipping point Lohmann and Ditlevsen warn about.  At which point the effects will be obvious.

In other words, until it's too late.

If the Atlantic Conveyor shuts down, the results will no longer be arguable even by climate-change-denying knuckle-draggers like James "Senator Snowball" Inhofe.  The saddest part is that we were warned about this thirty years ago by a science historian in terms a layperson could easily understand, and -- in Burke's own words -- we sat on our hands.

And as with Cassandra, the character from Greek mythology who was blessed with the gift of foresight but cursed to have no one believe what she said, we'll only say, "Okay, I guess Burke and the rest were right all along" as the world's climate systems are collapsing around us.

********************************

Many of us were riveted to the screen last week watching the successful landing of the Mars rover Perseverance, and it brought to mind the potential for sending a human team to investigate the Red Planet.  The obstacles to overcome are huge; the four-odd-year voyage there and back requires a means of producing food and purifying air and water that has to be damn near failsafe.

Consider what befell the unfortunate astronaut Mark Watney in the book and movie The Martian, and you'll get an idea of what the crew could face.

Physicist and writer Kate Greene was among a group of people who agreed to participate in a simulation of the experience, not of getting to Mars but of being there.  In a geodesic dome on the slopes of Mauna Loa in Hawaii, Greene and her crewmates stayed for four months in isolation -- dealing with all the problems Martian visitors would run into, not only the aforementioned problems with food, water, and air, but the isolation.  (Let's just say that over that time she got to know the other people in the simulation really well.)

In Once Upon a Time I Lived on Mars: Space, Exploration, and Life on Earth, Greene recounts her experience in the simulation, and tells us what the first manned mission to Mars might really be like.  It makes for wonderful reading -- especially for people like me, who are just fine staying here in comfort on Earth, but are really curious about the experience of living on another world.

If you're an astronomy buff, or just like a great book about someone's real and extraordinary experiences, pick up a copy of Once Upon a Time I Lived on Mars.  You won't regret it.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Friday, February 26, 2021

The code switchers

When I was a graduate student in the School of Oceanography at the University of Washington -- an endeavor that lasted one semester, at which point I realized that I had neither the focus nor the brainpower to succeed as a research scientist -- I found an interesting commonality amongst the graduate students I hung out with.

This group of perhaps eight or nine twenty-somethings were without question the most vulgar, profane group I have ever been part of.  Regular readers of Skeptophilia, not to mention my friends and family, will know that my own vocabulary isn't exactly what anyone would call "prim and proper;" but while I am not averse to seasoning my speech with the occasional swear word, these people basically dumped in the entire spice cabinet.

The words "fuck" and "fuckin'" were like a staccato percussive beat to just about every sentence uttered.  You didn't say, "I gotta go to class," you said, "I fuckin' gotta go to class."  It was so bad most of us didn't even hear it any more, it was just "how we talked."  (And, I might add, it had the result of making those words completely lose their punch, and thus their effectiveness as emotionally-packed language.)  I have no idea why this particular group was so prone to obscene speech -- as you might expect, they were smart, scientifically-minded people with commensurately large vocabularies to choose from -- but once that became the norm, it was what one did to fit in.

What's most interesting is that when, at the end of that semester, I switched to the School of Education and started the track toward becoming a high school science teacher (a much more felicitous choice, as it turned out), I almost instantly adjusted my vocabulary to reflect the far more squeaky-clean speech of the Future Teachers of America.  I didn't have to think much about it; it wasn't like I had to obsessively watch my mouth until I learned how to control it.  The change was quick and required very little conscious thought to maintain.

This phenomenon is called code switching.  In its broadest definition, code switching occurs when a bilingual person flips between his/her two languages depending on the language of the listeners.  But context-dependent code switching occurs whenever we jump from one group we belong to into a different one, or from a group of strangers to a group of friends.

[Image licensed under the Creative Commons JasonSWrench, Transactional comm model, CC BY 3.0]

Code switching occurs in written language, too.  I write here at Skeptophilia, I write fiction, I have written science curriculum, I write emails to family, friends, coworkers, and total strangers (like the guy at the software company helpdesk and the woman at the bank who oversees our mortgage).  In each of those, my vocabulary, sentence structure, and degree of formality are different, not only in the words I choose, but in how exactly they're used.  Some of the differences are obvious; my wife gets emails ending with "xoxoxoxoxo;" my friends, usually with "cheers, g," and people I've contacted over business matters, "thank you so much, sincerely, Gordon."  (I'm a bit absent-minded at the best of times, and I live in fear of the day I send the guy at the helpdesk an email ending with the hugs-and-kisses signoff.)

But it turns out that these differences are apparent in other, more subtle ways.  A study out of the University of Exeter, appearing this week in the journal Behavior Research Methods, describes a protocol for detecting code switching that had an accuracy of 70% -- even with obvious giveaway words removed from the texts.

The researchers used an automated linguistic analysis program to look at writing done by the same people in two different contexts.  The participants in the study were chosen because they were active in two different sorts of social media groups, some having to do with parenting and others gender equity, and the software was given passages they'd written in both venues -- with tipoff words like "childcare" and "feminism" removed.  It turned out the program was still able to discern which social media group the passage had been directed toward, simply by looking at structural features like use of pronouns and meaning-based characteristics like the number of emotionally-laden words used per paragraph.
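The study used far more sophisticated linguistic analysis software, but the underlying idea -- classify which group a text was written for from structural cues like pronoun use, with topic words stripped out -- can be sketched in a few lines.  Everything here (the word lists, the two-feature centroid classifier, the toy training texts) is invented for illustration:

```python
import re

# Two crude stylometric features: pronoun rate and emotion-word rate.
# Note there are no topic giveaway words like "childcare" involved.
PRONOUNS = {"i", "me", "my", "we", "us", "our", "you", "your"}
EMOTION_WORDS = {"love", "worry", "happy", "angry", "proud", "afraid"}

def features(text):
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return (sum(w in PRONOUNS for w in words) / n,
            sum(w in EMOTION_WORDS for w in words) / n)

def centroid(vectors):
    return tuple(sum(v[i] for v in vectors) / len(vectors) for i in range(2))

def classify(text, centroids):
    f = features(text)
    # pick the label whose centroid is nearest (squared Euclidean distance)
    return min(centroids,
               key=lambda label: sum((f[i] - centroids[label][i]) ** 2
                                     for i in range(2)))

# Toy "training" passages written in two different registers
group_a = ["I love how my kids make me so happy and proud every day",
           "we worry so much but I am happy we have each other"]
group_b = ["the committee reviewed the policy and published the report",
           "the data were collected and the results were analyzed"]

centroids = {"personal": centroid([features(t) for t in group_a]),
             "formal":   centroid([features(t) for t in group_b])}

print(classify("I am so proud of what we did", centroids))  # → personal
```

The real protocol, of course, used many more features and properly validated models; this just shows why structural cues alone carry a surprising amount of group-identity signal.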

"It is the first method that lets us study how people access different group identities outside the laboratory on a large scale, in a quantified way," said study lead author Miriam Koschate-Reis, in an interview with Science Daily.  "For example, it gives us the opportunity to understand how people acquire new identities, such as becoming a first-time parent, and whether difficulties 'getting into' this identity may be linked to postnatal depression and anxiety.  Our method could help to inform policies and interventions in this area, and in many others."

Koschate-Reis and her team are next going to look into whether this kind of code switching is facilitated by location -- if, for example, an informal-to-formal switch might be easier in an academic location like a library than it is in a relaxed setting like a café.

In other words, if it might be better not to work on your dissertation in Starbucks.

All of which is fascinating, and once again points out the complexity of human communication -- and why it's so hard to get an artificial neural network to mimic written conversation convincingly.  Most of us code switch automatically, without even being aware of it, as we navigate daily through the many groups to which we belong.  Most AI speech I've seen, even when the responses are contextually correct and use the right vocabulary with the right structure, has an inflexible, stilted quality that is lacking in the generally more sensitive, free-flowing communication that happens between real people.  But perhaps that's another application the Koschate-Reis et al. research might have; if linguistic analysis software can learn to detect code switching, that's the first step toward an AI actually learning how to apply it.

One step closer to passing the Turing Test.

In any case, I'd better run along and get my fuckin' day started.  I hope y'all have a good one.  Hugs & kisses. 💘

********************************




Thursday, February 25, 2021

Peering into the Neanderthal brain

Despite my having taught genetics for 32 years, it still astonishes me that all of the genetic diversity of the 7.7-odd-billion humans on Earth is accounted for by differences amounting to only a tenth of a percent of the genome.

Put a different way, if you were to find the person who is the most genetically different from you, the two of you would still have a 99.9% overlap in your DNA.  A lot of that additional tenth of a percent is made up of genes for obvious appearance-related features -- eye color and shape, hair color and texture, skin color, body build, and so on.  But even these characteristics, which are usually considered to determine race, don't really tell you all that much.  A San man and his Tswana neighbor in Botswana were both called "black" by the white European colonists, but those same white Europeans were genetically closer to people in Japan than the San and Tswana were to each other.  There is, in fact, more human genetic diversity on the continent of Africa than there is in the entire rest of the world put together -- unsurprising, perhaps, given that our species originated there.

Race, then, is a social construct, not really a biological one.  There are some distinct genetic signatures in different ethnic groups, which is what makes possible the "percent composition" breakdown you get if you have your DNA analyzed by Ancestry or 23 & Me, and within their limitations, those analyses are reasonably accurate.  My own DNA test lined up almost perfectly with what I know of my family tree: something like two-thirds from western and northwestern France, a good chunk of the rest from Scotland and England, and an interesting (and spot-on) 6% of my DNA from my Ashkenazi Jewish great-great-grandfather.

Even more surprising, perhaps, is that the average difference between the human genome and that of our closest non-human relatives -- chimps and bonobos -- is still only 1.2%.  So all of the lineages that split off from our line of descent after the chimps and bonobos did, on the order of five million years ago, would be closer than that to us genetically.  This has been confirmed by analysis of DNA in those now-extinct groups of hominins -- Neanderthals, Denisovans, and so on.
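Some quick arithmetic puts those percentages in perspective, assuming (a common round figure, not a claim from any of the papers discussed here) a haploid human genome of roughly 3.2 billion base pairs:

```python
# Back-of-the-envelope: what "0.1%" and "1.2%" mean in actual base pairs,
# assuming a haploid genome of ~3.2 billion bases (a standard rough estimate)
GENOME = 3_200_000_000

human_to_human = int(GENOME * 0.001)   # ~0.1% between any two people
human_to_chimp = int(GENOME * 0.012)   # ~1.2% between humans and chimps/bonobos

print(f"{human_to_human:,}")  # → 3,200,000
print(f"{human_to_chimp:,}")  # → 38,400,000
```

So "only a tenth of a percent" still amounts to millions of differing base pairs -- plenty of raw material for the variation we see.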

Here, we're talking about way bigger physical differences than there are between any two races of modern humans you might pick.  Bone structure, brain size and structure, body proportions -- some pretty major stuff.  Still, the Neanderthals, Denisovans, and us are all the same species, by the rather mushy definition of a species as being a group of organisms capable of reproduction that results in fertile offspring; modern humans have a good chunk of Neanderthal and Denisovan DNA, also at least in part detectable by genetic testing.

[Image licensed under the Creative Commons Stefan Scheer, Neandertaler reconst, CC BY-SA 3.0]

Why all this comes up is a study in Science this week by a huge team led by Alysson Muotri of the University of California - San Diego in which geneticists tinkered with human stem cells, altering their DNA to reflect one of 61 genes that have been identified as differences between ourselves and our Neanderthal and Denisovan kin.  They then allowed those cells to proliferate and form organoids -- mini-brains that can form connections (synapses) just as a developing brain in an embryo would.

Well, my first thought was, "Haven't these people ever watched a science fiction movie?"  Because scientists always try shit like this in movies, and it always ends up with a giant brain-blob that goes rogue, escapes the lab, and proceeds to eat Tokyo.  But Dr. Muotri assures us that that's not possible in this case.  He explains that organoids are incapable of living all that long because they don't have all the support structures that real brains have -- a circulatory system for example -- so they'd never be capable of surviving outside the petri dish.

To which I say: of course, Dr. Muotri.  That's what you would say.  Just realize that in those same science fiction movies, it's always the scientist who says, "Wait, stand back!  Let me try to communicate with it!" and ends up being the first one to get devoured.

So don't say I didn't warn you.

In any case, what is kind of amazing is that these organoid brains, with a single gene altered to the variant our Neanderthal cousins had, developed in a way that was completely unlike our own.  As the press release in Science Daily explained it:

The Neanderthal-ized brain organoids looked very different than modern human brain organoids, even to the naked eye.  They had a distinctly different shape.  Peering deeper, the team found that modern and Neanderthal-ized brain organoids also differ in the way their cells proliferate and how their synapses -- the connections between neurons -- form.  Even the proteins involved in synapses differed.  And electrical impulses displayed higher activity at earlier stages, but didn't synchronize in networks in Neanderthal-ized brain organoids.

All of that, from a single gene.

"This study focused on only one gene that differed between modern humans and our extinct relatives. Next we want to take a look at the other sixty genes, and what happens when each, or a combination of two or more, are altered," Muotri said.  "We're looking forward to this new combination of stem cell biology, neuroscience and paleogenomics.  The ability to apply the comparative approach of modern humans to other extinct hominins, such as Neanderthals and Denisovans, using brain organoids carrying ancestral genetic variants is an entirely new field of study."

So that "less than a percent" label on the differences between ourselves and our nearest non-modern-human kin is a little misleading, because apparently some of those less-than-a-percent differences are really critical.

In any case, that's our view of the cutting edge of science for today.  One can't help but be impressed with studies like this, which accomplish feats of genetic messing-about that would themselves have been in the realm of science fiction twenty years ago.  I wonder what the next twenty years will bring?  Hopefully not brain blobs eating Tokyo.  I mean, I'm all for scientific advancement, but you have to draw the line somewhere.

********************************




Wednesday, February 24, 2021

Living the dream

A lucid dream occurs when you are aware you're dreaming while you're dreaming -- and (apparently) with practice, you can learn to control what happens.

Which sounds kind of awesome, but I've never had one.

I came close one time.  A while back I had a dream of being in a wedding party in a big old cathedral -- vaulted ceilings, stained glass, huge pipe organ, the works.  I don't recall recognizing the bride and groom, or (in fact) any of the other people present, but anyhow, there I was with other formally-dressed people, flanking the happy couple as they recited their vows.

When the priest got to the "if anyone objects to this marriage" bit, that's when things got weird.  An old lady in the front row stood up and said, "I object!" in a really nasal, grating voice.  She then began a recitation of how awful the bride and groom were, how she couldn't stand either of them, and how she was only there to let 'em both have it.  (You'd think, if both the bride and groom were horrible people, it'd be better to let them marry each other than to potentially ruin the lives of two other people, but apparently the old lady didn't see it that way.)

So I'm watching all this, aghast, and I had this sudden thought.  "This is so weird.  When does this ever happen in real life?  I must be dreaming."

I looked around, assessing the surroundings, and then kind of poked myself in the chest with my finger, and thought, "Huh.  I guess it's real after all.  How strange."

Doesn't it just figure?  I have my one and only opportunity to lucid dream, and when the time came to figure out if I was dreaming, I got the wrong answer.

Dickens's Dream by Robert Buss (1875) [Image is in the Public Domain]

The subject comes up because of a study led by Karen Konkoly, a psychology researcher at Northwestern University, which describes research done into people having lucid dreams -- and the potential for communicating with dreamers and having them answer questions or even follow simple commands.

The authors write:

Dreams take us to a different reality, a hallucinatory world that feels as real as any waking experience.  These often-bizarre episodes are emblematic of human sleep but have yet to be adequately explained.  Retrospective dream reports are subject to distortion and forgetting, presenting a fundamental challenge for neuroscientific studies of dreaming.  Here we show that individuals who are asleep and in the midst of a lucid dream (aware of the fact that they are currently dreaming) can perceive questions from an experimenter and provide answers using electrophysiological signals.  We implemented our procedures for two-way communication during polysomnographically verified rapid-eye-movement (REM) sleep in 36 individuals.  Some had minimal prior experience with lucid dreaming, others were frequent lucid dreamers, and one was a patient with narcolepsy who had frequent lucid dreams.  During REM sleep, these individuals exhibited various capabilities, including performing veridical perceptual analysis of novel information, maintaining information in working memory, computing simple answers, and expressing volitional replies.  Their responses included distinctive eye movements and selective facial muscle contractions, constituting correctly answered questions on 29 occasions across 6 of the individuals tested.  These repeated observations of interactive dreaming, documented by four independent laboratory groups, demonstrate that phenomenological and cognitive characteristics of dreaming can be interrogated in real time.  This relatively unexplored communication channel can enable a variety of practical applications and a new strategy for the empirical exploration of dreams.

It's a cool finding, but not that surprising when you think about it.  A lot of us have had experiences where outside (i.e. real) sounds have become incorporated into our dreams.  My wife once had a dream of hearing NPR, but when she checked, it wasn't coming from our stereo speakers.  So she wandered around the house, and finally found that it was coming from the microwave.  But unplugging the microwave still didn't shut off the news broadcast, and she proceeded to go from appliance to appliance, trying to figure out why our house had the ghostly voices of Doualy Xaykaothao and Ofebia Quist-Arcton and Soraya Sarhaddi Nelson coming out of nowhere.

Then she woke up, and found that her clock radio was on, and the news broadcast had become incorporated into her dream state.

Which, of course, brings up another, and more pressing question: why do NPR reporters have such awesome names?  I don't know anyone in real life who has a name as cool as Kai Ryssdal, Neda Ulaby, or David Folkenflik.  I wonder if it's a condition of employment?

NPR interviewer: I'm happy to see that you've applied for a job as a reporter here at NPR.  What's your name?

Me:  Gordon Bonnet.

NPR interviewer:  Oh, I'm sorry, that's not nearly cool enough.  Maybe you should see if MSNBC has any openings.

But I digress.

The lucid dreaming experiment definitely is cool, but it does open up some rather scary potential.  One of my favorite novels is Ursula LeGuin's The Lathe of Heaven, which involves a man named George Orr who not only has lucid dreams, his dreams change reality.  If he dreams that there was a plague five years ago that wiped out half the world's population, when he wakes up, that's what's happened.  The problem is, no one realizes the change but him.  When the past changed, everyone's memory changed as well, so George is the only person who recognizes that things are different.  Of course, any psychologist he tries to tell this is going to think he's crazy; the psychologist's memory changed along with everyone else's.  But one psychologist believes him -- and realizes that he can use George's peculiar power to reshape the world as he sees fit, by giving George suggestions while he's sleeping.

After that, things go downhill fast.

While the dream-changing-reality ability in Le Guin's incredibly inventive story isn't real, you have to wonder if other things might be.  Could a suggestion during a lucid dream change your memories, or create a memory of something that never happened?  Could it be used to overcome psychological issues like phobias -- or induce aversions where there were none previously?  Any time I hear about people trying to tap into the subconscious mind, it gives me pause -- because down at the subconscious level, we would very likely be at the mercy of whoever is controlling the experiment, just like the unfortunate George Orr in The Lathe of Heaven.

In any case, it's a compelling study about a topic I've always found fascinating.  As Konkoly et al. point out, we still don't know why people -- and apparently, other animals -- dream, but the ubiquity of dreaming suggests that it serves some important purpose.  Not only that, we know that blocking REM sleep fairly quickly leads to hallucinations and psychotic symptoms, so whatever dreaming is doing for us, it's evidently critical to our mental health.

The Konkoly et al. study is only a first foray into this topic, so I'll be looking for more research springboarding off their findings.  I'm wondering if it might be possible for people who don't lucid dream to learn how.  I've seen advertisements for devices that are supposed to clue in the sleeper that (s)he is dreaming, and allow for learning how to control the internal dream state -- one I recall is a headband with an LED that activates when you go into REM -- but I haven't seen any particularly convincing evidence that they work.  It'd be cool if they did, though.  I'd love to learn to lucid dream.  Think of the fun you could have!  To start with, I could have told the old lady in the cathedral to shut up and sit down.  That'd have been nice, at least as a start.

********************************

Many of us were riveted to the screen last week watching the successful landing of the Mars rover Perseverance, and it brought to mind the potential for sending a human team to investigate the Red Planet.  The obstacles to overcome are huge; the four-odd-year voyage there and back requires a means of producing food and purifying air and water that has to be damn near failsafe.

Consider what befell the unfortunate astronaut Mark Watney in the book and movie The Martian, and you'll get an idea of what the crew could face.

Physicist and writer Kate Greene was among a group of people who agreed to participate in a simulation of the experience, not of getting to Mars but of being there.  In a geodesic dome on the slopes of Mauna Loa in Hawaii, Greene and her crewmates stayed for four months in isolation -- dealing with all the problems Martian visitors would run into, not only the aforementioned problems with food, water, and air, but the isolation.  (Let's just say that over that time she got to know the other people in the simulation really well.)

In Once Upon a Time I Lived on Mars: Space, Exploration, and Life on Earth, Greene recounts her experience in the simulation, and tells us what the first manned mission to Mars might really be like.  It makes for wonderful reading -- especially for people like me, who are just fine staying here in comfort on Earth, but are really curious about the experience of living on another world.

If you're an astronomy buff, or just like a great book about someone's real and extraordinary experiences, pick up a copy of Once Upon a Time I Lived on Mars.  You won't regret it.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Tuesday, February 23, 2021

The kites fly again

The iconic movie Jurassic Park has provided us with quite a number of quotable lines:

"I hate it when I'm always right."

"Clever girl."

"That is one big pile of shit."

"See?  Nobody cares."

"Hold onto your butts."

But as someone who has studied (and taught) evolution for decades, none of them has stuck in my mind like Ian Malcolm's pronouncement, "Life... uh... finds a way."

This short sentence sums up something really profound: however badly the Earth's ecosystems are damaged, they always bounce back.  Even after the catastrophic Permian-Triassic Extinction -- which by some estimates wiped out 90% of the existing taxa on Earth -- there was a recovery and rediversification.

Note that I'm not saying that means it was a good thing.  The end Permian extinction event was, it is believed, caused by an unimaginably huge series of volcanic eruptions, followed by a major spike in the carbon dioxide content of the atmosphere -- leading to a jump in the global temperature and catastrophic oceanic anoxia.

So yeah.  "Life survived" doesn't mean it'd have been a fun event to live through.  But it should give us hope that the damage humans can do to the Earth as a whole is, in the grand scheme of things, short-lived.

As an encouraging example of this, take a recent study out of the University of Florida on snail kites.  These birds, related to hawks and falcons, are serious food specialists; they eat only one species of snail, found in freshwater marshes like the Everglades (and also parts of Central America; I first saw snail kites in Belize).  When things are stable, being a specialist is a good thing -- you pretty much corner the market on a particular resource, like the South American hummingbird species whose bills are shaped to fit one and only one species of flower.  The snail kite's food finickiness is this same sort of thing, and as long as the Everglades was undamaged and had an abundant supply of snails, all was well.

[Image licensed under the Creative Commons Bernard DUPONT from FRANCE, Snail Kite (Rostrhamus sociabilis) Poconé, Mato Grosso, CC BY-SA 2.0]

But when the environment is rapidly changing, either through human effects or because of natural events, being a specialist is seriously precarious.  When a new species of snail -- the island apple snail -- was introduced to the Everglades, its larger size and voracious appetite let it outcompete the native snails, and the snail kites were in trouble: their bills weren't large and heavy enough to tackle the bigger prey.

Snail kites were already on the Endangered Species List, given that the Everglades has been massively damaged by human activity.  This, it seemed, might be the death blow to the Florida population of this striking bird.

But... life, uh, finds a way.

The snail kite, in a near-perfect reenactment of the bill diversification of Darwin's finches in the Galapagos, turned out to have a variety of bill sizes -- genetic diversity, despite the birds' extreme specialization.  Before the introduction of the island apple snail, bill size probably didn't make much difference, positive or negative, to an individual bird.  But now large bills were a serious advantage: the birds with the biggest bills could tackle the larger snail species, meaning they had a copious food source their smaller-billed cousins couldn't utilize.

And in the thirteen years since the introduction of the island apple snail, the average bill size has gone up dramatically -- and the overall population is rebounding.

"Beak size had been increasing every year since the invasion of the snail from about 2007," said Robert Fletcher, who co-authored the study.  "At first, we thought the birds were learning how to handle snails better or perhaps learning to forage on the smaller, younger individual snails...  We found that beak size had a large amount of genetic variance and that more variance happened post-invasion of the island apple snail.  This indicates that genetic variations may spur rapid evolution under environmental change."
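The dynamic Fletcher describes -- pre-existing heritable variation shifting under a new selection pressure -- is easy to see in a toy simulation.  Here's a minimal Python sketch; every number in it (bill sizes, heritability, population size) is invented purely for illustration and has nothing to do with the actual study:

```python
import random

random.seed(42)  # make the toy run repeatable

def next_generation(bills, survival_cutoff, heritability=0.6):
    """Birds with bills at or above the cutoff survive to breed;
    offspring cluster around the survivors' mean, with 'heritability'
    crudely standing in for how tightly offspring track their parents."""
    survivors = [b for b in bills if b >= survival_cutoff]
    mean_s = sum(survivors) / len(survivors)
    return [mean_s + heritability * random.gauss(0, 1.0)
            for _ in range(len(bills))]

# Starting population: bill sizes normally distributed around 10
# (arbitrary units), i.e. plenty of standing variation
population = [random.gauss(10, 1.0) for _ in range(1000)]
start_mean = sum(population) / len(population)

# Thirteen "years" of selection favoring bills above the old average
for _ in range(13):
    population = next_generation(population, survival_cutoff=start_mean)

end_mean = sum(population) / len(population)
print(f"mean bill size: {start_mean:.2f} -> {end_mean:.2f}")
```

The point of the sketch is Fletcher's: selection can't shift the mean unless the variance is already there.  Set the starting spread to zero and nothing happens, no matter how strong the selection.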

As I said earlier, this is not meant to give the anti-environmental types another reason to say, "Meh, we don't have to change what we're doing, things'll be okay regardless."  Most species aren't as fortunate as the snail kites, already having the genetic diversity to cope with a sudden change.  Much more likely, if we keep doing what we're doing, the specialist species in the world will simply be wiped out.

Whether we'll be able to survive in such a changed world remains to be seen.

But one thing is nearly certain: even if we catastrophically damage the global ecosystem, it will rebound eventually.  Which is hopeful, as far as it goes.  Even after Homo sapiens is another fossilized footnote in the Earth's geological history, life will persist -- once more generating, in Darwin's immortal words, "endless forms most beautiful and most wonderful."

********************************




Monday, February 22, 2021

The oddest star in the galaxy

I'll start today with a quote (often misquoted) from William Shakespeare -- more specifically, Hamlet, Act I, Scene 5:

Horatio:

O day and night, but this is wondrous strange!

Hamlet:

And therefore as a stranger give it welcome. 

There are more things in heaven and earth, Horatio,

Than are dreamt of in your philosophy.

Horatio and Hamlet, of course, are talking about ghosts and the supernatural, but it could equally well be applied to science.  It's tempting sometimes, when reading about new scientific discoveries, for the layperson to say, "This can't possibly be true, it's too weird."  But there are far too many truly bizarre theories that have been rigorously verified over and over -- quantum mechanics and the General Theory of Relativity jump to mind immediately -- to rule anything out based upon our common-sense ideas about how the universe works.

That was my reaction while reading an article sent by a loyal reader of Skeptophilia about an astronomical object I'd never heard of: Przybylski's Star, named after its discoverer, the Polish-born Australian astronomer Antoni Przybylski.  It's a star 355 light years from Earth, in the constellation Centaurus, and it's weird in so many ways that it kind of boggles the mind.

Przybylski's Star is classified as a Type Ap star.  Type A stars are young, compact, and very hot; the brightest star in the night sky, Sirius, is in this class.

The "p" stands for "peculiar."

Przybylski's Star rotates slowly.  I mean, really slowly.  Compared to the Sun, which rotates about once every 27 days, Przybylski's Star rotates once every two hundred years.  But the strangest thing about it is its composition, which is so anomalous that its discoverer initially thought that his measurements were crazily off.

"No star should look like that," Przybylski said.

You probably know that most ordinary stars are primarily composed of hydrogen, and of the bit that's not hydrogen, most is helium.  Hydrogen is the fuel for the fusion in the core of the star, and helium is the product of that fusion.  Late in their lives, many stars' cores contract and heat up enough to fuse helium into heavier elements like carbon and oxygen.  Most of the rest of the elements on the periodic table are generated in supernovas and in neutron-star mergers, a topic I dealt with in detail in a post I did last year.

My point here is that if you look at the spectrum of your average star, the absorption lines you see should mostly be the familiar ones from hydrogen and helium, with minuscule traces of the lines of other elements.  The heaviest element that should be reasonably abundant, even in the burned-out cores of stars, is iron -- it represents the turnaround point on the curve of binding energy, the point where fusion into heavier elements starts taking more energy than it releases.

So elements that are low in abundance pretty much everywhere, such as the aptly-named rare earth elements (known to chemists as the lanthanides), should be so uncommon as to be effectively undetectable.  Short-lived radioactive elements like radium shouldn't be there at all, because they don't form in the core of an ordinary star, and therefore any traces present had to have formed before the star itself did -- almost always long enough ago that they should have decayed away by now.
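Just how emphatic that "shouldn't be there at all" is can be put in numbers.  Here's a quick back-of-the-envelope sketch in Python -- the half-life figures are rough textbook approximations, not measurements from any paper on the star -- showing how little of a short-lived element survives even a single million years:

```python
def fraction_remaining(t_years, half_life_years):
    """Exponential decay: fraction of the original atoms left after t years."""
    return 0.5 ** (t_years / half_life_years)

# Rough half-lives in years (approximate values) for some of the
# radioactive elements reported in Przybylski's Star's spectrum
half_lives = {
    "actinium-227": 21.8,
    "radium-226": 1600.0,
    "americium-243": 7370.0,
    "plutonium-244": 8.0e7,
}

# One million years is an eyeblink in stellar terms, yet it leaves
# essentially nothing of the short-lived species
for element, hl in half_lives.items():
    print(f"{element}: {fraction_remaining(1e6, hl):.3e} remaining after 1 Myr")
```

So any detectable actinium or americium implies the stuff is being made, or delivered, continuously -- it simply cannot be a leftover from the star's formation.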

The composition of Przybylski's Star, on the other hand, is so skewed toward heavy elements that it elicits frustrated shrugs more than viable models that could account for it.  It's ridiculously high in lanthanides like cerium, dysprosium, europium, and gadolinium -- not elements you hear about on a daily basis.  There's more praseodymium in the spectrum of its upper atmosphere than there is iron.  Even stranger is the presence of very short-lived radioactive elements such as actinium, americium, and plutonium.

So where did they come from?

"What we’d like to know... is how the heavy elements observed here have come about," said astronomy blogger Paul Gilster.  "A neutron star is one solution, a companion object whose outflow of particles could create heavy elements in Przybylski’s Star, and keep them replenished.  The solution seems to work theoretically, but no neutron star is found anywhere near the star."

"[T]hat star doesn’t just have weird abundance patterns; it has apparently impossible abundance patterns," said Pennsylvania State University astrophysicist Jason Wright, in his wonderful blog AstroWright.  "In 2008 Gopka et al. reported the identification of short-lived actinides in the spectrum.  This means radioactive elements with half-lives on the order of thousands of years (or in the case of actinium, decades) are in the atmosphere...  The only way that could be true is if these products of nuclear reactions are being replenished on that timescale, which means… what exactly?  What sorts of nuclear reactions could be going on near the surface of this star?"

All the explanations I've seen require so many ad-hoc assumptions that they're complete non-starters.  One possibility astrophysicists have floated is that the star was massively enriched by a nearby supernova -- and not just with familiar heavy elements like gold and uranium, but with superheavy elements that thus far we've only seen produced in high-energy particle accelerators, elements like flerovium (atomic number 114) and oganesson (atomic number 118).  These elements are so unstable that they have half-lives measured in fractions of a second, but it's theorized that certain isotopes might sit in an "island of stability," where they last much longer -- long enough to build up in a star's atmosphere and then decay into the lighter, but still rare, elements seen in Przybylski's Star.

There are a couple of problems with this idea, the first being that every attempt to find the island of stability has so far failed.  Physicists thought that flerovium might have the "magic number" of protons and neutrons to make it more stable, but a paper just last month seems to dash that hope.

The second, and worse, problem is that there's no supernova remnant anywhere near Przybylski's Star.

Which brings me to the wildest speculation about the weird abundances of heavy elements.  You'll never guess who's responsible.

Go ahead, guess.


There is a serious suggestion out there -- and by "serious," I mean made by professional, highly respected astrophysicists, not cranks, wackos, or bloggers.  (Irony intended.)  The idea here is that an advanced technological civilization might have hit on the solution of dumping its nuclear waste into the nearest star.  This explanation, bizarre as it sounds, would account not only for why the elements are there, but why they're far more concentrated in the star's upper atmosphere than in its core.

"Here on Earth, he notes, people sometimes propose to dispose of our nuclear waste by throwing it into the Sun," Wright writes.  "Seven years before Superman thought of the idea, Whitmire & Wright (not me, I was only 3 in 1980) proposed that alien civilizations might use their stars as depositories for their fissile waste.  They even pointed out that the most likely stars we would find such pollution in would be... [type] A stars!  (And not just any A stars, late A stars, which is what Przybylski's Star is).  In fact, back in 1966, Sagan and Shklovskii in their book Intelligent Life in the Universe proposed aliens might 'salt' their stars with obviously artificial elements to attract attention."

A curious side note is that I've met (Daniel) Whitmire, of Whitmire & Wright -- he was a professor in the physics department of the University of Louisiana when I was an undergraduate, and I took a couple of classes with him (including Astronomy).  He was known for his outside-of-the-box ideas, including that a Jupiter-sized planet beyond the orbit of Pluto was responsible for disturbing the Oort Cloud as it passed through every hundred million years or so (being so far out, it would have a super-long period of revolution).  This would cause comets, asteroids, and other debris to rain in on the inner Solar System, resulting in a higher rate of impacts with the Earth -- and explaining the odd cyclic nature of mass extinctions.

So I'm not all that surprised about Whitmire's suggestion, although it bears mention that he was talking about the concept in the purely theoretical sense; the weird spectrum of Przybylski's Star was discovered after Whitmire & Wright's paper on the topic.

Curiouser and curiouser.

So we're left with a mystery.  The "it's aliens" explanation is hardly going to be accepted by the scientific establishment without a hell of a lot more evidence, and thus far, there is none.  The peculiar abundance of heavy elements in this very odd star remains unaccounted-for by any science we currently understand.

I'll end with another quote, this one from eminent biologist J. B. S. Haldane: "The universe is not only queerer than we imagine, it is queerer than we can imagine."

********************************




Saturday, February 20, 2021

Pharaonic forensics

I'm fascinated with history, and have always been especially interested in times and places for which we have few records and, therefore, not much way of knowing what really went on.

I'm not sure if this is because I'm a fiction writer and rely a lot on imaginary realms of the mind to fill in the gaps, or if I just have a perverse enjoyment of setting myself up with impossible tasks.  My favorite time and place to read about is western Europe during the Dark Ages, in the centuries after the fall-ish of Rome.  I phrase it that way because just as Rome wasn't built in a day, neither did it fall in a day, as if the Hordes of Barbarians went through the gates and the Romans basically just dropped their swords and said, "Okay, fuck it, we give up."  In fact, the aforementioned hordes weren't themselves a single unit; starting in the fourth century C.E., various tribes and sub-tribes of Celts, Goths, Huns, Scythians, et al. kind of chipped away at the empire until there wasn't much left of it.  The Western Roman Empire collapsed first, but the Eastern persisted for a while longer, devolving into chaos more than once -- nearly falling apart entirely during the mid-sixth-century Plague of Justinian, which wiped out a quarter of Europe's population and seems to have exceeded both the fourteenth-century Black Death and the twentieth-century Spanish flu for sheer number of victims.  (I dealt with that topic, and what may have caused it, a couple of years ago, if you want to read about what historians call "the worst century in history.")

Another time and place I find intriguing is the early years of ancient Egypt, once again because so little is known for sure about it.  Our knowledge of the Old Kingdom, First Intermediate Period, and Middle Kingdom -- up until around the sixteenth century B.C.E. -- is hampered not only because it was a long time ago and a lot of the records haven't survived, but because what records were kept weren't all that accurate.  Just as with a lot of other theocratic cultures, the scribes of early pharaonic Egypt were as invested in depicting the rulers as gods as they were in writing down an accurate account of what happened.  The result was a mishmash of actual history, divine genealogies, miracle stories, and whitewashing that makes teasing the truth from the fiction damn near impossible.

Not that I blame the scribes, mind you.  Keeping monarchs in good humor is a full-time job, and often doesn't end well.  I've recently been re-reading the Shakespearean history plays, and he, like the scribes, knew which side his bread was buttered on.  Shakespeare was writing during the reigns of Elizabeth I and James I, and if you take a look at works like Richard II, Henry IV (parts 1 and 2), Henry V, Henry VI (parts 1, 2, and 3), Richard III, and especially Henry VIII, you'll pretty quickly notice that any ancestors of the monarchs he was writing for are depicted as good guys, while those who weren't -- like the villainous King Richard III of the play -- are the opposite.  I love the history plays, and that sort of treatment makes for great theater, but honestly, "history" is kind of the last thing they actually are.

So all of this is a long-winded way of leading up to a paper I stumbled upon yesterday in Frontiers in Medicine entitled "Computed Tomography Study of the Mummy of King Seqenenre Taa II: New Insights Into His Violent Death," by renowned scholars of ancient Egypt Sahar Saleem (of Cairo University) and Zahi Hawass (former Egyptian Minister of Antiquities).  Pharaoh Seqenenre Taa II was the second-to-last pharaoh of the Seventeenth Dynasty, and ruled over part of Egypt during the chaotic Second Intermediate Period, when much of Egypt was controlled by the Hyksos, a dynasty of "warrior kings" from what is now Israel, Jordan, and southern Lebanon.

The constant fighting, along with a long run of weak, short-lived rulers, makes the Second Intermediate Period hard to parse, because records from that time are even sparser than those from the preceding dynasties.  We know that the pharaoh in question, Seqenenre Taa II, was killed in battle with the Hyksos, and after a short reign by his elder son, Kamose, his younger son Ahmose I took over, overcame and drove out the Hyksos, and became the first pharaoh of both the Eighteenth Dynasty and the New Kingdom, which saw the peak of pharaonic power.

Fortunately for us history buffs, the Egyptians did leave behind one thing that helps us to figure out what was going on back then -- mummified bodies of their leaders.  And despite the chaotic conditions of the Seventeenth Dynasty, Seqenenre Taa II's body has survived for almost 3,600 years, and now Saleem and Hawass have done a CT scan to see if they can figure out more about him.

The hints from the records of the time that Seqenenre Taa II died in battle are almost certainly correct.  He had multiple injuries, including wounds that appear to have been made with an axe, a dagger, a club, and a spear.  (It's grimly amusing that several times in the paper, during the description of each injury, the authors say "this blow was probably fatal," as if the poor man got killed over and over.)  Most interesting, injury to his wrists suggests his hands had been tied behind his back -- and that he probably was captured in battle, possibly injured at the time, and afterward executed.

Pharaoh Seqenenre Taa II's skull, CT scan by Saleem & Hawass

"This suggests that Seqenenre was really on the front line with his soldiers risking his life to liberate Egypt," said study lead author Dr. Sahar Saleem, in a press release.  "In a normal execution on a bound prisoner, it could be assumed that only one assailant strikes, possibly from different angles but not with different weapons.  Seqenenre's death was rather a ceremonial execution."

Which is gruesome but fascinating, and illustrates that parts of history that have seemed like closed books may one day be understood using cutting-edge techniques from science.  And a bit of luck; the information about the unfortunate pharaoh is only available to us because his mummified body survived for three and a half millennia.  But it does mean that we haven't uncovered everything there is to study about cryptic and chaotic chapters in our history -- and that with diligence, ages that have appeared dark might eventually be illuminated.

*********************************

Back when I taught Environmental Science, I used to spend at least one period addressing something that I saw as a gigantic hole in students' knowledge of their own world: where the common stuff in their lives came from.  Take an everyday object -- like a sink.  What metals are the faucet, handles, and fittings made of?  Where did those metals come from, and how are they refined?  What about the ceramic of the bowl, the pigments in the enamel on the surface, the flexible plastic of the washers?  All of those substances came from somewhere -- and took a long road to get where they ended up.

Along those same lines, there are a lot of questions about those same substances that never occur to us.  Why is the elastic of a rubber band stretchy?  Why is glass transparent?  Why is a polished metal surface reflective, but a polished wooden surface isn't?  Why does the rubber on the soles of your running shoes grip -- but the grip worsens when they're wet, and vanishes entirely when you step on ice?

If you're interested in these and other questions, this week's Skeptophilia book-of-the-week is for you.  In Stuff Matters: Exploring the Marvelous Materials that Shape Our Man-Made World, materials scientist Mark Miodownik takes a close look at the stuff that makes up our everyday lives, and explains why each substance we encounter has the characteristics it has.  So if you've ever wondered why duct tape makes things stick together and WD-40 makes them come apart, you've got to read Miodownik's book.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]