Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Thursday, February 29, 2024

The dying of the light

In July of 2004, my father died.  I was at his bedside in Our Lady of Lourdes General Hospital in Lafayette, Louisiana when it happened.  He'd been declining for a while -- his once razor-sharp mental faculties slipping into a vague cloudiness, his gait slowing and becoming halting and cautious, his former rapier wit completely gone.  The most heartbreaking thing was his own awareness of what he had lost and would continue to lose.  It looked like a slow slide into debility.

Then, in June, he had what the doctors described as a mini-stroke.  Afterward, he was still fairly lucid, but was having trouble walking.  It had long been his deepest fear (one I share) that he'd become completely dependent on others for his care, and it was obvious to us (and probably to him as well) that this was the direction things were going.

What happened next was described in three words by my mother: "He gave up."

Despite the fact that the doctors could find no obvious direct cause of it, his systems one by one started to shut down.  Three weeks after the mini-stroke and fall that precipitated his admission into the hospital, he died at age 83.

I had never been with someone as they died before (and haven't since).  I was out of state when my beloved grandma died in 1986; and when my mother died, eight months after my father, it was so sudden I didn't have time to get there.  But I was by my father's side as his breathing slowed and finally stopped.  The event itself wasn't at all dramatic; the transition between life and death was subtle, gentle, and peaceful.  However wrenching it was on my mother and me, for him there seemed to be hardly a boundary between "here" and "not here."

Of course, I'm judging that from the outside.  No one knows -- no one can know -- what the experience was like for him.  It's funny, really; death is one of the experiences that unites us as human, and one which we all will ultimately share, but none of us knows what it actually is.

Noël LeMire, La Mort et le Mourant (ca. 1770) [Image is in the Public Domain]

A study in the journal Frontiers in Aging Neuroscience, though, may be the first clue as to what the experience is like.  An 87-year-old Canadian epilepsy patient was set up for an electroencephalogram to try and get a picture of what was causing his seizures, when he unexpectedly had a severe heart attack.  The man was under a DNR (Do Not Resuscitate) order, so when his heart stopped beating, they let him die...

... but he was still hooked up to the EEG.

This gave his doctors a first glimpse into what happens in the brain as someone dies.  And they found a sudden increase in activity in the parts of the brain involved in memory, recall, and dreaming -- activity that lasted for thirty seconds after his heart stopped, then gradually faded.

"Through generating oscillations involved in memory retrieval, the brain may be playing a last recall of important life events just before we die, similar to the ones reported in near-death experiences," said Ajmal Zemmar, a neurosurgeon who was the study's lead author.  "As a neurosurgeon, I deal with loss at times.  It is indescribably difficult to deliver the news of death to distraught family members.  Something we may learn from this research is that although our loved ones have their eyes closed and are ready to leave us to rest, their brains may be replaying some of the nicest moments they experienced in their lives."

Which is a pleasant thought.  Many of us -- even, for some reason, the devoutly religious, who you'd think would be positively eager for the experience -- are afraid of death.  Me, I'm not looking forward to it; I rather like being alive, and as a de facto atheist I have no particular expectation that there'll be anything afterwards.  Being with my father as he died did, however, have the effect of making me less afraid of death.  The usual lead-up, with its frequent pain and debility and illness, is still deeply terrifying to me, but crossing the boundary itself seemed fairly peaceful.

And the idea that our brains give us one last go-through of our pleasant memories is kind of nice.  I know that this single patient's EEG is hardly conclusive -- and it's unlikely there'll be many other people hooked up to a brain scanner as they die -- but it does give some comfort that perhaps, this experience we will all share someday isn't as awful as we might fear.


Wednesday, February 28, 2024

The family tree of folk tales

When I was a kid, one of my favorite books was a fantastic collection of Japanese folk tales called The Case of the Marble Monster and Other Stories.  They had been collected in the 1950s by an American, I. G. Edmonds, and through the wonders of the Scholastic Book Club became available for schoolchildren like myself.

The stories center on the wise and humorous character of Ōoka Tadasuke, who was a real person -- he lived from 1677 to 1752 in Yedo (now Tokyo), and was an acclaimed and popular magistrate who got a well-deserved reputation not only for his fairness and concern for the plight of the poor, but for coming up with brilliant solutions for difficult cases.  In the first one, "The Case of the Stolen Smell," a miserly and nasty-tempered tempura shop owner claims that a poor student living above his shop is deliberately waiting until he fries his fish, so the aroma will make the student's bowl of rice (all he can afford) taste better -- and the merchant demands compensation for all the smells the student has stolen.

Judge Ōoka hears the complaint, then orders the student to get together all the coins he has, and it looks like the poor young man is in trouble, but then the judge orders the student to pour the pile of coins from one hand to the other, and declares the fine paid.  The tempura shop owner, of course, objects that he hasn't been paid anything.

"I have decided that the payment for the smell of food is the sound of money," Ōoka says, with a bland smile.  "Justice, as always, has prevailed in my court."

The whole collection is an absolute delight.  Several of the stories -- notably "The Case of the Terrible-Tempered Tradesman" and "The Case of the Halved Horse" -- are laugh-out-loud funny.  And in fact, I still own my much-loved and rather worn copy.

A woodcut portrait of the wise Judge Ōoka Tadasuke [Image is in the Public Domain]

Humans have been telling stories for a very, very long time.  And of course, as a novelist, I find the topic near and dear to my heart.  Stories can be uplifting, cathartic, funny, shocking, heartbreaking, edifying, instructive, and surprising -- allowing us to access and express our strongest emotions, creating a deep bond between the storyteller and the listener (or reader).

How long have we been telling our invented tales, though?  The tales of the wisdom of Judge Ōoka are about three hundred years old; of course, we have far older ones, from the Irish Táin Bó Cúailnge (The Cattle Raid of Cooley), which was first written down in the twelfth century C.E. but probably dates in oral tradition to a millennium earlier, to the Greek and Roman myths, back to what is probably the oldest written mythological story we still have a copy of -- the Epic of Gilgamesh, which dates to around the eighteenth century B.C.E.  But how much farther back in time does the storytelling tradition go?  And how could we be at all sure?

A new study by Sara Graça da Silva (of the New University of Lisbon) and Jamshid Tehrani (of Durham University) has taken a shot at figuring that out.  Long-time readers of Skeptophilia may recognize Tehrani's name; he was responsible for the delightful study of the various versions of "Little Red Riding Hood" that amounted to using cladistic bootstrap analysis to determine which were related to which.  Now, da Silva and Tehrani have gone one step further -- employing another technique swiped from evolutionary genetics to analyze folk tales and determine how old the most recent ancestor of the various versions actually is.

There's a technique used by taxonomists and evolutionary biologists called a molecular clock -- a sequence of DNA, some version of which is shared by two or more species, and which undergoes mutations at a known rate.  The number of differences in that sequence between two species then becomes an indication of how long ago they had a common ancestor; the more differences, the longer ago that common ancestor lived.
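The arithmetic behind a molecular clock can be sketched in a few lines.  Here's a minimal illustration in Python -- the toy sequences and the substitution rate are made-up values for demonstration, not data from any actual study:

```python
# Minimal molecular-clock sketch: estimate time since two sequences
# shared a common ancestor from the fraction of sites that differ.

def divergence_time(seq_a, seq_b, subs_per_site_per_myr):
    """Estimate divergence time in millions of years.

    Differences accumulate along BOTH diverging lineages,
    hence the factor of 2 in the denominator.
    """
    assert len(seq_a) == len(seq_b)
    p = sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)
    return p / (2 * subs_per_site_per_myr)

# Two toy 20-base sequences differing at 2 sites (10%),
# with an assumed rate of 0.005 substitutions/site/Myr:
t = divergence_time("ACGTACGTACGTACGTACGT",
                    "ACGTACGAACGTACGTACGG", 0.005)
print(round(t, 1))  # -> 10.0
```

(Real analyses also correct for sites that have mutated more than once, but the proportionality -- more differences, older common ancestor -- is the heart of the method.)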

Da Silva and Tehrani used the same approach, but instead of looking for commonalities in actual DNA sequences, they looked at what amounts to the DNA of a story -- the characters, themes, and motifs that make it stand out.  As with Tehrani's earlier study of "Little Red Riding Hood," they found that many folk tales have related versions in other cultures that make it possible to do this kind of comparative phylogenetics.  And some of them seem to go back a very long way -- notably "Jack and the Beanstalk," their analysis of which found common ancestry with other versions dating back to the Bronze Age.

In one way, it's astonishing that this is possible, but in another, it shouldn't be surprising.  The oral tradition of storytelling is common to just about every culture in the world.  I remember my maternal uncle telling us kids creepy stories in French about the loup-garou and feu follet and les lutins that scared the absolute hell out of us (and we loved every minute of it).  That cultural inheritance has very deep roots -- and as da Silva and Tehrani showed, those roots show through in versions of stories we still tell today.


Tuesday, February 27, 2024

The ghost of Greyfriars

I've been asked a number of times why I disbelieve in such phenomena as ghosts, and my answer is always the same: I don't.  I have no strong evidence that they exist, which is not the same thing.  Presented with scientifically admissible evidence, I'd have no choice but to admit that, in fact, I do believe in spooks.

So on this count -- like with most other fringe-y beliefs -- I'm able to have my mind changed.  But -- to borrow a phrase from astrophysicist Neil deGrasse Tyson -- "I need more than 'you saw it.'"

And that's the difficulty I have with just about every ghost story I've ever heard.  Take, for example, the spot that is often called "the most haunted place in Scotland" -- Greyfriars Kirkyard in Edinburgh.

Greyfriars Kirkyard, Edinburgh, Scotland [Image licensed under the Creative Commons Carlos Delgado, Greyfriars Kirkyard - 03, CC BY-SA 3.0]

It's unsurprising that the place is claimed to have ghosts; it's been used as a cemetery since the time of Mary Queen of Scots.  But it didn't really get an evil reputation until the horrible "Killing Time," which began in 1679 and lasted nine years, when the Scottish Covenanters got into a dispute with King Charles II over whether the Presbyterian Church would be the sole form of religion in Scotland.  (It's always been astonishing to me how often people were killed in Europe, and in the places the Europeans colonized, over disputes that boil down to "my Jesus is better than your Jesus.")  In the end, of course, Charles's side won, and hundreds of Covenanters were transported, imprisoned, or even executed as traitors to the crown.  And things only got worse when Charles's brother James II succeeded to the throne -- James was (not to put too fine a point on it) a narrow-minded, humorless religious fanatic, who (as a Roman Catholic) was even more against the Covenanters than his brother was.

However, the name most often associated with the Killing Time is one George Mackenzie of Rosehaugh, nicknamed "Bluidy Mackenzie" by the Covenanters, who despised him because of his siding with the King and for his role in the persecutions that followed.  It's likely Mackenzie saw himself as having no choice, and that he was simply doing what the King ordered him to do -- but, from the Covenanters' perspective, that was a mighty fine excuse for the horrors that followed, which included people being crowded into unheated, stone-floored jails in midwinter with only four ounces of food a day to sustain them.  The worst spot was the official Covenanters' Prison, conveniently (considering how many of them died) located right next to Greyfriars Kirkyard.

In any case, the persecutions eventually ended with the "Glorious Revolution" of 1688, when James II was deposed and his daughter, Mary II, and her Dutch husband William of Orange, were put on the throne.  The Presbyterians were given their religious freedom, the surviving Covenanters (there weren't many) freed, and everything more or less went back to normal.  Mackenzie only lived three more years, dying in 1691 at the age of 55, and was buried with honors...

... in Greyfriars Kirkyard, within a stone's throw of the old Covenanters' Prison.

Which these days is called rubbing salt in a wound.

It wasn't long before the horrors that had happened gave rise to claims that Mackenzie's spirit was haunting the place.  By the nineteenth century, it was so established as a haunted spot that Robert Louis Stevenson commented upon it (and Mackenzie), "When a man’s soul is certainly in hell, his body will scarce lie quiet in a tomb however costly, sometime or other the door must open, and the reprobate come forth in the abhorred garments of the grave...  Foolhardy urchins [thought it] a high piece of prowess to knock at the Lord Advocate’s Mausoleum and challenge him to appear. 'Bluidy Mackenzie, come oot if ye dar!'"

This legend has persisted to the present day, and Greyfriars now figures prominently on Edinburgh ghost tours.  But here's where the problem comes up.  It's haunted by an evil presence, the tours claim, which one site says "is attracted to and feeds on fear"; another says the vengeful spirit has "knocked more than fifty people [on ghost tours] unconscious" and has scratched or bruised others, including an eleven-year-old boy who was given a black eye.

And my question is: if there's such an embarrassment of riches in the way of evidence that the ghost of Greyfriars is real, how has this not been verified scientifically?

If people are being beaten up right and left by a ghost, it seems like it'd be simple to set things up so that there'd be some kind of evidence other than saying after the fact, "I'm sure I didn't have these scratches when I came in here."  Now, mind you, I'm not accusing anyone of lying.  But it certainly does seem suspicious that if so many people are having these experiences, no one has conducted a scientifically-admissible investigation of the place.

If they have, I haven't found anything about it.  Plenty of anecdotes, nothing in the way of proof of the claims.

So, to return to my original point -- I'm convincible.  But don't @ me with more "my grandma's Cousin Ethel went there and an invisible hand touched the back of her neck!"  I'm very sorry grandma's Cousin Ethel got scared, but that's hardly to the point as far as science goes.

In any case, you can bet that the next time I'm in Scotland, Greyfriars Kirkyard will be high on the list of must-sees.  And I hereby invite the ghost himself to change my mind.  I would consider a black eye from a poltergeist a badge of honor, and after all, as a skeptic it's no more than I deserve.

Bluidy Mackenzie, do your worst.


Monday, February 26, 2024

Biggest and brightest

If you're the kind of person who likes having your mind blown by superlatives, astrophysics is the science for you.

I ran into two really good examples of that last week.  In the first, a paper in the journal Monthly Notices of the Royal Astronomical Society, from research led by astrophysicist Ruth Daly of Pennsylvania State University, found that the supermassive black hole at the center of the Milky Way -- Sagittarius A* -- is spinning so fast it's actually warping the fabric of spacetime around it, flattening it into the shape of a football.

The "no-hair theorem" of the physics of black holes states that they are rather simple beasts.  They can be completely characterized using only three parameters: their mass, charge, and angular momentum.  The name comes from the quip by physicist John Archibald Wheeler that "black holes have no hair," by which he meant that there are no other adornments you need to describe to get a full picture of what they're doing.  However, I've always been puzzled by what exactly it means to say that a black hole has angular momentum; objects with mass and spin, such as a twirling top or the rotating Earth, have angular momentum, but since the mass in a black hole has (at least as far as we understand them) collapsed into a singularity, what exactly is spinning, and how could you tell?

Last week's paper at least answers the second half of the question.  Using data from x-ray and radio wave collimation and material outflow from Sagittarius A*, astrophysicists can determine how much spacetime is being deformed by the angular momentum of the black hole, and from that determine its rate of spin.

And it's spinning fast -- an estimated sixty percent of the maximum possible rate, a limit set by the fact that matter can't travel at or faster than the speed of light.  The deformation is so great that the fabric of spacetime is compressed along the spin axis, so it appears spherical when viewed from above but flattened when viewed from the side.

[Image is in the Public Domain courtesy of NASA/JPL]
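For a sense of scale, a rotating (Kerr) black hole's angular momentum follows the standard relation J = χGM²/c, where χ is the dimensionless spin (χ = 1 being the theoretical maximum).  Here's a back-of-the-envelope sketch in Python; the mass of Sagittarius A* (about four million solar masses) is a commonly quoted figure I'm supplying myself, not a number from the paper:

```python
# Back-of-the-envelope: angular momentum of a Kerr black hole,
# J = chi * G * M^2 / c, where chi is the dimensionless spin
# (chi = 1 is the maximal, extremal case).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def kerr_angular_momentum(mass_kg, chi):
    return chi * G * mass_kg**2 / C

# Sagittarius A*: mass ~4e6 solar masses (assumed, commonly quoted),
# spinning at ~60% of the maximal rate per the new study:
J = kerr_angular_momentum(4.0e6 * M_SUN, 0.6)
print(f"{J:.2e} kg m^2/s")  # roughly 8e54 kg m^2/s
```

That's about forty orders of magnitude more angular momentum than the spinning Earth carries -- which is the kind of number that makes the spacetime-warping less surprising.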

The second piece of research comes from a study at the European Southern Observatory, and was published in Nature Astronomy.  It looks at the recent discovery of the brightest object known, a quasar (an active galactic nucleus containing a supermassive black hole) that -- get ready for the superlatives -- is five hundred trillion times more luminous than the Sun, contains a black hole that has seventeen billion times the mass of the Sun, and is consuming one Sun's worth of mass a day.  This object, given the unassuming name of J0529-4351, is twelve billion light years away, making it also one of the most distant objects ever studied.

"All this light comes from a hot accretion disk that measures seven light-years in diameter -- this must be the largest accretion disk in the Universe," said study co-author Samuel Lai, of Australian National University.  If he sounds a little blown away by this -- well, so are we all.  A seven-light-year accretion disk means that if it were centered where the Sun is, not only would it engulf the entire Solar System, its edge would extend most of the way to our nearest stellar neighbors, the triple-star system of Alpha/Proxima Centauri.
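As a sanity check, the superlatives hang together nicely: accretion converts a fraction η of the infalling rest mass to light (L = ηṁc²), and with a standard assumed radiative efficiency of η ≈ 0.1 -- my assumption, not a figure from the study -- one solar mass per day works out to roughly five hundred trillion Suns' worth of luminosity:

```python
# Sanity check on the quasar's numbers: luminosity from accretion,
# L = eta * mdot * c^2, with a standard (assumed) radiative
# efficiency eta ~ 0.1 -- not a figure from the study itself.

C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
L_SUN = 3.828e26     # solar luminosity, W
DAY = 86400.0        # seconds per day

eta = 0.1                      # assumed radiative efficiency
mdot = M_SUN / DAY             # one solar mass per day, in kg/s
L = eta * mdot * C**2          # luminosity in watts

print(f"{L / L_SUN:.1e} solar luminosities")  # -> 5.4e+14
```

Which lands right on the reported figure -- always reassuring when a press release's numbers survive a back-of-the-envelope check.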

I don't know about you, but something on that scale boggles my mind.

And that's not a bad thing, really.  I think we need to be reminded periodically that in the grand scheme of things, the problems we lose so much sleep over down here are pretty minuscule.  Also, it's good to have our brains overwhelmed by the grandeur of the universe we live in, to be able to look up into the night sky and think, "Wow.  How fortunate I am to be able to witness -- and in some small way, understand -- such wonders."


Saturday, February 24, 2024


One of the more fascinating bits of biochemistry is the odd "handedness" (technically called chirality) that a lot of biological molecules have.  Chiral molecules come in a left-handed (sinistral) and a right-handed (dextral) form that are made of exactly the same parts but put together in such a way that they're mirror-images of each other, just like a left-handed and right-handed glove.

Where it gets really interesting is that although the left-handed and right-handed forms of biologically active molecules have nearly identical properties, they aren't equivalent in function within living cells.  Nearly all naturally-occurring sugars are right-handed (that's where the name dextrose comes from); amino acids, on the other hand, are nearly all left-handed (which is why amino acid supplements often have an "l-" in front of the name -- l-glutamate, l-tryptophan, and so on).  Because life evolved with this kind of specificity, if you were fed a mirror-image diet -- left-handed glucose, for example, and proteins made of right-handed amino acids -- you wouldn't be able to tell the difference by smell or taste, but you would slowly starve to death, because your cells can't metabolize molecules with the wrong chirality.

Chirality in amino acids [Image is in the Public Domain courtesy of NASA]

Molecular chirality was used to brilliant effect by the wonderful murder mystery author Dorothy Sayers in her novel The Documents in the Case.  In the story, a man dies after eating a serving of mushrooms he'd picked.  His friends and family are stunned; he'd been a wild mushroom enthusiast for decades, and the fatal mistake he apparently made -- including a deadly ivory funnel mushroom (Clitocybe dealbata) in with a pan full of other edible kinds -- was something they believed he never would have done.

The toxic substance in ivory funnels, the alkaloid muscarine, is -- like many organic compounds -- chiral.  Naturally-occurring muscarine is all left-handed.  However, when it's synthesized artificially in the lab, you end up with a mixture of right- and left-handed molecules, in about equal numbers.  So when the contention is made that the victim hadn't mistakenly included a poisonous mushroom in with the edible ones, but had been deliberately poisoned by someone who'd added the chemical to his food, the investigators realize this is the key to solving the riddle of the man's death.

Chiral molecules have another odd property: they rotate the polarization angle of a beam of plane-polarized light passing through them, right-handed forms clockwise and left-handed forms counterclockwise -- and in a 50:50 mixture, the two effects cancel, leaving the beam unrotated.  So when an extract from the victim's digestive tract is analyzed, and a polarized light beam passes through it with no net rotation, there's no doubt he was poisoned by synthetic (mixed-chiral) muscarine, not by mistakenly eating a poisonous mushroom, which would only have contained the optically active, left-handed form.
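The arithmetic underlying that test is simple: each enantiomer rotates the polarization in proportion to its concentration, with opposite signs, so pure natural (left-handed) muscarine produces a measurable rotation while the two halves of a 50:50 synthetic mixture cancel.  A toy sketch in Python -- the specific-rotation value here is purely illustrative, not muscarine's actual figure:

```python
# Optical activity as arithmetic: observed rotation is proportional to
# specific rotation x path length x concentration, with the two
# mirror-image forms contributing with opposite signs.

def observed_rotation(specific_rotation, path_dm, conc_left, conc_right):
    """Net rotation of plane-polarized light, in degrees.

    specific_rotation: magnitude for a pure enantiomer
                       (deg dm^-1 (g/mL)^-1)
    conc_left / conc_right: concentration (g/mL) of each form.
    Left-handed is taken as negative (counterclockwise) by convention.
    """
    return specific_rotation * path_dm * (conc_right - conc_left)

SPEC = 8.1  # hypothetical specific-rotation magnitude, illustrative only

# Pure natural (all left-handed) sample: clear counterclockwise rotation
print(observed_rotation(SPEC, 1.0, 0.02, 0.0))   # negative value

# Synthetic 50:50 mixture: the two contributions cancel exactly
print(observed_rotation(SPEC, 1.0, 0.01, 0.01))  # -> 0.0
```

Zero rotation from something that should be optically active is precisely the fingerprint of a lab-made compound.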

So specific chirality is ubiquitous in the natural world.  We have a particular handedness, all the way down to the molecular level.  What's a little puzzling, however, is why this tendency occurs.  Not chirality per se; that merely arises from the fact that if you bond four different atoms or groups around a central carbon atom, there are two ways you can do it, and they result in molecules that are mirror images of each other (as shown in the image above).  But why do living things all exhibit a preference for a certain handedness?  It must have evolved extremely early, because virtually all living things share the same preferences.  But what got this bias started -- especially given that left-handed and right-handed molecules are equally easy to make abiotically, and have nearly identical physical and chemical properties?

Well, a paper this week in the journal Advanced Materials may have just answered this long-standing question.  A group led by Karl-Heinz Ernst, at the Swiss Federal Laboratories for Materials Science and Technology, found that the selection for a particular handedness happened because of the interplay between the electromagnetic fields of metallic surfaces and the spin configuration of chiral molecules.

They created surfaces coated with patches of a thin layer of a magnetic metal, such as iron or cobalt, and analyzed the magnetic "islands" to determine the orientation of each one's magnetic field.  They then took a solution of a chiral molecule called helicene, which had equal numbers of right- and left-handed forms, and poured it over the surface.  The hypothesis was that the opposite patterns of electron spin in the two forms of helicene would allow each to bond only to a magnetic patch with a specific orientation.

So after introducing the mixed helicene to the metal surfaces, they looked to see where the molecules adhered.

Sure enough -- depending on the direction of the magnetic field, one or the other form of helicene stuck to the metal surface.  The magnetic field was acting as a selecting agent on the spin, picking out the handedness that was compatible with the orientation of the patch.

This, of course, is only a preliminary study of a single chiral molecule in a very artificial setting.  However, it does for the first time provide a mechanism by which selective chirality could have originated.  "In certain surface-catalyzed chemical reactions," Ernst explained, "such as those that could have taken place in the chemical 'primordial soup' on the early Earth, a certain combination of electric and magnetic fields could have led to a steady accumulation of one form or another of the various biomolecules -- and thus ultimately to the handedness of life."

So a simple experiment (simple to explain, not to perform!) has taken the first step toward settling a question that chemistry Nobel laureate Vladimir Prelog called "one of the first questions of molecular theology" back in 1975.  It shows that science has the capacity for reaching back and explaining the earliest origins of biochemistry -- and how life as we know it came about.


Friday, February 23, 2024

The language of Sark

The title of my master's thesis was The Linguistic and Cultural Effects of the Viking Invasions on England and Scotland.  I don't think many people read it other than me and my committee, but it did win the 1996 International Prize For Research With Absolutely No Practical Applications Whatsoever.  And it allowed me to learn valuable information such as the fact that there were two words in eleventh-century England for window -- one from Old English (eagþyrl, literally "eye-hole") and one from Old Norse (vindauga, literally "wind-eye") -- and for some reason the Old Norse one won and our word window comes from it rather than from Old English.

Which is a handy "fun fact" for me to bring out at cocktail parties, especially if I want everyone to back away slowly and then find other people to talk to for the rest of the evening.

In any case, I spent a good bit of my time in graduate school learning assorted random facts about western European linguistics, which was why I was a bit gobsmacked when I found out that there's a language in western Europe that I had never even heard of.  It's called Sarkese, and is only found on the tiny (1.5 by 3.5 kilometers) island of Sark, east of Guernsey in the Channel Islands.

The Channel Islands [Image licensed under the Creative Commons Aotearoa, Wyspy Normandzkie, CC BY-SA 3.0]

Sark is currently home to five hundred people, of whom only three learned Sarkese (known colloquially as patois) as their first language.  It's a Romance language -- its closest relative is French, but the two are not mutually intelligible.  It came originally from medieval Norman French via the isle of Jersey; the ancestors of the people of Sark came over from Jersey in 1565, and the island has been relatively isolated ever since.

The samples of Sarkese in the article I linked above illustrate how far the two have diverged in the nearly thousand years since Sarkese's ancestor split from mainland French.  "Thank you very much," for example -- merci beaucoup in French -- is mérsî ben dê fê in Sarkese.  French has seventeen different vowel phonemes; Sarkese has over fifty.  Add to that the complication that the island is shaped like an hourglass, with a narrow isthmus (La Coupée) that is all but impassable during storms, and the two pieces (Big Sark and Little Sark) have developed different dialects.

Fortunately, a Czech linguist, Martin Neudörfl, is trying to document Sarkese, and has worked with the three remaining fluent speakers -- who are all over eighty years old -- and about fifteen semi-fluent individuals to produce a huge library of recordings, and reams of documents describing the morphology and syntax of Sarkese.  "We have hundreds of hours [of recordings] and our audio archive is outstanding," Neudörfl said.  "Even if I were to disappear, someone could revive the language just using the recordings.  We've only achieved this through years of exhaustive research.  It's all thanks to [the speakers] for sharing their knowledge."

It's always sad when a language goes extinct, and so many have done so without anyone ever recording them or writing them down.  In large part it's due to competition with more widely spoken languages; it's eye-opening to know that half of the world's population are native speakers of only fifteen different languages.  The other half speak one of the other seven-thousand-odd languages that currently exist in the world.  Sarkese is one of many languages that have fallen prey to the prevalence, convenience, and ubiquity of English.

On the one hand, I get why it happens.  If you want to be understood, you have to speak a language that the people around you can understand, and if you only spoke Sarkese you could communicate with eighteen other people on the island (and one Czech linguist).  But still, each language represents a trove of knowledge about the culture and history of a people, and it's a tragedy when that is lost.

So kudos to Martin Neudörfl, and the Sarkese speakers who are working with him to record this language before it's too late.  Makes me wish I'd tackled a project like this for my master's research.  I could be wrong, but I don't think Old Norse is coming back any time soon.


Thursday, February 22, 2024

Animalia paradoxa

Carl Linnaeus was born in Råshult, Sweden, on 23 May 1707.  His father Nils was the minister of the parish of Stenbrohult but was also an avid gardener, and the story goes that when Carl was young and got upset, Nils would bring him a flower and tell the little boy its name, and that always calmed him down.

The love of botany -- and of knowing the names of living things -- was to shape Carl Linnaeus's life.  Prior to his time, there was no systematic way of giving names to species; there were dozens of names in various languages for the same species, and sometimes several different names in the same language.  Additionally, since this was before the relatedness of all life was recognized, things were named simply by their superficial appearance, which may or may not indicate an underlying relationship.  We still have some leftovers from this haphazard practice, such as the various birds called buntings (from the Middle English buntynge, "small bird") that aren't necessarily related to each other.  (For example, the North American indigo bunting is in the cardinal family; the European pine bunting is in the family Emberizidae.)

Young Linnaeus was lucky enough not only to have supportive parents, but a variety of people who recognized his intellect and ability and nurtured him in his studies.  (Amongst them was the scientist and polymath Olof Celsius, whose nephew Anders gave us the Celsius temperature scale.)  He was primarily interested in botany, but quickly became frustrated with the fact that the same plant could have six different names in six different villages -- and worse still, it was impossible to communicate taxonomic information clearly to botanists in other countries, where the names would have come from their native language.

So he decided to do something about it.

Linnaeus came up with the idea of binomial nomenclature -- the "two-name naming system," more commonly called "scientific names."  Each species would be assigned a unique and unambiguous name made of the genus and species names, each derived from Latin or Greek (which were the common languages of science at the time).  The genus would include various related species.  His determinations of who was related to whom were based upon appearance -- this is long before genetics became the sine qua non of systematics -- and some of Linnaeus's classifications have been revised in the 250-odd years since he wrote his magnum opus, the Systema Naturae.  But even so, the system he created is the one we still use today.

And this is why scientists the world over will know, if you say Mustela nigripes, that you are talking about the black-footed ferret.  (The scientific name translates to... "black-footed ferret."  Just because they're fancy-sounding Latin and Greek words doesn't mean they're all that revelatory.)
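Linnaeus's convention is simple enough to express in a few lines of code.  Here's a minimal sketch (the helper name and the lookup table are my own invention, just for illustration) that enforces the usual formatting rules -- capitalized genus, lowercase species epithet -- and treats the pair as a single unambiguous key, no matter how many common names a species has:

```python
def binomial(genus: str, species: str) -> str:
    """Format a Linnaean binomial: capitalized genus + lowercase epithet."""
    return f"{genus.capitalize()} {species.lower()}"

# One unambiguous scientific name per species, regardless of how many
# common names exist for it in how many languages.
COMMON_NAMES = {
    binomial("Mustela", "nigripes"): "black-footed ferret",
    binomial("Passerina", "cyanea"): "indigo bunting",
}

print(binomial("mustela", "NIGRIPES"))  # normalizes to "Mustela nigripes"
```

The point of the normalization is exactly Linnaeus's point: however the name arrives, it collapses to one canonical form that every botanist (or programmer) agrees on.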

So Linnaeus took the first steps toward ordering the natural world.  But what is less well-known is that he included a few animals in his book that are more than a little suspect -- and labeled them as such, illustrating an admirable dedication to honoring hard evidence as the touchstone for scientific understanding.

In a section called "Animalia paradoxa," Linnaeus listed some "species" that had been reported by others, but for which there was no clear evidence.  From the tone of his writing, it's obvious he was doubtful they existed at all, and was only including them to point out that any reports of them were based upon hearsay.  These included the following genera, along with his description of them:
  • Hydra: "body of a snake, with two feet, seven necks and the same number of heads, lacking wings, preserved in Hamburg, similar to the description of the Hydra of the Apocalypse of St. John chapters 12 and 13.  And it is provided by very many as a true species of animal, but falsely.  Nature for itself and always the similar, never naturally makes multiple heads on one body.  Fraud and artifice, as we ourselves saw [on it] teeth of a weasel, different from teeth of an Amphibian [or reptile], easily detected."
  • Monoceros: "Monoceros of the older [generations], body of a horse, feet of a 'wild animal,' horn straight, long, spirally twisted.  It is a figment of painters.  The Monodon of Artedi [= narwhal] has the same manner of horn, but the other parts of its body are very different."
  • Satyrus: "Has a tail, hairy, bearded, with a manlike body, gesticulating much, very fallacious, is a species of monkey, if ever one has been seen."
  • Borometz: "The Borometz or Scythian Lamb is reckoned with plants, and is similar to a lamb; whose stalk coming out of the ground enters an umbilicus; and the same is said to be provided with blood from by chance devouring wild animals.  But it is put together artificially from roots of American ferns. But naturally it is an allegorical description of an embryo of a sheep, as has all attributed data."
  • Manticora: "Has the face of a decrepit old man, body of a lion, tail starred with sharp points."
A manticore, from Johannes Jonston's Historiae Naturalis (1650) [Image is in the Public Domain]

I've always admired Linnaeus -- like him, I've been fascinated with the names of things since I was little, and started out with plants -- but knowing about his commitment to avoid getting drawn into the superstition and credulity of his time makes me even more fond of him.  He was unafraid to call out the Animalia paradoxa as probable hoaxes, and that determination to follow the rules of scientific skepticism still guides taxonomists to this day.

Of course, sometimes there are some bizarre "forms most beautiful and most wonderful" in the natural world, to borrow a phrase from Darwin.  When the first taxidermied pelts and skeletons of the duck-billed platypus were sent from Australia back to England, many English scientists thought they were a prank -- that someone had stitched together the remains of various animals in an attempt to play a joke.  And once convinced that they were real, the first scientific name given to the platypus was...

... Ornithorhynchus ("bird-billed") paradoxus.


Wednesday, February 21, 2024

Shaky ground

A little less than six years apart -- on 1 November 1755 and 31 March 1761 -- two major earthquakes struck the country of Portugal, each time generating a tsunami that devastated the capital city of Lisbon.

They were both huge, although given that this was before the invention of the seismometer, we can only guess at how big; estimates are that the 1761 quake was around 8.5 on the Richter Scale, while the 1755 one may have been as high as 9.0.  Each time, the tremors were felt far from the epicenter.  The shaking from the 1755 quake was recorded as far away as Finland.
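To get a feel for what a half-unit difference in magnitude means: magnitude scales are logarithmic, and by the standard Gutenberg-Richter energy relation (log10 E ≈ 1.5M + 4.8, with E in joules), each whole unit of magnitude corresponds to roughly a thirty-two-fold jump in released energy.  A quick back-of-the-envelope calculation:

```python
def energy_joules(magnitude: float) -> float:
    """Approximate seismic energy from magnitude via the
    Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8."""
    return 10 ** (1.5 * magnitude + 4.8)

# Comparing the estimated 1755 (9.0) and 1761 (8.5) Lisbon quakes:
ratio = energy_joules(9.0) / energy_joules(8.5)
print(f"A magnitude 9.0 releases about {ratio:.1f}x the energy of an 8.5")
```

So even though 8.5 and 9.0 sound nearly identical, the larger quake released around five to six times as much energy.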

The effects in Portugal and nearby nations were devastating.  In 1755 the combined death toll in Portugal, Spain, and Morocco -- mostly from the tsunami -- is estimated at fifty thousand.  Over eighty percent of the buildings in Lisbon were damaged or completely destroyed -- and five and a half years later, many of the ones that had survived in 1755 collapsed.

Ruins of the Convento do Carmo, which was destroyed in the Great Lisbon Earthquake of 1755 [Image licensed under the Creative Commons Chris Adams, Convento do Carmo ruins in Lisbon, CC BY-SA 3.0]

What's curious is that Portugal isn't ordinarily thought to be high on the list of seismically active nations.  It's not on the Ring of Fire, where the majority of the world's earthquakes and volcanoes occur.  The fact is, though, there is a poorly-studied (and poorly-understood) fault zone offshore -- the Azores-Gibraltar Transform Fault -- that is thought to have been responsible for both of the huge eighteenth-century quakes, as well as a smaller (but still considerable) earthquake in 1816.

The AGTF, and how it's evolving, was the subject of a paper in the journal Geology last week.  The big picture here is the Wilson Cycle -- named after plate tectonics pioneer John Tuzo Wilson -- which describes how the Earth's crust is formed, moved, and eventually destroyed.

At its simplest level, the Wilson Cycle has two main pieces -- divergent zones (or rifts) where oceanic crust is created, pushing plates apart, and convergent zones (or trenches) where oceanic crust is subducted back into the mantle and destroyed.  Right now, one of the main divergent zones is the Mid-Atlantic Ridge, which is why the Atlantic Ocean is gradually widening; the Pacific, on the other hand, is largely surrounded by convergent zones, so it's getting smaller.

Of course, the real situation is considerably more complex.  In some places the plates are moving parallel to the faults; these are transform (or strike-slip) faults, like the AGTF and the more famous San Andreas Fault.  And what the new paper found was that the movement along the AGTF doesn't just involve side-by-side movement, but there's a component of compression.

So the Azores-Gibraltar Transform Fault, in essence, is trying to turn into a new subduction zone.

"[These are] some of the oldest pieces of crust on Earth, super strong and rigid -- if it were any younger, the subducting plate would just break off and subduction would come to a halt," said João Duarte, of the University of Lisbon, who led the research, in an interview with Science Daily.  "Still, it is just barely strong enough to make it, and thus moves very slowly."

The upshot is that subduction appears to be invading the eastern Atlantic, a process that (in tens or hundreds of millions of years) will result in the Atlantic Ocean closing up once more.  The authors write:
[T]he Atlantic already has two subduction zones, the Lesser Antilles and the Scotia arcs.  These subduction zones have been forced from the nearby Pacific subduction zones.  The Gibraltar arc is another place where a subduction zone is invading the Atlantic.  This corresponds to a direct migration of a subduction zone that developed in the closing Mediterranean Basin.  Nevertheless, few authors consider the Gibraltar subduction to be still active because it has significantly slowed down in the past millions of years.  Here, we use new gravity-driven geodynamic models that reproduce the evolution of the Western Mediterranean, show how the Gibraltar arc formed, and test if it is still active.  The results suggest that the arc will propagate farther into the Atlantic after a period of quiescence.  The models also show how a subduction zone starting in a closing ocean (Ligurian Ocean) can migrate into a new opening ocean (Atlantic) through a narrow oceanic corridor.

So the massive Portugal quakes of the eighteenth and nineteenth centuries seem to be part of a larger process, where compression along a (mostly) transform fault is going to result in the formation of a trench.  It's amazing to me how much we've learned in only sixty-odd years -- Wilson and his colleagues only published their seminal papers that established the science of plate tectonics between 1963 and 1968 -- and how much we are still continuing to learn.

And along the way elucidating the processes that generated some of the biggest earthquakes ever recorded.


Tuesday, February 20, 2024

Dream a little dream of me

In one of my favorite novels, The Lathe of Heaven by Ursula K. Le Guin, the main character -- an unassuming man named George Orr -- figures out that when he dreams, his dreams change reality.  The problem is that each change also alters everyone else's memories of what had happened, so he is the only one who realizes that anything has changed.

At first, of course, he doesn't believe it.  He must be remembering wrong.  Then, when he becomes convinced it's actually happening, he starts taking drugs to try to stop himself from dreaming, but they don't work.  As a last resort, he tries to get help from a psychologist...

... but the psychologist realizes how powerful this ability could be, and starts guiding George into dreams that will shape the world into what he wants it to be.

It's a powerful cautionary tale about what happens when an unscrupulous person gains control over someone with a valuable talent.  Power corrupts, as the oft-quoted line from John Dalberg-Acton goes, and absolute power corrupts absolutely.

I couldn't help thinking about The Lathe of Heaven when I read about some new exploration of lucid dreaming at REMSpace, a California startup, which will soon be featured in a paper in The International Journal of Dream Research (a preprint is available at the link provided).  A lucid dream is one in which you are aware that you're dreaming while you're dreaming, and often have some degree of control over what happens.  Around twenty percent of people report regular lucid dreaming, but there is some research suggesting that many of us can learn to do it.

Dickens's Dream by Robert W. Buss (1875) [Image is in the Public Domain]

At this point, I'll interject that despite a long history of very vivid dreams, I've never had a lucid dream.  I did have an almost-lucid dream, once; it was a weird and involved story about being a groomsman in a wedding in a big cathedral, and when the priest said the whole "does anyone have any objections?" thing, a gaudily-dressed old lady in the front row stood up and started shouting about what an asshole the groom was and how the bride could do way better.  And I'm standing there, feeling horrified and uncomfortable, and I thought, "This is bizarre!  How could this be happening?  Is this a dream?"  So I kind of looked around, then patted myself to reassure myself that I was solid, and thought, "Nope.  I guess this is real."

So the one time I actually considered the question of whether I was dreaming, I got the wrong answer.

But I digress.

Anyhow, the researchers at REMSpace took a group of test subjects who all reported being able to lucid dream, and hooked them up to electromyography and electroencephalography sensors -- which, respectively, measure the electrical discharge from voluntary muscle contractions and neural firing in the brain -- and gave them the pre-sleep suggestion that they would dream about driving a car.  Using the output from the sensors, the team created a virtual avatar of each person on a computer screen, and found that the dreamers were able to use tiny motions of their hands to steer it, and even avoid obstacles.

"Two-way interaction with a computer from dreams opens up a whole area of new technologies," said Michael Raduga, who led the experiment.  "Now, these developments are crude, but soon they will change the idea of human capabilities."

Maybe so, but it also puts the dreamer in the hands of the experimenter.  Now, I'm not saying Michael Raduga and his team are up to anything nefarious; and obviously I don't believe anyone's got the George-Orr-like ability to change reality to conform to what they dream.  But does anyone else have the feeling that "two-way interaction" into your dreams is potentially problematic?  I've heard a lot of people say things like, "hypnosis isn't dangerous, you can't be given a post-hypnotic suggestion that induces you to do something you wouldn't ordinarily do," but if there's one thing my knowledge of neuroscience has taught me, it's that the human brain is highly suggestible.

So as interested as I am in lucid dreaming, I'm not ready to sign up to have my dreams interacted with by a computer controlled by someone else.  And I hope like hell that when Raduga and his group at REMSpace start "changing the idea of human capabilities," they are extremely careful.

Anyway, that's our interesting-but-a-little-scary research for today.  Me, I'm gonna stick with my ordinary old dreams, which are peculiar enough.  And given my failure at detecting a potentially lucid dream when I had the chance, I doubt I'd be all that good at it in any case.  I'd probably drive my virtual dream car right into a telephone pole.


Monday, February 19, 2024

The viral accelerator

It's virus season, which thus far I've been able to avoid participating in, but it seems like half the people I see are hacking and snorting and coughing, so even with caution and mask-wearing I figure it's only a matter of time.  Viruses are odd beasts; they're obligate intracellular parasites, doing their evil work by hijacking your cellular machinery and using it to make more viruses.  Furthermore, they lack virtually all of the structures that cells have, including cell membranes, cytoplasm, and organelles.  They really are more like self-replicating chemicals than they are like living things.

Simian Polyoma Virus 40 [Image licensed under the Creative Commons Phoebus87 at English Wikipedia, Symian virus, CC BY-SA 3.0]

What is even stranger about viruses is that while some of the more familiar ones -- such as the viruses that cause colds, flu, and measles -- invade the host, make them sick, and eventually (with luck) are cleared from the body, others leave behind remnants that can make their presence known later.  This behavior is what makes the herpes family of viruses so insidious.  If you've been infected once, you are infected for life, and the latent viruses hidden in your cells can cause another eruption of symptoms, sometimes decades later.

Even weirder is when those latent viral remnants cause havoc in a completely different way than the original infection did.  There's a piece of a virus left in the DNA of many of us called HERV-W (human endogenous retrovirus W) which, if activated, can trigger multiple sclerosis or schizophrenia.  Another one, Coxsackie virus, has an apparent connection to type-1 diabetes and Sjögren's syndrome.  The usual sense is that all viral infections, whether or not they're latent, are damaging to the host.  So it was quite a shock to me to read a piece of recent research that there's a viral remnant that not only is beneficial, but is critical for creating myelin -- the coating of our nerve cells that is essential for speeding up nerve transmission!

The paper -- which appeared last week in the journal Cell -- is by a team led by Tanay Ghosh of the Cambridge Institute of Science, and looked at a gene called RetroMyelin.  This gene is part of the estimated forty (!) percent of our genome that is made up of retrotransposons, DNA that was inserted by viruses during our evolutionary history.  Or, looking at it another way, genes that made their way to us using a virus as a carrier.  Once inside our genome, transposons do what they do best -- making copies of themselves and moving around.  Most retrovirus-introduced elements are deleterious; HIV and feline leukemia, after all, are caused by retroviruses.  But sometimes the product of a retroviral gene turns out to be pretty critical, and that's what happened with RetroMyelin.

Myelin is a phospholipid/protein mixture that surrounds a great many of the nerve fibers in vertebrates.  It not only acts as an insulator, preventing the ion flows that carry the nerve signal from "short-circuiting" into adjacent neurons; it is also the key to saltatory conduction -- the jumping of neural signals down the axon, which can increase transmission speed by a factor of fifty.  So this viral gene acted a bit like a neural accelerator, and gave the animals that had it a serious selective advantage.
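Textbook figures make that factor-of-fifty claim concrete: unmyelinated fibers conduct at around 1 m/s, while large myelinated axons can reach 50-100+ m/s.  Here's the arithmetic for a signal traveling a one-meter pathway (roughly spinal cord to foot in an adult human), using typical round-number speeds rather than anything from the paper itself:

```python
def travel_time_ms(distance_m: float, speed_m_per_s: float) -> float:
    """Time for a nerve signal to travel a given distance, in milliseconds."""
    return 1000.0 * distance_m / speed_m_per_s

# Typical textbook conduction velocities (illustrative round numbers):
unmyelinated = travel_time_ms(1.0, 1.0)   # ~1 m/s, unmyelinated fiber
myelinated = travel_time_ms(1.0, 50.0)    # ~50 m/s, myelinated fiber

print(f"{unmyelinated:.0f} ms unmyelinated vs {myelinated:.0f} ms myelinated")
```

A full second versus twenty milliseconds to get a signal from spine to foot -- the difference between dodging a predator and becoming lunch.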

"Retroviruses were required for vertebrate evolution to take off," said senior author and neuroscientist Robin Franklin, in an interview in Science Daily.  "There's been an evolutionary drive to make impulse conduction of our axons quicker because having quicker impulse conduction means you can catch things or flee from things more rapidly.  If we didn't have retroviruses sticking their sequences into the vertebrate genome, then myelination wouldn't have happened, and without myelination, the whole diversity of vertebrates as we know it would never have happened."

The only vertebrates that don't have myelin are the jawless fish, such as lampreys and hagfish -- so it's thought that the retroviral infection that gave us the myelin gene occurred around the same time that jaws evolved on our branch of the vertebrate family tree, on the order of four hundred million years ago.

So even some fundamental (and critical) traits shared by virtually all vertebrates, like the myelin sheaths that surround our neurons, are the result of viral infections.  Just proving that not all of 'em are bad.  Something to think about the next time you feel a sore throat coming on.