Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Thursday, March 30, 2023

Dark days

I'm going to propose a new law, in the vein of Murphy's Law ("If it can go wrong, it will"), Betteridge's Law ("If a headline ends in a question mark, the answer is 'no'"), and Poe's Law ("A sufficiently well-done satire is indistinguishable from the real thing"): "If a statement begins with 'Scientists claim...' without mentioning any specific scientists, it's completely made up."  Call it Bonnet's Law.

I ran into an excellent (by which I mean "ridiculous") example of that over at the site Anomalien just yesterday, called "The Mysterious Phenomenon of the Onset of Sudden Darkness."  The article, which is (as advertised) about times when darkness suddenly fell during the day for no apparent reason, gets off to a great start by citing the Bible (specifically the darkness sent by God in the Book of Exodus to punish the Egyptians for keeping Moses et al. in slavery), because that's clearly admissible as hard evidence.  "Scientists," we are told, "are seriously concerned about this phenomenon."

I have spoken with a great many scientists over the years, and not a single one of them has voiced any concern about sudden-onset darkness.  Maybe they're keeping it secret because they don't want us laypeople getting scared, or something.

That being said, and even excluding the Pharaonic Plagues, the claim has been around for a while.  One of my favorite books growing up -- I still have my rather battered copy, in fact -- was Strangely Enough, by C. B. Colby, which deals with dozens of weird "Strange But True!" tales.  One of them, called "New England's Darkest Day," describes an event that allegedly occurred on May 19, 1780, in which pitch darkness fell on a sunny day.  Colby writes:

May 19 dawned as bright and clear as usual, except that there appeared to be a haze in the southwest.  (One town history reports that it was raining.)  This haze grew darker, and soon the whole sky was covered with a thick cloud which was traveling northeast rapidly.  It reached the Canadian border by midmorning.  Meanwhile the eastern part of New York, as well as Maine, New Hampshire, Rhode Island, Massachusetts, and Connecticut were becoming darker.

By one o'clock some sections were so dark that white paper held a few inches from the eyes couldn't be seen.  It was as dark as a starless night.  Apprehension soon turned to panic.  Schools were dismissed, and lanterns and candles were lighted in homes and along the streets...

That night the darkness continued, and it was noted that by the light of lanterns everything seemed to have a greenish hue.  A full moon, due to rise at nine, did not show until after 1 AM, when it appeared high in the sky and blood-red.  Shortly afterward stars began to appear, and the following morning the sun was as bright as ever, after fourteen hours of the strangest darkness ever to panic staunch New Englanders.

Surprisingly, there's no doubt this actually happened; as Colby states, it's recorded in dozens of town histories.  However, the actual cause isn't anything paranormal.  It was most likely a combination of dense fog and smoke from a massive forest fire in what is now Algonquin Provincial Park in Ontario, which left evidence in the form of tree-ring fire scars from the late spring of that year, precisely when the "Dark Day" occurred.  And, in fact, Colby conveniently doesn't mention that town histories also report that "the air smelled like soot," and that after the sky cleared, some places (especially in New Hampshire) had layers of ash on the ground up to fifteen centimeters deep.

Kind of blows away the mystery, doesn't it?

Artist's depiction of the "Dark Day" [Image is in the Public Domain, courtesy of the New England Historical Society]

The Anomalien article isn't on even as firm ground as Colby's.  The majority of its accounts are single-person anecdotes; even the ones that aren't have very little going for them.  Take, for example, the case in Louisville, Kentucky, which they say is so certain "it's almost become a textbook" [sic].  On March 7, 1911, they say, a "viscous darkness" fell upon the entire city, lasting for an hour and resulting in massive panic.

Funny that such a strange, widespread, and terrifying event merited zero mention in the Louisville newspaper that came out only four days later.  You'd think it'd have been headline news.

That doesn't stop the folks at Anomalien from attributing the phenomenon to you-know-who:

Is it all aliens to be blamed?  Researchers... believe that unexpected pitch darkness occurs in the event of a violation of the integrity of space.  At such moments, it is possible to penetrate both into different dimensions and worlds, and out of them...  

Some researchers believe that the phenomenon of sudden pitch darkness is associated with the presence on earth of creatures, unknown to science, with supernatural abilities.  All these cryptids and other strange creatures enter our world through the corridors of pitch darkness.  And they seem to be more familiar with this phenomenon than we are.  They know when this passage will open, and they use it.  Only they do not immediately disappear along with the darkness, but wait for the next opportunity to return to their world.

Oh?  "Researchers believe that," do they?  I'll be waiting for the paper in Science.

Anyhow, there you have it.  Bonnet's Law in action.  I'm just as happy that the claim is nonsense; the sun's out right now, and I'm hoping it stays that way.  It's gloomy enough around here in early spring without aliens and cryptids and whatnot opening dimensional portals and creating "corridors of pitch darkness."  Plus, having creatures ("unknown to science, with supernatural abilities") bumbling about in the dark would freak out my dog, who is -- no offense to him intended, he's a Very Good Boy -- a great big coward.

So let's just keep the lights on, shall we?  Thanks.

****************************************



Wednesday, March 29, 2023

The biochemical symphony

Sometimes I run into a piece of scientific research that's so odd and charming that I just have to tell you about it.

Take, for example, the paper that appeared in ACS Nano that ties together two of my favorite things -- biology and music.  It has the imposing title, "A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence," and was authored by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, and Markus J. Buehler, all of the Massachusetts Institute of Technology.  Their research uses a fascinating lens to study protein structure: converting the amino acid sequence and structure of a protein into music, then having AI software study the resulting musical pattern as a way of learning more about how proteins function -- and how that function might be altered.

What's cool is that the musical note that represents each amino acid isn't randomly chosen.  It's based on the amino acid's actual quantum vibrational frequency.  So when you listen to it, you're not just hearing a whimsical combination of notes based on something from nature; you're actually hearing the protein itself.
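Just to make the idea concrete, here's a toy sketch of that kind of mapping in Python.  The "characteristic frequencies" below are made-up placeholder values, not the ones from the Yu et al. paper, and the octave-dropping trick is simply the standard way to transpose an inaudibly high frequency down into musical range:

```python
# Toy sketch of protein sonification: each amino acid gets a pitch
# derived from a characteristic vibrational frequency, transposed
# down by octaves until it lands in the audible range.
# NOTE: these frequencies are illustrative placeholders, not the
# actual values used in the Yu et al. paper.

AUDIBLE_MAX = 4000.0  # Hz, rough upper bound for a musical pitch

RESIDUE_FREQ = {
    "A": 5.2e13,  # alanine (hypothetical value)
    "G": 4.8e13,  # glycine (hypothetical value)
    "S": 5.6e13,  # serine (hypothetical value)
    "P": 6.1e13,  # proline (hypothetical value)
}

def transpose_to_audible(freq_hz: float) -> float:
    """Halve the frequency (drop an octave) until it is audible."""
    while freq_hz > AUDIBLE_MAX:
        freq_hz /= 2.0
    return freq_hz

def sonify(sequence: str) -> list[float]:
    """Map an amino acid sequence to a list of pitches in Hz."""
    return [transpose_to_audible(RESIDUE_FREQ[aa])
            for aa in sequence if aa in RESIDUE_FREQ]

print(sonify("GAGAS"))  # a short, spider-silk-like motif
```

The real method is far more sophisticated -- it also encodes secondary structure, and uses actual molecular vibrational spectra -- but the core move is the same: a deterministic map from chemistry to pitch, so the same protein always produces the same piece of music.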

[Image licensed under the Creative Commons © Nevit Dilmen, Music 01754, CC BY-SA 3.0]

In an article about the research in MIT News, written by David L. Chandler, you can hear clips from the Yu et al. study.  I recommend the second one especially -- the one titled "An Orchestra of Amino Acids" -- which is a "sonification" of spider silk protein.  The strange, percussive rhythm is kind of mesmerizing, and if someone had told me it was a composition by an avant-garde modern composer -- Philip Glass, perhaps, or Steve Reich -- I would have believed it without question.  But what's coolest is that the music actually means something beyond the sound.  The AI is now able to discern the difference between some basic protein structures, including two of the most common -- the alpha-helix (shaped like a spring) and the beta-pleated sheet (shaped like the pleats on a kilt) -- because they sound different.  This gives us a lens into protein function that we didn't have before.  "[Proteins] have their own language, and we don’t know how it works," said Markus Buehler, who co-authored the study.  "We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme.  We don’t know the code."

But this is exactly what the AI, and the scientists running it, hope to find out.  "When you look at a molecule in a textbook, it’s static," Buehler said.  "But it’s not static at all.  It’s moving and vibrating.  Every bit of matter is a set of vibrations.  And we can use this concept as a way of describing matter."

This new approach has impressed a lot of people, not only for its potential applications but also for how amazingly creative it is.  This is why it drives me nuts when people say that science isn't a creative process.  They apparently have the impression that science is pure grunt work: inoculating petri dishes, looking at data from particle accelerators, analyzing rock layers.  But at its heart, the best science is about making connections between disparate ideas -- just like this research does -- and is as deeply creative as writing a symphony.

"Markus Buehler has been gifted with a most creative soul, and his explorations into the inner workings of biomolecules are advancing our understanding of the mechanical response of biological materials in a most significant manner," said Marc Meyers, professor of materials science at the University of California at San Diego, who was not involved in this work.  "The focusing of this imagination to music is a novel and intriguing direction.  This is experimental music at its best.  The rhythms of life, including the pulsations of our heart, were the initial sources of repetitive sounds that engendered the marvelous world of music.  Markus has descended into the nanospace to extract the rhythms of the amino acids, the building blocks of life."

What is most amazing about this is the potential for the AI, once trained, to go in reverse -- to be given an altered musical pattern, and to predict from it what a protein engineered from that music would do.  Proteins are perhaps the most fundamental pieces of living things; the majority of genes do what they do by making proteins, which then guide processes within the organism (including, frequently, affecting other genes).  The idea that we could use music as a lens into how our biochemistry works is kind of stunning.

So that's your science-is-so-freaking-cool moment for the day.  I peruse the science news pretty much daily, looking for intriguing new research, but this one's gonna be hard to top.  Now I think I'm going to go back to the paper and click on the sound links -- and listen to the proteins sing.

****************************************



Tuesday, March 28, 2023

Escaping the bottle

Two years ago, I wrote a post about the work of Nick Bostrom (of Oxford University) and David Kipping (of Columbia University) regarding the unsettling possibility that we -- and by "we," I mean the entire observable universe -- might be a giant computer simulation.

There are a lot of other scientists who take this possibility seriously.  In fact, back in 2016 there was a fascinating panel discussion (well worth watching in its entirety), moderated by astrophysicist Neil deGrasse Tyson, considering the question.  Interestingly, Tyson -- who I consider to be a skeptic's skeptic -- was himself very accepting of the claim, and said at the end that if hard evidence is ever found that we are living in a simulation, he'll "be the only one in the room who's not surprised."

Other participants brought up some mind-boggling points.  The brilliant Swedish-American cosmologist Max Tegmark, of MIT, asked why the fundamental rules of physics are mathematical.  He went on to point out that if you were a character inside a computer game (even a simple one), and you started to analyze the behavior of things in the game from within the game -- i.e., to do science -- you'd see the same thing.  Okay, in our universe the math is more complicated than the rules governing a computer game, but when you get down to the most basic levels, it's still just math.  "Everything is mathematical," he said.  "And if everything is mathematical, then it's programmable."

One of the most interesting approaches came from Zohreh Davoudi, also of MIT.  Davoudi is studying high-energy cosmic rays -- orders of magnitude more energetic than anything we can create in the lab -- as a way of probing the universe for what amount to glitches in the simulation.  It's analogous to the screen-door effect, a well-known phenomenon in visual displays, where (because there isn't sufficient resolution or computing power to give an infinitely smooth picture) images pixelate if you zoom in too much.  The same thing, Davoudi says, could happen at extremely high energies; since you'd need an infinite amount of information to simulate the behavior of particles on those scales, glitchiness in extreme conditions could be a hint we're inside a simulation.  "We're looking for evidence of cutting corners to make the simulation run with less demand on memory," she said.  "It's one way to test the claim empirically."

The reason this comes up is because of a recent paper by Roman Yampolskiy (of the University of Louisville) called, simply, "How to Hack the Simulation?"  Yampolskiy springboards from the arguments of Bostrom, Kipping, and others -- if you accept that it's possible, or even likely, that we're in a simulation, is there a way to hack our way out of it?

The open question, of course, is whether we should.  As I recall from The Matrix, the world inside the Matrix was a hell of a lot more pleasant than the apocalyptic hellscape outside it.

Be that as it may, Yampolskiy presents a detailed argument about whether it's even possible to hack ourselves out of a simulation (and answers the question "yes").  Not only does he, like Tegmark, use examples from computer games; he also describes an astonishing experiment I'd never heard of, in which the connectome (map of neural connections in the brain) of a roundworm, Caenorhabditis elegans, was uploaded into a robot body, which was then able to navigate its environment exactly as the real, living worm did.  (The more I think about this experiment, the more freaked out I become.  Did the robotic worm know it was in a simulated body?)

Evaluating the strength of Yampolskiy's technical arguments is a bit beyond me, but where it becomes really interesting is when he gets into concrete suggestions of how we could get a glimpse of the world outside the simulation.  One method, he says, is to get enormous numbers of people to do something identical and (presumably) easy to simulate, then have them all simultaneously do something different.  He writes:

If, say, 100 million of us do nothing (maybe by closing our eyes and meditating and thinking nothing), then the forecasting load-balancing algorithms will pack more and more of us in the same machine.  The next step is, then, for all of us to get very active very quickly (doing something that requires intense processing and I/O) all at the same time.  This has a chance to overload some machines, making them run short of resources, being unable to meet the computation/communication needed for the simulation.  Upon being overloaded, some basic checks will start to be dropped, and the system will be open for exploitation in this period...  The system may not be able to perform all those checks in an overloaded state...  We can... try to break causality.  Maybe by catching a ball before someone throws it to you.  Or we can try to attack this by playing with the timing, trying to make things asynchronous.

Of course, the problem here is that it's damn near impossible to get a hundred people to cooperate and follow directions, much less a hundred million.

Another suggestion is to increase the demand on the system by creating our own simulation -- a possibility Bostrom and Kipping considered, that we could be in a near-infinite nesting of universes within universes.  Yampolskiy says the problem is computing power; even if we're positing a simulator way smarter than we are, there's a limit, and we might be able to exploit that:

The most obvious strategy would be to try to cause the equivalent of a stack overflow—asking for more space in the active memory of a program than is available—by creating an infinitely, or at least excessively, recursive process.  And the way to do that would be to build our own simulated realities, designed so that within those virtual worlds are entities creating their version of a simulated reality, which is in turn doing the same, and so on all the way down the rabbit hole.  If all of this worked, the universe as we know it might crash, revealing itself as a mirage just as we winked out of existence.

In which case the triumph of being right would be cancelled out rather spectacularly by the fact that we'd immediately afterward cease to exist.
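For anyone who hasn't watched a recursive process eat its own stack, the analogy is easy to demonstrate.  This little Python sketch (my own illustration, not code from Yampolskiy's paper) spins up "universes all the way down" until the interpreter runs out of room:

```python
# Each simulated universe spins up a nested simulation of its own --
# unbounded recursion, the software equivalent of the infinitely
# nested simulations Yampolskiy describes.

def simulate(depth: int = 0) -> int:
    """Universe at `depth` creates a universe at `depth + 1`, forever."""
    return simulate(depth + 1)

# Python guards its call stack with a recursion limit, so instead of
# the whole process crashing we get a catchable RecursionError.
try:
    simulate()
except RecursionError:
    print("stack overflow: simulation crashed")
```

Python politely raises RecursionError instead of taking the whole machine down; the hope, presumably, is that the Great Simulators' error handling is less robust than CPython's.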

The whole question is as fascinating as it is unsettling, and Yampolskiy's analysis is at least a start (along with more technical approaches like Davoudi's cosmic ray experiments) toward putting this on firmer scientific ground.  Until we can do that, I tend to agree with theoretical physicist Sylvester James Gates, of the University of Maryland, who criticizes the simulation argument as not being science at all.  "The simulator hypothesis is equivalent to God," Gates said.  "At its heart, it is a theological argument -- that there's a programmer who lives outside our universe and is controlling things here from out there.  The fact is, if the simulator's universe is inaccessible to us, it puts the claim outside the realm of science entirely."

So despite Bostrom and Kipping's mathematical argument and Tyson's statement that he won't be surprised to find evidence, I'm still dubious -- not because I don't think it's possible we're in a simulation, but because I don't believe that it's going to turn out to be testable.  I doubt very much that Mario knows he's a two-dimensional image on a computer monitor, for example; even though he actually is, I don't see how he could figure that out from inside the program.  (That particular problem was dealt with in brilliant fashion in the Star Trek: The Next Generation episode "Ship in a Bottle" -- where in the end even the brilliant Professor Moriarty never did figure out that he was still trapped on the Holodeck.)


So those are our unsettling thoughts for the day.  Me, I have to wonder why, if we are in a simulation, the Great Simulators chose to make this place so freakin' weird.  Maybe it's just for the entertainment value.  As Max Tegmark put it, "If you're unsure at the end of the day if you live in a simulation, go out there and live really interesting lives and do unexpected things so the simulators don't get bored and shut you down." 

Which seems like good advice whether we're in a simulation or not.

****************************************



Monday, March 27, 2023

The avalanche

I always give a grim chuckle whenever someone on the far right calls us liberals "snowflakes," because when it comes to taking offense over absolutely everything, there's nothing like a MAGA Republican.

If you think I'm overstating my case, you have only to look at what's currently happening in the state of Florida to see that if anything, I'm being generous.  The right-wing elected officials in Florida are so pants-wettingly terrified of any viewpoints other than their own Christofascist agenda that they don't even want anyone finding out there are people who think differently.

Take, for example, the school principal in Tallahassee who was forced to resign because she had the temerity to show sixth-grade students a photograph of Michelangelo's David.

[Image licensed under the Creative Commons © Jörg Bittner Unna, 'David' by Michelangelo Fir JBU005 denoised, CC BY-SA 3.0]

David was originally commissioned to be placed in Florence Cathedral.  In, to make it abundantly clear, a Christian house of worship.  But it was soon considered such a masterpiece of art that it was taken out -- and placed in the public square outside the Palazzo Vecchio, so it could be seen by everyone.

But now?  According to the elected officials of Florida, whose sensibilities haven't even caught up to the sixteenth century, we can't have sixth graders see a world-renowned piece of sculpture, evidently because then they'll find out that people have genitals.

Then there are book bans.  Clay County School District just announced a new list of books that are officially banned from any school in the district, bringing the total up to 355.  Here are the new additions:


It doesn't take a genius to notice a pattern here.  Anything dealing with LGBTQ+ themes (Heartstopper, Radio Silence, One Man Guy), anything to do with the Black experience (Americanah, Notes from a Young Black Chef, Punching the Air, and Black Brother, Black Brother, among many others), anything criticizing Republicans (Russian Hacking in American Elections), and anything written by an outspoken liberal (The Fault in Our Stars, Slaughterhouse-Five).

Apparently we can't have anyone finding out there's a world out there besides those who are straight, white, Christian conservatives.

You'd think if these people were as confident in the self-evident righteousness of their own beliefs as they claim to be, they wouldn't be so fucking scared of the rest of us.

I think the problem here is that we've allowed the purveyors of this narrow-minded, bigoted bullshit to portray themselves as the valiant defenders of the cause, instead of calling them what they are: craven cowards.  They are constantly, deeply fearful, afraid that any exposure to a view beyond their own tiny, terrified world will cause the entire thing to come crashing down like a house of cards.

It's pathetic, really.  No wonder so many of them carry assault rifles when they go to Walmart.

When it comes down to it, though, isn't all fascism about fear?  Why would you be so desperate to build an autocracy if you weren't afraid of dissent?  Yeah, there's the attraction of power and its perks, I get that; but really, the desperation to crush all opposing views is born from a deep-seated and terrified knowledge that if people find out there are other ways, they'll realize they've been lied to and start demanding scary stuff like free speech and free access to information.

So to Ron DeSantis and his cronies who are so determined to erase those of us who aren't like them: I'm sorry you're so bone-shakingly terrified.  I do feel badly for you, because it must be a horrible way to live.  But just because I pity you doesn't mean that I and the others like me are going to stand silent and let you erase us.  You want to fight?  Well, battle joined.

I think you're about to find out that a bunch of snowflakes together create an avalanche.

****************************************



Saturday, March 25, 2023

Myth come to life

While I've been known to make fun of the cryptid hunters, there's something to be said for their persistence.

Not only do we have people working hard to prove the continued existence of animals thought by science to be extinct -- most notably the thylacine (Thylacinus cynocephalus) of Tasmania, which actually has a Facebook page devoted to sightings -- there are also the devotees of animals science has never recognized in the first place, like Bigfoot, Nessie, and dozens of lesser-known denizens of myth and legend.

Despite my skepticism, no one would be more delighted than me if one of these elusive beasties turned out to be real.  Which is why I was so tickled when a friend and loyal reader of Skeptophilia sent me a link about a cryptid I'd never heard of -- the Corsican cat-fox -- which was just proven to be very real indeed.

The legend has been around for centuries: a wildcat in Corsica, larger than your typical house cat, with rusty brown fur and a long, ringed tail, notorious for raiding chicken coops.  Called ghjattu-volpe -- "cat-fox" -- in the Corsican language, it was thought to be a myth.

It's not.  In an intensive effort to establish the legend's veracity, the ghjattu-volpe was found -- not only photographed, but captured for DNA sampling.


Genetic analysis has shown that its DNA is distinct from that of domestic cats, of wildcats in mainland Europe, and of wildcats on the neighboring island of Sardinia.

The fact that this animal stayed undetected for so long has left the locals saying "see, we told you so," and encouraged the absolute hell out of the proponents of other elusive animal claims.  Even so, I think some cryptids are unlikely in the extreme -- the Loch Ness Monster topping that list.  The idea that there is a breeding population of plesiosaurs in Loch Ness, which somehow survived the last ice age (during which that region of Scotland was under a thirty-meter-thick sheet of ice) and has gone undetected despite years of searching with sonar and other high-tech telemetry devices, strikes me as a little ridiculous.

However, I don't find anything inherently implausible about there being a large, elusive proto-hominid in the Pacific Northwest.  I lived in Seattle for ten years and spent my summers camping in the Cascades and Olympics, and man, that is some trackless wilderness up there.  Neither do I doubt the possibility of the survival of thylacines, ivory-billed woodpeckers, and various other thought-to-be-extinct species.

But "possible" and "not inherently implausible" don't equal "real."  I remain very much a "show me the money" type.  And that means more than just blurred photos and videos.  (To borrow a phrase from Neil deGrasse Tyson, Photoshop probably has an "add Bigfoot" button.)  Until there's hard evidence, I'm not going to be in the True Believer column.

Even so, I have to admit that the Corsican cat-fox certainly is encouraging to those of us who want to believe.

****************************************



Friday, March 24, 2023

The writing's on the wall

When you think about it, writing is pretty weird.

Honestly, language in general is odd enough.  Unlike (as far as we know for sure) any other species, we engage in arbitrary symbolic communication -- using sounds to represent concepts.  The arbitrary part means there's no logical link between a sound and the concept it stands for; there's nothing any more doggy about the English word dog than there is about the French word chien or the German word Hund (or any of the thousands of other words for dog in various human languages).  With the exception of the few words that are onomatopoeic -- like bang, bonk, crash, and so on -- the word-to-concept link is random.

Written language adds a whole extra layer of randomness, because (again, with the exception of the handful of languages with truly pictographic scripts) the connections between the concept, the spoken word, and the written word are all arbitrary.  (I discussed the different kinds of scripts out there in more detail in a post a year ago, if you're curious.)

Which makes me wonder how such a complex and abstract notion ever caught on.  We have at least a fairly good model of how the alphabet used for the English language evolved, starting out as a pictographic script and becoming less concept-based and more sound-based as time went on:


The conventional wisdom about writing is that it began in Sumer something like six thousand years ago, beginning with fired clay bullae that allowed merchants to keep track of transactions by impression into soft clay tablets.  Each bulla had its own symbol; some were symbols for the type of goods, others for numbers.  Once the Sumerians made the jump of letting marks stand for concepts, it wasn't such a huge further step to make marks for other concepts, and ultimately, for syllables or individual sounds.

The reason all this comes up is that a recent paper in the Cambridge Archaeological Journal claims that marks associated with cave paintings in France and Spain, long thought to be random, are actually meaningful -- an assertion that would push back the earliest known writing another fourteen thousand years.

The authors assessed 862 strings of symbols dating back to the Upper Paleolithic in Europe -- most commonly dots, slashes, and symbols like a letter Y -- and came to the conclusion that they were not random, but were true written language, for the purpose of keeping track of the mating and birthing cycles of the prey animals depicted in the paintings.

The authors write:

[Here we] suggest how three of the most frequently occurring signs—the line <|>, the dot <•>, and the <Y>—functioned as units of communication.  We demonstrate that when found in close association with images of animals the line <|> and dot <•> constitute numbers denoting months, and form constituent parts of a local phenological/meteorological calendar beginning in spring and recording time from this point in lunar months.  We also demonstrate that the <Y> sign, one of the most frequently occurring signs in Palaeolithic non-figurative art, has the meaning <To Give Birth>.  The position of the <Y> within a sequence of marks denotes month of parturition, an ordinal representation of number in contrast to the cardinal representation used in tallies.  Our data indicate that the purpose of this system of associating animals with calendar information was to record and convey seasonal behavioural information about specific prey taxa in the geographical regions of concern.  We suggest a specific way in which the pairing of numbers with animal subjects constituted a complete unit of meaning—a notational system combined with its subject—that provides us with a specific insight into what one set of notational marks means.  It gives us our first specific reading of European Upper Palaeolithic communication, the first known writing in the history of Homo sapiens.

The claim is controversial, of course, and is sure to be challenged; moving the date of the earliest writing from six thousand to twenty thousand years ago isn't a small shift in our model.  But if it bears up, it's pretty extraordinary.  It further gives the lie to our concept of Paleolithic humans as brutal, stupid "cave men," incapable of any kind of mental sophistication.  As I hope I made clear in my first paragraphs, any kind of written language requires subtlety and complexity of thought.  If the beauty of the cave paintings in places like Lascaux doesn't convince you of the intelligence and creativity of our distant forebears, surely this will.
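The scheme the authors propose is concrete enough to sketch as a toy decoder: tally marks count lunar months from the start of spring, and the ordinal position of the <Y> marks the month of birth.  To be clear, the example sequence and the exact reading conventions below are my own illustrative assumptions, not transcriptions from any actual cave panel:

```python
# Toy decoder for the calendar scheme proposed in the paper:
# <|> and <•> marks tally lunar months from the start of spring,
# and the ordinal position of <Y> in the sequence denotes the month
# of parturition (birth).  The sequence and conventions here are
# illustrative assumptions, not data from a real cave panel.

def decode(marks: str) -> dict:
    """Read a mark sequence like '||•Y|' left to right."""
    # Every tally mark (line or dot) counts as one lunar month.
    total_months = sum(1 for m in marks if m in "|•")
    # <Y> is read ordinally: its position in the sequence is the month.
    birth_month = marks.index("Y") + 1 if "Y" in marks else None
    return {"months_from_spring": total_months, "birth_month": birth_month}

print(decode("||•Y|"))  # four tally marks, Y in the fourth position
```

The interesting structural claim survives even in this toy form: the tallies are cardinal numbers, while the <Y> is ordinal -- two different ways of using number in the same notation.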

So what I'm doing now -- speaking to my fellow humans via strings of visual symbols -- may have a much longer history than we ever thought.  It's awe-inspiring that we landed on this unique way to communicate; even more that we stumbled upon it so long ago.

****************************************



Thursday, March 23, 2023

The nibblers

I'm always on the lookout for fascinating, provocative topics for Skeptophilia, but even so, it's seldom that I read a scientific paper with my jaw hanging open.  But that was the reaction I had to a paper from a couple of months ago in Nature that I just stumbled across yesterday.

First, a bit of background.

Based on the same kind of genetic evidence I described in yesterday's post, biologists have divided all living things into three domains: Eukarya, Bacteria, and Archaea.  Eukarya contains the eukaryotes -- organisms with true nuclei and complex systems of organelles -- and is broken down into four kingdoms: protists, plants, fungi, and animals.  Bacteria contains, well, bacteria: all the familiar groups of single-celled organisms that lack nuclei and most of the other membrane-bound organelles.  Archaea are superficially bacteria-like; they're mostly known from environments most other living things would consider hostile, like extremely salty water, anaerobic mud, and acidic hot springs.  In fact, they used to be called archaebacteria (and lumped together with Bacteria into "Kingdom Monera") until Carl Woese discovered in 1977 that Archaea are genetically more similar to eukaryotes like ourselves than they are to ordinary bacteria -- a discovery that forced a complete revision of how taxonomy is done.

So things have stood since 1977: three domains (Bacteria, Archaea, and Eukarya), and within Eukarya four kingdoms (Protista, Plantae, Fungi, and Animalia).

But now a team led by Denis Tikhonenkov, of the Russian Academy of Sciences, has published a paper called "Microbial Predators Form a New Supergroup of Eukaryotes" that looks like it's going to force another overhaul of the tree of life.

Rather than trying to summarize, I'm going to quote directly from the Tikhonenkov et al. paper so you get the full impact:

Molecular phylogenetics of microbial eukaryotes has reshaped the tree of life by establishing broad taxonomic divisions, termed supergroups, that supersede the traditional kingdoms of animals, fungi and plants, and encompass a much greater breadth of eukaryotic diversity.  The vast majority of newly discovered species fall into a small number of known supergroups.  Recently, however, a handful of species with no clear relationship to other supergroups have been described, raising questions about the nature and degree of undiscovered diversity, and exposing the limitations of strictly molecular-based exploration.  Here we report ten previously undescribed strains of microbial predators isolated through culture that collectively form a diverse new supergroup of eukaryotes, termed Provora.  The Provora supergroup is genetically, morphologically and behaviourally distinct from other eukaryotes, and comprises two divergent clades of predators—Nebulidia and Nibbleridia—that are superficially similar to each other, but differ fundamentally in ultrastructure, behaviour and gene content.  These predators are globally distributed in marine and freshwater environments, but are numerically rare and have consequently been overlooked by molecular-diversity surveys. In the age of high-throughput analyses, investigation of eukaryotic diversity through culture remains indispensable for the discovery of rare but ecologically and evolutionarily important eukaryotes.

The members of Provora are distinguished not only genetically but by their behavior; to my eye they look a bit like a basketball with tentacles, using weird little tooth-like structures to nibble their way forward as they creep along.  (Thus "nibblerid," which is their actual name, despite the fact that it sounds like a comical monster species from Doctor Who.)  The first one discovered (in 2017), the euphoniously named Ancoracysta twista, is a predator on tropical coral, and was found in (of all places) a home aquarium.  Since then, they've turned up all over the place, although they're not common anywhere; the only place they've never been seen is on land.  But just about every aquatic environment, fresh or marine, has provorans of some kind.

An electron micrograph of a provoran [Image from Tikhonenkov et al.]

The provorans appear to be closely related to no other eukaryote, and Tikhonenkov et al. are proposing that they warrant placement in their own supergroup (a rank that sits above the traditional kingdoms).  But the discovery raises the question of how many more outlier supergroups there might be.  A 2022 analysis by Sijia Liu et al. estimated the number of microbial species on Earth at somewhere around three million, of which only twenty percent have been classified.  It's easy to overlook them, given that they're microscopic -- but that means there could be dozens of other branches of the tree of life out there about which we know nothing.

It's amazing how much more sophisticated our understanding of evolutionary descent has become.  When I was a kid (back in medieval times), we learned in science class that there were three divisions: animals, plants, and microbes.  (I even had a Golden Guide called Non-Flowering Plants -- which included mushrooms.)  Then it was found that fungi and animals were more closely related than fungi and plants, and that microbes with nuclei and organelles (like amoebas) were vastly different from those without (like bacteria).  There it stood till Woese came along in 1977 and told us that the prokaryotes weren't a single group, either.

And now we've got another new branch to add to the tree: the nibblers.  It further illustrates that we don't have to look into outer space to find new and astonishing things to study; there is a ton we don't know about what's right here on Earth.

****************************************