Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, March 31, 2023

The global melting pot

One of the shakiest concepts in biological anthropology is race.

Pretty much all biologists agree that race, as usually defined, has very little genetic basis.  Note that I'm not saying race doesn't exist; just that it's primarily a cultural, not a biological, phenomenon.  Given the fact that race has been used as the basis for systematic oppression for millennia, it would be somewhere beyond disingenuous to claim that it isn't real.

The problem is, determination of race has usually been based on a handful of physical characteristics, most often skin, eye, and hair pigmentation and the presence or absence of an epicanthal fold across the inner corner of the eye.  These traits are not only superficial and not necessarily indicative of any underlying relationship; the pigment-related ones are also highly subject to natural selection.  Back in the nineteenth and early twentieth centuries, however, these highly oversimplified and drastically inaccurate criteria were used to develop maps like this one:

The "three great races" according to the 1885 Meyers Konversations-Lexikon 

This subdivides all humanity into three groups -- "Caucasoid" (shown in various shades of blue), "Negroid" (shown in brown), and "Mongoloid" (shown in yellow and orange).  (The people of India and Sri Lanka, shown in green, are said to be "of uncertain affinities.")  If you're jumping up and down saying, "Wait, but... but..." -- well, you should be.  The lumping together of people like Indigenous Australians and all sub-Saharan Africans (based mainly on skin color) is only the most glaring error.  (Another is that any classification putting the Finns, Polynesians, Koreans, and Mayans into a single group has something seriously amiss.)

The worst part of all of this is that this sort of map was used to justify colonialism.  If you believed that there really was a qualitative (for that, read genetic) difference between the "three great races," it was only one step further to decide which one was the best and shrug your shoulders as that one subjugated the other two.

The truth is way more complicated, and way more interesting.  By far the highest amount of genetic diversity in the world is in sub-Saharan Africa; a 2009 study by Jeffrey Long found more genetic differences between individuals from two different ethnic groups in central Africa than between a typical White American and a typical person from Japan.  To quote a paper by Long, Keith Hunley, and Graciela Cabana that appeared in The American Journal of Physical Anthropology in 2015: "Western-based racial classifications have no taxonomic significance."

The reason all this comes up -- besides, of course, the continuing relevance of this discussion to the aforementioned systematic oppression based on race that is still happening in many parts of the world, including the United States -- is a paper that appeared last week in Nature looking at the genetics of the Swahili people of east Africa, a large ethnic group extending from southern Somalia down to northern Mozambique.  While usually thought to be a quintessentially sub-Saharan African population, the Swahili were found to have only around half of their genetic ancestry from known African roots; the other half came from southwestern Asia, primarily Persia, India, and Arabia.

The authors write:

[We analyzed] ancient DNA data for 80 individuals from 6 medieval and early modern (AD 1250–1800) coastal towns and an inland town after AD 1650.  More than half of the DNA of many of the individuals from coastal towns originates from primarily female ancestors from Africa, with a large proportion—and occasionally more than half—of the DNA coming from Asian ancestors.  The Asian ancestry includes components associated with Persia and India, with 80–90% of the Asian DNA originating from Persian men.  Peoples of African and Asian origins began to mix by about AD 1000, coinciding with the large-scale adoption of Islam.  Before about AD 1500, the Southwest Asian ancestry was mainly Persian-related, consistent with the narrative of the Kilwa Chronicle, the oldest history told by people of the Swahili coast.  After this time, the sources of DNA became increasingly Arabian, consistent with evidence of growing interactions with southern Arabia.  Subsequent interactions with Asian and African people further changed the ancestry of present-day people of the Swahili coast in relation to the medieval individuals whose DNA we sequenced.

Note that on the Meyers Konversations-Lexikon map, the Arabians and Persians are considered "Caucasoid," the Indians are "of uncertain affinities," while the Swahili are definitely "Negroid."

A bit awkward, that.

It's appalling that we still use an outmoded and scientifically unsound concept to justify bigotry, prejudice, and discrimination, despite the mountains of evidence showing that there's no biological basis whatsoever to the way race is usually defined.  Easy, I suppose, to hang on to your biases like grim death rather than question them when new data come along.  Not even all that new; the Long study I referenced above is from fourteen years ago.  And hell, the Italian geneticist Luigi Luca Cavalli-Sforza was researching all this back in the 1960s.  Okay, it takes time for people's minds to catch up with scientific discovery, but how much damn time do you need?

The truth is that (1) ultimately, we all come from Africa, (2) since then, we've continued to move around all over the place, and therefore (3) the world is just a huge single melting pot.  Oh, and (4), the result is that we're all of (very) mixed ancestry.  I'm sorry if that makes some people feel squinky, but as I've pointed out before, the universe is under no obligation to align with your preconceived notions about how the world should work.

Time to accept the beauty and complexity of our shared humanity, and stop looking for further ways to divide us.


Thursday, March 30, 2023

Dark days

I'm going to propose a new law, in the vein of Murphy's Law ("If it can go wrong, it will"), Betteridge's Law ("If a headline ends in a question mark, the answer is 'no'"), and Poe's Law ("A sufficiently well-done satire is indistinguishable from the real thing"): "If a statement begins with 'Scientists claim...' without mentioning any specific scientists, it's completely made up."  Call it Bonnet's Law.

I ran into an excellent (by which I mean "ridiculous") example of that over at the site Anomalien just yesterday, called "The Mysterious Phenomenon of the Onset of Sudden Darkness."  The article, which is (as advertised) about times when darkness suddenly fell during the day for no apparent reason, gets off to a great start by citing the Bible (specifically the darkness sent by God in the Book of Exodus to punish the Egyptians for keeping Moses et al. in slavery), because that's clearly admissible as hard evidence.  "Scientists," we are told, "are seriously concerned about this phenomenon."

I have spoken with a great many scientists over the years, and not a single one of them has voiced any concern about sudden-onset darkness.  Maybe they're keeping it secret because they don't want us laypeople getting scared, or something.

That being said, and even excluding the Pharaonic Plagues, the claim has been around for a while.  One of my favorite books growing up -- I still have my rather battered copy, in fact -- was Strangely Enough, by C. B. Colby, which deals with dozens of weird "Strange But True!" tales.  One of them, called "New England's Darkest Day," describes an event that allegedly occurred on May 19, 1780, in which pitch darkness fell on a sunny day.  Colby writes:

May 19 dawned as bright and clear as usual, except that there appeared to be a haze in the southwest.  (One town history reports that it was raining.)  This haze grew darker, and soon the whole sky was covered with a thick cloud which was traveling northeast rapidly.  It reached the Canadian border by midmorning.  Meanwhile the eastern part of New York, as well as Maine, New Hampshire, Rhode Island, Massachusetts, and Connecticut were becoming darker.

By one o'clock some sections were so dark that white paper held a few inches from the eyes couldn't be seen.  It was as dark as a starless night.  Apprehension soon turned to panic.  Schools were dismissed, and lanterns and candles were lighted in homes and along the streets...

That night the darkness continued, and it was noted that by the light of lanterns everything seemed to have a greenish hue.  A full moon, due to rise at nine, did not show until after 1 AM, when it appeared high in the sky and blood-red.  Shortly afterward stars began to appear, and the following morning the sun was as bright as ever, after fourteen hours of the strangest darkness ever to panic staunch New Englanders.

Surprisingly, there's no doubt this actually happened; as Colby states, it's recorded in dozens of town histories.  However, the actual cause isn't anything paranormal.  It was most likely a combination of dense fog and the smoke from a massive forest fire in what is now Algonquin Provincial Park in Ontario, which left evidence in the form of tree ring scars from the late spring of that year, precisely when the "Dark Day" occurred.  And, in fact, Colby conveniently doesn't mention that there are also reports in town histories that "the air smelled like soot" and after the sky cleared, some places (especially in New Hampshire) had layers of ash on the ground up to fifteen centimeters deep.

Kind of blows away the mystery, doesn't it?

Artist's depiction of the "Dark Day" [Image is in the Public Domain, courtesy of the New England Historical Society]

The Anomalien article isn't even on as firm ground as Colby's.  The majority of their accounts are single-person anecdotes; even the ones that aren't have very little going for them.  Take, for example, the case in Louisville, Kentucky, which they say is so certain "it's almost become a textbook" [sic].  On March 7, 1911, they say, a "viscous darkness" fell upon the entire city, lasting for an hour and resulting in massive panic.

Funny that such a strange, widespread, and terrifying event merited zero mention in the Louisville newspaper that came out only four days later.  You'd think it'd have been headline news.

That doesn't stop the folks at Anomalien from attributing the phenomenon to you-know-who:

Is it all aliens to be blamed?  Researchers... believe that unexpected pitch darkness occurs in the event of a violation of the integrity of space.  At such moments, it is possible to penetrate both into different dimensions and worlds, and out of them...  

Some researchers believe that the phenomenon of sudden pitch darkness is associated with the presence on earth of creatures, unknown to science, with supernatural abilities.  All these cryptids and other strange creatures enter our world through the corridors of pitch darkness.  And they seem to be more familiar with this phenomenon than we are.  They know when this passage will open, and they use it.  Only they do not immediately disappear along with the darkness, but wait for the next opportunity to return to their world.

Oh?  "Researchers believe that," do they?  I'll be waiting for the paper in Science.

Anyhow, there you have it.  Bonnet's Law in action.  I'm just as happy that the claim is nonsense; the sun's out right now, and I'm hoping it stays that way.  It's gloomy enough around here in early spring without aliens and cryptids and whatnot opening dimensional portals and creating "corridors of pitch darkness."  Plus, having creatures ("unknown to science, with supernatural abilities") bumbling about in the dark would freak out my dog, who is -- no offense to him intended, he's a Very Good Boy -- a great big coward.

So let's just keep the lights on, shall we?  Thanks.


Wednesday, March 29, 2023

The biochemical symphony

Sometimes I run into a piece of scientific research that's so odd and charming that I just have to tell you about it.

Take, for example, the paper that appeared in ACS Nano that ties together two of my favorite things -- biology and music.  It has the imposing title "A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence," and was authored by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, and Markus J. Buehler, all of the Massachusetts Institute of Technology.  Their research uses a fascinating lens to study protein structure: converting the amino acid sequence and structure of a protein into music, then having AI software study the resulting musical pattern as a way of learning more about how proteins function -- and how that function might be altered.

What's cool is that the musical note that represents each amino acid isn't randomly chosen.  It's based on the amino acid's actual quantum vibrational frequency.  So when you listen to it, you're not just hearing a whimsical combination of notes based on something from nature; you're actually hearing the protein itself.
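As a toy illustration of the idea (not the authors' actual mapping, which derives each pitch from measured vibrational spectra), here's a minimal sketch of sequence-to-note sonification; the pitch assignments below are invented purely for demonstration:

```python
# Toy sketch of protein "sonification" -- NOT the Yu et al. method.
# The pitch table is invented for illustration; the real study derives
# each note from the amino acid's molecular vibrational spectrum.

# Hypothetical mapping from one-letter amino acid codes to MIDI pitch numbers
PITCH = {
    "G": 60, "A": 62, "S": 64, "P": 65, "V": 67,
    "T": 69, "L": 71, "I": 72, "N": 74, "D": 76,
}

def sonify(sequence):
    """Convert an amino acid sequence into a list of MIDI pitches,
    skipping any residues our toy table doesn't cover."""
    return [PITCH[aa] for aa in sequence if aa in PITCH]

# A short, made-up peptide fragment
notes = sonify("GAVLSD")
print(notes)  # → [60, 62, 67, 71, 64, 76]
```

The real method also encodes secondary structure and rhythm, which is what gives the AI something meaningful to classify; this sketch captures only the sequence-to-note step.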

[Image licensed under the Creative Commons © Nevit Dilmen, Music 01754, CC BY-SA 3.0]

In an article about the research in MIT News, written by David L. Chandler, you can hear clips from the Yu et al. study.  I recommend the second one especially -- the one titled "An Orchestra of Amino Acids" -- which is a "sonification" of spider silk protein.  The strange, percussive rhythm is kind of mesmerizing, and if someone had told me that it was a composition by an avant-garde modern composer -- Philip Glass, perhaps, or Steve Reich -- I would have believed it without question.  But what's coolest about this is that the music actually means something beyond the sound.  The AI is now able to discern the difference between some basic protein structures, including two of the most common -- the alpha-helix (shaped like a spring) and the beta-pleated sheet (shaped like the pleats on a kilt) -- because they sound different.  This gives us a lens into protein function that we didn't have before.  "[Proteins] have their own language, and we don’t know how it works," said Markus Buehler, who co-authored the study.  "We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme.  We don’t know the code."

But this is exactly what the AI, and the scientists running it, hope to find out.  "When you look at a molecule in a textbook, it’s static," Buehler said.  "But it’s not static at all.  It’s moving and vibrating.  Every bit of matter is a set of vibrations.  And we can use this concept as a way of describing matter."

This new approach has impressed a lot of people, not only for its potential applications but for how amazingly creative it is.  This is why it drives me nuts when people say that science isn't a creative process.  They apparently have the impression that science is pure grunt work: inoculating petri dishes, looking at data from particle accelerators, analyzing rock layers.  But at its heart, the best science is about making connections between disparate ideas -- just like this research does -- and is as deeply creative as writing a symphony.

"Markus Buehler has been gifted with a most creative soul, and his explorations into the inner workings of biomolecules are advancing our understanding of the mechanical response of biological materials in a most significant manner," said Marc Meyers, professor of materials science at the University of California at San Diego, who was not involved in this work.  "The focusing of this imagination to music is a novel and intriguing direction.  This is experimental music at its best.  The rhythms of life, including the pulsations of our heart, were the initial sources of repetitive sounds that engendered the marvelous world of music.  Markus has descended into the nanospace to extract the rhythms of the amino acids, the building blocks of life."

What is most amazing about this is the potential for the AI, once trained, to run in reverse -- to be given an altered musical pattern, and to predict from it what a protein engineered from that music would do.  Proteins are perhaps the most fundamental pieces of living things; the majority of genes do what they do by making proteins, which then guide processes within the organism (including, frequently, affecting other genes).  The idea that we could use music as a lens into how our biochemistry works is kind of stunning.

So that's your science-is-so-freaking-cool moment for the day.  I peruse the science news pretty much daily, looking for intriguing new research, but this one's gonna be hard to top.  Now I think I'm going to go back to the paper and click on the sound links -- and listen to the proteins sing.


Tuesday, March 28, 2023

Escaping the bottle

Two years ago, I wrote a post about the work of Nick Bostrom (of Oxford University) and David Kipping (of Columbia University) regarding the unsettling possibility that we -- and by "we," I mean the entire observable universe -- might be a giant computer simulation.

There are a lot of other scientists who take this possibility seriously.  In fact, back in 2016 there was a fascinating panel discussion (well worth watching in its entirety), moderated by astrophysicist Neil deGrasse Tyson, considering the question.  Interestingly, Tyson -- whom I consider to be a skeptic's skeptic -- was himself very accepting of the claim, and said at the end that if hard evidence is ever found that we are living in a simulation, he'll "be the only one in the room who's not surprised."

Other participants brought up some mind-boggling points.  The brilliant Swedish-American cosmologist Max Tegmark, of MIT, asked the question of why the fundamental rules of physics are mathematical.  He went on to point out that if you were a character inside a computer game (even a simple one), and you started to analyze the behavior of things in the game from within the game -- i.e., to do science -- you'd see the same thing.  Okay, in our universe the math is more complicated than the rules governing a computer game, but when you get down to the most basic levels, it still is just math.  "Everything is mathematical," he said.  "And if everything is mathematical, then it's programmable."

One of the most interesting approaches came from Zohreh Davoudi, also of MIT.  Davoudi is studying high-energy cosmic rays -- orders of magnitude more energetic than anything we can create in the lab -- as a way of probing the universe for what amount to glitches in the simulation.  It's analogous to the screen-door effect, a well-known phenomenon in visual displays, where (because there isn't sufficient resolution or computing power to give an infinitely smooth picture) if you zoom in too much, images pixelate.  The same thing, Davoudi says, could happen at extremely high energies; since you'd need an infinite amount of information to simulate the behavior of particles on those scales, glitchiness in extreme conditions could be a hint we're inside a simulation.  "We're looking for evidence of cutting corners to make the simulation run with less demand on memory," she said.  "It's one way to test the claim empirically."
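Davoudi's finite-resolution argument is closely analogous to aliasing in digital signal processing: a signal that varies faster than the sampling grid can represent doesn't just vanish, it masquerades as a slower one.  Here's a quick sketch of that analogy (mine, not her actual methodology):

```python
import math

# Toy illustration of the "finite resolution" idea: if a simulation
# samples the world on a discrete grid, anything varying faster than
# the grid can represent gets aliased onto something slower.
# (An analogy to Davoudi's cosmic-ray proposal, not her method.)

def sample(freq_hz, rate_hz, n):
    """Sample a sine wave of freq_hz at rate_hz, n points."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

rate = 10.0                     # 10 samples per second
fast = sample(9.0, rate, 10)    # 9 Hz: above the 5 Hz Nyquist limit
slow = sample(1.0, rate, 10)    # 1 Hz: faithfully representable

# On this grid, the 9 Hz wave is indistinguishable from a
# sign-flipped 1 Hz wave -- the telltale artifact of undersampling:
aliased = all(abs(f + s) < 1e-9 for f, s in zip(fast, slow))
print(aliased)  # → True
```

The hoped-for cosmic-ray signature is conceptually similar: systematic artifacts appearing only at energies too high for the "grid" to resolve.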

The reason this comes up is because of a recent paper by Roman Yampolskiy (of the University of Louisville) called, simply, "How to Hack the Simulation?"  Yampolskiy springboards from the arguments of Bostrom, Kipping, and others -- if you accept that it's possible, or even likely, that we're in a simulation, is there a way to hack our way out of it?

The open question, of course, is whether we should.  As I recall from The Matrix, the world inside the Matrix was a hell of a lot more pleasant than the apocalyptic hellscape outside it.

Be that as it may, Yampolskiy presents a detailed argument about whether it's even possible to hack ourselves out of a simulation (and answers the question "yes").  Not only does he, like Tegmark, use examples from computer games; he also describes an astonishing experiment I'd never heard of, in which the connectome (map of neural connections in the brain) of a roundworm, Caenorhabditis elegans, was uploaded into a robot body, which was then able to navigate its environment exactly as the real, living worm does.  (The more I think about this experiment, the more freaked out I become.  Did the robotic worm know it was in a simulated body?)

Evaluating the strength of Yampolskiy's technical arguments is a bit beyond me, but where it becomes really interesting is when he gets into concrete suggestions for how we could get a glimpse of the world outside the simulation.  One method, he says, is to get enormous numbers of people to do something identical and (presumably) easy to simulate, then have them all simultaneously switch to doing something different.  He writes:

If, say, 100 million of us do nothing (maybe by closing our eyes and meditating and thinking nothing), then the forecasting load-balancing algorithms will pack more and more of us in the same machine.  The next step is, then, for all of us to get very active very quickly (doing something that requires intense processing and I/O) all at the same time.  This has a chance to overload some machines, making them run short of resources, being unable to meet the computation/communication needed for the simulation.  Upon being overloaded, some basic checks will start to be dropped, and the system will be open for exploitation in this period...  The system may not be able to perform all those checks in an overloaded state...  We can... try to break causality.  Maybe by catching a ball before someone throws it to you.  Or we can try to attack this by playing with the timing, trying to make things asynchronous.

Of course, the problem here is that it's damn near impossible to get a hundred people to cooperate and follow directions, much less a hundred million.

Another suggestion is to increase the demand on the system by creating our own simulation -- a possibility Bostrom and Kipping considered, that we could be in a near-infinite nesting of universes within universes.  Yampolskiy says the problem is computing power; even if we're positing a simulator way smarter than we are, there's a limit, and we might be able to exploit that:

The most obvious strategy would be to try to cause the equivalent of a stack overflow—asking for more space in the active memory of a program than is available—by creating an infinitely, or at least excessively, recursive process.  And the way to do that would be to build our own simulated realities, designed so that within those virtual worlds are entities creating their version of a simulated reality, which is in turn doing the same, and so on all the way down the rabbit hole.  If all of this worked, the universe as we know it might crash, revealing itself as a mirage just as we winked out of existence.

In which case the triumph of being right would be cancelled out rather spectacularly by the fact that we'd immediately afterward cease to exist.
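The stack-overflow idea can be caricatured in a few lines of code; this sketch (a playful analogy of my own, not anything from Yampolskiy's paper) models nested simulations as recursive calls that eventually exhaust the host's finite resources:

```python
import sys

# Toy model of "simulations all the way down": each level spawns a
# nested simulation until the host runs out of stack.  The recursion
# limit stands in for the hypothetical simulator's finite memory.
# (A playful illustration of the stack-overflow idea, nothing more.)

sys.setrecursionlimit(1000)  # our "host universe's" resource budget

def spawn_nested_simulation(depth=0):
    try:
        # each simulated universe builds a simulation of its own...
        return spawn_nested_simulation(depth + 1)
    except RecursionError:
        # ...until the host can no longer afford another level
        return depth

max_depth = spawn_nested_simulation()
print(max_depth)  # some finite depth, well short of infinity
```

The point of the caricature is that any simulator with finite resources has a hard ceiling on how deep the nesting can go -- which is exactly the limit Yampolskiy proposes to push against.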

The whole question is as fascinating as it is unsettling, and Yampolskiy's analysis is at least a start (along with more technical approaches like Davoudi's cosmic ray experiments) toward putting this on firmer scientific ground.  Until we can do that, I tend to agree with theoretical physicist Sylvester James Gates, of the University of Maryland, who criticizes the simulation argument as not being science at all.  "The simulator hypothesis is equivalent to God," Gates said.  "At its heart, it is a theological argument -- that there's a programmer who lives outside our universe and is controlling things here from out there.  The fact is, if the simulator's universe is inaccessible to us, it puts the claim outside the realm of science entirely."

So despite Bostrom and Kipping's mathematical argument and Tyson's statement that he won't be surprised to find evidence, I'm still dubious -- not because I don't think it's possible we're in a simulation, but because I don't believe that it's going to turn out to be testable.  I doubt very much that Mario knows he's a two-dimensional image on a computer monitor, for example; even though he actually is, I don't see how he could figure that out from inside the program.  (That particular problem was dealt with in brilliant fashion in the Star Trek: The Next Generation episode "Ship in a Bottle" -- where in the end even the brilliant Professor Moriarty never did figure out that he was still trapped on the Holodeck.)

So those are our unsettling thoughts for the day.  Me, I have to wonder why, if we are in a simulation, the Great Simulators chose to make this place so freakin' weird.  Maybe it's just for the entertainment value.  As Max Tegmark put it, "If you're unsure at the end of the day if you live in a simulation, go out there and live really interesting lives and do unexpected things so the simulators don't get bored and shut you down." 

Which seems like good advice whether we're in a simulation or not.


Monday, March 27, 2023

The avalanche

I always give a grim chuckle whenever someone on the far right calls us liberals "snowflakes," because when it comes to taking offense over absolutely everything, there's nothing like a MAGA Republican.

If you think I'm overstating my case, you have only to look at what's currently happening in the state of Florida to see that if anything, I'm being generous.  The right-wing elected officials in Florida are so pants-wettingly terrified of any viewpoints other than their own Christofascist agenda that they don't even want anyone finding out there are people who think differently.

Take, for example, the school principal in Tallahassee who was forced to resign because she had the temerity to show students in the sixth grade a photograph of Michelangelo's David.

[Image licensed under the Creative Commons, Jörg Bittner Unna, 'David' by Michelangelo Fir JBU005 denoised, CC BY-SA 3.0]

David was originally commissioned to be placed in Florence Cathedral.  In, to make it abundantly clear, a Christian house of worship.  But it was soon considered such a masterpiece of art that it was taken out -- and placed in the public square outside the Palazzo Vecchio, so it could be seen by everyone.

But now?  According to the elected officials of Florida, whose sensibilities haven't even caught up to the sixteenth century, we can't have sixth graders see a world-renowned piece of sculpture, evidently because then they'll find out that people have genitals.

Then there are the book bans.  Clay County School District just announced a new list of books that are officially banned from any school in the district, bringing the total up to 355.

It doesn't take a genius to notice a pattern here.  Anything dealing with LGBTQ+ themes (Heartstopper, Radio Silence, One Man Guy), anything to do with the Black experience (Americanah, Notes from a Young Black Chef, Punching the Air, and Black Brother, Black Brother, among many others), anything criticizing Republicans (Russian Hacking in American Elections), and anything written by an outspoken liberal (The Fault in Our Stars, Slaughterhouse-Five).

Apparently we can't have anyone finding out there's a world out there besides those who are straight, white, Christian conservatives.

You'd think if these people were as confident in the self-evident righteousness of their own beliefs as they claim to be, they wouldn't be so fucking scared of the rest of us.

I think the problem here is that we've allowed the purveyors of this narrow-minded, bigoted bullshit to portray themselves as the valiant defenders of the cause, instead of calling them what they are: craven cowards.  They are constantly, deeply fearful, afraid that any exposure to a view beyond their own tiny, terrified world will cause the entire thing to come crashing down like a house of cards.

It's pathetic, really.  No wonder so many of them carry assault rifles when they go to Walmart.

When it comes down to it, though, isn't all fascism about fear?  Why would you be so desperate to build an autocracy if you weren't afraid of dissent?  Yeah, there's the attraction of power and its perks, I get that; but really, the desperation to crush all opposing views is born from a deep-seated and terrified knowledge that if people find out there are other ways, they'll realize they've been lied to and start demanding scary stuff like free speech and free access to information.

So to Ron DeSantis and his cronies who are so determined to erase those of us who aren't like them: I'm sorry you're so bone-shakingly terrified.  I do feel badly for you, because it must be a horrible way to live.  But just because I pity you doesn't mean that I and the others like me are going to stand silent and let you erase us.  You want to fight?  Well, battle joined.

I think you're about to find out that a bunch of snowflakes together create an avalanche.


Saturday, March 25, 2023

Myth come to life

While I've been known to make fun of the cryptid hunters, there's something to be said for their persistence.

Not only do we have people working hard to prove the continued existence of animals thought by science to be extinct -- most notably, the thylacine (Thylacinus cynocephalus) of Tasmania, which actually has a Facebook page devoted to sightings -- there are the devotees of animals science has never admitted in the first place, like Bigfoot, Nessie, and dozens of lesser-known denizens of myth and legend.

Despite my skepticism, no one would be more delighted than me if one of these elusive beasties turned out to be real.  Which is why I was so tickled when a friend and loyal reader of Skeptophilia sent me a link about a cryptid I'd never heard of -- the Corsican cat-fox -- which was just proven to be very real indeed.

The legend has been around for centuries: a wildcat in Corsica, larger than your typical house cat, with rusty brown fur and a long, ringed tail, notorious for raiding chicken coops.  Called in the Corsican language ghjattu-volpe -- "cat-fox" -- it was long thought to be a myth.

It's not.  In an intensive effort to establish the legend's veracity, the ghjattu-volpe was found -- not only photographed, but captured for DNA sampling.

Genetic analysis has shown that its DNA is distinct from that of domestic cats, of wildcats in mainland Europe, and of wildcats on the neighboring island of Sardinia.

The fact that this animal stayed undetected for so long has left the locals saying "see, we told you so," and encouraged the absolute hell out of the proponents of other elusive animal claims.  Even so, I think some cryptids are unlikely in the extreme -- the Loch Ness Monster topping that list.  The idea that there is a breeding population of plesiosaurs in Loch Ness, which somehow survived the last ice age (during which that region of Scotland was under a thirty-meter-thick sheet of ice) and has gone undetected despite years of searching with sonar and other high-tech telemetry devices, strikes me as a little ridiculous.

However, I don't find anything inherently implausible about there being a large, elusive proto-hominid in the Pacific Northwest.  I lived in Seattle for ten years and spent my summers camping in the Cascades and Olympics, and man, that is some trackless wilderness up there.  Neither do I doubt the possibility of the survival of thylacines, ivory-billed woodpeckers, and various other thought-to-be-extinct species.

But "possible" and "not inherently implausible" don't equal "real."  I remain very much a "show me the money" type.  And that means more than just blurry photos and videos.  (To borrow a phrase from Neil deGrasse Tyson, Photoshop probably has an "add Bigfoot" button.)  Until there's hard evidence, I'm not going to be in the True Believer column.

Even so, I have to admit that the Corsican cat-fox certainly is encouraging to those of us who want to believe.


Friday, March 24, 2023

The writing's on the wall

When you think about it, writing is pretty weird.

Honestly, language in general is odd enough.  Unlike (as far as we know for sure) any other species, we engage in arbitrary symbolic communication -- using sounds to represent concepts.  The arbitrary part means that there's no logical link between which sounds represent which concepts; there's nothing any more doggy about the English word dog than there is about the French word chien or the German word Hund (or any of the other thousands of words for dog in various human languages).  With the exception of the few words that are onomatopoeic -- like bang, bonk, crash, and so on -- the word-to-concept link is random.

Written language adds a whole extra layer of randomness to it, because (again, with the exception of the handful of languages with truly pictographic scripts), the connections between the concept, the spoken word, and the written word are all arbitrary.  (I discussed the different kinds of scripts out there in more detail in a post a year ago, if you're curious.)

Which makes me wonder how such a complex and abstract notion ever caught on.  We have at least a fairly good model of how the alphabet used for the English language evolved, starting out as a pictographic script and becoming less concept-based and more sound-based as time went on.

The conventional wisdom about writing is that it began in Sumer something like six thousand years ago, beginning with fired clay bullae that allowed merchants to keep track of transactions by impression into soft clay tablets.  Each bulla had its own symbol; some were symbols for the type of goods, others for numbers.  Once the Sumerians made the jump of letting marks stand for concepts, it wasn't such a huge further step to make marks for other concepts, and ultimately, for syllables or individual sounds.

The reason all this comes up is that a recent paper in the Cambridge Archaeological Journal claims that marks associated with cave paintings in France and Spain, long thought to be random, are actually meaningful -- an assertion that would push back the earliest known writing by another fourteen thousand years.

The authors assessed 862 strings of symbols dating back to the Upper Paleolithic in Europe -- most commonly dots, slashes, and symbols like a letter Y -- and came to the conclusion that they were not random, but were true written language, for the purpose of keeping track of the mating and birthing cycles of the prey animals depicted in the paintings.

The authors write:

[Here we] suggest how three of the most frequently occurring signs—the line <|>, the dot <•>, and the <Y>—functioned as units of communication.  We demonstrate that when found in close association with images of animals the line <|> and dot <•> constitute numbers denoting months, and form constituent parts of a local phenological/meteorological calendar beginning in spring and recording time from this point in lunar months.  We also demonstrate that the <Y> sign, one of the most frequently occurring signs in Palaeolithic non-figurative art, has the meaning <To Give Birth>.  The position of the <Y> within a sequence of marks denotes month of parturition, an ordinal representation of number in contrast to the cardinal representation used in tallies.  Our data indicate that the purpose of this system of associating animals with calendar information was to record and convey seasonal behavioural information about specific prey taxa in the geographical regions of concern.  We suggest a specific way in which the pairing of numbers with animal subjects constituted a complete unit of meaning—a notational system combined with its subject—that provides us with a specific insight into what one set of notational marks means.  It gives us our first specific reading of European Upper Palaeolithic communication, the first known writing in the history of Homo sapiens.

The claim is controversial, of course, and is sure to be challenged; moving the date of the earliest writing from six thousand to twenty thousand years ago isn't a small shift in our model.  But if it bears up, it's pretty extraordinary.  It further gives the lie to our concept of Paleolithic humans as brutal, stupid "cave men," incapable of any kind of mental sophistication.  As I hope I made clear in my first paragraphs, any kind of written language requires subtlety and complexity of thought.  If the beauty of the cave paintings in places like Lascaux doesn't convince you of the intelligence and creativity of our distant forebears, surely this will.
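As a toy illustration of the notational system the authors describe -- using my own hypothetical encoding, not anything from the paper itself -- you could treat each line or dot as one lunar month in a tally, and the position of the Y as the ordinal month of birth:

```python
# Toy decoder for the mark system the authors describe -- a sketch under
# my own assumptions about how the marks might be represented, not the
# paper's actual method.  Each '|' or '.' counts as one lunar month from
# the start of spring; the position of 'Y' in the sequence is read as
# the ordinal month of parturition.

def decode(marks: str) -> dict:
    """Interpret a Paleolithic-style mark sequence."""
    months_observed = sum(1 for m in marks if m in "|.")
    birth_month = marks.index("Y") + 1 if "Y" in marks else None
    return {"months_observed": months_observed, "birth_month": birth_month}

# A hypothetical sequence beside an animal image: four months of
# observations, with birthing in the third month.
print(decode("||Y||"))  # {'months_observed': 4, 'birth_month': 3}
```

Note the ordinal/cardinal distinction the authors draw: the tally marks are cardinal (how many months), while the Y's placement is ordinal (which month).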

So what I'm doing now -- speaking to my fellow humans via strings of visual symbols -- may have a much longer history than we ever thought.  It's awe-inspiring that we landed on this unique way to communicate; even more that we stumbled upon it so long ago.


Thursday, March 23, 2023

The nibblers

I'm always on the lookout for fascinating, provocative topics for Skeptophilia, but even so, it's seldom that I read a scientific paper with my jaw hanging open.  But that was the reaction I had to a paper from a couple of months ago in Nature that I just stumbled across yesterday.

First, a bit of background.

Based on the same kind of genetic evidence I described in yesterday's post, biologists have divided all living things into three domains: Eukarya, Bacteria, and Archaea.  Eukarya contains the eukaryotes -- organisms with true nuclei and complex systems of organelles -- which are broken down into four kingdoms: protists, plants, fungi, and animals.  Bacteria contains, well, bacteria; all the familiar groups of single-celled organisms that lack nuclei and most of the other membrane-bound organelles.  Archaea are superficially bacteria-like; they're mostly known from environments most other living things would consider hostile, like extremely salty water, anaerobic mud, and acidic hot springs.  In fact, they used to be called archaebacteria (and lumped together with Bacteria into "Kingdom Monera") until Carl Woese discovered in 1977 that Archaea are more genetically similar to eukaryotes like ourselves than they are to ordinary bacteria -- a finding that forced a complete revision of how taxonomy is done.

So things have stood since 1977: three domains (Bacteria, Archaea, and Eukarya), and within Eukarya four kingdoms (Protista, Plantae, Fungi, and Animalia).

But now a team led by Denis Tikhonenkov, of the Russian Academy of Sciences, has published a paper called "Microbial Predators Form a New Supergroup of Eukaryotes" that looks like it's going to force another overhaul of the tree of life.

Rather than trying to summarize, I'm going to quote directly from the Tikhonenkov et al. paper so you get the full impact:

Molecular phylogenetics of microbial eukaryotes has reshaped the tree of life by establishing broad taxonomic divisions, termed supergroups, that supersede the traditional kingdoms of animals, fungi and plants, and encompass a much greater breadth of eukaryotic diversity.  The vast majority of newly discovered species fall into a small number of known supergroups.  Recently, however, a handful of species with no clear relationship to other supergroups have been described, raising questions about the nature and degree of undiscovered diversity, and exposing the limitations of strictly molecular-based exploration.  Here we report ten previously undescribed strains of microbial predators isolated through culture that collectively form a diverse new supergroup of eukaryotes, termed Provora.  The Provora supergroup is genetically, morphologically and behaviourally distinct from other eukaryotes, and comprises two divergent clades of predators—Nebulidia and Nibbleridia—that are superficially similar to each other, but differ fundamentally in ultrastructure, behaviour and gene content.  These predators are globally distributed in marine and freshwater environments, but are numerically rare and have consequently been overlooked by molecular-diversity surveys. In the age of high-throughput analyses, investigation of eukaryotic diversity through culture remains indispensable for the discovery of rare but ecologically and evolutionarily important eukaryotes.

The members of Provora are distinguished not only genetically but by their behavior; to my eye they look a bit like a basketball with tentacles, using weird little tooth-like structures to nibble their way forward as they creep along.  (Thus "nibblerid," which is their actual name, despite the fact that it sounds like a comical monster species from Doctor Who.)  The first one discovered (in 2017), the euphoniously-named Ancoracysta twista, is a predator on tropical coral, and was found in (of all places) a home aquarium.  Since then, they've been found all over the place, although they're not common anywhere; the only place they've never been seen is on land.  But just about every aquatic environment, fresh or marine, has provorans of some kind.

An electron micrograph of a provoran [Image from Tikhonenkov et al.]

The provorans appear to be closely related to no other eukaryote, and Tikhonenkov et al. are proposing that they warrant placement in their own supergroup (a division that supersedes the traditional kingdoms).  But this raises the question of how many more outlier supergroups there are.  A 2022 analysis by Sijia Liu et al. estimated the number of microbial species on Earth at somewhere around three million, of which only twenty percent have been classified.  It's easy to overlook them, given that they're microscopic -- but that means there could be dozens of other branches of the tree of life out there about which we know nothing.

It's amazing how much more sophisticated our understanding of evolutionary descent has become.  When I was a kid (back in medieval times), we learned in science class that there were three divisions: animals, plants, and microbes.  (I even had a Golden Guide called Non-Flowering Plants -- which included mushrooms.)  Then it was found that fungi and animals were more closely related than fungi and plants, and that microbes with nuclei and organelles (like amoebas) were vastly different from those without (like bacteria).  There it stood till Woese came along in 1977 and told us that the bacteria weren't a single group, either.

And now we've got another new branch to add to the tree.  The nibblers.  Further illustrating that we don't have to look into outer space to find new and astonishing things to study; there is a ton we don't know about what's right here on Earth.


Wednesday, March 22, 2023

In vino veritas

One of the best explanations of how modern evolutionary genomics is done is in the fourth chapter of Richard Dawkins's fantastic The Ancestor's Tale.  The book starts with humans (although he makes the point that he could have started with any other species on Earth), and tracks backwards in time to each of the points where the human lineage intersects with other lineages.  So it starts out with chapters about our nearest relatives -- bonobos and chimps -- and gradually progresses to more and more distantly-related groups, until by the last chapter we've united our lineage with every other life form on the planet.

In chapter four ("Gibbons"), he describes something of the methodology of how this is done, using as an analogy how linguists have traced the "ancestry" (so to speak) of the surviving copies of Chaucer's The Canterbury Tales, each of which has slight variations from the others.  The question he asks is how we could tell what the original version looked like; put another way, which of those variations represent alterations, and which were present in the first edition.

The whole thing is incredibly well done, in the lucid style for which Dawkins has rightly become famous, and I won't steal his thunder by trying to recap it here (in fact, you should simply read the book, which is wonderful from beginning to end).  But a highly oversimplified capsule explanation is that the method relies on the law of parsimony -- that the model which requires the fewest ad hoc assumptions is the most likely to be correct.  When comparing pieces of DNA from groups of related species, the differences come from mutations; but if two species have different base pairs at a particular position, which was the original and which the mutated version -- or are both mutations from a third, different, base pair at that position?

The process takes the sequences and puts together various possible "family trees" for the DNA; the law of parsimony states that the likeliest one is the arrangement that requires the fewest de novo mutations.  To take a deliberately facile example, suppose that within a group of twelve related species, in a particular stretch of DNA, eleven of them have an A/T pair at the third position, and the twelfth has a C/G pair.  Which is more likely -- that the A/T was the base pair in the ancestral species and species #12 had a mutation to C/G, or that C/G was the base pair in the ancestral species and species #1-11 all independently had mutations to A/T?

Clearly the former is (hugely) more likely.  Most situations, of course, aren't that clear-cut, and there are complications I won't go into here, but that's the general idea.  Using software -- none of this is done by hand any more -- the most parsimonious arrangement is identified, and in the absence of any evidence to the contrary, is assumed to be the lineage of the species in question.
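The core of that scoring step can be sketched in a few lines of code.  This is a deliberately minimal illustration of the parsimony idea using the twelve-species example above, not a real phylogenetics tool (the species names are my own placeholders); actual software scores entire candidate trees across thousands of positions, not a single site.

```python
# Minimal sketch of parsimony scoring at a single DNA position:
# score each candidate ancestral base by how many independent
# mutations it would require, then pick the cheapest.

# The text's example: eleven species share A at this position,
# one outlier has C.  (Species names are hypothetical.)
observed = {f"species_{i}": "A" for i in range(1, 12)}
observed["species_12"] = "C"

def mutations_needed(ancestral_base, observed_bases):
    """Count lineages that must have mutated away from the ancestor."""
    return sum(1 for base in observed_bases.values() if base != ancestral_base)

scores = {base: mutations_needed(base, observed) for base in "ACGT"}
most_parsimonious = min(scores, key=scores.get)

print(scores)              # {'A': 1, 'C': 11, 'G': 12, 'T': 12}
print(most_parsimonious)   # 'A' -- one mutation beats eleven independent ones
```

An ancestral A requires one mutation; an ancestral C requires eleven independent ones; G or T would require twelve.  Parsimony picks A, exactly as the intuition in the paragraph above suggests.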

This is pretty much how all cladistics is done.  Except in cases where we don't have DNA evidence -- such as with prehistoric animals known only from fossils -- evolutionary biologists don't rely much on structure any longer.  As Dawkins himself put it, "Even if we were to erase every fossil from the Earth, the evidence for evolution from genetics alone would be overwhelming."

The reason this comes up is a wonderful study that came out this week in Science that uses these same techniques to put together the ancestry of all the modern varieties of grapes.  A huge team at the Karlsruher Institut für Technologie and Yunnan Agricultural University in China analyzed the genomes of 3,500 different grapevines, including both wild and cultivated varieties, and was able to track their ancestry back to the southern Caucasus in around 11,000 B.C.E. (meaning that grapes seem to have been cultivated before wheat was).  From there, the vine rootstocks were carried both ways along the Silk Road, spreading all the way from China to western Europe in the process.

[Image licensed under the Creative Commons Ian L, Malbec grapes, CC BY 2.0]

There are a lot of things about this study that are fascinating.  First, of course, is that we can use the current assortment of wild and cultivated grape vines to reconstruct a family tree that goes back thirteen thousand years -- and come up with a good guess about where the common ancestor of all of them lived.  Second, though, is the more general astonishment at how sophisticated our ability to analyze genomes has become.  Modern genomic analysis has allowed us to create family trees of all living things that boggle the mind -- like this one:

[Image licensed under the Creative Commons Laura A. Hug et al., A Novel Representation Of The Tree Of Life, CC BY 4.0]

These sorts of analyses have overturned a lot of our preconceived notions about our place in the world.  It upset a good many people, for some reason, when it was found we have a 98.7% overlap in our DNA with our nearest relatives (bonobos) -- that remaining 1.3% accounts for the entire genetic difference between yourself and a bonobo.  People were so used to believing there was a qualitative biological difference between humans and every other living thing that to find out we're so closely related to apes was a significant shock.  (It still hasn't sunk in for some people; you'll still hear the phrase "human and animal" used, as if we weren't ourselves animals.)

Anyhow, an elegant piece of research on the ancestry of grapes is what got all this started, and after all of my circumlocution you probably feel like you need a glass of wine.  Enjoy -- in vino veritas, as the Romans put it, even if they may not have known as much about where their vino originated as we do.


Tuesday, March 21, 2023

The strangest star in the galaxy

Ever heard of Eta Carinae?

If there were a contest for the weirdest known astronomical object in the Milky Way, Eta Carinae would certainly be in the top ten.  It's a binary star system in the constellation Carina, one member of which is a luminous blue variable, unusual in and of itself, but its behavior over the last hundred or so years (as seen from Earth; Eta Carinae is 7,500 light years away, so of course the actual events we're seeing took place 7,500 years ago) has been nothing short of bizarre.  It's estimated to have started out enormous, at about two hundred solar masses, but in a series of explosions peaking in the 1843 "Great Eruption" it lost thirty solar masses' worth of material, which has been blown outward at 670 kilometers per second to form the odd Homunculus Nebula.

After the Great Eruption, during which it briefly rose to a magnitude of -0.8, making it the second-brightest star in the night sky, it faded below naked eye visibility, largely due to the ejected dust cloud that surrounded it.  But in the twentieth century it began to brighten again, and by 1940 was again visible to the naked eye -- and then its brightness mysteriously doubled again between 1998 and 1999.

Which is even more mind-blowing when you find out that the actual luminosity of the combined Eta Carinae binary is more than five million times greater than that of the Sun.
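A back-of-the-envelope calculation shows just how much that dust shroud is hiding.  This sketch uses the figures from the text (five million solar luminosities, 7,500 light years) plus the Sun's absolute bolometric magnitude of about 4.74; it ignores dust extinction and bolometric corrections, so it's a rough upper bound on brightness, not a rigorous result.

```python
import math

# Rough estimate of the apparent magnitude Eta Carinae *would* have
# if its light reached us unobscured.  Assumptions: luminosity and
# distance from the text; Sun's absolute bolometric magnitude ~4.74;
# no dust extinction, no bolometric correction.

L_RATIO = 5e6          # luminosity relative to the Sun
DISTANCE_LY = 7500     # distance in light years
M_SUN = 4.74           # Sun's absolute bolometric magnitude
LY_PER_PARSEC = 3.2616

# Absolute magnitude from the luminosity ratio
abs_mag = M_SUN - 2.5 * math.log10(L_RATIO)

# Apparent magnitude via the distance modulus: m - M = 5 * log10(d / 10 pc)
distance_pc = DISTANCE_LY / LY_PER_PARSEC
app_mag = abs_mag + 5 * math.log10(distance_pc / 10)

print(f"Unobscured apparent magnitude: {app_mag:.1f}")  # about -0.2
```

In other words, without the Homunculus Nebula in the way, Eta Carinae would rival Canopus as one of the brightest stars in the sky -- which is why it's remarkable that for much of the twentieth century it was invisible to the naked eye.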

This comes up because the Hubble Space Telescope has provided astronomers with the clearest images of Eta Carinae and the Homunculus Nebula they've yet had, and what they're learning is remarkable.  Here's one of the best images:

[Image is in the Public Domain, courtesy of the NASA Hubble Space Telescope]

There are a lot of features of these photographs that surprised researchers.  "We've discovered a large amount of warm gas that was ejected in the Great Eruption but hasn't yet collided with the other material surrounding Eta Carinae," said astronomer Nathan Smith of the University of Arizona, lead investigator of the study.  "Most of the emission is located where we expected to find an empty cavity.  This extra material is fast, and it 'ups the ante' in terms of the total energy of an already powerful stellar blast....  We had used Hubble for decades to study Eta Carinae in visible and infrared light, and we thought we had a pretty full account of its ejected debris.  But this new ultraviolet-light image looks astonishingly different, revealing gas we did not see in either visible-light or infrared images.  We're excited by the prospect that this type of ultraviolet magnesium emission may also expose previously hidden gas in other types of objects that eject material, such as protostars or other dying stars; and only Hubble can take these kinds of pictures."

One of the most curious features -- one that had not been observed before -- is the set of streaks clearly visible in the photograph.  These are beams of ultraviolet light radiating from the stars at the center, striking the dust cloud and exciting visible-light emission, creating an effect rather like sunbeams through clouds.

Keep in mind, though, how big this thing is.  The larger of the two stars in the system, Eta Carinae A, has a diameter about equal to the orbit of Jupiter.  So where you're sitting right now, if our Sun were replaced by Eta Carinae A, you would be inside the star.

The question most people have after learning about this behemoth is, "when will it explode?"  And not just an explosion like the Great Eruption, which was impressive enough, but a real explosion -- a supernova.  It's almost certain to end its life that way, and when it does, it's going to be (to put it in scientific terms) freakin' unreal.  Even at 7,500 light years away, it has the potential to be the brightest supernova we have any record of.  It will almost certainly outshine the Moon, meaning that in places where it's visible (mostly in the Southern Hemisphere) for a time you won't have a true dark night.

But when?  It's imminent -- in astronomical terms.  That means "probably some time in the next hundred thousand years."  It might have already happened -- meaning the light from the supernova is currently streaming toward us.  It might not happen for thousands of years.

But it's considered the most likely star to go supernova in our near region of the galaxy, so here's hoping.

[Nota bene: we're in no danger at this distance.  There will be gamma rays from the explosion that will reach Earth, but they'll be pretty attenuated by the time they get here, and the vast majority of them will be blocked by our atmosphere.  So no worries that your friends and family might be at risk of turning into the Incredible Hulk, or anything.]

So that's our cool scientific research of the day.  Makes me kind of glad we're in a relatively quiet part of the Milky Way.  Eta Carinae, and the surrounding Carina Nebula (of which the Homunculus is just a small part), is a pretty rough neighborhood.  But if it decides to grace us with some celestial fireworks, it'll be nice to see -- from a safe distance.