Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, March 24, 2023

The writing's on the wall

When you think about it, writing is pretty weird.

Honestly, language in general is odd enough.  Unlike (as far as we know for sure) any other species, we engage in arbitrary symbolic communication -- using sounds to represent concepts.  The arbitrary part means there's no logical link determining which sounds represent which concepts; there's nothing any more doggy about the English word dog than there is about the French word chien or the German word Hund (or any of the other thousands of words for dog in various human languages).  With the exception of the few words that are onomatopoeic -- like bang, bonk, crash, and so on -- the word-to-concept link is random.

Written language adds a whole extra layer of randomness to it, because (again, with the exception of the handful of languages with truly pictographic scripts) the connections between the concept, the spoken word, and the written word are all arbitrary.  (I discussed the different kinds of scripts out there in more detail in a post a year ago, if you're curious.)

Which makes me wonder how such a complex and abstract notion ever caught on.  We have at least a fairly good model of how the alphabet used for the English language evolved, starting out as a pictographic script and becoming less concept-based and more sound-based as time went on:


The conventional wisdom about writing is that it began in Sumer something like six thousand years ago, beginning with fired clay bullae that allowed merchants to keep track of transactions by impression into soft clay tablets.  Each bulla had its own symbol; some were symbols for the type of goods, others for numbers.  Once the Sumerians made the jump of letting marks stand for concepts, it wasn't such a huge further step to make marks for other concepts, and ultimately, for syllables or individual sounds.

The reason all this comes up is that a recent paper in the Cambridge Archaeological Journal claims that marks associated with cave paintings in France and Spain, long thought to be random, are actually meaningful -- an assertion that would push back the earliest known writing another fourteen thousand years.

The authors assessed 862 strings of symbols dating back to the Upper Paleolithic in Europe -- most commonly dots, slashes, and symbols like a letter Y -- and came to the conclusion that they were not random, but were true written language, for the purpose of keeping track of the mating and birthing cycles of the prey animals depicted in the paintings.

The authors write:

[Here we] suggest how three of the most frequently occurring signs—the line <|>, the dot <•>, and the <Y>—functioned as units of communication.  We demonstrate that when found in close association with images of animals the line <|> and dot <•> constitute numbers denoting months, and form constituent parts of a local phenological/meteorological calendar beginning in spring and recording time from this point in lunar months.  We also demonstrate that the <Y> sign, one of the most frequently occurring signs in Palaeolithic non-figurative art, has the meaning <To Give Birth>.  The position of the <Y> within a sequence of marks denotes month of parturition, an ordinal representation of number in contrast to the cardinal representation used in tallies.  Our data indicate that the purpose of this system of associating animals with calendar information was to record and convey seasonal behavioural information about specific prey taxa in the geographical regions of concern.  We suggest a specific way in which the pairing of numbers with animal subjects constituted a complete unit of meaning—a notational system combined with its subject—that provides us with a specific insight into what one set of notational marks means.  It gives us our first specific reading of European Upper Palaeolithic communication, the first known writing in the history of Homo sapiens.
The claim is controversial, of course, and is sure to be challenged; moving the date of the earliest writing from six thousand to twenty thousand years ago isn't a small shift in our model.  But if it bears up, it's pretty extraordinary.  It further gives the lie to our concept of Paleolithic humans as brutal, stupid "cave men," incapable of any kind of mental sophistication.  As I hope I made clear in my first paragraphs, any kind of written language requires subtlety and complexity of thought.  If the beauty of the cave paintings in places like Lascaux doesn't convince you of the intelligence and creativity of our distant forebears, surely this will.

So what I'm doing now -- speaking to my fellow humans via strings of visual symbols -- may have a much longer history than we ever thought.  It's awe-inspiring that we landed on this unique way to communicate; even more that we stumbled upon it so long ago.

****************************************



Thursday, March 23, 2023

The nibblers

I'm always on the lookout for fascinating, provocative topics for Skeptophilia, but even so, it's seldom that I read a scientific paper with my jaw hanging open.  But that was the reaction I had to a paper from a couple of months ago in Nature that I just stumbled across yesterday.

First, a bit of background.

Based on the same kind of genetic evidence I described in yesterday's post, biologists have divided all living things into three domains: Eukarya, Bacteria, and Archaea.  Eukarya contains the eukaryotes -- organisms with true nuclei and complex systems of organelles -- which are broken down into four kingdoms: protists, plants, fungi, and animals.  Bacteria contains, well, bacteria: all the familiar groups of single-celled organisms that lack nuclei and most of the other membrane-bound organelles.  Archaea are superficially bacteria-like; they're mostly known from environments most other living things would consider hostile, like extremely salty water, anaerobic mud, and acidic hot springs.  In fact, they used to be called archaebacteria (and lumped together with Bacteria into "Kingdom Monera") until Carl Woese discovered in 1977 that Archaea are more genetically similar to eukaryotes like ourselves than they are to ordinary bacteria -- a finding that forced a complete revision of how taxonomy is done.

So things have stood since 1977: three domains (Bacteria, Archaea, and Eukarya), and within Eukarya four kingdoms (Protista, Plantae, Fungi, and Animalia).

But now a team led by Denis Tikhonenkov, of the Russian Academy of Sciences, has published a paper called "Microbial Predators Form a New Supergroup of Eukaryotes" that looks like it's going to force another overhaul of the tree of life.

Rather than trying to summarize, I'm going to quote directly from the Tikhonenkov et al. paper so you get the full impact:

Molecular phylogenetics of microbial eukaryotes has reshaped the tree of life by establishing broad taxonomic divisions, termed supergroups, that supersede the traditional kingdoms of animals, fungi and plants, and encompass a much greater breadth of eukaryotic diversity.  The vast majority of newly discovered species fall into a small number of known supergroups.  Recently, however, a handful of species with no clear relationship to other supergroups have been described, raising questions about the nature and degree of undiscovered diversity, and exposing the limitations of strictly molecular-based exploration.  Here we report ten previously undescribed strains of microbial predators isolated through culture that collectively form a diverse new supergroup of eukaryotes, termed Provora.  The Provora supergroup is genetically, morphologically and behaviourally distinct from other eukaryotes, and comprises two divergent clades of predators—Nebulidia and Nibbleridia—that are superficially similar to each other, but differ fundamentally in ultrastructure, behaviour and gene content.  These predators are globally distributed in marine and freshwater environments, but are numerically rare and have consequently been overlooked by molecular-diversity surveys. In the age of high-throughput analyses, investigation of eukaryotic diversity through culture remains indispensable for the discovery of rare but ecologically and evolutionarily important eukaryotes.

The members of Provora are distinguished not only genetically but by their behavior; to my eye they look a bit like a basketball with tentacles, using weird little tooth-like structures to nibble their way forward as they creep along.  (Thus "nibblerid," which is their actual name, despite the fact that it sounds like a comical monster species from Doctor Who.)  The first one discovered (in 2017), the euphoniously-named Ancoracysta twista, is a predator on tropical coral, and was found in (of all places) a home aquarium.  Since then, they've been found all over the place, although they're not common anywhere; the only place they've never been seen is on land.  But just about every aquatic environment, fresh or marine, has provorans of some kind.

An electron micrograph of a provoran [Image from Tikhonenkov et al.]

The provorans appear to be closely related to no other eukaryote, and Tikhonenkov et al. are proposing that they warrant placement in their own supergroup (a rank comparable to a "kingdom").  But this raises the question of how many more outlier supergroups there are.  A 2022 analysis by Sijia Liu et al. estimated the number of microbial species on Earth at somewhere around three million, of which only twenty percent have been classified.  It's easy to overlook them, given that they're microscopic -- but that means there could be dozens of other branches of the tree of life out there about which we know nothing.

It's amazing how much more sophisticated our understanding of evolutionary descent has become.  When I was a kid (back in medieval times), we learned in science class that there were three divisions: animals, plants, and microbes.  (I even had a Golden Guide called Non-Flowering Plants -- which included mushrooms.)  Then it was found that fungi and animals were more closely related than fungi and plants, and that microbes with nuclei and organelles (like amoebas) were vastly different from those without (like bacteria).  There it stood till Woese came along in 1977 and told us that the bacteria weren't a single group, either.

And now we've got another new branch to add to the tree.  The nibblers.  Further illustrating that we don't have to look into outer space to find new and astonishing things to study; there is a ton we don't know about what's right here on Earth.

****************************************



Wednesday, March 22, 2023

In vino veritas

One of the best explanations of how modern evolutionary genomics is done is in the fourth chapter of Richard Dawkins's fantastic The Ancestor's Tale.  The book starts with humans (although he makes the point that he could have started with any other species on Earth), and tracks backwards in time to each of the points where the human lineage intersects with other lineages.  So it starts out with chapters about our nearest relatives -- bonobos and chimps -- and gradually progresses to more and more distantly-related groups, until by the last chapter we've united our lineage with every other life form on the planet.

In chapter four ("Gibbons"), he describes something of the methodology of how this is done, using as an analogy how linguists have traced the "ancestry" (so to speak) of the surviving copies of Chaucer's The Canterbury Tales, each of which has slight variations from the others.  The question he asks is how we could tell what the original version looked like; put another way, which of those variations represent alterations, and which were present in the first edition.

The whole thing is incredibly well done, in the lucid style for which Dawkins has rightly become famous, and I won't steal his thunder by trying to recap it here (in fact, you should simply read the book, which is wonderful from beginning to end).  But a highly oversimplified capsule explanation is that the method relies on the law of parsimony -- that the model which requires the fewest ad hoc assumptions is the most likely to be correct.  When comparing pieces of DNA from groups of related species, the differences come from mutations; but if two species have different base pairs at a particular position, which was the original and which the mutated version -- or are both mutations from a third, different, base pair at that position?

The process takes the sequences and puts together various possible "family trees" for the DNA; the law of parsimony states that the likeliest one is the arrangement that requires the fewest de novo mutations.  To take a deliberately facile example, suppose that within a group of twelve related species, in a particular stretch of DNA, eleven of them have an A/T pair at the third position, and the twelfth has a C/G pair.  Which is more likely -- that the A/T was the base pair in the ancestral species and species #12 had a mutation to C/G, or that C/G was the base pair in the ancestral species and species #1-11 all independently had mutations to A/T?

Clearly the former is (hugely) more likely.  Most situations, of course, aren't that clear-cut, and there are complications I won't go into here, but that's the general idea.  Using software -- none of this is done by hand any more -- the most parsimonious arrangement is identified, and in the absence of any evidence to the contrary, is assumed to be the lineage of the species in question.
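To make the counting step concrete, here's a deliberately minimal sketch in Python of the example above -- hypothetical data, and a star-shaped "everyone descends directly from the ancestor" tree rather than a real phylogeny, which real parsimony software most certainly does not assume:

```python
# Observed base at one position across twelve related species, mirroring the
# post's example: eleven species have A, the twelfth has C.
observed = ["A"] * 11 + ["C"]

def mutations_needed(ancestral_base, observed_bases):
    """Count the de novo mutations required if every species inherited
    `ancestral_base` directly from a common ancestor (a simplified
    star-shaped tree, for illustration only)."""
    return sum(1 for base in observed_bases if base != ancestral_base)

# Compare the two hypotheses; parsimony favors the one needing fewer mutations.
for candidate in ("A", "C"):
    print(candidate, "requires", mutations_needed(candidate, observed), "mutation(s)")

best = min(("A", "C"), key=lambda b: mutations_needed(b, observed))
print("Most parsimonious ancestral base:", best)
```

An ancestral A requires one mutation; an ancestral C requires eleven independent ones, so A wins.  Real implementations do the same bookkeeping over every branching arrangement of a tree, which is where the software earns its keep.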

This is pretty much how all cladistics is done.  Except in cases where we don't have DNA evidence -- such as with prehistoric animals known only from fossils -- evolutionary biologists don't rely much on structure any longer.  As Dawkins himself put it, "Even if we were to erase every fossil from the Earth, the evidence for evolution from genetics alone would be overwhelming."

The reason this comes up is a wonderful study that came out this week in Science that uses these same techniques to put together the ancestry of all the modern varieties of grapes.  A huge team at the Karlsruher Institut für Technologie and the Chinese Yunnan Agricultural University analyzed the genomes of 3,500 different grapevines, including both wild and cultivated varieties, and was able to track their ancestry back to the southern Caucasus in around 11,000 B.C.E. (meaning that grapes seem to have been cultivated before wheat was).  From there, the vine rootstocks were carried both ways along the Silk Road, spreading all the way from China to western Europe in the process.

[Image licensed under the Creative Commons Ian L, Malbec grapes, CC BY 2.0]

There are a lot of things about this study that are fascinating.  First, of course, is that we can use the current assortment of wild and cultivated grape vines to reconstruct a family tree that goes back thirteen thousand years -- and come up with a good guess about where the common ancestor of all of them lived.  Second, though, is the more general astonishment at how sophisticated our ability to analyze genomes has become.  Modern genomic analysis has allowed us to create family trees of all living things that boggle the mind -- like this one:

[Image licensed under the Creative Commons Laura A. Hug et al., A Novel Representation Of The Tree Of Life, CC BY 4.0]

These sorts of analyses have overturned a lot of our preconceived notions about our place in the world.  It upset a good many people, for some reason, when it was found we have a 98.7% overlap in our DNA with our nearest relatives (bonobos) -- that remaining 1.3% accounts for the entire genetic difference between yourself and a bonobo.  People were so used to believing there was a qualitative biological difference between humans and every other living thing that to find out we're so closely related to apes was a significant shock.  (It still hasn't sunk in for some people; you'll still hear the phrase "human and animal" used, as if we weren't ourselves animals.)

Anyhow, an elegant piece of research on the ancestry of grapes is what got all this started, and after all of my circumlocution you probably feel like you need a glass of wine.  Enjoy -- in vino veritas, as the Romans put it, even if they may not have known as much about where their vino originated as we do.

****************************************



Tuesday, March 21, 2023

The strangest star in the galaxy

Ever heard of Eta Carinae?

If there were a contest for the weirdest known astronomical object in the Milky Way, Eta Carinae would certainly be in the top ten.  It's a binary star system in the constellation Carina, one member of which is a luminous blue variable, unusual in and of itself, but its behavior in the last hundred or so years (as seen from Earth; Eta Carinae is 7,500 light years away, so of course the actual events we're seeing took place 7,500 years ago) has been nothing short of bizarre.  It's estimated to have started out enormous, at about two hundred solar masses, but in a combination of explosions peaking in the 1843 "Great Eruption" it lost thirty solar masses' worth of material, which has been blown outward at 670 kilometers per second to form the odd Homunculus Nebula.

After the Great Eruption, during which it briefly rose to a magnitude of -0.8, making it the second-brightest star in the night sky, it faded below naked eye visibility, largely due to the ejected dust cloud that surrounded it.  But in the twentieth century it began to brighten again, and by 1940 was again visible to the naked eye -- and then its brightness mysteriously doubled again between 1998 and 1999.
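For anyone unused to the magnitude scale, the 1998-1999 doubling translates into magnitudes via the standard Pogson relation, delta-m = -2.5 log10(brightness ratio) -- a quick sketch (the formula is standard astronomy; only the doubling figure comes from the post):

```python
import math

def delta_magnitude(brightness_ratio):
    """Magnitude change for a given brightness ratio (Pogson relation).
    Negative means brighter, since lower magnitudes are brighter."""
    return -2.5 * math.log10(brightness_ratio)

# A doubling in brightness:
print(round(delta_magnitude(2.0), 2))  # -0.75, i.e. about three-quarters of a magnitude brighter
```

So "brightness doubled" means the star jumped roughly three-quarters of a magnitude in a single year -- a huge move for any star, let alone one this luminous.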

Which is even more mind-blowing when you find out that the actual luminosity of the combined Eta Carinae binary is more than five million times greater than that of the Sun.

This comes up because the Hubble Space Telescope has provided astronomers the clearest images of Eta Carinae and the Homunculus Nebula they've yet had, and what they're learning is kind of mind-blowing. Here's one of the best images:

[Image is in the Public Domain, courtesy of the NASA Hubble Space Telescope]

There are a lot of features of these photographs that surprised researchers.  "We've discovered a large amount of warm gas that was ejected in the Great Eruption but hasn't yet collided with the other material surrounding Eta Carinae," said astronomer Nathan Smith of the University of Arizona, lead investigator of the study.  "Most of the emission is located where we expected to find an empty cavity.  This extra material is fast, and it 'ups the ante' in terms of the total energy of an already powerful stellar blast....  We had used Hubble for decades to study Eta Carinae in visible and infrared light, and we thought we had a pretty full account of its ejected debris.  But this new ultraviolet-light image looks astonishingly different, revealing gas we did not see in either visible-light or infrared images.  We're excited by the prospect that this type of ultraviolet magnesium emission may also expose previously hidden gas in other types of objects that eject material, such as protostars or other dying stars; and only Hubble can take these kinds of pictures."

One of the most curious features -- one that had not been observed before -- is the streaks clearly visible in the photograph.  These are beams of ultraviolet light radiating from the stars at the center, striking and exciting visible-light emission from the dust cloud, creating an effect sort of like sunbeams through clouds.

Keep in mind, though, how big this thing is.  The larger of the two stars in the system, Eta Carinae A, has a diameter about equal to the orbit of Jupiter.  So where you're sitting right now, if our Sun was replaced by Eta Carinae A, you would be inside the star.
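A back-of-the-envelope check of that claim, taking the post's figure (diameter roughly equal to Jupiter's orbit) at face value and using approximate orbital radii:

```python
AU_KM = 1.496e8                  # one astronomical unit, in kilometers
jupiter_orbit_radius_au = 5.2    # approximate radius of Jupiter's orbit
earth_orbit_radius_au = 1.0      # Earth's orbit, by definition of the AU

# If the star's diameter equals the diameter of Jupiter's orbit, then the
# star's radius equals Jupiter's orbital radius.
star_radius_au = jupiter_orbit_radius_au

print(f"Star radius: about {star_radius_au * AU_KM:.3g} km")
print("Earth's orbit inside the star:", earth_orbit_radius_au < star_radius_au)
```

At roughly 5.2 AU (close to eight hundred million kilometers) of radius, everything out to the asteroid belt and beyond would be engulfed -- Earth's 1 AU orbit included.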

The question most people have after learning about this behemoth is, "When will it explode?"  And not just an explosion like the Great Eruption, which was impressive enough, but a real explosion -- a supernova.  It's almost certain to end its life that way, and when it does, it's going to be (to put it in scientific terms) freakin' unreal.  Even at 7,500 light years away, it has the potential to be the brightest supernova we have any record of.  It will almost certainly outshine the Moon, meaning that in places where it's visible (mostly in the Southern Hemisphere) for a time you won't have a true dark night.

But when?  It's imminent -- in astronomical terms.  That means "probably some time in the next hundred thousand years."  It might have already happened -- meaning the light from the supernova is currently streaming toward us.  It might not happen for thousands of years.

But it's considered the most likely star to go supernova in our near region of the galaxy, so we can always hope.

[Nota bene: we're in no danger at this distance.  There will be gamma rays from the explosion that will reach Earth, but they'll be pretty attenuated by the time they get here, and the vast majority of them will be blocked by our atmosphere.  So no worries that your friends and family might be at risk of turning into the Incredible Hulk, or anything.]

So that's our cool scientific research of the day.  Makes me kind of glad we're in a relatively quiet part of the Milky Way.  Eta Carinae, and the surrounding Carina Nebula (of which the Homunculus is just a small part), is a pretty rough neighborhood.  But if it decides to grace us with some celestial fireworks, it'll be nice to see -- from a safe distance.

****************************************



Monday, March 20, 2023

Grave matters

It's easy to scoff at the superstitious beliefs of the past.  I've certainly been known to do it myself.  But it bears keeping in mind that although, to more scientific minds, some of the rituals and practices seem kind of ridiculous, sometimes they had a strange underlying logic to them.

Take, for example, the strange case of JB55.  Archaeologists excavating a site near Griswold, Connecticut in 1990 found a nineteenth-century wooden coffin with brass tacks hammered into the surface that spelled out "JB55" -- according to the practice of the time, the initials of the deceased and the age at which (s)he died.  Inside were the bones of a man -- but they had been rearranged after death into a "skull-and-crossbones" orientation.


This seems like an odd thing to do, and raised the obvious question of why anyone would rearrange a dead person's remains.  There was speculation that it was part of some kind of magical ritual intended to prevent him from coming back from the dead; in the mid-1800s, the region around Griswold was known for rampant belief in vampirism.  The reason seems to have been an epidemic of tuberculosis, which (among other things) causes pale skin, swollen eyes, and coughing up blood; there are known cases where the bodies of disease victims were exhumed and either burned and reinterred, or else rearranged much as JB55's were.

The explanation in this specific case gained credence when an examination of JB55's bones showed tuberculosis lesions.  Further, an analysis of Y-chromosome DNA from the bones allowed researchers to identify the individual's last name as Barber -- and sure enough, there was a John Barber living in Griswold who would have been of the right age to be JB55.

It's amazing how widespread these sorts of practices are.  In 2018 a skeleton of a ten-year-old child was unearthed in Umbria, Italy.  The skeleton dated from the fifth century C.E., and she seems to have died during a terrible epidemic of malaria that hit the area during the last years of the Roman Empire.  Before burial, the child had a rock placed in her mouth -- thought to be part of a ritual to prevent her spirit from rising from the dead and spreading the disease.  In 2022, a skeleton was uncovered in Pień, Poland, dating from the seventeenth century -- it was of an adult woman, and had a sickle placed across her neck and a padlock on her left big toe.  The reason was probably similar to the aforementioned cases -- to keep her in her grave where she belonged.

The reason this comes up is a paper this week in Antiquity about another interesting burial -- this one in Sagalassos, in western Turkey.  Archaeologists found evidence of a funeral pyre dating to the second century C.E., but unlike the usual practice at the time -- in which the burned remains were taken elsewhere to be buried -- here, the pyre and the remains were simply covered up with a layer of lime and brick tiles.  Most interestingly, scattered over the surface of the tiles were dozens of bent iron nails.

Iron and iron-bearing minerals have been thought from antiquity to have magical properties; Neanderthals were using hematite to anoint the dead fifty thousand years ago.  Here, both the iron in the nails and the angles at which they were bent probably were thought to play a role in their power.

The authors write:

The placement of nails in proximity to the deceased's remains might suggest the first of these two hypotheses.  The fixing qualities of nails, however, may also have been used to pin the spirits of the restless dead (so-called revenants) to their final resting place, so that they could not return from the afterlife...  Aside from the application of nails to symbolically fix the spirit, heavy weights were also used in an attempt to immobilise the physical remains of a potential revenant.

I do have to wonder how the idea of revenants got started in the first place.  Surely all of them can't be from the symptoms of tuberculosis, like in JB55's case.  And since the number of people who have actually returned from the dead is, um, statistically insignificant, it's not like they had lots of data to work from. 

Perhaps much of it was simply fear.  Death is a big scary unknown, and most of us aren't eager to experience it; even the ultra-Christian types who are completely certain they're heading to an afterlife of eternal heavenly bliss look both ways before they cross the road.  But like many superstitions, these all seem so... specific.  How did someone become convinced that nails weren't enough, they had to be bent nails?  And that a padlock on the left big toe would keep the woman in Poland from rising from the dead, but that it wouldn't work if it had been around, say, her right thumb?

Curious stuff.  But I guess if you try something, and lo, the dead guy stays dead, you place that in the "Win" column and do it again next time. 

It's like the story of the guy in Ohio who had a friend who'd come to visit, and whenever he'd walk into the guy's house, he'd raise both hands, close his eyes, and say, "May this house be safe from tigers."

After doing this a few times, the guy said, "Dude.  Why do you say that every time?  This is Ohio.  There's not a tiger within a thousand miles of here."

And the friend gave him a knowing smile and said, "It works well, doesn't it?"

****************************************



Saturday, March 18, 2023

It's the end of the world, if you notice

I have commented more than once about my incredulity with regard to end-of-the-world predictions.  Despite the fact that to date, they have had a 100% failure rate, people of various stripes (usually of either the ultra-religious persuasion or the woo-woo conspiracy one) continue to say that not only is the world doomed, they know exactly when, how, and why.  (If you don't believe me, take a look at the Wikipedia page for apocalyptic predictions, which have occurred so often that the page has to break them down by century.)

As far as why this occurs -- why repeated failure doesn't make the true believers say, "Well, I guess that claim was a bunch of bullshit, then" -- there are a variety of reasons.  One is a sort of specialized version of the backfire effect, which occurs when evidence against a claim you believe strongly leaves you believing it even more strongly.  Way back in 1954 psychologists Leon Festinger, Henry Riecken, and Stanley Schachter infiltrated a doomsday cult, and in fact Festinger was with the cult on the day they'd claimed the world was going to end.  When 11:30 PM rolled around and nothing much was happening, the leader of the cult went into seclusion.  A little after midnight she returned with the joyous news that the cult's devotion and prayers had averted the disaster, and god had decided to spare the world, solely because of their fidelity.

Hallelujah!  We better keep praying, then!

(Nota bene: The whole incident, and the analysis of the phenomenon by Festinger et al., is the subject of the fascinating book When Prophecy Fails.)

Despite this, the repeated failure of an apocalyptic prophecy can eventually cause followers to lose faith, as evangelical preacher Harold Camping found out.  So the people who believe this stuff often have to engage in some fancy footwork after the appointed day and hour arrive, and nothing happens other than the usual nonsense.

Take, for example, the much-publicized "Mayan apocalypse" on December 21, 2012 that allegedly was predicted by ancient Mayan texts (it wasn't) and was going to herald worldwide natural disasters (it didn't).  The True Believers mostly retreated in disarray when December 22 dawned, as well they should have.  My wife and I threw a "Welcoming In The Apocalypse" costume party on the evening of December 21 (I went as a zombie, which I felt was fitting given the theme), and I have to admit to some disappointment when the hour of midnight struck and we were all still there.  But it turns out that not all of the Mayan apocalyptoids disappeared after the prediction failed; one of them, one Nick Hinton, says actually the end of the world did happen, as advertised...

... but no one noticed.

Hinton's argument, such as it is, starts with a bit of puzzling over why you never hear people talking about the 2012 apocalypse any more.  (Apparently "it didn't happen" isn't a sufficient reason.)  Hinton finds this highly peculiar, and points out that this was the year CERN fired up the Large Hadron Collider and discovered the Higgs boson, and that this can't possibly be a coincidence.  He wonders if this event destroyed the universe and/or created a black hole, and then "sucked us in" without our being aware of it.

[Image licensed under the Creative Commons Lucas Taylor / CERN, CMS Higgs-event, CC BY-SA 3.0]

Me, I think I'd notice if I got sucked into a black hole.  They're kind of violent places, as I described in a recent post about Sagittarius A* and the unpleasant process called "spaghettification."   But Hinton isn't nearly done with his explanation.  He writes:
There's the old cliché argument that "nothing has felt right" since 2012.  I agree with this...  [E]ver since then the world seems to descend more and more into chaos each day.  Time even feels faster.  There's some sort of calamity happening almost daily.  Mass shootings only stay in the headlines for like 12 hours now.  Did we all die and go to Hell?...  Like I've said, I think we live in a series of simulations.  Perhaps the universe was destroyed by CERN and our collective consciousness was moved into a parallel universe next door.  It would be *almost* identical.
Of course, this is a brilliant opportunity to bring out the Mandela effect, about which I've written before.  The idea of the Mandela effect is that people remember various stuff differently (such as whether Nelson Mandela died in prison, whether it's "Looney Tunes" or "Loony Tunes" and "The Berenstein Bears" or "The Berenstain Bears," and so forth), and the reason for this is not that people's memories in general suck, but that there are alternate universes where these different versions occur and people slip back and forth between them.

All of which makes me want to take Ockham's Razor and slit my wrists with it.

What I find intriguing about Hinton's explanation is not all the stuff about CERN, though, but his insistence that the prediction didn't fail because he was wrong; no, the world ended and seven-billion-plus people simply didn't notice.  Having written here at Skeptophilia for over twelve years, I'm under no illusions about the general intelligence level of humanity, but for fuck's sake, we're not that unobservant.  And even if somehow CERN did create an alternate universe, why would it affect almost nothing except for things like the spelling of Saturday morning cartoon titles?

So this is taking the backfire effect and raising it to the level of performance art.  This is saying that it is more likely that the entire population of the Earth was unaware of a universe-ending catastrophe than it is that you're simply wrong.

Which is so hubristic that it's kind of impressive.

But I better wind this up, because I've got to prepare myself for the next end of the world, which (according to Messiah Foundation International, which I have to admit sounds pretty impressive) is going to occur in January of 2026.  This only gives us all a bit shy of three years to get ready, so I really should get cracking on my next novel.  And if that apocalypse doesn't pan out, evangelical Christian lunatic Kent Hovind says not to worry, the Rapture is happening in 2028, we're sure this time, cross our hearts and hope to be assumed bodily into heaven.

So many apocalypses, so little time.

****************************************



Friday, March 17, 2023

The heart of the world

One of the biggest mysteries in science lies literally beneath our feet: the structure and composition of the interior of the Earth.

We have direct access only to the barest fraction of it.  The deepest borehole ever created is the Kola Superdeep Borehole, on the Kola Peninsula in Russia near the border with Norway.  It's 12.26 kilometers deep, which is pretty impressive, but when you realize that the mean radius of the Earth is just under 6,400 kilometers, it kind of puts things in perspective.
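Just to put that in perspective with actual numbers, here's a back-of-the-envelope check (using the commonly cited mean radius of 6,371 km; the exact figure doesn't change the punchline):

```python
# How far toward the center of the Earth does our deepest hole actually get?
depth_km = 12.26     # Kola Superdeep Borehole
radius_km = 6371.0   # Earth's mean radius, commonly cited value

fraction = depth_km / radius_km
print(f"{fraction:.2%}")  # → 0.19%
```

In other words, the deepest we've ever drilled covers less than a fifth of one percent of the distance to the center.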

What we know is that the crust is principally silicate rock -- lower-density felsic rocks (like granite) forming the majority of the continental crust, and denser mafic rocks (like basalt) comprising the thinner oceanic crust.  Beneath that is the semisolid mantle, which makes up two-thirds of the Earth's mass.  Inside that is the outer core, thought (primarily from estimates of density) to be made up of liquid iron and nickel, and within that the inner core, a solid ball of red-hot iron and nickel.

At least that's what we thought.  All of this was determined through inference from evidence like the relative speed of different kinds of seismic waves; despite what Jules Verne would have you believe, no one has been to the center of the Earth (nor is likely to).  But figuring all this out is important not just from the standpoint of adding to our knowledge of the planet we live on, but in comprehending phenomena like magnetic field reversals -- something that would have obvious impacts on our own lives, and which are still poorly understood at best.

We just got another piece of the puzzle in the form of a paper last week in Nature that suggests our picture of the Earth's inner core as a homogeneous ball of solid iron and nickel may not be right.  Using data from seismic waves, scientists at the Australian National University in Canberra have concluded that the inner core itself has two layers.  The exact difference between the two isn't certain -- as I said before, we're limited by what information we can get long-distance -- but the best guess is that it's a difference in crystal structure, probably caused by the immense pressures at the center.

[Image courtesy of Drew Whitehouse, Hrvoje Tkalčić, and Thanh-Son Phạm]

In general, whenever a wave crosses a boundary from one medium to another, it refracts (changes angle); this is why a straw leaning in a glass of water looks like it's bent.  If the angle is shallow enough, some of the wave's energy can also reflect off the interface.  When that happens to seismic waves inside the Earth, those reflected waves bounce around inside the core; when they finally make it back out and are measured by scientists on the Earth's surface, features such as their energy, wavelength, and angle can provide a lot of information about the materials they passed through on their journey.
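The bending follows Snell's law, the same rule that governs the straw in the glass of water.  Here's a minimal sketch, using approximate published P-wave speeds (roughly 13.7 km/s at the base of the mantle and 8.1 km/s at the top of the liquid outer core; the exact values vary with depth):

```python
import math

def refraction_angle(theta1_deg, v1, v2):
    """Snell's law: sin(theta2)/sin(theta1) = v2/v1.

    Returns the refracted angle in degrees, or None if the wave
    undergoes total internal reflection at the boundary.
    """
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if abs(s) > 1.0:
        return None  # no refracted ray; energy reflects instead
    return math.degrees(math.asin(s))

# A P-wave hitting the core-mantle boundary at 30 degrees bends
# toward the normal, because it slows down entering the liquid core.
print(round(refraction_angle(30.0, 13.7, 8.1), 1))  # → 17.2

# Going the other way (fast medium ahead), a steep enough angle
# means the wave can't refract at all and reflects back down.
print(refraction_angle(80.0, 8.1, 13.7))  # → None
```

That second case is exactly the kind of internal reflection that lets seismologists catch waves that have rattled around inside the core before escaping.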

The authors write:
Earth’s inner core (IC), which accounts for less than 1% of the Earth’s volume, is a time capsule of our planet’s history.  As the IC grows, the latent heat and light elements released by the solidification process drive the convection of the liquid outer core, which, in turn, maintains the geodynamo.  Although the geomagnetic field might have preceded the IC’s birth, detectable changes in the IC’s structures with depth could signify shifts in the geomagnetic field’s operation, which could have profoundly influenced the Earth’s evolution and its eco-system.  Therefore, probing the innermost part of the IC is critical to further disentangling the time capsule and understanding Earth’s evolution in the distant past.

The discovery of this hitherto-unknown innermost layer could help us answer one of the most fundamental questions in geology: what the inside of the Earth is actually like.  We still have a very long way to go, of course.  As I said, even understanding how exactly the core generates the Earth's protective magnetic field is far from achieved.  But the new research gives us a deeper comprehension of the structure of the inner core -- the red-hot heart hidden beneath the deceptively tranquil surface of our home planet.

****************************************