Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Monday, June 13, 2022

The google trap

There's a quote usually attributed to the eminent physicist Stephen Hawking (it more likely originated with the historian Daniel Boorstin): "The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge."

Somewhat more prosaically, my dad once said, "Ignorance can be cured.  We're all ignorant about some things.  Stupid, on the other hand, goes all the way to the bone."

Both of these sayings capture an unsettling idea: that it's often more dangerous to think you understand something than it is to admit you don't.  This idea was illustrated -- albeit using an innocuous example -- in a 2002 paper, "The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth," by Leonid Rozenblit and Frank Keil of Yale University.  They asked people to rate their level of understanding of a simple, everyday object (for example, how a zipper works) on a scale of zero to ten.  Then they asked each participant to write down an explanation of how zippers work in as much detail as they could.  Afterward, they asked the volunteers to re-rate their level of understanding.

Across the board, people rated themselves lower the second time, after a single question -- "Okay, then explain it to me" -- shone a spotlight on how little they actually knew.

The problem is, unless you're in school, usually no one asks the question.  You can claim you understand something, you can even have a firmly-held opinion about it, and there's no guarantee that your stance is even within hailing distance of reality.

And very rarely does anyone challenge you to explain yourself in detail.

[Image is in the Public Domain]

If that's not bad enough, a recent paper by Adrian Ward (of the University of Texas at Austin) showed that not only do we understand way less than we think we do, we also fold what we learn from other sources into our own experiential knowledge.  Worse still, that incorporation is so rapid and seamless that afterward, we aren't even aware of where our information (right or wrong) came from.

Ward writes:

People frequently search the internet for information.  Eight experiments provide evidence that when people “Google” for online information, they fail to accurately distinguish between knowledge stored internally—in their own memories—and knowledge stored externally—on the internet.  Relative to those using only their own knowledge, people who use Google to answer general knowledge questions are not only more confident in their ability to access external information; they are also more confident in their own ability to think and remember.  Moreover, those who use Google predict that they will know more in the future without the help of the internet, an erroneous belief that both indicates misattribution of prior knowledge and highlights a practically important consequence of this misattribution: overconfidence when the internet is no longer available.  Although humans have long relied on external knowledge, the misattribution of online knowledge to the self may be facilitated by the swift and seamless interface between internal thought and external information that characterizes online search.  Online search is often faster than internal memory search, preventing people from fully recognizing the limitations of their own knowledge.  The internet delivers information seamlessly, dovetailing with internal cognitive processes and offering minimal physical cues that might draw attention to its contributions.  As a result, people may lose sight of where their own knowledge ends and where the internet’s knowledge begins.  Thinking with Google may cause people to mistake the internet’s knowledge for their own.

I recall vividly trying, with minimal success, to fight this in the classroom.  Presented with a question, many students don't stop to try to work it out themselves; they immediately jump to looking it up on their phones.  (This is one of many reasons I had a rule against having phones out during class -- another exercise in frustration, given how clever teenagers are at hiding what they're doing.)  I tried to make the point over and over that there's a huge difference between looking up a fact (such as the average number of cells in the human body) and looking up an explanation (such as how RNA works).  I use Google and/or Wikipedia for the former all the time.  The latter, on the other hand, makes it all too easy simply to copy down what you find online, so you have an answer to fill in the blank whether or not you have the least idea what any of it means.

Even Albert Einstein, pre-internet though he was, saw the difference, and the potential problem therein.  Once asked how many feet were in a mile, the great physicist replied, "I don't know.  Why should I fill my brain with facts I can find in two minutes in any standard reference book?”

In the decades since Einstein said this, that two minutes has shrunk to about ten seconds, as long as you have internet access.  And unlike the standard reference books he mentioned, you have little assurance that the information you find online is even close to right.

Don't get me wrong; I think that our rapid, and virtually unlimited, access to human knowledge is a good thing.  But like most good things, it comes at a cost, and that cost is that we have to be doubly cautious to keep our brains engaged.  Not only is there information out there that is simply wrong, there are people who are (for various reasons) very eager to convince you they're telling the truth when they're not.  This has always been true, of course; it's just that now, there are few barriers to having that erroneous information bombard us all day long -- and Ward's paper shows just how quickly we can fall for it.

The cure is to keep our rational faculties online.  Find out if the information is coming from somewhere reputable and reliable.  Compare what you're being told with what you know to be true from your own experience.  Listen to or read multiple sources of information -- not only the ones you're inclined to agree with automatically.  It might be reassuring to live in the echo chamber of people and media which always concur with our own preconceived notions, but it also means that if something is wrong, you probably won't realize it.

Like I said in Saturday's post, finding out you're wrong is no fun.  More than once I've posted stuff here at Skeptophilia and gotten pulled up by the short hairs when someone who knows better tells me I've gotten it dead wrong.  Embarrassing as it is, I've always posted retractions, and often taken the original post down.  (There's enough bullshit out on the internet without my adding to it.)

So we all need to be on our guard whenever we're surfing the web or listening to the news or reading a magazine.  Our tendency to absorb information without question, regardless of its provenance -- especially when it seems to confirm what we want to believe -- is a trap we can all fall into, and Ward's paper shows that once inside, it can be remarkably difficult to extricate ourselves.

**************************************

Saturday, June 11, 2022

Locked into error

Back in 2011, author Kathryn Schulz did a phenomenal TED Talk called "On Being Wrong."  She looks at how easy it is to slip into error, and how hard it is not only to correct it, but (often) even to recognize that it's happened.  At the end, she urges us to try to find our way out of the "tiny, terrified space of rightness" that virtually all of us live in.

Unfortunately, that's one thing that she herself gets wrong.  Because for a lot of people, their belief in their rightness about everything isn't terrified; it's proudly, often belligerently, defiant.

I'm thinking of one person in particular, here, who regularly posts stuff on social media that is objectively wrong -- I mean, hard evidence, no question about it -- and does so in a combative way that comes across as, "I dare you to contradict me."  I've thus far refrained from saying anything.  One of my faults is that I'm a conflict avoider, but I also try to be cognizant of the cost/benefit ratio.  Maybe I'm misjudging, but I think the likelihood of my eliciting a "Holy smoke, I was wrong" -- about anything -- is as close to zero as you could get.

Now, allow me to say up front that I'm not trying to imply here that I'm right about everything, nor that I don't come across as cocky or snarky at times.  Kathryn Schulz's contention (and I think she's spot-on about this one) is that we all fall into the much-too-comfortable trap of believing that our view of the world perfectly reflects reality.  One of the most startling bullseyes Schulz makes in her talk is about how it feels to be wrong:

So why do we get stuck in this feeling of being right?  One reason, actually, has to do with the feeling of being wrong.  So let me ask you guys something...  How does it feel -- emotionally -- how does it feel to be wrong?  Dreadful.  Thumbs down.  Embarrassing...  Thank you, these are great answers, but they're answers to a different question.  You guys are answering the question: How does it feel to realize you're wrong?  Realizing you're wrong can feel like all of that and a lot of other things, right?  I mean, it can be devastating, it can be revelatory, it can actually be quite funny...  But just being wrong doesn't feel like anything.

I'll give you an analogy.  Do you remember that Looney Tunes cartoon where there's this pathetic coyote who's always chasing and never catching a roadrunner?  In pretty much every episode of this cartoon, there's a moment where the coyote is chasing the roadrunner and the roadrunner runs off a cliff, which is fine -- he's a bird, he can fly.  But the thing is, the coyote runs off the cliff right after him.  And what's funny -- at least if you're six years old -- is that the coyote's totally fine too.  He just keeps running -- right up until the moment that he looks down and realizes that he's in mid-air.  That's when he falls.  When we're wrong about something -- not when we realize it, but before that -- we're like that coyote after he's gone off the cliff and before he looks down.  You know, we're already wrong, we're already in trouble, but we feel like we're on solid ground.  So I should actually correct something I said a moment ago.  It does feel like something to be wrong; it feels like being right.

What brought this talk to mind -- and you should take fifteen minutes and watch the whole thing, because it's just that good -- is some research out of the University of California, Los Angeles, published a couple of weeks ago in Psychological Review, that looked at the neuroscience of these quick -- and, once made, almost impossible to undo -- judgments about the world.


The study used a technique called electrocorticography to see what was going on in a part of the brain called the gestalt cortex, which is known to be involved in sensory interpretation.  In particular, the team analyzed the activity of the gestalt cortex when presented with the views of other people, some of which the test subjects agreed with, some with which they disagreed, and others about which they had yet to form an opinion.

The most interesting result had to do with the strength of the response.  The reaction of the gestalt cortex is most pronounced when we're confronted with views opposing our own, and with statements about which we've not yet decided.  In the former case, the response is to suppress the evaluative parts of the brain -- i.e., to dismiss immediately what we've read because it disagrees with what we already thought.  In the latter case, it amplifies evaluation, allowing us to make a quick judgment about what's going on, but once that's happened any subsequent evidence to the contrary elicits an immediate dismissal.  Once we've made our minds up -- and it happens fast -- we're pretty much locked in.

"We tend to have irrational confidence in our own experiences of the world, and to see others as misinformed, lazy, unreasonable or biased when they fail to see the world the way we do," said study lead author Matthew Lieberman, in an interview with Science Daily.  "We believe we have merely witnessed things as they are, which makes it more difficult to appreciate, or even consider, other perspectives.  The mind accentuates its best answer and discards the rival solutions.  The mind may initially process the world like a democracy where every alternative interpretation gets a vote, but it quickly ends up like an authoritarian regime where one interpretation rules with an iron fist and dissent is crushed.  In selecting one interpretation, the gestalt cortex literally inhibits others."

Evolutionarily, you can see how this makes perfect sense.  As a proto-hominid out on the African savanna, it was pretty critical to look at and listen to what's around you and make a quick judgment about its safety.  Stopping to ponder could be a good way to become a lion's breakfast.  The cost of making a wrong snap judgment and overestimating the danger is far lower than blithely going on your way and assuming everything is fine.  But now?  This hardwired tendency to squelch opposing ideas without consideration means we're unlikely to correct -- or even recognize -- that we've made a mistake.

I'm not sure what's to be done about this.  If anything can be done.  Perhaps it's enough to remind people -- including myself -- that our worldviews aren't flawless mirrors of reality, they're the result of our quick evaluation of what we see and hear.  And, most importantly, that we never lose by reconsidering our opinions and beliefs, weighing them against the evidence, and always keeping in mind the possibility that we might be wrong.  I'll end with another quote from Kathryn Schulz:
This attachment to our own rightness keeps us from preventing mistakes when we absolutely need to, and causes us to treat each other terribly.  But to me, what's most baffling and most tragic about this is that it misses the whole point of being human.  It's like we want to imagine that our minds are these perfectly translucent windows, and we just gaze out of them and describe the world as it unfolds.  And we want everybody else to gaze out of the same window and see the exact same thing.  That is not true, and if it were, life would be incredibly boring.  The miracle of your mind isn't that you can see the world as it is, it's that you can see the world as it isn't.  We can remember the past, and we can think about the future, and we can imagine what it's like to be some other person in some other place.  And we all do this a little differently...  And yeah, it is also why we get things wrong.

Twelve hundred years before Descartes said his famous thing about "I think therefore I am," this guy, St. Augustine, sat down and wrote "Fallor ergo sum" -- "I err, therefore I am."  Augustine understood that our capacity to screw up, it's not some kind of embarrassing defect in the human system, something we can eradicate or overcome.  It's totally fundamental to who we are.  Because, unlike God, we don't really know what's going on out there.  And unlike all of the other animals, we are obsessed with trying to figure it out.  To me, this obsession is the source and root of all of our productivity and creativity.

**************************************

Friday, June 10, 2022

There's a word for that

I've always had a fascination for words, ever since I was little.  My becoming a writer was hardly in question from the start.  And when I found out that because of the rather byzantine rules governing teacher certification at the time, I could earn my permanent certification in biology with a master's degree in linguistics, I jumped into it with wild abandon.  (Okay, I know that's kind of strange; and for those of you who are therefore worried about how I could have been qualified to teach science classes, allow me to point out that I also have enough graduate credit hours to equal a master's degree in biology, although I never went through the degree program itself.)

In any case, I've been a logophile for as long as I can remember, and as a result, my kids grew up in a household where incessant wordplay was the order of the day.  Witness the version of "Itsy Bitsy Spider" I used to sing to my boys when they were little:
The minuscule arachnid, a spigot he traversed
Precipitation fell, the arachnid was immersed
Solar radiation
Caused evaporation
So the minuscule arachnid recommenced perambulation.
Okay, not only do I love words, I might be a little odd.  My kids developed a good vocabulary probably as much as a defense mechanism as anything else.

[Image is in the Public Domain]

All of this is just by way of saying that I am always interested in research regarding how words are used.  And just yesterday, I ran across a set of data collected by some Dutch linguists a while back regarding word recognition in several languages (including English) -- and when they looked at gender differences, an interesting pattern emerged.

What they did was give respondents a test to see whether they knew the correct definitions of various unfamiliar words, and then sort the results by gender.  It's a huge sample size -- there were over 500,000 respondents to the online quiz.  And it turned out that which words the respondents got wrong was more interesting than which ones they got right.
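
Just to make the analysis concrete, here's a minimal sketch in Python of the kind of computation involved: given a pile of individual responses, work out the fraction of men and of women who knew each word, then rank the words by the size of the gap.  The response records and numbers below are invented for illustration -- this isn't the researchers' actual data or code.

```python
from collections import defaultdict

# Hypothetical response records: (word, respondent's gender, knew the definition).
responses = [
    ("taffeta", "F", True), ("taffeta", "M", False),
    ("codec",   "M", True), ("codec",   "F", False),
    ("tresses", "F", True), ("tresses", "M", True),
]

# Tally correct answers and totals per (word, gender).
correct = defaultdict(int)
total = defaultdict(int)
for word, gender, knew in responses:
    total[(word, gender)] += 1
    if knew:
        correct[(word, gender)] += 1

def rate(word, gender):
    """Fraction of respondents of a given gender who knew the word."""
    n = total[(word, gender)]
    return correct[(word, gender)] / n if n else 0.0

# Positive gap: women did better on the word; negative: men did better.
words = {w for (w, _) in total}
gap = {w: rate(w, "F") - rate(w, "M") for w in words}
for w in sorted(gap, key=gap.get, reverse=True):
    print(f"{w}: women {rate(w, 'F'):.0%}, men {rate(w, 'M'):.0%}, gap {gap[w]:+.0%}")
```

Sort that gap one way and you get the first list below (words men missed more often); sort it the other way and you get the second.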

From the data, they compiled a list of the twelve words that men got wrong more frequently than women. They were:
  • taffeta
  • tresses
  • bottlebrush (the plant, not the kitchen implement, which is kind of self-explanatory)
  • flouncy
  • mascarpone
  • decoupage
  • progesterone
  • wisteria
  • taupe
  • flouncing
  • peony
  • bodice
Then, there were the ones women got wrong more frequently than men:
  • codec
  • solenoid
  • golem
  • mach
  • humvee
  • claymore
  • scimitar
  • kevlar
  • paladin
  • bolshevism
  • biped
  • dreadnought
There are a lot of things that are fascinating about these lists.  The words women knew better are largely about clothes, flowers, and cooking; the words men knew better are about machines and weapons.  (Although I have to say that I have a hard time imagining anyone not recognizing the definitions of tresses and scimitar.)

It's easy to read too much into this, of course.  Even the two words with the biggest gender-based differences (taffeta and codec) were still correctly identified by 43% and 48% of the male and female respondents, respectively.  (Although I will admit that one of the "male" words -- codec -- is the only one on either list that I wouldn't have been able to make a decent guess at.  It's a program or device that encodes and decodes a digital data stream -- compressing video or audio for transmission, for example -- and I honestly don't think I've ever heard it used.)

It does point out, however, that however much progress we have made as a society in creating equal opportunities for the sexes, we still have a significant skew in how we teach and use language, and in the emphasis we place on different sorts of knowledge.

I was also interested in another bit of this study, which is the words that almost no one knew.  Their surveys found that the least-known nouns in the study were the following twenty words.  See how many of these you know:
  • genipap
  • futhorc
  • witenagemot
  • gossypol
  • chaulmoogra
  • brummagem
  • alsike
  • chersonese
  • cacomistle
  • yogh
  • smaragd
  • duvetyn
  • pyknic
  • fylfot
  • yataghan
  • dasyure
  • simoom
  • stibnite
  • kalian
  • didapper
As you might expect, I didn't do so well with these.  There are three I knew because they are biology-related (chaulmoogra, cacomistle, and dasyure); one I got because of my weather-obsession (simoom); one I got because my dad was a rockhound (stibnite); and one I got because of my degree in linguistics (futhorc -- and see, the MA did come in handy!).  The rest I didn't even have a guess about.  (I did look up genipap because it sounds like some kind of STD, and it turns out to be "a tropical American tree with edible orange fruit and useful timber.")

I'm not entirely sure what all this tells us, other than what we started with, which is that words are interesting.  That, and I definitely think you should make sure you have the opportunity to work into your ordinary speech the words brummagem (cheap, showy, counterfeit), smaragd (another name for an emerald), and pyknic (fat, stout, of stocky build).

Although admittedly, I'm probably not the person you should be going to for advice on how to converse normally.

**************************************

Thursday, June 9, 2022

One, two, five!

Sometimes it seems like the members of my family communicate with each other primarily via movie quotes.

In that way we're a little like the Tamarians from the Star Trek: The Next Generation episode "Darmok," except instead of using edifying legends, mythology, and folklore from our history and culture to explain ourselves, we use quotes from Monty Python, Doctor Who, Arrested Development, Miranda, The X Files, and Looney Tunes.

And yes, I am aware of the irony of using a reference to a television show to explain our obsession with references to movies and television shows.

So, Shaka when the walls fell, I guess.

There's pretty much a quote for every occasion.  I can't hear an announcement in an airport without thinking (or often saying aloud), "Don't start that 'red zone/white zone' crap with me, Betty."  (From the movie Airplane, surely one of the most quotable movies ever made.)  If I fuck something up royally, my response is either "Missed it by... that much" (from Get Smart) or "Back to the old fiasco hatchery" (from the inimitable Wile E. Coyote).  If someone makes an egregious misstatement, there's always "That word... I don't think it means what you think it means."  (From, of course, The Princess Bride.)

This habit of making connections between real life and on-screen events also explains why there are times when I'll find something funny that no one else does.  This is why I burst out laughing when I read a new piece of research published in the journal PLOS ONE last week entitled, "Composition of Trace Residues from the Contents of 11th–12th Century Sphero-conical Vessels from Jerusalem," by a team led by Carney Matheson of Lakehead University.

From the title, you'd think that not only would there be nothing whatsoever funny about it, but that it'd be an article of interest only to people fascinated with obscure archaeological trivia.  But I suspect some of you will get why I thought it was funny when you hear what those "trace residues" were.

Gunpowder.  Matheson et al. believe they have discovered...

... medieval hand grenades.

Yes, Monty Python fans, what we have here is clearly the Holy Hand Grenade of Antioch.


There is, unfortunately, no evidence that they were brought out in order to deal with a murderous fluffy bunny, nor that their use was preceded by monks chanting "pie Jesu domine, dona eis requiem" and a reading from the Book of Armaments ("And the Lord spake, saying, 'First shalt thou take out the Holy Pin, then shalt thou count to three.  No more, no less.  Three shall be the number thou shalt count, and the number of the counting shall be three.  Four shalt thou not count, neither count thou two, excepting that thou then proceedest on to three.  Five is right out.  Once the number three, being the third number, be reached, then lobbest thou thy Holy Hand Grenade of Antioch towards thy foe, who being naughty in my sight, shall snuff it.'")

I feel obliged to state up front that I mean no disrespect to Matheson et al., and I have the greatest regard for scholarly research, so any connections I'm making to Killer Rabbits and monks who wallop themselves over the heads with boards is purely a function of my rather loopy brain.  And I suppose it's only fair to show a photo of the actual artifact:

[Photo credit: Robert Mason, Royal Ontario Museum]

In my own defense, I'm sure you'll agree that this looks a great deal like what was left when King Arthur didst lob the Holy Hand Grenade of Antioch at the Beast of Caer Bannog and blew it to tiny bits.

So, there you go.  A post that's more about the workings of my bizarre neural connections than it is about the actual research.  I've been sitting here trying to think of a way to wrap this up, and I think it's only fitting to go with a quote from the Star Trek: The Next Generation episode "Ship in a Bottle," the last line of which is Lieutenant Barclay saying, "Computer: end program."

**************************************

Wednesday, June 8, 2022

The glass RNA factory

A couple of months ago, I wrote about a discovery that has startling (and encouraging) implications for the search for extraterrestrial life -- that amino acids, the building blocks of proteins, are so easy to form abiotically that they are common even in interstellar dust clouds.

Well, because of my twin-brudda-from-anudda-mudda, the wonderful writer and blogger Andrew Butters, I found out that a new bit of research has shown that another piece of biochemistry -- RNA -- is equally easy to make in large quantities.

If anything, this is even more exciting to us aliens-in-space aficionados than the amino acid research was, because the leading model for the origin of life on Earth -- the one most researchers in the field now accept -- is called "RNA world."  The idea has been around since the early 1960s, and simply put, it's that the first nucleic acid to form in the early oceans was not DNA, but RNA.  At first this model met with considerable skepticism.  One common criticism was that the only organisms that encode their genomes as RNA are certain viruses (such as the ones that cause the common cold, flu, rabies, mumps, measles, hepatitis C, and COVID-19); all other organisms encode their genomes as DNA.  The second was that RNA tends to be unstable.  It's a single strand; the shape resembles a spiral with short spokes sticking out at angles along its length, and that open shape allows it to be attacked and broken down readily by solvents (including water).

[Image licensed under the Creative Commons DataBase Center for Life Science (DBCLS), 201904 RNA, CC BY 4.0]

Two subsequent discoveries tilted biochemists toward accepting the RNA world model.  First, it was found that there are stable forms of RNA, such as transfer RNA, that are able to protect themselves from breakdown by having "hairpin loops" -- places where the helix doubles back and bonds to itself through complementary base-pairing, just like DNA has.
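
The hairpin trick is, at bottom, a property of the sequence itself: the bases at one end of a segment have to be the reverse complement of the bases at the other end, so the strand can fold back and pair with itself.  Here's a toy sketch of exactly that check (mine, not anything from the literature -- real RNA structure prediction is far more involved); the sequences and the can_form_hairpin function are made up for illustration.

```python
# Watson-Crick pairing for RNA: A pairs with U, G pairs with C.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Reverse complement of an RNA sequence."""
    return "".join(PAIR[base] for base in reversed(seq))

def can_form_hairpin(seq: str, stem: int = 4, loop: int = 3) -> bool:
    """True if the first `stem` bases can pair with the last `stem` bases,
    leaving at least `loop` unpaired bases in between to form the loop."""
    if len(seq) < 2 * stem + loop:
        return False
    return seq[:stem] == reverse_complement(seq[-stem:])

# GGAC is the reverse complement of GUCC, so the ends of this strand can pair.
print(can_form_hairpin("GGACUUCAGUCC"))   # True  -- folds into a stem-loop
print(can_form_hairpin("GGACUUCAGGGG"))   # False -- the ends can't base-pair
```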

[Image licensed under the Creative Commons Vossman, Pre-mRNA-1ysv-tubes, CC BY-SA 3.0]

The second discovery was that RNA is autocatalytic -- pieces of RNA can actually feed back and speed up the reactions that form more RNA.  DNA doesn't do this, which was a major stumbling block to figuring out how the first self-replicating DNA formed.

So most folks are now convinced that RNA was the first genetic material, and that it was superseded by DNA only after double-stranded RNA formed, followed by chemical alterations of the sugar in the backbone (deoxyribose for ribose) and of one of the nitrogenous bases (thymine for uracil).  But this only shoves the basic problem back one step.  Okay, RNA came before DNA; but what made the RNA?

We've known for ages -- thanks to the stupendous Miller-Urey experiment and the abiotic-synthesis work that followed it -- that making the building blocks of biomolecules, including the nucleotides that form both RNA and DNA, is possible under the conditions that existed on the early Earth.  But how did they link together into the long chains that form the structure of all functional RNA?

The new research indicates that it's amazingly simple -- all you have to do is take a solution of nucleotides and allow it to percolate through the pores of one of the most common rocks on Earth: basaltic volcanic glass.

This stuff is kind of everywhere.  Not only is ninety percent of all volcanic rock on Earth made of basalt, it's also common on the two other rocky worlds we've studied -- the Moon and Mars.  "Basaltic glass was everywhere on Earth at the time," said Stephen Mojzsis, of the Budapest Research Centre for Astronomy and Earth Sciences, who co-authored the study.  "For several hundred million years after the Moon formed, frequent impacts coupled with abundant volcanism on the young planet formed molten basaltic lava, the source of the basalt glass.  Impacts also evaporated water to give dry land, providing aquifers where RNA could have formed."

Basalt also contains two ions that the team showed were critical for forming the RNA nucleotides and then linking them together -- nickel and boron.  The experiments they ran showed that all you had to do was pour the nucleotide slurry over the basaltic glass, and wait -- and voilà, in a day or two you had 100- to 200-subunit-long chains of RNA that look exactly like the kind you find in living things.

Given basalt's ubiquity on rocky planets, this makes it even more likely that there is life elsewhere in the universe, and that its biochemistry might have some striking overlap with ours.  Exciting stuff.

So it looks like the quote from the wonderful movie Contact might well turn out to be prescient.  "The universe is a pretty big place. It's bigger than anything anyone has ever dreamed of before. So if it's just us... seems like an awful waste of space."

**************************************

Tuesday, June 7, 2022

Runes in Maine

Ready for a convoluted story?

Today's journey is about the twistiest trip through mythology, fakery, and pseudohistory I have ever seen, linking the Vikings, the Templars, first century Judaea, and a farm in Maine.  It's the story of the Spirit Pond runestones, an alleged pre-Columbian runic inscription that one guy thinks proves that the Native Americans of the northeastern United States are direct descendants of Jesus Christ.

So pop yourself some popcorn, sit back, and let me tell you a tall tale.

In 1971, Walter Elliott, a carpenter from Phippsburg, Maine, claimed that he had found a stone with some odd inscriptions near a place called Spirit Pond. The inscriptions, he said, looked like Norse runes, so could this possibly be proof that the Norse explorers of the eleventh century, especially Leif Eiríksson and Thorfinn Karlsefni, had made their way to New England?

Part of the inscription on the Spirit Pond runestone [Image is in the Public Domain]

The claim came to the attention of Einar Haugen, Harvard University professor of linguistics, and one of the world's experts on Norse runes.  Haugen pronounced the inscription a fake, claiming that the inscription has "a few Norse words in a sea of gibberish."  Specifically, he said that the use of the "hooked X" or "stung A" character (it can be seen in the top right word above, the second character from the right) was inconsistent with verified eleventh century Norse inscriptions, and in fact was eerily similar to the inscription on the Kensington runestone, found in Minnesota in 1898, which is universally considered to be a modern fake.

Pretty decisive, no?  But as we've seen over and over, a silly old Ph.D. and professorship in a subject doesn't mean that amateurs can't know more.  So the Spirit Pond stone has gained quite a following amongst the Vikings-in-the-Americas crowd.

And as we've also seen, there is no wild theory that can't be made even more bizarre.

Enter geologist Scott Wolter.  Wolter thinks that the Spirit Pond runestone is a genuine archaeological find, but it doesn't mean what its finder claimed -- that it was proof that Eiríksson, Karlsefni, et al. had made it to North America in the eleventh century.  He claims that it was brought to what is now Maine in the fourteenth century...

... by the Knights Templar.

Yes, the Knights Templar, that fertile source of speculation for aficionados of secret societies, which was forcibly disbanded in 1312 and has spawned wacky conspiracy theories ever since.  The Templars ran afoul of the powers-that-be, especially Pope Clement V and King Philip IV of France, mostly because of their money, power, and influence, and Clement and Philip had the leaders arrested on trumped-up charges of heresy and sorcery.  (To be fair, some of their rituals were pretty bizarre.)  Templars who refused to confess -- or who, like their Grand Master Jacques de Molay, recanted confessions extracted under torture -- were burned at the stake.

So, so much for the Templars.  Except for the aforementioned conspiracy theories, of course, which suggest that the main body of the Templars escaped, letting de Molay take the fall (some say de Molay willingly sacrificed himself to let the others get away).  But the question remained: get away to where?

Scott Wolter has the answer.

To Maine, of course.

So they sailed across the Atlantic Ocean to Maine, bringing along Cistercian monks who (for some reason) wrote in Norse runes, and the monks inscribed the Spirit Pond stone.  And Wolter says he knows what the runic inscription means.  Haugen, and other so-called experts, are wrong to say it's gibberish.  The Spirit Pond stone is an incredibly important artifact because it tells how the Templars came to North America, bringing with them the Holy Grail.

And you thought that its final resting place was the "Castle Arrrrggggghhh."

But that's another mistake people make, Wolter said.  The "San Gréal" -- Holy Grail -- is actually a mistranscription of "Sang Réal" -- meaning "royal blood."  In other words, the bloodline of Jesus.  Which means that the Templars were Jesus's direct descendants.  So they arrived in Maine, carrying the Sang Réal, and proceeded to have lots of sex with Native women (apparently ignoring the Templars' compulsory vow of celibacy), meaning that the Native inhabitants of eastern North America are descended from Jesus Christ.

All of this is just jolly news for me, because I am descended through my mom from various members of the Micmac and Maliseet tribes of New Brunswick and Nova Scotia.  So here's yet another branch I can add to my family tree.

Of course, the linguistic world isn't paying this much attention, which pisses Wolter right off.  "These archaeologists have all been programmed [to believe the stones are fakes] and they can’t think outside the box," he said.

Well, sorry, Mr. Wolter.  "Decades of scholarly study" does not equal being "programmed," it equals "knowing what the fuck you're talking about."  Haugen's work in the field of Norse linguistics is the epitome of careful research and thorough study.  So I'm not ready to jettison his expertise because you'd like the northeastern Natives to be Jesus's great-great-great (etc.) grandchildren.

And as far as I can tell, Wolter seems to be thinking so far outside the box that from where he's standing, he couldn't even see the box with a powerful telescope.

In any case, I hope you've enjoyed today's journey through time.  It's not bad as fiction; kind of the bastard child of The Da Vinci Code and Foucault's Pendulum.  But as a real historical claim, it's a bit of a non-starter.

**************************************

Monday, June 6, 2022

A fish out of water

A lot of us probably have moments in our education where the teacher showed us something that overturned some piece of how we saw the universe.  I can remember several:

  • Finding out in physics class that horizontal and vertical motion are entirely independent of each other -- so if you shoot a bullet from a gun horizontally, and at the same moment drop a bullet from the height of the gun barrel, both will hit the ground at the same time.  (The short simulation just after this list makes the point numerically.)
  • Learning that chemical reactions completely change substances' properties; for example, if you take sodium (a soft, silvery metal) and react it with chlorine (a yellow-green, highly toxic gas) you get table salt.
  • Seeing in math class that there are different kinds of infinity -- countable and uncountable -- and the uncountable kinds are way bigger.  Infinitely bigger, in fact.  (If you want to find out more about how this was figured out, and also learn about some other mind-blowing stuff about math, check out my post on bizarre mathematics from a couple of years ago.)
  • Learning in astronomy class that because light travels at a finite speed, any time you are looking at something, you're seeing what it looked like when light left its surface.  So you never see anything in the present -- you see everything as it was in the past.  And the farther it is away from you, the further into the past you're looking.
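
That first item still feels wrong to a lot of people even after they've seen the equations, so here's a minimal numerical sketch of it -- mine, with made-up values for the height and muzzle velocity, and air resistance ignored, which is the standard idealization for this demonstration.  It simulates one bullet fired horizontally and one simply dropped from the same height, and compares when they land.

```python
# Fired-vs-dropped bullet: simple Euler integration, no air resistance.
G = 9.81        # gravitational acceleration, m/s^2
H = 1.5         # starting height of both bullets, m
V_MUZZLE = 400  # horizontal speed of the fired bullet, m/s (made-up value)
DT = 1e-5       # time step, s

def fall(vx: float):
    """Drop a projectile from height H with horizontal speed vx and zero
    vertical speed; return (time to reach the ground, horizontal distance)."""
    x, y, vy, t = 0.0, H, 0.0, 0.0
    while y > 0.0:
        vy -= G * DT    # gravity changes only the vertical velocity
        y += vy * DT
        x += vx * DT    # horizontal motion: constant speed, no coupling
        t += DT
    return t, x

t_fired, d_fired = fall(V_MUZZLE)
t_dropped, d_dropped = fall(0.0)
print(f"fired:   lands after {t_fired:.4f} s, {d_fired:.1f} m downrange")
print(f"dropped: lands after {t_dropped:.4f} s, {d_dropped:.1f} m downrange")
```

The horizontal speed never appears in the vertical update, which is the whole point: the fired bullet ends up a couple of hundred meters downrange, but it hits the ground at the same instant as the dropped one.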

Another one came up in my high school biology class, during the unit that was to launch me on a lifelong fascination: evolution.  Ms. Miller, our bio teacher, told us that all tetrapods (four-limbed vertebrates) came from a common ancestor, and the way we know this is that regardless of the limb's function, they all have the same number and arrangement of bones.  Your arm has one upper arm bone (humerus), two lower arm bones (radius and ulna), eight wrist bones (carpals), five hand bones (metacarpals), and fourteen finger bones (phalanges).  But... so does a whale's flipper, a dog's front leg, a bat's wing, and a lizard's leg.  The sizes differ, but the bones are the same.

[Image is in the Public Domain]

I've never found a single creationist or intelligent design advocate who could give me a cogent explanation of why an omnipotent creator would give a whale articulated finger bones and then encase them in a solid flipper.

In any case, I found the idea boggling: that we have the arm and hand bones we do because we inherited the basic structure from an ancestor hundreds of millions of years ago -- and that we share that same structure, and that same ancestor, with all other four-limbed vertebrates.  And just a couple of weeks ago, a team led by Shigeru Kuratani of the Evolutionary Morphology Laboratory at the RIKEN Cluster for Pioneering Research in Japan found a good candidate for what exactly that common ancestor might be.

Called Palaeospondylus, the great-great (etc.) grandparent of all tetrapods was a fish-like animal, only about five centimeters long.  It swam around in the warm seas of the Devonian Period on the order of four hundred million years ago, so it preceded the previous candidate (Tiktaalik) by a good twenty-five million years.

A reconstruction of Palaeospondylus [Image licensed under the Creative Commons Smokeybjb, Palaeospondylus, CC BY-SA 3.0]

The tipoff that this was the common ancestor of tetrapods had nothing to do with limbs, but with the ear morphology -- they had three semicircular canals, organs critical in balance and equilibrium, just like we do -- and some details of cranial anatomy.  "The strange morphology of Palaeospondylus, which is comparable to that of tetrapod larvae, is very interesting from a developmental genetic point of view," said Tatsuya Hirasawa, who co-authored the study.  "Taking this into consideration, we will continue to study the developmental genetics that brought about this and other morphological changes that occurred at the water-to-land transition in vertebrate history."

I don't know how anyone could think about this -- that a little fish found fossilized in a limestone bed in Scotland might be the last common ancestor between humans, bats, lizards, and salamanders -- and not be at least a little bit amazed.  It brings home what to me is one of the most wonderful things about the evolutionary model; that all life on Earth is connected, not only ecologically, but genetically.  You're a distant cousin to all other terrestrial species, however different they may look from you.

And if learning that doesn't alter your view of the universe, I don't know what would.

**************************************