Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, October 4, 2019

Ignoring the unimportant

Before I get into the subject of today's post, I want all of you to watch this two-minute video, entitled "Whodunnit?"

*****

How many of you were successful?  I know I wasn't.  I've since watched it about a dozen times, usually in the context of my neuroscience class when we were studying perception, and even knowing what was going on I still didn't see it.  (Yes, I'm being deliberately oblique because there are probably some of you who haven't watched the video.  *stern glare*)

This comes up because of some recent research that appeared in Nature Communications about why we get tricked so easily, or (which amounts to the same thing) miss something happening right in front of our eyes.  In "Spatial Suppression Promotes Rapid Figure-Ground Segmentation of Moving Objects," a team made up of Duje Tadin, Woon Ju Park, Kevin C. Dieter, and Michael D. Melnick (of the University of Rochester) and Joseph S. Lappin and Randolph Blake (of Vanderbilt University) describes a fascinating experiment showing that when we look at something, our brains actively suppress the parts of it we've (subconsciously) decided are unimportant.

The authors write:
Segregation of objects from their backgrounds is one of vision’s most important tasks.  This essential step in visual processing, termed figure-ground segmentation, has fascinated neuroscientists and psychologists since the early days of Gestalt psychology.  Visual motion is an especially rich source of information for rapid, effective object segregation.  A stealthy animal cloaked by camouflage immediately loses its invisibility once it begins moving, just as does a friend you’re trying to spot, waving her arms amongst a bustling crowd at the arrival terminal of an airport.  While seemingly effortless, visual segregation of moving objects invokes a challenging problem that is ubiquitous across sensory and cognitive domains: balancing competing demands between processes that discriminate and those that integrate and generalize.  Figure-ground segmentation of moving objects, by definition, requires highlighting of local variations in velocity signals.  This, however, is in conflict with integrative processes necessitated by local motion signals that are often noisy and/or ambiguous.  Achieving an appropriate and adaptive balance between these two competing demands is a key requirement for efficient segregation of moving objects.
The most fascinating part of the research was that they found you can get better at doing this -- but only at the expense of getting worse at perceiving other things.  They tested people's ability to detect a small moving object against a moving background, and found most people were lousy at it.  After five weeks of training, though, they got better...

... but not because they'd gotten better at seeing the small moving object.  Tested by itself, that didn't change.  What changed was that they got worse at seeing when the background was moving.  Their brains had decided the background's movement was unimportant, so they simply ignored it.

"In some sense, their brain discarded information it was able to process only five weeks ago," lead author Duje Tadin said in an interview in Quanta.  "Before attention gets to do its job, there’s already a lot of pruning of information.  For motion perception, that pruning has to happen automatically because it needs to be done very quickly."

The last thing a wildebeest ever ignores.  [Image licensed under the Creative Commons Nevit Dilmen, Lion Panthera leo in Tanzania 0670 Nevit, CC BY-SA 3.0]

All of this reinforces once again how generally inaccurate our sensory-integrative systems are.  Oh, they work well enough; they had to in order to be selected for evolutionarily.  But a gain in efficiency, and its subsequent gain in selective fitness, means ignoring as much as (or more than) you're actually observing.  Which is why we so often find ourselves in situations where we and our friends relate completely different versions of events we both participated in -- and why, in fact, there are probably times when we're both right, at least partly.  We're just remembering different pieces of what we saw and heard -- and misremembering other pieces in different ways.

So "I know it happened that way, I saw it" is a big overstatement.  Think about that next time you hear about a court case where a defendant's fate depends on eyewitness testimony.  It may be the highest standard in a court of law -- but from a biological perspective, it's on pretty thin ice.

********************************

This week's Skeptophilia book recommendation is by the team of Mark Carwardine and the brilliant author of The Hitchhiker's Guide to the Galaxy, the late Douglas Adams.  Called Last Chance to See, it's about a round-the-world trip the two took to see the last populations of some of the world's most severely endangered animals, including the Rodrigues Fruit Bat, the Mountain Gorilla, the Aye-Aye, and the Komodo Dragon.  It's fascinating, entertaining, and sad, as Adams and Carwardine take an unflinching look at the devastation being wrought on the world's ecosystems by humans.

But it should be required reading for anyone interested in ecology, the environment, and the animal kingdom. Lucid, often funny, always eye-opening, Last Chance to See will give you a lens into the plight of some of the world's rarest species -- before they're gone forever.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]





Thursday, October 3, 2019

Breaking the world in two

It's no revelation to regular readers of Skeptophilia that I'm fascinated with quantum physics.

In fact, some years ago I was in the car with my younger son, then about 17, and we were discussing the difference between the Many-Worlds and Copenhagen Interpretations of the collapse of the wave function (as one does), and he said something that led to my writing my time-travel novel, Lock & Key: "What if there was a place that kept track of all the possible outcomes, for every decision anyone makes?"

And thus was born the Library of Possibilities, and its foul-mouthed, Kurt-Cobain-worshiping Head Librarian, Archibald Fischer.

The Many-Worlds Interpretation -- which, put simply, posits that at every point where any decision could have gone two or more ways, the universe splits -- has always fascinated me, but at the same time, it does seem to fall into Wolfgang Pauli's category of "not even wrong."  It's not falsifiable, because at every bifurcation, the two universes become effectively walled off from each other, so we wouldn't be able to prove or disprove the claim either way.  (This hasn't stopped fiction writers like me from capitalizing on the possibility of jumping from one to the other; this trope has been the basis of dozens of plot lines in Star Trek alone, where Geordi LaForge was constantly having to rescue crew members who fell through a rip in the space-time continuum.)

So it was with great curiosity that I read an article by physicist Sean Carroll that appeared in Literary Hub last week, which looks at the possible consequences in our own universe if Many-Worlds turns out to be true -- and at a way to use quantum mechanics as a basis for making choices.

[Image is in the Public Domain]

Carroll writes:
[Keep in mind] the importance of treating individuals on different branches of the wave function as distinct persons, even if they descended from the same individual in the past.  There is an important asymmetry between how we think about “our future” versus “our past” in Many-Worlds, which ultimately can be attributed to the low-entropy condition of our early universe. 
Any one individual can trace their lives backward to a unique person, but going forward in time we will branch into multiple people.  There is not one future self that is picked out as "really you," and it’s equally true that there is no one person constituted by all of those future individuals.  They are separate, as much as identical twins are distinct people, despite descending from a single zygote.
We might care about what happens to the versions of ourselves who live on other branches, but it’s not sensible to think of them as "us."
Carroll's point is this: if you buy Many-Worlds, should we concern ourselves with the consequences of our decisions at all?  After all, if every possible outcome happens in some universe somewhere -- if everything that can happen, will happen -- then the net result of our decision-making is exactly zero.  If in this branch you make the decision to rob a bank, and in the other you decide not to, this is precisely the same outcome as if you decided not to in this branch and your counterpart decided to go through with the robbery in the other one.  But as Carroll points out, while it doesn't make any overall difference if you take into account every possible universe, that's a perspective none of us actually has.  Your decision in this branch does matter to you (well, at least I hope it does), and it certainly has consequences for your future in the universe you inhabit -- as well as restricting what choices are available to you for later decision-making.

 If you'd like to play a little with the idea of Many-Worlds, you can turn your decision-making over to a purely quantum process via an app for iPhones called "Universe Splitter."  You ask the app a two-option question -- Carroll's example is, "Should I have pepperoni or sausage on my pizza tonight?" -- and the app sends a signal to a physics lab in Switzerland, where a photon is sent through a beam-splitter with detectors on either side.  If the photon goes to the left, you're told to go with option 1 (pepperoni), and if to the right, option 2 (sausage).  So here, as in the famous Schrödinger's Cat thought experiment, the outcome is decided by the actual collapse of an actual wave function, and if you buy Many-Worlds, you've now chopped the universe in two because of your choice of pizza toppings.
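If you don't own an iPhone, the decision-making logic -- though emphatically not the physics -- is easy to fake.  Here's a minimal sketch; the function name and signature are my own invention, not the app's, and Python's pseudorandom bit is fully classical, so no universes are harmed:

    import random

    def universe_splitter(option_1, option_2):
        # Classical stand-in for the photon hitting a 50/50 beam splitter:
        # one fair random bit decides.  (No actual branching implied.)
        return option_1 if random.getrandbits(1) == 0 else option_2

    print(universe_splitter("pepperoni", "sausage"))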

What I wonder about, though, is that getting the results doesn't end the decision-making; it just adds one more step.  Once you get the results, you have to decide whether or not to abide by them, so once again you've split the universe (into "abide by the decision" and "don't" branches).  How many of us have put a decision up to a flip of the coin, then when the results came in, thought, "That's not the outcome I wanted" and flipped the coin again?  What's always bothered me about Many-Worlds is that it's an embarrassment of riches.  We're constantly engaging in situations that could go one of two or more ways, so within moments, the number of possible outcomes in the entire universe becomes essentially infinite.  Physicists tend to be (rightly) suspicious of infinities, and this by itself makes me dubious about Many-Worlds.  (I deliberately glossed over this point in Lock & Key, and implied that all human choices could be catalogued in a library -- albeit a very, very large one.  That may be the single biggest whopper I've told in any of my fiction, even though as a speculative fiction writer my stock in trade is playing fast-and-loose with the universe as it is.)
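Just to put numbers on that embarrassment of riches, here's the back-of-the-envelope version, under the (naive, assumed) model that every choice is a clean two-way split:

    # n binary decisions -> 2**n branches
    for n in (10, 100, 300):
        print(f"{n} two-way splits -> {2 ** n:.3e} branches")

Three hundred coin-flips' worth of decisions already yields around 2 x 10^90 branches -- more than the roughly 10^80 atoms in the observable universe.  Hence the physicists' suspicion.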

Carroll is fully aware of how bizarre the outcome of Many-Worlds is, even though (by my understanding) he appears to be in favor of that interpretation over the seemingly-arbitrary Copenhagen Interpretation.  He says -- and this quote seems as fitting a place to stop as any:
Even for the most battle-hardened quantum physicist, one must admit that this sounds ludicrous.  But it’s the most straightforward reading of our best understanding of quantum mechanics.   
The question naturally arises: What should we do about it?  If the real world is truly this radically different from the world of our everyday experience, does this have any implications for how we live our lives?

Largely—no. To each individual on some branch of the wave function, life goes on just as if they lived in a single world with truly stochastic quantum events...  As counterintuitive as Many-Worlds might seem, at the end of the day it doesn’t really change how we should go through our lives.
********************************






Wednesday, October 2, 2019

Inside an animal's mind

There's a doggy intelligence test that most dogs can't pass -- for an interesting reason.

The test involves placing a treat on the floor, stepping ten feet or so away, and letting the dog into the room.  You point to the treat, and the dog has to use that information to find the treat (i.e., not just sniff around until they blunder on it).

It may seem simple, but success at this requires a remarkable degree of sophistication.  What the dog has to be able to do is look at you, understand the concept of "pointing," and then think, "From where (s)he's standing, what direction is that finger pointing?"  In other words, the dog has to realize that another individual is seeing things from a different perspective, and has different information about how the world looks.

Success at this test shows the rudiments of a theory of mind -- an understanding that all sentient individuals see what's around them from their own personal point of view.  Most dogs in this scenario will respond by coming up and sniffing the person's hand, or by becoming confused and simply wandering around because they don't have any idea what the owner is expecting them to do (and usually finding the treat accidentally, so in some sense, they win anyhow).

Only one of the many dogs I've known was able to pass the Theory of Mind Test.  She was a neurotic, hyperactive half border collie, half coonhound named Doolin.  Doolin is far and away the smartest dog I've ever known.  She figured out how to unlatch the slide bolts on our gates with her teeth -- simply from watching us do it.  She not only passed the Theory of Mind Test, she also had no problem with the Mirror Test -- when she saw her reflection, she knew it was her and not another dog.  The first time she saw her reflection in a full-length mirror, she barked -- once.  Then she sort of went, "Oh, ha-ha, that's me, I get it" and never did it again.

Doolin the Canine Genius.  Yes, she did always look this fretful.  I guess being that smart means you've got a lot on your mind.

On the other hand, one of our current dogs, Lena -- who, and I say this with all due affection, has the IQ of a lint ball -- spends hours entertaining herself by standing at the end of our dock and barking at her own reflection in the pond.  ("There's that damn water dog again!  She's a pretty wily one, that water dog, but I'll get her this time!")

Lena, whose perpetually happy expression communicates either "What, me worry?" or else, "Derp."

This comes up because of a cool study published last week in the Proceedings of the National Academy of Sciences, called "Great Apes Use Self-Experience to Anticipate an Agent’s Action in a False-Belief Test," by Fumihiro Kano, Satoshi Hirata, and Masaki Tomonaga (of Kyoto University), and Josep Call and Christopher Krupenye (of the University of St. Andrews).  What the researchers did was to show half of their ape test subjects a box with an opaque barrier, and the other half a box with a transparent barrier; the apes were then allowed to observe a human interacting with the barrier from a distance at which it was impossible to tell whether the barrier was opaque or transparent.  In other words, they had to interpret the behavior of another individual based not on what they themselves were seeing, but on what they could infer about what that individual saw.

And they did it flawlessly.  When the apes saw that an object had been moved behind an opaque barrier, they guessed that the human trying to look through the barrier wouldn't know it'd been moved -- and their eyes tracked in the direction of where they expected the human to reach (i.e., where the object was before the barrier was lowered).  From these results, it's clear that apes understand that each individual -- ape or human or otherwise -- has his or her own perspective, and they're not all the same.  Like us humans, they recognize that we don't all have access to the same information.

What this immediately brings up for me is our treatment of non-human animals.  My Animal Physiology professor in college -- one of the only college teachers I had who was truly an asshole -- scoffed at the idea that animals had emotions or could experience pain in the same way a human did.  With the perspective of time, I now realize that he hadn't come to this conclusion based on any scientific evidence, but because it made it much easier for him to rationalize hurting animals "in the name of science" without it putting a ding in his conscience.  We now know that many species grieve the death of one of their fellow creatures, bond strongly to their owners, and remember both good and bad treatment (if you don't believe this last one, take a look at this short video of a lion who was reintroduced to the wild, and then a year later remembered the people who'd rescued him -- a video that never fails to bring me to tears).

So we need to throw out this silly dichotomy of "human versus animal."  First, humans are animals.  Second, all the things we think of as being quintessentially human -- emotions, bonding, logic/problem solving, and the ability to take another's perspective -- are not either/or, "we've got 'em and you don't" characteristics.  They exist on a spectrum, and our determination to see ourselves as qualitatively different from the rest of the animal kingdom should be jettisoned as the wrong-headed nonsense it is.  Any difference between us and our non-human cousins is purely quantitative -- and the quantities involved appear, on the whole, to be exceedingly small.

********************************






Tuesday, October 1, 2019

Noise alert

The week after I retired I made the mistake of saying to my wife, "I don't know what I'm going to do with all of my free time!"

Two days later we found out we had to have major foundation work done on our house.  I do mean major; erosion and settling on one corner were causing the slab to twist, and if we didn't do something, our slab -- and almost certainly our walls -- were going to crack catastrophically.

So yeah.  Me and my big mouth.  It's times like this I have a hard time maintaining my status as Non-Superstitious Guy.

The foundation work required that we more or less gut our formerly-finished basement.  We were already planning on redoing it, just not this completely or this precipitously.  It could be a nice space -- it's got a walk-out (we're built on a hill, which is part of what caused the problem in the first place), and with some messing about it could become a den or even a rental apartment, now that we're empty nesters and it's just me and Carol in this big house.

Me and my son working on demolition.  You can probably see the amazing family resemblance between us.

In any case, this all comes up because of a paper that appeared last week in Nature Communications about why we perceive some sounds as unpleasant (such as shop vacs, reciprocating saws, dehumidifiers, and air filters -- all of which we had going at once down there).  And it turns out that it's not just the volume (amplitude) of the sound waves.

In "The Rough Sound of Salience Enhances Aversion Through Neural Synchronisation," by Luc H. Arnal, Andreas Kleinschmidt, Laurent Spinelli, Anne-Lise Giraud, and Pierre Mégevand of the University of Geneva, we find that the degree of perceived unpleasantness of a sound has to do with repeated peaks in "fast repetitive modulations" in the sound.  Put simply, there are two kinds of frequency most sounds have: the fundamental frequency of the tone, which we perceive as its pitch; and the rise and fall of overall loudness.  And what the researchers discovered is when that second frequency is between 30 and 150 hertz, we find it really unpleasant.  (One hertz is one vibration per second; so even 30 hertz is fast enough that we're not consciously aware of it as a repetitive noise.)

Apparently sounds in that range cause our neurons to synchronize at that frequency, heightening awareness and making them difficult to ignore.  The researchers suspect that it may be an evolved response because those sorts of noises may signal danger, but that's speculation at this point.

The authors write:
Fast repetitive modulations produce “temporally salient” flickering percepts (e.g. strobe lights, vibrators, and alarm sounds), which efficiently capture attention, generally induce rough and unpleasant sensations, and elicit avoidance.  Despite the high ecological relevance of such flickering stimuli, there is to our knowledge no existing operational definition of temporal salience and only limited experimental work accounting for the intriguing aversive sensation such auditory textures produce and the reactions they trigger.  Here, we introduce and explore the notion of temporal salience and investigate its behavioural and neural underpinnings.  Of note, although salience may not systematically result in aversive percept, we argue that in this specific context, temporal salience—owing to the imperative effect of exogenously saturating perceptual systems in time—constitutes a valid proxy of aversion.  Therefore, we hypothesise that providing fast, but still discretisable and perceptible, temporally salient acoustic cues should enhance neural processing and ensuing aversive sensation.
This discovery led to some surprising connections.  "These sounds solicit the amygdala, hippocampus and insula in particular, all areas related to salience, aversion and pain.  This explains why participants experienced them as being unbearable," said Luc Arnal, who was the paper's lead author.   "This is the first time that sounds between 40 and 80 hertz have been shown to mobilise these neural networks, although the frequencies have been used for a long time in alarm systems...  We now understand at last why the brain can't ignore these sounds.  Something particular happens at these frequencies, and there are also many illnesses that show atypical brain responses to sounds at 40 Hz.  These include Alzheimer's, autism and schizophrenia."

Which is unexpected and startling.  What is happening in the brain at those frequencies -- and how does it connect with overall mental functioning?  Does schizophrenia (for example) involve some sort of "brain noise" that is at a frequency that the sufferer can't ignore?

In any case, it's a fascinating piece of research, and on a more banal level explains why I find that shop vac so damned annoying.  At least we've got the demolition done, so I won't have any more huge messes to clean up.

Unless the universe is listening and causes some catastrophic upheaval in another part of our house.  You never know.  Just because I'm not superstitious doesn't mean I can't jinx myself.

********************************






Monday, September 30, 2019

Feathered serpent gods and free association

For all my fairly persistent railing against people who make outlandish, unverifiable claims, I find it even more perplexing when people make outlandish, demonstrably false claims, and amazingly enough I'm not talking about Donald Trump.

One of the problems, though, is that a lot of woo-woo claims are in the category of what physicist Wolfgang Pauli called "not even false" -- they're not verifiable in a scientific sense.  I mean, it's one thing to claim that last night your late Aunt Gertrude visited in spirit form and told you her secret recipe for making her Extra-Zesty Bean Dip.  I couldn't disprove that even if I wanted to, which I don't, because I actually kind of like bean dip.

But when someone makes a statement that is (1) falsifiable, and (2) clearly incorrect, and yet stands by it as if it made complete sense... that I find baffling.  "I'm sorry," they seem to be saying, "I know you've demonstrated that gravity pulls things toward the Earth, but I believe that in reality, it works the opposite way, so I'm wearing velcro shoes so I don't fall upward."

And for the record, I am also not talking about either Flat Earthers or biblical creationism.

This all comes up because of an article that appeared on Unexplained Mysteries a while back, the link to which I was sent by a loyal reader of Skeptophilia.  Entitled "Easter Island Heads -- They Speak At Last," it was written by L. M. Leteane.   If that name sounds familiar to regular readers of this blog, it's because Leteane has appeared here before, most recently for claiming that the Central American god Quetzalcoatl and the Egyptian god Thoth were actually the same person, despite one being a feathered snake and the other being a shirtless dude with the head of an ibis, which last I checked hardly look alike at all.  Be that as it may, Leteane concludes that this is why the Earth is going to end when a comet hits it in the year 3369.

So I suppose that given his past attempts, we should not expect L. M. Leteane to exactly knock us dead in the logic department.

But even starting out with low expectations, I have to say that he has outdone himself this time.

Here's the basic outline of his most recent argument, if I can dignify it by calling it that. Fasten your seatbelts, it's gonna be a bit of a bumpy ride.
  1. The Bantu people of south-central Africa came originally from Egypt, which in their language they called Khama-Roggo.  This name translates in Tswana as "Black-and-Red Land."
  2. Charles Berlitz, of The Mystery of Atlantis fame, says that Quetzalcoatl also comes from "Black-and-Red Land."  Berlitz, allow me to remind you, is the writer about whose credibility the skeptical researcher Larry Kusche said, "If Berlitz were to report that a ship was red, the chances of it being some other color is almost a certainty."
  3. The Olmecs were originally from Africa, but then they accompanied the god Thoth to Central America.  In a quote that I swear I am not making up, "That is evidently why their gigantic sculptured heads are always shown helmeted."
  4. The Babylonian goddess Ishtar was also a real person, who ruled in the Indus Valley for a while (yes, I know that India and Babylonia aren't the same place; just play along, okay?) until she got fed up and also moved to Central America.  She took some people with her called the Kassites.  This was because she was heavily interested in tin mining.
  5. Well, three gods in one place are just too many (three too many, in my opinion), and this started a war.  Hot words were spoken.  Nuclear weapons were detonated.   Devastation was wreaked.   Passive voice was used repeatedly for dramatic effect.
  6. After the dust settled, the Olmecs, who were somehow also apparently the Kassites and the Bantu, found themselves mysteriously deposited on Easter Island.  A couple more similarities between words in various languages and Pascuanese (the language of the natives of Easter Island) are given, the best one being "Rapa Nui" (the Pascuanese name for the island) meaning "black giant" because Rapa is a little like the Hebrew repha (giant) and Nui sounds like the French nuit (night).  This proves that the island was settled by dark-skinned giant people from Africa.  Or somewhere.
  7. The Olmecs decided to name it "Easter Island" because "Easter" sounds like "Ishtar."
  8. So they built a bunch of stone heads.  Q.E.D.
[Image licensed under the Creative Commons Hhooper1 at English Wikipedia, Easter Island Ahu (2006), CC BY 2.5]

Well. I think we can all agree that that's a pretty persuasive logical chain, can't we?

Okay, maybe not so much.  

Let's start with the linguistic funny business.  Unfortunately for L. M. Leteane, there is a fundamental rule he seems to be unaware of, which is, "Do not fuck around with a linguist."  Linguistics is something I know a bit about; I have an M. A. in Historical Linguistics (yes, I know, I spent 32 years teaching biology.  It's a long story) and I can say with some authority that I understand how language evolution works.  

And one of the first things you're taught in that field is that you can't base language relationships on one or two words -- chance correspondences are all too common.  So the fact that roggo means "red" in Tswana (which I'm taking on faith, because Leteane himself is from Botswana and my expertise is not in African languages) and rouge is French for "red" doesn't mean a damn thing; they just happen to share a few letters.  Rouge goes back to the Latin ruber, which isn't descended from Ancient Greek erythros but is its cognate; both trace back to a reconstructed Proto-Indo-European root, *reudh-.  Any resemblance to the Tswana word for "red" is coincidental.  And as for "Rapa Nui" meaning "black giant" because of some similarity to those words in (respectively) French and Hebrew, that's ridiculous; Pascuanese is a Polynesian language, which is neither Indo-European nor Semitic, and has no underlying similarity to either French or Hebrew other than all of them being languages spoken by people somewhere.
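And chance correspondences really are that common -- you can convince yourself with a toy Monte Carlo experiment (entirely my own, with a deliberately cartoonish phonology).  Build two unrelated vocabularies for the same thousand meanings and count the look-alikes:

    import random

    # Two UNRELATED languages, each assigning a random CVCV word to the
    # same 1000 meanings.  How many pairs "look related" by sheer chance?
    random.seed(1)
    C, V = "ptkbdgmnsl", "aeiou"

    def word():
        return "".join(random.choice(bank) for bank in (C, V, C, V))

    lookalikes = sum(word()[:2] == word()[:2] for _ in range(1000))
    print(lookalikes, "chance look-alikes out of 1000 meanings")

Even this crude model turns up on the order of twenty spurious matches -- and real languages, with far bigger vocabularies and looser "sounds kind of similar" standards, do even better.  Which is exactly why single-word comparisons prove nothing.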

And as far as Easter Island being named after Ishtar... well, let's just say it'll take me a while to recover from the headdesk I did when I read that.  Easter Island was so named by the Dutch explorer Jacob Roggeveen, because he first spotted it on Easter Sunday in 1722.  He called it Paasch-Eyland, Dutch for "Easter Island;" its official name is Isla de Pascua, which means the same thing in Spanish.   Neither one sounds anything like "Ishtar." 

And for the record: "Ishtar" and "Easter" don't have a common root anyway, something I dealt with back in 2014 when a thing kept being circulated that Easter was a pagan holiday involving sacrificing children to Babylonian gods.  Which I probably don't need to point out is 100% USDA Grade-A bullshit.  A quote from that post, which is just as applicable here: "Linguistics is not some kind of cross between free association and the game of Telephone."

And as for the rest of it... well, it sounds like the plot of a hyper-convoluted science fiction story to me.  Gods globe-trotting all over the world, bringing along slave labor, and having major wars, and conveniently leaving behind no hard evidence whatsoever.

The thing I find maddening about all of this is that Leteane mixes some facts (his information about Tswana) with speculation (he says that the name of the tin ore cassiterite comes from the Kassites, which my etymological dictionary says is "possible," but gives two other equally plausible hypotheses) with outright falsehood (that Polynesian, Bantu, Semitic, and Indo-European languages share lots of common roots) with wild fantasy (all of the stuff about the gods).  And people believe it.  His story had, last I checked, been tweeted and Facebook-liked dozens of times, and amongst the comments I saw was, "Brilliant piece of research connecting all the history you don't learn about in school!  Thank you for drawing together the pieces of the puzzle!"

So, anyway. I suppose I shouldn't get so annoyed by all of this.  Actually, on the spectrum of woo-woo beliefs, this one is pretty harmless.  No one ever blew himself up in a crowded market because he thought that the Olmecs came from Botswana.  My frustration is that there are seemingly so many people who lack the ability to think critically -- to look at the facts of an argument, and how the evidence is laid out, and to see if the conclusion is justified.  The problem, of course, is that learning the principles of scientific induction is hard work.  Much easier, apparently, to blather on about feathered serpents and goddesses who are seriously into tin.

********************************






Saturday, September 28, 2019

A titanic undertaking

While I first ran into the idea of life on other worlds when I was a kid watching shows like Lost in Space and Star Trek, it wasn't until I was in college and read Arthur C. Clarke's followup to his novel 2001: A Space Odyssey, called 2010: Odyssey Two, that I first considered life around moons in our own Solar System.

The premise of the book is that there is a developing intelligent species on Europa, one of the so-called "Galilean" moons of Jupiter.  It's not such a far-fetched idea; Europa has a water-ice crust and might well have liquid water underneath it, so it's entirely possible there's some life form or another living down there.  (In the book, there was, and the super-intelligent civilization that sent the famous monolith to Earth in the previous book starts broadcasting the message, "All these worlds are yours -- except Europa.  Attempt no landings there" in an attempt to keep humans from dropping in and fucking things up, which you have to admit we have a tendency to do.)

Europa is only one candidate for hosting life, however.  An even better bet is Titan, the largest moon of Saturn and the second largest (after Jupiter's moon Ganymede) moon in the Solar System.  It's larger than the planet Mercury, although less than half as massive, and its surface seems to be mostly composed of water ice and ammonia -- and the Cassini-Huygens mission found lakes of liquid hydrocarbons at its poles, which is certainly suggestive of some fancy organic chemistry going on there.

A photograph of Titan taken by Cassini-Huygens.  Its featurelessness is because we're seeing the tops of the clouds -- thought to be, basically, photochemical smog.  [Image is in the Public Domain, courtesy of NASA/JPL]

In any case, it's a place ripe for some serious exploration.  And it's certainly looking better than even the nearest stars; our fastest spacecraft, Deep Space 1, would take about 81,000 years to reach the nearest star, Proxima Centauri, which is a little long to wait for results.  So I was thrilled to find out that NASA is talking about a mission to Titan -- one that involves packs of "shapeshifting" robot drones.
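That 81,000-year figure checks out on the back of an envelope, by the way -- the ~56,000 km/h speed for Deep Space 1 is my assumed round number:

    # Back-of-envelope check on the 81,000-year travel time
    KM_PER_LY = 9.461e12           # kilometres in one light-year
    dist_km = 4.24 * KM_PER_LY     # Proxima Centauri is ~4.24 light-years away
    speed_kmh = 56_000             # Deep Space 1's rough speed (my assumption)
    years = dist_km / speed_kmh / (24 * 365.25)
    print(f"~{years:,.0f} years")  # prints ~81,718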

One limitation of any probe we've sent out is that even if it's working optimally, it can still only survey a minuscule percentage of the target's surface.  What the planned Shapeshifter mission would do is send out a spacecraft composed of hundreds (or more) of smaller, self-propelled robotic craft that can roam around exploring the surface, or dive down, puncture the crust, and see what's in the oceans we believe exist below it.

"We have very limited information about the composition of the surface," said team leader Ali Agha, of NASA's Jet Propulsion Laboratory.  "Rocky terrain, methane lakes, cryovolcanoes – we potentially have all of these, but we don't know for certain.  So we thought about how to create a system that is versatile and capable of traversing different types of terrain but also compact enough to launch on a rocket."

The difficulty -- well, one of the many difficulties -- is whether we'll recognize life on Titan if we find it.  Besides an atmosphere that's mostly nitrogen and methane, Titan has an average surface temperature of around -180 C, which is a little chilly.  So any living thing there would have to be adapted to seriously different conditions than anything we've found on Earth.  There's no reason to believe that it would share characteristics with any terrestrial life form beyond the most basic requirements for life -- reproduction, metabolism, and some kind of heritable genetic code -- so we'll have to be pretty willing to expand our definition of "living thing" or we'll likely miss it entirely.  (Remember the Horta, from the famous original Star Trek episode "The Devil in the Dark"?  It was a silicon-based life form that used hydrofluoric acid instead of water as its principal circulatory solvent -- and also as a defense mechanism, as various red-shirted unfortunates found out.  The intrepid crew of the Enterprise at first thought the Horta was some bizarre geological formation -- which, of course, it sort of was.)

In any case, I hope Agha's project gets off the ground, both figuratively and literally.  If we can't develop faster-than-light travel, and unfortunately Einstein's ultimate universal speed limit seems to be strictly enforced in most jurisdictions, investigating other star systems is kind of impractical.  So we probably should focus on what's going on here at home -- and hope we're not told, "Attempt no landings on Titan."

Although if we were, that would be eye-opening in an entirely different way.

**********************************

This week's Skeptophilia book recommendation is especially for those of you who enjoy having your minds blown.  Niels Bohr famously said, "Anyone who is not shocked by quantum theory has not understood it."  Physicist Philip Ball does his best to explain the basics of quantum theory -- and to shock the reader thereby -- in layman's terms in Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different, which was the winner of the 2018 Physics Book of the Year.

It's lucid, fun, and fascinating, and will turn your view of how things work upside down.  So if you'd like to know more about the behavior of the universe on the smallest scales -- and how this affects us, up here on the macro-scale -- pick up a copy of Beyond Weird and fasten your seatbelt.

[Note:  If you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]





Friday, September 27, 2019

Celebrity re-invention

Yesterday a friend and loyal reader of Skeptophilia brought to my attention a claim I hadn't run into before: that actor Morgan Freeman is actually the same person as musician Jimi Hendrix.

My first thought was, "Wait a moment.  Jimi Hendrix is dead and Morgan Freeman isn't," which you'd think would kind of preclude them from being the same person.  But what the writer of the article, Sean Adl-Tabatabai, is saying is that Jimi Hendrix faked his death, then reinvented himself as Morgan Freeman.

[Image licensed under the Creative Commons David Sifry, Morgan Freeman, 2006, CC BY 2.0]

The main evidence for the claim, if I can dignify it with that word, is that there's no record of Freeman before Hendrix's death from a drug overdose in 1970.  Which, as far as I can find out, is patently false; even his Wikipedia page lists plenty of things he did before 1970 (at which point Freeman would have been 33 years old), including easily-verifiable stuff like winning a statewide drama competition in Mississippi at age 12, performing in a radio show while in high school, and a four-year stint in the military.

Adl-Tabatabai says all that stuff is made up, and it's been done to protect Freeman from the notoriety he'd get if information about his former life came out.  As far as his own evidence, it mostly seems to consist of his screaming at us that we're idiots if we don't believe him, and anyone who says otherwise lacks critical thinking skills, common sense, and probably a brain as well.

After I read this article, I began to wonder if there were other celebrities about whom this sort of thing has come up, so I did a Google search for, "celebrities identity change conspiracy."  And all I can say is, there are a lot of people out there who are in serious need of a hobby, if not immediate intervention by a mental health professional.

So down the rabbit hole I went.

First we have singer Avril Lavigne, who died in 2003 and was replaced by a body-double named "Melissa."  The evidence here seems to be mostly that around that time Lavigne/Melissa started to dress differently.  Because obviously the only way to make a wardrobe change is to die and have your replacement start wearing different clothes.

Then there's Eminem, who died in 2007 either in a car crash or of a drug overdose, depending on which version you go for, possibly caused by the fact that Eminem turned down an opportunity to join the Illuminati, and then was replaced either by a clone or an android.  Apparently Eminem version 2.0 looks younger than the first one did, and he even slipped up and gave away the game in an interview in 2008 wherein he said, "Right now I'm kinda just concentrating on my own stuff, for right now and just banging out tracks and producing a lot of stuff.  You know, the more I keep producing the better it seems like I get, 'cause I just start knowing stuff."

Get it?  "Knowing stuff?"  What stuff do you know, Slim Shady?  *suspicious eyebrow raise*

And how about J. K. Rowling?  Here the conspiracy theorists went a step further than killing her, as they did with Lavigne and Eminem; they claim she never existed in the first place.  The entire Harry Potter series was written by a team of marketing professionals, because it's not possible for anyone to write that much that quickly.  As far as Rowling herself, she's an actress hired to "give a human face" to what is essentially a multimedia scam.

Well, as far as no one being able to put out books that fast, I'm calling bullshit on that one, because I got my first publishing contract in 2015 and as of right now, I have thirteen books in print.  I'm not sure my word count is up to Rowling's standard -- most of my books are a bit shorter than the Harry Potter novels, especially the last three -- but I know that kind of output is possible because I did it, while (for the record) holding down a full-time job.

Or maybe I'm just a robot myself.  I dunno.

The last one I'll mention -- but far from the last one out there -- is that singer Katy Perry is a grown-up JonBenét Ramsey.  Ramsey, you probably know, was the tragic six-year-old beauty pageant contestant who was murdered in her own home in 1996, a crime that has never been solved.  The conspiracy theory is that Ramsey's parents staged her death for some unspecified reason, and kept her in hiding until she was eighteen, at which point she re-emerged as Katy Perry with her infamous song "I Kissed a Girl (and I Liked It)."

As far as evidence -- again using the word in its loosest sense -- other than a passing resemblance between the two, there's the line from Perry's memoir (she has a memoir?  who knew?), "Not that I was one of those stage kids.  There was no JonBenét Ramsey inside of me waiting to burst out."  It is kind of an odd thing to say -- there are a lot of child actors she could have mentioned who would have much more positive associations -- but the conspiracy theorists say this was Perry's way of owning up to who she actually was.  "The Illuminati always leave clues in plain sight," said one proponent of this claim.

Righty-o.  Because the Illuminati are just that wily.  "If we put clues out there that your average YouTube geek can find, it'll make everyone believe we can't possibly be this stupid."

At this point, the waters got a little deep, and I kind of gave up.  It illustrates to me a rather unfortunate thing, though -- that no matter how intrinsically ridiculous an idea is, you can get people to believe it if you just append to it the words "conspiracy coverup Illuminati."  But I need to wrap this up, because I need to finish my next novel by my deadline 4.8 seconds from now.  Should be a piece of cake, as I only have seventeen chapters left to write.

**********************************
