Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
"If humans came from monkeys, why are there still monkeys?"
If there is one phrase that makes me want to throw a chair across the room, it's that one. (Oh, that and, "The Big Bang means that nothing exploded and became everything.") Despite the fact that a quick read of any of a number of reputable sites about evolution would make it clear that the question is ridiculous, I still see it asked in such a way that the person evidently thinks they've scored some serious points in the debate. My usual response is, "My ancestors came from France. Why are there still French people?" But the equivalence of the two seems to go so far over their heads that it doesn't even ruffle their hair.
Of course, not all the blame lies with the creationists and their ilk. How many times have you seen, in otherwise accurate sources, human evolution depicted with an illustration like this?
It sure as hell looks like each successive form completely replaced the one before it, so laypeople are perhaps to be excused for coming away with the impression that this is always the way evolution works. In fact, cladogenesis (branching evolution) is far and away the more common pattern, where species split over and over again, with different branches evolving at different rates or in different directions, and some of them becoming extinct.
If you're curious, this is the current best model we have for the evolution of hominins:
The cladogenesis of the hominin lineage; the vertical axis is time in millions of years before present [Image licensed under the Creative Commons Dbachmann, Hominini lineage, CC BY-SA 4.0]
The problem also lies with the word species, which has far and away the mushiest definition in all of biological science. As my evolutionary biology professor put it, "The only reason we came up with the idea of species as being these little impermeable containers is that we have no near relatives." In fact, we now know that many morphologically distinct populations, such as the Neanderthals and Denisovans, freely interbred with "modern" Homo sapiens. Most people of European descent have Neanderthal markers in their DNA; when I had my DNA sequenced a few years ago, I was pleased to find out I was above average in that regard, which is undoubtedly why I like my steaks medium-rare and generally run around half-naked when the weather is warm. Likewise, many people of East Asian, Indigenous Australian, Native American, and Polynesian descent carry Denisovan DNA as well, evidence that those hard-and-fast "containers" aren't so water-tight after all.
The reason all this comes up is a new study of the "Petralona Skull," a hominin skull found covered in dripstone (calcium carbonate) in a cave near Thessaloniki, Greece. The skull has been dated to somewhere between 277,000 and 539,000 years ago -- the wide range comes from uncertainty in the estimated rate at which the calcite layers formed.
The Petralona Skull [Image licensed under the Creative Commons Nadina / CC BY-SA 3.0]
Even with the uncertainty, this range puts it outside of the realm of possibility that it's a modern human skull. Morphologically, it seems considerably more primitive than typical Neanderthal skulls, too. So it appears that there was a distinct population of hominins living in southern Europe and coexisting with early Neanderthals -- one about which paleontologists know next to nothing.
Petralona Cave, where the skull was discovered [Image licensed under the Creative Commons Carlstaffanholmer / CC BY-SA 3.0]
So our family tree turns out to be even more complicated than we'd realized -- and there might well be an additional branch, not in Africa (where most of the diversification in hominins occurred) but in Europe.
You have to wonder what life was like back then. This would have been during the Hoxnian (Mindel-Riss) Interglacial, a period of warm, wet conditions, when much of Europe was covered with dense forests. Fauna would have included at least five species of mammoths and other elephant relatives, the woolly rhinoceros, the cave lion, cave lynx, cave bear, "Irish elk" (which, as the quip goes, was neither), and the "hypercarnivorous" giant dog Xenocyon.
Among many others.
So as usual, the mischaracterization of science by anti-science types misses the reality by a mile, and worse, misses how incredibly cool that reality is. The more we find out about our own species's past, the richer it becomes.
I guess if someone wants to dismiss it all with a sneering "why are there still monkeys?", that's up to them. But me, I'd rather keep learning. And for that, I'm listening to what the scientists themselves have to say.
Urban legends often have nebulous origins. As author Jan Harold Brunvand describes in his wonderful book The Choking Doberman and Other Urban Legends, "Urban legends are kissing cousins of myths, fairy tales and rumors. Legends differ from rumors because the legends are stories, with a plot. And unlike myths and fairy tales, they are supposed to be current and true, events rooted in everyday reality that at least could happen... Urban legends reflect modern-day societal concerns, hopes and fears... They are weird whoppers we tell one another, believing them to be factual. They maintain a persistent hold on the imagination because they have an element of suspense or humor, they are plausible, and they have a moral."
It's not that there's anything wrong with urban legends per se. A lot of the time, we're well aware that they're just "campfire stories" that are meant to scare, amuse, or otherwise entertain, and (absent any further evidence) are just as likely to be false as true. After all, humans have been storytellers for a very long time, and -- as a fiction writer -- I'd be out of a job if we didn't have an appetite for tall tales.
It becomes problematic when someone has a financial interest in getting folks to believe that some odd claim or another is true. Then you have unethical people making money off others' credulity -- and often, along the way, obscuring or outright covering up any evidence to the contrary. And it's worse still when the guilty party is part of the news media.
Back in 1985 the British tabloid newspaper The Sun reported that a firefighter in Essex had more than once found undamaged copies of a painting of a crying child in houses that had otherwise been reduced to rubble by fires. Upon investigation, they said, they found that the painting was by Italian painter Giovanni Bragolin.
If that wasn't weird enough, The Sun claimed they'd found out that Bragolin was an assumed name, and that the painter was a mysterious recluse named Franchot Seville. Seville, they said, had found the little boy -- whose name was Don Bonillo -- after an unexplained fire had killed both of his parents. The boy was adopted by a priest, but fires seemed to follow in his wake wherever he went, to the extent that he was nicknamed "El Diablo." In 1970, the engine of a car the boy was riding in exploded, killing him along with the painter and the priest.
But, The Sun asked, did the curse follow even the paintings of the boy's tragic, weeping face?
It's not a headline, but we can still invoke Betteridge's Law, which holds that anything phrased as a question like that can be answered "No." Further inquiries by less biased investigators found that the story had enough holes to put a Swiss cheese to shame. There was no Don Bonillo; the model for the little boy was just some random kid. Yes, Bragolin went by the pseudonym Franchot Seville, but Bragolin was itself an assumed name; the painter's real name was Bruno Amadio, and he was still alive and well and painting children with big sad eyes until his death from natural causes in 1981 at age seventy.
As for the survival of the paintings, that turned out not to be much of a mystery, either. Bragolin/Seville/Amadio cranked out at least sixty different crying-child paintings, from which literally tens of thousands of prints were made and then shipped out to department stores all across southern England. They sold like hotcakes for some reason. (I can't imagine why anyone would want a painting of a weepy toddler on their wall, but hey, you do you.) The prints were made on heavy compressed cardboard, and then coated with fire-retardant varnish. Investigators Steven Punt and Martin Shipp actually purchased one of the prints and deliberately tried to set it alight, but the thing wouldn't burn. The surmise was that when the rest of the house went up in flames, the string holding the frame to the wall burned through and the print fell face-down on the floor, protecting it from damage.
Of course, a prosaic explanation like that was not in the interest of The Sun, which survives by keeping sensationalized stories alive for as long as possible. So no mention was made of Punt and Shipp and the probable explanation for the paintings' survival. Instead, they repeated the claims of a "curse," and told readers that if they owned a copy of The Crying Boy and wanted to get rid of it, The Sun would organize a public bonfire to destroy the prints forever.
How they were going to accomplish this, given that the whole shtick had to do with the fact that the painting couldn't be burned, I have no idea. But this evidently didn't occur to the readers, because within weeks The Sun had received hundreds of copies. A fire was held along the banks of the Thames in which the mailed-in prints were supposedly destroyed, an event about which a firefighter who had supervised the burning said, "I think there will be many people who can breathe a little easier now."
This in spite of the fact that the whole thing had been manufactured by The Sun. There would have been no widespread fear, no need for people to "breathe uneasily," if The Sun hadn't hyped the claim to begin with -- and, more importantly, ignored completely the entirely rational explanation for the few cases where the painting had survived a house fire.
It's probably unnecessary for me to say that this kind of thing really pisses me off. Humans are credulous enough; natural conditions like confirmation bias, dart-thrower's bias, and the argument from ignorance already make it hard enough for us to sort fact from fiction. Okay, The Sun is a pretty unreliable source to start with, but the fact remains that thousands of people read it -- and, presumably, a decent fraction of those take its reporting seriously.
The fact that it would deliberately mislead is infuriating.
The result is that the legend still persists today. There are online sites for discussing curses, and The Crying Boy comes up all too frequently, often with comments like "I would never have that in my house!" (Well, to be fair, neither would I, but for entirely different reasons.) As Brunvand points out in The Choking Doberman, one characteristic of urban legends is that they take on a life of their own. Word of mouth is a potent force for spreading rumor, and once these sorts of tales get launched, they are as impossible to eradicate as crabgrass.
But what's certain is that we do not need irresponsible tabloids like The Sun making matters worse.
I've written here before about unusual paleontological discoveries -- illustrations of the fact that Darwin's lovely phrase "endless forms most beautiful and most wonderful" has applied throughout Earth's biological history.
We could also add the words "... and most weird." Some of the fossils paleontologists have uncovered look like something from a fever dream. A while back I wrote about the absolutely bizarre "Tully Monster" (Tullimonstrum spp.) that is so different from all other life forms studied that biologists can't even figure out whether it was a vertebrate or an invertebrate. But Tully is far from the only creature that has defied classification. Here are a few more examples of peculiar organisms whose placement on the Tree of Life is very much up for debate.
First, we have the strange Tribrachidium heraldicum, a creature whose relationship to every other species, then or since, is uncertain. It had threefold symmetry -- itself pretty odd -- and its species name heraldicum comes from its striking resemblance to the triskelion design on the coat of arms of the Isle of Man:
Tribrachidium fossil from near Arkhangelsk, Russia [Image licensed under the Creative Commons Aleksey Nagovitsyn (User:Alnagov), Tribrachidium, CC BY-SA 3.0]
Despite superficial similarities to modern cnidarians (such as jellyfish) or echinoderms (such as sea urchins and starfish), Tribrachidium seems to be neither. It -- along with a great many members of the Ediacaran assemblage, the organisms that dominated the seas during the late Precambrian, between 635 and 538 million years ago -- is a mystery.
The Ediacaran is hardly the only time we have strange and unclassifiable life forms. From much later, during the Carboniferous Period (on the order of three hundred million years ago), the Mazon Creek Formation in Illinois has brought to light some really peculiar fossils. One of the most baffling is Etacystis communis, nicknamed the "H-animal":
Reconstruction of Etacystis [Image is in the Public Domain]
It's an invertebrate, but otherwise we're still at the "but what the hell is it?" stage with this one. Best guess is it might be a distant relative of hemichordates ("acorn worms"), but that's speculative at best.
Next we have Nectocaris. The name means "swimming shrimp," but a shrimp it definitely was not. It was next thought to be some kind of primitive cephalopod, perhaps related to cuttlefish or squid, but that didn't hold water, either. It had a long fin down each side that it probably used for propulsion, and a funnel-shaped feeding tube (which you can see folded to the left in the photograph below):
Photograph of a Nectocaris fossil from the Burgess Shale Formation, British Columbia [Image is in the Public Domain]
All of the Nectocaris fossils known come from the early Cambrian. It's possible that they were a cousin of modern chaetognaths ("arrow worms"), but once again, no one is really sure.
Another Cambrian animal that has so far defied classification is Allonnia, which was initially thought to be related to modern sponges, but its microstructure is so different that it and its kin are now placed in their own family, Chancelloriidae. You can see why the paleontologists were fooled for a while:
Reconstruction of Allonnia from fossils recovered from the Chengjiang Formation, Yunnan Province, China [Image licensed under the Creative Commons, Yun et al. 2024 f05 (preprint), CC BY 4.0]
At the moment, Allonnia and the other chancelloriids are thought to represent an independent branch of Kingdom Animalia that went extinct in the mid-Cambrian and left no descendants -- or even near relatives.
Last, we have the bizarre Namacalathus hermanestes, which has been found in (very) late Precambrian shales in such widely-separated sites as Namibia, Canada, Paraguay, Oman, and Russia. Check out the reconstruction of this beast:
It's been tentatively connected to the lophophorates (which include the much more familiar brachiopods), but if so, it must be a distant relationship, because it looks a great deal more like something H. P. Lovecraft might have dreamed up:
The early Cambrian seas must have contained plenty of nightmare fuel.
And those are just five examples of organisms that would have certainly impelled Dr. McCoy to say, "It's life, Jim, but not as we know it." Given how infrequently organisms fossilize -- the vast majority die, decay away, and leave no traces, and the vagaries of geological upheaval often destroy the fossil-bearing strata that did form -- you have to wonder what we're missing. Chances are, for every one species we know about, there are hundreds more we don't.
What even more bizarre life forms might we see if we actually went back there into the far distant past?
I guess we'll have to wait until someone invents a time machine to find out.
The last time I got blindsided by a question I couldn't answer was a couple of days ago, while I was working in my office and our puppy, Jethro, was snoozing on the floor. Well, as sometimes happens to dogs, he started barking and twitching in his sleep, and followed it up with sinister-sounding growls -- all the more amusing because while awake, Jethro is about as threatening as your average plush toy.
So my thought, naturally, was to wonder what he was dreaming about. Which got me thinking about my own dreams, and recalling some recent ones. I remembered some images, but mostly what came to mind were narratives -- first I did this, then the slimy tentacled monster did that.
That's when the blindside happened. Because Jethro, clearly dreaming, was doing all that without language.
How would thinking occur without language? For almost all humans, our thought processes are intimately tied to words. In fact, the experience of having a thought that isn't describable using words is so unusual that we have a word for it -- ineffable.
Mostly, though, our lives are completely, um, effable. So much so that trying to imagine how a dog (or any other animal) experiences the world without language is, for me at least, nearly impossible.
What's interesting is how powerful this drive toward language is. There have been studies of pairs of "feral children" who grew up together but with virtually no interaction with adults, and in several cases those children invented spoken languages with which to communicate -- each complete with its own syntax, morphology, and phonetic structure.
A fascinating study that came out in the Proceedings of the National Academy of Sciences, detailing research by Manuel Bohn, Gregor Kachel, and Michael Tomasello of the Max Planck Institute for Evolutionary Anthropology, showed that you don't even need the extreme conditions of feral children to induce the invention of a new mode of symbolic communication. The researchers set up Skype conversations between monolingual English-speaking children in the United States and monolingual German-speaking children in Germany, but simulated a computer malfunction where the sound didn't work. They then instructed the children to communicate as best they could anyhow, and gave them some words/concepts to try to get across.
They started out with some easy ones. "Eating" resulted in the child miming eating from a plate, unsurprisingly. But they moved to harder ones -- like "white." How do you communicate the absence of color? One girl came up with an idea -- she was wearing a polka-dotted t-shirt, and pointed to a white dot, and got the idea across.
But here's the interesting part. When the other child later in the game had to get the concept of "white" across to his partner, he didn't have access to anything white to point to. He simply pointed to the same spot on his shirt that the girl had pointed to earlier -- and she got it immediately.
Language is defined as arbitrary symbolic communication. Arbitrary because with the exception of a few cases like onomatopoeic words (bang, pow, ping, etc.) there is no logical connection between the sound of a word and its referent. Well, here we have a beautiful case of the origin of an arbitrary symbol -- in this case, a gesture -- that gained meaning only because the recipient of the gesture understood the context.
I'd like to know if such a gesture-language could gain another characteristic of true language -- transmissibility. "It would be very interesting to see how the newly invented communication systems change over time, for example when they are passed on to new 'generations' of users," said study lead author Manuel Bohn, in an interview with Science Daily. "There is evidence that language becomes more systematic when passed on."
In time, might you end up with a language that was so heavily symbolic and culturally dependent that understanding it would be impossible for someone who didn't know the cultural context -- like the Tamarians' language in the brilliant, poignant, and justifiably famous Star Trek: The Next Generation episode "Darmok"?
"Sokath, his eyes uncovered!"
It's through cultural context, after all, that languages start developing some of the peculiarities (also seemingly arbitrary) that led Edward Sapir and Benjamin Whorf to develop the hypothesis that now bears their names -- that the language we speak alters our brains and changes how we understand abstract concepts. In K. David Harrison's brilliant book The Last Speakers, he tells us about a conversation with some members of a nomadic tribe in Siberia who always described positions of objects relative to the four cardinal directions -- so at the moment my coffee cup wouldn't be on my right, it would be south of me. When Harrison tried to explain to his Siberian friends how we describe positions, at first he was greeted with outright bafflement.
Then, they all erupted in laughter. How arrogant, they told him, that you see everything as relative to your own body position -- as if when you turn around, suddenly the entire universe changes shape to compensate for your movement!
Another interesting example comes from a 2017 study by linguists Emanuel Bylund and Panos Athanasopoulos, which focused not on our experience of space but of time. And they found something downright fascinating. Some languages (like English) are "future-in-front," meaning we think of the future as lying ahead of us and the past behind us, turning time into something very much like a spatial dimension. Other languages retain the spatial aspect, but reverse the direction -- such as Aymara, spoken in the Andes of Peru and Bolivia. For Aymara speakers, the past is in front, because you can remember it, just as you can see what's in front of you. The future is behind you -- therefore invisible.
Mandarin takes the spatial axis and turns it on its head -- the future is down, the past is up (so the literal translation of the Mandarin expression for "next week" is "down week"). Asked to order photographs of someone in childhood, adolescence, adulthood, and old age, Mandarin speakers tend to place them vertically, with the youngest on top. English and Swedish speakers tend to think of time as a line running from left (past) to right (future); Spanish and Greek speakers tend to picture time as a spatial volume, as if it were something filling a container (so emptier = past, fuller = future).
All of which underlines how fundamental to our thinking language is. And further baffles me when I try to imagine how other animals think. Because whatever Jethro was imagining in his dream, he was clearly understanding and interacting with it -- even if he didn't know to attach the word "squirrel" to the concept.
Next to the purely religious arguments -- those that boil down to "it's in the Bible, so I believe it" -- the most common objection I hear to the evolutionary model is that "you can't get order out of chaos."
Or -- which amounts to the same thing -- "you can't get complexity from simplicity." Usually followed up by the Intelligent Design argument that if you saw the parts from which an airplane is built, and then saw an intact airplane, you would know there had to be a builder who put the parts together. This is unfortunately often coupled with some argument about how the Second Law of Thermodynamics (one formulation of which is, "in a closed system, the total entropy always increases") prohibits biological evolution, which shows a lack of understanding of both evolution and thermodynamics. For one thing, the biosphere is very much not a closed system; it has a constant flow of energy through it (mostly from the Sun). Turn that energy source off, and our entropy would increase post-haste. For another, a local decrease in entropy within the system, such as the development of an organism from a single fertilized egg cell, is always accompanied by a larger increase in entropy elsewhere, so the total entropy still goes up. In fact, the entropy increase from the breakdown of the food molecules required for an organism to grow is greater than the entropy decrease within the developing organism itself.
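To put the bookkeeping in one line (my notation, simply restating the argument above):

\[
\Delta S_{\text{total}} \;=\; \Delta S_{\text{organism}} + \Delta S_{\text{surroundings}} \;>\; 0,
\qquad \text{even though } \Delta S_{\text{organism}} < 0 .
\]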
Just as the Second Law predicts.
So the thermodynamic argument doesn't work. But the whole question of how you get complexity in the first place is not so easily answered. On its surface, it seems like a valid objection. How could we start out with a broth of raw materials -- the "primordial soup" -- and even with a suitable energy source, have them self-organize into complex living cells?
Well, it turns out it's possible. All it takes -- on the molecular, cellular, or organismal level -- is (1) a rule for replication, and (2) a rule for selection. Take DNA: it can replicate itself, and the replication process is accurate but not flawless; selection comes in because some of the resulting DNA configurations are better than others at copying themselves, so those survive and the less successful ones don't. From those two simple rules, things can get complex fast.
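If you want to watch those two rules in action, here's a tiny toy simulation (entirely my own illustration; the bit-string "genomes" and the count-the-ones scoring are arbitrary stand-ins, not a model of real DNA). Strings copy themselves with occasional errors, higher-scoring strings get copied more often, and nobody steers:

```python
import random

# Rule 1: replication, with occasional copying errors.
# Rule 2: selection -- some variants get copied more often than others.
# The "fitness" here (counting 1s) is an arbitrary stand-in for
# "better at getting itself copied."

GENOME_LENGTH = 20
POPULATION = 50
MUTATION_RATE = 0.01   # chance that any single bit is copied wrong
GENERATIONS = 40

def replicate(genome):
    """Copy a genome, flipping each bit with a small probability (rule 1)."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def fitness(genome):
    """Stand-in selection criterion: more 1s = more likely to be copied (rule 2)."""
    return sum(genome)

# Start with a "primordial soup" of random genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]

for gen in range(GENERATIONS):
    # Selection: fitter genomes are more likely to be chosen as parents.
    parents = random.choices(population,
                             weights=[fitness(g) + 1 for g in population],
                             k=POPULATION)
    # Replication: each parent produces one imperfect copy.
    population = [replicate(p) for p in parents]
    if gen % 10 == 0:
        avg = sum(fitness(g) for g in population) / POPULATION
        print(f"generation {gen:3d}: average fitness {avg:.1f}")
```

Run it and the average fitness creeps upward, generation after generation, with no designer anywhere in the loop.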
But to take a non-biological example that is also kind of mindblowing, have you heard of British mathematician John Horton Conway's "Game of Life?"
In the 1960s Conway became interested in a mathematical concept called a cellular automaton. The gist, first proposed by Hungarian mathematician John von Neumann, is to look at arrays of "cells" that can then interact with each other by a discrete set of rules, and see how their behavior evolves. The set-up can get as fancy as you like, but Conway decided to keep it really simple, and came up with the ground rules for what is now called his "Game of Life." You start out with a grid of squares, where each square touches (either on a side or a corner) eight neighboring cells. Each square can be filled ("alive") or empty ("dead"). You then input a starting pattern -- analogous to the raw materials in the primordial soup -- and turn it loose. After that, four rules (there's a short code sketch after the list, if you want to see how little it takes to implement them) determine how the pattern evolves:
Any live cell that has fewer than two live neighbors dies.
Any live cell that has two or three live neighbors lives to the next round.
Any live cell that has four or more live neighbors dies.
Any dead cell that has exactly three live neighbors becomes a live cell.
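Here's a minimal sketch of those four rules (my own quick illustration, nothing official), which stores the live cells as a set of coordinates and applies the rules one generation at a time:

```python
from collections import Counter

def step(live_cells):
    """Apply Conway's four rules to a set of (x, y) live cells for one generation."""
    # Count how many live neighbors every cell (live or dead) has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbor_counts.items()
        # Live cells with 2 or 3 neighbors survive; dead cells with exactly 3 are born;
        # everything else (fewer than 2 neighbors, or 4 and up) dies or stays dead.
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider" -- a five-cell pattern that crawls diagonally across the grid forever.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(sorted(pattern))
    pattern = step(pattern)
```

That's the entire engine; glider guns, puffer breeders, and all the rest are just different starting patterns fed into the same four rules.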
Seems pretty simple, doesn't it? It turns out that the behavior of patterns in the Game of Life is so wildly complex that it's kept mathematicians busy for decades. Here's one example, called "Gosper's Glider Gun":
Some start with as few as five live cells, and give rise to amazingly complicated results. Others have been found that do some awfully strange stuff, like this one, called the "Puffer Breeder":
What's astonishing is not only how complex this gets, but how unpredictable it is. One of the most curious results that has come from studying the Game of Life is that some starting conditions lead to what appears to be chaos; in other cases, the chaos settles down after hundreds, or thousands, of rounds, eventually falling into a stable pattern (either one that oscillates between two or three states, or produces something regular like the Glider Gun). Sometimes, however, the chaos seems to be permanent -- although because there's no way to carry the process to infinity, you can't really be certain. There also appears to be no way to predict from the initial state where it will end up ahead of time; no algorithm exists to take the input and determine what the eventual output will be. You just have to run the program and see what happens.
In fact, the Game of Life is often used as an illustration of Turing's Halting Problem -- the proof that there is no general way to determine, ahead of time, whether a given algorithm will ever finish running. It's closely related to such mind-bending weirdness as Gödel's Incompleteness Theorem, which proved rigorously that in any consistent mathematical system rich enough to do arithmetic, there are true statements that cannot be proven and false statements that cannot be disproven. (Yes -- it's a proof of unprovability.)
All of this, from a two-dimensional grid of squares and four rules so simple a fourth-grader could understand them.
Now, this is not meant to imply that biological systems work the same way as an algorithmic mathematical system; just a couple of weeks ago, I did an entire post about the dangers of treating an analogy as reality. My point here is that there is no truth to the claim that complexity can't arise spontaneously from simplicity. Given a source of energy, and some rules to govern how the system can evolve, you can end up with astonishing complexity in a relatively short amount of time.
People studying the Game of Life have come up with twists on it to make it even more complicated, because why stick with two dimensions and squares? There are versions with hexagonal grids (which require a slightly different set of rules), versions on spheres, and this lovely example of a pattern evolving on a toroidal trefoil knot:
Kind of mesmerizing, isn't it?
The universe is a strange and complex place, and we need to be careful before we make pronouncements like "That couldn't happen." Often these are just subtle reconfigurations of the Argument from Ignorance -- "I don't understand how that could happen, therefore it must be impossible." The natural world has a way of taking our understanding and turning it on its head, which is why science will never end. As astrophysicist Neil deGrasse Tyson explained, "Surrounding the sea of our knowledge is a boundary that I call the Perimeter of Ignorance. As we push outward, and explain more and more, it doesn't erase the Perimeter of Ignorance; all it does is make it bigger. In science, every question we answer raises more questions. As a scientist, you have to become comfortable with not knowing. We're always 'back at the drawing board.' If you're not, you're not doing science."
One thing that really torques me is when people say "I did my research," when in fact what they did was a five-minute Google search until they found a couple of websites that agreed with what they already believed.
This is all too easy to do these days, now that any loudmouth with a computer can create a website, irrespective of whether what they have to say is well-thought-out, logical, or even true. (And I say that with full awareness that I myself am a loudmouth with a computer who created a website. To be fair, I've always been up front about the fact that I'm as fallible as the next guy and you shouldn't believe me out of hand any more than you do anyone else. I maintain that the best principle to rely on comes from Christopher Hitchens: "What can be asserted without evidence can be dismissed without evidence." This applies to me as well, and I do try my best not to break that rule.)
The problem is, it leaves us laypeople at sea when it comes to figuring out what (and whom) to believe. The solution -- or at least a partial one -- is to always cross-check your sources. Find out where a claim came from originally -- there are all too many examples of crazy ideas working their way up the ladder of credibility, starting out in some goofy publication like The Weekly World News, but being handed off like the baton in some lunatic relay race until they end up in places like Pravda, The Korea Times, and Xinhua. (Yes, this has actually happened.)
The water gets considerably muddier when you throw Wikipedia into the mix. Wikipedia is a great example of the general rule of thumb that a source is only as accurate as the least accurate person who contributed to it. Despite that, I think it's a good resource for quick lookups, and use it myself for that sort of thing all the time. A study by Thomas Chesney found that experts generally consider Wikipedia to be pretty accurate, although the same study admits that others have concluded that thirteen percent of Wikipedia entries have errors (how serious those errors are is unclear; an error in a single date is certainly more forgivable than one that gives erroneous information about a major world event). Another study concluded that between one-half and one-third of deliberately inserted errors are corrected within forty-eight hours.
But still. That means that between one-half and two-thirds of deliberately inserted errors weren't corrected within forty-eight hours, which is troubling. Given the ongoing screeching about what is and is not "fake news," having a source that could get contaminated by bias or outright falsehood, and remain uncorrected, is a serious issue.
Plus, there's the problem with error sneaking in, as it were, through the back door. There have been claims that began as hoaxes, but then were posted on Wikipedia (and elsewhere) by people who honestly thought what they were stating was correct. Once this happens, there tends to be a snake-swallowing-its-own-tail pattern of circular citations, and before you know it, what was a false claim suddenly becomes enshrined as "fact."
Case in point: for years, Wikipedia told the world that the electric toaster was invented by a Scotsman named Alan MacMasters. It was such a popular innovation that his name became a household word, especially in his native land. More than a dozen books (in various languages) list him as the popular kitchen appliance's inventor. The Scottish government's Brand Scotland website lauded MacMasters as an example of the nation's "innovative and inventive spirit." The BBC cooking show The Great British Menu featured an Edinburgh-based chef creating an elaborate dessert in MacMasters's honor. In 2018, the Bank of England polled the British public about who should appear on the newly-redesigned £50 note, and MacMasters was nominated -- and received a lot of votes. A Scottish primary school even had an "Alan MacMasters Day," on which the students participated in such activities as painting slices of toast and building pretend toasters out of blocks.
But before you proud Scots start raising your fists in the air and chanting "Scotland!", let's do this another way, shall we?
Back in 2012, a Scottish engineering student named -- you guessed it -- Alan MacMasters was in a class wherein the professor cautioned students against using Wikipedia as a source. The professor said that a friend of his named Maddy Kennedy had "even edited the Wikipedia entry on toasters to say that she had invented them." Well, the real MacMasters and a friend of his named Alex (last name redacted, for reasons you'll see momentarily) talked after class about whether it was really that easy. Turns out it was. So Alex decided to edit the page on toasters, took out Maddy Kennedy's name, and credited their invention to...
... his pal Alan MacMasters.
Alex got pretty elaborate. He uploaded a photograph supposedly of MacMasters (it's actually a rather clumsy digitally-modified photograph of Alex himself), provided biographical details, and generally tidied up the page to make it look convincing.
When Alex told MacMasters what he'd done, MacMasters laughed it off. "Alex is a bit of a joker, it's part of why we love him," he said. "The article had already been vandalized anyway, it was just changing the nature of the incorrect information. I thought it was funny, I never expected it to last."
Remember those deliberately inserted errors that didn't get corrected within forty-eight hours?
This was one of them.
The problem was suddenly amplified when The Mirror found the entry not long after it was posted, and listed it as a "life-changing everyday invention that put British genius on the map." By this time, both Alex and MacMasters had completely forgotten about what they'd done, and were entirely unaware of the juggernaut they'd launched. Over the following decade, the story was repeated over and over -- including by major news outlets -- and even ended up in one museum.
It wasn't until July 2022 that an alert fifteen-year-old happened on the Wikipedia article, and notified the editors that the photograph of MacMasters "looked faked." To their credit, they quickly recognized that the entire thing was fake, deleted the article, and banned Alex from editing Wikipedia for life. But by that time the hoax page had been up -- and used as a source -- for ten years.
(If you're curious, the actual credit for the invention of the electric toaster goes to Frank Shailor, who worked for General Electric, and submitted a patent for it in 1909.)
The problem, of course, is that if most of us -- myself included -- were curious about who invented the electric toaster, we'd do a fairly shallow search online, maybe one or two sources deep. If I then found that Brand Scotland, various news outlets, and a museum all agreed that it was invented by a Scottish guy named Alan MacMasters, I'm quite certain I'd believe it. Even if several of those sources led back to Wikipedia, so what?
Surely all of them couldn't be wrong, right? Besides, it's such a low-emotional-impact piece of information, who in their right mind would be motivated to falsify it?
So what reason would there be for me to question it?
Now, I'm aware that this is a pretty unusual case, and I'm not trying to make you disbelieve everything you read online. As I've pointed out before, cynicism is just as lazy as gullibility. And I'm still of the opinion that Wikipedia is a pretty good source, especially for purely factual information. But it is absolutely critical that we don't treat any source as infallible -- especially not those (1) for which we lack the expertise to evaluate, or (2) which contain bias-prone information that agrees with what we are already inclined to accept uncritically.
Confirmation bias is a bitch.
So the take-home lesson here is "be careful, and don't turn off your brain." It's not really, as some have claimed, that bullshit is more common now; take a look at any newspaper from the 1800s and you'll disabuse yourself of that notion mighty fast. It's just that the internet has provided an amazingly quick and efficient conduit for bullshit, so it spreads a great deal more rapidly.
It all goes back to the quote -- of uncertain provenance, but accurate whoever first said it -- that "a lie can travel all the way around the world while the truth is still lacing up its boots."
I recently received an email that read, in part:

With the Hubble Space Telescope and the James Webb Telescope discovering new stars and planets and galaxies every single day, the astronomers can't keep up with naming them. So many of them end up with strings of numbers and letters that no one can ever remember. How much better would it be to have a heavenly body named after YOU? Or your loved one? Or your favorite pet?
We are the STAR REGISTERY [sic] SERVICE, where you can choose from thousands of unnamed stars, and give it whatever name you choose. You will receive a beautiful framable certificate of ownership with the location (what the astronomers call the Declination and Right Ascension) of your OWN PERSONAL STAR so you can go outside any clear night and find it.
A lovely idea for a gift -- or a gift to yourself!
We're then told that the fee for this service is a paltry $40 U.S., and that they accept PayPal, Venmo, and major credit cards. And it's accompanied by this enticing and irresistible photo:
Okay, there are a few problems with this.
First, I can think of a great many better uses for forty bucks, and that includes using it to start a campfire. Part of this is that I'm a major skinflint, but still.
Second, the vast majority of the "new stars and planets and galaxies" catalogued by the Hubble and JWST are far too faint to see with the naked eye, so I wouldn't be able to go outside on a clear night and see "my own personal star" unless I happened to bring along the Palomar Telescope. So the most I could do is to find the approximate location, and try to gain some sense of ownership by staring up into the dark.
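To be fair, turning a Right Ascension and Declination into a spot in your local sky is the easy part. Here's a rough sketch using the astropy library (the star's coordinates, the observing site, and the date below are all invented for illustration):

```python
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
from astropy.time import Time
import astropy.units as u

# Hypothetical coordinates copied off the "certificate of ownership."
star = SkyCoord(ra=18.6 * u.hourangle, dec=38.8 * u.deg)

# A hypothetical backyard in upstate New York, on some clear night (time in UTC).
backyard = EarthLocation(lat=42.9 * u.deg, lon=-76.5 * u.deg, height=150 * u.m)
tonight = Time("2025-06-01 03:00:00")

# Convert to altitude/azimuth: where in the local sky to stare hopefully.
altaz = star.transform_to(AltAz(obstime=tonight, location=backyard))
print(f"Altitude: {altaz.alt.deg:.1f} deg, Azimuth: {altaz.az.deg:.1f} deg")
# A negative altitude means the star is below the horizon -- and either way,
# odds are it's far too faint to see without a very large telescope.
```

Knowing where to stare, of course, gets you no closer to actually seeing the thing, let alone owning it.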
Third, what on earth does it mean to claim that I "own a star?" The nearest star (which, so far as I know, is not for sale) is about forty trillion kilometers away, so unless warp drive is invented soon (not looking likely), I'll never go to visit my star. And doesn't selling something imply that the seller owned it to start with? I doubt seriously whether the "Star Registery Service" could demonstrate legal ownership of any of the things out there in space that they're trying to sell.
So needless to say, I'm not going to pay forty dollars for a piece of paper, however "beautiful" and "framable" it is. If I gave it as a present to my wife, she would roll her eyes enough to see the back of her own skull. And I'm not naming a star after my puppy. Jethro is a lovely little dog, but smart, he isn't. He seems to spend his entire existence in a state of mild puzzlement. Anything new is met with an expression that can be summed up as, "... wait, what?" So appreciating the wonders of astrophysics is kind of outside his wheelhouse. (Pretty much everything is outside his wheelhouse other than playing, snuggling, sleeping, and eating dinner.)
But I digress.
So anyway, I didn't respond to the email. But because I live for investigating the weird corners of human behavior -- and also because I never met a rabbit-hole I didn't like -- I started poking around into other examples of people claiming to own astronomical objects. And this, it turns out, has a long and storied history. Here are just a few examples I found out about:
In 1996, a German guy named Martin Juergens claimed that he owned the Moon. On 15 July 1756, Juergens said, Prussian king Frederick the Great deeded the Moon to his ancestor Aul Juergens, and it passed down through the family, always being inherited by the youngest son. Needless to say, pretty much no one took him seriously, although apparently he believes it himself.
Back in 1936 a Pittsburgh notary public received a banker's check and a deed for establishment of property filed by one A. Dean Lindsay, wherein he claimed ownership of all extraterrestrial objects in the Solar System. Lindsay had earlier submitted claims of ownership of the Atlantic and Pacific Oceans, but these were both denied. The extraterrestrial objects one, though, was apparently notarized and filed, with the notary taking the attitude that if the dude wanted to spend money on something he couldn't ever get to, that was on him. Lindsay got the last laugh, however, when he was approached multiple times by other, even loonier people who wanted to buy specific extraterrestrial objects from him. Lindsay was happy to sell. At a profit, of course.
When NASA landed the NEAR Shoemaker probe on the asteroid 433 Eros in 2001, they were promptly served with a bill for twenty dollars from Gregory Nemitz, who claimed he owned it and they owed him for parking. NASA unsurprisingly refused to pay.
Nemitz wasn't the only one to trouble NASA with claims of ownership. In 1996 three Yemeni men, Adam Ismail, Mustafa Khalil, and Abdullah al-Umari, sued NASA for "invading Mars." They said they had inherited the planet from their ancestors three thousand years ago. Once again, NASA declined to make reparations.
In 1980, an entrepreneur named Dennis Hope started a company called the Lunar Embassy Commission, which sells one-acre plots on the Moon for twenty dollars each. (It'd be fun to put him and Martin Juergens in a locked room and let them duke it out over whose property the Moon actually is.) Once he gets your money, he chooses your plot by randomly pointing to a lunar map with a stick, which seems kind of arbitrary; at least the "Star Registery Service" was gonna let me pick my own star. Despite this, he claims that former presidents Jimmy Carter and Ronald Reagan were both customers.
Lastly, in the Go Big Or Go Home department, we have noted eccentric James T. Mangan (1896–1970), who publicly claimed ownership of all of outer space in 1948. He founded what he called the Nation of Celestial Space (also known as "Celestia") and registered it with the Cook County, Illinois, Recorder of Deeds and Titles on 1 January 1949. At its height in 1960 the Nation of Celestial Space had almost twenty thousand, um, "residents," but since Mangan's death in 1970 it has more or less ceased to exist as an official entity. Space itself, of course, is still out there, and seems unaffected by the whole affair.
Anyhow, I think I'll pass on star ownership. (Or Moon, or Mars, or outer space, or whatnot.) The whole thing strikes me as a little ridiculous. Of course, if I think about it too hard, even our concept of owning land down here on Earth is pretty goofy; what does it mean to say I own this parcel of property, when it was here before I was born and will still be here long after I'm gone? Okay, I can use it to live on; ownership gives me certain rights according to the laws of New York State. I get that. But honestly, even the concept of dividing up the Earth using (mostly) arbitrary and invisible lines, and saying stuff is legal on one side of the line and illegal on the other side, is weird, too. (And don't even get me started about how, to cross certain invisible lines, you need a special piece of paper, and if you don't have it and try to cross anyhow, mean people get to shoot you.)
You have to wonder what would happen if the intelligent creatures out there who come from those far distant star systems traveled here, and I tried to tell them, "See, your star, I bought that for forty dollars from some guy on the internet." My guess is they'd vaporize me with their laser pistol and head back out into space after stamping their map of the Solar System with the words "No Intelligent Life Present."