Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, May 17, 2025

The appearance of creativity

The word creativity is strangely hard to define.

What makes a work "creative?"  The Stanford Encyclopedia of Philosophy states that to be creative, the created item must be both new and valuable.  The "valuable" part already skates out over thin ice, because it immediately raises the question of "valuable to whom?"  I've seen works of art -- out of respect to the artists, and so as not to get Art Snobbery Bombs lobbed in my general direction, I won't provide specific examples -- that looked to me like the product of finger paints in the hands of a below-average second-grader, and yet which made it into prominent museums (and were valued in the hundreds of thousands of dollars).

The article itself touches on this problem, with a quote from philosopher Dustin Stokes:

Knowing that something is valuable or to be valued does not by itself reveal why or how that thing is.  By analogy, being told that a carburetor is useful provides no explanatory insight into the nature of a carburetor: how it works and what it does.

This is a little disingenuous, though.  The difference is that any sufficiently motivated person could learn the science of how an engine works and find out for themselves why a carburetor is necessary, and afterward, we'd all agree on the explanation -- while I doubt any amount of analysis would be sufficient to get me to appreciate a piece of art that I simply don't think is very good, or (worse) to get a dozen randomly-chosen people to agree on how good it is.

Margaret Boden has an additional insight into creativity; in her opinion, truly creative works are also surprising.  The Stanford article has this to say about Boden's claim:

In this kind of case, the creative result is so surprising that it prompts observers to marvel, “But how could that possibly happen?”  Boden calls this transformational creativity because it cannot happen within a pre-existing conceptual space; the creator has to transform the conceptual space itself, by altering its constitutive rules or constraints.  Schoenberg crafted atonal music, Boden says, “by dropping the home-key constraint”, the rule that a piece of music must begin and end in the same key.  Lobachevsky and other mathematicians developed non-Euclidean geometry by dropping Euclid’s fifth axiom.  Kekulé discovered the ring-structure of the benzene molecule by negating the constraint that a molecule must follow an open curve.  In such cases, Boden is fond of saying that the result was “downright impossible” within the previous conceptual space.

This has an immediate resonance for me, because I've had the experience as a writer of feeling like a story or character was transformed almost without any conscious volition on my part; in Boden's terms, something happened that was outside the conceptual space of the original story.  The most striking example is the character of Marig Kastella from The Chains of Orion (the third book of the Arc of the Oracles trilogy).  Initially, he was simply the main character's boyfriend, and there mostly to be a hesitant, insecure, questioning foil to astronaut Kallman Dorn's brash and adventurous personality.  But Marig took off in an entirely different direction, and in the last third of the book kind of took over the story.  As a result his character arc diverged wildly from what I had envisioned, and he remains to this day one of my very favorite characters I've written. 

If I actually did write him, you know?  Because it feels like Marig was already out there somewhere, and I didn't create him, I got to know him -- and in the process he revealed himself to be a far deeper, richer, and more powerful person than I'd thought at first.

[Image licensed under the Creative Commons ShareAlike 1.0, Graffiti and Mural in the Linienstreet Berlin-Mitte, photographer Jorge Correo, 2014]

The reason this topic comes up is some research out of Aalto University in Finland that appeared this week in the journal ACM Transactions on Human-Robot Interaction.  The researchers took an AI that had been programmed to produce art -- in this case, to reproduce a piece of human-created art, though the test subjects weren't told that -- and asked volunteers, all of whom were told the piece had been created by AI, to rate how creative the product was.  The volunteers were placed in one of three groups:

  • Group 1 saw only the result -- the finished art piece;
  • Group 2 saw the lines appearing on the page, but not the robot creating it; and
  • Group 3 saw the robot itself making the drawing.

Even though the resulting art pieces were all identical -- and, as I said, the design itself had been created by a human being, and the robot was simply generating a copy -- group 1 rated the result as the least creative, and group 3 as the most.

Evidently, if we witness something's production, we're more likely to consider the act creative -- regardless of the quality of the product.  If the producer appears to have agency, that's all it takes.

The problem here is that deciding whether something is "really creative" (like any of the interminable sub-arguments over whether certain music, art, or writing is "good") inevitably involves a subjective element that -- philosophy encyclopedias notwithstanding -- cannot be expunged.  The AI experiment at Aalto University highlights that it doesn't take much to change our opinion about whether something is or is not creative.

Now, bear in mind that I'm not considering here the topic of ethics in artificial intelligence; I've already ranted at length about the problems with techbros ripping off actual human artists, musicians, and writers to train their AI models, and how this will exacerbate the fact that most of us creative types are already making three-fifths of fuck-all in the way of income from our work.  But what this highlights is that we humans can't even come to consensus on whether something actually is creativity.  It's a little like the Turing Test; if all we have is the output to judge by, there's never going to be agreement about what we're looking at.

So while the researchers were careful to make it obvious (well, after the fact, anyhow) that what their robot was doing was not creative, but was a replica of someone else's work, there's no reason why AI systems couldn't already be producing art, music, and writing that appears to be creative by the Stanford Encyclopedia's criteria of being new, valuable, and surprising.

At which point we'd better figure out exactly what we want our culture's creative landscape to look like -- and fast.

****************************************


Friday, May 16, 2025

Passageways

I was asked a couple of days ago by a loyal reader of Skeptophilia if I'd ever heard of The Backrooms -- and if so, if I thought there was "anything to it."

I hadn't, but told him I needed clarification about what exactly he was looking for.  "Anything to it" is, after all, a little on the vague side.

"You know," he said.  "Something legitimately creepy.  Something more than just people getting freaked out over nothing, and then making shit up to explain why they're scared."

So I said I'd look into it.

The Backrooms turns out to have originated as a "creepypasta" -- a strange, usually first-person tale related as if it were true, that then gets passed around on the internet and kind of takes on a life of its own.  (Two famous stories that originated as creepypasta are Slender Man and the Black-eyed Children -- both of which I thought were cool enough that I ended up using them in my novels, in Signal to Noise and Eyes Like Midnight, respectively.)  The Backrooms has to do with someone who stumbled into an empty, fluorescent-lit space that didn't obey the regular laws of time and space; partitions changed position, doorways opened up or closed when you weren't looking, angles shifted and turned in unpredictable ways.  (Reminds me of the evil city of R'lyeh from H. P. Lovecraft's Cthulhu Mythos, where the geometry is so skewed you can't even tell what's horizontal and vertical.)

It was a place, they claimed, where you could "noclip out of reality" -- "noclipping" being a video game term where a character can pass right through a solid wall.

The original post was accompanied by the following photograph:


Well, the internet being what it is, pretty soon someone found that there was nothing paranormal about the photograph; it was, in fact, an empty furniture sales room in Oshkosh, Wisconsin, that was being renovated into a hobby store.  But the garish lighting, sickly yellow cast, and odd angles definitely give a surreal air to the photograph.

I'm not sure I would want to be alone at night in that place, rational skeptical attitudes notwithstanding.

The Backrooms (at least while it was empty) is a good example of a "liminal space" -- a place that appears to be a mysterious passageway to somewhere else, somewhere not quite of this world.  Consider how often that trope has been used in fiction -- H. G. Wells's "The Door in the Wall," the Wardrobe in C. S. Lewis's Narnia series, the hotel corridors in Stephen King's The Shining, and the weird labyrinth of empty streets leading to the door of Omo's barber shop in the Doctor Who episode "The Story and the Engine" are four obvious examples -- and much of the eeriness comes from the fact that while you're there, you're alone.

Just you and the twisted geometry of spacetime that rules such places.

"Liminal spaces include empty spots, like abandoned shopping malls, corridors, and waiting rooms after hours," said architect Tara Ogle.  "These are spaces that are liminal in a temporal way, that occupy a space between use and disuse, past and present, transitioning from one identity to another.  While there, we are standing on a threshold between how we lived previously and new ways of living, working and occupying space.  It's understandable that we react emotionally to such places."

Liminal spaces, it seems, are to architecture what the uncanny valley is to faces.

Despite my reluctance to attribute any of this to the paranormal, I'm no stranger to the feelings evoked by places that seem to be caught between the real world and somewhere else.  I've described here my odd reaction to spending an afternoon in the ruins of Rievaulx Abbey in northern England, an experience that felt quite real even though there was no scientifically-admissible evidence that anything untoward was going on.

In fact, for a skeptic, I have to admit I'm pretty damn suggestible.  I suspect I went into science as a way of compensating for the fact that my emotions are like an out-of-control pinball game most of the time.  So while on the surface I might seem like a good choice to accompany you into the investigation of a haunted house, I'd probably react more like Shaggy in Scooby Doo, leaping into the air at the first creaking floorboard and then running away in a comical fashion, my feet barely even touching the ground.

Be that as it may, in response to my reader's question: I doubt seriously there's "anything to" The Backrooms and other liminal spaces besides people's tendency to react with fear to being in odd situations, which (after all) includes being in a completely empty, fluorescent-lit furniture showroom at night.  I don't think you're going to end up passing through a doorway into an exciting fantasy world if you go exploring there.

Which is kind of a shame.  On the other hand, you are also unlikely to meet creepy little twin girl ghosts or an evil barber who wants to use your imagination as a power source.  So like everything, I guess it's a mixed bag.

****************************************


Thursday, May 15, 2025

Borrowers and lenders

My master's thesis is titled, "The Linguistic Effects of the Viking Invasions on England and Scotland," which should put it in contention for winning the Scholarly Research With The Least Practical Applications Award.

Even so, I still think it's a pretty interesting topic.  My contention was that the topography of the two countries is a big part of the reason that their languages, Old English and Old Gaelic respectively, were affected so differently.  England, with its largely level countryside and a networked road system even back then, adopted hundreds of Old Norse borrow-words into every lexical category, even though the explicit rule by Scandinavia (the "Danelaw") was confined to the eastern half of the country and only lasted two centuries.  Hundreds of place names in England are Norse in origin; any town ending in "-by" owes that part of its name to the Norse word for "town."  (Similarly, places ending in -thorpe, -thwaite, -foss, -toft, or -ness reflect a Norse influence; and all the streets in the city of York that end in -gate -- well, gata is Old Norse for "street.")

The usual pattern is that languages borrow words for concepts they didn't already have covered, but Old English saw Norse supersede even perfectly good native words that were in wide use.  The result is that Modern English has way more words of Norse origin than you might expect, including many in the common, everyday vocabulary.  A few examples of the more than two hundred documented Norse borrow-words:

  • window
  • gift
  • sky
  • egg
  • scare
  • scream
  • anger
  • awkward
  • fellow

Even the pronoun "they" is Norse in origin; the Old English words for "he," "she," and "they" -- hé, héo, and híe, respectively -- were pronounced so much alike that it could be confusing knowing who you were talking about.  The practical English fixed this by palatalizing héo to she and adopting the Norse third-person plural pronoun þeir (possessive þeira) as our modern "they" and "their."

Gaelic, though, responded differently.  Scotland was (and is) rugged terrain, and the big settlements tended to be clustered around the coast and inland waterways.  Even though Scandinavian rule in Scotland lasted much longer -- Norwegian rule of the Hebrides didn't end until 1266 -- the influence on the language was minor, and largely restricted to place names (the -ey found in the names of lots of the islands of Scotland simply means "island" in Old Norse) and terms related to living near water.  The Gaelic words for net, sail, anchor, boat, ford, delta, beach, seagull, seaweed, and skiff are all Norse in origin, but of the common vocabulary, only a few are (including the words for noise, shoe, guide, time, and scatter).

[Nota bene: The Orkneys were a different matter entirely.  Norse rule in the Orkneys continued until 1472, and the people there actually lost Gaelic altogether.  Until the eighteenth century the main language was Norn, a dialect of West Norse, at which point it was superseded by the Orcadian dialect of Scots English.  The last native speaker of Norn died in 1850.]

Of course, English is an amalgam of a great many languages; not only did the Vikings leave their thumbprint on it, but the Normans in the eleventh century brought in a great many words of French origin.  Additionally, a lot of our technical vocabulary comes from Latin and Greek.  Until the eighteenth century, English was kind of a backwater language spoken only by people in one corner of Europe, so when scientists and other academics from different countries were communicating, they usually did so in Latin.  The result is that we still have a ton of Latin and Greek borrow-words in English, including most of our scientific, legal, and scholarly vocabulary.  To demonstrate how dependent the sciences are on Latin and Greek roots, the brilliant science fiction author Poul Anderson wrote a piece on the atomic theory using only words native to Old English -- and the result ("Uncleftish Beholding") sounds like some ancient mythological tale, and gives you an idea of just how much Latin and Greek have influenced the cadence of our language.  Here's a short excerpt to give the flavor, but you really should read the whole thing, because it's just that wonderful:

For most of its being, mankind did not know what things are made of, but could only guess.  With the growth of worldken, we began to learn, and today we have a beholding of stuff and work that watching bears out, both in the workstead and in daily life.

The underlying kinds of stuff are the *firststuffs*, which link together in sundry ways to give rise to the rest.  Formerly we knew of ninety-two firststuffs, from waterstuff, the lightest and barest, to ymirstuff, the heaviest. Now we have made more, such as aegirstuff and helstuff.

The firststuffs have their being as motes called *unclefts*.  These are mightily small; one seedweight of waterstuff holds a tale of them like unto two followed by twenty-two naughts.  Most unclefts link together to make what are called *bulkbits*.  Thus, the waterstuff bulkbit bestands of two waterstuff unclefts, the sourstuff bulkbit of two sourstuff unclefts, and so on.  (Some kinds, such as sunstuff, keep alone; others, such as iron, cling together in ices when in the fast standing; and there are yet more yokeways.)  When unlike clefts link in a bulkbit, they make *bindings*.  Thus, water is a binding of two waterstuff unclefts with one sourstuff uncleft, while a bulkbit of one of the forestuffs making up flesh may have a thousand thousand or more unclefts of these two firststuffs together with coalstuff and chokestuff.

Everywhere English speakers went -- which, for better or worse, was kind of everywhere -- we picked up and adopted new words.  The result is a rich, often confusing patchwork quilt of a language, with strange sound-to-spelling correspondences, remnants of grammar and morphology from a dozen different places, and weird attempts to blend it all together.  (I don't know how many times I told students that the plurals of hippopotamus and rhinoceros were not hippopotami and rhinoceri.  That'd be trying to pluralize them like Latin words, and they're actually Greek -- hippopotamus is Greek for "river horse," and rhinoceros for "nose horn" -- so if you want to be fancy about it, it'd be hippopotamoi and rhinocerotes.  But that sounds pretentious as hell, so let's stick with hippopotamuses and rhinoceroses.)

Anyhow, that's our excursion into our peculiar hodgepodge of a language.  Hodgepodge, by the way, is French in origin, from hochepot, meaning "a stew."  The hoche part comes from the Old Germanic word hocher, meaning "to shake."

Okay, I'd better stop here.  I could do this all day.

****************************************


Wednesday, May 14, 2025

By any other name...

Scientists have an undeserved reputation for being dry and humorless.

If you doubt the "undeserved" part, consider scientific names.  Because by convention scientific names usually have Greek or Latin roots, they sound pretty sophisticated and fancy -- until you translate them.  The adorable black-footed ferret of the American Rockies is Mustela nigripes, which translates to... "black-footed ferret."  The western diamondback rattlesnake, Crotalus atrox?  Greek and Latin for "scary noisemaker."  The name of the mammalian order containing shrews and moles, Eulipotyphla, is kind of insulting.  It means "really fat and blind."  But they only get sillier from there.  How about Eucritta melanolimnetes, a species of amphibian from the Carboniferous Period?  The name means "the real Creature from the Black Lagoon."

And the order of mammals that includes rabbits, Order Lagomorpha?  Translated from Greek, "Lagomorpha" literally means "it's shaped like a bunny."

The privilege of naming a newly-discovered species goes to the discoverer, and if they choose they can name it in honor of someone (it's considered bad form to name it after yourself).  Lots of biologists name species after their teachers or mentors, but the field is wide open.  Entomologists Kelly Miller and Quentin Wheeler named a species of slime-mold beetle after former Vice President Dick Cheney -- whether Agathidium cheneyi was an honor or an insult is open to interpretation.  Some paleontologists working in Madagascar liked to listen to music while they worked, and became convinced that whenever they played Dire Straits, they found lots of new fossils.  Thus, there's a species of Cretaceous dinosaur named Masiakasaurus knopfleri.  (Upon hearing about this, Mark Knopfler allegedly responded, "And people said I was a dinosaur before.")  A genus of carabid beetles, Agra, has a species named Agra schwarzeneggeri.  Terry Erwin, the entomologist responsible for that one, found a number of other Agra species, and thus we have Agra vation, Agra phobia, and Agra cadabra.

You can even name species after fictional characters.  Thus we have a wasp named Polemistus chewbacca, an Australian moth with marks that resemble a second head named Erechthias beeblebroxi, an Ordovician trilobite named Han solo, a sponge-like fungus from Malaysia named -- I shit you not -- Spongiforma squarepantsii, a cave-dwelling insect from Spain named Gollumjapyx smeagol, and -- my favorite -- a fish from the fjords of New Zealand named Fiordichthys slartibartfasti.

If you get why that last one is fall-out-of-your-chair hilarious, congratulations; you're as big a nerd as I am.

Some are just outright silly.  Consider the Australian wasp discovered by entomologist Arnold Menke in 1977.  He was so delighted at the find that he gave it the scientific name Aha ha.

And I would be remiss in not mentioning a genus of small mollusks named Bittium.  When a related genus of even smaller mollusks was discovered, they named it... you guessed it... Ittibittium.

The reason all this silliness comes up is a discovery that was the subject of a paper in PLOS ONE.  Paleontologists working in Brazil found a fossil of a new species of tanystropheid, a group of Triassic reptiles with such bizarrely elongated necks that scientists are still trying to figure out how they walked without doing a face-plant.  (One possible answer is that they were aquatic, but that's not certain.)

The group takes its name from Tanystropheus longobardicus, which is itself sort of a goofy name.  It means "long, bent thing with a long beard."  I have to wonder how many controlled substances the scientists had partaken of before they came up with that one.

[Image licensed under the Creative Commons Nobu Tamura email: nobu.tamura@yahoo.com http://spinops.blogspot.com/, Tanystropheus NT small, CC BY-SA 4.0]

Anyhow, the new species was christened Elessaurus gondwanoccidens.  The species name isn't so interesting -- it means "from western Gondwana," after one of the supercontinents around during the Triassic Period -- but the genus name is clever.  It plays on the usual -saurus (Greek for "lizard") ending of many genera of dinosaurs, but was actually named for Elessar, one of the many monikers of King Aragorn II from The Lord of the Rings.  Elessar, which means "elf-stone" in J. R. R. Tolkien's wonderful conlang Quenya, was the title Aragorn took after Sauron got his clock cleaned by Frodo et al. and the former Strider became the King of Gondor.

So that's a look at the deadly serious, dry-as-dust subject of biological taxonomy.  And I haven't even gotten into the off-color ones, which is a whole subject in and of itself.  Suffice it to say that orchid is Greek for "testicle," and there's a mushroom with the scientific name Phallus impudicus ("shameless penis").  I'll leave you to research the rest of that topic on your own.

****************************************


Tuesday, May 13, 2025

The second Sun

I know the universe can be a weird place sometimes, but... let's follow Carl Sagan's dictum of looking for a normal and natural explanation for things before jumping to a paranormal or supernatural one, mmmkay?

The reason this comes up is a discussion I saw online about the strange phenomenon of a "double Sun" -- when there appears to be a split view of the Sun (or, sometimes, a smaller "second Sun" near the main one).  The first clue that this is a completely natural (albeit odd-looking) occurrence is that it always happens when (1) the sky is hazy, and (2) the Sun is near the horizon.  It turns out to be caused by the Sun's light refracting through particles of ice or smoke in the upper atmosphere, creating an ephemeral double image.

It is, in fact, simply an optical illusion.

A "double Sun" caused by wildfire smoke, seen from Jervis Bay National Park, New South Wales, Australia [Image licensed under the Creative Commons Doug McLean, Bushfire smoke induced Double Sun, CC BY-SA 4.0]

One of the commenters, evidently a science type, gave a measured and reasonable response explaining light refraction, and that resulted in everyone basically going, "Oh, that's cool!  An interesting atmospheric phenomenon!  Thank you for the scientific explanation!"

Ha!  I'm lying.  Of course that's not how people responded.  He was immediately shouted down by about a hundred other folks, who had "explanations" like the following.  Spelling and grammar are exactly as written, because you can only add [sic] so many times:

  • It’s just more proof that the Earth is flat.  We’ve been viewing a computer CGI simulation since the late 1800s, and it has just been a matter of time before we start seeing glitches in the man’s software.
  • Is it nibiru?  I've read planet x?  Is it a sun like star or what?  I'm so confused.
  • It has been photographed before, from Seattle to Wisconsin.  NASA has known about the approach of Nibiru, the Destroyer, Planet X or the countless other names it is known by, including Wormwoof, which it is known by in the bible.  It is an entire star system travelling on an elliptical orbit towards our earth.  It has its own Sun (which you are seeing) and several planets that travel with it.  All the people want to know why can’t you see it.  The answer is because it’s a dead brown star that can only be seen in the infra red spectrum.  The only 2 places that have a black light telescope is in Antarctica and the Vatican.  Go figure.
  • If you don’t mind, I will actually give you a serious reply, depending on what you believe in depending on what you think is possible and aside from that, depending on what frequency you operate at you’re able to see in those things, I’ve heard a lot from people who are a lot smarter than me that by 2027 the two suns will be completely visible as well as open contact.  I don’t care if I’m labeled crazy I don’t channel.
  • Idk what any of this can or will mean for us here, but boys and girls I don't think the comet in our orbit, that they say should remain visible to the naked eye, but only while facing due West, and get this....only during, or immediately after the sunset will it appear near the Sun, I don't think that's what they are telling us it is.  Is this the reason all these billionaires have been building massive underground bunkers suddenly this past year?
  • Trump the Antichrist is here and two suns is the beginning of the end.  In many apocalyptic and religious interpretations, the imagery of “two suns crossing in the sky” is often associated with the arrival of the Antichrist, signifying a significant and ominous event that marks the beginning of the end times, often interpreted as a sign of a false messiah or a powerful evil force emerging into the world, e.g. Trump and Musk.
  • There is a second sun behind our sun but we can never see it because it stays behind the sun.  It’s gravitational balanced by the tiny black hole on the other side of our moon that we can’t see either.  Every 276 years in June the moon’s black hole and the second son have a tilting wobble and the second sun becomes visible for a few minutes in a small viewing zone across the northern hemisphere.  Behind the second sun there are a few more things that we can’t see, like second Jupiter.

A few thoughts about all that.

  • What the actual fuck?
  • Okay, I can see Trump as the Antichrist, given that he embodies all Seven Deadly Sins in one individual.  But somehow I don't think even his level of evil can make two Suns appear in the sky.
  • If it's only visible for a few minutes every 276 years, it was pretty lucky the dude got a snapshot of it, wasn't it?
  • So, Nibiru is en vogue again, eh?  Last I heard of Nibiru was about ten years ago, and I figured it had become passé, replaced by far more believable claims like targeted weather modification and 5G mind control and Jewish space lasers.
  • If I've never seen a "second Sun," it's because I'm "operating on the wrong frequency?"  I didn't know humans were like radios, and came equipped with a frequency dial.  That's pretty awesome.  Maybe if mine is set right I can tune into the BBC.
  • Only Antarctica and the Vatican have "black light telescopes"?  I'm trying to come up with some kind of clever response to this, but... nope, I got nothin'.
  • If I ever get another pit bull, I'm gonna name him "Wormwoof."
  • At the risk of repeating myself, what the actual fuck?

What astounds me about all of this is how many people seem to gravitate toward this sort of nonsense instead of looking first for a rational explanation.  It's not like the science in this case is hard to understand, or even hard to find; the website of the National Radio Astronomy Observatory posted a perfectly good explanation that shows up on the first page of a Google search for "double Sun."

But loony claims like Nibiru and dead brown stars and second Jupiters and simulation glitches are, apparently, more attractive.  Is it because it makes the universe seem weirder and cooler?  Or is it the appeal of "seeing through a coverup" by scientists or the government or whatnot?

It's always seemed to me that the scientific explanations of what we observe are plenty cool enough, and some of them -- like quantum physics -- plenty weird enough.  Why do so many people need to add extra layers of wackiness onto things?

I'll end with another quote from Carl Sagan, which I think sums things up nicely: "For me, it is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring."

****************************************


Monday, May 12, 2025

Djinn and paradox

In the very peculiar Doctor Who episode "Joy to the World," the character of Joy Almondo is being controlled by a device inside a briefcase that -- if activated -- will release as much energy as a supernova, destroying the Earth (and the rest of the Solar System).  But just in the nick of time, a future version of the Doctor (from exactly one year later) arrives and gives the current Doctor the override code, saving the day.

The question comes up, though, of how the future Doctor knew what the code was.  The current Doctor, after all, hadn't known it until he was told.  He reasons that during that year, he must have learned the code from somewhere or someone -- but the year passes without anyone contacting him about the briefcase and its contents.  Right before the year ends (at which point he has to jump back to complete the loop) he realizes that his surmise wasn't true.  Because, of course, he already knew the code.  He'd learned it from his other self.  So armed with that knowledge, he jumps back and saves the day.

Well, he saves the moment, at least.  As it turns out, their troubles are just beginning, but that's a discussion for another time.

A similar trope occurred in the 1980 movie Somewhere in Time, but with an actual physical object rather than just a piece of information.  Playwright Richard Collier (played by Christopher Reeve) is at a party celebrating the debut of his most recent play, and is approached by an elderly woman who hands him an ornate pocket watch and says, in a desperate voice, "Come back to me."  Collier later goes back in time to 1912, finds her as a young woman, and they fall desperately in love -- and he gives her the pocket watch.  Ultimately, he's pulled back into the present, and his girlfriend grows old without him, but right before she dies she finds him and gives him back the watch, closing the loop.

All of this makes for a fun twist; such temporal paradoxes are common fare in fiction, after all.  And the whole thing seems to make sense until you ask the question of, respectively (1) where did the override code originally come from? and (2) who made the pocket watch?

Because when you think about it -- and don't think too hard, because these kinds of things are a little boggling -- neither one has any origin.  They're self-creating and self-destroying, looped like the famous Ouroboros of ancient myth, the snake swallowing its own tail. 

[Image is in the Public Domain]

The pocket watch is especially mystifying, because after all, it's an actual object.  If Collier brought it back with him into the past, then it didn't exist prior to the moment he arrived in 1912, nor after the moment he left in 1980 -- which seems to violate the Law of Conservation of Matter and Energy.

Physicists Andrei Lossev and Igor Novikov called such originless entities "djinn particles," because (like the djinn, or "genies," of Arabian mythology) they seem to appear out of nowhere.  Lossev and Novikov realized that although "closed timelike curves" are, theoretically at least, allowed by the Theory of General Relativity, they all too easily engender paradoxes.  So they proposed something they call the self-consistency principle -- that time travel into the past is possible if and only if it does not generate a paradox.

So let's say you wanted to do something to change history.  Say, for example, that you wanted to go back in time and give Arthur Tudor, Prince of Wales, some medication to save his life from the fever that otherwise killed him at age fifteen.  This would have made him king of England seven years later instead of his younger brother -- who, in the actual timeline, became the infamous King Henry VIII -- thus dramatically changing the course of history.  In the process, of course, it also generates a paradox; because if Henry VIII never became king, you would have no motivation to go back into the past and prevent him from becoming king, right?  Your own memories would be consistent with the timeline of history that led to your present moment.  Thus, you wouldn't go back in time and save Arthur's life.  But this would mean Arthur would die at fifteen, Henry VIII becomes king instead, and... well, you see the difficulty.

Lossev and Novikov's self-consistency principle fixes this problem.  It tells us that your attempt to save Prince Arthur must have failed -- because we know that didn't happen.  If you did go back in time, you were simply incorporated into whatever actually did happen.

Timeline of history saved.  Nothing changed.  Ergo, no paradox.

You'd think that physicists would kind of go "whew, dodged that bullet," but interestingly, most of them look at the self-consistency principle as a bandaid, an unwarranted and artificial constraint that doesn't arise from the models themselves.  Joseph Polchinski came up with another paradoxical situation -- a billiard ball fired into a wormhole at exactly the right angle that, when it comes out of the other end, it runs into (and deflects) its earlier self, preventing it from entering the wormhole in the first place -- and analysis by Nobel Prize-winning physicist Kip Thorne found there's nothing inherent in the models that prevents this sort of thing.

Some have argued that the ease with which time travel into the past engenders paradox is an indication that it's simply an impossibility; eventually, they say, we'll find that there's something in the models that rules out reversing the clock entirely.  In fact, in 2009, Stephen Hawking famously hosted a time-travelers' party at Cambridge University, complete with fancy food, champagne, and balloons -- but didn't send out the invitations until the following day.  He waited several hours at the party, and no one showed up.

That, he said, was that.  Because what time traveler could resist a party?

But there's still a lingering issue, because it seems like if it really is impossible, there should be some way to prove it rigorously, and thus far, that hasn't happened.  Last week we looked at the recent paper by Gavassino et al. that implied a partial loophole in the Second Law of Thermodynamics -- if you could travel into the past, entropy would run backwards during part of the loop and erase your memory of what had happened -- but it still leaves the question of djinn particles and self-deflecting billiard balls unsolved.

Seems like we're stuck with closed timelike curves, paradoxes notwithstanding.

Me, I think my mind is blown sufficiently for one day.  Time to go play with my puppy, who only worries about paradoxes like "when is breakfast?" and the baffling question of why he is not currently getting a belly rub.  All in all, probably a less stressful approach to life.

****************************************


Saturday, May 10, 2025

Mystery, certainty, and heresy

I've been writing here at Skeptophilia for fourteen years, so I guess it's to be expected that some of my opinions have changed over that time.

I think the biggest shift has been in my attitude toward religion.  When I first started this blog, I was much more openly derisive about religion in general.  My anger is understandable, I suppose; I was raised in a rigid and staunchly religious household, and the attitude of "God as micromanager" pervaded everything.  It brings to mind the line from C. S. Lewis's intriguing, if odd, book The Pilgrim's Regress: "...half the rules seemed to forbid things he'd never heard of, and the other half forbade things he was doing every day and could not imagine not doing; and the number of rules was so enormous that he felt he could never remember them all."

But the perspective of those fourteen years, coupled with exploring a great many ideas (both religious and non-religious) during that time, has altered my outlook some.  I'm still unlikely ever to become religious myself, but I now see the question as a great deal more complex than the black-and-white attitude I had back then.  My attitude now is more that everyone comes to understand this weird, fascinating, and chaotic universe in their own way and time, and who am I to criticize how someone else squares that circle?  As long as religious people accord me the same right to my own beliefs and conscience as they have, and they don't use their doctrine to sledgehammer in legislation favoring their views, I've got no quarrel.

The reason this comes up is, of course, the election of a new Pope, Leo XIV, to lead the Roman Catholic Church.  I watched the scene unfold two days ago, and I have to admit it was kind of exciting, even though I'm no longer Catholic myself.  The new Pope seems like a good guy.  He's already pissed off MAGA types -- the white smoke had barely dissipated from over St. Peter's before the ever-entertaining Laura Loomer shrieked "WOKE MARXIST POPE" on Twitter -- so I figure he must be doing something right.  I guess in Loomer's opinion we can't have a Pope who feeds the poor or treats migrants as human beings or helps the oppressed.

Or, you know, any of those other things that were commanded by Jesus.

The fact remains, though, that even though I have more respect and tolerance for religion than I once did, I still largely don't understand it.  After Pope Leo's election, I got online to look at other Popes who had chosen the name "Leo," and following that thread all the way back to the beginning sent me down a rabbit hole of ecclesiastical history that highlighted how weird some of the battles fought in the church have been.

The first Pope Leo ruled back in the fifth century, and his twenty-one year reign was a long and arduous fight against heresy.  Not, you understand, people doing bad stuff; but people believing wrongly, at least in Leo's opinion.

Pope Leo I (ca. 1670) by Francisco Herrera [Image is in the Public Domain]

The whole thing boils down to the bizarre argument called "Christology," which is the doctrine concerning the nature of Jesus.  Leo's take on this was that Jesus was the "hypostatic union" of two natures, God-nature and human nature, in one person, "with neither confusion nor division."  But this pronouncement immediately resulted in a bunch of other people saying, "Nuh-uh!"  You had the:

  • Monophysites, who said that Jesus only had one nature (divine);
  • Dyophysites, who said that okay, Jesus had two natures, but they were separate from each other;
  • Monarchians, who said that God is one indivisible being, so Jesus wasn't a distinct individual at all;
  • Docetists, who said that Jesus's human appearance was only a guise, without any true reality;
  • Arianists, who said that Jesus was divine in origin but was inferior to God the Father;
  • Adoptionists, who said that Jesus only became the Son of God at his baptism; and
  • probably a dozen or so others I'm forgetting about.

So Leo called together the Council of Chalcedon, and the result was that most of these were declared heretical.  This gave the church leaders license to persecute the heretics, which they did, with great enthusiasm.  But what occurs to me is the question, "How did they know any of this?"  They were all working off the same set of documents -- the New Testament, plus (in some cases) some of the Apocrypha -- but despite that, all of them came to different conclusions.  Conclusions they were so certain of that they felt completely justified in using them to persecute people who believed differently (or, in the case of the heretics themselves, that they held so strongly they were willing to be imprisoned or executed rather than change their minds).

Myself, I find it hard to imagine much of anything that I'm that sure of.  I try my hardest to base my beliefs on the evidence and logic insofar as I understand them at the time, but all bets are off if new data comes to light.  That's why although I consider myself a de facto atheist, I'm hesitant to say "there is no God."  The furthest I'll go is that from what I know of the universe, and what I've experienced, it seems to me that there's no deity in charge of things. 

But if God appeared to me to point out the error of my ways, I'd kind of be forced to reconsider, you know?  It's like the character of Bertha Scott -- based very much on my beloved grandmother -- said, in my novella Periphery:

"Until something like this happens, you can always talk yourself out of something."  Bertha chuckled.  "It’s like my daddy said about the story of Moses and the burning bush.  I remember he once said after Mass that if he was Moses, he’d’a just pissed himself and run for the hills.  Mama was scandalized, him talking that way, but I understood.  Kids do, you know.  Kids always understand this kind of thing...  You see, something talks to you out of a flaming bush, you can think it’s God, you can lay down and cry, you can run away, but the one thing you can’t do is continue to act like nothing’s happened."

So while my own views are, in some sense, up for grabs, my default is to stick with what I know from science.  And the fifth century wrangling by the first Pope Leo over the exact nature of Jesus strikes me as bizarre.  As former Secretary of the Treasury Robert Rubin put it, "Some people are more certain of everything than I am of anything."

Be that as it may, I wish all the best to this century's Pope Leo.  Like I said, he looks like a great choice, and a lot of my Catholic friends seem happy with him.  As far as my own mystification about a lot of the details of religion, it's hardly the only thing about my fellow humans I have a hard time understanding.  But like I said earlier, as long as religious people don't use their own certainty to try to force me into belief, I'm all about the principle of live and let live.

****************************************