Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, March 26, 2024

The shadow knows

One of the most terrifying sleep-related phenomena is sleep paralysis.

I say this only from hearing about the experiences of others; I have never had it happen to me.  But the people I've talked to who have had episodes of sleep paralysis relate being wide awake and conscious but unable to move, often along with some odd sensory experiences: feelings of being watched or of having someone in the room; hissing, humming, or sizzling noises; a tingling in the extremities that feels like a mild electric shock; a feeling of being suffocated; and (understandably) fear and panic.

The reason all of this comes up is an article that appeared over at the site Mysterious Universe about "Shadow People."  The piece was by Nick Redfern, whose name should be familiar to anyone who is an aficionado of cryptozoology; Redfern has been involved in a number of investigations of the paranormal, and is the author of books such as The Roswell UFO Conspiracy, Shapeshifters: Morphing Monsters and Changing Cryptids, The Real Men in Black, The New World Order Book, and a variety of other titles I encourage you to peruse.

So Redfern has a pretty obvious bias here, which is why I was already primed to view his piece on the Shadow People with a bit of a jaundiced eye.  Let me let him speak for himself, though.  Redfern tells us that there are these entities we should all be on the lookout for, and then offers the following:
Jason Offutt is an expert on the Shadow People, and the author of a 2009 book on the subject titled Darkness Walks: The Shadow People Among Us.  He says there are eight different kinds of Shadow People – at least, they are the ones we know about.  He labels them as Benign Shadows, Shadows of Terror, Red-Eyed Shadows, Noisy Shadows, Angry Hooded Shadows, Shadows that Attack, Shadow Cats, and the Hat Man.
Shadow Cats?  Why only cats?  Cats, in my experience, are already conceited enough that they don't need another feather in their caps.  Of course, the positive side is that Shadow Cats wouldn't be very threatening. The cats I've owned specialized in two behaviors: Sitting Around Looking Bored, and Moving Closer To Where We Are So We'll Appreciate How Bored They Are.  If their Shadow versions are no more motivated, it's hard to see why you'd even care they were around, since Shadow Cats presumably don't eat, drink, or use a litter box.  They'd kind of be a low-impact paranormal home décor item.

On the other hand, I'm just as glad there are no Shadow Dogs, because then we'd have yet another source of the really obnoxious noise that dogs make when they are conducting intimate personal hygiene, a sound my wife calls "glopping."  Our three dogs glop enough, there's no need for additional glopping from the spirit world.

But then there's "Hat Man."  At first glance, that seemed fairly non-threatening, but Redfern tells us that Hat Man is the scariest one on the list:
I sat and listened at my table [at a conference, speaking to an attendee] as he told me how, back in July of this year, he had three experiences with the Hat Man – and which were pretty much all identical – and which were very familiar to me.  He woke up in the early hours of the morning to a horrific vision: the outside wall of his bedroom was displaying a terrifying image of a large city on fire, with significant portions of it in ruins.  It was none other than Chicago.  The sky was dark and millions were dead.  Circling high above what was left of the city was a large, human-like entity with huge wings.  And stood [sic] next to the guy, as he watched this apocalyptic scenario unravel from his bed, was the Hat Man, his old-style fedora hat positioned firmly on his head.  The doomsday-like picture lasted for a minute or two, making it clear to the witness that a Third World War had begun.  On two more occasions in the same month, a near-identical situation played out.  It’s hardly surprising that the man was still concerned by all this when we chatted at the weekend.
So he talked to some other people, and more than one mentioned seeing Hat Man -- always associated with images of doom and destruction.  Toward the end, he mentions the fact that one of the people who'd seen Hat Man suffered from sleep paralysis... which kind of made me go, "Aha."

In a paper by Walther and Schulz back in 2004 entitled "Recurrent Isolated Sleep Paralysis: Polysomnographic and Clinical Findings," it was found that people who suffered from sleep paralysis showed abnormal patterns of REM and non-REM sleep, and (most interestingly) fragmentation of REM.  REM, you probably know, is associated with dreaming; suppressing or disturbing REM causes a whole host of problems, up to and including hallucination.  Another paper -- Cheyne, Rueffer, and Newby-Clark, 1999, "Hypnagogic and Hypnopompic Hallucinations during Sleep Paralysis: Neurological and Cultural Construction of the Night-Mare" -- offers another interesting clue: during sleep paralysis, cholinergic neurons (the ones that promote wakefulness and REM) are hyperactive, whereas serotonergic neurons (the ones that initiate relaxation and a sense of well-being) are inhibited.  This implies that the mind becomes wakeful, but emotionally uneasy, before the brain-body connection comes back online.

[Image is in the Public Domain]

The problem here is that if you're in sleep paralysis, or the related phenomenon of hypnagogic experiences (dreams in light sleep), what you are perceiving is not reflective of reality.  So as creepy as Shadow People are -- not to mention "Hat Man" -- I'm pretty certain that what we've got here is a visual hallucination experienced during a dream state.

Not sure about the Shadow Cats, though.  I still don't see how that'd work.  Given my luck at trying to get cats to comply with simple rules such as "Stay The Hell Off The Kitchen Counter," my guess is that even feline hallucinations wouldn't want to cooperate.  If you expected them to show up and scare some poor dude who was just trying to get a good night's sleep, they'd probably balk because it wasn't their idea.  Shadow Dogs, on the other hand, would be happy to climb on the sleeping dude's bed and glop right next to his ear.  They're just helpful that way.

****************************************



Monday, March 25, 2024

Dog days

Our new dog, Jethro, is in the middle of a six-week puppy obedience class.

After three weeks of intensive training, he reliably knows the command "Sit."  That's about it.  The difficulty is that he's the most chill dog I've ever met.  He's not motivated to do much of anything except whatever it takes to get a belly rub.

Jethro in a typical position

Otherwise, whatever he's doing, he's perfectly content to keep doing it, especially if it doesn't require any extra effort.  In class a couple of weeks ago I finally got him to lie down when I said, "Down," but then he didn't want to get up again.  In fact, he flopped over on his side and refused to move even when I tried tempting him with a doggie treat.  After a few minutes, the instructor said, "Is your dog still alive?"

I assured him that he was, and that this was typical behavior.

After a few more futile attempts, I gave up, sat on the floor, and gave him a belly rub.

Jethro, not the instructor.

So after working with Jethro in class and at home, I've reached three conclusions:

  1. He has an incredibly sweet, friendly disposition.
  2. He's cute as a button.
  3. He has the IQ of a PopTart.

When we give him a command, he looks at us with this cheerful expression, as if to say, "Those are words, aren't they?  I'm pretty sure those are words."  Then he thinks, "Maybe those words have something to do with belly rubs."  So he flops over on his back, and his lone functioning brain cell goes back to sleep, having accomplished its mission.

Jethro in a rare philosophical mood

I couldn't help but think of Jethro when I read a study out of Eötvös Loránd University in Budapest, Hungary, which looked at how an electroencephalogram (EEG) trace changes when dogs are told the names of things (rather than commands to do things).  It found that the parts of the brain involved in mental representations of objects activate in dogs just as they do in humans.  The upshot is that dogs seem to form mental images when they hear the names of objects.

"Dogs do not only react with a learned behavior to certain words," said study lead author Marianna Boros, in an interview with Science Daily.  "They also don't just associate that word with an object based on temporal contiguity without really understanding the meaning of those words, but they activate a memory of an object when they hear its name."

Interestingly, this response seemed to be irrespective of a particular dog's vocabulary.  "It doesn't matter how many object words a dog understands," Boros said.  "Known words activate mental representations anyway, suggesting that this ability is generally present in dogs and not just in some exceptional individuals who know the names of many objects."

"Dogs are not merely learning a specific behavior to certain words, but they might actually understand the meaning of some individual words as humans do," said Lilla Magyari, who co-authored the study.  "Your dog understands more than he or she shows signs of."

Well, okay, maybe your dog does.  With Jethro, the best response he seems to be capable of is mild puzzlement.  I wish he'd been one of the test subjects, but my fear would be that when they'd say a word to him, the response on the EEG would be *soft static*, and the researchers would come to me with grave expressions and say, "I'm sorry to give you the bad news, Mr. Bonnet, but your dog appears not to have any higher brain function."

Of course, I have to admit that it's hard to distinguish between "I don't understand what you're saying" and "I don't give a damn about what you're saying."  Yesterday, when my wife was trying to teach him to catch a foam rubber frisbee, he repeatedly let it bonk off the top of his head; it might be that he knew perfectly well what she wanted him to do and just didn't want to do it.  So perhaps Lilla Magyari's right, and he's smarter than we think he is.

Given how often he's persuaded us to give up on all the "Sit," "Down," and "Stay" bullshit and just give him a belly rub, maybe he's not the one who's a slow learner.

****************************************



Saturday, March 23, 2024

Twisted faces

One of the most terrifying episodes The X-Files ever did was called "Folie à Deux."  In the opening scene, a man sees his boss not as a human but as a hideous-looking insectile alien who is, one by one, turning the workers in the company into undead zombies.

The worst part is that he's the only one who sees all of this.  Everyone else thinks everything is perfectly normal.

The episode captures in appropriately ghastly fashion the horror of psychosis -- the absolute conviction that the awful things you're experiencing are real despite everyone's reassurance that they're not.  In the show, of course, they are real; it's the people who aren't seeing it who are delusional.  But when this sort of thing happens in the real world, it is one of the scariest things I can imagine.  As I used to point out in my neuroscience classes, your brain takes the information it receives from your sensory organs and tries to assemble a picture of reality from those inputs; if something goes wrong, and the brain puts that information together incorrectly, that flawed picture becomes your reality.  At that point, there is no reliable way to distinguish reality from hallucination.

I was, unfortunately, reminded of that episode when a friend and loyal reader of Skeptophilia sent me a link yesterday to a story in NBC News Online about a man with prosopometamorphopsia, a (thank heaven) rare disorder that causes the patient's perception of human faces to go awry.  When he looks at another person, he sees their face as grotesquely stretched, with deep grooves in the forehead and cheeks.

Computer-generated images of what the patient describes seeing [Image credit: Antônio Mello, Dartmouth College]

Weirdly, it doesn't happen when he looks at a drawing or a photograph; only actual faces trigger the shift.  A moving face -- someone talking, for example -- accentuates the distortion.

Some people with prosopometamorphopsia (PMO) have it from birth; most, though, acquire it through physical damage to the brain, such as a stroke or traumatic brain injury.  The patient who was the first subject of this study has a lesion, visible in MRI images, on the left side of his brain that is undoubtedly the origin of the distorted perception.  As for the origin of the lesion: he had a severe concussion in his forties (he's now 59), but he also suffered accidental carbon monoxide poisoning four months before the onset of symptoms.  Which of those is the root cause -- or whether it's something else entirely -- is unknown.

At least now that he knows what's going on, he has been reassured that he's not going insane -- or worse, that he's seeing the world as it actually is and, like the man in "Folie à Deux," is the only one who does.  "My first thought was I woke up in a demon world," the patient told researchers, regarding how he felt when the symptoms started.  "I came so close to having myself institutionalized.  If I can help anybody from the trauma that I experienced with it and keep people from being institutionalized and put on drugs because of it, that’s my number-one goal."

I was immediately reminded of a superficially similar disorder called Charles Bonnet syndrome. (Nota bene: Charles Bonnet is no relation.  My French great-grandfather's name was changed upon arrival in the United States, so my last name shouldn't even be Bonnet.)  In this disorder, people with partial blindness, often from macular degeneration, start putting together the damaged and incomplete information their eyes are relaying to their brains in novel ways, causing what are called visual release hallucinations.  They can be complex -- one elderly woman saw what appeared to be tame lions strolling about in her house -- but there's no actual psychosis.  The people experiencing them, as with PMO, know (or can be convinced) that what they're seeing isn't real, which takes away a great deal of the anxiety, fear, and trauma of having hallucinations.

So at least that's one upside for PMO sufferers.  Still, it's got to be disorienting to look at the world around you and know for certain that what you're seeing isn't the way it actually is.  My eyesight isn't great, even with bifocals, but at least what I am seeing is real.  I'll take that over twisted faces and illusory lions any day.

****************************************



Friday, March 22, 2024

Leading the way into darkness

New from the "I Thought We Already Settled This" department, we have: the West Virginia State Legislature has passed a bill -- which the Governor is expected to sign -- that would allow the teaching of Intelligent Design and other "alternative theories" to evolution in public school biology classes.

It doesn't state this in so many words, of course.  The Dover (PA) decision of 2005 ruled that ID is not a scientific theory, that it has no place in the classroom, and that teaching it violates the Establishment Clause of the United States Constitution.  No, the anti-evolutionists have learned from their mistakes.  State Senator Amy Grady (R), who introduced the bill, deliberately eliminated any specific mention of ID from the wording of the bill.  It says that "no local school board, school superintendent, or school principal shall prohibit a public school classroom teacher from discussing and answering questions from students about scientific theories of how the universe and/or life came to exist" -- but when questioned on the floor of the Senate, Grady reluctantly admitted that it would allow ID to be discussed.

And, in the hands of a teacher who was a creationist, to be presented as a viable alternative to evolution.

I think the thing that frosts me the most about all this is an exchange between Grady and Senator Mike Woelfel (D) about the bill's use of the words "scientific theories" without defining them.  Woelfel asked Grady if there was such a definition in the bill; she said there wasn't, but then added, "The definition of a theory is that there is some data that proves something to be true.  But it doesn’t have to be proven entirely true."

*brief pause for me to scream obscenities*

No, Senator Grady, that is not the definition of a theory.  I know a lot of your colleagues in the Republican Party think we live in a "post-truth world" and agree with Kellyanne Conway that there are "alternative facts," but in science you can't just make shit up, or define terms whatever way you like and then base your argument on those skewed definitions.  Let me clarify for you what a scientific theory is, which I only have to do because apparently you can't even be bothered to read the first paragraph of a fucking Wikipedia article:

A scientific theory is an explanation of an aspect of the natural world and universe that can be (or a fortiori, that has been) repeatedly tested and corroborated in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results.  Where possible, some theories are tested under controlled conditions in an experiment... Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.

Intelligent Design is not a theory.  It does not come from the scientific method, it is not based on data and measurements, and it makes no predictions.  It hinges on the idea of irreducible complexity -- that there are structures or phenomena in biology that are too complex, or have too many interdependent pieces, to have arisen through evolution.  This sounds fancy, but it boils down to "we don't understand this, therefore God did it."  (If you want an absolutely brilliant takedown of Intelligent Design, read Richard Dawkins's book The Blind Watchmaker.  How, after reading that, anyone can buy ID is beyond me.)

[Image licensed under the Creative Commons Hannes Grobe, Watch with no background, CC BY 3.0]

And don't even get me started on Young-Earth Creationism.

What gets me is how few people are willing to call out people like Amy Grady on their bullshit.  People seem to have become afraid to stand up and say, "You are wrong."  "Alternative facts" aren't facts; they are errors at best and outright lies at worst.

And if we live in a "post-truth world" it's because we're choosing to accept errors and lies rather than standing up to them.

As historian Timothy Snyder put it, in his 2021 essay "The American Abyss":

Post-truth is pre-fascism...  When we give up on truth, we concede power to those with the wealth and charisma to create spectacle in its place.  Without agreement about some basic facts, citizens cannot form the civil society that would allow them to defend themselves.  If we lose the institutions that produce facts that are pertinent to us, then we tend to wallow in attractive abstractions and fictions...  Post-truth wears away the rule of law and invites a regime of myth.

But Carl Sagan warned us of this almost thirty years ago, in his brilliant (if unsettling) book The Demon-Haunted World: Science as a Candle in the Dark:

Science is more than a body of knowledge; it is a way of thinking.  I have a foreboding of an America in my children's or grandchildren's time – when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness.

People like Amy Grady are leading the way into that darkness, and it seems like hardly anyone notices.

We cannot afford to have a generation of children going through public school and coming out thinking that ignorant superstition is a theory, that sloppily-defined terms are truth, and that pandering to the demands of a few that their favorite myths be elevated to the status of fact is how science is done.  It's time to stand up to the people who are trying to co-opt education into religious indoctrination.

In the Dover Decision, we won a battle, but it's becoming increasingly apparent that we have not yet won the war.

****************************************



Thursday, March 21, 2024

Crown jewel

A white dwarf is the remnant of an average-to-small star at the end of its life.  When a star like our own Sun exhausts its hydrogen fuel, it goes through a brief period of fusing helium into carbon and oxygen, but that too eventually runs out.  This creates an imbalance between the two opposing forces ruling a star's life -- the outward thermal pressure from the heat released by fusion, and the inward compression from gravity.  When fusion ceases, the thermal pressure drops, and the star collapses until the electron degeneracy pressure becomes high enough to halt the collapse.  The Pauli Exclusion Principle states that two electrons can't occupy the same quantum state, and the force generated to prevent this from happening is sufficient to counterbalance the gravitational pressure.  (At higher masses, even that's not enough to stop the collapse; the electrons are forced to combine with protons to form neutrons, generating a neutron star, or at higher masses still, a black hole.)
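
A bit of textbook physics is worth adding here (my own gloss; it's not in anything linked above).  The pressure of a degenerate electron gas depends only on its density, and the scaling changes once the electrons get squeezed up to relativistic speeds:

\[ P_{\text{deg}} \propto \rho^{5/3} \;\;\text{(non-relativistic)} \qquad \longrightarrow \qquad P_{\text{deg}} \propto \rho^{4/3} \;\;\text{(relativistic)} \]

That softening of the exponent from 5/3 to 4/3 is why a white dwarf can't be arbitrarily massive: a gas whose pressure grows only as \(\rho^{4/3}\) gains pressure too slowly under compression to win against gravity, which is exactly where the Chandrasekhar Limit (more on that below) comes from.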

For a star like our Sun, in a single-star system, that's pretty much that.  The outer layers of the star's atmosphere get blown away to form a ghostly shell called a planetary nebula, and the white dwarf -- actually the star's core -- remains to slowly cool down and dim over the next billion-odd years.  But in multiple-star systems, something far more interesting happens.

White dwarfs, although nowhere near as dense as neutron stars, still have a strong gravitational field.  If the white dwarf is part of a close binary system, its gravitational pull is sufficient to siphon off gas from the upper atmosphere of its companion star.  The material from the companion is heated and compressed as it falls toward the white-hot surface of the white dwarf, and once enough of it builds up, it suddenly becomes hot enough to fuse, generating a huge burst of energy in a runaway thermonuclear reaction.

The result is called a nova -- a "new star" -- even though it's not new at all; it has merely flared up enough to be seen from a long way away.  (The other name for this phenomenon is a cataclysmic binary, which I like better not only because it's more accurate but because it sounds badass.)  Once the new fuel is exhausted, the star dims again, but the process simply starts over: the siphoning resumes, and depending on the rate of accretion, there'll eventually be another flare-up.

Artist's concept of a nova flare-up [Image courtesy of NASA Conceptual Image Lab/Goddard Space Flight Center]

The topic comes up because there is a recurrent nova that is due to erupt soon, and when it does, a "new star" will be visible in the Northern Hemisphere.  It's in the rather dim, crescent-shaped constellation of Corona Borealis, between Boötes and Hercules, which can be seen in the evening from late spring to midsummer.  The star T Coronae Borealis is ordinarily magnitude +10, far too dim to see with the naked eye; most people can't see anything dimmer than magnitude +6 unaided, and that's with great eyes on a completely clear, dark night.  But in 1946 this particular star first dimmed even further, then suddenly flared up to magnitude +2 -- about as bright as Polaris -- before gradually fading over the next days to weeks back down to its previous near-invisibility.
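
In case the magnitude scale is unfamiliar: it runs backwards (smaller numbers mean brighter), and it's logarithmic, with every five magnitudes corresponding to a factor of a hundred in brightness.  The arithmetic that follows is mine, not from any of the articles, but it's the standard formula:

\[ \frac{F_2}{F_1} = 10^{\,0.4\,(m_1 - m_2)} \]

So the 1946 jump from magnitude +10 to +2 meant the star brightened by a factor of \(10^{0.4 \times 8} = 10^{3.2}\), or roughly 1,600.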

And the astrophysicists are seeing signs that it's about to repeat its behavior from 78 years ago.  The best guesses are that it'll flare some time before September, which is perfect timing for seeing it if you live in the Northern Hemisphere.  If you're a star-watcher, keep an eye on the usually unremarkable constellation of Corona Borealis -- at some point soon, there will be a new jewel in the crown, albeit a transient one.

You have to wonder, though, if at some point the white dwarf in the T Coronae Borealis binary system will pick up enough extra mass from its companion to cross the Chandrasekhar Limit.  This value -- about 1.4 solar masses -- was determined by the brilliant Indian physicist Subrahmanyan Chandrasekhar as the maximum mass a white dwarf can have before electron degeneracy pressure becomes insufficient to halt the collapse.  Past that point, it falls inward so fast that the entire star blows itself to smithereens in a Type Ia supernova, one of the most spectacular events in the universe.  If T Coronae Borealis did this -- not that it's likely any time soon -- it would outshine everything in the sky but the Sun and Moon, and be easily visible in broad daylight, probably for weeks to months.
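
For the curious, the standard expression for the limit -- my addition, not from the articles -- depends only on fundamental constants and on \(\mu_e\), the mean molecular weight per electron, which is about 2 for a carbon-oxygen white dwarf:

\[ M_{\text{Ch}} \propto \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_{\text{H}})^2} \approx \frac{5.8}{\mu_e^{2}}\, M_{\odot} \approx 1.4\, M_{\odot} \]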

Now that I would like to see.

****************************************



Wednesday, March 20, 2024

Grammar wars

In linguistics, there's a bit of a line in the sand drawn between the descriptivists and the prescriptivists.  The former believe that the role of linguists is simply to describe language, not establish hard-and-fast rules for how language should be.  The latter believe that grammar and other linguistic rules exist in order to keep language stable and consistent, and therefore there are usages that are wrong, illogical, or just plain ugly.

Of course, most linguists don't fall squarely into one camp or the other; a lot of us are descriptivists up to a point, after which we say, "Okay, that's wrong."  I have to admit that I'm of a far more descriptivist bent myself, but there are some things that bring out my inner ruler-wielding grammar teacher, like when I see people write "alot."  Drives me nuts.  And I know it's now become acceptable, but "alright" affects me exactly the same way.

It's "all right," dammit.

However, some research published in Nature shows that, if you're of a prescriptivist disposition, you're eventually going to lose.

In "Detecting Evolutionary Forces in Language Change," Mitchell G. Newberry, Christopher A. Ahern, Robin Clark, and Joshua B. Plotkin of the University of Pennsylvania describe that language change is inevitable, unstoppable, and even the toughest prescriptivist out there isn't going to halt the adoption of new words and grammatical forms.

The researchers analyzed over a hundred thousand texts from 1810 onward, looking for changes in morphology -- for example, the decrease in the use of past-tense forms like "leapt" and "spilt" in favor of "leaped" and "spilled."  The conventional wisdom was that irregular forms (like the plural "geese" for "goose") persist because they're common, while less common words -- like "turf," which once pluralized to "turves" -- regularize because people don't use them often enough to learn the irregular inflection, so the regular form (in this case, "turfs") eventually takes over.

The research by Newberry et al. shows that this isn't true -- when there are two competing forms, which one wins is more a matter of random chance than of commonness.  They draw a very cool analogy between this phenomenon, which they call stochastic drift, and the genetic drift experienced by evolving populations of living organisms.
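
If you want to see the coin-flip character of this for yourself, here's a little simulation -- my own toy illustration, not the authors' actual analysis -- using the same kind of Wright-Fisher sampling scheme population geneticists use to model drift.  Two competing forms, zero selection pressure, and one of them always takes over anyway:

```python
import random

def drift_to_fixation(population_size=500, initial_freq=0.5, seed=None):
    """Neutral Wright-Fisher drift between two competing word forms
    (say, "leapt" vs. "leaped").  Each generation, every speaker
    adopts the form used by a randomly chosen speaker from the
    previous generation; neither form has any inherent advantage.
    Returns the form that wins and how many generations it took."""
    rng = random.Random(seed)
    count = int(population_size * initial_freq)  # speakers saying "leapt"
    generations = 0
    while 0 < count < population_size:
        freq = count / population_size
        # Each new speaker independently copies a random old speaker,
        # so the new count is a binomial draw around the old frequency.
        count = sum(rng.random() < freq for _ in range(population_size))
        generations += 1
    winner = "leapt" if count == population_size else "leaped"
    return winner, generations

if __name__ == "__main__":
    for trial in range(5):
        winner, generations = drift_to_fixation(seed=trial)
        print(f"trial {trial}: '{winner}' won after {generations} generations")
```

Run it a few times and "leapt" wins about as often as "leaped" does, which is exactly the point: under pure drift, which form survives has nothing to do with regularity or frequency -- it's a sampling accident.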

"Whether it is by random chance or selection, one of the things that is true about English – and indeed other languages – is that the language changes,” said Joshua Plotkin, who co-authored the study.  "The grammarians might [win the battle] for a decade, but certainly over a century they are going to be on the losing side.  The prevailing view is that if language is changing it should in general change towards the regular form, because the regular form is easier to remember.  But chance can play an important role even in language evolution – as we know it does in biological evolution."

So in the ongoing battles over changes in grammar, pronunciation, and spelling, the purists are probably doomed to fail.  It's worthwhile remembering how many words in modern English that are now completely accepted by descriptivist and prescriptivist alike are the result of such mangling.  Both "umpire" and "adder" came about because of an improper split of the indefinite article ("a noumpere" and "a nadder" were reanalyzed as "an oumpere" and "an adder").  "To burgle" came about because of a phenomenon called back formation -- when a common linguistic pattern gets applied improperly to a word that sounds like it has the same basic construction.  A teacher teaches, a baker bakes, so a burglar must burgle.  (I'm surprised, frankly, given how English yanks words around, we don't have carpenters carpenting.)


Anyhow, if this is read by any hard-core prescriptivists, all I can say is "I'm sorry."  It's a pity, but the world doesn't always work the way we'd like it to.  But even so, I'm damned if I'm going to use "alright" and "alot."  A line has to be drawn somewhere.  And I'm gonna draw it a lot, all right?

****************************************



Tuesday, March 19, 2024

Cosmological conundrums

Three of the most vexing problems in physics -- and ones I've hit on a number of times here at Skeptophilia -- are:
  1. dark matter -- the stuff that (by its gravitational influence) seems to make up 26% of the mass/energy of the universe, and yet has resisted every effort at detection or inquiry into what other properties it might have.
  2. dark energy -- a mysterious "something" that is said to be responsible for the apparent runaway expansion of the universe, and which (like dark matter) has defied detection or explanation in any other way.  This makes up 69% of the universe's mass/energy -- meaning the ordinary matter we're made of comprises only 5% of the apparent content of the universe.
  3. the conflict between the general theory of relativity (i.e. the theory of gravitation) and quantum physics.  In the realm of the very small (or at high energies), the theory of relativity falls apart -- it's irreconcilable with the nondeterministic model of quantum mechanics.  Despite over a century of the best minds in theoretical physics trying to find a quantum theory of gravity, the two most fundamental underpinnings of our understanding of the universe just don't play well together.
A while back I was discussing this with the fiddler in my band, who also happened to be a Cornell physics lecturer.  Her comment was that the mess physics is currently in suggests we're missing something major -- the same way that the apparent constancy of the speed of light in a vacuum, regardless of reference frame, created an intractable nightmare for physicists at the end of the nineteenth century.  It took Einstein's Theories of Relativity to resolve that one, by showing that the problem wasn't a problem at all, but a fundamental reality of how space and time work.

"We're still waiting for this century's Einstein," Kathy said.

[Image licensed under the Creative Commons ESA/Hubble, Collage of six cluster collisions with dark matter maps, CC BY 4.0]

There's no shortage of physicists working on stepping into those shoes -- and just last week, two papers came out suggesting possible solutions for the first two problems.

One claims to solve all three simultaneously.

Both of them take the same view of dark matter and dark energy that Einstein took of the luminiferous aether, the mysterious substance nineteenth-century physicists thought was the medium through which light propagated: they simply don't exist.

The first one, from Rajendra Gupta of the University of Ottawa, proposes that the need for both dark matter and dark energy in the model comes from a misconception about how the laws of physics change on a cosmological time scale.  The prevailing wisdom has been "they don't": the laws now are the same as the laws thirteen billion years ago, not long after the Big Bang.  Gupta suggests that two modifications to the model -- assuming that the strengths of the four fundamental forces of nature (gravity, electromagnetism, and the weak and strong nuclear forces) have decreased over time, and that light loses energy as it travels over long distances -- explain all the astrophysical observations we've made and obviate the need for dark matter and dark energy.

"The study's findings confirm that our previous work -- JWST early-universe observations and ΛCDM cosmology -- about the age of the universe being 26.7 billion years [rather than the usually accepted value of 13.8 billion years] has allowed us to discover that the universe does not require dark matter to exist," Gupta said.  "In standard cosmology, the accelerated expansion of the universe is said to be caused by dark energy but is in fact due to the weakening forces of nature as it expands, not due to dark energy."

The second, by Jonathan Oppenheim and Andrea Russo of University College London, suggests a different solution that (if correct) not only gets rid of dark matter and dark energy, but in one fell swoop resolves the conflict between relativity and quantum physics.  They propose that the problem is the deterministic nature of gravity; if a quantum-like uncertainty is introduced into gravitational models, the whole shebang works without the need for some mysterious dark matter and dark energy that no one has ever been able to find experimentally.

The mathematics of the model -- which, I must admit up front, are beyond me -- introduce new terms to explain the behavior of gravity at low accelerations, which are (not coincidentally) the regime where the effects of dark matter become apparent.  It's a striking approach; physicist Sabine Hossenfelder, who is generally reluctant to hop on the latest Grand Unified Theory bandwagon (and whose pessimism has been, unfortunately, justified in the past), writes in an essay on the new theory, "Reading Oppenheim’s new papers—published in the journals Nature Communications and Physical Review X—about what he dubs 'Post-Quantum Gravity,' I have been impressed by how far he has pushed the approach.  He has developed a full-blown framework that combines quantum physics with classical physics, and he tells me that he has another paper in preparation which shows that he can solve the problem of infinites that plague the Big Bang and black holes."

Despite this, Hossenfelder is still dubious about Post-Quantum Gravity.  "I don’t want to withhold from you that I think Oppenheim’s theory is wrong, because it remains incompatible with Einstein’s cherished principle of locality, which says that causes should only travel from one place to its nearest neighbours and not jump over distances," she writes.  "I suspect that this is going to cause problems sooner or later, for example with energy conservation.  Still, I might be wrong...  If Oppenheim’s right, it would mean Einstein was both right and wrong: right in that gravity remained a classical, non-quantum theory, and wrong in that God did play dice indeed.  And I guess for the good Lord, we would have to be both sorry and not sorry."

So we'll just have to wait and see.  If either of these theories is right, we're talking Nobel Prize material.  If the second one is right, it'd be the physics discovery of the century.  Like Sabine Hossenfelder, I'm not holding my breath; attempts to solve definitively the three problems I started this post with are, thus far, batting zero.  And I'm hardly qualified to make a judgment about what the chances are for these two.  But like many interested laypeople, I'll be fascinated to see which way it goes -- and to see if we might, in the words of my bandmate/physicist friend, be "looking at the twenty-first century's Einstein."

****************************************