Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, February 20, 2026

Emergent nonsense

Today I'd like to look at two articles that are especially interesting in juxtaposition.

The first is about a study out of the University of New South Wales, where researchers in psychology found that people are largely overconfident about their ability to detect AI-generated human faces.  No doubt this confidence comes from the fact that it used to be easier -- AI faces had a slick, animated quality that for many of us was an immediate red flag that the image wasn't real.

Not anymore.

It's not the Dunning-Kruger effect -- the (now widely disputed) tendency of people to overestimate their competence -- it's more that the quality of AI images has simply improved.  Drastically.  One thing that makes this study especially interesting is that the research team deliberately included a cohort of people called "super-recognizers" -- people whose ability to remember faces is significantly better than average -- as well as a group of people with ordinary facial recognition ability.  

"Up until now, people have been confident of their ability to spot a fake face," said study co-author James Dunn.  "But the faces created by the most advanced face-generation systems aren’t so easily detectable anymore...  What we saw was that people with average face-recognition ability performed only slightly better than chance.  And while super-recognizers performed better than other participants, it was only by a slim margin.  What was consistent was people’s confidence in their ability to spot an AI-generated face – even when that confidence wasn’t matched by their actual performance."

AI or real?  There are six of each.  Answers at the end of the post.  [Image credit: Dunn et al., UNSW]

The second study, out of the University of Bergen, appeared this week in the journal Information, Communication & Society under the title "What is a Fact?  Fact-checking as an Epistemological Lens."  Its findings are -- or should be -- so alarming that I'll quote the authors verbatim:
Generative AI systems produce outputs that are coherent and contextually plausible yet not necessarily anchored in empirical evidence or ground truth.  This challenges traditional notions of factuality and prompts a revaluation of what counts as a fact in computational contexts.  This paper offers a theoretical examination of AI-generated outputs, employing fact-checking as an epistemic lens.  It analyses how three categories of facts – evidence-based facts, interpretative-based facts and rule-based facts – operate in complementary ways, while revealing their limitations when applied to AI-generated content.  To address these shortcomings, the paper introduces the concept of emergent facts, drawing on emergence theory in philosophy and complex systems in computer science.  Emergent facts arise from the interaction between training data, model architecture, and user prompts; although often plausible, they remain probabilistic, context-dependent, and epistemically opaque.

Is it just me, or does the whole "emergent fact" thing remind you of Kellyanne Conway's breezy, "Yes, well, we have alternative facts"?

I mean, evaluating philosophical claims is way above my pay grade, but doesn't "epistemically opaque" mean "it could either be true or false, and we have no way of knowing which?"  And if my interpretation is correct, how can the output of a generative AI system even qualify as a "fact" of any kind?

So, we have AI systems that are capable of fooling people in a realm where most of us have a strikingly good, evolutionarily-driven ability -- recognizing what is and what is not a real human face -- and simultaneously, the people who study the meaning of truth are saying straight out that what comes out of large language models is effectively outside the realm of provable truth?  It makes sense, given how LLMs work; they're probabilistic sentence generators, using a statistical model to produce sentences that sound good based on a mathematical representation of the text they were trained on.  It's unsurprising, I suppose, that they sometimes generate bullshit -- and that it sounds really convincing.  
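If you've never seen the idea laid out, here's a minimal sketch in Python -- absurdly simplified compared to a real large language model, which uses a neural network over tokens rather than a word-count table -- of what "probabilistic sentence generator" means: tally which words follow which in some training text, then sample each next word from those tallies.  Notice that nothing in the loop ever checks whether the output is true; "sounds plausible" is the only criterion.

import random
from collections import defaultdict, Counter

# Toy "training data" -- a real model sees trillions of tokens, not three sentences.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count bigrams: for each word, how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start="the", length=12):
    """Sample a 'plausible' word sequence -- no notion of truth anywhere."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Prints a grammatical-ish but meaning-free string, e.g. "the dog sat on the mat . the cat ..."
print(generate())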

Please tell me I'm not the only one who finds this alarming.

Is this really the future that the techbros want?  A morass of AI-generated slop that is so cleverly constructed we can't tell the difference between it and reality?

The most frightening thing, to me, is that it puts a terrifying amount of power in the hands of bad actors who will certainly use AI's capacity to mislead for their own malign purposes.  Not only in creating content that is fake and claiming it's real, but the reverse.  For example, when photographic and video evidence of Donald Trump's violent pedophilia is made public -- it's only a matter of time -- I guarantee that he will claim that it's an AI-generated hoax.

And considering "emergent facts" and the phenomenal improvement in AI-generated imagery, will it even be possible to prove otherwise?  Gone are the days that you could just count the fingers or look for joints bending the wrong way. 

I know I've been harping on the whole AI thing a lot lately, and believe me, I wish I didn't have to.  I'd much rather write about cool discoveries in astronomy, geology, genetics, and meteorology.  But the current developments are so distressing that I feel driven to post about them, hoping that someone is listening who is in a position to put the brakes on.

Otherwise, I fear that we're headed toward a world where telling truth from lies will slide from "difficult" to "impossible" -- and where that will lead, I have no idea.  But it's nowhere good.

Faces 2, 3, 5, 8, 9, and 11 are AI-generated.  The others are real.

****************************************


Thursday, February 19, 2026

Twinkle, twinkle, little antistar

It's a big mystery why anything exists.

I'm not just being philosophical, here.  According to the current most widely-accepted cosmological model, when the Big Bang occurred, matter and antimatter would have formed in equal quantities.  As anyone who has watched Star Trek knows, when matter and antimatter come into contact, they mutually annihilate, and all of the mass therein is converted to a huge amount of energy in the form of gamma rays, the exact quantity of which is determined by Einstein's equation E = mc^2.
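Just to put a number on "huge," here's a back-of-the-envelope sketch in Python (the two grams of material are an arbitrary example of mine, and the joules-per-kiloton-of-TNT figure is the standard conversion):

# Back-of-the-envelope: energy from annihilating 1 g of matter with 1 g of antimatter.
c = 3.0e8            # speed of light, m/s (rounded)
mass = 0.002         # total mass converted, kg (1 g matter + 1 g antimatter)

energy_joules = mass * c**2          # E = mc^2
kiloton_tnt = 4.184e12               # joules per kiloton of TNT (standard conversion)

print(f"{energy_joules:.2e} J  ~  {energy_joules / kiloton_tnt:.0f} kilotons of TNT")
# ~1.8e14 J, on the order of 43 kilotons -- a few Hiroshima-sized bombs from two grams of stuff.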

So if we started out with equal amounts of matter and antimatter, why didn't it all eventually go kablooie, leaving a universe filled with nothing but gamma rays?  Why was there any matter left over?

The answer is: we don't know.  Some cosmologists and astrophysicists think that there may have been a slight asymmetry in favor of matter, driven by random quantum fluctuations early on, so while most of the matter and antimatter were destroyed by collisions, there was a little bit of matter left, and that's what's around today.  (And "a little bit" is honestly not an exaggeration; the vast majority of the universe is completely empty.  An average cubic meter of space is very unlikely to have much more than an atom or two in it.)

One question this sometimes brings up is whether the stars and galaxies we see in the night sky are matter; if, perhaps, some entire galaxies are made of antimatter, and there really are equal amounts of the two.  After all, antimatter is predicted to act exactly like matter except that its fundamental particles have the opposite charges -- its protons are negative, its electrons positive, and so forth.  So a planet entirely formed of antimatter would look (from a safe distance) exactly like an ordinary planet.

And just to throw this out there: an antiplanet wouldn't contain copies of all of us with opposite personalities -- good guys turned evil and/or sporting beards -- as outlined in the highly scientific Lost in Space episode "The Anti-Matter Man":


Nor would there be a creepy bridge between the two universes, covered with fog and backed by eerie music:


Which is a shame, because I always kinda liked that episode.

Considerations of evil Major Don West with a beard notwithstanding, here are two arguments why most physicists believe that the stars we see, even the most distant, are made of ordinary matter.  The first is that there is no known process that would have sorted out the matter from the antimatter early in the universe's life, leaving isolated clumps of each to form their respective stars and galaxies.  Secondly, if there were antistars and antigalaxies, then there'd be an interface between them and the nearest clump of ordinary stars and galaxies, and at that interface matter and antimatter would be constantly meeting and mutually annihilating.  This would produce a hell of a gamma ray source -- and we haven't seen anything out there that looks like a matter/antimatter interface (although I will return to this topic in a moment with an interesting caveat).

A paper a while back found that the key to understanding why matter prevailed might lie in the mysterious "ghost particles" called neutrinos.  There are three kinds of neutrinos -- electron neutrinos, muon neutrinos and tau neutrinos -- and one curious property they have is that they oscillate, meaning they can convert from one type to another.  The rate at which they do this is predicted from current theories, and it's thought that antineutrinos do exactly the same thing at exactly the same rate.

The experiment described in the paper took place in Japan, and found that there is an unexpected asymmetry between neutrinos and antineutrinos.  Beams of muon neutrinos and muon antineutrinos were sent on a six-hundred-kilometer journey across Japan, and upon arriving at a detector, were analyzed to see how many had converted to one of the other two "flavors."  The surprising result was that the neutrinos had oscillated a lot more than predicted, and the antineutrinos a lot less -- something called a "CP (charge-parity) violation" that shows antimatter doesn't, in fact, behave exactly like matter.  This asymmetry could lie at the heart of why the balance tipped in favor of matter.

But now an analysis of data from the Fermi Gamma-ray Space Telescope has thrown another monkey wrench into the works.  The study was undertaken because of a recent puzzling detection by an instrument on the International Space Station of nuclei of antihelium, which (if current models are correct) should be so rare in the vicinity of ordinary matter that they'd be entirely undetectable.  But what if the arguments against antistars and antigalaxies I described earlier aren't true, and there are such odd things out there?  Antistars would be undergoing fusion just like the Sun does, and producing antihelium (and other heavier antielements), which then would be shed from the surface just like our Sun sheds helium.  And some of it might arrive here, only to fall into one of our detectors.

But what about the whole gamma-rays-at-the-interface thing?  Turns out, the study in question, the subject of a paper in the journal Physical Review D, found that there are some suspicious gamma-ray sources out there.

Fourteen of them, in fact.

These gamma-ray sources are producing photons with an energy that's hard to explain from known sources of gamma rays -- pulsars and black holes, for example.  In fact, the energy of these gamma rays is perfectly consistent with the source being ordinary matter coming into contact with an antistar.

Curiouser and curiouser.

It doesn't eliminate the problem of why the universe is biased toward matter; even if these are antistars, their frequency in the universe suggests that only one in every 400,000 stars is an antistar.  So we still have the imbalance to explain.

But it's a strange and fascinating finding.  Astrophysicists are currently re-analyzing the data from every angle they can think of to try to account for the odd gamma-ray sources in any way other than their being evidence of antistars, so it may be that the whole thing will fizzle.  But for now, it's a tantalizing discovery.  It brings to mind the famous line from J. B. S. Haldane -- "The universe is not only queerer than we suppose, but queerer than we can suppose."

****************************************


Wednesday, February 18, 2026

The D. C. of D. C.

A loyal reader of Skeptophilia commented that the last few posts have been pretty grim, and maybe I should write about something more uplifting, like kitties.

I am nothing if not obliging.

Today's post is not only about kitties, though.  It's about something that has struck me over and over in the fifteen years I've been writing here at Skeptophilia headquarters: how little it takes to get a weird belief going.

Which brings us to: the strange legend of the Demon Cat of Washington D. C.

[Image licensed under the Creative Commons X737257, Black cat looking down from a white wall, CC BY-SA 4.0]

There's been a persistent legend in Washington of a demonic (or ghostly, or both) cat that stalks its way around the White House and Capitol Building, and is prone to appearing when something big is going to happen -- especially prior to the death of a major public figure.

Me, I'm currently wondering where that freakin' cat is when you need him.

On the other hand, it apparently also appears prior to stuff like wars being declared and the economy tanking hard, and whatever problems we currently have, we don't need that added into the mix.

Be that as it may, the Demon Cat -- often just known as D. C. -- is an ordinary-looking black house cat, but if approached it "swells up to the size of a giant tiger" and then either pounces on the unfortunate witness, or else... explodes.

I can see how this could be alarming.  Tigers are scary enough without detonating suddenly.

Interestingly, this legend is not of recent vintage; it goes all the way back to the mid-1800s.  It was reported prior to Lincoln's assassination, and right before McKinley's assassination in 1901, a guard saw the Demon Cat and allegedly died of a heart attack.  It's been fired at more than once, to no apparent effect.

It's an odd urban legend, and the striking thing about it is its longevity -- 175 years and still going strong.  Steve Livengood, chief tour guide of the U. S. Capitol Historical Society, says it has a prosaic origin.  Back in the mid-nineteenth century, the Capitol Police had a bad habit of hiring unqualified people, often family members or friends of congresspeople who were unemployed for good reason.  Drunkenness on the job was rampant, and one night a policeman who had passed out on the floor of the Capitol woke to find a black cat staring at him.  He freaked, told his supervisor, and the supervisor sent him home to "recover."  This started a rash of reports from other policemen claiming they'd seen a giant demonic cat so they, too, would be given a day off.

But it's curious the legend has persisted for so long.  I'm sure part of it is just that it's funny -- passed along as a tall tale by people who don't really believe it.  But some reports seem entirely serious.  A 1935 sighting claimed the Demon Cat's eyes "glow with all the hue and ferocity of the headlights of a fire engine."  As Jordy Yager writes in The Hill:
The fiendish feline is said to be spotted right before a national catastrophe occurs (like the stock market plunging or a national figure being shot) and before presidential power shifts hands.  The story finds its origins in the days when rats used to run rampant in the basement tunnels of the Capitol and officials brought in cats to hunt them down.  The Demon Cat was one that never left.

It's a curious feature of human psychology that it's really easy to get a belief started, and damn near impossible to eradicate it once it's taken hold.  (Something Fox News uses with malice aforethought; they make whatever wild claims serve their purpose, knowing that even if they have to retract them, the retraction will never undo the damage done by the original claims.)  So stories of the Giant Exploding Kitties of Doom might sound ridiculous, but the fact that the tale is still circulating after 175 years is itself interesting.

On the other hand, maybe at the moment it's just wishful thinking.

"Here, kitty, kitty, kitty..."

****************************************


Tuesday, February 17, 2026

The meatlocker

In the episode of Star Trek called "The Return of the Archons," Captain Kirk and the team from the Enterprise visit a planet where some Federation representatives had gone missing, and find that the missing officers -- and, in fact, the planet's entire native population -- are seemingly bewitched.  They walk around in a trance, completely blissed out, and when Kirk and the away team appear and obviously act different, they're suspiciously asked, "Are you of the Body?"

"The Body" turns out to be the sum total of sentient life on the planet, which is under the control of a superpowerful computer named Landru.  Some time in the planet's past, the powers-that-be had thought it was a nifty plan to turn over the agency of all of the inhabitants to the administration of an intelligent machine.


Ultimately, McCoy and Sulu get absorbed, and to get them out of the predicament Kirk and Spock drive Landru crazy with illogic (a trope that seemed to get used every other week), and then vaporize the mainframe with their phasers.

I still remember watching that episode when I was a teenager, and finding it entertaining enough, but thinking, "How stupid were these aliens, to surrender themselves voluntarily to being controlled by a computer?  Who the fuck thought this was a good idea?  At least we humans would never be that catastrophically dumb."

Turns out that, as is so common with the idealism of youth, I was seriously overestimating how smart humans are.  According to an article published yesterday in Nature, humans not only are that catastrophically dumb, they're jostling by the hundreds of thousands to be first in line.

Not that we have a Landru-equivalent yet, quite, but what's come up is definitely in the same spirit.  Two software engineers named Alexander Liteplo and Patricia Tani have developed a platform called -- I shit you not -- RentAHuman.ai, in which artificial intelligence "agents" take applications from real, flesh-and-blood humans (nicknamed, I once again shit you not, "meatspace workers") for jobs that the AI can't handle yet, like physically going to a place, taking photographs or collecting other sorts of information, and reporting back.

As of this writing, 450,000 people have applied to the site for work.

I swear, I wouldn't have believed this if it hadn't been in Nature.  I thought we had reached the pinnacle of careless foolishness two weeks ago, with the creation of a social media platform that is AI-only, and that has already gone seriously off the rails.  Now, we're not only letting the AI have its own special, human-free corner of the internet, we're actively signing up to be its servants?

Chris Benner, who studies technological change and economic restructuring at the University of California, Santa Cruz, says it's not as bad as it sounds, because the AI is just acting as an intermediary, using the instructions of the humans who created it to assign jobs.  Also, the paychecks are still coming from RentAHuman.ai's owners.  But it's significant that one of the site's creators, Liteplo, refused to be interviewed by Nature, and when someone tweeted at him that what he'd created was dystopian, he responded only with, "LMAO yep."

So is that what we've become?  Just more choices in the meatlocker?


What bothers me about all this is not that I think we're on the verge of a planet-wide computer-controlled society, but that we're walking wide-eyed toward a future where human creativity is buried underneath a mountain of justifications about how "we were just trying to make things easier."  Each step seems small, innocuous, painless.  Each time there's a rationalization that what's being relinquished is really not that big a deal.

As I recall, the aliens in "The Return of the Archons" had a rationalization for why it was better to give up and give in to Landru, too.  No violence, no struggle, no pain, nothing but peace of mind.  All you have to do is to cede your agency in exchange.

I'm not a big believer in what's been nicknamed the slippery-slope fallacy -- that small steps always lead to bigger ones.  But here, we seem to be less on a slippery slope than rushing headlong toward a precipice.  I'll end with a quote from C. S. Lewis's novel That Hideous Strength that I've always found chilling -- in which a character is so lulled into indolence that he won't even make an effort to change course when he sees his own personal ruin is imminent:
The last scene of Doctor Faustus where the man raves and implores on the edge of hell is, perhaps, stage fire.  The last moments before damnation are not often so dramatic.  Often the man knows with perfect clarity that some still possible action of his own will could yet save him.  But he cannot make this knowledge real to himself.  Some tiny habitual sensuality, some resentment too trivial to waste on a noisy fly, the indulgence of some fatal lethargy, seems to him at that moment more important than the choice between joy and total destruction.  With eyes wide open, seeing that the endless terror is just about to begin and yet (for the moment) unable to feel terrified, he watches passively, not moving a finger for his own rescue, while the last links with joy and reason are severed, and drowsily sees the trap close upon his soul.  So full of sleep are they at the time when they leave the right way.
****************************************


Monday, February 16, 2026

The kids are all right

Kids these days, ya know what I mean?

Wiser heads than mine have commented on the laziness, disrespectfulness, and general dissipation of youth.  Here's a sampler:
  • Parents themselves were often the cause of many difficulties.  They frequently failed in their obvious duty to teach self-control and discipline to their own children.
  • We defy anyone who goes about with his eyes open to deny that there is, as never before, an attitude on the part of young folk which is best described as grossly thoughtless, rude, and utterly selfish.
  • The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise.  Children are now tyrants, not the servants of their households.  They no longer rise when elders enter the room.  They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers.
  • Never has youth been exposed to such dangers of both perversion and arrest as in our own land and day.  Increasing urban life with its temptations, prematurities, sedentary occupations, and passive stimuli just when an active life is most needed, early emancipation and a lessening sense for both duty and discipline, the haste to know and do all befitting man's estate before its time, the mad rush for sudden wealth and the reckless fashions set by its gilded youth -- all these lack some of the regulatives they still have in older lands with more conservative conditions.
  • Youth were never more saucy -- never more savagely saucy -- as now... the ancient are scorned, the honourable are condemned, and the magistrate is not dreaded.
  • Our sires' age was worse than our grandsires'.  We, their sons, are more worthless than they; so in our turn we shall give the world a progeny yet more corrupt.
  • [Young people] are high-minded because they have not yet been humbled by life, nor have they experienced the force of circumstances…  They think they know everything, and are always quite sure about it.
Of course, I haven't told you where these quotes come from. In order:
  • from an editorial in the Leeds Mercury, 1938
  • from an editorial in the Hull Daily Mail, 1925
  • Kenneth John Freeman, Cambridge University, 1907
  • Granville Stanley Hall, The Psychology of Adolescence, 1904
  • Thomas Barnes, The Wise Man's Forecast Against the Evil Time, 1624
  • Horace, Odes, Book III, 20 B.C.E.
  • Aristotle, 4th century B.C.E.
So yeah.  Adults saying "kids these days" has a long, inglorious history.  [Nota bene: the third quote, from Kenneth Freeman, has often been misattributed to Socrates, but it seems pretty unequivocal that Freeman was the originator.]

This comes up because of a study that was published in Science Advances, by John Protzko and Jonathan Schooler, called "Kids These Days: Why the Youth of Today Seem Lacking."  And its unfortunate conclusion -- unfortunate for us adults, that is -- is that the sense of today's young people being irresponsible, disrespectful, and lazy is mostly because we don't remember how irresponsible, disrespectful, and lazy we were when we were teenagers.  And before you say, "Wait a moment, I was a respectful and hard-working teenager" -- okay, maybe.  But so are many of today's teenagers.  If you want me to buy that we're in a downward spiral, you'll have to convince me that more teenagers back then were hard-working and responsible, and that I simply don't believe.

And neither do Protzko and Schooler.

So the whole thing hinges more on idealization of the past, and our own poor memories, than on anything real.  I also suspect that a good many of the older adults who roll their eyes about "kids these days" don't have any actual substantive contact with young people, and are getting their impressions of teenagers from the media -- which certainly doesn't have a vested interest in portraying anyone as ordinary, honest, and law-abiding.

My own experience of teaching corroborates this.  Sure, I had a handful of students who were unmotivated, disruptive, or downright obnoxious; but in general, I found that my classes responded to my own enthusiasm about my subject with interest and engagement.  Whenever I raised the bar, they met and often exceeded it.  I still recall one of the best classes I ever taught -- one of my Critical Thinking classes, perhaps five years prior to my retirement.  It was a class of about 25, so large by my school's standards, but to say they were eager learners is a dramatic understatement.  I still recall when we were doing a unit on ethics, and I'd given them a series of readings (amongst them Jean-Paul Sartre's "The Wall" and Richard Feynman's "Who Stole the Door?") centered around the question of intent.  Are you lying if you thought what you said was a lie but accidentally told the truth -- or if you deliberately told the truth so unconvincingly that it seemed like a lie, and no one believed you?

Well, I gave them a week to do the reading, and we were going to have a class discussion of the topic, but I was walking to lunch one day (maybe three days after I'd given the assignment) and I got nabbed in the hall by five of my students who said they'd all done the readings and had been arguing over them, and wanted me to sit in the cafeteria with them and discuss what they'd read.  I reassured them we'd be hashing the whole thing out in class in a day or two.

"Oh, no," one kid said, completely serious.  "We can't wait to settle this.  We want to discuss it now."

This is the same class in which we were talking about your basis for knowledge.  If you believe something to be true, how can you be certain?  There are things we strongly believe despite having never experienced them -- based on having heard it from a trusted authority, or seeing indirect evidence, or simply that whatever it is seems consistent with what you know from other sources.  So I said, as an example, "With what you have with you right now, I want you to prove to me that pandas exist."

Several kids reached for their smartphones -- but one young woman reached into her backpack, and completely straight-faced, brought out a stuffed panda and set it on her desk.

I cracked up, and said, "Fine, you win."  At the end of the semester she gave me the panda as a keepsake -- and I still have him.


Those are just two of many experiences I had as a teacher of students and classes that were engaged, curious, hard-working, creative, and challenging (in the best possible ways).  Don't try to convince me there's anything wrong with "kids these days."

So I'm an optimist about today's youth.  I saw way too many positive things in my years as a high school teacher to feel like this is going to be the generation that trashes everything through irresponsibility and disrespect for tradition.  And if after reading this, you're still in any doubt about that, I want you to think back on your own teenage years, and ask yourself honestly if you were as squeaky-clean as you'd like people to believe.

Or were you -- like the youth in Aristotle's day -- guilty of thinking you knew everything, and being quite sure about it?

****************************************


Saturday, February 14, 2026

With a whimper

The death of massive stars, ten or more times the mass of the Sun, is thought to have a predictable -- if violent -- trajectory.

During most of their lifetimes, stars are in a relative balance between two forces.  Fusion of hydrogen into helium in the core releases heat energy, which increases the pressure in the core and generates an outward-pointing force.  At the same time, the inexorable pull of gravity generates an inward-pointing force.  For the majority of the star's life, the two are in equilibrium; if something makes the core cool a little bit, gravity wins for a while and the star shrinks, increasing the pressure and thus the rate of fusion.  This heats the core up, increasing the outward force and stopping the collapse.
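If you want to see that self-correcting balance in action, here's a deliberately cartoonish one-variable sketch in Python -- a made-up toy model of mine, not real stellar physics -- in which the fusion rate climbs steeply as the core contracts, the energy radiated away stays constant, and the radius settles wherever the two balance, no matter which side you start it on:

def fusion_power(radius):
    """Toy rule: compressing the core (smaller radius) sharply boosts the fusion rate."""
    return 1.0 / radius**4

LOSSES = 1.0      # energy radiated away, held constant in this cartoon
DT = 0.01         # step size for the relaxation

def settle(radius, steps=2000):
    """Push the radius up when fusion wins, down when gravity (the losses) wins."""
    for _ in range(steps):
        radius += DT * (fusion_power(radius) - LOSSES)
    return radius

# Start the star too small and too large; both end up at the same equilibrium (radius = 1).
print(settle(0.7), settle(1.5))   # both approach 1.0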

Nice little example of negative feedback and homeostasis, that.  Stars in this long, relatively quiescent phase are on the "Main Sequence" of the famous Hertzsprung-Russell Diagram:

[Image licensed under the Creative Commons Richard Powell, HRDiagram, CC BY-SA 2.5]

Once the hydrogen fuel starts to deplete, though, the situation shifts.  Gravity wins once again, but this time there's not enough hydrogen-to-helium fusion to counteract the collapse.  The core shrinks, raising the temperature past a hundred million kelvins -- hot enough to fuse helium into carbon.  This release of energy causes the outer atmosphere to balloon outward, and the star becomes a red supergiant -- the surface is cool (and thus reddish), but the interior is far hotter than the core of our Sun.

Two famous stars -- Betelgeuse (in Orion) and Antares (in Scorpius) -- are in this final stage of their lives.

Here's where things get interesting, because the helium fuel doesn't last forever, either.  The carbon "ash" left behind needs an even higher temperature to fuse into oxygen, neon, and heavier elements, which happens when the previous process repeats itself -- further core collapse, followed by further heating.  But this can't go on indefinitely.  When the fusion reaction starts to generate iron, the game is up.  Iron represents the turnaround point on the curve of binding energy, where fusion stops being an exothermic (energy-releasing) reaction and becomes endothermic (energy-consuming).  At that point, the core has nothing left with which to counter the pull of gravity, and the entire star collapses.  The outer layers rebound off the collapsing core, creating a shockwave called a core-collapse (type II) supernova, releasing in a few seconds as much energy as the star did during its entire life on the main sequence.  What's left afterward is a super-dense remnant -- either a neutron star or a black hole, depending on its mass.

Well, that's what we thought happened.  But now a paper in Science describing the collapse of a supergiant star in the Andromeda Galaxy has suggested there may be a different fate for at least some massive stars -- that they may go out not with a bang, but with a whimper.

The occurrence that spurred this discovery was so underwhelming that it took astronomers a while to realize it had happened.  A star began to glow intensely in the infrared region of the spectrum, and then suddenly -- it didn't anymore.  It seemed to vanish, leaving behind a faintly glowing shell of dust.  Kishalay De, lead author of the paper, says what happened is that we just witnessed a black hole forming without a supernova preceding it.  The core ran out of fuel, the outer atmosphere collapsed, and the star itself just kind of... winked out.

"This has probably been the most surprising discovery of my life," De said.  "The evidence of the disappearance of the star was lying in public archival data and nobody noticed for years until we picked it out...  The dramatic and sustained fading of this star is very unusual, and suggests a supernova failed to occur, leading to the collapse of the star’s core directly into a black hole.  Stars with this mass have long been assumed to always explode as supernovae.  The fact that it didn’t suggests that stars with the same mass may or may not successfully explode, possibly due to how gravity, gas pressure, and powerful shock waves interact in chaotic ways with each other inside the dying star."

It's honestly unsurprising that we don't have the mechanisms of supernovae and black hole formation figured out completely.  They're not frequent occurrences.  The most recent easily visible supernova in the Milky Way was all the way back in 1604 -- "Kepler's Supernova," as it's often called.  Since then we've seen them occur in other galaxies, but from here those are generally far too faint for the naked eye, and often difficult to study even with powerful telescopes.

But I will say that the whole thing has me worried.  Betelgeuse is predicted to run out of fuel soon, and all my life I've been waiting for it to explode violently (yes, yes, I know that "soon" to an astrophysicist means "some time in the next hundred thousand years").  If it just decides to go pfft and vanish one night, I'm gonna be pissed.

Oh, well, as my grandma used to tell me, wishin' don't make it so.  But still.  Life down here on Earth has been pretty damn distressing lately, can't we have just one nice thing?

****************************************


Friday, February 13, 2026

The hazard of "just-so stories"

One of the problems with scientific research is there's a sneaky bias that can creep in -- manifesting as explaining a phenomenon a certain way because the explanation lines up with a narrative that seems so intuitive it's not even questioned.

Back in 1978, evolutionary biologist Stephen Jay Gould nicknamed these "just-so stories," after the 1902 book by Rudyard Kipling containing fairy tales about how animals gained particular traits (the most famous of which is "How the Leopard Got His Spots").  Gould was mainly pointing his finger at the then-new field of sociobiology -- the forerunner of today's evolutionary psychology, with its straightforward evolutionary explanations for complex human behaviors -- but his stinging criticism can be levied against a great many other fields, too.

The difficulty is, this bias slips its way in because these explanations seem so damned reasonable.  It's not quite confirmation bias -- accepting thin corroborative evidence for ideas we already hold, while demanding ridiculously high standards for counter-evidence that might falsify them.  It's more like confirmation bias running backwards: we didn't already believe the explanation, but when we hear it, we respond with a "wow, I never knew that!" sort of delight and open-armed acceptance.

One good example, that I had to contend with every single year while teaching high school biology, was the whole "right-brained versus left-brained personality" thing, which was roundly debunked a long time ago.  It's certainly true that our brains are lateralized, and most of us have a physically dominant hemisphere; also, it's undeniable that some of us are more holistic and creative and others more reductionistic and analytical; and it's also true that the cognitive parts of the right and left brain seem to process information differently.  Putting these three together seems natural.  The truth is, however, that any connection between brain dominance and personality type is tenuous in the extreme.

But it seems like it should be true, doesn't it?  That's the hallmark of a "just-so story."

The reason this topic comes up is a recent paper in the journal Global Ecology and Conservation that challenges one of the most appealing of the "just-so stories" -- that the reintroduction of wolves to Yellowstone National Park caused a "trophic cascade," positively affecting the landscape and boosting species richness and species diversity in the entire region.

The original claim came from research by William Ripple et al., which connected the extirpation of wolves with a correspondingly higher survival rate of elk and deer.  This, they said, resulted in overbrowsing of willow and alder, to the point that as older plants died they were not being replaced by saplings.  This, in turn, led to higher erosion into streams and silting of the gravel bottoms trout and salmon need to spawn, and thus a drop in fish populations.  Last in the chain, this meant less food for bears, a reduction in survival rates for bear cubs, and a decrease in the numbers of grizzly and black bears.

The reintroduction of wolves -- well, supposedly it undid all that.  Within a few years of the establishment of a stable wolf population, the willows and alders rebounded because of higher predation on elk and deer -- leading to a resurgence of trout and salmon and an increase in the bear population.

This all sounds pretty cool, and doesn't it line up with what we'd like to be true?  The eco-minded amongst us just love wolves.  There's a reason they're featured in every wildlife calendar ever printed.

[Image licensed under the Creative Commons User:Mas3cf, Eurasian wolf 2, CC BY-SA 4.0]

It's why I almost hate to tell you about the new paper, by Daniel MacNulty, Michael Procko, and T. J. Clark-Wolf of Utah State University, and David Cooper of Colorado State University.  Here's the upshot, in their own words:

Ripple et al.... argued that large carnivore recovery in Yellowstone National Park triggered one of the world’s strongest trophic cascades, citing a 1500% increase in willow crown volume derived from plant height data...  [W]e show that their conclusion is invalid due to fundamental methodological flaws.  These include use of a tautological volume model, violations of key modeling assumptions, comparisons across unmatched plots, and the misapplication of equilibrium-based metrics in a non-equilibrium system.  Additionally, Ripple et al. rely on selectively framed photographic evidence and omit critical drivers such as human hunting in their causal attribution.  These shortcomings explain the apparent conflict with Hobbs et al., who found evidence for a relatively weak trophic cascade based on the same height data and a long-term factorial field experiment.  Our critique underscores the importance of analytical rigor and ecological context for understanding trophic cascade strength in complex ecosystems like Yellowstone.

MacNulty et al. demonstrate that if you re-analyze the same data and rigorously address these flaws, the trophic cascade effect largely vanishes.  "Once these problems are accounted for, there is no evidence that predator recovery caused a large or system-wide increase in willow growth," said study co-author David Cooper.  "The data instead support a more modest and spatially variable response influenced by hydrology, browsing, and local site conditions."

It's kind of a shame, isn't it?  Definitely one of those "it'd be nice if it were true" things.  It'll be interesting to see how Ripple et al. respond.  I'm reminded of a video on astronomer David Kipping's wonderful YouTube channel The Cool Worlds Lab about his colleague Matthew Bailes -- who in 1991 announced what would have been the first hard evidence of an exoplanet, and then a few months later had to retract the announcement because he and his co-authors had realized there'd been an unrecognized bias in the data.  Such admissions are, naturally, deeply embarrassing to make, but to Bailes's credit, he and his co-authors Andrew Lyne and Setnam Shemar owned up and retracted the paper, which was certainly the honest thing to do.

Here, though -- well, perhaps Ripple et al. will be able to rebut this criticism, although having read both papers, it's hard for me to see how.  We'll have to wait and see.

Note, too, that MacNulty et al. are not saying that there's anything wrong with reintroducing wolves to Yellowstone -- just that the response of a complex system to tweaking a variable is going to be, well, complex.  And we shouldn't expect anything different, however much we like neat tales of How the Leopard Got His Spots.

So that's today's kind of disappointing news from the world of science -- a reminder that we have to be careful about ideas with immediate intuitive appeal.  Just keep in mind physicist Richard Feynman's wise words: "The first principle is that you must not fool yourself -- and you are the easiest person to fool."

****************************************


Thursday, February 12, 2026

Echoes of the ancestor

One of the most persuasive pieces of evidence of the common ancestry of all life on Earth is genetic overlap -- and the fact that the percent overlap gets higher when you compare more recently-diverged species.

What is downright astonishing, though, is that there is genetic overlap between all life on Earth.  Yeah, okay, it's easy enough to imagine there being genetic similarity between humans and gorillas, or dogs and foxes, or peaches and plums; but what about more distant relationships?  Are there shared genes between humans... and bacteria?

The answer, amazingly, is yes, and the analysis of these universal paralogs was the subject of a fascinating paper in the journal Cell Genomics last week.  Pick any two organisms on Earth -- choose them to be as distantly-related to each other as you can, if you like -- and they will still share five groups of genes, used for making the following classes of enzymes:

  • aminotransferases
  • imidazole-4-carboxamide isomerase
  • carbamoyl phosphate synthetases
  • aminoacyl-tRNA synthetases
  • initiation factor IF2

The first three are connected with amino acid metabolism; the last two, with the process of translation -- which decodes the message in mRNA and uses it to synthesize proteins.

The fact that all life forms on Earth have these five gene groups suggests something wild: that we're looking at genes that were present in LUCA -- the Last Universal Common Ancestor, our single-celled, bacteria-like forebear that lived in the primordial seas an estimated four billion years ago.  Since then, two things happened: the rest of LUCA's genome diverged wildly, under the effects of mutation and selection, so that now we have kitties and kangaroos and kidney beans; and those five gene groups were under such strong stabilizing selection that they haven't significantly changed, in any of the branches of the tree of life, in millions or billions of generations.

The authors write:

Universal paralog families are an important tool for understanding early evolution from a phylogenetic perspective, offering a unique and valuable form of evidence about molecular evolution prior to the LUCA.  The phylogenetic study of ancient life is constrained by several fundamental limitations.  Both gene loss across multiple lineages and low levels of conservation in some gene families can obscure the ancient origin of those gene families.  Furthermore, in the absence of an extensive diagnostic fossil record, the dependence of molecular phylogenetics on conserved gene sequences means that periods of evolution that predated the emergence of the genetic system cannot be studied.  Even so, emerging technologies across a number of areas of computational biology and synthetic biology will expand our ability to reconstruct pre-LUCA evolution using these protein families.  As our understanding of the LUCA solidifies, universal paralog protein families will provide an indispensable tool for pushing our understanding of early evolutionary history even further back in time, thereby describing the foundational processes that shaped life as we know it today.
It's kind of mind-boggling that after all that time, there's any commonality left, much less as much as there's turned out to be.  "The history of these universal paralogs is the only information we will ever have about these earliest cellular lineages, and so we need to carefully extract as much knowledge as we can from them," said Greg Fournier of MIT, who co-authored the paper, in an interview with Science Daily.

So all life on Earth really is connected, and the biological principle of "unity in diversity" is literally true.  Good thing for us; the fact that we have shared metabolic pathways -- and especially, shared genetic transcription and translation mechanisms -- is what allows us to create transgenic organisms, which express a gene from a different species.  For example, this technique is the source of most of the insulin used by the world's diabetics -- bacteria that have been engineered to contain a human insulin gene.  Bacteria read DNA exactly the same way we do, so they transcribe and translate the human insulin gene just as our own cells would, producing insulin molecules identical to our own.
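Here's roughly what "read DNA exactly the same way" amounts to, as a toy sketch in Python using a hand-picked fragment of the standard codon table and an arbitrary made-up message (not a real insulin sequence): translation is just a codon-by-codon lookup, and -- give or take a few well-known exceptions like mitochondria -- the lookup table is the same whether the ribosome doing the reading belongs to you or to E. coli.

# A tiny slice of the (nearly universal) genetic code -- the full table has 64 entries.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GUU": "Val", "AAC": "Asn",
    "CAA": "Gln", "GGC": "Gly", "UCU": "Ser", "UAA": "STOP",
}

def translate(mrna):
    """Read the mRNA three bases at a time and look each codon up -- that's translation."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

# A made-up message; a human ribosome and a bacterial one decode it identically.
print(translate("AUGUUUGUUAACCAAGGCUCUUAA"))   # Met-Phe-Val-Asn-Gln-Gly-Ser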

This is also, conversely, why the idea of an alien/human hybrid would never work.  Even assuming that some alien species we met was humanoid, and had all the right protrusions and indentations to allow mating to work, there is just about a zero likelihood that the genetics of two species that didn't have a common ancestor would line up well enough to allow hybridization.  Consider that most of the time, even relatively closely-related terrestrial species can't hybridize and produce fertile offspring; there's no way humans could do so with any presumed alien species.

Star Trek's claims to the contrary notwithstanding.


So that's our mind-blowing science news of the day.  The discovery of five gene families that were present in our ancestors four billion years ago, and which are still present today in every life form on Earth.  Some people apparently think it's demeaning to consider that we're related to "lower" species; me, I think it's amazingly cool to consider that everything is connected, that I'm just one part of a great continuum that has been around since not long after the early Earth cooled enough to have liquid water.  All the more reason to take care of the biosphere -- considering it's made up of our cousins.

****************************************


Wednesday, February 11, 2026

Watching the clock

If I had to pick the scientific law that is the most misunderstood by the general public, it would have to be the Second Law of Thermodynamics.

The First Law of Thermodynamics says that the total quantity of energy and mass in a closed system never changes; it's sometimes stated as, "Mass and energy cannot be destroyed, only transformed."  The Second Law states that in a closed system, the total disorder (entropy) always increases.  As my long-ago thermodynamics professor put it, "The First Law says you can't win; the Second Law says you can't break even."

Hell of a way to run a casino, that.

So far, there doesn't seem to be anything particularly non-intuitive about this.  Even from our day-to-day experience, we can surmise that the amount of stuff seems to remain pretty constant, and that if you leave something without maintenance, it tends to break down sooner or later.  But the interesting (and less obvious) side starts to appear when you ask the question, "If the Second Law says that systems tend toward disorder, how can a system become more orderly?  I can fling a deck of cards and make them more disordered, but if I want I can pick them up and re-order them.  Doesn't that break the Second Law?"

It doesn't, of course, but the reason why is quite subtle, and has some pretty devastating implications.  The solution to the question comes from asking how you accomplish re-ordering a deck of cards.  Well, you use your sensory organs and brain to figure out the correct order, and the muscles in your arms and hands (and legs, depending upon how far you flung them in the first place) to put them back in the correct order.  How did you do all that?  By using energy from your food to power the organs in your body.  And to get the energy out of those food molecules -- especially glucose, our primary fuel -- you broke them to bits and jettisoned the pieces after you were done with them.  (When you break down glucose to extract the energy, a process called cellular respiration, the bits left are carbon dioxide and water.  So the carbon dioxide you exhale is actually broken-down sugar.)

Here's the kicker.  If you were to measure the entropy decrease in the deck of cards, it would be less -- way less -- than the entropy increase in the molecules you chopped up to get the energy to put the cards back in order.  Every time you increase the orderliness of a system, it always (1) requires an input of energy, and (2) increases the disorderliness somewhere else.  We are, in fact, little chaos machines, leaving behind a trail of entropy everywhere we go, and the more we try to fix things, the worse the situation gets.
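If you'd like actual numbers behind that claim, here's a rough Python sketch.  The figures are round textbook values -- about 2,800 kJ of heat per mole of glucose oxidized, a body temperature of 310 K -- and I'm making the simplifying assumption that all the released heat ends up as entropy dumped into the surroundings.  Even so, the entropy "stored" in a perfectly ordered deck is more than twenty orders of magnitude smaller than the entropy generated by burning a single gram of sugar to pay for the sorting:

import math

K_B = 1.380649e-23            # Boltzmann constant, J/K

# Entropy tied up in the ordering of a 52-card deck: S = k_B * ln(52!)
deck_entropy = K_B * math.lgamma(53)          # lgamma(53) == ln(52!)

# Entropy dumped into the surroundings by burning 1 g of glucose at body temperature.
heat_per_mole = 2.8e6                         # J/mol, ~2800 kJ/mol for glucose oxidation
grams, molar_mass, body_temp = 1.0, 180.0, 310.0
respiration_entropy = (grams / molar_mass) * heat_per_mole / body_temp

print(f"deck:        {deck_entropy:.1e} J/K")          # ~2.2e-21 J/K
print(f"respiration: {respiration_entropy:.1e} J/K")   # ~5.0e+01 J/K
print(f"ratio: ~1e{round(math.log10(respiration_entropy / deck_entropy))}")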

I've heard people arguing that the Second Law disproves evolution because the evolutionary model claims we're in a system that has become more complex over time, which according to the Second Law is impossible.  It's not; and in fact, that statement betrays a fundamental lack of understanding of what the Second Law means.  The only reason any increase in order occurs -- be it evolution, or embryonic development, or stacking a deck of cards -- is that there's a constant input of energy, and the decrease in entropy is offset by a bigger increase somewhere else.  The Earth's ecosystems have become more complex over the roughly four-billion-year history of life because there's been a continuous influx of energy from the Sun.  If that influx were to stop, things would break down.

Fast.

The reason all this comes up is because of a paper in Physical Review X that gives another example of trying to make things better, and making them worse in the process.  This one has to do with the accuracy of clocks -- a huge deal to scientists who are studying the rate of reactions, where the time needs to be measured to phenomenal precision, on the scale of nanoseconds or better.  The problem is, we learn from "Measuring the Thermodynamic Cost of Timekeeping," the more accurate the clock is, the higher the entropy produced by its workings.  So, in effect, you can only measure time in a system to the extent you're willing to screw the system up.

[Image licensed under the Creative Commons Robbert van der Steeg, Eternal clock, CC BY-SA 2.0]

The authors write:
All clocks, in some form or another, use the evolution of nature towards higher entropy states to quantify the passage of time.  Due to the statistical nature of the second law and corresponding entropy flows, fluctuations fundamentally limit the performance of any clock.  This suggests a deep relation between the increase in entropy and the quality of clock ticks...  We show theoretically that the maximum possible accuracy for this classical clock is proportional to the entropy created per tick, similar to the known limit for a weakly coupled quantum clock but with a different proportionality constant.  We measure both the accuracy and the entropy.  Once non-thermal noise is accounted for, we find that there is a linear relation between accuracy and entropy and that the clock operates within an order of magnitude of the theoretical bound.
Study co-author Natalia Ares, of the University of Oxford, summarized their findings succinctly in an article in Science News: "If you want a better clock," she said, "you have to pay for it."
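To make the tradeoff concrete, here's an illustrative sketch in Python that uses only the proportionality stated in the abstract -- best-case accuracy scales with the entropy produced per tick -- with an order-one proportionality constant that is purely my placeholder, not a number from the paper:

# Illustrative only: the abstract says the maximum possible accuracy N (roughly, the
# number of ticks before the clock drifts off by about one tick) is proportional to
# the entropy created per tick.  The constant below is a placeholder, NOT from the paper.
K_B = 1.380649e-23     # Boltzmann constant, J/K
C = 0.5                # assumed order-one constant -- not taken from the paper
ROOM_TEMP = 300.0      # kelvins

def entropy_cost_per_tick(target_accuracy):
    """Entropy (J/K) each tick must generate to reach a given accuracy, under N = C * S / k_B."""
    return target_accuracy * K_B / C

for accuracy in (1e3, 1e6, 1e9):
    s = entropy_cost_per_tick(accuracy)
    print(f"accuracy {accuracy:.0e} ticks -> {s:.1e} J/K per tick "
          f"(~{s * ROOM_TEMP:.1e} J of heat at room temperature)")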

So, a little like the Heisenberg Uncertainty Principle: the more you try to push things in a positive direction, the more the universe pushes back in the negative direction.

Apparently, even if all you want to know is what time it is, you still can't break even.

So that's our somewhat depressing science for the day.  Entropy always wins, no matter what you do.  Maybe I can use this as an excuse for not doing housework.  Hey, if I make things more orderly here, all it does is mess things up elsewhere, so what's the point?

Nah, never mind.  My wife'll never buy it.

****************************************


Tuesday, February 10, 2026

Falling rock zone

Some of you may have heard of the Sylacauga meteorite -- a 5.5 kilogram, grapefruit-sized piece of rock that gained more notoriety than most because it crashed through a woman's roof on the afternoon of November 30, 1954, and hit her on the hip as she slept on the sofa.

The victim, Ann Hodges of Sylacauga, Alabama, was bruised but otherwise okay.

Here's Hodges with her rock, and an expression that clearly communicates, "A woman can't even take a damn nap around here without this kind of shit happening." 

Hodges isn't the only one who's been way too close to falling space rocks.  In August of 1992 a boy in Mbale, Uganda was hit by a small meteorite -- fortunately, it had been slowed by passing through the tree canopy, and he was startled but unharmed.  Only two months later, a much larger (twelve kilogram) meteorite landed in Peekskill, New York, and clobbered a parked Chevy Malibu:


The most deadly meteorite fall in historical times, though, is a likely airburst and subsequent shower of rocks that occurred near Qingyang, in central China, in the spring of 1490.  I say "likely" because there haven't been any meteorites from the incident that have survived to analyze, but a meteoritic airburst -- a "bolide" -- is the explanation that seems to fit the facts best.

Stones fell like rain in the Qingyang district.  The larger ones were four to five catties [a catty is a traditional Chinese unit of mass, equal to about a half a kilogram], and the smaller ones were two to three catties.  Numerous stones rained in Qingyang.  Their sizes were all different.  The larger ones were like goose's eggs and the smaller ones were like water-chestnuts.  More than ten thousand people were struck dead.  All of the people in the city fled to other places.
The magnitude of the event brings up comparisons to the colossal Tunguska airburst of 1908, when a meteorite an estimated fifty meters in diameter exploded above a (fortunately) thinly-populated region of Siberia, creating a shock wave that blew down trees radially outward for miles around, and registered on seismographs in London.

Interestingly, the Qingyang airburst wasn't the only strange astronomical event in 1490; Chinese, Korean, and Japanese astronomers also recorded the appearance of a new comet in December of that year.  From their detailed records of its position, modern astronomers have calculated that its orbit is parabolic -- in other words, it won't be back, and is currently on its way out of the Solar System.  However, it left a debris trail along the path of its one pass near us which is thought to be the origin of the bright Quadrantid meteor shower, which peaks in early January.

It's likely, however, that the Qingyang airburst and the December comet were unrelated events.

Much has been made of the likelihood of Earth being struck by an asteroid, especially something like the Chicxulub Impactor, which 66 million years ago ended the hegemony of the dinosaurs.  Thing is, most of the bigger items in the Solar System's rock collection have been identified, tracked, and pose no imminent threat.  (There is, however, a four percent chance that a seventy-meter-wide asteroid will hit the Moon in 2032, triggering a shower of debris, some of which could land on Earth.)

But there are lots of smaller rocks out there that we'd never see coming.  The 2013 Chelyabinsk airburst was estimated to be from an eighteen-meter-wide meteor, and created a shock wave that blew in windows, and a fireball that was visible a hundred kilometers away.  Our observational ability has improved dramatically, but eighteen meters is still below the threshold of what we could detect before it's too late.

The Double Asteroid Redirection Test (DART) Mission of 2022 showed that if we had enough time, we could theoretically run a spacecraft into an asteroid and change its orbit enough to deflect it, but for smaller meteors, we'd never spot them soon enough.

The good part of all this is that your chance of being hurt or killed by a meteorite is still way less than a lot of things we take for granted and do just about every day, like getting into a car.  That last bit, though, is why people tend to over-hype the risk; we do that with stuff that's weird, things that would make the headlines of your local newspaper.  (I remember seeing a talk about risk that showed a photograph of an erupting volcano, a terrorist bombing, an airplane crash, and a home in-ground swimming pool, and the question was, "Which of these is not like the others?"  The answer, of course, was the swimming pool -- because statistically, it's much more likely to kill someone than any of the others.)

So it's nothing to lose sleep over.  Unless you're Ann Hodges of Sylacauga, Alabama, who was just trying to take a damn nap for fuck's sake when this stupid rock came crashing through the roof and hit her, if you can believe it.

****************************************