Saturday, May 2, 2026

Cracker crumbs

Three years ago, I wrote here at Skeptophilia about the scary Cascadia Subduction Zone, which is capable of enormous earthquakes and tsunamis -- and which, unfortunately, lies right off the coast of British Columbia, Washington, and Oregon.  A subduction zone is a region along which two plates are coming together, forcing one underneath the other.  Because rocks experience a high degree of friction, the two plates often get stuck, sometimes for centuries, and then can give suddenly.  This lurch is what causes big earthquakes.

The motive forces here are convection and drag.  Rising plumes of magma underneath ridges diverge, and the friction between the magma plume and the underside of the plates forces them apart.  Where the leading edge of the plate strikes another, something's got to give.  In this case, the oceanic Juan de Fuca Plate, made of (relatively) thin, brittle basaltic rock, hits the old, thicker and colder North American Plate.  The Juan de Fuca Plate jams up and eventually plunges underneath.  The downward drag produces a trench, and inland from the trench you often find volcanoes, created as the subducted plate melts and the molten rock pushes its way to the surface.  (This is how the Cascade Volcanoes, most famously Mount Rainier, Mount Shasta, Mount Hood, and Mount Saint Helens, formed.)

The red dots are undersea earthquakes; the green ones, on-land earthquakes.  [Image is in the Public Domain courtesy of the United States Geological Survey]

What hasn't been clear until now is how exactly subduction happens.  We know that the process usually isn't smooth (as I described, it often goes by fits and starts rather than releasing the compressional force gradually).  But what happens to the plate itself as it descends and is destroyed in the upper mantle?

Thanks to a new study out of Louisiana State University, we now have our first good picture of how this process occurs.

It turns out that the destruction of the last piece of a plate, such as Juan de Fuca -- one of the few remaining fragments of the Farallon Plate, which once underlay most of the northeastern Pacific Ocean -- is anything but orderly.  The (relatively) small slab of solid rock beneath the ocean off the coast of the Pacific Northwest is being bent as its eastern edge is pulled downward, creating multiple fractures and dozens of "microplates."  "Getting a subduction zone started is like trying to push a train uphill -- it takes a huge effort," said geologist Brandon Shuck, lead author of the study, which appeared in Science Advances.  "But once it's moving, it's like the train is racing downhill, impossible to stop.  Ending it requires something dramatic -- basically, a train wreck...  This is the first time we have a clear picture of a subduction zone caught in the act of dying.  Rather than shutting down all at once, the plate is ripping apart piece by piece, creating smaller microplates and new boundaries.  So instead of a big train wreck, it's like watching a train slowly derail, one car at a time."

Which, if you think about it, makes sense.  Picture shoving together two saltine crackers.  One will likely push underneath the other, but the leading edges are going to crumble, and what you'll be left with will probably be a disordered pile of cracker crumbs.

This process doesn't really change the picture with regard to earthquake risk; just because the plate is shattering into smaller chunks doesn't mean the effects will be small when the breaks occur.  One example: the Shuck et al. research found a major, 75-kilometer-long fault along which pieces of the plate have dropped by five kilometers.  The scary part is that despite all that displacement, the fault isn't done separating.  "This is a very large fault that's actively breaking the [subducting] plate," Shuck said.  "It's not one hundred percent torn off yet, but it's close."

Further reinforcing my assessment that while I dearly love the Pacific Northwest for some of the most beautiful scenery in the world and the absolute best gardening climate in the United States, I'd never live there again.

It bears mentioning, however, that the fault may not rupture for another two hundred years; on the other hand, it could happen tomorrow.  While our ability to analyze plate tectonics is light-years beyond what it was even thirty years ago, when the situation in the Northwest first began to become clear, we still have no way to determine with any precision when the earthquake will happen.  At the moment, all we know is that the fault will rupture, sooner or later.

And I don't want to be anywhere near it when it does.

****************************************



Friday, May 1, 2026

Tense situation

In my Critical Thinking classes, I did a unit on statistics and data, and how you tell if a measurement is worth paying attention to.  One of the first things to consider, I told them, is whether a particular piece of data is accurate or merely precise -- two words that in common parlance are used interchangeably.

In science, though, they don't mean the same thing.  A piece of equipment is said to be precise if it gives you close to the same value every time.  Accuracy is a higher standard; data are accurate if the values are not only close to each other when measured with the same equipment, but agree with data taken independently, using a different device or a different method.

A simple example is that if my bathroom scale tells me every day for a month that my mass is (to within one kilogram either way) 239 kilograms, it's highly precise, but very inaccurate.
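If you want to see that distinction in concrete terms, here's a toy sketch in Python -- all the numbers are made up -- showing how you might quantify it: precision shows up as a small spread in repeated readings, while accuracy shows up as a small bias relative to an independently-known true value.

```python
# Toy illustration of precision vs. accuracy (hypothetical numbers).
import statistics

true_mass = 85.0  # kg -- what a calibrated reference scale would report

bathroom_scale = [239.2, 238.7, 239.4, 239.0, 238.9]  # precise, wildly inaccurate
old_spring_scale = [81.3, 90.1, 84.2, 88.7, 80.9]     # roughly accurate, imprecise

for name, readings in [("bathroom", bathroom_scale), ("spring", old_spring_scale)]:
    spread = statistics.stdev(readings)           # small spread = precise
    bias = statistics.mean(readings) - true_mass  # small bias = accurate
    print(f"{name} scale: spread = {spread:.1f} kg, bias = {bias:+.1f} kg")
```

The bathroom scale wins handily on spread and loses catastrophically on bias -- precise, but not accurate.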

This is why scientists always look for independent corroboration of their data.  It's not enough to keep getting the same numbers over and over; you've got to be certain those numbers actually reflect reality.

This all comes up because of an exciting new approach to one of the most vexing scientific questions known -- the rate of expansion of the entire universe.

[Image is in the Public Domain, courtesy of NASA]

A while back, I wrote about some experiments that were allowing physicists to home in on the Hubble constant, a quantity that is a measure of how fast everything in the universe is flying apart.  And initially, the news appeared to be good; from a range of between 50 and 500 kilometers per second per megaparsec, physicists had been able to narrow down the value of the Hubble constant to between 65.3 and 75.6.

The problem is, nobody's been able to get closer than that -- and in fact, recent measurements have widened, not narrowed, the gap.

There are two main ways to measure the Hubble constant.  The first is to use information from Type Ia supernovae (whose brightening and eventual dimming curves are connected to their intrinsic brightness) and Cepheid variables (stars whose period of brightness oscillation varies predictably with their luminosity); these properties make them good "standard candles" for determining the distance to other galaxies.  Once you know a star's intrinsic luminosity, you can use that to determine how far away it is -- just as you can estimate your distance to an oncoming motorcycle at night because you know how bright a motorcycle's headlight actually is.  This, coupled with the galaxy's redshift, allows you to figure out how fast the galaxies we see are receding from each other, and thus, how fast space is expanding.
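To put some (invented) numbers on that chain of reasoning, here's a minimal sketch in Python.  The peak absolute magnitude of a Type Ia supernova really is about -19.3, but the apparent magnitude and redshift below are hypothetical, and the velocity step uses the simple low-redshift approximation v = cz.

```python
# Standard-candle sketch: intrinsic brightness -> distance -> expansion rate.
C_KM_S = 299_792.458  # speed of light, km/s

def distance_mpc(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5*log10(d_pc) - 5."""
    d_parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_parsecs / 1e6  # parsecs -> megaparsecs

# Hypothetical Type Ia supernova: peak absolute magnitude -19.3, observed at
# an invented apparent magnitude of 14.2 in a galaxy with redshift z = 0.0117.
d = distance_mpc(14.2, -19.3)  # about 50 Mpc
v = C_KM_S * 0.0117            # recession velocity, v = cz (fine for small z)
H0 = v / d                     # Hubble's law: v = H0 * d

print(f"d ≈ {d:.0f} Mpc, v ≈ {v:.0f} km/s, H0 ≈ {H0:.1f} km/s/Mpc")
```

With those numbers you get a Hubble constant of about 70 kilometers per second per megaparsec -- comfortably inside the range above, which is exactly the point of the method.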

The other method is to use the cosmic microwave background radiation -- the leftovers from the radiation produced by the Big Bang -- to determine the age of the universe, and therefore, how much bigger it's gotten since then.  The problem with this method is that it relies heavily on the correctness of our current models of the evolution of the universe, some of which have resulted in predictions not matched by the available observations.

Here's the issue: not only does each method -- the standard-candle "distance ladder" and the CMBR approach -- have its own difficulties, but the two have produced irreconcilably different values for the Hubble constant.

So the astrophysicists have tried to close in from both ends: improve the data, and improve the models.  As our measurement ability has become more and more precise, the error bars associated with data collection have shrunk considerably; at the same time, the models have improved dramatically.  You'd think this would result in the two values getting closer and closer together.

Exactly the opposite has happened.

This result, called the Hubble tension, is considered to be one of the most frustrating problems in astrophysics.  And it's not just some fringe-y side quest; this is a fundamental issue with our understanding of the entire universe.

Here's where the new research, out of the Technical University of Munich, comes in.  You probably know about the phenomenon of gravitational lensing, where light traveling through the curved space near a massive object (like a galaxy or a supermassive black hole) gets bent, in much the same fashion as light going through a glass lens.  Sometimes this causes distant bright objects to look like they're stretched, or even multiplied.  For these objects, there is more than one pathway the light can take through space to get to us, so the image we see is distorted.

Well, we've just detected one of the most remarkable examples of gravitational lensing ever observed: a supernova in a brilliant galaxy whose light split into five separate paths on its way to us.

Put a different way, we saw the same supernova occur five different times.

Now, here's the kicker: because the paths those five beams of light took to get here differ in length, each image arrives at a different time.  Comparing those arrival times could give us the first direct, single-step measurement of the Hubble constant -- one that is independent of both the distance ladder and the CMBR models, and carries far fewer systematic uncertainties.
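For the curious, here's a rough sketch, again in Python, of the logic (the technique is called time-delay cosmography).  The arrival-time difference between two lensed images is proportional to a combination of distances -- the "time-delay distance" -- which scales as 1/H0, so a measured delay, plus a model of the lens, pins down the Hubble constant.  The redshifts, Fermat-potential difference, and observed delay below are invented for illustration, not the actual SN 2025wny values.

```python
# Time-delay cosmography sketch (hypothetical numbers throughout).
import math

C = 299_792.458     # speed of light, km/s
MPC_KM = 3.0857e19  # kilometers per megaparsec
DAY_S = 86_400.0    # seconds per day

def comoving_distance(z, H0, Om=0.3, steps=10_000):
    """Comoving distance (Mpc) in a flat matter + dark energy universe,
    integrated with the midpoint rule."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        total += dz / math.sqrt(Om * (1 + zp) ** 3 + (1 - Om))
    return (C / H0) * total

def time_delay_distance(z_lens, z_src, H0, Om=0.3):
    """D_dt = (1 + z_lens) * D_l * D_s / D_ls, using angular-diameter distances."""
    Dc_l = comoving_distance(z_lens, H0, Om)
    Dc_s = comoving_distance(z_src, H0, Om)
    D_l = Dc_l / (1 + z_lens)
    D_s = Dc_s / (1 + z_src)
    D_ls = (Dc_s - Dc_l) / (1 + z_src)  # valid for a flat universe
    return (1 + z_lens) * D_l * D_s / D_ls

def predicted_delay_days(z_l, z_s, delta_phi, H0):
    """Arrival-time difference between two images: dt = D_dt * delta_phi / c."""
    D_dt_km = time_delay_distance(z_l, z_s, H0) * MPC_KM
    return D_dt_km * delta_phi / C / DAY_S

# Hypothetical system: lens at z = 0.4, supernova at z = 1.0, and a
# lens-model Fermat-potential difference of 1e-11 rad^2 between two images.
dt_at_70 = predicted_delay_days(0.4, 1.0, 1e-11, H0=70.0)  # about 35 days

# The delay scales as 1/H0 with everything else fixed, so an observed
# delay of, say, 30 days would imply:
H0_inferred = 70.0 * dt_at_70 / 30.0
print(f"predicted delay at H0 = 70: {dt_at_70:.1f} days; inferred H0 ≈ {H0_inferred:.1f}")
```

The crucial feature is in the last two lines: once the lens is modeled, the observed delay converts directly into a value of H0, without climbing the distance ladder or leaning on the CMBR.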

"We nicknamed this supernova SN Winny, inspired by its official designation SN 2025wny," said astrophysicist Sherry Suyu, who co-wrote the paper on the discovery.  "It is an extremely rare event that could play a key role in improving our understanding of the cosmos.  The chance of finding a superluminous supernova perfectly aligned with a suitable gravitational lens is lower than one in a million.  We spent six years searching for such an event by compiling a list of promising gravitational lenses, and in August 2025, SN Winny matched exactly with one of them."

In-depth analysis of the timing and positions of the five supernova appearances is currently underway.

Whether this will resolve the Hubble tension, of course, remains to be seen.  The worst-case scenario is that the SN Winny data don't agree with either the distance-ladder value or the CMBR value, or have error bars large enough to overlap with both.  A happier outcome would be a decisive landing in one camp or the other -- although that'd still leave the astrophysicists puzzling over why the losing method doesn't work.

But it's an incredible discovery, and I know I'll be watching the science news to see what comes out of it.  Settling the Hubble tension question would be an amazing coup; having it resolved because of a one-in-a-million observation of a lensed supernova -- well, if you don't find that super cool, I don't even know what to say to you.

****************************************