Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Monday, August 4, 2025

Thunderbolts and lightning (very very frightening)

The cause of lightning has been strangely elusive.

Oh, in the broadest-brush terms, we've understood it for a while.  The rapidly-rising column of air in a cumulonimbus cloud induces charge separation, resulting in an electric potential difference between the ground and the air.  At a field strength of about three megavolts per meter, the dielectric strength of damp air -- the maximum electric field it can withstand without its molecules ionizing and becoming electrically conductive -- is exceeded.  This creates a moving channel of ionized air called a stepped leader.  When the leader reaches the ground, the overall resistance between the ground and the cloud drops dramatically, and a discharge called the return stroke occurs.  This releases between two hundred megajoules and seven gigajoules of energy in a fraction of a second, heating the air column to around thirty thousand degrees Celsius -- five times hotter than the surface of the Sun.
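To put a few of those numbers in perspective, here's a quick back-of-the-envelope sketch in Python.  The breakdown field and the energy range are the figures quoted above; the cloud-base height is a round number I've assumed purely for illustration:

    BREAKDOWN_FIELD = 3e6       # V/m -- approximate dielectric strength of damp air
    CLOUD_BASE = 1500.0         # m -- assumed cloud-base height, for illustration only

    # Voltage needed to break down the whole column at once.  (The stepped
    # leader ionizes the path piecewise, so the real threshold is lower.)
    print(f"naive breakdown voltage: {BREAKDOWN_FIELD * CLOUD_BASE:.1e} V")

    # The quoted energy range, compared to running a 60-watt light bulb:
    for energy in (2e8, 7e9):   # joules
        days = energy / 60 / 86_400
        print(f"{energy:.0e} J = a 60 W bulb running for {days:,.0f} days")

    # Temperature comparison (Sun's photosphere is roughly 5,500 degrees C):
    print(f"channel temperature is ~{30_000 / 5_500:.1f}x the Sun's surface")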

That's the origin of both the flash of light and the shock wave in the air that we hear as thunder.

The problem is, there was no consensus on what exactly caused the very first step -- the charge separation in the cloud that triggers the voltage difference.  Some scientists believed that it was friction between the air and the updrafting raindrops (and hail) characteristic of a thundercloud, similar to the way you can induce a static charge on a balloon by rubbing it against your shirt.  But experiments weren't able to confirm that, and most places you look, you'll see words like "still being investigated" and "uncertain at best" and "poorly understood process."

Until now.

A team of scientists led by Victor Pasko of Pennsylvania State University has shown that the initiation of lightning is caused by a literal perfect storm of conditions.  They found that free "seed" electrons, knocked loose by cosmic rays, are accelerated by the thundercloud's strong electric field to "relativistic" speeds -- i.e., a significant fraction of the speed of light -- and then ram into nitrogen and oxygen molecules.  These collisions trigger a shower of additional electrons, causing an avalanche, which is then swept upward into the upper parts of the cloud.

This is what causes the charge separation, the voltage difference between top and bottom, and the eventual discharge we see as lightning.

It also produces electromagnetic radiation across the spectrum from radio waves to gamma rays, something that had been observed but never explained.

"By simulating conditions with our model that replicated the conditions observed in the field, we offered a complete explanation for the X-rays and radio emissions that are present within thunderclouds," Pasko said.  "We demonstrated how electrons, accelerated by strong electric fields in thunderclouds, produce X-rays as they collide with air molecules like nitrogen and oxygen, and create an avalanche of electrons that produce high-energy photons that initiate lightning...  [T]he high-energy X-rays produced by relativistic electron avalanches generate new seed electrons driven by the photoelectric effect in air, rapidly amplifying these avalanches.  In addition to being produced in very compact volumes, this runaway chain reaction can occur with highly variable strength, often leading to detectable levels of X-rays, while accompanied by very weak optical and radio emissions.  This explains why these gamma-ray flashes can emerge from source regions that appear optically dim and radio silent."

There's still a lot left to explain, however.  Also this week, a paper came out of Arizona State University about the astonishing "megaflash" that occurred in October 2017, in which a single lightning bolt traveled over eight hundred kilometers -- from eastern Texas all the way to Kansas City.  Even though the megaflash dropped some cloud-to-ground leaders along the way, it didn't discharge completely until the very end.  Megaflashes are rare, and what conditions could allow a main stepped leader (and the corresponding return stroke) to extend that far before grounding remain unknown.

So, like all good science, the new research answers some questions and raises others.  Here in upstate New York we're in thunderstorm season, and while we don't get the crazy storms they see in the Southeast and Midwest, we've had some powerful ones this summer.  I've always liked a good storm, as long as the lightning stays away from my house.  A friend of ours had his house struck by lightning a few years ago, and it fried his electrical system (including his computer) -- something that leads me to unplug my laptop and router as soon as I hear rumbling.

Even if the mechanisms of lightning are now less mysterious, it's still just as dangerous.  Very very frightening, as Freddie Mercury observed.

****************************************


Monday, May 26, 2025

Time and tide

I don't know if you've had the experience of running into a relatively straightforward concept that your brain just doesn't seem to be able to wrap itself around.

One such idea for me is the explanation for tides.  I've gone through it over and over, starting in high school physics, and I keep having to go back and revisit it because I think I've got it and then my brain goes, "...wait, what?" and I have to look it up again.

The sticking point has always been why there are two high tides on opposite sides of the Earth.  I get that the water on the side of the Earth facing the Moon experiences the Moon's extra gravitational attraction and is pulled away from the Earth's surface, creating a bulge.  But why is there a bulge on the side facing away from the Moon?

Now that I'm 64 and have gone over it approximately 482 times, I think I've finally got it.  Which is more than I can say for Bill O'Reilly:


So, let's see if I can prove Mr. O'Reilly wrong.

Consider three points on the Earth: A (on the surface, facing the Moon), B (at the center of the Earth), and C (on the surface, opposite the Moon).  Then ask yourself what the difference is in the pull of the Moon on those three points.

Isaac Newton showed that the force of gravity is proportional to two things -- the masses of the objects involved, and the inverse square of the distance between them.  The second part is what's important here.  Because A, B, and C are at different distances from the Moon, they experience different strengths of gravitational attraction: A is pulled hardest and C least, with B in between.

This means that the Earth is stretched.  Everything experiences these tidal forces, but water, which is freer to move, responds far more than land does.  At point A, the water is pulled toward the Moon, and experiences a high tide.  (That's the obvious part.)  The less obvious part is that because point B is pulled toward the Moon more strongly than point C is, the net effect is to pull them apart -- so from our perspective on the Earth's surface, the water at C pulls away and upward, and there's a high tide there as well.
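To make that concrete, here's a minimal Newtonian check in Python.  The masses and distances are the standard textbook values; the point is that the near-side excess and the far-side deficit come out nearly equal and opposite, which is why the two bulges are comparable in size:

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_MOON = 7.35e22     # mass of the Moon, kg
    D = 3.84e8           # mean Earth-Moon distance, m
    R_EARTH = 6.371e6    # radius of the Earth, m

    def moon_pull(r):
        """The Moon's gravitational acceleration (m/s^2) at distance r."""
        return G * M_MOON / r**2

    a_near = moon_pull(D - R_EARTH)    # point A, facing the Moon
    a_center = moon_pull(D)            # point B, the Earth's center
    a_far = moon_pull(D + R_EARTH)     # point C, opposite the Moon

    print(f"A - B (near-side bulge): {a_near - a_center:+.2e} m/s^2")
    print(f"C - B (far-side bulge):  {a_far - a_center:+.2e} m/s^2")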

There's practically no limit to how big these forces can get.  On the Earth, they're fairly small, although sometimes phenomena like a seiche (a standing wave in a partially-enclosed body of water) can amplify the effect and create situations like what happens in the Bay of Fundy, Nova Scotia, where the difference in the water level between high and low tide can be as much as sixteen meters.

But out in space, you can find systems where the masses and distances combine to create tidal forces that are, to put it in scientific terms, abso-fucking-lutely enormous.  This, in fact, is why the whole subject comes up today; the discovery of a binary system in the Large Magellanic Cloud made up of a supergiant with a mass thirty-five times that of the Sun, and a smaller (but still giant) companion ten times the mass of the Sun.  They're close enough that they orbit their common center of gravity about once a month.  And the combination of the huge masses and close proximity creates tidal bulges about three million kilometers tall.

That's more than twice the diameter of the Sun.

You think the people living along the Bay of Fundy have it bad.

Artist's conception of the system in the Large Magellanic Cloud [Illustration by Melissa Weiss of NASA/Chandra X-Ray Observatory/Center for Astrophysics]

And that's not even as extreme as tidal forces can get.  If you were unfortunate enough to fall feet-first into a black hole, you would undergo what physicists call -- I'm not making this up -- spaghettification.  The tidal forces are so huge that they're significant even across a distance as small as that between your head and your feet, so you'd be stretched along your vertical axis and compressed along your horizontal one.  Put more bluntly, you'd be squeezed like a tube of toothpaste, ultimately occupying the same volume as before but at a much greater length.
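If you want a sense of the magnitude, here's a rough Newtonian sketch.  The head-to-feet tidal acceleration near a mass M at radius r is approximately 2GMh/r^3; the ten-solar-mass black hole is my assumption, purely for illustration:

    G = 6.674e-11            # m^3 kg^-1 s^-2
    M_BH = 10 * 1.989e30     # assumed: a ten-solar-mass black hole, kg
    HEIGHT = 1.8             # head-to-feet distance, m

    def tidal_stretch(r):
        """Approximate head-to-feet tidal acceleration (m/s^2) at radius r."""
        return 2 * G * M_BH * HEIGHT / r**3

    for r_km in (10_000, 1_000, 100):
        a = tidal_stretch(r_km * 1000)
        print(f"at {r_km:>6} km out: {a:.3g} m/s^2  (~{a/9.81:.3g} g)")

For comparison, the event horizon of a black hole this size sits only about thirty kilometers out, so by this rough estimate the stretching becomes lethal long before you cross it.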

It would not be pleasant.

Be that as it may, I think I've finally got the explanation for tides locked down.  We'll see how long it lasts.

At least I'm pretty sure I'm still ahead of Bill O'Reilly.

****************************************


Monday, May 12, 2025

Djinn and paradox

In the very peculiar Doctor Who episode "Joy to the World," the character of Joy Almondo is being controlled by a device inside a briefcase that -- if activated -- will release as much energy as a supernova, destroying the Earth (and the rest of the Solar System).  But just in the nick of time, a future version of the Doctor (from exactly one year later) arrives and gives the current Doctor the override code, saving the day.

The question comes up, though, of how the future Doctor knew what the code was.  The current Doctor, after all, hadn't known it until he was told.  He reasons that during that year, he must have learned the code from somewhere or someone -- but the year passes without anyone contacting him about the briefcase and its contents.  Right before the year ends (at which point he has to jump back to complete the loop) he realizes that his surmise wasn't true.  Because, of course, he already knew the code.  He'd learned it from his other self.  So armed with that knowledge, he jumps back and saves the day.

Well, he saves the moment, at least.  As it turns out, their troubles are just beginning, but that's a discussion for another time.

A similar trope occurred in the 1980 movie Somewhere in Time, but with an actual physical object rather than just a piece of information.  Playwright Richard Collier (played by Christopher Reeve) is at a party celebrating the debut of his most recent play, and is approached by an elderly woman who hands him an ornate pocket watch and says, in a desperate voice, "Come back to me."  Collier soon goes back in time by six decades, finds her as a young woman, and they fall desperately in love -- and he gives her the pocket watch.  Ultimately, he's pulled back into the present, and his girlfriend grows old without him, but right before she dies she finds him and gives him back the watch, closing the loop.

All of this makes for a fun twist; such temporal paradoxes are common fare in fiction, after all.  And the whole thing seems to make sense until you ask the question of, respectively (1) where did the override code originally come from? and (2) who made the pocket watch?

Because when you think about it -- and don't think too hard, because these kinds of things are a little boggling -- neither one has any origin.  They're self-creating and self-destroying, looped like the famous Ouroboros of ancient myth, the snake swallowing its own tail. 

[Image is in the Public Domain]

The pocket watch is especially mystifying, because after all, it's an actual object.  If Collier brought it back with him into the past, then it didn't exist prior to the moment he arrived in 1920, nor after the moment he left in 1980 -- which seems to violate the Law of Conservation of Matter and Energy.

Physicists Andrei Lossev and Igor Novikov called such originless entities "djinn particles," because (like the djinn, or "genies," of Arabian mythology) they seem to appear out of nowhere.  Lossev and Novikov realized that although "closed timelike curves" are, theoretically at least, allowed by the Theory of General Relativity, they all too easily engender paradoxes.  So they proposed something they call the self-consistency principle -- that time travel into the past is possible if and only if it does not generate a paradox.

So let's say you wanted to do something to change history.  Say, for example, that you wanted to go back in time and give Arthur Tudor, Prince of Wales, some medication to save his life from the fever that otherwise killed him at age fifteen.  This would have made him king of England seven years later instead of his younger brother, who in actual history became the infamous King Henry VIII, thus dramatically changing the course of history.  In the process, of course, it also generates a paradox: if Henry VIII never became king, you would have no motivation to go back into the past and prevent him from becoming king, right?  Your own memories would be consistent with the timeline of history that led to your present moment.  Thus, you wouldn't go back in time and save Arthur's life.  But this would mean Arthur would die at fifteen, Henry VIII would become king after all, and... well, you see the difficulty.

Lossev and Novikov's self-consistency principle fixes this problem.  It tells us that your attempt to save Prince Arthur must have failed -- because we know that didn't happen.  If you did go back in time, you were simply incorporated into whatever actually did happen.

Timeline of history saved.  Nothing changed.  Ergo, no paradox.

You'd think that physicists would kind of go "whew, dodged that bullet," but interestingly, most of them look at the self-consistency principle as a bandaid -- an unwarranted, artificial constraint that doesn't arise from the models themselves.  Joseph Polchinski came up with another paradoxical situation -- a billiard ball fired into a wormhole at exactly the right angle so that when it comes out of the other end, it runs into (and deflects) its earlier self, preventing it from entering the wormhole in the first place -- and analysis by Nobel Prize-winning physicist Kip Thorne found that there's nothing inherent in the models that rules this sort of thing out.

Some have argued that the ease with which time travel into the past engenders paradox is an indication that it's simply an impossibility; eventually, they say, we'll find that there's something in the models that rules out reversing the clock entirely.  In fact, in 2009, Stephen Hawking famously hosted a time-travelers' party at Cambridge University, complete with fancy food, champagne, and balloons -- but only sent out invitations the following day.  He waited several hours, and no one showed up.

That, he said, was that.  Because what time traveler could resist a party?

But there's still a lingering issue, because it seems like if it really is impossible, there should be some way to prove it rigorously, and thus far, that hasn't happened.  Last week we looked at the recent paper by Gavassino et al. that implied a partial loophole in the Second Law of Thermodynamics -- if you could travel into the past, entropy would run backwards during part of the loop and erase your memory of what had happened -- but it still leaves the question of djinn particles and self-deflecting billiard balls unsolved.

Seems like we're stuck with closed timelike curves, paradoxes notwithstanding.

Me, I think my mind is blown sufficiently for one day.  Time to go play with my puppy, who only worries about paradoxes like "when is breakfast?" and the baffling question of why he is not currently getting a belly rub.  All in all, probably a less stressful approach to life.

****************************************


Thursday, February 20, 2025

Order out of chaos

When I was an undergraduate, I sang in the University of Louisiana Choir in a production of Franz Josef Haydn's spectacular choral work The Creation.

The opening is a quiet, eerie orchestral passage called "The Representation of Chaos" -- meant to evoke the unformed "void" that made up the universe prior to the moment of creation.  Then the Archangel Raphael sings, "In the beginning, God made Heaven and Earth; and the Earth was without form and void, and darkness was upon the face of the deep."  The chorus joins in -- everything still in a ghostly pianissimo -- "And the Spirit of God moved upon the face of the waters; and God said, 'Let there be light.'  And... there... was...

...LIGHT!"

The last word is sung in a resounding, major-chord fortissimo, with the entire orchestra joining in -- trumpets blaring, timpani booming, the works.

Even if you don't buy the theology, it's a moment that sends chills up the spine.  (You can hear it yourself here.)

Of course, the conventional wisdom amongst the cosmologists has been that the universe didn't begin in some kind of chaotic, dark void; quite the opposite.  The Big Bang -- or at least, the moment after it -- is usually visualized as a searingly hot, dense fireball, which expanded and cooled, leading to a steady entropy increase.  So by our current models, we're heading toward chaos, not away from it.

Well, maybe.

A recent paper by the pioneering Portuguese physicist and cosmologist João Magueijo has proposed a new model for the origins of the universe that overturns that entire scenario -- and far from being ridiculed off the stage, he's captured the attention even of hard-nosed skeptics like Sabine Hossenfelder, who did a video on her YouTube channel about his paper a few days ago that is well worth watching in its entirety.  But the gist, as far as a layperson like myself can understand it, goes like this.

It's long been a mystery why the fundamental constants of physics have the values they do, and why they actually are constant.  A handful of numbers -- the speed of light, the strength of the electromagnetic interaction, the strength of the gravitational force, the fine-structure constant, and a few others -- govern the behavior of, well, pretty much everything.  None seem to be derivable from more fundamental principles; i.e., they appear to be arbitrary.  None have ever been observed to shift, regardless of how far out in space (and therefore how far back in time) you look.  And what's curious is that most of them have values that are tightly constrained, at least from our perspective.  Even a percent or two of change in either direction, and you'd have situations like stars burning out way too fast to host stable planetary systems, atoms themselves falling apart, or matter not generating sufficient gravity to clump together.

So to many, the universe has appeared "fine-tuned," as if some omnipotent deity had set the dials just right at the moment of creation of the universe to favor everything we see around us (including life).  This is called the anthropic principle -- the strong version implying a master fine-tuner, the weak version being the more-or-less tautological statement that if those numbers had been any different, we wouldn't be here to ask the question.

But that doesn't get us any closer to figuring out why the fundamental constants are what they are.  Never one to shy away from the Big Questions, that's exactly what Magueijo has undertaken -- and what he's come up with is, to put it mildly, intriguing.

What he did was to start from the assumption that the fundamental constants aren't... constant.  That In The Beginning (to stick with our original Book of Genesis metaphor), the universe was indeed chaos -- the constants could have had more or less any values.  The thing is, the constants aren't all independent of each other.  Just as numbers in our mundane life can push and pull on each other -- to give a simple example, if you alter housing prices in a town, other numbers such as average salaries, rates of people moving in and moving out, tax rates, and funding for schools will shift in response -- the fundamental constants of physics affect each other.  What Magueijo did was to set some constraints on how those constants can evolve, then let the model run to see what kind of universe eventually came out.

And what he found was that after jittering around for a bit, the constants eventually found stable values and settled into an equilibrium.  In Hossenfelder's video, she uses the analogy of sand grains on a vibration plate being jostled into spots that have the highest stability (the most resistance to motion).  At that point, the pattern that emerges doesn't change again no matter how long you vibrate the plate.  What Magueijo suggests is that the current configuration of fundamental constants may not be the only stable one, but the range of what the constants could be might be far narrower than we'd thought -- and it also explains why we don't see the constants changing any more.

Why they are, in fact, constant.
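A toy illustration of that settling -- emphatically not Magueijo's actual model, just the sand-on-a-plate analogy in code form -- is a value that jitters randomly but only gets to keep moves that increase its stability.  Whatever chaotic value it starts from, it ends up pinned at one of the stable spots and stays there:

    import random

    # Toy model: a "constant" jitters over a landscape with two stable
    # configurations (minima at -1 and +1), keeping only moves that don't
    # decrease stability.  NOT Magueijo's model -- just the analogy.
    def potential(x):
        return (x**2 - 1)**2

    random.seed(42)
    x = 3.0                                    # arbitrary chaotic starting value
    for _ in range(10_000):
        trial = x + random.uniform(-0.05, 0.05)
        if potential(trial) <= potential(x):   # accept only downhill moves
            x = trial

    print(f"settled value: {x:.3f}")           # lands near +1 and never leaves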

Stable pattern of grains on a vibrating pentagonal Chladni plate [Image licensed under the Creative Commons Matemateca (IME USP), Chladni plate 16, CC BY-SA 4.0]

Magueijo's work might be the first step toward solving one of the most vexing questions of physics -- why the universe exists with these particular laws and constants, despite there not seeming to be any underlying reason for it.  Perhaps we've been looking at the whole thing the wrong way.  The early universe really may have been without substance and void -- but instead of a voice crying "let there be light!", things simply evolved until they reached a stable configuration that then generated everything around us.

It might not be as audibly dramatic as Haydn's vision of The Creation, but it's just as much of an eye-opener.

****************************************

Tuesday, November 12, 2024

Bubbles, dimensions, and black holes

One of the weirder claims of modern physics, which I first ran into when I was reading about string theory a few years ago, is that the universe could have more than three spatial dimensions -- but the extra ones are "curled up" and are (extremely) sub-microscopic.

I've heard it explained by an analogy of an ant walking on a string.  There are two ways the ant can go -- back and forth on the string, or around the string.  The "around the string" dimension is curled into a loop, whereas the back-and-forth one has a much greater spatial extent.

Scale that up, if your brain can handle it, to three dimensions of the back-and-forth variety, and as many as nine or ten of the around-the-string variety, and you've got an idea of what the claim is.

The problem is, those extra dimensions have proven to be pretty thoroughly undetectable, which has led critics to quote Wolfgang Pauli's quip that such a theory "is not even wrong" -- it's unverifiable, which is synonymous with saying "it isn't science."  But the theorists are still trying like mad to find an indirect method of showing the existence of these extra dimensions.

To no avail so far, although we did get an interesting piece added to the puzzle a while back that I somehow missed the first time 'round.  Astronomers Katie Mack of North Carolina State University and Robert McNees of Loyola University posted a paper on arXiv that puts a strict limit on the number of macroscopic dimensions -- and that limit is three.

So sorry, fans of A Wrinkle in Time, there's no such thing as the tesseract.  The number of dimensions is three, and three is the number of dimensions.  Not four.  Nor two, unless thou proceedest on to three. 

Five is right out.

The argument by Mack and McNees -- which, although I have a B.S. in physics, I can't begin to comprehend fully -- boils down to the fact that the universe is still here.  If there were extra macroscopic spatial dimensions (whether or not we were aware of them) it would be possible that two cosmic particles of sufficient energy could collide and generate a miniature black hole, which would then give rise to a universe with different physical laws.  This new universe would expand like a bubble rising in a lake, its boundaries moving at the speed of light, ripping apart everything down to and including atoms as it went.

"If you’re standing nearby when the bubble starts to expand, you don’t see it coming," Mack said.  "If it’s coming at you from below, your feet stop existing before your mind realizes that."

This has been one of the concerns about the Large Hadron Collider, since the LHC's entire purpose is to slam together particles at enormous velocities.  Ruth Gregory of Durham University showed eight years ago that there was a non-zero possibility of generating a black hole that way, which triggered the usual suspects to conjecture that the scientists were trying to destroy the universe.  Why they would do that, when they inhabit said universe, is beyond me.  In fact, since they'd be standing right next to the Collider when it happened, they'd go first, before they even had a chance to cackle maniacally and rub their hands together about the fate of the rest of us.

"The black holes are quite naughty," Gregory said, which is a sentence that is impossible to hear in anything but a British accent.  "They really want to seed vacuum decay.  It’s a very strong process, if it can proceed."

"No structures can exist," Mack added.  "We’d just blink out of existence."

Of course, it hasn't happened, so that's good news.  Although I suppose this wouldn't be a bad way to go, all things considered.  At least it would be over quickly, not to mention being spectacular.  "Here lies Gordon, killed during the formation of a new universe," my epitaph could read, although there wouldn't be anyone around to write it, nor anything to write it on.

Which is kind of disappointing.

Anyhow, what Mack and McNees have shown is that this scenario could only happen if there was a fourth macroscopic dimension, and since it hasn't happened in the universe's 13.8 billion year history, it probably isn't going to.

So don't cancel your meetings this week.  Mack and McNees have shown that any additional spatial dimensions over the usual three must be smaller than 1.6 nanometers, which is about three times the diameter of your average atom; bigger than that, and we would already have become victims of "vacuum decay," as the expanding-bubble idea is called.

A cheering notion, that.  Although I have to say, it's an indication of how bad everything else has gotten that "We're not dead yet" is the best I can do for good news.


That's our news from the world of scientific research -- particle collisions, expanding black holes, and vacuum decay.  Myself, I'm not going to worry about it.  I figure if it happens, I'll be gone so fast I won't have time to be upset at my imminent demise, and afterwards none of my loved ones will be around to care.  Another happy thought is that I'll take Nick Fuentes, Tucker Carlson, Elon Musk, Stephen Miller, and Andrew Tate along with me, which might almost make destroying the entire universe worth it.

****************************************


Thursday, September 5, 2024

Quantum foams and tiny wormholes

One of the most frustrating things for insatiably curious laypeople like myself is to find that despite our deep and abiding interest in a topic, there's simply a limit to what we're capable of understanding.

I know that happened to me with mathematics.  All through grade school, and even into college, I found math to be one of my easiest subjects.  I never had to struggle to understand it, and got high grades without honestly trying all that hard.

Then I hit Calculus 3.

I use the word "hit" deliberately, because it felt like running into a brick wall.  I think the problem was that this was the point where I stopped being able to visualize what was going on, and without that concrete sense of why things worked the way they did, it turned into memorization and application of a set of what appeared to be randomly-applied rules, a technique that only worked when I remembered them accurately.  I lost the intuitiveness of my earlier experience.  It returned to some extent when I took Differential Equations (partly due to a stupendous teacher), but I went from there to Vector Calculus, and it was all over.

That was the moment I decided that I am a Bear Of Very Little Brain, and the effect of the experience (combined with a similar unfortunate roadblock in Classical Mechanics) convinced me that a career as a physicist was not in the cards.

That feeling came back to me full-force when I ran across a paper in the journal Physical Review D entitled "Dark Energy from Topology Change Induced by Microscopic Gauss-Bonnet Wormholes," by Stylianos A. Tsilioukas, Emmanuel N. Saridakis, and Charalampos Tzerefos, of the University of Thessaly.  Even reading the abstract left me with an expression rather like the one my puppy has when I try to explain a concept to him that is simply beyond his comprehension, like why he shouldn't eat my gym socks.  You can tell he's trying to understand, he clearly wants to understand, but it's just not getting through.

But as far as the paper goes, at least I can tell that the idea is really cool, so I'm going to attempt to tell you about it.  If there are any physics boffins in the studio audience who want to correct my misapprehensions or misstatements, please feel free to let me know in the comments.

About seventy percent of the mass/energy content of the universe is something called dark energy.  (It's entirely unrelated to dark matter; the potential confusion between the two has led to a push to rename it vacuum energy.)  Dark energy is a bit of a placeholder name anyhow, given that we don't really know what it is; all we see is its effect, which is the measured increasing expansion rate of the universe.

The current best guess about its nature is that dark energy is a property of space itself (i.e., not something that space contains, but an inherent characteristic of the fabric of spacetime).  This energy manifests as a repulsive force, but because it's intrinsic, it doesn't dilute as space expands, the way a cloud might dissipate into air; its content per unit volume remains constant, so as space expands, the total amount of dark energy in the universe increases, resulting in a steady acceleration of the expansion rate.  At the moment, at least on the local level, gravity is still stronger than the expansion, so we're safe enough; but eventually (we're talking a long way in the future) space will have expanded so much that dark energy will overwhelm all other forces, and matter itself will be torn to shreds.
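The arithmetic behind "the total amount of dark energy increases" is simple enough to show directly.  In the sketch below (using the rough 70/30 split quoted above, in units of today's critical density), matter thins out as the cube of the scale factor while the dark energy density just sits there:

    # Matter density dilutes as 1/a^3 as space expands; dark energy
    # density stays constant -- so dark energy inevitably dominates.
    rho_matter_today = 0.3   # rough value, in units of the critical density
    rho_dark_energy = 0.7    # constant in the same units

    for a in (0.5, 1.0, 2.0, 4.0, 8.0):     # a = cosmic scale factor (1 = today)
        rho_m = rho_matter_today / a**3
        fraction = rho_dark_energy / (rho_m + rho_dark_energy)
        print(f"a = {a:4.1f}: matter density {rho_m:7.4f}, dark energy share {fraction:.1%}")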

But despite this, we still have no idea what causes it, or even what it really is.

The Tsilioukas et al. paper -- once again, as far as I can understand it -- proposes a solution to that.

On the smallest scales, spacetime seems to be a "quantum foam" -- a roiling, bubbling ferment of virtual particles and antiparticles, constantly being created and destroyed.  That these virtual particles are real has been demonstrated experimentally, despite their existing for such a short time that most physicists would question even using the word "existing" as a descriptor.  So these incredibly quick fluctuations in spacetime -- even in a complete vacuum -- can have a discernible effect despite the fact that detecting the particles themselves is theoretically impossible.

What Tsilioukas et al. suggest is that there's a feature of the quantum foam that, described mathematically, is basically a network of tiny wormholes -- tunnels through spacetime connecting two separate points.  They're (1) as quick to appear and vanish as the aforementioned virtual particles, and (2) extremely submicroscopic, so don't get your hopes up about visiting Deep Space Nine any time soon.


The mathematics of these wormholes is described by a principle from topology called the Gauss-Bonnet theorem, named after mathematicians Carl Friedrich Gauss and Pierre Ossian Bonnet (no relation), and when you include a Gauss-Bonnet term in the equations of General Relativity, you get something that seems to act just like the observed effects of dark energy.

So the runaway expansion of the universe might be due to tiny wormholes forming from the quantum foam of the vacuum -- and those minuscule fluctuations in spacetime add up to seventy percent of the total mass/energy content of the universe.

Like I said, it's not like I'm any more qualified to analyze whether they're on to something than Jethro is to explain why chewing up my gym socks makes him a Very Bad Puppy.  And it must be said that these theoretical models sometimes run into the sad truth from Thomas Henry Huxley, that "the great tragedy of science is the slaying of a beautiful hypothesis by an ugly fact."

But given that up till now, dark energy has been nothing more than a mysterious, undetectable, unanalyzable something that nevertheless outweighs all other kinds of matter and energy put together -- a rather embarrassing situation for physicists to find themselves in -- the new explanation seems to be a significant step in the right direction.

At least to a Bear Of Very Little Brain.

****************************************


Wednesday, August 28, 2024

Baby Bear's universe

The idea of Intelligent Design is pretty flimsy, at least when it comes to biology.  The argument boils down to something the ID proponents call irreducible complexity -- that there are some features in organisms that are simply too complex, requiring too many interlocking parts, to have evolved through natural selection.  The problem is, the ones most commonly cited, such as the vertebrate eye, have been explained pretty thoroughly, with nothing needed but a good understanding of genetics, biochemistry, and physiology to comprehend how they evolved.  The best takedown of biological ID remains Richard Dawkins's The Blind Watchmaker, which absolutely shreds the arguments of ID proponents like Michael Behe.  (Yes, I know Dawkins has recently made statements indicating that he holds some fairly repulsive opinions; I never said he was a nice guy, but there's no doubt that his writings on evolutionary biology are on-point.)

While biological ID isn't worth much, there's a curious idea from physics that has even the reputable scientists wondering.  It has to do with the number of parameters (by some estimates, around thirty of them) in the Standard Model of Particle Physics and the Theories of Relativity that don't appear to be derivable from first principles; in other words, we know of no compelling reason why they are the values they are, and those values are only known empirically.

[Image licensed under the Creative Commons Cush, Standard Model of Elementary Particles, CC BY 3.0]

More eye-opening is the fact that for most of them, if they held any other values -- in some cases, off by only a couple of percent either way -- the universe would be uninhabitable.

Here are a few examples:
  • The degree of anisotropy (unevenness in density) of the cosmic microwave background radiation.  This is thought to reflect the "clumpiness" of matter in the early universe, which amounts to about one part in ten thousand.  If it was only a little bigger -- one part in a thousand -- the mutual attraction of those larger clumps of matter would have triggered early gravitational collapse, and the universe would now be composed almost entirely of supermassive black holes.  Only a little smaller -- one part in a hundred thousand -- and there would have been insufficient gravitational attraction to form stars, and the universe would be a thin, cold fog of primordial hydrogen and helium.
  • The fact that electrons have a spin of one-half, making them fermions.  Fermions have an odd property; two can't occupy the same quantum mechanical state, something called the Pauli Exclusion Principle.  (Bosons, such as photons, don't have that restriction, and can pass right through one another.)  This feature is why electrons exist in orbitals in atoms.  If they had integer spin, there would be no such thing as chemistry.
  • The masses of the various subatomic particles.  To take only one example, if the quarks that make up protons and neutrons were much heavier, the strong nuclear force would all but evaporate -- meaning that the nuclei of atoms would fly apart.  (Well, more accurately, they never would have formed in the first place.)
  • The value of the fine-structure constant, which is about 1/137 (it's a dimensionless number, so it doesn't matter what units you use; there's a quick check of the figure just after this list).  This constant determines, among other things, the relative strength of the electromagnetic and strong nuclear forces.  Any larger, and atoms would collapse; any smaller, and they would break apart into their fundamental particles.
  • The value of the gravitational constant G.  It's about 6.67 x 10^-11 meters cubed per kilogram per second squared -- i.e., a really tiny number, meaning gravity is an extremely weak force.  If G was larger, stars would burn through their hydrogen fuel much faster, and it's doubtful they'd live long enough for planets to have time to evolve intelligent life.  If G was smaller, there wouldn't be enough gravitational pull to initiate fusion in the first place.  No fusion = no stars.
  • The flatness of the universe.  While space near massive objects is curved as per the General Theory of Relativity, its overall shape is apparently Euclidean.  Its makeup -- around 5% conventional matter and energy, 25% dark matter, and 70% dark energy -- is exactly what you'd need to generate a flat universe.
  • The imbalance between matter and antimatter.  There appears to be no reason why, at the Big Bang, there weren't exactly equal numbers of matter and antimatter particles created.  But in fact -- and fortunately for us -- there was a very slight imbalance favoring matter.  The estimate is that there was about one extra unpaired matter particle out of every one hundred million pairs, so when the pairs underwent mutual annihilation, those few extra particles were left over.  The survivors became the matter we have today; without that tiny imbalance, the entire universe today would be filled with nothing but photons.
  • The cosmological constant -- a repulsive force exerted by space itself (which is the origin of dark energy).  This is the most amazing one, because for a long time, physicists thought the cosmological constant was exactly zero; Einstein looked upon his introduction of a nonzero cosmological constant as an inexcusable fudge factor in his equations, and called it his "greatest blunder."  In fact, recent studies show that the cosmological constant does exist, but it's so close to zero that it's hard to imagine -- it's about a decimal point, followed by 121 zeroes, followed by a 3 (as expressed in Planck units).  But if it was exactly zero, the universe would have collapsed by now -- and if it was any bigger than it is, the expansion of space would have overwhelmed gravity and torn apart matter completely!
And so on and so forth.  The degree of fine-tuning that seems to be required to set all these independent parameters so that the conditions are juuuuuust right for our existence (to borrow a phrase from Baby Bear) strikes a lot of people, even some diehard rationalist physicists, as mighty peculiar.  As cosmologist Fred Hoyle put it, "It looks very much as if a super-intellect has monkeyed with physics as well as with chemistry and biology."
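As promised above, here's the quick check on that 1/137 figure -- it's the one entry on the list you can verify yourself in a couple of lines, since the fine-structure constant is built out of four measured quantities: the electron's charge, the permittivity of free space, the reduced Planck constant, and the speed of light:

    from scipy.constants import e, epsilon_0, hbar, c, pi

    # alpha = e^2 / (4 * pi * epsilon_0 * hbar * c), a pure number
    alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
    print(f"fine-structure constant: {alpha:.8f} = 1/{1/alpha:.2f}")
    # prints 0.00729735 = 1/137.04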

The idea that some Master Architect twiddled the knobs on the various constants in physics, setting them exactly as needed for the production of matter and ultimately ourselves, is called the Strong Anthropic Principle.  It sets a lot of people's teeth on edge -- it's a little too much like the medieval idea of humanity's centrality in the universe, something that was at the heart of the resistance to Copernicus's heliocentric model.  It seems like all science has done since then is to move us farther from the center -- first, the Earth orbits the Sun; then, the stars themselves are suns, and our own Sun is only a smallish and rather ordinary one; then, the Sun and planets aren't central to the galaxy; and finally, our own galaxy is only one of billions.

Now, suddenly, the fine-tuning argument has seemingly thrust us back into a central position.  However small a piece of the cosmos we actually represent, was it all set this way for our benefit?

In his book The Cosmic Landscape: String Theory and the Illusion of Intelligent Design, theoretical physicist Leonard Susskind answers this with a resounding "no."  His argument, which is sometimes called the Weak Anthropic Principle, looks at the recent advances in string theory, inflation, and cosmology, and suggests that the apparent fine-tuning is because the cosmos we're familiar with is only one pocket universe in a (much) larger "landscape," where the process of dropping into a lower energy state triggers not only expansion, but sets the values of the various physical parameters.  Afterward, each of those bubbles is then governed by its own physics.  Most would be inhospitable to life; a great many probably don't have atoms heavier than helium.  Some probably have very short life spans, collapsing almost immediately after formation.  And the models suggest that the number of different possible configurations -- different settings on the knobs, if you will -- might be as many as ten to the five-hundredth power.

That's a one followed by five hundred zeroes.

Susskind suggests that we live in this more-or-less friendly one not because the constants were selected by a deity with us in mind, but because if our universe's constants had any other value, we wouldn't be here to ask the question.  It might be extremely unlikely that a universe would have exactly these settings, but if you have that many universes to choose from, they're going to show up that way somewhere.

We only exist because this particular universe is the one that got the values right on the nose.

While I think this makes better sense than the Master Architect idea of the Strong Anthropic Principle -- and I certainly don't want to pretend I could argue the point with a physicist of Susskind's caliber -- I have to admit to feeling a twinge of discomfort still.  Having all of those parameters line up so perfectly just seems like too much of a coincidence to swallow.  It does occur to me, though, that my earlier statement -- that the constants aren't derivable from first principles -- should be amended with "as far as we understand at the moment."  After all, the geocentric model, and a lot of other discredited ideas, were discarded not because they overestimated our importance, but because we got better data and used it to assemble a more accurate theory.  It may be that some of these parameters are actually constrained -- that they couldn't have any value other than the one they do -- and we just haven't figured out why yet.

After all, that's my main criticism of Intelligent Design in biology; it boils down to the argument from incredulity -- I can't imagine how this could have happened, so it must be that God did it.

That said, the best models of physics we now have don't give us any clue of why the thirty-odd free parameters in the Standard Model are what they are, so for now, the Weak Anthropic Principle is the best we can do, at least as far as scientific approaches go.  That we live in a Baby Bear universe is no more mysterious than why you find fish in a lake and not in a sand dune.  Our hospitable surroundings are merely good fortune -- a lucky break that was not shared in the other ten-to-the-five-hundredth-power universes (minus one) out there in the cosmic landscape.

****************************************


Tuesday, August 13, 2024

The barrage

At the last Tompkins County Friends of the Library Used Book Sale, I picked up a copy of Donald Yeomans's fascinating book titled Near-Earth Objects (which has the rather alarming subtitle, Finding Them Before They Find Us).  Yeomans has impeccable credentials -- senior fellow with NASA's Jet Propulsion Laboratory, manager/supervisor of the Near-Earth Object Program Office and Solar System Dynamics Group, and researcher with the Deep Impact Project that investigates the composition, origins, and trajectories of comets.  His book is about the potential for a significant asteroid or comet strike on Earth -- and, more importantly, how we might find potentially hazardous orbiting objects soon enough to have a chance to avert the collision.

As Canadian astronaut Chris Hadfield put it, "The dinosaurs went extinct because they didn't have a space program."

One of the topics in Yeomans's book is the history of impacts, including the famous one that ended the Mesozoic Era.  But his timeline goes back a great deal further than that; one of the sections is devoted to a period called the Late Heavy Bombardment -- on the order of four billion years ago -- during which it is thought that the Earth got absolutely pummeled.

What caused this barrage?  Well, first of all, it must be stated that not all scientists even think it happened.  The geological processes on the Earth's surface have erased most of the evidence.  Studies of cratering on the Moon (which presumably would also have gotten clobbered during the same period) have yielded conflicting results; Patrick Boehnke and Mark Harrison, of the University of California, wrote a paper back in 2016 suggesting that the radioisotope dating of rocks from the Moon supported a uniformly decreasing impact rate over its history (i.e., no sudden spike about four billion years ago).

Other researchers disagree.  Three of the largest impact basins on the Moon, the Mare Imbrium, Mare Serenitatis, and Mare Nectaris, all appear to date from right around the time of the hypothesized bombardment.  If the same happened on Earth, it was cataclysmic -- turning large areas of the Earth's crust into molten lava, and vaporizing huge volumes of water in the early oceans.

[Image licensed under the Creative Commons CC-BY-SA, from https://ancient-life-and-history-earth.fandom.com/wiki/Late_Heavy_Bombardment]

Where it gets interesting is the explanation for why the Late Heavy Bombardment happened -- if it did.  The whole thing hinges on a bit of physics that falls into the "stuff that I theoretically knew, but never really thought about" department.

The orbital path of a planet (or asteroid, or comet, or whatever) remains stable as long as nothing adds or removes energy from it.  If something subtracts energy, the orbit becomes smaller; if something adds energy, the orbit gets bigger.  Enough added energy, and it achieves escape velocity and is ejected from the system altogether.  But what would itself have enough energy to interact with something the size of a planet in such a way as to make any difference?

Back in the early history of the Solar System, there was a clutter of debris left over from its formation.  We still have three major bands of it left -- the Asteroid Belt between Mars and Jupiter, the Kuiper Belt beyond the orbit of Neptune, and the Oort Cloud way out past the orbit of Pluto.  There are few asteroids left in the vicinity of the planets, because any that were there were swept up gravitationally.  In fact, that's one of the requirements for an object to be classified as a planet: that it clear the space near it of asteroids.  (This is the characteristic that caused Pluto to get demoted.)

But four billion years ago, there was a great deal more debris around.  Any large-ish asteroid that got near a planet gave it a gravitational yank (and received one in return); if the asteroid passed ahead of the planet, the planet stole a bit of its energy (making the planet's orbit bigger); if it passed behind the planet, the reverse happened (making the planet's orbit shrink).  Well, according to the models described by Yeomans, eventually the pushing and pulling by all of the asteroids added up, and a curious thing happened.

The two largest planets, Jupiter and Saturn, had their orbits altered until they were in a highly stable configuration called a 2:1 orbital resonance.  

What this means is that they were in a pattern where Saturn's orbital period was exactly twice Jupiter's.  (They've since drifted away from that; today Saturn orbits the Sun once every 29.4 years, and Jupiter once every 11.9 years -- a ratio of about 2.5 to 1.)  But when they were in perfect 2:1 resonance, they reinforced each other's gravitational influence on the outer planets, Uranus and Neptune, giving them a kick every time they lined up -- a little like a kid on a playground swing kicking off every time they pass the ground.

This did two things.  First, it gave energy to Uranus and Neptune, making their orbits bigger, moving them outwards.  Second, it subtracted energy from Jupiter and Saturn, making their orbits smaller (and eventually destroying the resonance).  But the important one here is Neptune, because the increase of its orbit moved it out into a region of space that hadn't been cleared of debris.  When Neptune slipped outward into the inner Kuiper Belt, around four billion years ago, this had the effect of slingshotting a great deal of that debris into the inner Solar System...

... turning Earth into a gigantic bullseye for meteor strikes.
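If you want to check the numbers, both the present-day period ratio and the lineup interval (the synodic period, i.e., how often the two planets deliver their combined "kick") fall out of the two orbital periods quoted above:

    # Jupiter and Saturn relative to an exact 2:1 resonance, using the
    # present-day orbital periods quoted above.
    T_JUPITER = 11.9    # years
    T_SATURN = 29.4     # years

    print(f"period ratio today: {T_SATURN / T_JUPITER:.2f} (exact resonance = 2.00)")

    # How often the two planets line up:
    synodic = 1 / (1 / T_JUPITER - 1 / T_SATURN)
    print(f"they line up once every {synodic:.1f} years")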

So it's fascinating that if the Late Heavy Bombardment actually did occur, there's a good model for what might have caused it.

The good news is that now that Jupiter and Saturn are no longer in resonance, Neptune is more or less staying put, so any further target practice is unlikely.  Doesn't mean we're out of the woods completely, of course.  Yeomans's whole book is about the possibility of asteroid strikes.

But at least it looks like the barrage is a thing of the past.

****************************************



Monday, August 5, 2024

A matter of scale

In Douglas Adams's brilliant book The Hitchhiker's Guide to the Galaxy, a pair of alien races, the Vl'Hurg and the G'gugvuntt, had spent millennia fighting each other mercilessly until they intercepted a message from Earth that they misinterpreted as a threat.  They forthwith decided to set aside their grievances with each other, and teamed up for an attack on our planet in retaliation:
Eventually of course, after their Galaxy had been decimated over a few thousand years, it was realized that the whole thing had been a ghastly mistake, and so the two opposing battle fleets settled their few remaining differences in order to launch a joint attack on our own Galaxy...

For thousands more years the mighty ships tore across the empty wastes of space and finally dived screaming on to the first planet they came across -- which happened to be the Earth -- where due to a terrible miscalculation of scale the entire battle fleet was accidentally swallowed by a small dog.

I was reminded of the Vl'Hurg and G'gugvuntt while reading the (much more serious) book The View from the Center of the Universe, by physicist Joel Primack and author and polymath Nancy Abrams.  In it, they look at our current understanding of the basics of physics and cosmology, and how it intertwines with metaphysics and philosophy, in search of a new "foundational myth" that will help us to understand our place in the universe.

What brought up Adams's fictional tiny space warriors was one of the most interesting things in the Primack/Abrams book, which is the importance of scale.  There are about sixty orders of magnitude (powers of ten) between the smallest thing we can talk meaningfully about (the Planck length) and the largest (the size of the known universe), and we ourselves fall just about in the middle.  This is no coincidence, the authors say; much smaller life forms are unlikely to have the complexity to develop intelligence, and much larger ones would be limited by a variety of physical factors, such as the problem that if you increase length in a linear fashion, mass increases as the cube.  (Double the length, and the mass goes up by a factor of eight, for example.)  Galileo knew about this, and used it to explain why the shapes of the leg bones of mice and elephants are different.  Give an animal the size of an elephant the relative leg diameter of a mouse, and it couldn't support its own weight.  (This is why you shouldn't get scared by all of the bad science fiction movies from the fifties with names like The Cockroach That Ate Newark.  The proportions of an insect wouldn't work if it were a meter long, much less twenty or thirty.)

Pic from the 1954 horror flick Them!
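The cube-square law in that paragraph takes about ten lines to demonstrate.  Scale a creature up by a factor of k: its weight grows with its volume (k cubed), but its bones' strength only grows with their cross-sectional area (k squared), so the stress in its legs grows linearly with k:

    # The cube-square law: why giant movie insects can't exist.
    def leg_stress_factor(k):
        """Relative stress in the legs of an animal scaled up by factor k."""
        mass_factor = k**3      # weight scales with volume
        area_factor = k**2      # bone strength scales with cross-section
        return mass_factor / area_factor   # stress grows linearly with k

    for k in (1, 2, 10, 100):   # 100x is roughly cockroach-to-movie-monster
        print(f"scale x{k:>3}: mass x{k**3:>9,}, leg stress x{leg_stress_factor(k):>5.0f}")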

Put simply: scale matters.  Where it gets really interesting, though, is when you look at the fundamental forces of nature.  We don't have a quantum theory of gravity yet, but that hasn't held back technology from using the principles of quantum physics; on the scale of the very small, gravity is insignificant and can be effectively ignored in most circumstances.  Once again, we ourselves are right around the size where gravity starts to get really critical.  Drop an ant off a skyscraper, and it will be none the worse for wear.  A human, though?

And the bigger the object, the more important gravity becomes, and (relatively speaking) the less important the other forces are.  On Earth, mountains can only get so high before the forces of erosion start pulling them down, breaking the cohesive electromagnetic bonds within the rocks and halting further rise.  In environments with lower gravity, though, mountains can get a great deal bigger.  Olympus Mons, the largest volcano on Mars, is almost 22 kilometers high -- 2.5 times taller than Mount Everest.  The larger the object, the more intense the fight against gravity becomes.  The smoothest known objects in the universe are neutron stars, which have such immense gravity their topographic relief over the entire surface is on the order of a tenth of a millimeter.

Going the other direction, the relative magnitudes of the other forces increase.  A human scaled down to the size of a dust speck would be overwhelmed by electromagnetic forces -- for example, static electricity.  Consider how dust clings to your television screen.  These forces become much less important on a larger scale... whatever Gary Larson's The Far Side would have you believe:

Smaller still, and forces like the strong and weak nuclear forces -- the one that allows the particles in atomic nuclei to stick together, and the one that causes some forms of radioactive decay, respectively -- take over.  Trying to use brains that evolved to understand things on our scale (what we term "common sense") simply doesn't work on the scale of the very small or very large.

And a particularly fascinating bit, and something I'd never really considered, is how scale affects the properties of things.  Some properties are emergent; they result from the behavior and interactions of the parts.  A simple example is that water has three common forms, right?  Solid (ice), liquid, and gaseous (water vapor).  Those distinctions become completely meaningless on the scale of individual molecules.  One or two water molecules are not solid, liquid, or gaseous; those terms only acquire meaning on a much larger scale.

This is why it's so interesting to try to imagine what things would be like if you (to use Primack's and Abrams's metaphor) turned the zoom lens one way and then the other.  I first ran into this idea in high school, when we watched the mind-blowing short video Powers of Ten, which was filmed in 1968 (then touched up in 1977) but still impresses:


Anyhow, those are my thoughts about the concept of scale.  An explanation of why the Earth doesn't have to worry about Vl'Hurgs and G'gugvuntts, enormous bugs, or static cling making your child stick to the ceiling.  A relief, really, because there's enough else to lose sleep over.  And given how quickly our common sense fails on unfamiliar scales, it's a good thing we have science to explain what's happening -- not to mention fueling our imaginations about what those scales might be like.

****************************************