Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, August 22, 2025

Bounce

Today's post is about a pair of new scientific papers that have the potential to shake up the world of cosmology in a big way, but first, some background.

I'm sure you've all heard of dark energy, the mysterious energy that permeates the entire universe and acts as a repulsive force, propelling everything (including space itself) outward.  The most astonishing thing is that it appears to account for 68% of the matter/energy content of the universe.  (The equally mysterious, but entirely different, dark matter makes up another 27%, and all of the ordinary matter and energy -- the stuff we see and interact with on a daily basis -- only comprises 5%.)

Dark energy was proposed as an explanation for why the expansion of the universe appears to be speeding up.  Back when I took astronomy in college, I remember the professor explaining that the ultimate fate of the universe depended on only one thing -- the total amount of mass it contains.  Over a certain threshold, its combined gravitational pull would be enough to compress it back into a "Big Crunch"; under that threshold, it would continue to expand forever, albeit at a continuously slowing rate.  So it was a huge surprise when it was discovered that (1) the universe's total mass seemed to be right around the balance point between those two scenarios, and yet (2) the expansion was dramatically speeding up.

So the cosmological constant -- the "fudge factor" Einstein threw into his equations to generate a static universe, and which he later discarded -- seemed to be real, and positive.  To explain this, cosmologists fell back on what amounts to a placeholder: "dark energy" ("dark" because it doesn't interact with ordinary matter at all; it just makes the space containing it expand).  So dark energy, they said, generates what appears to be a repulsive force.  Further, since the model seems to indicate that the quantity of dark energy is invariant -- however big space gets, there's the same amount of dark energy per cubic meter -- its relative effects (as compared to gravity and electromagnetism, for example) increase over time as the rest of the matter and energy thins out.  This resulted in the rather nightmarish scenario of our universe eventually ending when the repulsion from dark energy overwhelms every other force, ripping apart first chunks of matter, then molecules, then the atoms themselves.
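To see why a constant energy density inevitably wins out, here's a minimal back-of-the-envelope sketch (not real cosmology code; the 68/32 split and the neglect of radiation are simplifying assumptions): matter thins out as the cube of the scale factor, while dark energy stays put, so dark energy's share of the total creeps toward 100%.

```python
# Illustrative sketch: matter density dilutes as 1/a^3 as space expands
# (a = scale factor, a = 1 today), while dark energy density stays constant.

def energy_fractions(a, matter_now=0.32, dark_energy_now=0.68):
    """Return (matter fraction, dark energy fraction) of the total
    energy budget at scale factor a.  Radiation is ignored."""
    matter = matter_now / a**3      # matter thins as the volume grows
    dark = dark_energy_now          # same amount per cubic meter, always
    total = matter + dark
    return matter / total, dark / total

for a in (1, 2, 10):
    m, d = energy_fractions(a)
    print(f"a = {a:>2}: matter {m:.1%}, dark energy {d:.1%}")
```

By the time the universe has doubled in size, matter's share has already dropped from about a third to a few percent, which is the whole engine behind the Big Rip scenario.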

The "Big Rip."

[Image is in the Public Domain courtesy of NASA]

I've always thought this sounded like a horrible fate, not that I'll be around to witness it.  This is not even a choice between T. S. Eliot's "bang" or "whimper;" it's like some third option that's the cosmological version of being run through a wood chipper.  But as I've observed before, the universe is under no compulsion to be so arranged as to make me happy, so I reluctantly accepted it.

Earlier this year, though, there was a bit of a shocker that may have given us some glimmer of hope that we're not headed to a "Big Rip."  DESI (the Dark Energy Spectroscopic Instrument) found evidence, which was later confirmed by two other observatories, that dark energy appears to be decreasing over time.  And now a pair of papers has come out showing that the decreasing strength of dark energy is consistent with a negative cosmological constant, and that value is exactly what's needed to make it jibe with a seemingly unrelated (and controversial) model from physics -- string theory.

(If you, like me, get lost in the first paragraph of an academic paper on physics, you'll get at least the gist of what's going on here from Sabine Hossenfelder's YouTube video on the topic.  If from there you want to jump to the papers themselves, have fun with that.)

The upshot is that dark energy might not be a cosmological constant at all; if it's changing, it's actually a field, and therefore associated with a particle.  And the particle that seems to align best with the data as we currently understand them is the axion, an ultra-light particle that is also a leading candidate for explaining dark matter!

So if these new papers are right -- and that's yet to be proven -- we may have a threefer going on here.  Weakening dark energy means that the cosmological constant isn't constant, and is actually negative, which bolsters string theory; and it suggests that axions are real, which may account for dark matter.

In science, the best ideas are always like this -- they bring together and explain lots of disparate pieces of evidence at the same time, often linking concepts no one even thought were related.  When Hess, Matthews, and Vine dreamed up plate tectonics in the 1960s, it explained not only why the continents seemed to fit together like puzzle pieces, but the presence and age of the Mid-Atlantic Ridge, the magnetometry readings on either side of it, the weird correspondences in the fossil record, and the configuration of the "Pacific Ring of Fire" (just to name a few).  Here, we have something that might simultaneously account for some of the biggest mysteries in cosmology and astrophysics.

A powerful claim, and like I said, yet to be conclusively supported.  But it does have that "wow, that explains a lot" characteristic that some of the boldest strokes of scientific genius have had.

And, as an added benefit, it seems to point to the effects of dark energy eventually going away entirely, meaning that the universe might well reverse course at some point and then collapse -- and, perhaps, bounce back in another Big Bang.  This is a version of the cyclic universe idea, a modern form of which was described by the brilliant physicist Roger Penrose.  Which I find to be a much more congenial way for things to end.

So keep your eyes out for more on this topic.  Cosmologists will be working hard to find evidence to support this new contention -- and, of course, evidence that might discredit it.  It may be that it'll come to nothing.  But me?  I'm cheering for the bounce.

A fresh start might be just what this universe needs.

****************************************


Wednesday, July 16, 2025

Tense situation

In my Critical Thinking classes, I did a unit on statistics and data, and how you tell if a measurement is worth paying attention to.  One of the first things to consider, I told them, is whether a particular piece of data is accurate or merely precise -- two words that in common parlance are used interchangeably.

In science, they don't mean the same thing.  A piece of equipment is said to be precise if it gives you close to the same value every time.  Accuracy, though, is a higher standard; data are accurate if the values are not only close to each other when measured with the same equipment, but agree with data taken independently, using a different device or a different method.

A simple example is that if my bathroom scale tells me every day for a month that my mass is (to within one kilogram either way) 239 kilograms, it's highly precise, but very inaccurate.

This is why scientists always look for independent corroboration of their data.  It's not enough to keep getting the same numbers over and over; you've got to be certain those numbers actually reflect reality.
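The bathroom-scale example can be made concrete with a toy calculation (the numbers are hypothetical, chosen to match the joke above): small spread between readings means high precision, while the offset from the true value measures (in)accuracy.

```python
import statistics

# Toy illustration of precision vs. accuracy: a biased bathroom scale
# that reads nearly the same wrong value every day is precise, not accurate.

true_mass_kg = 75.0
scale_readings = [239.2, 238.7, 239.5, 238.9, 239.1]  # consistent, but wrong

spread = statistics.stdev(scale_readings)              # precision: small spread
bias = statistics.mean(scale_readings) - true_mass_kg  # accuracy: huge offset

print(f"spread: {spread:.2f} kg (precise: readings agree with each other)")
print(f"bias:   {bias:.1f} kg (inaccurate: readings miss the true value)")
```

The only way to catch the bias is to compare against an independent measurement -- which is exactly the point about corroboration.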

This all comes up because of a new look at one of the biggest scientific questions known -- the rate of expansion of the entire universe.

[Image is in the Public Domain, courtesy of NASA]

A while back, I wrote about some experiments that were allowing physicists to home in on the Hubble constant, a quantity that is a measure of how fast everything in the universe is flying apart.  And the news appeared to be good; from a range of between 50 and 500 kilometers per second per megaparsec, physicists had been able to narrow down the value of the Hubble constant to between 65.3 and 75.6.
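To get a feel for what those units mean, here's a quick sketch using Hubble's law, v = H0 × d (the 67 and 73 km/s/Mpc figures are illustrative stand-ins for the two camps' values, not exact quotes from any one paper):

```python
# Hubble's law: recession velocity = H0 * distance.
# The two measurement routes give roughly 67 and 73 km/s/Mpc;
# at cosmological distances, that disagreement is enormous.

def recession_velocity(h0_km_s_per_mpc, distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance."""
    return h0_km_s_per_mpc * distance_mpc

d = 100  # megaparsecs (roughly 326 million light years)
v_cmb = recession_velocity(67, d)  # CMB-based estimate
v_sn = recession_velocity(73, d)   # supernova/Cepheid-based estimate
print(f"At {d} Mpc: {v_cmb} vs {v_sn} km/s -- a gap of {v_sn - v_cmb} km/s")
```

A six km/s/Mpc disagreement sounds small until you multiply it by cosmological distances, which is why the tension matters.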

The problem is, nobody's been able to get closer than that -- and in fact, recent measurements have widened, not narrowed, the gap.

There are two main ways to measure the Hubble constant.  The first is to use information like red shift, Cepheid variables (stars whose period of brightness oscillation varies predictably with their intrinsic brightness, making them a good "standard candle" for determining the distance to other galaxies), and type Ia supernovae to figure out how fast the galaxies we see are receding from each other.  The other is to use the cosmic microwave background radiation -- the leftovers from the radiation produced by the Big Bang -- to determine the age of the universe, and therefore how fast it's expanding.

So this is a little like checking my bathroom scale by weighing myself on it, then comparing my weight as measured by the scale at the gym and seeing if I get the same answer.

And the problem is, the measurement of the Hubble constant by these two methods is increasingly looking like it's resulting in two irreconcilably different values.  

The genesis of the problem is that as our measurement ability has become more and more precise, the error bars associated with data collection have shrunk considerably.  And if the two measurements were not only precise, but also accurate, you would expect that our increasing precision would result in the two values getting closer and closer together.

Exactly the opposite has happened.

"Five years ago, no one in cosmology was really worried about the question of how fast the universe was expanding," said astrophysicist Daniel Mortlock of Imperial College London.  "We took it for granted.  Now we are having to do a great deal of head scratching – and a lot of research...  Everyone’s best bet was that the difference between the two estimates was just down to chance, and that the two values would converge as more and more measurements were taken. In fact, the opposite has occurred.  The discrepancy has become stronger.  The estimate of the Hubble constant that had the lower value has got a bit lower over the years and the one that was a bit higher has got even greater."

This discrepancy -- called the Hubble tension -- is one of the most vexing problems in astrophysics today, especially given that repeated analysis of both methods used to determine the expansion rate has turned up no apparent problem with either one.

The two possible solutions to this boil down to (1) our data are off, or (2) there's new physics we don't know about.  A new solution that falls into the first category was proposed last week at the annual meeting of the Royal Astronomical Society by Indranil Banik of the University of Portsmouth, who has been deeply involved in researching this puzzle.  It's possible, he said, that the problem is with one of our fundamental assumptions -- that the universe is both homogeneous and isotropic.

These two are like the ultimate extension of the Copernican principle, that the Earth (and the Solar System and the Milky Way) do not occupy a privileged position in space.  Homogeneity means that any randomly-chosen blob of space contains, on average, as much stuff as any other; in other words, matter and energy are locally clumpy but universally spread out.  Isotropy means there's no difference dependent on direction; the universe looks pretty much the same no matter which way you look.

What, Banik asks, if our mistake is in putting together the homogeneity principle with measurements of what the best-studied region of space is like -- the parts near us?

What if we live in a cosmic void -- a region of space with far less matter and energy than average?

We've known those regions exist for a while; in fact, regular readers might recall that a couple of years ago, I wrote a post about one of the biggest, the Boötes Void, which is so large and empty that if we lived right at the center of it, we wouldn't even have been able to see the nearest stars to us until the development of powerful telescopes in the 1960s.  Banik suggests that the void we're in isn't as dramatic as that, but that a twenty percent lower-than-average mass density in our vicinity could account for the discrepancy in the Hubble constant.

"A potential solution to [the Hubble tension] is that our galaxy is close to the center of a large, local void," Banik said.  "It would cause matter to be pulled by gravity towards the higher density exterior of the void, leading to the void becoming emptier with time.  As the void is emptying out, the velocity of objects away from us would be larger than if the void were not there.  This therefore gives the appearance of a faster local expansion rate...  The Hubble tension is largely a local phenomenon, with little evidence that the expansion rate disagrees with expectations in the standard cosmology further back in time.  So a local solution like a local void is a promising way to go about solving the problem."

It would also, he said, line up with data on baryon acoustic oscillations, the fossilized remnants of shock waves from the Big Bang, which account for some of the fine structure of the universe.

"These sound waves travelled for only a short while before becoming frozen in place once the universe cooled enough for neutral atoms to form," Banik said.  "They act as a standard ruler, whose angular size we can use to chart the cosmic expansion history.  A local void slightly distorts the relation between the BAO angular scale and the redshift, because the velocities induced by a local void and its gravitational effect slightly increase the redshift on top of that due to cosmic expansion.  By considering all available BAO measurements over the last twenty years, we showed that a void model is about one hundred million times more likely than a void-free model with parameters designed to fit the CMB observations taken by the Planck satellite, the so-called homogeneous Planck cosmology."

Which sounds pretty good.  I'm only a layperson, but this is the most optimistic I've heard an astrophysicist get on the topic.  Now, it falls back on the data -- showing that the mass/energy density in our local region of space really is significantly lower than average.  In other words, that the universe isn't homogeneous, at least not on those scales.

I'm sure the astrophysics world will be abuzz with this new proposal, so keep your eyes open for developments.  Me, I think it sounds reasonable.  Given recent events here on Earth, it's unsurprising the rest of the universe is rushing away from us.  I bet the aliens lock the doors on their spaceships as they fly by.

****************************************


Tuesday, March 25, 2025

Bang or whimper

I've always loved Robert Frost's razor-sharp poem, written in 1920, called "Fire and Ice":

Some say the world will end in fire,
Some say in ice.
From what I’ve tasted of desire
I hold with those who favor fire.
But if it had to perish twice,
I think I know enough of hate
To say that for destruction ice
Is also great
And would suffice.

How the world will end has fascinated people for as long as we've been able to think about the question.  Various mythologies created their own pictures of the universe's swan song -- the best-known of which is the Norse tale of Ragnarök, when the forces of good (the Æsir, Vanir, and their allies) teamed up against the forces of evil (the Jötnar, trolls, and various Bad Guys like Surtr, Midgard's Serpent, Níðhöggr, and, of course, Loki).  Interestingly, in the Norse conception of things, good and evil were pretty evenly matched, and they more or less destroyed each other; only a few on either side survived, along with enough humans to repopulate the devastated world.

Once we started to take a more rational view of things, scientists naturally brought their knowledge to bear on the same question.  Having worked out the mechanics of stellar evolution, we've become fairly certain that the Earth will meet its end when the Sun runs out of hydrogen fuel and swells up into a red giant -- at which point it's likely the Earth's orbit will be inside the radius of the Sun -- then ultimately jettisons its outer atmosphere to become a white dwarf.

But what about the universe as a whole?

When I was in school, just about everyone (well, just about everyone who understood science, anyhow) accepted that the universe had begun at the Big Bang.  The mechanism for what caused it, and what (if anything) had come before it, was unknown then and is still unknown now; but once it occurred, space expanded dramatically, carrying matter and energy with it, an outward motion that is still discernible in the red shift of distant galaxies.  But would that expansion go on forever?  I think the first time I ran into a considered answer to the question was in Carl Sagan's Cosmos, where he explained that the ultimate fate of the universe depended on its mass.  If the overall mass of the universe was above a particular quantity, its gravity would be sufficient to halt the expansion, ultimately sending everything hurtling backward into a "Big Crunch."  Below that critical quantity -- the expansion would slow continuously but would nevertheless keep going, spreading everything out until it was a uniform, thin, cold gas, a fate that goes by the cheery name "the Heat Death of the Universe."
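The "particular quantity" of mass in that account corresponds to what cosmologists call the critical density, ρ_c = 3H₀²/(8πG).  Here's a rough sketch of the arithmetic (taking H₀ ≈ 70 km/s/Mpc as an illustrative round number):

```python
import math

# The balance point between Big Crunch and eternal expansion is the
# critical density: rho_c = 3 * H0^2 / (8 * pi * G).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22  # meters per megaparsec

h0_si = 70 * 1000 / MPC_IN_M  # convert 70 km/s/Mpc to units of 1/s
rho_critical = 3 * h0_si**2 / (8 * math.pi * G)

print(f"critical density ~ {rho_critical:.1e} kg/m^3")
# On the order of 1e-26 kg/m^3 -- a few hydrogen atoms per cubic meter.
```

That the universe's measured density sits so close to this almost absurdly tiny number is part of what made the 1998 result such a shock.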

But it turned out the picture wasn't even that simple.  In 1998, Adam Riess and others discovered the baffling fact that the universe's expansion wasn't slowing at all, so neither of the above scenarios seemed to be right.  Data from distant galaxies showed -- and it has since been confirmed over and over -- that the expansion is accelerating.  The existence of a repulsive force powering the expansion was proposed, and nicknamed dark energy, but how that could possibly work was (and is) unknown.

Then they found out that dark energy comprises roughly seventy percent of the universe's total mass-energy.  Physicists had a huge conundrum to explain.

[Image licensed under the Creative Commons NASA/ESA, SN1994D, CC BY 3.0]

It also led to another possibility for the universe's fate, and one that's even more dire than the Heat Death.  If the amount of dark energy per unit volume of space is constant -- which it appeared to be -- then the relative proportion of dark energy will increase over time, because conventional matter and energy is thinning out as space expands (and dark energy is not).  As this happens, the relative strength of the dark energy repulsion will eventually increase to the point that it overwhelms all other forces, including electromagnetism and the nuclear forces -- tearing matter up into a soup of fundamental particles.

The "Big Rip."

Confused yet?  Because the reason all this comes up is that there's just been another discovery, this one by DESI (the Dark Energy Spectroscopic Instrument) indicating fairly strongly that the force of dark energy has been decreasing over time.  I say "fairly strongly" because at the moment the data sets this is based on range from 2.8 to 4.2 sigma (this is an indicator of how strongly the data supports the claim; for reference, 3 sigma represents a 0.3% possibility that the data is a statistical fluke, and 5 sigma is considered the threshold for breaking out the champagne).  So it appears that although the quantity of dark energy per unit volume of space is constant, the strength of the dark energy force is less now than it was in the early universe.
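For the curious, the translation from "sigma" to "chance of a fluke" is just the two-sided tail probability of a normal distribution, which the Python standard library can compute directly (this is the standard conversion, not anything specific to the DESI papers):

```python
import math

# Convert a significance in sigma to the two-sided probability that a
# deviation at least that large arises by pure chance under a normal
# distribution: p = erfc(sigma / sqrt(2)).

def fluke_probability(sigma):
    """Two-sided tail probability for a given number of sigma."""
    return math.erfc(sigma / math.sqrt(2))

for s in (2.8, 3.0, 4.2, 5.0):
    print(f"{s} sigma -> {fluke_probability(s):.2%} chance of a fluke")
```

Running this shows why 3 sigma corresponds to the roughly 0.3% quoted above, and why 5 sigma (about one chance in 1.7 million) is the champagne threshold.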

So what does this mean about the fate of the universe?  Will it be, in Frost's terms, fire or ice?  A bang or a whimper?  We don't know.  The first thing is to figure out what the hell dark energy actually is, and how it works, and -- if the DESI results hold up -- why it seems to be diminishing.

All I can say is the cosmologists have a lot of explaining to do.

****************************************


Monday, March 3, 2025

Lost horizon

While our knowledge of the origin of the universe has grown tremendously in the past hundred years, there are still plenty of cosmological mysteries left to solve.

One of the most vexing is called the horizon problem.

It's one of those situations where at first, it seems like "where's the problem?"  Then you look into it a little more, and kind of go, "... oh."  The whole thing has to do with how fast a change can percolate through a system.  Amongst the (many) outcomes of the Special Theory of Relativity, we are reasonably certain that the upper bound at which disturbances of any kind can propagate is the speed of light.

So if a change of some sort happens in region A, but it is so far away from region B that there hasn't been enough time for light to travel between the two, it is fundamentally impossible for that change to have any effect at all in region B.  Such regions are said to be causally disconnected.

So far, so good.  The thing is, though, there are plenty of sets of causally disconnected regions in the universe.  If at midnight in the middle of winter you were to aim a very powerful telescope straight up into the sky, the farthest objects you could see are on the order of ten billion light years away.  Do the same six months later, in midsummer, and you'd be looking at objects ten billion light years away in the other direction.  The distance between the two is therefore on the order of twenty billion light years (and this is ignoring the expansion of the universe, which makes the problem even worse).  Since the universe is only something like 13.8 billion years old, there hasn't been enough time for light to travel between the objects you saw in winter and those you saw in summer.
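The arithmetic in that paragraph is simple enough to write down explicitly (ignoring expansion, as the text does; light covers one billion light years per billion years):

```python
# The horizon-problem arithmetic in miniature: two regions seen in
# opposite directions are farther apart than light could have traveled
# in the age of the universe, so they are causally disconnected.

age_of_universe_gyr = 13.8    # billions of years
distance_each_way_gly = 10.0  # billions of light years, each direction

separation_gly = 2 * distance_each_way_gly
max_light_travel_gly = age_of_universe_gyr  # 1 Gly per Gyr

causally_disconnected = separation_gly > max_light_travel_gly
print(f"Separation: {separation_gly} Gly; light could cover at most "
      f"{max_light_travel_gly} Gly -> disconnected: {causally_disconnected}")
```

Twenty billion light years of separation versus 13.8 billion years of travel time: the two patches of sky simply cannot have exchanged any signal.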

Therefore, they can't affect each other in any way.  Furthermore, they've always been causally disconnected, at least as far back as we have good information.  By our current models, they were already too far apart to communicate roughly 380,000 years after the Big Bang, the point at which decoupling occurred and the 2.7 K cosmic microwave background radiation formed.

Herein lies the problem.  The cosmic microwave background (CMB for short) is very nearly isotropic -- it's the same no matter which direction you look.  There are minor differences in the temperature, thought to be due to quantum fluctuations at the moment of decoupling, but those average out to something very close to uniformity.  It seems like some process homogenized it, a bit like stirring the cream into a cup of coffee.  But how could that happen, if opposite sides of the universe were already causally disconnected from each other at the point when it formed?

A map of the CMB from the Wilkinson Microwave Anisotropy Probe [Image is in the Public Domain courtesy of NASA]

It gets worse still, however, as I found out when I watched a video by the awesome physicist and science educator Sabine Hossenfelder a couple of days ago.  Because a 2003 paper found that the CMB isn't isotropic after all.

I'm not talking about the CMB dipole anisotropy -- the fact that one region of the sky has CMB a little warmer than average, and the opposite side of the sky a little cooler than average.  That much we understand pretty well.  The Milky Way Galaxy is itself moving through space, and that creates a blue shift on one side of the sky and a red shift on the other, accounting for the measurably warmer and cooler regions, respectively.

What Hossenfelder tells us about is that there's an anisotropy in the sizes of the warm and cool patches.  It's called the hemispherical power spectrum asymmetry, and simply put, if you sort out the sizes of the patches at different temperatures, you find that one side of the sky is "grainier" than the other.  Like I said, we've known about this since 2003, but there was nothing in any of the models that could account for this difference, so cosmologists kind of ignored the issue in the hopes that better data would make the problem go away.

It didn't.  A recent paper using newly-collected data from the Planck mission found that the hemispherical power spectrum asymmetry is real.

And we haven't the first idea what could have caused it.

In a way, of course, this is tremendously exciting.  A great many scientific discoveries have started with someone looking at something, frowning, and saying, "Okay, hang on a moment."  Here we have something we already didn't understand (CMB isotropy and the horizon problem) gaining an added layer of weirdness (it's not completely isotropic after all, but is anisotropic in a really strange way).  What this shows us is that our current models of the origins of the universe are still incomplete.

Looks like it's a good time to go into cosmology.  In what other field is there a universe-sized problem waiting to be solved?

****************************************


Thursday, February 20, 2025

Order out of chaos

When I was an undergraduate, I sang in the University of Louisiana Choir in a production of Franz Josef Haydn's spectacular choral work The Creation.

The opening is a quiet, eerie orchestral passage called "The Representation of Chaos" -- meant to evoke the unformed "void" that made up the universe prior to the moment of creation.  Then the Archangel Raphael sings, "In the beginning, God made Heaven and Earth; and the Earth was without form and void, and darkness was upon the face of the deep."  The chorus joins in -- everything still in a ghostly pianissimo -- "In the spirit, God moved upon the face of the waters; and God said, 'Let there be light.  And... there... was...

...LIGHT!'"

The last word is sung in a resounding, major-chord fortissimo, with the entire orchestra joining in -- trumpets blaring, timpani booming, the works.  

Even if you don't buy the theology, it's a moment that sends chills up the spine.  (You can hear it yourself here.)

Of course, the conventional wisdom amongst the cosmologists has been that the universe didn't begin in some kind of chaotic, dark void; quite the opposite.  The Big Bang -- or at least, the moment after it -- is usually visualized as a searingly hot, dense fireball, which expanded and cooled, leading to a steady entropy increase.  So by our current models, we're heading toward chaos, not away from it.

Well, maybe.

A recent paper by the pioneering Portuguese physicist and cosmologist João Magueijo has proposed a new model for the origins of the universe that overturns that entire scenario -- and far from being ridiculed off the stage, he's captured the attention even of hard-nosed skeptics like Sabine Hossenfelder, who did a video on her YouTube channel about his paper a few days ago that is well worth watching in its entirety.  But the gist, as far as a layperson like myself can understand it, goes like this.

It's long been a mystery why the fundamental constants of physics have the values they do, and why they actually are constant.  A handful of numbers -- the speed of light, the fine-structure constant (which sets the strength of the electromagnetic interaction), the strength of the gravitational force, and a few others -- govern the behavior of, well, pretty much everything.  None seem to be derivable from more fundamental principles; i.e., they appear to be arbitrary.  None have ever been observed to shift, regardless of how far out in space (and therefore how far back in time) you look.  And what's curious is that most of them have values that are tightly constrained, at least from our perspective.  Even a percent or two change in either direction, and you'd have situations like stars burning out way too fast to host stable planetary systems, atoms themselves falling apart, or matter not generating sufficient gravity to clump together.

So to many, the universe has appeared "fine-tuned," as if some omnipotent deity had set the dials just right at the moment of creation of the universe to favor everything we see around us (including life).  This is called the anthropic principle -- the strong version implying a master fine-tuner, the weak version being the more-or-less tautological statement that if those numbers had been any different, we wouldn't be here to ask the question.

But that doesn't get us any closer to figuring out why the fundamental constants are what they are.  Never one to shy away from the Big Questions, that's exactly what Magueijo has undertaken -- and what he's come up with is, to put it mildly, intriguing.

What he did was to start from the assumption that the fundamental constants aren't... constant.  That In The Beginning (to stick with our original Book of Genesis metaphor), the universe was indeed chaos -- the constants could have had more or less any values.  The thing is, the constants aren't all independent of each other.  Just as numbers in our mundane life can push and pull on each other -- to give a simple example, if you alter housing prices in a town, other numbers such as average salaries, rates of people moving in and moving out, tax rates, and funding for schools will shift in response -- the fundamental constants of physics affect each other.  What Magueijo did was to set some constraints on how those constants can evolve, then let the model run to see what kind of universe eventually came out.

And what he found was that after jittering around for a bit, the constants eventually found stable values and settled into an equilibrium.  In Hossenfelder's video, she uses the analogy of sand grains on a vibration plate being jostled into spots that have the highest stability (the most resistance to motion).  At that point, the pattern that emerges doesn't change again no matter how long you vibrate the plate.  What Magueijo suggests is that the current configuration of fundamental constants may not be the only stable one, but the range of what the constants could be might be far narrower than we'd thought -- and it also explains why we don't see the constants changing any more.

Why they are, in fact, constant.

Stable pattern of grains on a vibrating pentagonal Chladni plate [Image licensed under the Creative Commons Matemateca (IME USP), Chladni plate 16, CC BY-SA 4.0]

Magueijo's work might be the first step toward solving one of the most vexing questions of physics -- why the universe exists with these particular laws and constants, despite there not seeming to be any underlying reason for it.  Perhaps we've been looking at the whole thing the wrong way.  The early universe really may have been without substance and void -- but instead of a voice crying "let there be light!", things simply evolved until they reached a stable configuration that then generated everything around us.

It might not be as audibly dramatic as Haydn's vision of The Creation, but it's just as much of an eye-opener.

****************************************

Tuesday, November 12, 2024

Bubbles, dimensions, and black holes

One of the weirder claims of modern physics, which I first ran into when I was reading about string theory a few years ago, is that the universe could have more than three spatial dimensions -- but the extra ones are "curled up" and are (extremely) sub-microscopic.

I've heard it explained by an analogy of an ant walking on a string.  There are two ways the ant can go -- back and forth on the string, or around the string.  The "around the string" dimension is curled into a loop, whereas the back-and-forth one has a much greater spatial extent.

Scale that up, if your brain can handle it, to three dimensions of the back-and-forth variety, and six or seven of the around-the-string variety, and you've got an idea of what the claim is.

The problem is, those extra dimensions have proven to be pretty thoroughly undetectable, which has led critics to quote Wolfgang Pauli's quip that it's a theory that "is not even wrong" -- it's unverifiable, which is synonymous with saying "it isn't science."  But the theorists are still trying like mad to find an indirect method to show the existence of these extra dimensions.

To no avail at present, although we did have an interesting piece added to the puzzle a while back that I somehow missed the first time 'round.  Astronomers Katie Mack of North Carolina State University and Robert McNees of Loyola University posted a paper on arXiv that puts a strict limit on the number of macroscopic dimensions -- and that limit is three.

So sorry, fans of A Wrinkle in Time, there's no such thing as the tesseract.  The number of dimensions is three, and three is the number of dimensions.  Not four.  Nor two, unless thou proceedest on to three. 

Five is right out.

The argument by Mack and McNees -- which, although I have a B.S. in physics, I can't begin to comprehend fully -- boils down to the fact that the universe is still here.  If there were extra macroscopic spatial dimensions (whether or not we were aware of them), two cosmic particles of sufficient energy could collide and generate a miniature black hole, which would then give rise to a universe with different physical laws.  This new universe would expand like a bubble rising in a lake, its boundary moving at the speed of light, ripping apart everything down to and including atoms as it went.

"If you’re standing nearby when the bubble starts to expand, you don’t see it coming," Mack said.  "If it’s coming at you from below, your feet stop existing before your mind realizes that."

This has been one of the concerns about the Large Hadron Collider, since the LHC's entire purpose is to slam together particles at enormous velocities.  Ruth Gregory of Durham University showed eight years ago that there was a non-zero possibility of generating a black hole that way, which triggered the usual suspects to conjecture that the scientists were trying to destroy the universe.  Why they would do that, when they inhabit said universe, is beyond me.  In fact, since they'd be standing right next to the Collider when it happened, they'd go first, before they even had a chance to cackle maniacally and rub their hands together about the fate of the rest of us.

"The black holes are quite naughty," Gregory said, which is a sentence that is impossible to hear in anything but a British accent.  "They really want to seed vacuum decay.  It’s a very strong process, if it can proceed."

"No structures can exist," Mack added.  "We’d just blink out of existence."

Of course, it hasn't happened, so that's good news.  Although I suppose this wouldn't be a bad way to go, all things considered.  At least it would be over quickly, not to mention being spectacular.  "Here lies Gordon, killed during the formation of a new universe," my epitaph could read, although there wouldn't be anyone around to write it, nor anything to write it on.

Which is kind of disappointing.

Anyhow, what Mack and McNees have shown is that this scenario could only happen if there were a fourth macroscopic dimension, and since it hasn't happened in the universe's 13.8-billion-year history, it probably isn't going to.

So don't cancel your meetings this week.  Mack and McNees have shown that any additional spatial dimensions over the usual three must be smaller than 1.6 nanometers, which is about three times the diameter of your average atom; bigger than that, and we would already have become victims of "vacuum decay," as the expanding-bubble idea is called.

A cheering notion, that.  Although I have to say, it's an indication of how bad everything else has gotten that "We're not dead yet" is the best I can do for good news.


That's our news from the world of scientific research -- particle collisions, expanding black holes, and vacuum decay.  Myself, I'm not going to worry about it.  I figure if it happens, I'll be gone so fast I won't have time to be upset at my imminent demise, and afterwards none of my loved ones will be around to care.  Another happy thought is that I'll take Nick Fuentes, Tucker Carlson, Elon Musk, Stephen Miller, and Andrew Tate along with me, which might almost make destroying the entire universe worth it.

****************************************


Wednesday, August 28, 2024

Baby Bear's universe

The idea of Intelligent Design is pretty flimsy, at least when it comes to biology.  The argument boils down to something the ID proponents call irreducible complexity -- that there are some features in organisms that are simply too complex, requiring too many interlocking parts, to have evolved through natural selection.  The problem is, the ones most commonly cited, such as the vertebrate eye, have been explained pretty thoroughly, with nothing needed but a good understanding of genetics, biochemistry, and physiology to comprehend how they evolved.  The best takedown of biological ID remains Richard Dawkins's The Blind Watchmaker, which absolutely shreds the arguments of ID proponents like Michael Behe.  (Yes, I know Dawkins has recently made statements indicating that he holds some fairly repulsive opinions; I never said he was a nice guy, but there's no doubt that his writings on evolutionary biology are on-point.)

While biological ID isn't worth much, there's a curious idea from physics that has even the reputable scientists wondering.  It has to do with the number of parameters (by some estimates, around thirty of them) in the Standard Model of Particle Physics and the Theories of Relativity that don't appear to be derivable from first principles; in other words, we know of no compelling reason why they are the values they are, and those values are only known empirically.

[Image licensed under the Creative Commons Cush, Standard Model of Elementary Particles, CC BY 3.0]

More eye-opening is the fact that for most of them, if they held any other values -- in some cases, off by only a couple of percent either way -- the universe would be uninhabitable.

Here are a few examples:
  • The degree of anisotropy (unevenness in density) of the cosmic microwave background radiation.  This is thought to reflect the "clumpiness" of matter in the early universe, which amounts to about one part in ten thousand.  If it was only a little bigger -- one part in a thousand -- the mutual attraction of those larger clumps of matter would have triggered early gravitational collapse, and the universe would now be composed almost entirely of supermassive black holes.  Only a little smaller -- one part in a hundred thousand -- and there would have been insufficient gravitational attraction to form stars, and the universe would be a thin, cold fog of primordial hydrogen and helium.
  • The fact that electrons have a spin of one-half, making them fermions.  Fermions have an odd property: no two can occupy the same quantum mechanical state, something called the Pauli Exclusion Principle.  (Bosons, such as photons, don't have that restriction, and can pass right through one another.)  This feature is why electrons stack up in distinct orbitals in atoms.  If they had integer spin, there would be no such thing as chemistry.
  • The masses of the various subatomic particles.  To take only one example, if the quarks that make up protons and neutrons were much heavier, the strong nuclear force would all but evaporate -- meaning that the nuclei of atoms would fly apart.  (Well, more accurately, they never would have formed in the first place.)
  • The value of the fine-structure constant, which is about 1/137 (it's a dimensionless number, so it doesn't matter what units you use).  This constant determines, among other things, the relative strength of the electromagnetic and strong nuclear forces.  Any larger, and atoms would collapse; any smaller, and they would break apart into their fundamental particles.
  • The value of the gravitational constant G.  It's about 6.67 x 10^-11 meters cubed per kilogram per second squared -- i.e., a really tiny number, meaning gravity is an extremely weak force.  If G was larger, stars would burn through their hydrogen fuel much faster, and it's doubtful they'd live long enough for planets to have time to evolve intelligent life.  If G was smaller, there wouldn't be enough gravitational pull to initiate fusion in the first place.  No fusion = no stars.
  • The flatness of the universe.  While space near massive objects is curved as per the General Theory of Relativity, its overall shape is apparently Euclidean.  Its makeup -- around 5% conventional matter and energy, 27% dark matter, and 68% dark energy -- is exactly what you'd need to generate a flat universe.
  • The imbalance between matter and antimatter.  There appears to be no reason why, at the Big Bang, there weren't exactly equal numbers of matter and antimatter particles created.  But in fact -- and fortunately for us -- there was a very slight imbalance favoring matter.  The estimate is that there was about one extra unpaired matter particle out of every one hundred million pairs, so when the pairs underwent mutual annihilation, those few extra particles were left over.  The survivors became the matter we have today; without that tiny imbalance, the entire universe today would be filled with nothing but photons.
  • The cosmological constant -- a repulsive force exerted by space itself (which is the origin of dark energy).  This is the most amazing one, because for a long time, physicists thought the cosmological constant was exactly zero; Einstein looked upon his introduction of a nonzero cosmological constant as an inexcusable fudge factor in his equations, and called his attempt to shoehorn it in his "greatest blunder."  In fact, recent studies show that the cosmological constant does exist, but it's so close to zero that it's hard to imagine -- it's about a decimal point, followed by 121 zeroes, followed by a 3 (as expressed in Planck units).  But if it were negative, the universe would likely have recollapsed by now -- and any bigger than it is, and the expansion of space would have overwhelmed gravity and torn apart matter completely!
And so on and so forth.  The degree of fine-tuning that seems to be required to set all these independent parameters so that the conditions are juuuuuust right for our existence (to borrow a phrase from Baby Bear) strikes a lot of people, even some diehard rationalist physicists, as mighty peculiar.  As cosmologist Fred Hoyle put it, "It looks very much as if a super-intellect has monkeyed with physics as well as with chemistry and biology."
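The matter/antimatter bookkeeping in that list is simple enough to check with a few lines of arithmetic.  Here's a minimal sketch in Python, using the one-in-a-hundred-million figure quoted above and assuming each annihilating pair yields two photons:

```python
# Back-of-the-envelope baryon asymmetry, using the figures quoted above:
# one extra unpaired matter particle per hundred million matter/antimatter pairs.
pairs = 100_000_000        # matter/antimatter pairs in our sample
extra = 1                  # the lone unpaired matter particle
photons = 2 * pairs        # each annihilating pair yields (at least) two photons
survivors = extra          # everything else mutually annihilates

photons_per_baryon = photons / survivors
print(photons_per_baryon)  # ~2 x 10^8 photons per surviving matter particle
```

Which is consistent with a universe overwhelmingly full of photons, with ordinary matter as a trace leftover.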

The idea that some Master Architect twiddled the knobs on the various constants in physics, setting them exactly as needed for the production of matter and ultimately ourselves, is called the Strong Anthropic Principle.  It sets a lot of people's teeth on edge -- it's a little too much like the medieval idea of humanity's centrality in the universe, something that was at the heart of the resistance to Copernicus's heliocentric model.  It seems like all science has done since then is to move us farther from the center -- first, the Earth orbits the Sun; then, the stars themselves are suns, and our own Sun is only a smallish and rather ordinary one; then, the Sun and planets aren't central to the galaxy; and finally, our own galaxy is only one of billions.

Now, suddenly, the fine-tuning argument has seemingly thrust us back into a central position.  However small a piece of the cosmos we actually represent, was it all set this way for our benefit?

In his book The Cosmic Landscape: String Theory and the Illusion of Intelligent Design, theoretical physicist Leonard Susskind answers this with a resounding "no."  His argument, which is sometimes called the Weak Anthropic Principle, looks at the recent advances in string theory, inflation, and cosmology, and suggests that the apparent fine-tuning arises because the cosmos we're familiar with is only one pocket universe in a (much) larger "landscape," where the process of dropping into a lower energy state triggers not only expansion, but also sets the values of the various physical parameters.  Afterward, each of those bubbles is then governed by its own physics.  Most would be inhospitable to life; a great many probably don't have atoms heavier than helium.  Some probably have very short life spans, collapsing almost immediately after formation.  And the models suggest that the number of different possible configurations -- different settings on the knobs, if you will -- might be as many as ten to the five-hundredth power.

That's a one followed by five hundred zeroes.

Susskind suggests that we live in this more-or-less friendly one not because the constants were selected by a deity with us in mind, but because if our universe's constants had any other value, we wouldn't be here to ask the question.  It might be extremely unlikely that a universe would have exactly these settings, but if you have that many universes to choose from, they're going to show up that way somewhere.

We only exist because this particular universe is the one that got the values right on the nose.

While I think this makes better sense than the Master Architect idea of the Strong Anthropic Principle -- and I certainly don't want to pretend I could argue the point with a physicist of Susskind's caliber -- I have to admit feeling a twinge of discomfort still.  Having all of those parameters line up so perfectly just seems like too much of a coincidence to swallow.  It does occur to me that I should amend my earlier statement, that the constants aren't derivable from first principles, by adding "as far as we understand at the moment."  After all, the geocentric model, and a lot of other discredited ideas, were discarded not because they overestimated our importance, but because we got better data and used it to assemble a more accurate theory.  It may be that some of these parameters are actually constrained -- that they couldn't have any other value than the one they do -- and we just haven't figured out why yet.

After all, that's my main criticism of Intelligent Design in biology; it boils down to the argument from incredulity -- I can't imagine how this could have happened, so it must be that God did it.

That said, the best models of physics we now have don't give us any clue of why the thirty-odd free parameters in the Standard Model are what they are, so for now, the Weak Anthropic Principle is the best we can do, at least as far as scientific approaches go.  That we live in a Baby Bear universe is no more mysterious than why you find fish in a lake and not in a sand dune.  Our hospitable surroundings are merely good fortune -- a lucky break that was not shared in the other ten-to-the-five-hundredth-power universes (minus one) out there in the cosmic landscape.

****************************************


Monday, August 5, 2024

A matter of scale

In Douglas Adams's brilliant book The Hitchhiker's Guide to the Galaxy, a pair of alien races, the Vl'Hurg and the G'gugvuntt, spend millennia fighting each other mercilessly, until they intercept a message from Earth that they misinterpret as a threat.  They forthwith decide to set aside their grievances with each other, and team up for an attack on our planet in retaliation:
Eventually of course, after their Galaxy had been decimated over a few thousand years, it was realized that the whole thing had been a ghastly mistake, and so the two opposing battle fleets settled their few remaining differences in order to launch a joint attack on our own Galaxy...

For thousands more years the mighty ships tore across the empty wastes of space and finally dived screaming on to the first planet they came across -- which happened to be the Earth -- where due to a terrible miscalculation of scale the entire battle fleet was accidentally swallowed by a small dog.

I was reminded of the Vl'Hurg and G'gugvuntt while reading the (much more serious) book The View from the Center of the Universe, by physicist Joel Primack and author and polymath Nancy Abrams.  In it, they look at our current understanding of the basics of physics and cosmology, and how it intertwines with metaphysics and philosophy, in search of a new "foundational myth" that will help us to understand our place in the universe.

What brought up Adams's fictional tiny space warriors was one of the most interesting things in the Primack/Abrams book, which is the importance of scale.  There are about sixty orders of magnitude (powers of ten) between the smallest thing we can talk meaningfully about (the Planck length) and the largest (the size of the known universe), and we ourselves fall just about in the middle.  This is no coincidence, the authors say; much smaller life forms are unlikely to have the complexity to develop intelligence, and much larger ones would be limited by a variety of physical factors, such as the fact that if you increase length linearly, mass increases as the cube.  (Double the length, and the mass goes up by a factor of eight.)  Galileo knew about this, and used it to explain why the leg bones of mice and elephants are shaped differently.  Give an animal the size of an elephant the relative leg diameter of a mouse, and it couldn't support its own weight.  (This is also why you shouldn't be scared by all of the bad science fiction movies from the fifties with names like The Cockroach That Ate Newark.  The proportions of an insect simply wouldn't work if it were a meter long, much less twenty or thirty.)

Pic from the 1954 horror flick Them!
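The square-cube scaling Galileo worked out is easy to make concrete.  Here's a minimal Python sketch, assuming mass scales with volume and a bone's strength with its cross-sectional area:

```python
def scale_up(mass, bone_area, k):
    """Scale an animal's linear dimensions by a factor k.
    Mass (volume) grows as k**3; bone cross-section only as k**2."""
    return mass * k**3, bone_area * k**2

# Double every linear dimension of a 1 kg animal:
mass, area = scale_up(1.0, 1.0, 2.0)
print(mass, area)   # 8.0 4.0 -- eight times the weight, four times the bone
print(mass / area)  # 2.0 -- stress on the bones has doubled
```

Stress on the skeleton grows linearly with size, which is why the elephant needs proportionally stouter legs -- and why the meter-long cockroach stays safely fictional.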

Put simply: scale matters.  Where it gets really interesting, though, is when you look at the fundamental forces of nature.  We don't have a quantum theory of gravity yet, but that hasn't held back technology from using the principles of quantum physics; on the scale of the very small, gravity is insignificant and can be effectively ignored in most circumstances.  Once again, we ourselves are right around the size where gravity starts to get really critical.  Drop an ant off a skyscraper, and it will be none the worse for wear.  A human, though?

And the bigger the object, the more important gravity becomes, and (relatively speaking) the less important the other forces are.  On Earth, mountains can only get so high before the forces of erosion start pulling them down, breaking the cohesive electromagnetic bonds within the rocks and halting further rise.  In environments with lower gravity, though, mountains can get a great deal bigger.  Olympus Mons, the largest volcano on Mars, is almost 22 kilometers high -- 2.5 times taller than Mount Everest.  The larger the object, the more intense the fight against gravity becomes.  The smoothest known objects in the universe are neutron stars, which have such immense gravity their topographic relief over the entire surface is on the order of a tenth of a millimeter.
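You can see roughly how well the gravity argument holds up with a one-line estimate.  If the maximum height of a mountain scales inversely with surface gravity (a crude assumption that ignores differences in rock strength and crustal structure), Earth's tallest peak predicts Mars's:

```python
g_earth = 9.81   # surface gravity, m/s^2
g_mars = 3.71    # m/s^2
h_everest = 8.8  # height of Everest, km

# Crude model: maximum mountain height inversely proportional to surface gravity
h_mars_predicted = h_everest * g_earth / g_mars
print(round(h_mars_predicted, 1))  # ~23 km -- close to Olympus Mons's ~22 km
```

Not bad for a model you can fit in one line.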

Going the other direction, the relative magnitudes of the other forces increase.  A human scaled down to the size of a dust speck would be overwhelmed by electromagnetic forces -- for example, static electricity.  Consider how dust clings to your television screen.  These forces become much less important on a larger scale... whatever Gary Larson's The Far Side would have you believe:

Smaller still, and forces like the strong and weak nuclear forces -- the one that allows the particles in atomic nuclei to stick together, and the one that causes some forms of radioactive decay, respectively -- take over.  Trying to use brains that evolved to understand things on our scale (what we term "common sense") simply doesn't work on the scale of the very small or very large.

And a particularly fascinating bit, and something I'd never really considered, is how scale affects the properties of things.  Some properties are emergent; they result from the behavior and interactions of the parts.  A simple example is that water has three common forms, right?  Solid (ice), liquid, and gaseous (water vapor).  Those distinctions become completely meaningless on the scale of individual molecules.  One or two water molecules are not solid, liquid, or gaseous; those terms only acquire meaning on a much larger scale.

This is why it's so interesting to try to imagine what things would be like if you (to use Primack's and Abrams's metaphor) turned the zoom lens one way and then the other.  I first ran into this idea in high school, when we watched the mind-blowing short video Powers of Ten, which was filmed in 1968 (then touched up in 1977) but still impresses:


Anyhow, those are my thoughts about the concept of scale.  An explanation of why the Earth doesn't have to worry about Vl'Hurgs and G'gugvuntts, enormous bugs, or static cling making your child stick to the ceiling.  A relief, really, because there's enough else to lose sleep over.  And given how quickly our common sense fails on unfamiliar scales, it's a good thing we have science to explain what's happening -- not to mention fueling our imaginations about what those scales might be like.

****************************************



Monday, June 10, 2024

Mirror image

One of the hallmarks of science is its falsifiability.  Models should generate predictions that are testable, allowing you to see if they conform to what we observe and measure of the real universe.  It's why science works as well as it does; ultimately, nature has the last word.

The problem is that there are certain realms of science that don't lend themselves all that well to experiment.  Paleontology, for example -- we're dependent on the fossils that happen to have survived and that we happen to find, and the genetic evidence from the descendants of those long-gone species, to piece together what the ancient world was like.  It's a little difficult to run an experiment on a triceratops.

An even more difficult one is cosmology -- the study of the origins and evolution of the universe as a whole.  After all, we only have the one universe to study, and are limited to the bits of it we can observe from here.  Not only that, but the farther out in space we look, the less clear it becomes.  By the time light gets here from a source ten billion light years away, it's attenuated by the inverse-square law and dramatically red-shifted by all the expanding space it traveled through to get here, which is why it takes the light-collecting capacity of the world's most powerful telescopes even to see it.
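To get a feel for why distant sources are so faint, here's a minimal sketch of the dimming: inverse-square geometric dilution, plus two factors of (1 + z) for the redshift (one because each photon's energy is stretched down, one because the photons arrive less frequently).  A real calculation would use the cosmological luminosity distance, so treat this as a toy model:

```python
import math

def observed_flux(luminosity_w, distance_m, z):
    """Flux from a source of a given luminosity at a given distance,
    including inverse-square dilution and (1+z)^2 redshift losses."""
    return luminosity_w / (4 * math.pi * distance_m**2 * (1 + z)**2)

# Doubling the distance cuts the flux by a factor of four:
near = observed_flux(1.0, 1.0, 0.0)
far = observed_flux(1.0, 2.0, 0.0)
print(near / far)  # 4.0

# A redshift of z = 2 alone dims the source by another factor of nine:
print(observed_flux(1.0, 1.0, 0.0) / observed_flux(1.0, 1.0, 2.0))
```

Stack those factors together over ten billion light years and it's no wonder the astronomers need every photon they can catch.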

None of this is meant as a criticism of cosmology, nor of cosmologists.  But the fact remains that they're trying to piece together the whole universe from a data set that makes what the paleontologists have look like an embarrassment of riches.

The result is that we're left with some massive mysteries, one of the most vexing of which is dark energy.  This is a placeholder name for the root cause of the runaway expansion of the universe, which (according to current models) accounts for 68% of the mass/energy content of the universe.  (Baryonic, or ordinary, matter is a mere 5%.)  And presently, we have no idea what it is.  Attempts either to detect dark energy directly, or to create a model that will account for observations without invoking its existence have, by and large, been unsuccessful. 

But that hasn't stopped the theorists from trying.  And the latest attempt to solve the puzzle is a curious one: that dark energy isn't necessary if you assume our universe has a partner universe that is a reflection of our own.  In that universe, three properties would be mirror images of the corresponding properties in ours: positive and negative charges would flip, spatial "handedness" (what physicists call parity) would be reversed, and time would run backwards.

Couldn't help but think of this, of course.


The idea is intriguing.  Naman Kumar, who authored the paper on the model, is enthusiastic about its potential for explaining the expansion of the universe.  "The results indicate that accelerated expansion is natural for a universe created in pairs," Kumar writes.  "Moreover, studying causal horizons can deepen our understanding of the universe.  The beauty of this idea lies in its simplicity and naturalness, setting it apart from existing explanations."

Which may well be true.  The difficulty, however, is that the partner universe isn't reachable (or even directly detectable) from our own, Lost in Space notwithstanding.  It makes me wonder how this will ever be more than just an interesting possibility -- an idea that, in Wolfgang Pauli's often-quoted words, "isn't even wrong" because there's no way to test whether it accounts for the data any better than the other, less "natural" models do.

In any case, that's the latest from the cosmologists.  Mirror-image universes created in pairs may obviate the need for dark energy.  We'll see what smarter people than myself have to say about whether the claim holds water; or, maybe, just wait for Evil Major West With A Beard to show up and settle the matter once and for all.

****************************************