Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, July 16, 2025

Tense situation

In my Critical Thinking classes, I did a unit on statistics and data, and how you tell if a measurement is worth paying attention to.  One of the first things to consider, I told them, is whether a particular piece of data is accurate or merely precise -- two words that in common parlance are used interchangeably.

In science, they don't mean the same thing.  A piece of equipment is said to be precise if it gives you close to the same value every time.  Accuracy, though, is a higher standard; data are accurate if the values are not only close to each other when measured with the same equipment, but agree with data taken independently, using a different device or a different method.

A simple example is that if my bathroom scale tells me every day for a month that my mass is (to within one kilogram either way) 239 kilograms, it's highly precise, but very inaccurate.

This is why scientists always look for independent corroboration of their data.  It's not enough to keep getting the same numbers over and over; you've got to be certain those numbers actually reflect reality.
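
To make the distinction concrete, here's a minimal sketch in Python (with invented numbers; the "true mass" stands in for whatever an independent, calibrated instrument reports) of how you might quantify each property:

import statistics

# Hypothetical repeated readings from my bathroom scale, in kilograms.
# They cluster tightly, so the scale is precise...
readings = [239.2, 238.8, 239.1, 239.0, 238.9]

# ...but the true mass (from, say, a calibrated clinic scale) is nowhere near.
true_mass = 82.0

spread = statistics.stdev(readings)           # small spread = high precision
bias = statistics.mean(readings) - true_mass  # large bias = low accuracy

print(f"spread (precision): {spread:.2f} kg")
print(f"bias (inaccuracy): {bias:.1f} kg")

A precise-but-inaccurate instrument shows a tiny spread and a huge bias; an accurate one keeps both small.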

This all comes up because of a new look at one of the biggest scientific questions known -- the rate of expansion of the entire universe.

[Image is in the Public Domain, courtesy of NASA]

A while back, I wrote about some experiments that were allowing physicists to home in on the Hubble constant, a quantity that is a measure of how fast everything in the universe is flying apart.  And the news appeared to be good; from a range of between 50 and 500 kilometers per second per megaparsec, physicists had been able to narrow down the value of the Hubble constant to between 65.3 and 75.6.

The problem is, nobody's been able to get closer than that -- and in fact, recent measurements have widened, not narrowed, the gap.

There are two main ways to measure the Hubble constant.  The first is to use information like red shift, Cepheid variables (stars whose period of brightness oscillation varies predictably with their intrinsic brightness, making them a good "standard candle" for determining the distance to other galaxies), and Type Ia supernovae to figure out how fast the galaxies we see are receding from each other.  The other is to use the cosmic microwave background radiation -- the leftovers from the radiation produced by the Big Bang -- to determine the age of the universe, and therefore, how fast it's expanding.
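
As a toy illustration of the first method (all numbers invented, purely to show the arithmetic): once a standard candle gives you a galaxy's distance and its red shift gives you a recession velocity, each galaxy yields an estimate of the Hubble constant as velocity divided by distance.

# Hypothetical galaxies: (distance in megaparsecs, recession velocity in km/s)
galaxies = [(50, 3500), (120, 8500), (200, 14200), (310, 21500)]

# Each galaxy gives an independent estimate H0 = v / d; average them.
estimates = [v / d for d, v in galaxies]
h0 = sum(estimates) / len(estimates)
print(f"H0 estimate: {h0:.1f} km/s/Mpc")   # ~70 for these made-up numbers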

So this is a little like checking my bathroom scale by weighing myself on it, then comparing my weight as measured by the scale at the gym and seeing if I get the same answer.

And the problem is, the measurement of the Hubble constant by these two methods is increasingly looking like it's resulting in two irreconcilably different values.  

The genesis of the problem is that as our measurement ability has become more and more precise, the error bars associated with data collection have shrunk considerably.  And if the two measurements were not only precise, but also accurate, you would expect that our increasing precision would result in the two values getting closer and closer together.

Exactly the opposite has happened.

"Five years ago, no one in cosmology was really worried about the question of how fast the universe was expanding," said astrophysicist Daniel Mortlock of Imperial College London.  "We took it for granted.  Now we are having to do a great deal of head scratching – and a lot of research...  Everyone’s best bet was that the difference between the two estimates was just down to chance, and that the two values would converge as more and more measurements were taken. In fact, the opposite has occurred.  The discrepancy has become stronger.  The estimate of the Hubble constant that had the lower value has got a bit lower over the years and the one that was a bit higher has got even greater."

This discrepancy -- called the Hubble tension -- is one of the most vexing problems in astrophysics today.  Especially given that repeated analyses of both methods used to determine the expansion rate have turned up no apparent problem with either one.

The two possible solutions to this boil down to (1) our data are off, or (2) there's new physics we don't know about.  A new solution that falls into the first category was proposed last week at the annual meeting of the Royal Astronomical Society by Indranil Banik of the University of Portsmouth, who has been deeply involved in researching this puzzle.  It's possible, he said, that the problem is with one of our fundamental assumptions -- that the universe is both homogeneous and isotropic.

These two are like the ultimate extension of the Copernican principle, that the Earth (and the Solar System and the Milky Way) do not occupy a privileged position in space.  Homogeneity means that any randomly-chosen blob of space is equally likely to have stuff in it as any other; in other words, matter and energy are locally clumpy but universally spread out.  Isotropy means there's no difference dependent on direction; the universe looks pretty much the same no matter which direction you look.

What, Banik asks, if our mistake is in putting together the homogeneity principle with measurements of what the best-studied region of space is like -- the parts near us?

What if we live in a cosmic void -- a region of space with far less matter and energy than average?

We've known those regions exist for a while; in fact, regular readers might recall that a couple of years ago, I wrote a post about one of the biggest, the Boötes Void, which is so large and empty that if we lived right at the center of it, we wouldn't even have been able to see the nearest stars to us until the development of powerful telescopes in the 1960s.  Banik suggests that the void we're in isn't as dramatic as that, but that a twenty percent lower-than-average mass density in our vicinity could account for the discrepancy in the Hubble constant.

"A potential solution to [the Hubble tension] is that our galaxy is close to the center of a large, local void," Banik said.  "It would cause matter to be pulled by gravity towards the higher density exterior of the void, leading to the void becoming emptier with time.  As the void is emptying out, the velocity of objects away from us would be larger than if the void were not there.  This therefore gives the appearance of a faster local expansion rate...  The Hubble tension is largely a local phenomenon, with little evidence that the expansion rate disagrees with expectations in the standard cosmology further back in time.  So a local solution like a local void is a promising way to go about solving the problem."

It would also, he said, line up with data on baryon acoustic oscillations, the fossilized remnants of shock waves from the Big Bang, which account for some of the fine structure of the universe.

"These sound waves travelled for only a short while before becoming frozen in place once the universe cooled enough for neutral atoms to form," Banik said.  "They act as a standard ruler, whose angular size we can use to chart the cosmic expansion history.  A local void slightly distorts the relation between the BAO angular scale and the redshift, because the velocities induced by a local void and its gravitational effect slightly increase the redshift on top of that due to cosmic expansion.  By considering all available BAO measurements over the last twenty years, we showed that a void model is about one hundred million times more likely than a void-free model with parameters designed to fit the CMB observations taken by the Planck satellite, the so-called homogeneous Planck cosmology."

Which sounds pretty good.  I'm only a layperson, but this is the most optimistic I've heard an astrophysicist get on the topic.  Now, it falls back on the data -- showing that the mass/energy density in our local region of space really is significantly lower than average.  In other words, that the universe isn't homogeneous, at least not on those scales.

I'm sure the astrophysics world will be abuzz with this new proposal, so keep your eyes open for developments.  Me, I think it sounds reasonable.  Given recent events here on Earth, it's unsurprising the rest of the universe is rushing away from us.  I bet the aliens lock the doors on their spaceships as they fly by.

****************************************


Monday, February 6, 2023

The next phase

When I put on water for tea, something peculiar happens.

Of course, it happens for everyone, but a lot of people probably don't think about it.  For a while, the water quietly heats.  It undergoes convection -- the water in contact with the element at the bottom of the pot heats up, and since warmer water is less dense, it rises and displaces the cooler layers above.  So there's a bit of turbulence, but that's it.

Then, suddenly, a bit of the water at the bottom hits 100 C and vaporizes, forming bubbles.  Those bubbles rapidly rise, dispersing heat throughout the pot.  Very quickly afterward, the entire pot of water is at what cooks call "a rolling boil."

This quick shift from liquid to gas is called a phase transition.  The most interesting thing about phase transitions is that when they occur, what had been a smooth and gradual change in physical properties (like the density of the water in the teapot) undergoes an enormous, abrupt shift -- consider the difference in density between liquid water and water vapor.

The reason this comes up is that some physicists in Denmark and Sweden have proposed a phase transition mechanism to account for the evolution of the (very) early universe -- and that proposal may solve one of the most vexing questions in astrophysics today.

A little background.

As no doubt all of you know, the universe is expanding.  This fact, discovered through the work of astronomer Edwin Hubble and others, was based upon the observation that light from distant galaxies was significantly red-shifted, indicating that they were moving away from us.  More to the point, the farther away the galaxies were, the faster they were moving.  This suggested that some very long time in the past, all the matter and energy in the universe was compressed into a very small space.

Figuring out how long ago that was -- i.e., the age of the universe -- depends on knowing how fast that expansion is taking place.  This number is called the Hubble constant.
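
A rough age estimate is just the reciprocal of the Hubble constant (the "Hubble time"), once the units are untangled.  A quick sketch, using a representative value of 70 km/s/Mpc:

KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

h0 = 70.0                              # km/s/Mpc
h0_per_second = h0 / KM_PER_MPC        # convert H0 to units of 1/seconds
hubble_time_years = 1 / h0_per_second / SECONDS_PER_YEAR
print(f"Hubble time: {hubble_time_years / 1e9:.1f} billion years")   # ~14

(The real age comes out a little different because the expansion rate hasn't been constant, but the reciprocal sets the scale.)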

[Image licensed under the Creative Commons Munacas, Big-bang-universo-8--644x362, CC BY-SA 4.0]

This brings up an issue with any kind of scientific measurement, and that's the difference between precision and accuracy.  While we use those words pretty much interchangeably in common speech, to a scientist they aren't the same thing at all.  Precision in an instrument means that every time you use it to measure something, it gives you the same answer.  Accuracy, on the other hand, means that the value you get from one instrument agrees with the value you get from using some other method for measuring the same thing.  So if my car's odometer tells me, every time I drive to my nearby village for groceries, that the store is exactly eight hundred kilometers from my house, the odometer is highly precise -- but extremely inaccurate.

The problem with the Hubble constant is that there are two ways of measuring it.  One is using the aforementioned red shift; the other is using the cosmic microwave background radiation.  Those two methods, each taken independently, are extremely precise; they always give you the same answer.

But... the two answers don't agree.  (If you want a more detailed explanation of the problem, I wrote a piece on the disagreement over the value of the Hubble constant a couple of years ago.)

Hundreds of measurements and re-analyses have failed to reconcile the two, and the best minds of theoretical physics have been unable to figure out why. 

Perhaps... until now.

Martin Sloth and Florian Niedermann, of the University of Southern Denmark and the Nordic Institute for Theoretical Physics, respectively, just published a paper in Physics Letters B that proposes a new model for the early universe which makes the two different measurements agree perfectly -- a rate of 72 kilometers per second per megaparsec.  Their proposal, called New Dark Energy, suggests that very quickly after the Big Bang, the energy of the universe underwent an abrupt phase transition, a bit like the water in my teapot suddenly boiling.  At this point, these "bubbles" of rapidly dissipating energy drove apart the embryonic universe.

"If we trust the observations and calculations, we must accept that our current model of the universe cannot explain the data, and then we must improve the model," Sloth said.  "Not by discarding it and its success so far, but by elaborating on it and making it more detailed so that it can explain the new and better data.  It appears that a phase transition in the dark energy is the missing element in the current Standard Model to explain the differing measurements of the universe's expansion rate.  It could have lasted anything from an insanely short time -- perhaps just the time it takes two particles to collide -- to 300,000 years.  We don't know, but that is something we are working to find out...  If we assume that these methods are reliable -- and we think they are -- then maybe the methods are not the problem.  Maybe we need to look at the starting point, the basis, that we apply the methods to.  Maybe this basis is wrong."

It's this kind of paradigm shift in understanding -- itself a sort of phase transition -- that triggers great leaps forward in science.  To be fair, some of them fizzle.  Most of them, honestly.  But sometimes, there are visionary scientists who take previously unexplained knowledge and turn our view of the universe on its head, and those are the ones who revolutionize science.  Think of how Galileo and Copernicus (heliocentrism), Kepler (planetary motion), Darwin (biological evolution), Mendel (genetics), Einstein (relativity), de Broglie and Schrödinger (quantum physics), Watson, Crick, and Franklin (DNA), and Matthews and Vine (plate tectonics) changed our world.

Will Sloth and Niedermann join that list?  Way too early to know.  But just the fact that one shift in the fundamental assumptions about the early universe reconciled measurements that heretofore had stumped the best theoretical physicists is a hopeful sign.

Time will tell if this turns out to be the next phase in cosmology.

****************************************


Monday, May 23, 2022

Behind the mirror

I know I've snarked before about how unbearably goofy the old 1960s television show Lost in Space was, but I have to admit that every once in a (long) while, they nailed it.  And one of the best examples is the first-season episode "The Magic Mirror."

Well, mostly nailed it.  The subplot about how real girls care about makeup and hair and being pretty is more than a little cringe-inducing.  But the overarching story -- about mirrors being portals to a parallel world, and a boy who is trapped behind them because he has no reflection -- is brilliant.  And the other-side-of-the-mirror world he lives in is hauntingly surreal.


I was thinking about this episode because of a paper that appeared in Physical Review Letters last week entitled "Symmetry of Cosmological Observables, a Mirror World Dark Sector, and the Hubble Constant," by Francis-Yan Cyr-Racine, Fei Ge, and Lloyd Knox, of the University of New Mexico.  What this paper does is offer a possible solution to the Hubble constant problem -- that the rate of expansion of the universe as predicted by current mathematical models is significantly smaller than the actual measured expansion rate.

What Cyr-Racine, Ge, and Knox propose is that there is an unseen "mirror world" of particles that coexists alongside our own, interacting only through gravity but otherwise invisible to detection.  At first, I thought they might be talking about something like dark matter -- a form of matter that only (very) weakly interacts with ordinary matter -- but it turns out that what they're saying is even weirder.

"This discrepancy is one that many cosmologists have been trying to solve by changing our current cosmological model," Cyr-Racine told Science Daily "The challenge is to do so without ruining the agreement between standard model predictions and many other cosmological phenomena, such as the cosmic microwave background...  Basically, we point out that a lot of the observations we do in cosmology have an inherent symmetry under rescaling the universe as a whole.  This might provide a way to understand why there appears to be a discrepancy between different measurements of the universe's expansion rate.  In practice, this scaling symmetry could only be realized by including a mirror world in the model -- a parallel universe with new particles that are all copies of known particles.  The mirror world idea first arose in the 1990s but has not previously been recognized as a potential solution to the Hubble constant problem.  This might seem crazy at face value, but such mirror worlds have a large physics literature in a completely different context since they can help solve important problem in particle physics.  Our work allows us to link, for the first time, this large literature to an important problem in cosmology."

The word "important" is a bit of an understatement.  The Hubble constant problem is one of the biggest puzzles in physics: why theory and observation are so different on this one critical point, and how to fix the theory without blowing to smithereens everything it does predict correctly.  "It went from two and a half sigma, to three, and three and a half to four sigma.  By now, we are pretty much at the five-sigma level," said Cyr-Racine.  "That's the key number which makes this a real problem, because you have two measurements of the same thing, which if you have a consistent picture of the universe should just be completely consistent with each other, but they differ by a very statistically significant amount.  That's the premise here, and we've been thinking about what could be causing that and why these measurements are discrepant.  So that's a big problem for cosmology.  We just don't seem to understand what the universe is doing today."

I know that despite my background in science, I can have a pretty wild imagination.  It's an occupational hazard of being a speculative fiction writer.  I hear some new scientific finding, and immediately start putting some weird spin on it that, though it might be interesting, is completely unwarranted by the actual research.  But look at Cyr-Racine's own words: a parallel universe with new particles that are all copies of known particles.  I think I'm to be excused for thinking of "The Magic Mirror" and other science fiction stories about ghostly worlds coexisting, unseen, with our own.

I'm not going to pretend to understand the math behind the Cyr-Racine et al. paper; despite having a B.S. in physics, academic papers in the discipline usually lose me in the first paragraph (if not the abstract).  But it's a fascinating and spooky idea.  I doubt if what's going on has anything to do with surreal worlds behind mirrors and boys who are trapped because they have no reflection, but the reality -- if it bears up under analysis -- isn't a whole lot less weird.

**************************************

Wednesday, March 2, 2022

Weighty matter

Springboarding off yesterday's post, about a discovery of fossils that seem to have come from animals killed the day the Chicxulub Meteorite struck 66 million years ago, today we have a paper on arXiv that looks at why the meteorite hit in the first place.

When you're talking about an event that colossal, I suppose it's natural enough to cast about for a reason other than just shrugging and saying, "Shit happens."  But even allowing for that tendency, the solution landed upon by Leandros Perivolaropoulos, physicist at the University of Ioannina (Greece), seems pretty out there.

Perivolaropoulos attributes the meteorite strike to a sudden increase in Newton's gravitational constant, G -- the constant of proportionality relating the magnitude of the gravitational force between two masses to the product of those masses divided by the square of the distance between them:

F = G\frac{m_1 m_2}{r^2}

The generally accepted value for G is 6.67430 x 10^-11 m^3 kg^-1 s^-2.  Being a constant, the assumption is that it's... constant.  And always has been.

Perivolaropoulos's hypothesis is that millions of years ago, there was a sudden jump in the value of G by about ten percent.  As you can tell from the above equation, if you keep the masses and the distance between them constant, F is directly proportional to G; if G increased by ten percent, so would the magnitude of the gravitational force.  His thought is that this spike in the attractive force caused the orbits of asteroids and comets to destabilize, and sent them hurtling in toward the inner Solar System.  The result: collisions that marked the violent, sudden end of the Mesozoic Era and the hegemony of the dinosaurs.
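
Plugging in real numbers makes the proportionality plain (a minimal sketch; the ten percent jump is just Perivolaropoulos's hypothesized figure):

G = 6.67430e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
R = 1.496e11         # m, the mean Earth-Sun distance

def grav_force(g):
    # Newton's law: F = g * m1 * m2 / r^2
    return g * M_SUN * M_EARTH / R**2

f_now = grav_force(G)
f_spiked = grav_force(1.10 * G)   # the hypothesized ten percent jump
print(f"force increase: {100 * (f_spiked / f_now - 1):.0f}%")   # 10%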

To be fair to Perivolaropoulos, his surmise is not just based on a single meteorite collision.  He claims that this increase in G could also resolve the "Hubble crisis" -- the fact that two different measures of the rate of the expansion of the universe generate different answers.  The first, using the cosmic microwave background radiation, comes up with a value of 67.8 kilometers/second/megaparsec; the second, from using "standard candles" like Cepheid variables and Type Ia supernovae, comes up with 73.2.  (You can read an excellent summary of the dispute, and the current state of the research, here.)

[Image is in the Public Domain courtesy of NASA]

Perivolaropoulos says that his hypothesis takes care of both the Hubble crisis and the reason behind the end-Cretaceous meteorite collision in one fell swoop.

Okay, where to start?

There are a number of problems with this conjecture.  First -- what on earth (or off it) could cause a universe-wide alteration in one of the most fundamental physical constants?  Perivolaropoulos writes, "Physical mechanisms that could induce an ultra-late gravitational transition include a first order scalar tensor theory phase transition from an early false vacuum corresponding to the measured value of the cosmological constant to a new vacuum with lower or zero vacuum energy."  Put more simply, we're looking at a sudden phase shift in space/time, analogous to what happens when the temperature of water falls below 0 C and it suddenly begins to crystallize into ice.  But why?  What triggered it?

Second, if G did suddenly increase by ten percent, it would create some serious havoc in everything undergoing any sort of gravitational interaction.  I.e., everything.  Just to mention one example, the relationship between the mass of the Sun, the velocity of a planet, and the distance between the two is governed by the equation

v = \sqrt{\frac{GM}{r}}

So if the Earth (for example) experienced a sudden increase in the value of G, the radius of its orbit would (equally suddenly) decrease by ten percent.  Moving the Earth ten percent closer to the Sun would, of course, lead to an increase in temperature.  Oh, he says, but that actually happened; ten million years after the extinction of the dinosaurs we have the Paleocene-Eocene Thermal Maximum, when global temperatures went up by something like 7 C.  However, the PETM is sufficiently explained by a fast injection of five thousand gigatons of carbon dioxide into the atmosphere and oceans, likely triggered by massive volcanism in the North Atlantic Igneous Province -- and there's significant stratigraphic evidence of just such a carbon dioxide spike.  No need for the Earth to suddenly lurch closer to the Sun.
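
One way to see where the ten percent orbital shift comes from (a sketch of the standard argument, not anything from the paper itself): the planet's angular momentum L = mvr is unchanged by the jump, and combining that with the circular-orbit condition above gives

r = \frac{L^2}{GMm^2} \propto \frac{1}{G}

so a ten percent increase in G shrinks the orbital radius by roughly ten percent.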

It wouldn't just affect orbits, of course.  Everything would suddenly weigh ten percent more.  It would take more energy to run, jump, even stand up.  Mountain building would slow down.  Anything in freefall -- from boulders in an avalanche to raindrops -- would accelerate faster.  Tidal fluctuations would decrease (although with the Moon now closer to the Earth, maybe that one would balance out).  

Also, if G did increase everywhere -- it's called the "universal gravitational constant," after all -- then the same thing would have happened simultaneously across the entire universe.  Then, for some reason, there was a commensurate decrease sometime between then and now, leveling G out at the value we now measure.  So we really need not one, but two, mysterious unexplained universal phase transitions, as if one weren't bad enough.

Then there's the issue that the discrepancy in the measurement of the Hubble constant isn't as big as all that -- it's only 3.4 sigma, not yet reaching the 5 sigma threshold that is the touchstone for results to be considered significant in (for example) particle physics.  Admittedly, 3.4 sigma isn't something we can simply ignore; it definitely deserves further research, and (hopefully) an explanation.  But explaining the Hubble constant measurement issue by appeal to an entirely different set of discrepant measurements with far less experimental support doesn't solve anything; it just moves the mystery onto even shakier ground.

Last, though, I come back to two of the fundamental rules of thumb in science: Ockham's razor (the explanation that adequately accounts for all the facts, and requires the fewest ad hoc assumptions, is most likely to be correct) and the ECREE principle (extraordinary claims require extraordinary evidence).  Perivolaropoulos's hypothesis not only blasts both of those to smithereens, it postulates a phenomenon that occurred once, millions of years ago, then mysteriously reversed itself, and along the way left behind no other significant evidence.

I hate to break out Wolfgang Pauli's acerbic quote again, but "This isn't even wrong."

Now, to be up front, I'm not a physicist.  I have a distantly-remembered B.S. in physics, which hardly qualifies me to evaluate an academic paper on the subject with anything like real rigor.  So if there are any physicists in the studio audience who disagree with my conclusions and want to weigh in, I'm happy to listen.  Maybe there's something going on here that favors Perivolaropoulos's hypothesis that I'm not seeing, and if so, I'll revise my understanding accordingly.

But until then, I think we have to mark the Hubble crisis as "unresolved" and the extinction of the dinosaurs as "really bad luck."

**************************************

Wednesday, November 27, 2019

Rushing toward a paradigm shift

I have a sneaking suspicion that the physicists are on the threshold of a paradigm-breaking discovery.

The weird data have been building up for some time now, observations and measurements that are at odds with our current models of how the universe works.  I say "models (plural)" because one of the most persistent roadblocks in physics is the seeming incompatibility of quantum mechanics and general relativity -- in other words, coming up with a "theory of everything" that pulls a consistent explanation of gravity into our conceptual framework for the other three fundamental forces (electromagnetism and the weak and strong nuclear forces).  All attempts to come up with an amalgam have either "led to infinities" (had places in the relevant equations that generate infinite answers, usually an indicator that something is seriously wrong with your model) or have become so impossibly convoluted that even the experts can't agree on the details (such as string theory with its ten or eleven dimensions, something that's always reminded me of Ptolemy's flailing about to save the geocentric model by adding more loops and twists and epicycles so the data would fit).

And still the anomalous data keep rolling in.  Three weeks ago I wrote about a troubling discrepancy that's been discovered in the value of the Hubble Constant, which describes the rate of expansion of the universe -- there are two ways to measure it, which presumably should give the same answer, but don't.

Then last week, physicists at a lab in Hungary announced that they'd found new evidence of "X17," a mysterious particle that could be a carrier for a fifth fundamental force.  The argument is a bit like the observation that led to the discovery of the neutrino back in 1956 -- during beta radioactive decay, the particles emitted seemed to break the laws of conservation of energy and momentum, until that time strictly enforced in all jurisdictions.  Wolfgang Pauli said, basically, "Well, that can't be right," and postulated that an undetected particle was carrying off the "lost" momentum and energy.  It took twenty-six years to prove, but he was right.

Here, it's the behavior of another radioactive substance, beryllium-8, which emits electron-positron pairs at the "wrong" angle to account for all of the energy involved (again, breaking the law of conservation of energy).  Conservation could be re-established if there were an undetected particle being emitted with a mass of 17 MeV (about 33 times the rest mass of an electron).  Even considering the neutrino, this seemed a little bit ad hoc -- "we need a particle, so we'll invent one to make our data fit" -- until measurements from an excited helium nucleus generated anomalous results that could be explained by a fifth force carried by a particle with exactly the same mass.
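
That "33 times" is just a ratio of rest energies:

\frac{m_X}{m_e} = \frac{17\ \text{MeV}}{0.511\ \text{MeV}} \approx 33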

Hmm.  Curiouser and curiouser.

If that's not enough, just this week a paper appeared in Nature Astronomy about that elusive and mysterious substance "dark matter" that, despite defying every effort to detect it, outweighs the ordinary matter you and I are made of by a factor of five.  Its gravitational signature is everywhere, and appears to be most of what's responsible for holding galaxies together -- without it, the Milky Way and other rotating galaxies would fly apart.

But what is it?  No one knows.  There are guesses, but once again, those guesses have come up empty-handed with respect to any kind of experimental verification.  (And that's not even considering the even-weirder dark energy, which outweighs dark matter by a factor of two, and is thus the most common stuff in the universe, comprising 68% of what's out there -- even though we have not the slightest clue what it might be.)

The paper, by a team led by astrophysicist Qi Guo of the Chinese Academy of Sciences, is called "Further Evidence for a Population of Dark-Matter-Deficient Dwarf Galaxies," and describes no fewer than nineteen galaxies that have significantly less dark matter than conventional explanations (such as they are) would need to explain (1) how they formed, and (2) what's holding them together.  Lead author Guo, for her part, admits that although the data seem solid, she's at a bit of a loss.  "We are not sure why and how these galaxies form," she said, in a press release in Science News.

Elliptical galaxy Abell S740 [Image is in the Public Domain, courtesy of NASA]

So the anomalous observations keep piling up, and thus far, no one has been able to explain them, much less reconcile them with all the others.  I'm reminded of what Thomas Kuhn wrote, in his seminal book The Structure of Scientific Revolutions: "Scientific revolutions are inaugurated by a growing sense... that an existing paradigm has ceased to function adequately in the exploration of an aspect of nature to which that paradigm itself had previously led the way."

It must be both nerve-wracking and exhilarating to be a physicist right now.  Nerve-wracking because suddenly finding out that your previous model, the one you were taught to understand and cherish during your training, is inadequate -- well, the response is frequently to do what Irish science historian, writer, and filmmaker James Burke calls "scrambling about to stop the rug from being pulled out from under years of happy status-quo."  On the one hand, you can understand that, apart from any emotional attachment one might have to an accepted model; it is an accepted model because it worked perfectly well for a while, accounting for all the evidence we had.  And there are countless examples when a model was challenged by what appeared to be contradictory data, and it turned out the data were mismeasurements, misinterpretations, or outright fabrications.

Which is why Pauli was so sure that the neutrino existed -- the law of conservation of energy, he reasoned, was so well-supported that it just couldn't be wrong.

But now -- well, as I said, the data keep piling up.  Whatever's going on here, they aren't all mismeasurements.  It remains to be seen what revision of our understanding will sweep away all the oddities and internal contradictions and make sense of what the physicists are seeing, but I have no doubt we'll find it at some point.

And there's the exhilarating part of it.  What a time to be in research physics -- when the race is on to pull together and explain an increasingly huge body of anomalous stuff, and revise our understanding of the universe in a fundamental way.  It's the kind of climate in which Nobel Prizes are won.

Being an observer is exciting enough; I can't imagine what it might be like to be inside it all.

*******************************

Long-time readers of Skeptophilia have probably read enough of my rants about creationism and the other flavors of evolution-denial that they're sick unto death of the subject, but if you're up for one more excursion into this, I have a book that is a must-read.

British evolutionary biologist Richard Dawkins has made a name for himself both as an outspoken atheist and as a champion for the evolutionary model, and it is in this latter capacity that he wrote the brilliant The Greatest Show on Earth.  Here, he presents the evidence for evolution in lucid prose easily accessible to the layperson, and one by one demolishes the "arguments" (if you can dignify them by that name) that you find in places like the infamous Answers in Genesis.

If you're someone who wants more ammunition for your own defense of the topic, or you want to find out why the scientists believe all that stuff about natural selection, or you're a creationist yourself and (to your credit) want to find out what the other side is saying, this book is about the best introduction to the logic of the evolutionary model I've ever read.  My focus in biology was evolution and population genetics, so you'd think all this stuff would be old hat to me, but I found something new to savor on virtually every page.  I cannot recommend this book highly enough!

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Monday, November 4, 2019

The problem with Hubble

In my Critical Thinking classes, I did a unit on statistics and data, and how you tell if a measurement is worth paying attention to.  One of the first things to consider, I told them, is whether a particular piece of data is accurate or merely precise -- two words that in common parlance are used interchangeably.

In science, they don't mean the same thing.  A piece of equipment is said to be precise if it gives you close to the same value every time.  Accuracy, though, is a higher standard; data are accurate if the values are not only close to each other when measured with the same equipment, but agree with data taken independently, using a different device or a different method.

A simple example is that if my bathroom scale tells me every day for a month that my mass is (to within one kilogram either way) 239 kilograms, it's highly precise, but very inaccurate.

This is why scientists always look for independent corroboration of their data.  It's not enough to keep getting the same numbers over and over; you've got to be certain those numbers actually reflect reality.

This all comes up because of some new information about one of the biggest scientific questions known -- the rate of expansion of the entire universe.

[Image is in the Public Domain, courtesy of NASA]

A few months ago, I wrote about some recent experiments that were allowing physicists to home in on the Hubble constant, a quantity that is a measure of how fast everything in the universe is flying apart.  And the news appeared to be good; from a range of between 50 and 500, physicists had been able to narrow down the value of the Hubble constant to between 65.3 and 75.6.

The problem is, nobody's been able to get closer than that -- and in fact, recent measurements have widened, not narrowed, the gap.

There are two main ways to measure the Hubble constant.  The first is to use information like red shift and Cepheid variables (stars whose period of brightness oscillation varies predictably with their intrinsic brightness, making them a good "standard candle" to determine the distance to other galaxies) to figure out how fast the galaxies we see are receding from each other.  The other is to use the cosmic microwave background radiation -- the leftovers from the radiation produced by the Big Bang -- to determine the age of the universe, and therefore, how fast it's expanding.

So this is a little like checking my bathroom scale by weighing myself on it, then comparing my weight as measured by the scale at the gym and seeing if I get the same answer.

And the problem is, the measurement of the Hubble constant by these two methods is increasingly looking like it's resulting in two irreconcilably different values.

The genesis of the problem is that our measurement ability has become more and more precise -- the error bars associated with data collection have shrunk considerably.  And if the two measurements were not only precise, but also accurate, you would expect that our increasing precision would result in the two values getting closer and closer together.

Exactly the opposite has happened.

"Five years ago, no one in cosmology was really worried about the question of how fast the universe was expanding.  We took it for granted," said astrophysicist Daniel Mortlock of Imperial College London.  "Now we are having to do a great deal of head scratching – and a lot of research...  Everyone’s best bet was that the difference between the two estimates was just down to chance, and that the two values would converge as more and more measurements were taken.  In fact, the opposite has occurred.  The discrepancy has become stronger.  The estimate of the Hubble constant that had the lower value has got a bit lower over the years and the one that was a bit higher has got even greater."

The discovery of dark matter and dark energy, the first by Vera Rubin, Kent Ford, and Ken Freeman in the 1970s, and the second by Adam Riess and Saul Perlmutter in the 1990s, accounted for the fact that the rate of expansion seemed wildly out of whack with the amount of observable matter in the universe.  The problem is, since the discovery of the effects of dark matter and dark energy, we haven't gotten any closer to finding out what they actually are.  Every attempt to directly detect either one has resulted in zero success.

Now, it appears that the problems run even deeper than that.

"Those two discoveries [dark matter and dark energy] were remarkable enough," said Riess.  "But now we are facing the fact there may be a third phenomenon that we had overlooked – though we haven’t really got a clue yet what it might be."

"The basic problem is that having two different figures for the Hubble constant measured from different perspectives would simply invalidate the cosmological model we made of the universe," Mortlock said.  "So we wouldn’t be able to say what the age of the universe was until we had put our physics right."

It sounds to me a lot like the situation in the late 1800s, when physicists were trying to determine the answer to a seemingly simple question -- in what medium do light waves propagate?  Every wave, it was assumed, has to be moving through something; water waves come from the regular motion of water molecules, sound waves from the oscillation of air molecules, and so on.  With light waves, what was "waving"?

Because the answer most people accepted was, "something has to be waving even if we don't know what it is," scientists proposed a mysterious substance called the "aether" that permeated all of space, and was the medium through which light waves were propagating.  All attempts to directly detect the aether were failures, but this didn't discourage people from saying that it must be there, because otherwise, how would light move?

Then along came the brilliant (and quite simple -- in principle, anyhow) Michelson-Morley experiment, which found no trace of the aether whatsoever.  Light traveling in a vacuum appeared to have a constant speed in all frames of reference, which is entirely unlike any other wave ever studied.  And it wasn't until Einstein came along and turned our entire understanding upside down with the Special Theory of Relativity that we saw the piece we'd been missing that made sense of all the weird data.

What we seem to be waiting for is this century's Einstein, who will explain the discrepancies in the measurements of the Hubble constant, and very likely account for the mysterious, undetectable dark matter and dark energy (which sound a lot like the aether, don't they?) at the same time.  But until then, we're left with a mystery that calls into question one of the most fundamental conclusions of modern physics -- the age of the universe.

**********************************

This week's Skeptophilia book recommendation is a fun book about math.

Bet that's a phrase you've hardly ever heard uttered.

Jordan Ellenberg's amazing How Not to Be Wrong: The Power of Mathematical Thinking looks at how critical it is for people to have a basic understanding and appreciation for math -- and how misunderstandings can lead to profound errors in decision-making.  Ellenberg takes us on a fantastic trip through dozens of disparate realms -- baseball, crime and punishment, politics, psychology, artificial languages, and social media, to name a few -- and how in each, a comprehension of math leads you to a deeper understanding of the world.

As he puts it: math is "an atomic-powered prosthesis that you attach to your common sense, vastly multiplying its reach and strength."  Which is certainly something that is drastically needed lately.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Thursday, July 11, 2019

Revising Hubble

If I had to pick the most paradigm-changing discovery of the twentieth century, a strong contender would be astronomer Edwin Hubble's work on the red shift of distant galaxies.

What Hubble found was that when he analyzed the spectral lines from stars in distant galaxies, the lines -- representing wavelengths of light emitted by elements in the stars' atmospheres -- had slid toward the red (longer-wavelength) end of the spectrum.  Hubble realized that this meant that the galaxies were receding from us at fantastic speeds, resulting in a Doppler shift of the light coming from them.
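
The quantity astronomers actually compute is the fractional shift in wavelength, which for modest speeds translates directly into a recession velocity:

z = \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}}, \qquad v \approx cz \quad (\text{for } z \ll 1)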

What was most startling, though, is that the further away a galaxy was, the faster it was moving.  This observation led directly to the theory of the Big Bang, that originally all matter in the universe was coalesced into a single point, then -- for reasons still unclear -- began to expand outward at a rate that defies comprehension.

There's a simple quantity (well, simple to define, anyhow) that describes the relationship that Hubble discovered.  It's called the Hubble constant, and is defined as the ratio between the velocity of a galaxy and its distance from us.  The relationship seems to be linear (meaning the constant isn't itself dependent upon distance), but the exact value has proven extremely difficult to determine.  Measurements have varied between 50 and 500 kilometers per second per megaparsec, which is a hell of a range for something that's supposed to be a constant.
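
Written out, the relationship is simply

v = H_0 d

with v the recession velocity of a galaxy, d its distance, and H_0 the constant in question.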

And the problem is, the value has varied depending on how it's calculated.  Measurements based upon the cosmic microwave background radiation give one range of values; measurements using Type Ia supernovae (a commonly-used "standard candle" for calculating the distances to galaxies) give a different range.

Enter Kenta Hotokezaka of Princeton University, who has decided to tackle this problem head-on.  “The Hubble constant is one of the most fundamental pieces of information that describes the state of the universe in the past, present and future," Hotokezaka said in a press release.  "So we’d like to know what its value is...  either one of [the accepted calculations of the constant] is incorrect, or the models of the physics which underpin them are wrong.  We’d like to know what is really happening in the universe, so we need a third, independent check."

Hotokezaka and his team have found the check they were looking for in the collision of two neutron stars in a distant galaxy.  The measurements made of the gravitational waves emitted by this collision were so precise it kind of boggles the mind.  Adam Deller, of Swinburne University of Technology in Australia, who co-authored the paper, said, "The resolution of the radio images we made was so high, if it was an optical camera, it could see individual hairs on someone’s head 3 miles away."

[Image licensed under the Creative Commons ESA, Colliding neutron stars ESA385307, CC BY-SA 3.0 IGO]

Using this information, the researchers were able to home in on the Hubble constant -- reducing the uncertainty to between 65.3 and 75.6 kilometers per second per megaparsec.

Quite an improvement over 50 to 500, isn't it?

"This is the first time that astronomers have been able to measure the Hubble constant by using a joint analysis of a gravitational-wave signals and radio images,"  Hotokezaka said about the accomplishment of his team.  "It is remarkable that only a single merger event allows us to measure the Hubble constant with a high precision — and this approach relies neither on the cosmological model (Planck) nor the cosmic-distance ladder (Type Ia)."

I'm constantly astonished by what we can learn of our universe as we sit here, stuck on this little ball of spinning rock around an average star in one arm of an average galaxy.  It's a considerable credit to our ingenuity, persistence, and creativity, isn't it?  From our vantage point, we're able to gain an understanding of the behavior of the most distant objects in the universe -- and from that, deduce how everything began.

**************************************

This week's Skeptophilia book recommendation is pure fun for anyone who (like me) appreciates both plants and an occasional nice cocktail -- The Drunken Botanist by Amy Stewart.  Most of the things we drink (both alcohol-containing and not) come from plants, and Stewart takes a look at some of the plants that have provided us with bar staples -- from the obvious, like grapes (wine), barley (beer), and agave (tequila), to the obscure, like gentian (angostura bitters) and hyssop (Bénédictine).

It's not a scientific tome, more a bit of light reading for anyone who wants to know more about what they're imbibing.  So learn a little about what's behind the bar -- and along the way, a little history and botany as well.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Thursday, May 9, 2019

Into the expanse

Last week, I did a post about dark matter and dark energy -- and how those could potentially drive a reworking of what we know about physics.  Today, there's another finding that is causing some serious head-scratching amongst the physicists:

The universe may be expanding faster than we thought.  Not by a small amount, either.  The difference amounts to about 9%.  Further, this means that the universe might also be younger than we'd thought -- by almost a billion years.

This rather puzzling conclusion is the result of work by a team led by Adam Riess, of Johns Hopkins University.  At issue here is the Hubble constant, the rate of outward expansion of spacetime.  It's not an easy thing to measure.  The usual method has been to use what are called standard candles, which need a bit of explanation.

The difficulty with accurately measuring the distance to the nearest stars is a problem that's been apparent for several centuries.  If two stars appear equally bright as seen from Earth, it may be that they have the same luminosity and are at the same distance.  It's more likely, however, that they're at different distances, and the intrinsically brighter one is farther away.  But how could you tell?

For the nearest stars, we can use parallax -- the apparent movement of the star as the Earth revolves around the Sun.  Refinements in this technique have resulted in our ability to measure a parallax shift of ten microarcseconds -- a hundred-thousandth of an arcsecond, which is itself only 1/3600 of a degree.  This translates to being able to measure distances of up to 10,000 light years this way.
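
The parallax arithmetic itself is pleasantly simple: the parsec is defined so that distance is just the reciprocal of the parallax angle.  A quick sketch (the star here is hypothetical):

LY_PER_PARSEC = 3.26

p_arcsec = 0.1               # a hypothetical star with a 0.1-arcsecond parallax
d_parsecs = 1 / p_arcsec     # d (parsecs) = 1 / p (arcseconds)
print(f"{d_parsecs:.0f} parsecs, or about {d_parsecs * LY_PER_PARSEC:.0f} light years")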

But for astronomical objects that are farther away, parallax doesn't work, so you have to rely on something that tells you the star's intrinsic brightness; then you can use that information to figure out how far away it is.  There are two very common ones used:
  1. Cepheid variables.  Cepheids are a class of variable stars -- ones that oscillate in luminosity -- that have an interesting property: the period of their brightness oscillation is tightly correlated with their actual luminosity.  So once you know how fast a Cepheid is oscillating, you can calculate how bright it actually is, and from that determine how far away it is.
  2. Type Ia supernovae.  These colossal stellar explosions always peak at very nearly the same luminosity.  So when one occurs in a distant galaxy, astronomers can chart its apparent brightness peak -- and from that, determine how far away the entire galaxy is.  (The sketch below the image shows the arithmetic both methods rely on.)
A Cepheid variable [Image is in the Public Domain, courtesy of the Hubble Space Telescope]
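
Both standard candles feed the same arithmetic: compare how bright the object really is with how bright it appears, and the dimming gives the distance.  A minimal sketch using the distance modulus (the apparent magnitude here is invented):

# Distance modulus: m - M = 5 * log10(d / 10 parsecs)
M_ABSOLUTE = -19.3    # approximate peak absolute magnitude of a Type Ia supernova
m_apparent = 15.7     # hypothetical observed peak apparent magnitude

d_parsecs = 10 ** ((m_apparent - M_ABSOLUTE + 5) / 5)
print(f"distance: {d_parsecs / 1e6:.0f} megaparsecs")   # 100 Mpc for these numbers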

The standard candle method has thus allowed us to estimate the distances to other galaxies; combining that information with a galaxy's degree of red shift (a measure of how fast it's moving away from us) gives an estimate of the rate of expansion of space.

And here's where the trouble lies.  Previous measurements of the rate of expansion of space, made using information such as the three-degree microwave background radiation, have consistently given the same value for the Hubble constant and the same age of the universe -- 13.7 billion years.  Riess's measurement of standard candles in distant galaxies is also giving a consistent answer... but a different one, on the order of 12.8 billion years.

"It’s looking more and more like we’re going to need something new to explain this," Riess said.

John Cromwell Mather, winner of the 2006 Nobel Prize in Physics, was even more blunt.  "There are only two options," Mather said.  "1. We’re making mistakes we can’t find yet. 2. Nature has something we can’t find yet."

"You need to add something into the universe that we don’t know about,” said Chris Burns, an astrophysicist at the Carnegie Institution for Science.  "That always makes you kind of uneasy."

To say the least.  Throw this in with dark matter and dark energy, and you've got a significant piece of the universe that physicists have not yet explained.  It's understandable that it makes them uneasy, since finding the explanation might well mean that a sizable chunk of our previous understanding was misleading, incomplete, or simply wrong.

But it's exciting.  Gaining insight into previously unexplained phenomena is what science does.  My guess is we're awaiting some astrophysicist having a flash of insight and crafting an answer that will blow us all away, much the way that Einstein's insight -- which we now call the Special Theory of Relativity -- blew us away by reframing the "problem of the constancy of the speed of light."  Who this century's Einstein will be, I have no idea.

But it's certain that whoever it is will overturn our understanding of the universe in some very fundamental ways.

*************************************

I grew up going once a summer with my dad to southern New Mexico and southern Arizona, with the goal of... finding rocks.  It's an odd hobby for a kid to have, but I'd been fascinated by rocks and minerals since I was very young, and it was helped along by the fact that my dad did beautiful lapidary work.  So while he was poking around looking for turquoise and agates and gem-quality jade, I was using my little rock hammer to hack out chunks of sandstone and feldspar and quartzite and wondering how, why, and when they'd gotten there.

Turns out that part of the country has some seriously complicated geology, and I didn't really appreciate just how complicated until I read John McPhee's four-part series called Annals of the Former World.  Composed of Basin and Range, In Suspect Terrain, Rising from the Plains, and Assembling California, it describes a cross-country trip McPhee took on Interstate 80, accompanied along the way by various geologists, with whom he stops at every roadcut and outcrop.  As usual with McPhee's books, they concentrate on the personalities of the people he's with as much as the science.  But you'll come away with a good appreciation for Deep Time -- and how drastically our continent has changed during the past billion years.

[Note:  If you order this book using the image/link below, part of the proceeds will go to support Skeptophilia!]