Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Monday, July 10, 2023

The conservation conundrum

A major underpinning of our understanding of physics has to do with symmetry and conservation laws.

Both have to do with order, balance, and the concept that you can't get something for nothing.  A lot of the most basic research in theoretical physics is driven by the assumption that despite the seeming complexity and chaos in the universe, at its heart is a deep simplicity, harmony, and beauty. 

The mathematical expression of this concept reaches its pinnacle in the laws of conservation.

You undoubtedly ran into conservation laws in your high school science classes.  The law of conservation of matter and energy (you can move matter and energy around and change their form, but the total amount stays the same).  Conservation of charge (the total charge present at the beginning of a reaction is equal to the total charge present at the end; this one is one of the fundamental rules governing chemistry).  Conservation of momentum, conservation of angular momentum, conservation of parity.
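These laws are easy to check numerically.  Here's a minimal sketch (my own illustration, not from any textbook) of a one-dimensional elastic collision, where momentum and kinetic energy come out exactly the same before and after:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities after a 1-D elastic collision.
    Derived by solving conservation of momentum and kinetic energy."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# A 2 kg ball moving at 3 m/s hits a 1 kg ball at rest.
v1f, v2f = elastic_collision_1d(2.0, 3.0, 1.0, 0.0)

# Total momentum (m*v) and kinetic energy (m*v^2/2) are unchanged.
assert abs((2.0 * 3.0) - (2.0 * v1f + 1.0 * v2f)) < 1e-9
assert abs((0.5 * 2.0 * 3.0**2) - (0.5 * 2.0 * v1f**2 + 0.5 * 1.0 * v2f**2)) < 1e-9
```

No matter what masses and velocities you feed in, those two assertions hold -- that's the "you can't get something for nothing" bookkeeping in action.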

All of these are fairly well understood, and physicists use them constantly to make predictions about how interactions in the real world will occur.  Add to them the mathematical models of quantum physics, and you have what might well be the single most precise system ever devised by human minds.  The predictions of this system match the actual experimental measurements to a staggering accuracy of ten decimal places.  (This is analogous to your taking a tape measure to figure out the length of a two-by-four, and your answer being correct to the nearest billionth of a meter.)

So far, so good.  But there's only one problem with this.

Symmetry and conservation laws provide no explanation of how there's something instead of nothing.

We know that photons (zero charge, zero mass) can produce pairs of particles -- one matter, one antimatter, which (by definition) have opposite charges.  These particles usually crash back together and mutually annihilate within a fraction of a second, resulting in a photon with the same energy as the original one had, as per the relevant conservation laws.  Immediately after the Big Bang, the universe (such as it was) was filled with extremely high energy photons, so this pair production was going at a furious rate, with such a roiling sea of particles flying about that some of them survived being annihilated.  This, it's thought, is the origin of the matter we see around us, the matter we and everything else are made of.

But what we know about symmetry and conservation suggests that there should have been exactly equal amounts of matter and antimatter created, so very quickly, there shouldn't have been anything left but photons.  Instead, we see an imbalance -- an asymmetry -- favoring matter.  Fortunately for us, of course.

So there was some matter left over after everything calmed down.  But why?

One possibility is that when we look out at the distant stars and galaxies, some of them are actually antimatter.  On the surface, it seems like there'd be no way to tell: aside from the fact that every particle making it up would have the opposite charge (protons negative, electrons positive, and so on), antimatter would have properties identical to matter's.  (In fact, experimentally produced antihydrogen was shown in 2016 to have the same energy levels, and therefore exactly the same spectrum, as ordinary hydrogen.)  From a distance, therefore, it should look exactly like matter does.

So could there be antimatter planets, stars, and galaxies out there?  Maybe even with Evil Major Don West With A Beard?


The answer is almost certainly no.  The reason is that if there was a galaxy out there made of antimatter, then between it and the nearest ordinary matter galaxy, there'd be a boundary where the antimatter thrown off by the antimatter galaxy would be constantly running into the matter thrown off by the ordinary galaxy.  So we'd see a sheet dividing the two, radiating x-rays and gamma rays, where the matter and antimatter were colliding and mutually annihilating.  Nothing of the sort has ever been observed, so the conclusion is that what we see out in space, out to the farthest quasars, is all made of matter.

This, though, leaves us with the conundrum of how this happened.  What generated the asymmetry between matter and antimatter during the Big Bang?

One possibility, physicists thought, could be that the particles of matter themselves are asymmetrical.  If the shape or charge distribution of (say) an electron has a slight asymmetry, this would point to there being a hitherto-unknown asymmetry in the laws of physics that might favor matter over antimatter.  This conjecture is, in fact, why the topic comes up today; a paper last week in Science described an experiment at the University of Colorado Boulder to measure the electron's dipole moment -- the offset of charges within an electron.  Lots of molecules have a nonzero dipole moment; it's water's high dipole moment that gives water molecules a positive end and a negative end, so they stick together like little magnets.  A lot of water's odd properties come from the fact that it's highly polar, including why it hurts like a sonofabitch when you do a belly flop off a diving board -- you're using your body to break all of those linked molecules apart simultaneously.

What the team did was to create a strong electric field around an extremely pure collection of hafnium fluoride molecules.  If electrons did have a nonzero dipole moment -- i.e., if they were slightly egg-shaped rather than perfectly spherical -- the field would cause them to pivot into alignment with it, and the resulting torque on the molecules would be measurable.

They found that, to the limit of their considerable measuring ability, the electron's dipole moment is indistinguishable from zero -- as far as anyone can measure, electrons are perfectly spherical.

"I don’t think Guinness tracks this, but if they did, we’d have a new world record," said Tanya Roussy, who led the study.  "The new measurement is so precise that, if an electron were the size of Earth, any asymmetry in its shape would have to be on a scale smaller than an atom."

That's what I call accuracy.

On the other hand, it means we're back to the drawing board with respect to why there's something instead of nothing, which as a scientific question, is kind of a big deal.  At the moment, there don't seem to be any other particularly good candidates out there for an explanation, which is an uncomfortable position to be in.  Either there's something major we're missing in the laws of physics -- which, as I said, otherwise give stunningly accurate predictions of real-world experimental results -- or we're left with the even less satisfying answer of "it just happened that way."

But that's the wonderful thing about science, isn't it?  Scientists never write the last word on a subject and assume nothing will ever change thereafter.  There will always be new information, new perspectives, and new models, refining what we know and gradually aligning better and better with this weird, chaotic universe we live in.

So I'm not writing off the physicists yet.  They have a damn good track record of solving what appear to be intractable problems -- my guess is that sooner or later, they'll figure out the answer to this one.

****************************************



Monday, June 26, 2023

Advanced elegance

I think it's a natural human tendency to be awed by what we don't understand.

I know when I see some abstruse concept that is far beyond my grasp, I'm impressed not only by how complex the universe can be, but that there are people who can comprehend it.  I first ran into this in a big way when I was in college, and took a class called Classical Mechanics.  The topic was the mathematics of why and how objects move, how that motion affects other objects, and so on.

It was the first time in my life I had ever collided with something that regardless of my effort, I couldn't get.  The professor, Dr. Spross, was a very patient man, but his patience was up against a classical-mechanics-proof brain.  On the first exam, I scored a 19.

Percent.

And I'm convinced that he had dredged up the 19 points from somewhere so I wouldn't end up with a single-digit score.  I ended that class with a C-, which I think Dr. Spross gave me simply because he didn't want me back again the following semester, spending another four months ramming my poor physics-deficient head up against a metaphorical brick wall.

There's one memory that stands out from that experience, over forty years ago, besides the overwhelming frustration.  It was when Dr. Spross introduced the concept of the "Hamiltonian function," a mathematical framework for analyzing motion.  He seemed so excited about it.  It was, he said, an incredibly elegant way to consider velocity, acceleration, force, momentum, and so on.  So I thought, "Cool!  That sounds pretty interesting."

Following that cheerful thought was an hour and a half of thinking, "I have no fucking idea what any of this means."  It was completely opaque.  The worst part was that a number of my classmates were nodding their heads, writing stuff down, and seemed to get it with no problem.

So either I was the only complete dunderhead in the class, or they were just better at hiding their dismay than I was.

Anyhow, I think that was the moment I realized a career in research physics was not in the cards for me.

To this day, the "Hamiltonian function" remains something that in my mind symbolizes the Unknowable.  I have deep and abiding admiration for people for whom that concept makes sense (first and foremost, William Rowan Hamilton, who developed it).  And I'm sure it is elegant, just as Dr. Spross said.  But experiencing that elegance was (and probably still is) entirely beyond me.

It's this tendency to find what we can't understand awe-inspiring that has led to the idea of the God of the gaps -- in which gaps in our scientific knowledge are attributed to the incomprehensible hand of the divine.  Theologian Dietrich Bonhoeffer realized what the problem with this was, at least for people who are religious:
How wrong it is to use God as a stop-gap for the incompleteness of our knowledge.  If in fact the frontiers of knowledge are being pushed further and further back (and that is bound to be the case), then God is being pushed back with them, and is therefore continually in retreat.  We are to find God in what we know, not in what we don't know.
Anyhow, that was a long-winded preamble as an explanation of why all of this comes up in today's post.  I immediately thought of the awe-inspiring nature of what we don't understand when I read an article yesterday about two researchers at the University of Rochester, Tamar Friedmann and Carl Hagen, who found that a method for calculating the energy levels of a hydrogen atom generates the well-known number pi.

[Image is in the Public Domain]

It turns out to have something to do with a mathematical function called the Wallis product, which says that you can generate π/2 by a simple series of multiplications:
π/2 = (2/1) x (2/3) x (4/3) x (4/5) x (6/5) x (6/7) x (8/7) x (8/9)....
The pattern is that the numerators of the fractions are 2, 2, 4, 4, 6, 6, 8, 8... and the denominators 1, 3, 3, 5, 5, 7, 7, 9, 9...  And the cool thing is, the more terms you add, the closer you get to π/2.
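You can watch that convergence happen numerically.  Here's a quick sketch (my own, purely to illustrate the product) in Python:

```python
import math

def wallis_pi(n_pairs):
    """Approximate pi via the Wallis product:
    pi/2 = (2/1)(2/3) * (4/3)(4/5) * (6/5)(6/7) * ...
    Each pass through the loop multiplies in one pair of fractions."""
    product = 1.0
    for k in range(1, n_pairs + 1):
        product *= (2 * k) / (2 * k - 1) * (2 * k) / (2 * k + 1)
    return 2 * product  # the product itself converges to pi/2

for n in (10, 1000, 100000):
    print(n, wallis_pi(n))
# The output creeps up toward pi = 3.14159... as n grows --
# slowly, since the error shrinks only like 1/n.
```

It's a terrible way to actually compute pi (you need about 100,000 pairs just to get four decimal places), but that's beside the point; the wonder is that it works at all.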

Now, as for why this is so... well, I tried reading the explanation, and my eyes started spinning.  And I've taken lots of math courses, including calculus and differential equations, and like I said earlier, I majored in physics (as much of a mistake as that turned out to be).  But when I took a look at the paper about the energy levels of hydrogen and the Wallis product and gamma functions, I almost could hear Dr. Spross's voice, explaining it in a tone implying it would be immediately clear to a small child, or even an unusually intelligent dog.

And all of those feelings from Classical Mechanics came bubbling up to the surface.

So I'm left with being a little in awe about it all.  Somehow, even though I have no real understanding of why, the same number that I learned about in geometry class as the ratio between a circle's circumference and its diameter shows up in the energy levels of hydrogen atoms.  Predictably, I'm not inclined to attribute such correspondences to the hand of the divine, but they do seem to be (in Dr. Spross's words) "elegant."  And even if I never get much beyond that, I can still appreciate the minds of the people who can.

****************************************



Monday, April 18, 2022

Sending pucks to Bolivia

Over the last few days I've been reading physicist Sean Carroll's wonderful book Something Deeply Hidden, which is about quantum physics, and although a lot of it (so far) is at least familiar to me in passing, he has a way of explaining things that is both direct and simultaneously completely mind-blowing.

I'm thinking especially of the bit I read last night, about the fact that even the physicists are unsure what quantum mechanics is really describing.  It's not that it doesn't work; the model has been tested every different way you can think of (and probably ones neither one of us would have thought of), and it's passed every test, often to levels of precision other realms of physics can only dream of.  The equations work; there's no doubt about that.  But what is it, exactly, that they're describing?

Here's the analogy he uses.  Suppose there was some physicist who was able to program a computer with all of Newton's laws of motion and the other equations of macroscopic physics that have been developed since Newton's time.  So if you wanted to know anything about the position, velocity, momentum, or energy of an object, all you have to do is input the starting conditions, and the computer will spit out the final state after any given amount of time elapsed.

A simple example: a cannon fires a cannonball with an initial velocity of 150 m/s at an angle of 45 degrees above the horizontal.  The (constant) acceleration due to gravity is -9.8 m/s^2 (the negative sign is because the acceleration vector points downward).  Ignoring air resistance, what is the highest point in its trajectory?

And the computer spits out 574.0 meters.
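Under the hood, all the computer is doing is evaluating the closed-form kinematics result, h = (v₀ sin θ)² / (2g).  A few lines of Python (my own sketch, not anything from Carroll's book) do the same thing:

```python
import math

def max_height(v0, angle_deg, g=9.8):
    """Peak height of a projectile, ignoring air resistance:
    h = (v0 * sin(theta))^2 / (2 * g)."""
    vy = v0 * math.sin(math.radians(angle_deg))  # vertical velocity component
    return vy ** 2 / (2 * g)

print(round(max_height(150, 45), 1))  # -> 574.0  (with g = 9.8 m/s^2)
```

The point, of course, isn't the arithmetic; it's that you could run this without the faintest idea why squaring the vertical velocity and dividing by 2g gives the answer.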

Now, anyone who took high school physics could figure this out with a few calculations.  But the point Carroll makes is this: could someone input numbers like that into the software, and get an output number, without having any clue what the model is actually doing?

The answer, of course, is yes.  You might even know what the different variables mean, and know that your answer is "maximum height of the cannonball," and that when you check, the answer is right.  But as far as knowing why it works, or even what's happening in the system that makes it work, you wouldn't have any idea.

That's the situation we're in with quantum physics.

And of course, quantum physics is a hell of a lot less intuitive than Newtonian mechanics.  I think the piece of it that always boggles me the most is the probabilistic nature of matter and energy on the submicroscopic level.

Let me give you an example, analogous to the cannonball problem.  Given a certain set of conditions, what is the position of an electron?

The answer -- which, to reiterate, has been confirmed experimentally countless times -- is that prior to observation, the electron kind of isn't anywhere in particular.  Or it's kind of everywhere at once, which amounts to the same thing.  Electrons -- and all other forms of matter and energy -- are actually fields of probabilities.  You can calculate those probabilities to as many decimal places as you like, and it gives phenomenally accurate predictions.  (In fact, the equations describing those probabilities have a load of real-world applications, including semiconductors, microchips, and lasers.)  But even so, there's no doubt that it's weird.  Let's say you repeatedly measure electron positions hundreds or thousands of times, and plot those points on a graph.  The results conform perfectly to Schrödinger's wave equation, the founding principle of quantum physics.  But each individual measurement is completely uncertain.  Prior to measurement, the electron really is just a smeared-out field of probabilities; after measurement, it's localized to one specific place.

Now, let me point out something that this isn't saying.  Quantum physics is not claiming that the electron actually is in a specific location, and we simply don't have enough information to know where.  This is not an issue of ignorance.  This was shown without any question by the famous double-slit experiment, where photons are shot through a pair of closely-spaced slits, and what you see at the detector on the other side is an interference pattern, as if the photons are acting like waves -- basically, going through both slits at the same time.  You can even shoot one photon at a time through the slits, and the detector (once again after many photons are launched through), still shows an interference pattern.  Now, change one thing: add another detector at each slit, so you know for sure which slit each photon went through.  When you do that, the interference pattern disappears.  The photons, apparently, aren't little packets of energy; they're spread-out fields of probabilities, and when they're moving they take all possible paths to get from point A to point B simultaneously.  If you don't observe its path, what you measure is the sum of all the possible paths the photon could have taken; only if you observe which slit it went through do you force it to take a single path.
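You can get a feel for the statistical side of this with a toy simulation.  The sketch below is entirely my own illustration, with made-up slit and screen dimensions; it samples single "photon" detection positions from the standard two-slit intensity pattern, I(x) ∝ cos²(πdx/λL).  No individual hit tells you anything, but pile up thousands of them and the fringes appear:

```python
import math
import random

def double_slit_hits(n_photons, wavelength=500e-9, slit_sep=50e-6,
                     screen_dist=1.0, half_width=0.02, seed=42):
    """Sample detection positions (meters) on the screen from the two-slit
    interference distribution I(x) ~ cos^2(pi * d * x / (lambda * L)),
    via rejection sampling.  All the dimensions here are invented
    for illustration."""
    rng = random.Random(seed)
    hits = []
    while len(hits) < n_photons:
        x = rng.uniform(-half_width, half_width)  # candidate screen position
        intensity = math.cos(math.pi * slit_sep * x /
                             (wavelength * screen_dist)) ** 2
        if rng.random() < intensity:  # accept in proportion to the intensity
            hits.append(x)
    return hits

hits = double_slit_hits(5000)
# With these numbers the fringe spacing is lambda*L/d = 10 mm, so hits
# pile up at the bright fringes (x = 0, 10 mm, ...) and avoid the dark
# ones (x = 5 mm, 15 mm, ...) -- even though each "photon" lands at a
# single, unpredictable point.
```

This is only the statistics, of course -- the code knows nothing about why a single photon "takes all paths" -- but it shows how a perfectly lawful pattern can emerge from individually random events.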

It's as if when Wayne Gretzky winds up for a slap shot, the puck travels from his stick to the net taking every possible path, including getting there via Bolivia, unless you're following it with a high-speed camera -- if you do that, the puck only takes a single path.

If you're saying, "what the hell?" -- well, so do we all.  The most common interpretation of this -- called the Copenhagen interpretation, after the place it was dreamed up -- is that observing the electron "collapses the wave function," meaning that it forces the electron to condense into a single place described by a single path.  But this opens up all sorts of troublesome questions.  Why does observation have that effect?  What counts as an observer?  Does it have to be a sentient being?  If a photon lands on the retina of a cat, does its wave function collapse?  What if the photon is absorbed by a rock?  Most importantly -- what is actually happening that makes the wave function collapse in the first place?

To add to the mystery, there's also the Heisenberg uncertainty principle, which states that for certain pairs of variables -- most famously, position and momentum -- you can't know both of them to high precision at the same time.  The more you know about a particle's position, the less you can know even theoretically about its momentum.  Or, more accurately, if you pinpoint a particle's position, its momentum can only be described as a wide field of probabilities.  And vice versa.
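And the principle is quantitative: Δx·Δp ≥ ħ/2.  A back-of-the-envelope sketch (mine, using the standard value of ħ) shows why this matters for electrons but not for hockey pucks:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_momentum_spread(delta_x):
    """Heisenberg bound: the smallest possible momentum uncertainty
    (kg*m/s) for a particle localized to within delta_x meters."""
    return HBAR / (2 * delta_x)

# Confine an electron to roughly an atom's width (~1e-10 m)...
dp = min_momentum_spread(1e-10)
m_electron = 9.109e-31  # electron mass, kg
# ...and its velocity is uncertain by dp/m -- hundreds of km/s.
print(dp, dp / m_electron)

# The same bound applied to a 170 g puck localized to a micron gives a
# velocity uncertainty around 1e-28 m/s: utterly unobservable.
```

That's why quantum weirdness stays hidden at the scale of pucks and cannonballs: ħ is just absurdly small compared to everyday momenta.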

I think the passage in Carroll's book that made me the most astonished was the following summation of all this:

Classical [Newtonian] mechanics offers a clear and unambiguous relationship between what we see and what the theory describes.  Quantum mechanics, for all its successes, offers no such thing.  The enigma at the heart of quantum reality can be summed up in a simple motto: what we see when we look at the world seems to be fundamentally different from what actually is.

So.  Yeah.  You can see why I was kind of wide-eyed, and I'm not even a quarter of the way through the book yet.  

Anyhow, maybe we should lighten things up by ending with my favorite joke.

Schrödinger and Heisenberg are out for a drive, with Heisenberg at the wheel.  After a while, they get pulled over by a cop.

The cop says to Heisenberg, "Do you have any idea how fast you were going?"

Heisenberg replies, "No, but I know exactly where I am."

The cop says, "You were going 85 miles an hour!"

Heisenberg throws his hands up in the air and says, "Great!  Now I'm lost!"

The cop by this time is getting pissed off, and says, "Fine, if you're going to be a smartass, I'm gonna search your car."  So he opens the trunk, and in the trunk is a dead cat.

The cop says, "Did you know there's a dead cat in your trunk?"

Schrödinger says, "Well, there is now."

Thanks.  You've been a great audience.  I'll be here all week.

**************************************
