Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, February 2, 2024

Going against the flow

Two of the most extensively tested laws of physics are the First and Second Laws of Thermodynamics -- and in the nearly two centuries since they were first formulated, not a single exception has been found.

The First Law is the less shocking one.  It's sometimes called the Law of Conservation of Matter and Energy, and says simply that in a closed system, the total amount of matter and energy does not change.  You can turn one into the other, or change its form, but the total quantity doesn't vary.  Unsurprising, and in fact can seem a little circular given that this is how a closed system is defined in the first place.

The Second Law is where things get interesting.  It can be formulated a variety of ways, but the simplest is that in a closed system, the amount of entropy (disorder) always increases.  If entropy is being decreased somewhere (the system is becoming more orderly) it always requires (1) an input of energy, and (2) that somewhere else entropy is increasing, and that increase is larger than the localized decrease.  An example is the human body.  When you develop from a single fertilized egg cell to an adult, your overall entropy decreases significantly.  But in the process, you are taking the food molecules you eat and (1) extracting their energy, and (2) increasing their entropy monumentally by chopping them up into little pieces and strewing the pieces about.  So you're able to locally decrease your own entropy, but you leave behind a trail of chaos wherever you go.
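
If it helps to see that bookkeeping written out, the standard way to state it (nothing fancier than the prose above) is

\[
\Delta S_{\text{total}} \;=\; \Delta S_{\text{you}} \;+\; \Delta S_{\text{surroundings}} \;\geq\; 0,
\]

so your own entropy change can be negative only because the entropy change of the chopped-up food molecules is positive and bigger in magnitude.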

Or, as my thermodynamics professor in college put it a lot of years ago: the First Law says you can't win; the Second Law says you can't break even.  Which explains why the United States Patent Office's official policy is that any application claiming a working model of a perpetual motion machine goes directly into the trash without being read any further.

The Carnot Heat Engine [Image is in the Public Domain]

All of this is by way of background for a paper that I ran across in Science, called "Heat Flowing From Cold to Hot Without External Intervention by Using a 'Thermal Inductor'," by Andreas Schilling, Xiaofu Zhang, and Olaf Bossen of the University of Zürich.  Because in this paper, the three physicists have demonstrated the passage of heat energy from a colder object to a warmer one, without any external energy input -- something first shown as impossible by French physicist Sadi Carnot in 1824.

The authors write:
The cooling of boiling water all the way down to freezing, by thermally connecting it to a thermal bath held at ambient temperature without external intervention, would be quite unexpected.  We describe the equivalent of a “thermal inductor,” composed of a Peltier element and an electric inductance, which can drive the temperature difference between two bodies to change sign by imposing inertia on the heat flowing between them, and enable continuing heat transfer from the chilling body to its warmer counterpart without the need of an external driving force.
When I read this, I sat up, squinted at my computer screen, and uttered an expression of surprise that I will leave to your imagination.  In my AP Biology class, I always described the Laws of Thermodynamics as two of the most unshakeable laws of science -- two rules that are never, ever broken.  The idea that three scientists in Switzerland had taken a simple Peltier element -- a type of heat pump often found in refrigerators -- and made it run without expending any energy was earth-shattering.

But before you dust off your plans for a perpetual motion machine, read the next lines in the paper:
We demonstrate its operation in an experiment and show that the process can pass through a series of quasi-equilibrium states while fully complying with the second law of thermodynamics.  This thermal inductor extends the analogy between electrical and thermal circuits and could serve, with further progress in thermoelectric materials, to cool hot materials well below ambient temperature without external energy supplies or moving parts.
I'm not going to claim I fully understand how this all works, and how despite the system's bizarre behavior it still obeys the Second Law, but apparently the key point is that despite the heat energy flowing the "wrong way," the system still gains entropy overall.

Which, I must say, was a bit of a relief.

It's still a pretty fantastic discovery.  "With this very simple technology, large amounts of hot solid, liquid or gaseous materials could be cooled to well below room temperature without any energy consumption," study co-author Andreas Schilling said, in a press release from Phys.org.  "Theoretically, this experimental device could turn boiling water to ice, without using any energy."

So don't believe any of the hype that I'm already seeing on dubiously accurate websites, to the effect that "An Exception Has Been Discovered to the Laws of Thermodynamics!  Physicists Dismayed!  Textbooks Will Have to be Rewritten!"  It's a curiosity, sure, and pretty cool, and sounds like it will have a good many applications, but you shouldn't discount everything you learned in physics class quite yet.

****************************************



Thursday, November 23, 2023

Dreaming the past

My novel In the Midst of Lions opens with a character named Mary Hansard -- an ordinary forty-something high school physics teacher -- suddenly realizing she can see the future.

More than that, really; she now has no reliable way of telling the future from the past.  She "remembers" both of them, and if she has no external context by which to decide, she can't tell if what's in her mind occurred in the past or will occur in the future.  Eventually, she realizes that the division of the passage of time she'd always considered real and inviolable has changed.  Instead of past, present, and future, there are now only two divisions: present and not-present.  Here's how she comes to see things:

In the past two months, it felt like the universe had changed shape.  The linear slow march of time was clean gone, and what was left was a block that was unalterable, the people and events in it frozen in place like butterflies in amber.  Her own position in it had become as observer rather than participant.  She could see a wedge of the block, extending back into her distant past and forward into her all-too-short future.  Anything outside that wedge was invisible...  She found that it completely dissolved her anxiety about what might happen next.  Being not-present, the future couldn’t hurt her.  If pain lay ahead of her, it was as removed from her as her memories of a broken arm when she was twelve.  Neither one had any impact on the present as it slowly glided along, a moving flashlight beam following her footsteps through the wrecked cityscape.

 I found myself thinking about Mary and her peculiar forwards-and-backwards perception while I was reading physicist Sean Carroll's wonderful and mind-blowing book From Eternity to Here: A Quest for the Ultimate Theory of Time, which looks at the puzzling conundrum of what physicists call time's arrow -- why, when virtually all physical laws are time-reversible, there is a clear directionality to our perceptions of the universe.  A classic example is the motion of billiard balls on a table.  Each ball's individual motion is completely time-reversible (at least if you discount friction with the table); if you filmed a ball rolling and bouncing off a bumper, then ran the recording backwards, it would be impossible to tell which was the original video and which was the reversed one.  The laws of motion make no differentiation between time running forward and time running backward.

But.

If you played a video of the initial break of the balls at the beginning of the game, then ran the recording backwards -- showing the balls rolling around and, after a moment, assembling themselves back into a perfect triangle -- it would be blatantly obvious which was the reversed video.  The difference, Carroll explains, is entropy, which is a measure of the number of possible ways a system can exist and be indistinguishable on the macro level.  What I mean by this is that the racked balls are in a low-entropy state; there aren't that many ways you can assemble fifteen balls into a perfect equilateral triangle.  On the other hand, after the break, with the balls scattered around the table seemingly at random, there is an astronomically large number of ways you can have the balls arranged that would be more or less indistinguishable, in the sense that any of them would be equally likely to occur following the break.  Given photographs of thousands of different positions, not even Commander Data could determine which one was the pic taken immediately after the balls stopped moving.

Sure, it's possible you could get all the balls rolling in such a way that they would come to rest reassembled into a perfect triangle.  It's just extremely unlikely.  The increase in entropy, it seems, is based on what will probably happen.  There are so many high-entropy states and so few low-entropy states that if you start with a low-entropy arrangement, the chances are it will evolve over time into a high-entropy one.  The result is that it is (very) strongly statistically favored that entropy increases over time.  
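
To put a rough number on "so many high-entropy states," here's a toy calculation in Python.  The discretization of the table and the count of "racked-looking" arrangements are arbitrary assumptions of mine, just to make the counting concrete:

from math import comb

# Toy model: discretize the table into 1,000 cells and scatter 15
# indistinguishable balls into distinct cells at random.
cells = 1000
balls = 15

total_arrangements = comb(cells, balls)   # every way to scatter the balls

# Suppose a generous 100 of those arrangements look like a neatly racked
# triangle (allowing for different positions and orientations).
racked_arrangements = 100

print(f"Total arrangements: {total_arrangements:.2e}")
print(f"Chance a random one looks racked: {racked_arrangements / total_arrangements:.2e}")

Even with those generous assumptions, the chance of stumbling onto a racked-looking arrangement comes out around one in ten million trillion trillion.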

The Arrow of Time by artist benpva16 [Image licensed under the Creative Commons BY-NC-ND 3.0 license: creativecommons.org/licenses/b…]

The part of the book that I am still trying to parse is chapter nine, "Information and Life," where he ties the physical arrow of time (an example of which I described above) with the psychological arrow of time.  Why can't we all do what Mary Hansard can do -- see the past and future both -- if the only thing that keeps us knowing which way is forward and which way is backward is the probability of a state's evolution?  After all, there are plenty of cases where entropy can locally go down; a seed growing into a tree, for example.  (This only occurs because of a constant input of energy; contrary to what creationists would have you believe, the Second Law of Thermodynamics doesn't disprove evolution, because living things are open systems and require an energy source.  Turn off the Sun, and entropy would increase fast.)

So if entropy actually explains the psychological arrow of time, why can I remember events where entropy went down -- such as yesterday, when I took a lump of clay and fashioned it into a sculpture?

Carroll's explanation kind of made my mind blow up.  He says that our memories themselves aren't real reflections of the past; they're built from the present state of objects in our environment and of neural firings in our brain, which we then assemble into a picture of what we think the past was, based on our assumption that entropy was lower in the past than it is now.  He writes:

So let's imagine you have in your possession something you think of as a reliable record of the past: for example, a photograph taken of your tenth birthday party.  You might say to yourself, "I can be confident that I was wearing a red shirt at my tenth birthday party, because this photograph of that event shows me wearing a red shirt."...

[Is] the present macrostate including the photo... enough to conclude with confidence that we were really wearing a red shirt at our tenth birthday party?

Not even close.  We tend to think that [it is], without really worrying about the details too much as we get through our lives.  Roughly speaking, we figure that a photograph like that is a highly specific arrangement of its constituent molecules.  (Likewise for a memory in our brain of the same event.)  It's not as if those molecules are just going to randomly assemble themselves into the form of that particular photo -- that's astronomically unlikely.  If, however, there really was an event in the past corresponding to the image portrayed in the photo, and someone was there with a camera, then the existence of the photo becomes relatively likely.  It's therefore very reasonable to conclude that the birthday party really did happen in the way seen in the photo.

All of those statements are reasonable, but the problem is that they are not nearly enough to justify the final conclusion...  Yes, the photograph is a very specific and unlikely arrangement of molecules.  However, the story we are telling to "explain" it -- an elaborate reconstruction of the past, involving birthday parties and cameras and photographs surviving essentially undisturbed to the present day -- is even less likely than the photo all by itself...

Think of it this way: You would never think to appeal to some elaborate story in the future to explain the existence of a particular artifact in the present.  If we ask about the future of our birthday photo, we might have some plans to frame it or whatnot, but we'll have to admit to a great deal of uncertainty -- we could lose it, it could fall into a puddle and decay, or it could burn in a fire.  Those are all perfectly plausible extrapolations of the present state into the future, even with the specific anchor point provided by the photo here in the present.  So why are we so confident about what the photo implies concerning the past?

The answer, he says, is that we're relying on probability and the likelihood that the past had lower entropy -- in other words, that the photo didn't come from some random collision of molecules, just as our surmise about the billiard balls' past came from the fact that a perfect triangular arrangement is way less likely than a random one.  All we have, Carroll says, is our knowledge of the present; everything else is an inference.  In every present moment, our reconstruction of the past is a dream, pieced together using whatever we're experiencing at the time.

So maybe we're not as different from Mary Hansard, with her moving flashlight beam gliding along and spotlighting the present, as I'd thought.

Mind = blown.

I'm still not completely convinced I'm understanding all the subtleties in Carroll's arguments, but I get enough of it that I've been thinking about it ever since I put the book down.  But in any case, I'd better wrap this up, because...

... I'm running short on time.

****************************************



Wednesday, November 10, 2021

Can't win, can't break even

Dear readers,

I'm going to take a short break from Skeptophilia -- my next post will be Thursday, November 18.  I'll still be lining up topics during the time I'm away, so keep those suggestions coming!

cheers,

Gordon

**********************************

One of the most misunderstood laws of physics is the Second Law of Thermodynamics.

Honestly, I understand why.  It's one of those bits of science that seem simple at first glance; then the more you learn, the weirder it gets.  The simplest way to state the Second Law is "systems tend to proceed toward disorder," so on the surface it's so common-sensical that it triggers nothing more than a shrug and, "Well, of course."  But a lot of its ramifications are seriously non-intuitive, and a few are downright mindblowing.

The other problem with it is that it exists in multiple formulations that seem to have nothing to do with one another.  These include:
  • the aforementioned statement that without an energy input, over time, systems become more disordered.
  • if you place a warm object and cool object in contact with each other, energy will flow from the warmer to the cooler; the warmer object will cool off, and the cooler one will heat up, until they reach thermal equilibrium (equal temperatures).
  • no machine can run at 100% efficiency (i.e., turning all of its energy input into usable work).
  • some processes are irreversible; for example, there's nothing odd about knocking a wine glass off the table and shattering it, but if you were watching and the shards gathered themselves back together and leapt off the floor and back onto the table as an intact wine glass, you might wonder if all you'd been drinking was wine.
The fact that all of these are, at their basis, different ways of stating the same physical law is not obvious.

For me, the easiest way to understand the "why" of the Second Law has to do with a deck of playing cards.  Let's say you have a deck in order: each suit arranged from ace to king, and the four suits in the order hearts, spades, diamonds, clubs.  How many possible ways are there to arrange the cards in exactly that way?

Duh.  Only one, by definition.

Now, let's say you accidentally drop the deck, then pick it up.  Unless you flung the deck across the room, chances are there will still be some of the cards in the original order, but some of the orderliness will probably have been lost.  Why?  Because there's only a single way to arrange the cards in the order you started with, but there are lots of ways to have them mostly out of order.  The chance of jumping from the single orderly state to one of the many disorderly states is a near certainty.  Then you drop the deck again (you're having a clumsy day, apparently).  Are the cards more likely to become more disordered or more orderly?

You see where this is going: since at each round there are way more disorderly states than orderly ones, just by the laws of statistics you're almost certainly going to watch the deck become progressively more disordered.  Yes, it's possible that you could toss a completely random deck in the air and have it land in ace-through-king, hearts-spades-diamonds-clubs order -- but if you're waiting for that to happen by random chance, you're going to have a long wait.
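
If you want to watch that statistical slide happen, here's a small Python sketch.  The model of a "drop" (a handful of random swaps) and the disorder measure (how many cards are out of their original position) are both stand-ins of my own, not anything rigorous:

import random
from math import factorial

print(f"A 52-card deck has {factorial(52):.2e} possible orderings; exactly one is 'in order'.")

deck = list(range(52))          # the deck in perfect order
original = deck[:]

def out_of_place(d):
    """How many cards are not where they started."""
    return sum(1 for i, card in enumerate(d) if card != original[i])

random.seed(42)
for drop in range(1, 11):
    for _ in range(10):         # model each clumsy drop as ten random swaps
        i, j = random.sample(range(52), 2)
        deck[i], deck[j] = deck[j], deck[i]
    print(f"After drop {drop}: {out_of_place(deck)} of 52 cards out of place")

Run it and the count climbs quickly toward the maximum and stays there -- the deck wanders into disorder and essentially never wanders back.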

You can, of course, force them back into order by painstakingly rearranging the cards, but that takes an input of energy (in the form of your brain and muscles using up chemical energy to accomplish it).  And here's where it gets weird: if you were to measure the decrease in entropy (disorder) in the deck of cards as you rearranged them, it would be outweighed by the increase in entropy of the energy-containing molecules you burned through to do it.  The outcome: you can locally and temporarily decrease entropy, but only at the expense of creating more entropy somewhere else.  Everything we do makes things more chaotic, and any decrease in entropy we manage is local and temporary, paid for by a bigger increase elsewhere.  In the end, entropy always wins.

As my long-ago thermodynamics professor told us, "The First Law of Thermodynamics says that you can't win.  The Second Law says you can't break even."

Hell of a way to run a casino, that.

[Image is in the Public Domain]

The reason this all comes up is a paper that a friend of mine sent me a link to, which looks at yet another way of characterizing the Second Law; instead of heat transfer or overall orderliness, it considers entropy as a measure of information content.  The less information you need to describe a system, the lower its entropy; in the example of the deck of cards, I was able to describe the orderly state in seven words (ace-through-king, hearts-spades-diamonds-clubs).  High-entropy states require a lot of information; pick any of the out-of-order arrangements of the deck of cards, and pretty much the only way to describe it is to list each card individually from the top of the deck to the bottom.
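
Here's a tiny Python illustration of that idea.  The "description scheme" (listing maximal runs of consecutive cards) is an arbitrary choice of mine, but it shows how the ordered deck collapses to a one-item description while a shuffled one needs nearly a card-by-card listing:

import random

def runs(deck):
    """Describe a deck as maximal runs of consecutive card values."""
    description = []
    start = prev = deck[0]
    for card in deck[1:]:
        if card == prev + 1:
            prev = card
        else:
            description.append((start, prev))
            start = prev = card
    description.append((start, prev))
    return description

ordered = list(range(52))
shuffled = ordered[:]
random.seed(7)
random.shuffle(shuffled)

print("Ordered deck:", len(runs(ordered)), "run(s) needed to describe it")    # 1
print("Shuffled deck:", len(runs(shuffled)), "runs needed to describe it")    # around 50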

The current paper has to do with information stored inside machines, and like many formulations of the Second Law, it results in some seriously weird implications.  Consider, for example, a simple operation on a calculator -- 2+2, for example.  When you press the "equals" sign, and the calculator tells you the answer is four, have you lost information, or gained it?

Most people, myself included, would have guessed that you've gained information; you now know that 2+2=4, if you didn't already know that.  In a thermodynamic sense, though, you've lost information.  When you get the output (4), you irreversibly erase the input (2+2).  Think about going the other way, and it becomes clearer; someone gives you the output (4) and asks you what the input was.

No way to tell.  There are, in fact, an infinite number of arithmetic operations that would give you the answer "4".  What a calculator does is time-irreversible.  "Computing systems are designed specifically to lose information about their past as they evolve," said study co-author David Wolpert, of the Santa Fe Institute.
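
A quick way to see the irreversibility is to enumerate inputs that all collapse onto the same output.  (I'm restricting to pairs of small non-negative whole numbers only to keep the list finite; the real point is that the map from inputs to outputs is many-to-one.)

# Every pair of non-negative integers (a, b) with a + b == 4:
target = 4
preimages = [(a, target - a) for a in range(target + 1)]
print(preimages)   # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]

# Given only the output "4", there's no way to tell which pair -- let alone
# which of the infinitely many other expressions -- produced it.  Addition
# discards that information, which is what makes it irreversible.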

By reducing the information in the calculator, you're decreasing its entropy (the answer contains less information than the input did).  And that means the calculator has to increase entropy somewhere else -- in this case, by heating up the surrounding air.

And that's one reason why your calculator gets warm when you use it.  "There's this deep relationship between physics and information theory," said study co-author Artemy Kolchinsky.  "If you erase a bit of information, you have to generate a little bit of heat."
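
That "little bit of heat" has a well-defined floor, known as the Landauer limit: erasing one bit of information dissipates at least k_B·T·ln 2 of energy.  Here's the back-of-the-envelope number, assuming roughly room temperature:

import math

k_B = 1.380649e-23          # Boltzmann constant, in joules per kelvin
T = 300                     # roughly room temperature, in kelvin

energy_per_bit = k_B * T * math.log(2)
print(f"Minimum heat to erase one bit at {T} K: {energy_per_bit:.2e} J")
# About 2.9e-21 joules -- tiny, but real circuits dissipate many orders of
# magnitude more than this, which is part of why your calculator warms up.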

But if everything you do ultimately increases the overall entropy, what does that say about the universe as a whole?

The implication is that the entire universe's entropy was at a minimum at its creation in the Big Bang -- that it started out extremely ordered, with very low information content.  Everything that's happened since has stirred things up and made them more chaotic (i.e., requiring more information for a complete description).  Eventually, the universe will reach a state of maximal disorder, and after that, it's pretty much game over; you're stuck there for the foreseeable future.  This state goes by the cheerful name the "heat death of the universe."

Not to worry, though.  It won't happen for a while, and we've got more pressing matters to attend to in the interim.

To end on a positive note, though -- going back to our original point that entropy increases because jumping from a disordered state back to an orderly one is so overwhelmingly unlikely -- recall that the chance isn't zero, it's just really, really, really small.  So once the heat death of the universe has occurred, there is a non-zero chance that it will spontaneously come back together into a second very-low-entropy singularity, at which point the whole thing starts over.  Yeah, it's unlikely, but once the universe is in heat death, it's not like it's got much else to do besides wait.

*********************************************

If Monday's post, about the apparent unpredictability of the eruption of the Earth's volcanoes, freaked you out, you should read Robin George Andrews's wonderful new book Super Volcanoes: What They Reveal About the Earth and the Worlds Beyond.

Andrews, a science journalist and trained volcanologist, went all over the world interviewing researchers on the cutting edge of the science of volcanoes -- including those that occur not only here on Earth, but on the Moon, Mars, Venus, and elsewhere.  The book is fascinating enough just from the human aspect of the personalities involved in doing primary research, but looks at a topic it's hard to imagine anyone not being curious about; the restless nature of geology that has generated such catastrophic events as the Yellowstone Supereruptions.

Andrews does a great job not only demystifying what's going on inside volcanoes and faults, but informing us how little we know (especially in the sections on the Moon and Mars, which have extinct volcanoes scientists have yet to completely explain).  Along the way we get the message, "Will all you people just calm down a little?", particularly aimed at the purveyors of hype who have for years made wild claims about the likelihood of an eruption at Yellowstone occurring soon (turns out it's very low) and the chances of a supereruption somewhere causing massive climate change and wiping out humanity (not coincidentally, also very low).

Volcanoes, Andrews says, are awesome, powerful, and fascinating, but if you have a modicum of good sense, nothing to fret about.  And his book is a brilliant look at the natural process that created a great deal of the geology of the Earth and our neighbor planets -- plate tectonics.  If you are interested in geology or just like a wonderful and engrossing book, you should put Super Volcanoes on your to-read list.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Thursday, April 29, 2021

Watching the clock

 If I had to pick the scientific law that is the most misunderstood by the general public, it would have to be the Second Law of Thermodynamics.

The First Law of Thermodynamics says that the total quantity of energy and mass in a closed system never changes; it's sometimes stated as, "Mass and energy cannot be destroyed, only transformed."  The Second Law states that in a closed system, the total disorder (entropy) always increases.  As my long-ago thermodynamics professor put it, "The First Law says you can't win; the Second Law says you can't break even."

Hell of a way to run a casino, that.

So far, there doesn't seem to be anything particularly non-intuitive about this.  Even from our day-to-day experience, we can surmise that the amount of stuff seems to remain pretty constant, and that if you leave something without maintenance, it tends to break down sooner or later.  But the interesting (and less obvious) side starts to appear when you ask the question, "If the Second Law says that systems tend toward disorder, how can a system become more orderly?  I can fling a deck of cards and make them more disordered, but if I want I can pick them up and re-order them.  Doesn't that break the Second Law?"

It doesn't, of course, but the reason why is quite subtle, and has some pretty devastating implications.  The solution to the question comes from asking how you accomplish re-ordering a deck of cards.  Well, you use your sensory organs and brain to figure out the correct order, and the muscles in your arms and hands (and legs, depending upon how far you flung them in the first place) to put them back in the correct order.  How did you do all that?  By using energy from your food to power the organs in your body.  And to get the energy out of those food molecules -- especially glucose, our primary fuel -- you broke them to bits and jettisoned the pieces after you were done with them.  (When you break down glucose to extract the energy, a process called cellular respiration, the bits left are carbon dioxide and water.  So the carbon dioxide you exhale is actually broken-down sugar.)
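
For the record, the overall reaction being described is the standard summary equation for aerobic respiration:

\[
\mathrm{C_6H_{12}O_6 + 6\,O_2 \;\longrightarrow\; 6\,CO_2 + 6\,H_2O} + \text{energy}
\]

One big molecule goes in; twelve small ones (plus usable energy) come out -- which is exactly the "breaking them to bits and jettisoning the pieces" in question.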

Here's the kicker.  If you were to measure the entropy decrease in the deck of cards, it would be less -- way less -- than the entropy increase in the molecules you chopped up to get the energy to put the cards back in order.  Every time you increase the orderliness of a system, it always (1) requires an input of energy, and (2) increases the disorderliness somewhere else.  We are, in fact, little chaos machines, leaving behind a trail of entropy everywhere we go, and the more we try to fix things, the worse the situation gets.

I've heard people arguing that the Second Law disproves evolution because the evolutionary model claims we're in a system that has become more complex over time, which according to the Second Law is impossible.  It's not; and in fact, that statement betrays a fundamental lack of understanding of what the Second Law means.  The only reason why any increase in order occurs -- be it evolution, or embryonic development, or stacking a deck of cards -- is because there's a constant input of energy, and the decrease in entropy is offset by a bigger increase somewhere else.  The Earth's ecosystems have become more complex in the 4.5 billion year history of life because there's been a continuous influx of energy from the Sun.  If that influx were to stop, things would break down.

Fast.

The reason all this comes up is a paper this week in Physical Review X that gives another example of trying to make things better, and making them worse in the process.  This one has to do with the accuracy of clocks -- a huge deal to scientists who are studying the rates of reactions, where time needs to be measured to phenomenal precision, on the scale of nanoseconds or better.  The problem is, we learn from "Measuring the Thermodynamic Cost of Timekeeping" that the more accurate the clock is, the higher the entropy produced by its workings.  So, in effect, you can only measure time in a system to the extent you're willing to screw the system up.

[Image: Robbert van der Steeg, Eternal clock, licensed under the Creative Commons CC BY-SA 2.0 license]

The authors write:

All clocks, in some form or another, use the evolution of nature towards higher entropy states to quantify the passage of time.  Due to the statistical nature of the second law and corresponding entropy flows, fluctuations fundamentally limit the performance of any clock.  This suggests a deep relation between the increase in entropy and the quality of clock ticks...  We show theoretically that the maximum possible accuracy for this classical clock is proportional to the entropy created per tick, similar to the known limit for a weakly coupled quantum clock but with a different proportionality constant.  We measure both the accuracy and the entropy.  Once non-thermal noise is accounted for, we find that there is a linear relation between accuracy and entropy and that the clock operates within an order of magnitude of the theoretical bound.

Study co-author Natalia Ares, of the University of Oxford, summarized their findings succinctly in an article in Science News: "If you want a better clock," she said, "you have to pay for it."
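
To make "accuracy" a little more concrete: one common way to quantify it is N = (mean tick interval)² / (variance of the tick intervals), roughly the number of ticks a clock can produce before it has drifted by about one whole tick.  Here's a minimal Python sketch of that bookkeeping using simulated, jittery ticks; the jitter model is entirely made up, and the link to entropy is just the paper's claim that the best achievable N grows linearly with the entropy generated per tick:

import random
import statistics

def accuracy(intervals):
    """N = mean^2 / variance of the tick-to-tick intervals."""
    mu = statistics.mean(intervals)
    return mu ** 2 / statistics.variance(intervals)

random.seed(0)
sloppy = [random.gauss(1.0, 0.20) for _ in range(10_000)]   # cheap, noisy clock
steady = [random.gauss(1.0, 0.02) for _ in range(10_000)]   # a hundred-fold steadier

print(f"Sloppy clock: N ≈ {accuracy(sloppy):.0f}")    # around 25
print(f"Steady clock: N ≈ {accuracy(steady):.0f}")    # around 2,500
# Per the paper's bound, buying that factor-of-100 improvement in N costs
# you at least a factor-of-100 more entropy produced per tick.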

So a little like the Heisenberg Uncertainty Principle, the more you try to push things in a positive direction, the more the universe pushes back in the negative direction.  

Apparently, even if all you want to know is what time it is, you still can't break even.

So that's our somewhat depressing science for the day.  Entropy always wins, no matter what you do.  Maybe I can use this as an excuse for not doing housework.  Hey, if I make things more orderly here, all it does is mess things up elsewhere, so what's the point?

Nah, never mind.  My wife'll never buy it.

****************************************

When people think of mass extinctions, the one that usually comes to mind first is the Cretaceous-Tertiary Extinction of 66 million years ago, the one that wiped out all the non-avian dinosaurs and a good many species of other types.  It certainly was massive -- current estimates are that it killed between fifty and sixty percent of the species alive at the time -- but it was far from the biggest.

The largest mass extinction ever took place 251 million years ago, and it destroyed over ninety percent of life on Earth, taking out whole taxa and changing the direction of evolution permanently.  But what could cause a disaster on this scale?

In When Life Nearly Died: The Greatest Mass Extinction of All Time, University of Bristol paleontologist Michael Benton describes an event so catastrophic that it beggars the imagination.  Following researchers to outcrops of rock from the time of the extinction, he looks at what was lost -- trilobites, horn corals, sea scorpions, and blastoids (a starfish relative) vanished completely, but no group was without losses.  Even terrestrial vertebrates, who made it through the bottleneck and proceeded to kind of take over, had losses on the order of seventy percent.

He goes through the possible causes of the extinction, along with the evidence for each, painting along the way a terrifying picture of a world that very nearly became uninhabited.  It's a grim but fascinating story, and Benton's expertise and clarity of writing make it a brilliant read.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Monday, April 22, 2019

Going against the flow

Two of the most extensively tested laws of physics are the First and Second Laws of Thermodynamics -- and in the nearly two centuries since they were first formulated, not a single exception has been found.

The First Law is the less shocking one.  It's sometimes called the Law of Conservation of Matter and Energy, and says simply that in a closed system, the total amount of matter and energy does not change.  You can turn one into the other, or change its form, but the total quantity doesn't vary.  Unsurprising, and in fact can seem a little circular given that this is how a closed system is defined in the first place.

The Second Law is where things get interesting.  It can be formulated a variety of ways, but the simplest is that in a closed system, the amount of entropy (disorder) always increases.  If entropy is being decreased somewhere (the system is becoming more orderly) it always requires (1) an input of energy, and (2) that somewhere else entropy is increasing, and that increase is larger than the localized decrease.  An example is the human body.  When you go from a single fertilized egg cell to an adult, your overall entropy decreases significantly.  But in the process, you are taking the food molecules you eat and (1) extracting their energy, and (2) increasing their entropy monumentally by chopping them up into little pieces and strewing the pieces about.  So you're able to locally decrease your own entropy, but you leave behind a trail of chaos wherever you go.

Or, as my thermodynamics professor in college put it a lot of years ago: the First Law says you can't win; the Second Law says you can't break even.  Which explains why the United States Patent Office's official policy is that any application claiming a working model of a perpetual motion machine goes directly into the trash without being read any further.

The Carnot Heat Engine [Image is in the Public Domain]

All of this is by way of background for a paper that appeared last week in Science, called "Heat Flowing From Cold to Hot Without External Intervention by Using a 'Thermal Inductor'," by Andreas Schilling, Xiaofu Zhang, and Olaf Bossen of the University of Zurich.  Because in this paper, the three physicists have demonstrated the passage of heat energy from a colder object to a warmer one, without any external energy input -- something first shown as impossible by French physicist Sadi Carnot in 1824.

The authors write:
The cooling of boiling water all the way down to freezing, by thermally connecting it to a thermal bath held at ambient temperature without external intervention, would be quite unexpected.  We describe the equivalent of a “thermal inductor,” composed of a Peltier element and an electric inductance, which can drive the temperature difference between two bodies to change sign by imposing inertia on the heat flowing between them, and enable continuing heat transfer from the chilling body to its warmer counterpart without the need of an external driving force.
When I read this, I sat up, squinted at my computer screen, and uttered an expression of surprise that I will leave to your imagination.  In my AP Biology class, I always described the Laws of Thermodynamics as two of the most unshakeable laws of science -- two rules that are never, ever broken.  The idea that three scientists in Switzerland had taken a simple Peltier element -- a type of heat pump often found in refrigerators -- and made it run without expending any energy was earth-shattering.

But before you dust off your plans for a perpetual motion machine, read the next lines in the paper:
We demonstrate its operation in an experiment and show that the process can pass through a series of quasi-equilibrium states while fully complying with the second law of thermodynamics.  This thermal inductor extends the analogy between electrical and thermal circuits and could serve, with further progress in thermoelectric materials, to cool hot materials well below ambient temperature without external energy supplies or moving parts.
I'm not going to claim I fully understand how this all works, and how despite the system's bizarre behavior it still obeys the Second Law, but apparently the key point is that despite the heat energy flowing the "wrong way," the system still gains entropy overall.

Which, I must say, was a bit of a relief.

It's still a pretty fantastic discovery.  "With this very simple technology, large amounts of hot solid, liquid or gaseous materials could be cooled to well below room temperature without any energy consumption," study co-author Andreas Schilling said, in a press release from Phys.org. "Theoretically, this experimental device could turn boiling water to ice, without using any energy."

So don't believe any of the hype that I'm already seeing on dubiously accurate websites, to the effect that "An Exception Has Been Discovered to the Laws of Thermodynamics!  Physicists Dismayed!  Textbooks Will Have to be Rewritten!"  It's a curiosity, sure, and pretty cool, and sounds like it will have a good many applications, but you shouldn't discount everything you learned in physics class quite yet.

***********************************

This week's Skeptophilia book recommendation is a classic, and is pure fun: Man Meets Dog by the eminent Austrian zoologist and ethologist Konrad Lorenz.  In it, he looks at every facet of the human/canine relationship, and -- if you're like me -- you'll more than once burst out laughing and say, "Yeah, my dog does that all the time!"

It must be said that (as the book was originally written in 1949) some of what he says about the origins of dogs has been superseded by better information from genetic analysis that was unavailable in Lorenz's time, but most of the rest of his Doggy Psychological Treatise still stands.  And in any case, you'll learn something about how and why your pooches behave the way they do -- and along the way, a bit about human behavior, too.

[Note: If you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]