Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, November 10, 2021

Can't win, can't break even

Dear readers,

I'm going to take a short break from Skeptophilia -- my next post will be Thursday, November 18.  I'll still be lining up topics during the time I'm away, so keep those suggestions coming!

cheers,

Gordon

**********************************

One of the most misunderstood laws of physics is the Second Law of Thermodynamics.

Honestly, I understand why.  It's one of those bits of science that seem simple at first glance, but the more you learn, the weirder it gets.  The simplest way to state the Second Law is "systems tend to proceed toward disorder," so on the surface it's so common-sensical that it triggers nothing more than a shrug and, "Well, of course."  But a lot of its ramifications are seriously non-intuitive, and a few are downright mind-blowing.

The other problem with it is that it exists in multiple formulations that seem to have nothing to do with one another.  These include:
  • the aforementioned statement that without an energy input, over time, systems become more disordered.
  • if you place a warm object and a cool object in contact with each other, energy will flow from the warmer to the cooler; the warmer object will cool off, and the cooler one will heat up, until they reach thermal equilibrium (equal temperatures).
  • no machine can run at 100% efficiency (i.e., turning all of its energy input into usable work).
  • some processes are irreversible; for example, there's nothing odd about knocking a wine glass off the table and shattering it, but if you were watching and the shards gathered themselves back together and leapt off the floor and back onto the table as an intact wine glass, you might wonder if all you'd been drinking was wine.
The fact that all of these are, at their basis, different ways of stating the same physical law is not obvious.

For me, the easiest way to understand the "why" of the Second Law has to do with a deck of playing cards.  Let's say you have a deck in order: each suit arranged from ace to king, and the four suits in the order hearts, spades, diamonds, clubs.  How many possible ways are there to arrange the cards in exactly that way?

Duh.  Only one, by definition.

Now, let's say you accidentally drop the deck, then pick it up.  Unless you flung the deck across the room, chances are, some of the cards will still be in the original order, but some of the orderliness will probably have been lost.  Why?  Because there's only a single way to arrange the cards in the order you started with, but there are lots of ways to have them mostly out of order.  The chance of jumping from the single orderly state to one of the many disorderly states is a near certainty.  Then you drop them again (you're having a clumsy day, apparently).  Are they more likely to become more disordered or more orderly?

You see where this is going: since at each round there are way more disorderly states than orderly ones, just by the laws of statistics you're almost certainly going to watch the deck become progressively more disordered.  Yes, it's possible that you could take a completely random deck, toss the cards in the air, and have them land in ace-through-king, hearts-spades-diamonds-clubs order -- but if you're waiting for that to happen by random chance, you're going to have a long wait.
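
If you want to watch the statistics play out without wearing out a real deck, here's a little Python sketch (a toy illustration of my own, not anything rigorous): it "drops" an ordered deck over and over by swapping a few random pairs of cards, and counts how many cards remain in their original positions.  The score can tick upward on a lucky swap, but over many drops it almost always falls.

```python
import random

def order_score(deck):
    """Count how many cards are still in their original positions."""
    return sum(1 for i, card in enumerate(deck) if card == i)

deck = list(range(52))   # 0..51 stands in for a freshly sorted deck
random.seed(1)

for drop in range(10):
    # each "drop" scrambles the deck a little: five random pair-swaps
    for _ in range(5):
        i, j = random.randrange(52), random.randrange(52)
        deck[i], deck[j] = deck[j], deck[i]
    print(f"after drop {drop + 1}: {order_score(deck)} cards still in place")
```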

You can, of course, force them back into order by painstakingly rearranging the cards, but that takes an input of energy (in the form of your brain and muscles using up chemical energy to accomplish it).  And here's where it gets weird: if you were to measure the decrease in entropy (disorder) in the deck of cards as you rearranged them, it would be outweighed by the increase in entropy of the energy-containing molecules you burned through to do it.  The outcome: you can locally and temporarily decrease entropy, but only at the expense of creating more entropy somewhere else.  Everything we do makes the universe as a whole more chaotic; any decrease in entropy we see is local, temporary, and paid for elsewhere.  In the end, entropy always wins.

As my long-ago thermodynamics professor told us, "The First Law of Thermodynamics says that you can't win.  The Second Law says you can't break even."

Hell of a way to run a casino, that.

[Image is in the Public Domain]

The reason this all comes up is a paper a friend of mine sent me a link to, which looks at yet another way of characterizing the Second Law: instead of heat transfer or overall orderliness, it considers entropy as a measure of information content.  The less information you need to describe a system, the lower its entropy; in the example of the deck of cards, I was able to describe the orderly state in seven words (ace-through-king, hearts-spades-diamonds-clubs).  High-entropy states require a lot of information; pick any of the out-of-order arrangements of the deck of cards, and pretty much the only way to describe it is to list each card individually from the top of the deck to the bottom.
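
Here's a quick-and-dirty way to watch that idea in action (again a toy of my own; a general-purpose compressor is only a crude stand-in for true description length): ask Python's zlib to compress an ordered run of cards versus the same cards shuffled.  Ten decks' worth are concatenated so the compressor has enough data to exploit the pattern.

```python
import random
import zlib

one_deck = bytes(range(52))     # the "ordered" suit-by-suit deck
ordered = one_deck * 10         # ten sorted decks in a row

random.seed(1)
cards = list(one_deck) * 10     # the same 520 cards...
random.shuffle(cards)           # ...with all the order destroyed
disordered = bytes(cards)

# low-entropy states need a short description; high-entropy ones don't
print("ordered:   ", len(zlib.compress(ordered)), "bytes")
print("disordered:", len(zlib.compress(disordered)), "bytes")
```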

The current paper has to do with information stored inside machines, and like many formulations of the Second Law, it results in some seriously weird implications.  Consider, for example, a simple operation on a calculator -- say, 2+2.  When you press the "equals" sign, and the calculator tells you the answer is four, have you lost information, or gained it?

Most people, myself included, would have guessed that you've gained information; you now know that 2+2=4, if you didn't already know that.  In a thermodynamic sense, though, you've lost information.  When you get the output (4), you irreversibly erase the input (2+2).  Think about going the other way, and it becomes clearer; someone gives you the output (4) and asks you what the input was.

No way to tell.  There are, in fact, an infinite number of arithmetic operations that would give you the answer "4".  What a calculator does is time-irreversible.  "Computing systems are designed specifically to lose information about their past as they evolve," said study co-author David Wolpert, of the Santa Fe Institute.
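
A couple of lines of Python make the many-to-one point concrete: even restricting ourselves to sums of two small non-negative integers, a whole family of inputs collapses onto the single output 4.

```python
# Computation as a many-to-one map: once you have the output, the
# input is unrecoverable.
inputs_for_4 = [(a, b) for a in range(5) for b in range(5) if a + b == 4]
print(inputs_for_4)   # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
# Allow negatives, fractions, or other operations, and the list becomes
# infinite -- which is exactly why evaluating "2+2" isn't reversible.
```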

By reducing the information in the calculator, you're decreasing its entropy (the answer contains less information than the input did).  And that means the calculator has to increase entropy by at least as much somewhere else -- in this case, by heating up the surrounding air.

And that's one reason why your calculator gets warm when you use it.  "There's this deep relationship between physics and information theory," said study co-author Artemy Kolchinsky.  "If you erase a bit of information, you have to generate a little bit of heat."
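
Kolchinsky is describing what's known as Landauer's principle: erasing one bit of information must dissipate at least kT ln 2 of heat, where k is Boltzmann's constant and T is the temperature.  Here's the back-of-the-envelope arithmetic at room temperature:

```python
import math

k = 1.380649e-23   # Boltzmann constant, joules per kelvin
T = 300.0          # roughly room temperature, kelvin

# Landauer limit: minimum heat dissipated per erased bit
print(f"{k * T * math.log(2):.3e} J per erased bit")   # ~2.87e-21 J
```

It's a minuscule amount of energy per bit, but a processor erasing billions of bits per second (and running far above this theoretical floor) adds up to a calculator, or a laptop, that's warm to the touch.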

But if everything you do ultimately increases the overall entropy, what does that say about the universe as a whole?

The implication is that the entire universe's entropy was at a minimum at its creation in the Big Bang -- that it started out extremely ordered, with very low information content.  Everything that's happened since has stirred things up and made them more chaotic (i.e., requiring more information for a complete description).  Eventually, the universe will reach a state of maximal disorder, and after that, it's pretty much game over; you're stuck there for the foreseeable future.  This state goes by the cheerful name of the "heat death of the universe."

Not to worry, though.  It won't happen for a while, and we've got more pressing matters to attend to in the interim.

To end on a positive note, though -- remember that the increase of entropy stems from how unlikely it is for a system to jump from a disordered state back to an orderly one, and that the chance isn't zero, it's just really, really, really small.  So once the heat death of the universe has occurred, there is a non-zero chance that it will spontaneously come back together into a second very-low-entropy singularity, at which point the whole thing starts over.  Yeah, it's unlikely, but once the universe is in heat death, it's not like it's got much else to do besides wait.

*********************************************

If Monday's post, about the apparent unpredictability of the eruption of the Earth's volcanoes, freaked you out, you should read Robin George Andrews's wonderful new book Super Volcanoes: What They Reveal About the Earth and the Worlds Beyond.

Andrews, a science journalist and trained volcanologist, went all over the world interviewing researchers on the cutting edge of the science of volcanoes -- including those that occur not only here on Earth, but on the Moon, Mars, Venus, and elsewhere.  The book is fascinating enough just from the human aspect of the personalities involved in doing primary research, but it also looks at a topic it's hard to imagine anyone not being curious about: the restless nature of geology that has generated such catastrophic events as the Yellowstone Supereruptions.

Andrews does a great job not only of demystifying what's going on inside volcanoes and faults, but of showing us how little we know (especially in the sections on the Moon and Mars, which have extinct volcanoes scientists have yet to completely explain).  Along the way we get the message, "Will all you people just calm down a little?", particularly aimed at the purveyors of hype who have for years made wild claims about the likelihood of an eruption at Yellowstone occurring soon (turns out it's very low) and the chances of a supereruption somewhere causing massive climate change and wiping out humanity (not coincidentally, also very low).

Volcanoes, Andrews says, are awesome, powerful, and fascinating, but if you have a modicum of good sense, nothing to fret about.  And his book is a brilliant look at the natural process that created a great deal of the geology of the Earth and our neighbor planets -- plate tectonics.  If you are interested in geology or just like a wonderful and engrossing book, you should put Super Volcanoes on your to-read list.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Monday, October 19, 2020

Knots, twists, and meaning

One of the most curious relics of the past, and one which is a persistent mystery, is the quipu (also spelled khipu) of Andean South America.

A quipu is a linked series of knotted, dyed cotton strings, and quipus were apparently some kind of meaningful device -- but what their meaning was is uncertain, thanks to the thoroughness and determination of sixteenth-century Spanish priests in destroying whatever they could of the "pagan" Inca culture.  The result is that there are only 751 of them left, which is a pretty small sample if you're interested in decipherment.

An Incan quipu in the Larco Museum, Lima, Peru [Image licensed under the Creative Commons: Claus Ableiter (uploaded from enWiki), Inca Quipu, CC BY-SA 3.0]

A number of attempts have been made to understand what the patterns of knots meant, but none of them have really panned out.  Some of the possibilities are that they were devices for enumeration, perhaps something like an abacus; literary devices for recording history, stories, or genealogies; or records of census data.

In fact, the jury's still out on whether they encode linguistic information at all.  An anthropologist named Sabine Hyland has suggested that they do; she says the color, the position of the knots, and even the ply of the string combine in 95 different ways to represent a syllabic writing system, and claims that they were intricate family records.  If she's right, the burning of the Incan quipus represents a horrific eradication of the entire cultural history of a people -- something the invading Europeans were pretty good at.

The topic comes up now because of a paper that came out last week in Nature Communications that has a striking parallel to the quipu.  The paper, titled "Optical Framed Knots as Information Carriers," by Hugo Larocque, Alessio d'Errico, Manuel Ferrer-Garcia, and Ebrahim Karimi (of the University of Ottawa), Avishy Carmi (of Ben-Gurion University), and Eliahu Cohen (of Bar Ilan University), describes a way of creating knots in laser light that could be used to encode information.  The authors write:

Modern beam shaping techniques have enabled the generation of optical fields displaying a wealth of structural features, which include three-dimensional topologies such as Möbius strips, ribbons and knots.  However, unlike simpler types of structured light, the topological properties of these optical fields have hitherto remained more of a fundamental curiosity as opposed to a feature that can be applied in modern technologies.  Due to their robustness against external perturbations, topological invariants in physical systems are increasingly being considered as a means to encode information.  Hence, structured light with topological properties could potentially be used for such purposes.  Here, we introduce the experimental realization of structures known as framed knots within optical polarization fields.  We further develop a protocol in which the topological properties of framed knots are used in conjunction with prime factorization to encode information.
"The structural features of these objects can be used to specify quantum information processing programs," said study lead author Hugo Larocque, in an interview in Science Daily.  "In a situation where this program would want to be kept secret while disseminating it between various parties, one would need a means of encrypting this 'braid' and later deciphering it.  Our work addresses this issue by proposing to use our optical framed knot as an encryption object for these programs which can later be recovered by the braid extraction method that we also introduced.  For the first time, these complicated 3D structures have been exploited to develop new methods for the distribution of secret cryptographic keys.  Moreover, there is a wide and strong interest in exploiting topological concepts in quantum computation, communication and dissipation-free electronics.  Knots are described by specific topological properties too, which were not considered so far for cryptographic protocols."

A few of the research team's knotted beams of light

I have to admit that even given my B.S. in physics, most of the technical details in this paper went over my head so fast they didn't even ruffle my hair.  And I know that any similarity between optical framed knots and the knots on quipus is superficial at best, but even so, the parallel jumped out at me immediately.  Just as the Incas (probably) used color, knot position and shape, and the ply of the string to encode information, these scientists have figured out how to use intensity, phase, wavelength, polarization, and topological form to do the same thing.

Which is pretty amazing.  I know the phrase "reinventing the wheel" is supposed to be a bad thing, but here we have two groups independently (at least, as far as I know) coming up with analogous solutions for the same problem -- how to render information without recourse to ordinary symbology and typography.

Leaving me awestruck, as always, by the inventiveness and creativity of the human mind.

**********************************

Have any scientifically-minded friends who like to cook?  Or maybe you've wondered why some recipes are so flexible, while others have to be followed to the letter?

Do I have the book for you.

In Science and Cooking: Physics Meets Food, from Homemade to Haute Cuisine, by Michael Brenner, Pia Sörensen, and David Weitz, you find out why recipes work the way they do -- and not only how altering them (such as using oil versus margarine versus butter in cookies) will affect the outcome, but what's going on that makes it happen that way.

Along the way, you get to read interviews with today's top chefs, and to find out some of their favorite recipes for you to try out in your own kitchen.  Full-color (and mouth-watering) illustrations are an added filigree, but the text by itself makes this book a must-have for anyone who enjoys cooking -- and wants to learn more about why it works the way it does.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Tuesday, October 2, 2018

Brain linkage

New from the "Should I Be Scared?" department, we have: the first experimental proof of a successful brain-to-brain interface.

To be sure, the information passed through it was rather rudimentary.  Two "senders" played a game of Tetris, and passed along to a "receiver" the information about whether a particular block had to be rotated or not in order to fit in the grid.  The receiver then recorded what the decision was -- and got it right with an accuracy of 81%.  Furthermore, in a second round where the receiver was given information about the accuracy of their choices, the researchers tried to muddy things up by injecting noise into the transmission from one of the senders.  The result?  The receiver was able to figure out which one of the senders to pay attention to -- which one had the highest accuracy -- and ignore the input from the channel with the noise.
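
Just to make the receiver's trick concrete, here's a toy simulation of the source-selection idea.  To be clear, this is emphatically not the study's actual pipeline (which worked with real brain signals, not Booleans); the sender names and reliability numbers below are invented for illustration.  The strategy is simply: keep a running tally of each sender's track record, and trust whoever has been right most often.

```python
import random

random.seed(1)
RELIABILITY = {"sender_A": 0.90, "sender_B": 0.55}   # B's channel is noisy

def send(sender, truth):
    """Each sender reports the truth, corrupted by channel noise."""
    return truth if random.random() < RELIABILITY[sender] else not truth

correct = {s: 1 for s in RELIABILITY}   # small prior so we never divide by 0
seen = {s: 2 for s in RELIABILITY}

hits = 0
for trial in range(200):
    truth = random.random() < 0.5        # should the block be rotated?
    votes = {s: send(s, truth) for s in RELIABILITY}
    # trust the sender with the best track record so far
    best = max(RELIABILITY, key=lambda s: correct[s] / seen[s])
    hits += votes[best] == truth
    for s, v in votes.items():           # feedback updates the tallies
        seen[s] += 1
        correct[s] += v == truth

print(f"receiver accuracy over 200 trials: {hits / 200:.0%}")
```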

The researchers -- Linxing Jiang, Andrea Stocco, Darby M. Losey, Justin A. Abernethy, Chantel S. Prat, and Rajesh P. N. Rao, of the University of Washington and Carnegie Mellon University -- are unequivocal about where this could lead.  "Our results," they write, "raise the possibility of future brain-to-brain interfaces that enable cooperative problem solving by humans using a 'social network' of connected brains."

Which certainly seems likely.  What worries me, however, is where else it could lead.  The technology is surely going to do nothing but improve, the interfaces working better and faster and more accurately.  At what point would it be possible to use such an interface to read a person's thoughts against their will?  To inject a directive, something like a post-hypnotic suggestion, into their brain?  To suppress or erase a memory of something you would prefer they didn't remember?

Of course, there are a lot of good directions we could go.  I've always thought I'd love to have a Matrix-style plug in the back of my head that would allow me to download information.  


Think of how cool that would be!  Even if (for example) you'd still have to learn the grammatical rules and semantic nuances of a language, you could simply input the dictionary into your brain and you'd never have to memorize the vocabulary (which has always been the sticking point for me, linguistics-wise).  And Jiang et al.'s optimistic prediction of using it as a tool for collaborative problem solving is kind of awesome as well.

But you have to admit, we humans don't exactly have a sterling track record of using scientific discoveries for positive purposes.  Our general approach has usually been "personal gain first, power second" -- and only as an afterthought, "Oh, yeah, we could also use this to benefit humanity."  

The problem is, once the cat's out of the bag, you can't exactly stuff it back inside.  As soon as someone shows proof of concept -- which Jiang et al. clearly have -- the next step will inevitably be refinement.  It's like the line from Ian Malcolm in Jurassic Park: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

I'm not saying we should halt this kind of research, but some caution seems advisable, especially since we're crossing the line into infringement on that most sacred and private realm -- one's own mind.  So I would urge anyone involved in this endeavor to move slowly, and take care to ensure as best we can that this discovery won't be used as one more assault on free thought.

********************************************

This week's Skeptophilia book recommendation is a fun one -- Hugh Ross Williamson's Historical Enigmas.  Williamson takes some of the most baffling unsolved mysteries from British history -- the Princes in the Tower, the identity of Perkin Warbeck, the Man in the Iron Mask, the murder of Amy Robsart -- and applies the tools of logic and scholarship to an analysis of the primary documents, without descending into empty speculation.  The result is an engaging read about some of the most perplexing events that England ever saw.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]