Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label mathematics.

Friday, August 29, 2025

Life, complexity, and evolution

Next to the purely religious arguments -- those that boil down to "it's in the Bible, so I believe it" -- the most common objection I hear to the evolutionary model is that "you can't get order out of chaos."

Or -- which amounts to the same thing -- "you can't get complexity from simplicity."  Usually followed up by the Intelligent Design argument that if you saw the parts from which an airplane is built, and then saw an intact airplane, you would know there had to be a builder who put the parts together.  This is unfortunately often coupled with some argument about how the Second Law of Thermodynamics (one formulation of which is, "in a closed system, the total entropy always increases") prohibits biological evolution, which shows a lack of understanding of both evolution and thermodynamics.  For one thing, the biosphere is very much not a closed system; it has a constant flow of energy through it (mostly from the Sun).  Turn that energy source off, and our entropy would increase post-haste.  Also, any local decrease in entropy within the system, such as the development of an organism from a single fertilized egg cell, is paid for by a larger entropy increase elsewhere, so the entropy of the whole still goes up.  In fact, the entropy increase from the breakdown of the food molecules required for an organism to grow is greater than the entropy decrease within the developing organism itself.

Just as the Second Law predicts.

So the thermodynamic argument doesn't work.  But the whole question of how you get complexity in the first place is not so easily answered.  On its surface, it seems like a valid objection.  How could we start out with a broth of raw materials -- the "primordial soup" -- and even with a suitable energy source, have them self-organize into complex living cells?

Well, it turns out it's possible.  All it takes -- on the molecular, cellular, or organismal level -- is (1) a rule for replication, and (2) a rule for selection.  DNA, for example, can replicate itself, and the replication process is accurate but not flawless; selection comes in because some of those varying DNA configurations are better than others at copying themselves, so those survive and the less successful ones don't.  From those two simple rules, things can get complex fast.
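
Just to make that concrete, here's a toy simulation -- emphatically not a model of DNA or of anything else biological, just the two rules above applied to strings of bits that copy themselves imperfectly and survive according to an arbitrary "fitness" score (here, simply how many 1s a string carries).  All the numbers in it are made-up illustrative choices:

```python
import random

GENOME_LENGTH = 20      # length of each "genome" (a string of bits)
POP_SIZE = 50           # how many genomes in the population
MUTATION_RATE = 0.01    # chance that any given bit flips when copied
GENERATIONS = 40

def fitness(genome):
    # Rule 2 (selection): a genome's success is just how many 1-bits it carries.
    return sum(genome)

def replicate(genome):
    # Rule 1 (replication): copy the genome accurately, but not flawlessly.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

random.seed(1)
# The "primordial soup": a population of completely random genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: the fitter half of the population gets to reproduce...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...and replication: each survivor leaves two imperfect copies.
    population = [replicate(g) for g in survivors for _ in range(2)]
    if generation % 10 == 0:
        print(f"generation {generation:2d}: best fitness "
              f"{max(fitness(g) for g in population)}/{GENOME_LENGTH}")
```

Run it, and the best score in the population creeps steadily upward, generation after generation, with nobody designing anything.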

But to take a non-biological example that is also kind of mindblowing, have you heard of British mathematician John Horton Conway's "Game of Life?"

In the 1960s Conway became interested in a mathematical concept called a cellular automaton.  The gist, first proposed by Hungarian mathematician John von Neumann, is to look at arrays of "cells" that then can interact with each other by a discrete set of rules, and see how their behavior evolves.  The set-up can get as fancy as you like, but Conway decided to keep it really simple, and came up with the ground rules for what is now called his "Game of Life."  You start out with a grid of squares, where each square touches (either on a side or a corner) eight neighboring cells.  Each square can be filled ("alive") or empty ("dead").  You then input a starting pattern -- analogous to the raw materials in the primordial soup -- and turn it loose.  After that, four rules determine how the pattern evolves:

  1. Any live cell that has fewer than two live neighbors dies.
  2. Any live cell that has two or three live neighbors lives to the next round.
  3. Any live cell that has four or more live neighbors dies.
  4. Any dead cell that has exactly three live neighbors becomes a live cell.
Seems pretty simple, doesn't it?  It turns out that the behavior of patterns in the Game of Life is so wildly complex that it's kept mathematicians busy for decades.  Here's one example, called "Gosper's Glider Gun":


Some patterns start with as few as five live cells, and give rise to amazingly complicated results.  Others have been found that do some awfully strange stuff, like this one, called the "Puffer Breeder":



What's astonishing is not only how complex this gets, but how unpredictable it is.  One of the most curious results to come from studying the Game of Life is that some starting conditions lead to what appears to be chaos.  In some cases the chaos settles down after hundreds, or thousands, of rounds, eventually falling into a stable pattern (either one that oscillates among a few states, or one that produces something regular like the Glider Gun).  In others, the chaos seems to be permanent -- although because there's no way to carry the process to infinity, you can't really be certain.  There also appears to be no general way to predict from the initial state where it will end up; no shortcut algorithm exists that takes the input and determines what the eventual output will be.  You just have to run the program and see what happens.
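
If you want to try it for yourself, here's a minimal sketch of the four rules in Python.  The starting pattern is a standard five-cell "glider" (not anything from the images above), which crawls diagonally across the grid forever:

```python
from collections import Counter

def step(live_cells):
    """Apply Conway's four rules to a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors every cell (live or dead) has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Rule 4 (birth with exactly three live neighbors) and rule 2 (survival with two
    # or three); rules 1 and 3 (death by isolation or overcrowding) happen by omission.
    return {
        cell for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # the glider
for generation in range(8):
    print(f"generation {generation}: {sorted(cells)}")
    cells = step(cells)
```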

In fact, the Game of Life is often used as an illustration of Turing's halting problem -- the result that, in general, there is no way to be certain whether a given algorithm will ever arrive at an answer in a finite number of steps.  That result is closely related to such mind-bending weirdness as Gödel's Incompleteness Theorem, which proved rigorously that within mathematics, there are true statements that cannot be proven and false statements that cannot be disproven.  (Yes -- it's a proof of unprovability.)

All of this, from a two-dimensional grid of squares and four rules so simple a fourth-grader could understand them.

Now, this is not meant to imply that biological systems work the same way as an algorithmic mathematical system; just a couple of weeks ago, I did an entire post about the dangers of treating an analogy as reality.  My point here is that there is no truth to the claim that complexity can't arise spontaneously from simplicity.  Given a source of energy, and some rules to govern how the system can evolve, you can end up with astonishing complexity in a relatively short amount of time.

People studying the Game of Life have come up with twists on it to make it even more complicated, because why stick with two dimensions and squares?  There are versions with hexagonal grids (which require a slightly different set of rules), versions on spheres, and this lovely example of a pattern evolving on a toroidal trefoil knot:


Kind of mesmerizing, isn't it?

The universe is a strange and complex place, and we need to be careful before we make pronouncements like "That couldn't happen."  Often these are just subtle reconfigurations of the Argument from Ignorance -- "I don't understand how that could happen, therefore it must be impossible."  The natural world has a way of taking our understanding and turning it on its head, which is why science will never end.  As astrophysicist Neil deGrasse Tyson explained, "Surrounding the sea of our knowledge is a boundary that I call the Perimeter of Ignorance.  As we push outward, and explain more and more, it doesn't erase the Perimeter of Ignorance; all it does is make it bigger.  In science, every question we answer raises more questions.  As a scientist, you have to become comfortable with not knowing.  We're always 'back at the drawing board.'  If you're not, you're not doing science."

****************************************


Wednesday, July 9, 2025

Tracking the hailstones

One of the most shocking results from mathematics -- or even scholarship as a whole -- is Kurt Gödel's Incompleteness Theorem.

Like (I suspect) many of us, I first ran into this startling idea in Douglas Hofstadter's wonderful book Gödel, Escher, Bach: An Eternal Golden Braid, which I read when I was an undergraduate at the University of Louisiana.  I've since reread the whole thing twice, but I'm afraid the parts about formal logic and Gödel's proof are still a real challenge to my understanding.  The gist of it is that Gödel responded to a call by German mathematician David Hilbert to come up with a finite, consistent set of axioms from which all other true statements in mathematics could be derived (and, significantly, which excluded all false or paradoxical ones).  Gödel picked up the gauntlet, but not in the way Hilbert expected (or wanted).

He showed that what Hilbert was asking for was fundamentally impossible.

Put succinctly, Gödel proved that if you come up with an axiomatic system that can generate all true statements of mathematics, it will also generate some untrue ones; if you come up with a system that generates only true statements, there will always be true statements that cannot be proven from within it.  In other words, if a mathematical system is complete, it's inconsistent; if it is consistent, it's incomplete.

The result is kind of staggering, and the more you think about it, the weirder it gets.  Math is supposed to be cut and dried, black-and-white, where things are either provable (and therefore true) or they're simply wrong.  What Gödel showed was that this is not the case -- and worse, there's no way to fix it.  If you simply take any true (but unprovable) mathematical statements you find, and add them to the system as axioms, the new expanded system still falls prey to Gödel's proof.

It's the ultimate catch-22.

The problem is, there's no way to tell the difference between a true-but-thus-far-unproven statement and a true-but-unprovable statement.  There have been a number of conjectures that have baffled mathematicians for ages, and finally been proven -- the four-color map theorem and Fermat's last theorem come to mind.  But one that has resisted all attempts at a proof is the strange Collatz conjecture, also known as the hailstone sequence, proposed in 1937 by the German mathematician Lothar Collatz.

What's wild about the Collatz conjecture is that it's simple enough a grade-school student could understand it.  It says: start with any natural number.  If it's even, divide it by two.  If it's odd, multiply it by three and then add one.  Repeat the process until you reach 1.  Here's how it would work, starting with 7:

7 - 22 - 11 - 34 - 17 - 52 - 26 - 13 - 40 - 20 - 10 - 5 - 16 - 8 - 4 - 2 - 1.

You can see why it's called a "hailstone sequence;" like hailstones, the numbers rise and fall, sometimes buffeted far upwards before finally "falling to Earth."  And what Collatz said was that, subject to this procedure, every natural number will finally fall to 1.
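
If you'd like to watch the hailstones bounce for yourself, here's a minimal sketch; starting from 7 reproduces the sequence above:

```python
def hailstone(n):
    """Return the Collatz ("hailstone") sequence starting at n and ending at 1."""
    sequence = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1   # the two Collatz rules
        sequence.append(n)
    return sequence

print(hailstone(7))        # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
print(max(hailstone(27)))  # 9232 -- the sequence for 27 is buffeted that far upward
                           # before it finally falls back to 1
```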

Simple enough, right?  Wrong.  The best minds in mathematics have been stumped as to how to prove it.  The brilliant Hungarian mathematician Paul Erdös said, "Mathematics may not be ready for such a problem."  American mathematician Jeffrey Lagarias was even bleaker, saying, "[The Collatz conjecture] is completely out of reach of present-day mathematics."

What's weirdest is that there does seem to be a pattern -- a relationship between the number you start with and the number of steps it takes to reach 1.  Here's what the graph looks like, if you plot the number of steps as a function of the number you start with, for every number from 1 to 9,999:

[Image is in the Public Domain]

It certainly doesn't appear to be random, but this doesn't get us any closer to proving that all numbers descend to 1 in a finite number of steps.

The reason all this comes up is a recent paper in The Journal of Supercomputing showing that every number between 1 and 2 to the 71st power obeys the Collatz conjecture.  That's over two sextillion.  Of course, this still isn't proof; all it would take to disprove it is a single number, somewhere out there, that either (1) keeps rising higher and higher forever, or (2) falls into a loop that never reaches 1.  So until a formal proof (or disproof) is found, all mathematicians can do is keep extending the list of numbers tested.

But is the Collatz conjecture one of Gödel's inevitable true-but-unprovable statements?  No way to know, even if it never does get proven.  That's the brilliance -- and the frustration -- of Gödel's proof.  Such statements are forever outside the axiomatic system, so there's no way to get at them.

So much for mathematics being firm ground.

Anyhow, that's our mind-blowing bit of news for this morning.  A simple conjecture that has baffled mathematicians for almost ninety years, and is no closer to being solved now than it was when it was first proposed.  It's indicative of how weird and non-intuitive mathematics can be.  As Douglas Hofstadter put it, "It turns out that an eerie type of chaos can lurk just behind a facade of order -- and yet, deep inside the chaos lurks an even eerier type of order."

****************************************


Wednesday, December 25, 2024

Adventures in solid geometry

I've always been a bit in awe at people who are true math-adepts.

Now, I'm hardly a math-phobe myself; having majored in physics, I took a great many math courses as an undergraduate.  And up to a point, I was pretty good at it.  I loved calculus -- partly because my teacher, Dr. Harvey Pousson, was a true inspiration, making complex ideas clear and infusing everything he did with curiosity, energy, and an impish sense of humor.  Likewise, I thoroughly enjoyed my class in differential equations, a topic that is often a serious stumbling block for aspiring math students.  Again, this was largely because of the teacher, a five-foot-one, eccentric, hypercharged dynamo named Dr. LaSalle, who was affectionately nicknamed "the Roadrunner" because she was frequently seen zooming around the halls, dodging and weaving around slow-moving students as if she were late for boarding a plane.

I recall Dr. LaSalle finishing up some sort of abstruse proof on the board, then writing "q.e.d."  She turned around, and said in a declamatory voice, "Quod erat demonstrandum.  Which is Latin for 'ha, we sure showed you.'"  It was only much later that I found out her translation was actually pretty accurate.

But other than those bright spots, my math career pretty much was in its final tailspin.  At some point, I simply ran into an intellectual wall.  My sense is that it happened when I stopped being able to picture what I was studying.  Calculating areas and slopes and whatnot was fine; so were the classic differential equations problems involving things like ladders slipping down walls and water leaking out of tanks.  But when we got to fields and matrices and tensors, I was no longer able to visualize what I was trying to do, and it became frustrating to the extent that now -- forty-five years later -- I still have nightmares about being in a math class, taking an exam, and having no idea what I'm doing.

Even so, I have a fascination for math.  There is something grand and cosmic about it, and it underpins pretty much everything.  (As Galileo put it, "Mathematics is the language with which God wrote the universe.")  It's no wonder that Pythagoras thought there was something holy about numbers; there are strange and abstruse patterns and correspondences you start to uncover when you study math that seem very nearly mystical.

The topic comes up because of a paper in the journal Experimental Mathematics that solved a long-standing question about something that also came out of the ancient Greek fascination with numbers -- the five "Platonic solids", geometrical figures whose faces are identical regular polygons and whose vertices are all identical.  The five are the tetrahedron (four triangular faces), the cube (six square faces), the octahedron (eight triangular faces), the dodecahedron (twelve pentagonal faces), and the icosahedron (twenty triangular faces).  And that's it.  There aren't any other possibilities given those parameters.

[Image is in the Public Domain]

The research had to do with a question that I had never considered, and I bet you hadn't, either.  Suppose you were standing on one corner of one of these shapes, and you started walking.  Is there any straight path you could take that would return you to your starting point without passing through another corner?  (Nota bene: by "straight," of course we don't mean "linear;" your path is still constrained to the surface, just as if you were walking on a sphere.  A "straight path" in this context means that when you cross an edge, if you were to unfold the two faces -- the one you just left and the one you just entered -- to make a flat surface, your path would be linear.)

Well, apparently it was proven a while back that for four of the Platonic solids -- the tetrahedron, cube, octahedron, and icosahedron -- the answer is "no."  If you launched off on your travels with the rules outlined above, you would either cross another corner or you'd wander around forever without ever returning to your starting point.  Put a different way: to return to your starting point you'd have to cross at least one other corner.

The recent research looks at the odd one out, the dodecahedron.  In the paper "Platonic Solids and High Genus Covers of Lattice Surfaces," mathematicians Jayadev Athreya (of the University of Washington), David Aulicino (of Brooklyn College), and W. Patrick Hooper (of the City University of New York) showed the astonishing result that alone of the Platonic solids, the answer for the dodecahedron is yes -- and in fact, there are 31 different classes of pathways that return you to your starting point without crossing another corner.

The way they did this started out by imagining taking the dodecahedron and opening it up and flattening it out.  You then have a flat surface made of twelve different pentagons, connected along their edges in some way (how depends on exactly how you did the cutting and unfolding).  You start at the vertex of one of the pentagons, and strike off in a random direction.  When you reach the edge of the flattened shape, you glue a second, identical flattened dodecahedron to that edge so you can continue to walk. This new grid will always be a rotation of the original grid by some multiple of 36 degrees.  Reach another edge, repeat the process. Athreya et al. showed that after ten iterations, the next flattened dodecahedron you glue on will have rotated 360 degrees -- in other words, it will be oriented exactly the same way the first one was.

Okay, that's kind of when my brain pooped out.  From there, they took the ten linked, flattened dodecahedrons and folded that back up to make a shape that is like a polygonal donut with eighty-one holes.  And that surface is related mathematically to a well-studied figure called a double pentagon, which allowed the researchers to prove that not only was a straight line returning to your origin without crossing another corner possible, there were 31 ways to do it.

"This was one of the most fun projects I've worked on in my entire career," lead author Jayadev Athreya said, in an interview with Quora.  "It's important to keep playing with things."

But it's also pretty critical to have a brain powerful enough to conceptualize the problem, and I'm afraid I'm not even within hailing distance.  I'm impressed, intrigued, and also convinced that I'd never survive in such rarified air.

So on the whole, it's good that I ended my pursuit of mathematics when I did.  Biology was probably the better choice.  I think I'm more suited to pursuits like ear-tagging fruit bats than calculating straight paths on Platonic solids, but I'm glad there are people out there who are able to do that stuff, because it really is awfully cool.

****************************************

Saturday, February 17, 2024

All set

How long is the coastline of Britain?

Answer: as long as you want it to be.

This is not some kind of abstruse joke, and if it sounds like it, blame the mathematicians.  This is what's known as the coastline paradox, which is not so much a paradox as it is a property of anything that is a fractal.  Fractals are patterns that never "smooth out" when you zoom in on them; no matter how small a piece you magnify, it still has just as many bends and turns as the larger piece did.

And coastlines are like that.  Consider measuring the coastline of Britain by placing dots on the coast one hundred kilometers apart -- in other words, using a straight ruler one hundred kilometers long.  If you do this, you find that the coastline is around 2,800 kilometers long.

[Image licensed under the Creative Commons Britain-fractal-coastline-100km, CC BY-SA 3.0]

But if your ruler is only fifty kilometers long, you get about 3,400 kilometers -- not an insignificant difference.

[Image licensed under the Creative Commons Britain-fractal-coastline-50km, CC BY-SA 3.0]

The smaller your ruler, the longer your measurement of the coastline.  At some point, you're measuring the twists and turns around every tiny irregularity along the coast, but do you even stop there?  Should you curve around every individual pebble and grain of sand?

At some point, the practical aspects get a little ridiculous.  The movement of the ocean makes the exact position of the coastline vague anyhow.  But with a true fractal, we get into one of the weirdest notions there is: infinity.  True fractals, such as the ones investigated by Benoit B. Mandelbrot, have an infinite length, because no matter how deeply you plunge into them, they have still finer structure.

Oh, by the way: do you know what the B. in "Benoit B. Mandelbrot" stands for?  It stands for "Benoit B. Mandelbrot."

Thanks, you're a great audience.  I'll be here all week.

The idea of infinity has been a thorn in the side of mathematicians for as long as anyone's considered the question, to the point that a lot of them threw their hands in the air and said, "the infinite is the realm of God," and left it at that.  Just trying to wrap your head around what it means is daunting:

Teacher: Is there a largest number?
Student: Yes. It's 10,732,210.
Teacher: What about 10,732,211?
Student: Well, I was close.

It wasn't until German mathematician Georg Cantor took a crack at refining what infinity means -- and along the way, created set theory -- that we began to see how peculiar it really is.  (Despite Cantor's genius, and the careful way he went about his proofs, a lot of mathematicians of his time dismissed his work as ridiculous.  Leopold Kronecker called Cantor not only "a scientific charlatan" and a "renegade," but "a corrupter of youth"!)

Cantor started by defining what we mean by cardinality -- the number of members of a set.  This is easy enough to figure out when it's a finite set, but what about an infinite one?  Cantor said two sets have the same cardinality if you can find a way to put their members into a one-to-one correspondence without leaving any out, and that this works for infinite sets as well as finite ones.  For example, Cantor showed that the number of natural numbers and the number of even numbers is the same (even though it seems like there should be twice as many natural numbers!) because you can put them into a one-to-one correspondence:

1 <-> 2
2 <-> 4
3 <-> 6
4 <-> 8
etc.

Weird as it sounds, the set of fractions (rational numbers) has exactly the same cardinality as well -- there are just as many possible fractions as there are natural numbers.  Cantor proved this as well, using an argument called Cantor's snake:


Because you can match each of them to the natural numbers, starting in the upper left and proceeding along the blue lines, and none will be left out along the way, the two sets have exactly the same cardinality.
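
The picture isn't reproduced here, but the snake is easy to spell out in code: walk the grid of positive fractions along successive diagonals (numerator plus denominator held constant), skipping duplicates like 2/4 that you've already counted.  A minimal sketch:

```python
from fractions import Fraction

def snake(limit):
    """The first `limit` positive rationals, in Cantor's diagonal ("snake") order."""
    seen, result = set(), []
    total = 2                  # walk the diagonal where numerator + denominator = total
    while len(result) < limit:
        for numerator in range(1, total):
            value = Fraction(numerator, total - numerator)
            if value not in seen:        # skip duplicates like 2/4 == 1/2
                seen.add(value)
                result.append(value)
                if len(result) == limit:
                    break
        total += 1
    return result

# Pair each rational with a natural number -- a one-to-one correspondence.
for n, q in enumerate(snake(10), start=1):
    print(f"{n} <-> {q}")
```

Every fraction eventually shows up somewhere along the walk, so every one of them gets matched with a natural number -- which is the whole point.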

It was when Cantor got to the real numbers that the problems started.  The real numbers are the set of all possible decimals (including ones like π and e that never repeat and never terminate).  Let's say you thought you had a list (infinitely long, of course) of all the possible decimals, and since you believe it's a complete list, you claimed that you could match it one-to-one with the natural numbers.  Here's the beginning of your list:

7.0000000000...
0.1010101010....
3.1415926535...
1.4142135623...
2.7182818284...

Cantor used what is called the "diagonal argument" to show that the list will always be missing members -- and therefore the set of real numbers is not countable.  His proof is clever and subtle.  Take the first digit of the first number in the list, and add one.  Do the same for the second digit of the second number, the third digit of the third number, and so on.  (The first five digits of the new number from the list above would be 8.2553...)  The number you've created can't be anywhere on the list, because it differs from every single number on the list by at least one digit.

So there are at least two kinds of infinity: countable infinities, like the number of natural numbers and the number of rational numbers, and uncountable infinities, like the number of real numbers.  Cantor used the symbol aleph-null -- ℵ₀ -- to represent a countable infinity, and the symbol c (for continuum) to represent an uncountable infinity.

Then there's the question of whether there are any types of infinity larger than ℵ₀ but smaller than c.  The claim that the answer is "no" is called the continuum hypothesis, and it turned out to be exactly the kind of hobgoblin predicted by Kurt Gödel's Incompleteness Theorem back in 1931, which rigorously showed that a consistent mathematical system could never be complete -- there will always be true mathematical statements that cannot be proven from within the system.  Work by Gödel himself, and later by Paul Cohen, showed that the continuum hypothesis can be neither proven nor disproven from the standard axioms of set theory.

So that's probably enough mind-blowing mathematics for one day.  I find it all fascinating, even though I don't have anywhere near the IQ necessary to understand it at any depth.  My brain kind of crapped out somewhere around Calculus 3, thus dooming my prospects of a career as a physicist.  But it's fun to dabble my toes in it.

Preferably somewhere along the coastline of Cornwall.  However long it actually turns out to be.

****************************************



Tuesday, August 1, 2023

The problem with Aristotle's wheel

There's an apparent mathematical paradox, of very long standing, that illustrates a fundamental problem with a lot of modern discourse.

It's called the Aristotle's wheel paradox, and it goes something like this.

Imagine you have a circular wheel, and attached firmly to it is a concentric smaller wheel.  You set the larger wheel on a flat surface, and allow it to roll one complete rotation without slipping.

The trouble comes in when you try to figure out how far each wheel has moved.  Let's call the radius of the outside wheel R.  So by the time it's turned once, it's traveled as far as its circumference, which we all know from high school geometry is a distance of 2πR.  So that should be the length of the horizontal blue dashed line in the above diagram.

But here's the snag; the same applies to the smaller wheel.  Let's say its (smaller) radius is r.  So by the same logic, after it's made one complete rotation, it's traveled a distance of 2πr, which is less than 2πR (because r < R).  That's the red dashed line in the diagram.

But... the two lines are obviously the same length!

While it's uncertain if Aristotle ever did puzzle over this seeming conundrum, it's definitely been known since antiquity.  One of the earliest written expositions of it was by Hero of Alexandria, who described it in his book Mechanics in the first century C.E.

The solution has to do with the fact that the way the question is posed is misleading.  Most people, reading the description of the paradox and (especially) looking at the diagram, would accept unquestioningly that this is a correct framing of the problem.  But in fact, by stating it that way I was engaging in deliberate sleight-of-hand -- giving you information that seems correct on first glance, but is disingenuous at best and an outright lie at worst.

[Nota bene: I'm not implying here that Aristotle and the other mathematicians who worked on it were lying; they seemed genuinely puzzled by it.  What I'm saying is that I was misleading you, because I know the answer and misdirected you anyhow, with complete malice aforethought.]

The truth is, points on the two circles haven't traveled the same distance, even if that's what the diagram plausibly leads you to believe.  The straight dashed lines in the diagram aren't the paths taken by a point on the circumference of either circle; their length is the distance covered by the center of the wheel.  (Equivalently: only the big wheel is rolling without slipping.  The small wheel, because it's locked to the big one, partly slides along its line as it turns.)  If you trace the paths of actual points on the rims of the two wheels, here's what you get:

[Both this and the above diagram are licensed under the Creative Commons Merjet, AristotleWheel6, CC BY-SA 4.0]

Without even measuring it, you can see that a point on the outer wheel (the blue dashed curve) travels considerably farther than one on the inner wheel (the red dashed curve) -- and both, in fact, cover more distance than that traveled by the wheel's center (the green dashed line).
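
You can put numbers on "considerably farther."  Here's a minimal sketch that numerically integrates the length of the path traced by a point at distance r from the center of a wheel of radius R as it rolls through one full revolution; R = 1 and r = 0.5 are just illustrative values, not measurements from the diagrams:

```python
import math

def path_length(R, r, steps=100_000):
    """Arc length of the curve traced by a point at distance r from the center of a
    wheel of radius R rolling (without slipping) through one full turn.  The point
    sits at (R*t - r*sin(t), R - r*cos(t)), so its speed is
    sqrt(R**2 + r**2 - 2*R*r*cos(t)); integrate that from t = 0 to 2*pi."""
    dt = 2 * math.pi / steps
    return sum(
        math.sqrt(R**2 + r**2 - 2 * R * r * math.cos((i + 0.5) * dt)) * dt
        for i in range(steps)
    )

R = 1.0
print(f"center of the wheel:      {2 * math.pi * R:.4f}")      # 6.2832 (= 2*pi*R)
print(f"point on the inner wheel: {path_length(R, 0.5):.4f}")  # about 6.68
print(f"point on the outer rim:   {path_length(R, 1.0):.4f}")  # 8.0000 (the classic cycloid)
```

Three different path lengths for a single roll of the wheel -- exactly what the curves in the second diagram are showing.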

Just as you'd expect.

What strikes me about the Aristotle's wheel (non-)paradox is that this kind of thing underlies a great many of the problems with our current political situation.  How many of the hot-button topics in the news lately have come about because of a deliberate, disingenuous attempt to reframe the question in such a way that it ignores important facts or completely mischaracterizes the situation?  Examples include the Florida State Education Department's new standards for history requiring teachers to include information about how slaves benefitted from slavery, Richard Dawkins's statement to commentator Piers Morgan that biological sex is binary "and that's all there is to it," and Jason Aldean's defense of his controversial song and video "Try That in a Small Town," stating that "There is not a single lyric in the song that references race or points to it."

All three could be looked at with a shrug of the shoulders and a comment on the order of, "Okay, I guess that's true."  But in each case, that is to miss the deeper and far more critical truths those statements are deliberately overlooking.

This kind of thing is dangerous because it's so damned attractive.  We're taught to take things as given, especially when (1) they come from a trusted or respected source, and (2) they seem right.  This latter leads us onto the thin ice of confirmation bias, where we accept what someone says because it confirms what we already thought was true.  Here, though, the bias is more insidious, because the case is deliberately being presented to us so as to say nothing specifically false, and yet still to lead us to an erroneous conclusion.

So whenever you're reading the news, remember Aristotle's wheel -- and always keep in mind that what you're seeing may not be the whole story.  Like the two diagrams of the wheel's motion, sometimes all it takes is looking at things from another angle to realize you've been led down the garden path.

****************************************



Friday, July 14, 2023

The halting problem

A couple of months ago, I wrote a post about the brilliant and tragic British mathematician, cryptographer, and computer scientist Alan Turing, in which I mentioned in passing the halting problem.  The idea is simple enough: is there a general procedure that can tell, in advance, whether a given computer program will ever finish running -- and, by extension, whether a program designed to determine the truth or falsity of a mathematical statement will always reach a definitive answer in a finite number of steps?  The answer, surprisingly, is a resounding no.  You can't guarantee that a truth-testing program will ever reach an answer, even about matters as seemingly cut-and-dried as math.  But it took someone of Turing's caliber to prove it -- in a paper mathematician Avi Wigderson called "easily the most influential math paper in history."

What's most curious about this result is that you don't even need to understand fancy mathematics to find problems that have defied attempts at proof.  There are dozens of relatively simple conjectures for which the truth or falsity is not known, and what's more, Turing's result showed that for at least some of them, there may be no way to know.

One of these is the Collatz conjecture, named after German mathematician Lothar Collatz, who proposed it in 1937.  It's so simple to state that a bright sixth-grader could understand it.  It goes like this:

Start with any positive integer you want.  If it's even, divide it by two.  If it's odd, multiply it by three and add one.  Repeat.  Here's a Collatz sequence, starting with the number seven:

7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1.

Collatz's conjecture is that if you do this for every positive integer, eventually you'll always reach one.

The problem is, the procedure involves a rule that reduces the number you've got (n/2) and one that grows it (3n + 1).  The sequence rises and falls in an apparently unpredictable way.  For some numbers, the sequence soars into the stratosphere; starting with n = 27, it climbs all the way to 9,232 before finally descending to one.  But the weirdness doesn't end there.  Mathematicians studying this maddening problem have made a graph of all the numbers between one and ten million (on the x axis) against the number of steps it takes to reach one (on the y axis), and the following bizarre pattern emerged:

[Image licensed under the Creative Commons Kunashmilovich, Collatz-10Million, CC BY-SA 4.0]

So it sure as hell looks like there's a pattern to it, that it isn't simply random.  But it hasn't gotten them any closer to figuring out if all numbers eventually descend to one -- or if, perhaps, there's some number out there that just keeps rising forever.  All the numbers tested eventually descend, but attempts to figure out if there are any exceptions have failed.
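
If you want to generate the raw data behind a plot like that yourself, here's a minimal sketch that computes the step count for a range of starting values (kept small here purely for illustration):

```python
def collatz_steps(n):
    """Number of steps it takes n to reach 1 under the Collatz rules."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# The raw data behind the scatter plot: (starting value, number of steps).
for start in range(1, 31):
    print(start, collatz_steps(start))

print("most steps for any start below 10,000:",
      max(range(1, 10_000), key=collatz_steps))   # 6171, which takes 261 steps
```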

Despite the fact that in order to understand it, all you have to be able to do is add, multiply, and divide, American mathematician Jeffrey Lagarias lamented that the Collatz conjecture "is an extraordinarily difficult problem, completely out of reach of present-day mathematics."

Another conjecture that has defied solution is the Goldbach conjecture, named after German mathematician Christian Goldbach, who proposed it to none other than mathematical great Leonhard Euler.  The Goldbach conjecture is even easier to state:

All even integers greater than two can be expressed as the sum of two prime numbers.

It's easy enough to see that the first few work:

4 = 2 + 2
6 = 3 + 3
8 = 3 + 5
10 = 3 + 7 (or 5 + 5)
12 = 5 + 7
14 = 3 + 11 (or 7 + 7)

and so on.

But as with Collatz, showing that it works for the first few numbers doesn't prove that it works for every number, and despite nearly three centuries of efforts (Goldbach came up with it in 1742), no one's been able to prove or disprove it.  They've actually brute-force tested every even number from 4 up to 4,000,000,000,000,000,000 -- I'm not making that up -- and they've all worked.
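
The brute-force check itself is nothing fancy -- the real searches use far cleverer sieves and far bigger hardware, but here's a minimal sketch of the idea:

```python
def is_prime(n):
    """Simple trial-division primality test (fine for small numbers)."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return one pair of primes summing to the even number n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Check every even number up to a (very modest) limit.
for n in range(4, 101, 2):
    assert goldbach_pair(n) is not None, f"counterexample found: {n}"   # never fires... so far
print("every even number from 4 to 100 works; for instance, 100 =", goldbach_pair(100))
```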

But a general proof has eluded the best mathematical minds for close to three hundred years.

The bigger problem, of course, is that Turing's result shows that not only do we not know the answer to problems like these, there may be no way to know.  Somehow, this flies in the face of how we usually think about math, doesn't it?  The way most of us are taught to think about the subject, it seems like the ultimate realm in which there are always definitive answers.

But here, even two simple-to-state conjectures have proven impossible to solve.  At least so far.  We've seen hitherto intractable problems finally reach closure -- the four-color map theorem comes to mind -- so it may be that someone will eventually solve Collatz and Goldbach.

Or maybe -- as Turing suggested -- the search for a proof will never halt.

****************************************



Saturday, March 4, 2023

Weird math

When I was in Calculus II, my professor, Dr. Harvey Pousson, blew all our minds.

You wouldn't think there'd be anything in a calculus class that would have that effect on a bunch of restless college sophomores at eight in the morning.  But this did, especially in the deft hands of Dr. Pousson, who remains amongst the top three best teachers I've ever had.  He explained this with his usual insight, skill, and subtle wit, watching us with an impish grin as he saw the implications sink in.

The problem had to do with volumes and surface areas.  Without getting too technical, Dr. Pousson asked us the following question. If you take the graph of y = 1/x:


And rotate it around the y-axis (the vertical bold line), you get a pair of funnel-shapes.  Not too hard to visualize.  The question is: what are the volume and surface area of the funnels?

Well, calculating volumes and surface areas is pretty much the point of integral calculus, so it's not such a hard problem.  One issue, though, is that the tapered end of the funnel goes on forever; the red curves never strike either the x or y-axis (something mathematicians call "asymptotic").  But calc students never let a little thing like infinity stand in the way, and in any case, the formulas involved can handle that with no problem, so we started crunching through the math to find the answer.

And one by one, each of us stopped, frowning and staring at our papers, thinking, "Wait..."

Because the tapered end of each funnel turns out to have an infinite surface area (not so surprising, given that it gets narrower and narrower but goes on forever) -- and yet it encloses only a finite volume.

I blurted out, "So you could fill it with paint but you couldn't paint its surface?"

Dr. Pousson grinned and said, "That's right."

We forthwith nicknamed the thing "Pousson's Paint Can."  I only found out much later that the bizarre paradox of this shape was noted hundreds of years ago by the seventeenth-century Italian physicist and mathematician Evangelista Torricelli, and that it came to be called "Gabriel's Horn," after the horn blown by the Archangel Gabriel on Judgment Day.
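
For the record, the calculation itself is short.  Here's a sketch for the tapered end of one funnel -- the part above y = 1, where rotating y = 1/x around the y-axis means the funnel has radius 1/y at height y:

```latex
% Volume enclosed by the funnel above y = 1 (a stack of disks of radius 1/y):
V = \pi \int_{1}^{\infty} \left(\frac{1}{y}\right)^{2} dy = \pi
\qquad \text{(finite)}

% Surface area of the same piece (a surface of revolution about the y-axis):
A = 2\pi \int_{1}^{\infty} \frac{1}{y}\sqrt{1 + \frac{1}{y^{4}}}\, dy
\;\ge\; 2\pi \int_{1}^{\infty} \frac{dy}{y} = \infty
\qquad \text{(infinite)}
```

Finite volume, infinite surface area -- the paint can in two lines of calculus.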

There are a lot of math-phobes out there, which is a shame, because you find out some weird and wonderful stuff studying mathematics.  I largely blame the educational system for this -- I was lucky enough to have a string of fantastic, gifted elementary and middle school math teachers who encouraged us to play with numbers and figure out how it all worked, and I came out loving math and appreciating the cool and unexpected bits of the subject.  It's a pity, though, that a lot of people have the opposite experience.  Which, unfortunately, is what happened with me in my elementary and middle school social studies and English classes -- with predictable results.

So math has its cool bits, even if you weren't lucky enough to learn about 'em in school.  Here are some short versions of other odd mathematical twists that your math teachers may not have told you about.  Even you math-phobes -- try these on for size.

1. Fractals

A fractal is a shape that is "self-similar;" if you take a small piece of it, and magnify it, it looks just like the original shape did.  One of the first fractals I ran into was the Koch Snowflake, invented by Swedish mathematician Helge von Koch, which came from playing around with triangles.  You take an equilateral triangle, divide each of its sides into three equal pieces, and replace the middle piece with the two sides of a smaller equilateral triangle pointing outward.  Repeat, on every side of the new shape.  Here's a diagram with the first four levels:


And with Koch's Snowflake -- similar to Pousson's Paint Can, but for different reasons -- we end up with a shape that has an infinite perimeter but a finite area.
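
The arithmetic behind that claim is easy to check: each level replaces every side with four sides a third as long, so the perimeter keeps growing by a factor of 4/3, while the little triangles being added shrink fast enough that the total area converges.  A minimal sketch, measuring everything in units of the starting triangle (side 1, area 1):

```python
from fractions import Fraction

sides = 3                # number of sides in the current snowflake
length = Fraction(1)     # length of each side
area = Fraction(1)       # running total area, in units of the original triangle's area

for level in range(8):
    print(f"level {level}: perimeter = {float(sides * length):7.3f}, "
          f"area = {float(area):.5f}")
    area += sides * (length / 3) ** 2   # one new triangle per side, each with side length/3
    sides *= 4                          # every side becomes four sides...
    length /= 3                         # ...each a third as long
```

The perimeter column grows without limit; the area column settles down toward 8/5 of the original triangle.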

Fractals also result in some really unexpected patterns coming out of perfectly ordinary processes.  If you have eight minutes and want your mind completely blown, check out how what seems like a completely random dice-throwing protocol generates a bizarre fractal shape called the Sierpinski Triangle.  (And no, I don't know why this works, so don't ask.  Or, more usefully, ask an actual mathematician, who won't just give you what I would, which is a silly grin and a shrug of the shoulders.)
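
That dice-throwing protocol -- or at least the standard version of it, called the "chaos game" -- goes like this: pick one of the three corners of a triangle at random, jump halfway toward it from wherever you currently are, mark the spot, and repeat.  Here's a minimal text-mode sketch (the grid size and number of points are arbitrary choices, just enough to make the pattern visible):

```python
import random

WIDTH, HEIGHT = 64, 32
corners = [(0, HEIGHT - 1), (WIDTH - 1, HEIGHT - 1), (WIDTH // 2, 0)]
grid = [[" "] * WIDTH for _ in range(HEIGHT)]

random.seed(0)
x, y = WIDTH / 2, HEIGHT / 2             # start anywhere
for step in range(20_000):
    cx, cy = random.choice(corners)      # the "dice throw"
    x, y = (x + cx) / 2, (y + cy) / 2    # jump halfway toward that corner
    if step > 20:                        # let the first few points settle in
        grid[int(y)][int(x)] = "*"

print("\n".join("".join(row) for row in grid))
```

Out of what looks like pure randomness, a triangle-made-of-triangles pattern appears.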



2. The Four-Color-Map Theorem

In 1852, a man named Francis Guthrie was coloring in a map of the counties of England, and noticed that he could do the entire map, leaving no two adjacent counties the same color, using only four different colors. Guthrie wondered if that was true of all maps.

Turns out it is -- something that wasn't proven for sure until 1976.

Oh, but if you're talking about a map printed onto a Möbius Strip, it takes six colors.  A map printed on a torus (donut) would take seven.

Once again, I don't have the first clue why -- which probably helps explain why it took more than a century to prove.  But it's still pretty freakin' cool.


3. Brouwer's Fixed-Point Theorem

In the early twentieth century, Dutch mathematician Luitzen Brouwer came up with an idea that -- as bizarre as it is -- has been proven true.  Take two identical maps of Scotland.  Deform one any way you want to -- shrink it, expand it, rotate it, crumple it, whatever -- and then drop it on top of the other one.

Brouwer showed that, as long as the deformed copy lands entirely within the outline of the other map, there will always be at least one point on it that sits exactly on top of the corresponding point on the map underneath.

[Nota bene: it works with any map, not just maps of Scotland.  I just happen to like Scotland.]

It even works in three dimensions.  If I stir my cup of coffee, at any given time there will be at least one coffee molecule that is in exactly the same position it was in before I stirred the cup.

Speaking of which, all this is turning my brain to mush.  I think I need to get more coffee before I go on to...


4. The types of infinity

You might think that infinite is infinite.  If something goes on forever, it just... does.

Turns out that's not true.  There are countable infinities, and uncountable infinities, and the latter is much bigger than the former.

Infinitely bigger, in fact.

Let's define "countable" first.  It's simple enough; if I can uniquely assign a natural number (1, 2, 3, 4...) to the members of a set, it's a countable set.  It may go on forever, but if I took long enough I could assign each member a unique number, and leave none out.

So, the set of natural numbers is itself a countable set.  Hopefully obviously.

So is the set of odd numbers.  But here's where the weirdness starts.  It turns out that the number of natural numbers is exactly the same as the number of odd numbers.  You may be thinking, "Wait... that can't be right, there has to be twice as many natural numbers as odd numbers!"  But no, because you can put them in a one-to-one correspondence and leave none out:
1-1
2-3
3-5
4-7
5-9
6-11
7-13
etc.
So there are exactly the same number in both sets.

Now, what about real numbers?  The real numbers are all the numbers on the number line -- i.e. all the natural numbers plus all of the possible decimals in between.  Are there the same number of real and natural numbers?

Nope.  Both are infinite, but they're different kinds of infinite.

Suppose you tried to come up with a countable list of real numbers between zero and one, the same as we came up with a countable list of odd numbers above.  (Let's not worry about the whole number line, even.  Just the ones between zero and one.)  As I mentioned above, if you can do a one-to-one correspondence between the natural numbers and the members of that list, without leaving any out, then you've got a countable infinity. So here are a few members of that list:
0.1010101010101010...
0.3333333333333333...
0.1213141516171819...
0.9283743929178394...
0.1010010001000010...
0.13579111315171921...
And so forth.  You get the idea.

German mathematician Georg Cantor showed that no matter what you do, your list will always leave some out.  In what's called the diagonal proof, he said to take your list, and create a new number -- by adding one to the first digit of the first number, to the second digit of the second number, to the third digit of the third number, and so on.  So using the short list above, the first six decimal places will be:

0.242412...

This number can't be anywhere on the list.  Why?  Because its first digit is different from the first number on the list, the second digit is different from the second number on the list, the third digit is different from the third number of the list, and so forth.  And even if you just artificially add that new number to the end of the list, it doesn't help you, because you can just do the whole process again and generate a new number that isn't anywhere on the list.
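
If you want to check that digit by digit, here's a minimal sketch of the construction applied to the six decimals above (digits are counted after the decimal point, and "add one" wraps a 9 around to 0):

```python
digits_after_point = [
    "1010101010101010",
    "3333333333333333",
    "1213141516171819",
    "9283743929178394",
    "1010010001000010",
    "13579111315171921",
]

# Build a new decimal by changing the n-th digit of the n-th number on the list.
new_digits = ""
for n, digits in enumerate(digits_after_point):
    d = int(digits[n])                 # the n-th digit of the n-th number (0-indexed)
    new_digits += str((d + 1) % 10)    # "add one," wrapping 9 around to 0

print("0." + new_digits + "...")       # 0.242412... -- differs from every number on the list
```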

So there are more numbers between zero and one on the number line than there are natural numbers.  Infinitely more.


5. Russell's Paradox

I'm going to end with one I'm still trying to wrap my brain around.  This one is courtesy of British mathematician Bertrand Russell, and is called Russell's Paradox in his honor.

First, let's define two kinds of sets:
  • A set is normal if it doesn't contain itself.  For example, the "set of all trees on Earth" is normal, because the set itself is not a tree, so it doesn't contain itself.
  • A set is abnormal if it contains itself.  The "set of everything that is not a tree" is abnormal, because the set itself is not a tree -- which means it is a member of itself.
Russell came up with a simple idea: he looked at "the set of all possible normal sets."  Let's call that set R.  Now here's the question:

Is R normal or abnormal?

Thanks, I'll show myself out.

****************************************


Monday, September 12, 2022

Confidence boost

New from the "Well, I Coulda Told You That" department, we have: a study out of MIT showing that confident kids do better in mathematics -- and that confidence instilled in childhood persists into adulthood, with positive outcomes in higher education, employment, and income.

The study appeared in the Journal of Human Resources, and tracked children from eighth grade onward.  It looked at measures of their confidence in their own knowledge and ability, correlated those assessments against their performance in math, and then studied their paths later on in education and eventual employment.  Controlled for a variety of factors, confidence was the best predictor of success.

What's interesting is that their confidence didn't even have to be that accurate to generate positive outcomes.  Overconfident kids had a much better track record than kids who were underconfident by the same amount.  Put a different way, it's better to think you're pretty good at something that you're not than to think you're pretty bad at something that you're not.

I can speak to this from my own experience.  I've had confidence issues all my life, largely stemming from a naturally risk-averse personality together with a mom who (for reasons I have yet to understand) discouraged me from trying things over and over.  I wanted to try martial arts as a teenager; her comment was "you'd quit after three weeks."  I had natural talent at music -- one of the talents I can truly say I was born with -- and asked to take piano lessons.  My mom said, "Why put all that money and time into something for no practical reason?"  I loved (and love) plants and the outdoors, and wanted to apply for a job at a local nursery run by some friends of my dad's.  She said, "That's way more hard, heavy, sweaty work than you'll want to do."

So in the end I did none of those things, at least not until (a lot) later in life.

A great deal of attention has been given to "helicopter parents," who monitor their kids' every move, and heaven knows as a teacher I saw enough of that, as well.  I remember one parent in particular: if I entered a low grade for his son into my online gradebook (which the parents had access to), I could almost set a timer for how long it would take to get an email asking why he'd gotten a low score.  (It usually was under thirty minutes.)  To me, this is just another way of telling kids you have no confidence in them.  It says -- perhaps not as explicitly as my mom did, but says it just the same -- "I don't think you can do this on your own.  Here, let me hold your hand."

Humans are social primates, and we are really sensitive to what others think and say.  Coincidentally, just yesterday I saw the following post, about encouragement in the realm of writing:

Now, let me put out there that this doesn't mean telling people that bad work is good or that incorrect answers are correct.  It is most definitely not the "Everyone Gets A Prize" mentality.  What it amounts to is giving people feedback that encourages, not destroys.  It's saying that anyone can succeed -- while being honest that success might entail a great deal more hard work for some than for others.  And for the person him/herself, it's not saying "I'm better than all of you" -- it's saying, "I know I've got what it takes to achieve my dreams."

Confidence is empowering, energizing, and sexy.  And I say that as someone who is still hesitant, overcautious, self-effacing, and plagued with doubt.  I all too often go into an endeavor -- starting a new book, entering a race, trying a new style of sculpture -- and immediately my mind goes into overdrive with self-sabotage.  "This'll be the time I fail completely.  Probably better not to try."

So it's a work in progress.  But let's all commit to helping each other, okay?  Support your friends and family in achieving what they're passionate about.  Find ways to help them succeed -- not only honest feedback, but simply boosting their confidence in themselves, that whatever difficulties they're currently facing, they can overcome them. 

After all, isn't it more enjoyable to say "see, I toldja so" to someone when they succeed brilliantly than when they fail?

****************************************


Saturday, September 4, 2021

Space donuts

A friend of mine asked me yesterday if I'd ever heard of a "flux thruster atom pulser."  I said, "You mean, like in Back to the Future?"

He said, "No, that's a flux capacitor."  And he gave me a link to a site called Rodin Aerodynamics.

"You may want to wear a helmet while reading it," he said.  "It'll protect your skull when you faceplant."

Indeed, the site did not disappoint, and I was put on notice in the first paragraph:
Within, you will be taken on a spiraling tour through the toroidal roller coaster of our deterministic universe.  Dark Matter, the vibratory essence of all that exists, is no longer on its elusive hide and seek trip -- it has been found!  With the introduction of Vortex-Based Mathematics you will be able to see how energy is expressing itself mathematically.  This math has no anomalies and shows the dimensional shape and function of the universe as being a toroid or donut-shaped black hole.  This is the template for the universe and it is all within our base ten decimal system...  You have entered a place where Numbers Are Real And Alive and not merely symbols for other things.
So, we live in a giant space donut composed of dark matter, and 125.7 is a living entity.  Wheeee!  We are certainly off to a good start, aren't we?

[Image licensed under the Creative Commons RokerHRO, Torus vectors oblique, CC BY-SA 3.0]

The originator of the idea is allegedly a fellow named Marko Rodin, although I could find no independent corroboration of this -- as far as I could tell, Rodin seems not to exist except on this site and others that reference it.

The mysterious Rodin, however, has had quite a life:
At the age of fifteen Marko Rodin projected his mind as far as he could across the universe and asked the question, "What is the secret behind intelligence?"  Due to his gift of intense focus or because it was time for him to know the answer, his stomach muscles turned to iron and as he was literally lifted forward he answered out loud, "I understand."  What he had gleaned from his query was that all intelligence comes from a person's name.  This led him to understand that not only do our personal names and the language they are spoken in highly affect our personalities but that the most important names are the names of God.
I wish I'd known when I was fifteen that all I had to do to get rock-hard abs was ask a vague philosophical question.

Anyhow, what intelligence did Rodin glean from his trip, and the contemplation of his name?  Well, here are a few gems of wisdom he brought back:
  • a propulsion system that can bring you "anywhere in the universe."
  • there is an "aetheric template" in DNA that guides evolution.
  • the "repeating number series that solves pi and proves that it is a whole number."
  • the fact that "zero does not exist on the number line."
  • infinity has an "epicenter."
These represent just the ones I could read without my brain exploding, because a lot of Rodin's "ideas" are completely incomprehensible.  A couple of these will suffice:
  • the world boundary seams consist of nested vortices.
  • the torus skin models harmonic cascadence [sic].
A lot of his pronouncements sound like that -- a bunch of fancy-sounding words strung together that basically don't mean anything.

He goes on to mess about with number patterns, but brings in the Yin/Yang, the Mathematical Fingerprint of God, and Aetheric Flux Monopole Emanations.  What are those, you might ask?  You might be sorry you did:
Aetheron Flux Monopole Emanations, or Aetherons, are linear Emanations of quasi-mass/energy, traveling in a straight line from the center of mass outwards.  They radiate in phased-array from the Aeth Coalescence (the central essence of God).  The Aetheron Flux Monopole Emanations Rarefy the Diamond Tiles.  This rarefication [sic] is spread over the Torus Skin, creating Doubling Circuits and Nested Vortices.

Aetherons cannot be seen or felt by the average human being.  Yet, Aetherons are responsible for life as we know it.  Aetherons are Life Force of the universe, and are responsible for all form and movement.  Aetherons are the source of all magnetic fields and create instantly reacting, high inductance, dual magnetic field flows.  Aetherons generate Synchronized Electricity.  They are irresistible and can penetrate anything.

The Aetheron Flux Monopole Emanations comprise the positive, transparent Z axis of the Abha Torus.  This is not the traditional Z-Axis of the traditional, Euclidean geometry.  The transparent Z-Axis of the Abha Torus is actually a point source from which linear Emanations pour in all spherical directions from the center, as demonstrated by the Dandelion Puff Principle.
Oh!  Right!  The "Dandelion Puff Principle."  I'd forgotten all about that, from my college physics classes.

Now, you might think that this is just some guy blathering on about how he will Revolutionize Physics despite having no scientific background whatsoever, and admittedly people like that are a dime a dozen.  But now Marko Rodin has been championed by noted wackmobile Jeff Rense.

Never heard of Rense?  He is a conspiracy theorist par excellence, whose overall looniness quotient ranks him right up there with Richard C. Hoagland and Alex Jones.  But Rense compounds his bizarre view of the world with anti-Semitism and Holocaust denial, which moves his ideas from the realm of the laughable to the completely odious.  He brags that his is the most "format and content-plagiarized site on the net," despite the fact that most of his material seems to be outright lunacy.  (And even if you don't want to read any of his posts, you should at least go to his site to look at his profile photograph, in which he sports a mustache and a mane of flowing hair that in my eyes makes him look a little like an aging 70s porn star.)

So, anyway, that's today's Breakfast of Wingnuttery.  We live on a donut made of dark matter and numbers, and the whole thing is caused by invisible particles emanating from the Essence of God.  Oh, yeah, and despite what your math teacher told you, pi is a whole number, something I remember trying to convince my seventh grade math teacher of, many years ago.  "Can't we just call it '3' and be done with it?", I recall saying.  If only I'd known how many years ahead of my time I was, I could have dropped out of school and beat Rodin to the punch, and invented my own "flux thruster atom pulser" so I could "go anywhere in the universe."  Think of how impressed the aliens would have been, especially given my rock-hard abs.

*******************************

One of the most enduring mysteries of neuroscience is the origin of consciousness.  We are aware of a "self," but where does that awareness come from, and what does it mean?  Does it arise out of purely biological processes -- or is it an indication of the presence of a "soul" or "spirit," with all of its implications about the potential for an afterlife and the independence of the mind and body?

Neuroscientist Anil Seth has taken a crack at this question of long standing in his new book Being You: A New Science of Consciousness, in which he brings a rigorous scientific approach to how we perceive the world around us, how we reconcile our internal and external worlds, and how we understand this mysterious "sense of self."  It's a fascinating look at how our brains make us who we are.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]