One of the most shocking results from mathematics -- or even scholarship as a whole -- is Kurt Gödel's Incompleteness Theorem.
Like (I suspect) many of us, I first ran into this startling idea in Douglas Hofstadter's wonderful book Gödel, Escher, Bach: An Eternal Golden Braid, which I read when I was an undergraduate at the University of Louisiana. I've since reread the whole thing twice, but I'm afraid the parts about formal logic and Gödel's proof are still a real challenge to my understanding. The gist of it is that Gödel responded to a call by German mathematician David Hilbert to come up with a finite, consistent set of axioms from which all other true statements in mathematics could be derived (and, significantly, which excluded all false or paradoxical ones). Gödel picked up the gauntlet, but not in the way Hilbert expected (or wanted).
He showed that what Hilbert was asking for was fundamentally impossible.
Put succinctly, Gödel proved that any axiomatic system powerful enough to express ordinary arithmetic faces an inescapable dilemma: if it can generate all true statements of mathematics, it will also generate some untrue ones; if it generates only true statements, there will always be true statements that cannot be proven from within it. In other words, if such a system is complete, it's inconsistent; if it is consistent, it's incomplete.
The result is kind of staggering, and the more you think about it, the weirder it gets. Math is supposed to be cut and dried, black-and-white, where things are either provable (and therefore true) or they're simply wrong. What Gödel showed was that this is not the case -- and worse, there's no way to fix it. If you take any true (but unprovable) mathematical statement you find and add it to the system as a new axiom, the expanded system still falls prey to Gödel's proof.
It's the ultimate catch-22.
The problem is, there's no way to tell the difference between a true-but-thus-far-unproven statement and a true-but-unprovable statement. There have been a number of conjectures that baffled mathematicians for ages before finally being proven -- the four-color map theorem and Fermat's last theorem come to mind. But one that has resisted all attempts at a proof is the strange Collatz conjecture, also known as the hailstone sequence, proposed in 1937 by the German mathematician Lothar Collatz.
What's wild about the Collatz conjecture is that it's simple enough that a grade-school student could understand it. It says: start with any natural number. If it's even, divide it by two. If it's odd, multiply it by three and then add one. Repeat the process until you reach 1. Here's how it would work, starting with 7:
7 → 22 → 11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1.
You can see why it's called a "hailstone sequence": like hailstones, the numbers rise and fall, sometimes buffeted far upwards before finally "falling to Earth." And what Collatz conjectured was that, subjected to this procedure, every natural number will finally fall to 1.
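If you'd like to play with it yourself, here's a minimal Python sketch of the procedure (the function name hailstone is my own choice, not standard terminology):

    # A minimal sketch of the Collatz procedure. Note that the loop
    # terminates only if the conjecture actually holds for n -- which,
    # so far, it always has.
    def hailstone(n):
        """Return the hailstone sequence starting at n and ending at 1."""
        seq = [n]
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            seq.append(n)
        return seq

    print(hailstone(7))
    # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]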
Simple enough, right? Wrong. The best minds in mathematics have been stumped as to how to prove it. The brilliant Hungarian mathematician Paul Erdős said, "Mathematics may not be ready for such a problem." American mathematician Jeffrey Lagarias was even bleaker, saying, "[The Collatz conjecture] is completely out of reach of present-day mathematics."
What's weirdest is that there does seem to be a pattern -- a relationship between the number you start with and the number of steps it takes to reach 1. Here's what the graph looks like, if you plot the number of steps as a function of the number you start with, for every number from 1 to 9,999:
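You can reproduce a plot like this with a few lines of Python -- a rough sketch, assuming matplotlib is available (the helper function steps_to_one is my own):

    # Plot the number of Collatz steps to reach 1, for every starting
    # number from 1 to 9,999. A sketch; assumes matplotlib is installed.
    import matplotlib.pyplot as plt

    def steps_to_one(n):
        """Count the Collatz steps needed to get from n down to 1."""
        count = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            count += 1
        return count

    xs = range(1, 10000)
    ys = [steps_to_one(n) for n in xs]
    plt.scatter(xs, ys, s=1)
    plt.xlabel("starting number")
    plt.ylabel("steps to reach 1")
    plt.show()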
It certainly doesn't appear to be random, but this doesn't get us any closer to proving that all numbers descend to 1 in a finite number of steps.
The reason all this comes up is a recent paper in The Journal of Supercomputing showing that every number between 1 and 2 to the 71st power obeys the Collatz conjecture. That's over two sextillion numbers. Of course, this still isn't proof; all it would take to disprove the conjecture is a single number, somewhere beyond the tested range, that either (1) keeps rising higher and higher forever, or (2) falls into an endless loop that never reaches 1. So until a formal proof (or disproof) is found, all mathematicians can do is keep extending the list of numbers tested.
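Incidentally, these computational searches don't run every trajectory all the way down to 1. A standard shortcut is to stop as soon as a trajectory drops below its starting value, since every smaller number has already been checked. Here's a toy version of that idea (my own sketch, not the paper's actual, far more optimized code):

    # Toy verification: confirm that every n up to some limit eventually
    # drops below itself. Since smaller numbers are verified first, that
    # implies every n reaches 1. A true counterexample would make the
    # inner loop run forever.
    def verify(limit):
        for n in range(2, limit + 1):
            x = n
            while x >= n:
                x = x // 2 if x % 2 == 0 else 3 * x + 1
        return True

    print(verify(10**6))  # prints True: all n up to a million check out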
But is the Collatz conjecture one of Gödel's inevitable true-but-unprovable statements? There's no way to know, even if it never does get proven. That's the brilliance -- and the frustration -- of Gödel's proof. Such statements are forever beyond the reach of the system's axioms, so there's no way to get at them.
So much for mathematics being firm ground.
Anyhow, that's our mind-blowing bit of news for this morning. A simple conjecture that has baffled mathematicians for almost ninety years, and is no closer to being solved now than it was when it was first proposed. It's indicative of how weird and non-intuitive mathematics can be. As Douglas Hofstadter put it, "It turns out that an eerie type of chaos can lurk just behind a facade of order -- and yet, deep inside the chaos lurks an even eerier type of order."
Note that there is also no way to distinguish between a true-but-thus-far-unproven statement and a false-but-not-yet-disproven statement. The Collatz conjecture has the flavor of Stephen Wolfram's cellular automata, some of which have outcomes that can't be determined short of playing them out completely (as in Turing's halting problem, which brings us back to Gödel).