Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Thursday, November 13, 2025

The incomplete algorithm

A paper this week in the Journal of Holography Applications in Physics left me scratching my head.

To get why it's so puzzling -- and why not only I (an admitted layperson) but also some actual physicists are inclined to doubt its rather earth-shattering conclusion -- a little background first.

A few months ago I wrote here at Skeptophilia about Kurt Gödel's Incompleteness Theorem.  This astonishing piece of work, published in 1931, dashed the hopes of people like David Hilbert that it could be proved that a purely algorithmic system like mathematics was both complete (every true statement expressible within it is provable) and consistent (every provable statement expressible within it is true).  Gödel proved, beyond any doubt, that no such system powerful enough to express basic arithmetic can be both: if it is consistent, it is incomplete (some true statements are unprovable); if it is complete, it is inconsistent (some provable statements are untrue).

It was a devastating result.  We think of math as being cut-and-dried, a system where there's no fuzzy ambiguity.  Turns out there's a built-in flaw (although some might object to my calling it that); and not just in mathematics -- any sufficiently powerful algorithmic system you could invent would fall to the same death blow.

The other piece of background is also something I've dealt with here before: the possibility that we might live in a simulation.  The claim has been seriously considered by people like Nick Bostrom (of the University of Oxford) and David Kipping (of Columbia University), who looked at the questions of (1) if we were in a simulation, how we could tell, and (2) if simulations are possible, what our chances are of inhabiting the real, original universe (turns out, a fairly persuasive argument concludes they're near zero).  Of course, that doesn't settle the truth of the major premises themselves -- respectively, (1) are we in a simulation? and (2) are simulations possible at all?  And as anyone who's taken a course in logic knows, if the first part of a syllogism is false, you can conclude any damnfool thing you want from it.  (More accurately, if the major premise is false, the conclusion could be true or false, and there's no way to tell for sure.)

Okay.  So what this week's paper did was to look at the possibility of our being in a simulation from a Gödelian perspective.  Their conclusion was that if the universe is a simulation, then it is by definition an algorithmic system, because anything runnable on a computer (even a super-powerful one) would have to be, at bottom, a set of algorithms.  Therefore, by Gödel's Theorem, there would have to be true statements that are unreachable from within the system, meaning there is more to the universe than can be reached from inside the simulation.  Conclusion: we can't be in a simulation, because a simulation would be inherently incomplete.

My first thought on reading this was that I must be misinterpreting them, because the conclusion seemed like an enormous overreach.  But here is a quote from one of the authors, Mir Faizal of the University of British Columbia - Okanagan:

Drawing on mathematical theorems related to incompleteness and indefinability, we demonstrate that a fully consistent and complete description of reality cannot be achieved through computation alone.  It requires non-algorithmic understanding, which by definition is beyond algorithmic computation and therefore cannot be simulated.  Hence, this universe cannot be a simulation.

And another, by study co-author Lawrence Krauss, of Australian National University:

The fundamental laws of physics cannot be contained within space and time, because they generate them.  It has long been hoped, however, that a truly fundamental theory of everything could eventually describe all physical phenomena through computations grounded in these laws.  Yet we have demonstrated that this is not possible.  A complete and consistent description of reality requires something deeper -- a form of understanding known as non-algorithmic understanding.

So I don't think I'm misunderstanding their logic -- although I will certainly defer to wiser heads if there are any physicists in the studio audience.

The problem for me is that it has yet to be shown that the universe is non-algorithmic.  I'll buy that if it is algorithmic, there will be truths we can't reach by mathematical logic; so if this were a simulation, there'd be parts of it we couldn't get at.  But... so what?  I have no problem imagining a sufficiently complex simulation giving such a convincing appearance of reality that any unreachable truths would be remote enough that we simply haven't stumbled across them yet.

I think Faizal et al. may have too high an opinion of the capacity of the human brain for elucidating the workings of the universe.

Turns out I'm not the only one who has issues with this paper.  Physicist Sabine Hossenfelder did a brilliant take-down of the Faizal et al. study, and in fact gave it a nine out of ten on her infamous Bullshit Meter.  She said:

Unfortunately, [the paper] assumes its major premise.  We have never measured any quantity that is provably uncomputable.  To assume we can do this is logically equivalent to assuming we don't live in a simulation.  To see what I mean, let me turn their argument around.  Maybe the fact that we have never measured any quantity we can't also compute is proof that we are part of an algorithm that simulates the universe...  
We don't know any counterexamples.  The cases which I've mentioned that have been discussed in the literature always take some limit to infinity.  For example, they might use infinitely many atoms.  Or a lattice with an infinitely small spacing...  In these mathematical idealizations, yes, there are quantities that provably can't be computed in a finite time.  But we don't know any real-world examples.  Not a single one.  Isn't that weird?  It's like we actually can't measure anything that an algorithm could not also compute.  So maybe... we are algorithms.

In conclusion, the headlines I've seen, along the lines of, "Physicists Prove We're Not In A Simulation," strike me as a bit premature.  We haven't escaped from Bostrom and Kipping's matryoshka universe quite that easily.  Me, I'm going to stay in the "I don't know" column.  The Simulation Hypothesis certainly hasn't been proven, but despite Faizal et al., I don't think it's going to be easy to disprove, either.

So I guess we haven't settled whether the insane *gestures around vaguely at everything* we've been dealing with is real, or is (as I've suggested before) the result of stoned aliens twiddling the knobs just to fuck with us.  I'm honestly not sure which would be worse, frankly.

****************************************


Wednesday, July 9, 2025

Tracking the hailstones

One of the most shocking results from mathematics -- or even scholarship as a whole -- is Kurt Gödel's Incompleteness Theorem.

Like (I suspect) many of us, I first ran into this startling idea in Douglas Hofstadter's wonderful book Gödel, Escher, Bach: An Eternal Golden Braid, which I read when I was an undergraduate at the University of Louisiana.  I've since reread the whole thing twice, but I'm afraid the parts about formal logic and Gödel's proof are still a real challenge to my understanding.  The gist of it is that Gödel responded to a call by German mathematician David Hilbert to come up with a finite, consistent set of axioms from which all other true statements in mathematics could be derived (and, significantly, which excluded all false or paradoxical ones).  Gödel picked up the gauntlet, but not in the way Hilbert expected (or wanted).

He showed that what Hilbert was asking for was fundamentally impossible.

Put succinctly, Gödel proved that if you come up with an axiomatic system that can generate all true statements of mathematics, it will also generate some untrue ones; if you come up with a system that generates only true statements, there will always be true statements that cannot be proven from within it.  In other words, if a mathematical system is complete, it's inconsistent; if it is consistent, it's incomplete.

The result is kind of staggering, and the more you think about it, the weirder it gets.  Math is supposed to be cut and dried, black-and-white, where things are either provable (and therefore true) or they're simply wrong.  What Gödel showed was that this is not the case -- and worse, there's no way to fix it.  If you simply take any true (but unprovable) mathematical statements you find, and add them to the system as axioms, the new expanded system still falls prey to Gödel's proof.

It's the ultimate catch-22.

The problem is, there's no way to tell the difference between a true-but-thus-far-unproven statement and a true-but-unprovable one.  There have been a number of conjectures that baffled mathematicians for ages and were finally proven -- the four-color map theorem and Fermat's last theorem come to mind.  But one that has resisted all attempts at a proof is the strange Collatz conjecture -- whose number chains are nicknamed "hailstone sequences" -- proposed in 1937 by the German mathematician Lothar Collatz.

What's wild about the Collatz conjecture is that it's simple enough that a grade-school student could understand it.  It says: start with any natural number.  If it's even, divide it by two.  If it's odd, multiply it by three and then add one.  Repeat the process until you reach 1.  Here's how it would work, starting with 7:

7 → 22 → 11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1.

You can see why it's called a "hailstone sequence"; like hailstones, the numbers rise and fall, sometimes buffeted far upwards before finally "falling to Earth."  And what Collatz conjectured was that, subjected to this procedure, every natural number eventually falls to 1.
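
If you want to see the procedure as code, here's a minimal sketch in Python (my own illustration, obviously, not anything from Collatz).  Note that the loop quietly assumes the conjecture is true -- if some starting number never reached 1, it would simply run forever:

    def hailstone(n):
        """Return the Collatz sequence starting from n."""
        seq = [n]
        while n != 1:  # assumes the conjecture holds; a counterexample would loop forever
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            seq.append(n)
        return seq

    print(hailstone(7))
    # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]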

Simple enough, right?  Wrong.  The best minds in mathematics have been stumped as to how to prove it.  The brilliant Hungarian mathematician Paul Erdős said, "Mathematics may not be ready for such a problem."  American mathematician Jeffrey Lagarias was even bleaker, saying, "[The Collatz conjecture] is completely out of reach of present-day mathematics."

What's weirdest is that there does seem to be a pattern -- a relationship between the number you start with and the number of steps it takes to reach 1.  Here's what the graph looks like, if you plot the number of steps as a function of the number you start with, for every number from 1 to 9,999:

[Image is in the Public Domain]

It certainly doesn't appear to be random, but this doesn't get us any closer to proving that all numbers descend to 1 in a finite number of steps.
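
For anyone who wants to play along at home, here's a quick sketch (mine, not from any of the papers) that computes the stopping-time data behind a plot like that one:

    def steps_to_one(n):
        """Count how many Collatz steps it takes n to fall to 1."""
        count = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            count += 1
        return count

    # Stopping times for every starting number from 1 through 9,999
    times = [steps_to_one(n) for n in range(1, 10_000)]
    print(max(times))  # the longest flight in this range: 261 steps, for n = 6,171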

The reason all this comes up is a recent paper in The Journal of Supercomputing showing that every number between 1 and 2 to the 71st power obeys the Collatz conjecture.  That's a bit over two sextillion.  Of course, this still isn't proof; all it would take to disprove it is one single number, somewhere beyond the tested range, that either (1) keeps rising higher and higher forever, or (2) falls into a repeating loop that never reaches 1.  So until a formal proof (or disproof) is found, all mathematicians can do is keep extending the list of numbers tested.
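
To make that epistemic bind concrete, here's a toy checker (a sketch of mine, nothing like the optimized machinery in the actual paper).  The telling part is the third outcome: a computer can confirm "reaches 1," and it could in principle catch a repeating loop, but "rises forever" is indistinguishable from "just needs more steps":

    def check_collatz(n, step_limit=100_000):
        """Test one candidate: does it reach 1, hit a loop, or remain undecided?"""
        seen = set()
        for _ in range(step_limit):
            if n == 1:
                return "reaches 1"
            if n in seen:
                return "loop"  # a cycle that never hits 1 would disprove the conjecture
            seen.add(n)
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        return "undecided"  # might rise forever -- or might just need a bigger step_limit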

But is the Collatz conjecture one of Gödel's inevitable true-but-unprovable statements?  No way to know, even if it never does get proven.  That's the brilliance -- and the frustration -- of Gödel's proof.  Such statements are forever beyond the reach of the axiomatic system, so there's no way to get at them from within.

So much for mathematics being firm ground.

Anyhow, that's our mind-blowing bit of news for this morning.  A simple conjecture that has baffled mathematicians for almost ninety years, and is no closer to being solved now than it was when it was first proposed.  It's indicative of how weird and non-intuitive mathematics can be.  As Douglas Hofstadter put it, "It turns out that an eerie type of chaos can lurk just behind a facade of order -- and yet, deep inside the chaos lurks an even eerier type of order."

****************************************