Comment by ajkjk
8 months ago
Not an expert, but I've read about this a bit because it bothered me also and I think this is the answer:
Most of these 'uncomputable' problems are uncomputable in the sense of the halting problem: you can write down an algorithm that should compute them, but it might never halt. That's the sense in which BB(x) is uncomputable: you'll never know if you're done, because you can't distinguish a machine that never halts from one that just hasn't halted yet (since there are infinitely many possible configurations, you can't just wait for a loop).
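Roughly, the "obvious" way to compute BB(n) looks like the sketch below (Python, with made-up helpers all_machines(n) and step(machine) that don't correspond to any real library); the inner loop is exactly where it can run forever on a non-halting machine:

    # Naive attempt at BB(n): run every n-state machine to completion and take
    # the longest halting time. For a machine that never halts, the while loop
    # never exits, and there's no general way to detect that in advance.
    def naive_bb(n, all_machines, step):
        best = 0
        for machine in all_machines(n):    # hypothetical: enumerate all n-state machines
            steps = 0
            while not machine.halted:      # may loop forever -- this is the whole problem
                step(machine)              # hypothetical: advance the machine by one step
                steps += 1
            best = max(best, steps)
        return best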
So presumably the independence of a number from ZFC is like that also: you can't prove it's the value of BB(745) because you won't know if you've proved it; the only way to prove it is essentially to run those Turing machines until they stop and you'll never know if you're done.
I'm guessing that the very small Turing machines don't have enough structure to encode the kind of behavior whose halting is impossible to deduce, so they end up being Collatz-like and then you can go prove things about them using math. As you add states the possible iteration steps go wild and eventually do stuff that is beyond ZFC to analyze.
So the finite value 745 isn't really where the infinity/uncomputability comes from; it comes from the infinite tape that can produce arbitrarily complex functions. (I wonder if, over a certain number of states, it becomes possible to encode a larger Turing machine on the tape somehow, causing a sort of divergence to infinite complexity?)
And also, if BB were computable, then it could be used to solve the halting problem: run the Turing machine of size n for BB(n) steps, and if it hasn't halted yet, it never will. So the BB function is clearly not computable.
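(As a tiny sketch of that reduction, again with made-up helpers: assume a hypothetical oracle bb(n) for the busy beaver value and a step-bounded simulator simulate(machine, k) that reports whether the machine halts within k steps; neither is a real, computable function.)

    # If we had BB as an oracle, deciding halting for an n-state machine would be easy:
    def would_halt(machine, n, bb, simulate):
        bound = bb(n)                     # longest any halting n-state machine can run
        return simulate(machine, bound)   # not halted within BB(n) steps => never halts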
But to me as a layman that seems true regardless of formal axioms chosen, but I guess I need to read that linked thesis.
That is the standard argument for why BB is uncomputable for general n, but it's not the same as why BB(n) would be independent of ZFC for fixed n.
I am also not an expert, but this does not sound right to me. Gödel's incompleteness theorem shows that there are certain statements that cannot be proven or disproven. Being independent of ZFC means that something is such a case. So BB(643) being independent of ZFC means that we cannot prove or disprove that a certain number is BB(643). Aka we don't have the math to know for certain.
Independence from ZFC means we can't prove that any given number is BB(643) using ZFC. It doesn't mean we can't prove it at all; e.g. one could use a stronger theory, like Morse–Kelley set theory or ZFC plus a large cardinal axiom, which proves the consistency of ZFC, to verify the value of BB(643). But there would be some n for which BB(n) is independent of that theory too, requiring a yet-stronger theory, and so on ad infinitum.
ZF & ZFC are as important as they are because they're the weakest set theories we've found that can serve as a foundation for mathematics. We can always add axioms, but taking axioms away & still having a usable theory on which to base mathematics is much more difficult.
Sure, but it is still very hard to wrap one's head around how the value of a function can be independent of ZFC, and how that could fail for (e.g.) 642 but hold for 643. That was the point of my post. It seems like you could just... run every 643-state machine and see what the value is, which would in some sense constitute a "proof" in ZFC? But maybe not, because you wouldn't even know when you had the answer. That's the part that is so intriguing about it.
Don't you want the weakest (i.e. the one that makes the fewest assumptions) theory that works?
Yeah, but the vexing part is "how can that be true for e.g. N=643 but not N=642"? What happens at whatever number it first becomes true?
Incidentally, Gödel's theorem eventually comes down to a halting-like argument as well (well, a diagonal argument). There is a presentation of it that is in like less than one page in terms of the halting problem; all of the Gödel-numbering stuff is essentially an antiquated proof technique. I remember seeing this in a great paper which I can't find now, but it's also mentioned as an aside in this blog post: https://scottaaronson.blog/?p=710
wait jk I found it: https://arxiv.org/abs/1909.04569
> What happens at whatever number it starts true at?
Usually, "what happens" is that the machines become large enough to represent a form of induction too strong for the axioms to 'reason' about. It's a function of the axioms of your theory, and you can add more axioms to stave it off, but of course you can't prove that your new axioms are consistent without even more axioms.
> There is a presentation of it that is in like less than one page in terms of the halting problem; all of the Gödel-numbering stuff is essentially an antiquated proof technique.
Only insofar as you can put faith in the Church–Turing thesis to sort out all the technicalities of enumerating and verifying proofs. There still must be an encoding, just not the usual Gödel numbering.
> Incidentally, Gödel's theorem eventually comes down to a halting-like argument as well (well, a diagonal argument).
> There is a presentation of it that is in like less than one page in terms of the halting problem
Those are two very different ideas. Your second sentence says that Gödel's theorem is easy to prove if you have results about the halting problem. Your first one says that in order to prove Gödel's theorem, you need to establish results about the halting problem.
Wow that might be the best, most entertaining, and most elucidating academic article I've ever read. Thanks for sharing.
> Most of these 'uncomputable' problems are uncomputable in the sense of the halting problem: you can write down an algorithm that should compute them, but it might never halt. That's the sense in which BB(x) is uncomputable: you'll never know if you're done, because you can't distinguish a machine that never halts from one that just hasn't halted yet (since there are infinitely many possible configurations, you can't just wait for a loop).
> So presumably the independence of a number from ZFC is like that also: you can't prove it's the value of BB(745) because you won't know if you've proved it; the only way to prove it is essentially to run those Turing machines until they stop and you'll never know if you're done.
These aren't similar ideas. You can't know if a machine that hasn't halted yet will ever halt. But you can easily know if a machine that has already halted was going to halt.
Independence is the second case. For the value of BB(x) to be independent of ZFC, one of two things must hold:
(1) ZFC is inconsistent, and therefore all statements are independent of it.
(2) ZFC is consistent with two different statements, "BB(x) = a" and "BB(x) = b" for two different a, b. This means that a disproof of either statement cannot exist.
This, in turn, means that there is no observation you could ever make that would distinguish between the values a and b (for the identity of BB(x)). No matter what you believe the value of BB(x) might secretly be, there are no consequences; nothing anywhere could ever change if the value turned out to be different. Because, if there were an observable consequence of the value being different, the hypothetical observation of that consequence would be a disproof of the value that didn't cause it, and no such disproof can exist.
Neither value, a or b, can be more true than the other as the answer to the question "what is BB(x)?". It doesn't make sense to consider that question to have an answer at all.
> (2) ZFC is consistent with two different statements, "BB(x) = a" and "BB(x) = b" for two different a, b. This means that a disproof of either statement cannot exist.
> This, in turn, means that there is no observation you could ever make that would distinguish between the values a and b (for the identity of BB(x)). No matter what you believe the value of BB(x) might secretly be, there are no consequences; nothing anywhere could ever change if the value turned out to be different. Because, if there were an observable consequence of the value being different, the hypothetical observation of that consequence would be a disproof of the value that didn't cause it, and no such disproof can exist.
There's one part of this I don't understand. "BB(x) = n" means "there is at least one x-state Turing machine that halts after exactly n steps, and there are no x-state Turing machines that halt after more than n steps", right? Then why wouldn't this approach work (other than the numbers being way too big to actually do in this universe)? WLOG, assume a < b. Run all possible x-state Turing machines for b steps. If any halted on step b, then you've disproved "BB(x) = a". If not, then you've disproved "BB(x) = b".
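Here's a sketch of the procedure I have in mind (Python, with made-up helpers all_machines(x) and steps_to_halt(machine, limit); ignoring that these numbers are far too big to ever actually run):

    # Attempt to disprove one of "BB(x) = a" or "BB(x) = b", where a < b:
    # run every x-state machine for b steps and see if any halts on exactly step b.
    def disprove_one(x, a, b, all_machines, steps_to_halt):
        assert a < b
        halts_at_b = any(
            steps_to_halt(m, b) == b       # hypothetical: steps until halt, or None if still running
            for m in all_machines(x)       # hypothetical: enumerate all x-state machines
        )
        # Some machine halts at step b  => BB(x) >= b > a, so "BB(x) = a" is refuted.
        # No machine halts at step b    => "BB(x) = b" is refuted.
        return "BB(x) = a is false" if halts_at_b else "BB(x) = b is false"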
The trick is that if none halt on `b` steps, you don't know that BB(x)<b. Specifically, if you have one TM that keeps going, you don't know whether that TM halts eventually or keeps going forever.
What happens if you take the larger of a and b and run all the Turing machines for that many steps?
Among all the values of BB(n) (for some fixed n) that are consistent with ZFC, the smallest one is the true value.
The issue is that there is no way within ZFC to determine which value is the smallest.
What are a and b?
It likely comes from the smallest machine that someone has been able to construct that can diagonalize over all proofs in ZFC, or something similar.
It has to come from a finite value (specifically, the amount of complexity that can be encoded in 745 pieces of information: https://turingmachinesimulator.com/shared/vgimygpuwi), because the finite size 745 with an infinite tape leads to uncomputability, but size 5 does not.
In a very real sense, a deep kind of infinite complexity can be generated from 745 objects of a certain kind, but not from 5 objects of that kind.
Turing machines have infinite tape, not infinite state. The entire set of all halting machines of a given size collectively only use finite tape. Totally finite. Only (some of) the non-halting machines use infinite tape.
The problem is that we don't know in advance how large the (definitely finite) upper bound on the amount of tape all the size-N halting machines use is, until enough of them (one per known equivalence class) have halted. And we don't know (in general) how to run all the halting ones until they halt without also running a non-halting program for an unbounded amount of time.
TL;DR: unbounded is not infinite, but it's big enough to be a problem.
I am aware it's an infinite tape and finite state (maybe I misspoke somewhere), as well as the halting machines using finite tape (because of course they do).
But the overall 'complexity' (at a given timestep, say) is going to be due to the states and the tape together. The BB(5) example that was analyzed, iirc, was a Collatz-like problem (Aaronson describes it here: https://scottaaronson.blog/?p=8088). My interpretation of this is that:
1. Collatz-like functions have a lot of complexity due to math alone.
2. 5 states turned out to be enough to "reach" one of those.
3. More states means you're going to reach more possible Collatz-like functions (they don't have to be Collatz-like; it's just easier to think about them that way).
4. Eventually you reach ones that ZFC cannot show to halt, because there is effectively no way to prove it other than running them, and then you would have to solve the halting problem.
The part that helped me be less unsettled by BB(745) being independent of ZFC was the notion that it eventually boils down to a halting problem, and asking ZFC to "solve" it... which is more agreeable than the idea that "ZFC cannot compute a function that seems to be solvable by brute force".