I'm going to try and explain this in a simple way:
If I ask you to write a program that returns false if 4 is an odd number and true if it's an even number, you would say that's trivially computable:
def is_four_even():
    return 4 % 2 == 0
Of course there's an even _simpler_ function:
def is_four_even():
    return True
Both are valid ways of computing that function. Four is even, it has always been even, you don't need to check if it's even, you can just return true.
For all these complicated questions that people are getting wrapped up about, the only difference is that we currently don't know how to write the first version of the function, and we don't know which of "return true" or "return false" is the correct version of the second form of the function, but _the second form surely exists_, which means that it is computable. Either P=NP or P!=NP, and that has been true or false since the beginning of time, and _one_ of those two functions would have been correct to use from the beginning of time. It's computable, we just don't know which one to use right now.
As soon as someone proves the status of P?=NP, you can just pick one of the two ('return false' or 'return true') if someone asks you to write the function. It also doesn't really matter if a statement is provable in theory or not. Whether or not it's possible to prove that P?=NP, it is either true or it's false and it has always been either true or false, and one of those two programs is correct.
> It also doesn't really matter if a statement is provable in theory or not. Whether or not it's possible to prove that P?=NP, it is either true or it's false and it has always been either true or false, and one of those two programs is correct.
This is a pretty philosophically extremist statement (relying on a hardcore version of Platonism) and with my handle I can’t just let it stand unchallenged. :)
I’m actually somewhat more sympathetic to excluded middle for P?=NP than for some other statements, so let’s start elsewhere. I don’t think it’s at all obvious that the continuum hypothesis is, and always has been, either true or false. We know it’s independent of ZFC, of course, and there are sensible “extra” axioms that would resolve it in opposite directions (e.g. V=L vs. MM). In order to believe that there’s a fact of the matter you need to posit a very well-populated Platonic realm, despite not needing that kind of philosophical commitment to do mathematics.
Well, maybe P?=NP is just simpler than CH. It probably is! But you can imagine a case where it isn’t; e.g. if there exists an algorithm for 3-SAT (call it Algorithm A) which can be proved to be asymptotically optimal, and which runs in O(n^10^100) time if !CH, and exponential time if CH. Then the P?=NP question would be equivalent to CH, and you should have the same beliefs about its truth value. If you’re like me, that means you’re skeptical that it has a well-defined truth value “for all time” at all.
In that case, either "return false" or "return true" is a valid computable function, depending on the domain you're operating in (with CH or without it).
It's the same essentially as a function that returns true if any two lines will eventually intersect. The correct answer depends on whether you're in curved space or not. That the correct answer depends on the domain or axioms doesn't make it non computable.
There's just no case where a constant function isn't computable in the technical sense.
> What if there is a beaver that never halts or loops
A Turing machine with finite states must eventually either halt or loop. Those are the only options, because there are only finitely many configurations it can be in, and each configuration completely determines the next.
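For a system whose total configuration space really is finite, that pigeonhole argument can be sketched in a few lines. This is a toy model of my own with an abstract step function, not a real Turing machine; as the replies point out, a genuine TM's tape makes the configuration space infinite, which is exactly where the argument breaks down:

```python
# Toy illustration: a deterministic step function on a FINITE
# configuration space must eventually halt or revisit a configuration
# (and then loop forever). "Halting" here is reaching a designated
# sink configuration.

def run(step, start, halt_state):
    seen = set()
    config = start
    while config != halt_state:
        if config in seen:
            return "loops"          # same configuration twice => loops forever
        seen.add(config)
        config = step(config)
    return "halts"

# A 3-configuration system that cycles 0 -> 1 -> 2 -> 0 -> ...
print(run(lambda c: (c + 1) % 3, 0, halt_state=None))  # loops
# A system that counts down to its halt state.
print(run(lambda c: c - 1, 5, halt_state=0))           # halts
```

Because the space is finite and each configuration determines the next, no third outcome is possible in this toy setting.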
A "beaver" is defined to not loop. All "beavers" must halt, because otherwise they're just not considered for BB(n). All the challenge is in proving whether a given Turing machine does (or does not) halt, and therefore must not (or must) loop. Proving "halt" or "loop" proves the other one.
Yes, the function `busy_beaver_6() = 576125642131574254...` must exist.
I disagree unless you state what you mean by "loop". If it's just "repeat a state" then any 6-state TM "loops" or halts after at most 6 turns... and many that "loop" will eventually halt.
There are infinitely many configurations if you consider the tape.
It is still true, of course, that every Turing machine either halts on a given input, or doesn't.
You're right: my argument is flawed. I had thought TFA relied on that argument, but the machine that writes 1 and moves right forever is a counterexample.
Yet something about that still seems extremely "loopy". Is there something that must stop increasing after a while? Kolmogorov complexity? Or is that begging the question, since that's basically measuring the smallest TM that can produce that output?
How about this: given any arbitrarily large window size W, we can find an infinite number of timestamps (it may even be enough to say we can find two) at which the tape within the range Head-W to Head+W is identical. So if you say 10, I can give you an infinite list of step-counts at which the 21 symbols on the tape centered at the head are identical to those at all the other times in that list. Assuming we can do that for any arbitrary window size, then the TM is in a "looping" state. Of course, a halted TM also has this property. So perhaps this is true of all Turing Machines?
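As a quick sanity check of this window idea, here's the TM that writes 1 and moves right forever (the helper name `window_after` and the closed-form tape model are mine): once the head has moved at least W cells, the 2W+1 symbols centered on it are identical at every step, so this machine does count as "looping" under the proposed definition:

```python
# Sketch: for the TM that writes 1 and moves right, after t steps the
# head is at position t and cells 0..t-1 hold 1. Model the tape as the
# set of positions holding 1 and read off the window around the head.

def window_after(steps, w):
    ones = set(range(steps))   # cells written so far
    head = steps               # head position after `steps` steps
    return tuple(1 if (head + i) in ones else 0 for i in range(-w, w + 1))

w = 10
windows = {window_after(t, w) for t in range(w, w + 1000)}
print(len(windows))  # 1 -- the window is identical at every step t >= w
```

So the window is always W ones, the current 0, then W zeros: identical infinitely often, exactly as conjectured.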
I don't understand what you're disagreeing with. "loop" has a well-understood meaning here: return to an identical state. Not a similar one, identical. Because if it does that once, being a deterministic automaton, it will do so an infinite number of times without halting.
In determining whether you've returned to an identical state, are you including the tape? Or just the machine states?
If you are including the tape, it's not true that there are finitely many states. If you're not, then "looping" as you've defined it is not excluded from the definition of the busy beaver problem, and does not imply that the machine never halts.
> If you are including the tape, it's not true that there are finitely many states.
An infinite Turing tape can be in an identical state, however. The number of states doesn't have to be finite. If a Turing machine returns to an identical state, it will not halt. That's what we call looping.
An example of an identical state is 1 at indexes 3 and 5 of the tape, and 0 everywhere else.
Another example is the Brainfuck program `++[]`. This trivially returns repeatedly to a given finite state.
Yes, but the original claim was that non-halting TMs must loop because the number of configurations is finite. But that's not true.
Here's an example of a bf program that never returns to an identical configuration, and also never halts. The corresponding TM would be excluded from consideration for the busy beaver number, despite never "looping" according to your definition.
+[+]
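Both programs in this subthread can be checked mechanically. Here's a minimal Brainfuck stepper (my own sketch, handling only `+`, `[`, `]` and only flat, non-nested programs like these) that records each full configuration (instruction pointer, cell pointer, tape) and reports the first repeat:

```python
# Minimal Brainfuck stepper with configuration tracking. A repeated
# (ip, ptr, tape) triple means the program loops forever; running off
# the end of the program means it halted.

def bf_trace(prog, max_steps=100):
    tape, ptr, ip = {}, 0, 0
    seen = set()
    for _ in range(max_steps):
        if ip >= len(prog):
            return "halted"
        config = (ip, ptr, tuple(sorted(tape.items())))
        if config in seen:
            return "repeated a configuration"
        seen.add(config)
        op = prog[ip]
        if op == '+':
            tape[ptr] = tape.get(ptr, 0) + 1
        elif op == '[' and tape.get(ptr, 0) == 0:
            ip = prog.index(']', ip)   # jump past the ] (flat programs only)
        elif op == ']' and tape.get(ptr, 0) != 0:
            ip = prog.index('[')       # jump back to the [ (flat programs only)
            continue
        ip += 1
    return "no repeat within max_steps"

print(bf_trace("++[]"))  # repeated a configuration
print(bf_trace("+[+]"))  # no repeat within max_steps
```

`++[]` revisits the same configuration on every pass through the empty loop, while `+[+]` keeps incrementing the cell, so every configuration it visits is new.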
A similar-in-spirit TM (with tape alphabet {0, 1}, and only one machine state) is the one that unconditionally sets the current symbol to 1 and then moves to the right. This never encounters the same configuration twice (the number of 1s on the tape increases each turn) and also never halts.
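That TM can be simulated directly to confirm the claim within any step bound (the simulation harness is mine; the machine itself is as described above):

```python
# The one-state TM that writes 1 and moves right: model the tape as
# the set of positions holding 1, and check that no full configuration
# (head position, tape contents) ever repeats.

def simulate(steps):
    ones, head = set(), 0
    seen = set()
    for _ in range(steps):
        config = (head, frozenset(ones))
        assert config not in seen  # never the same configuration twice
        seen.add(config)
        ones.add(head)             # write 1 at the head
        head += 1                  # move right; the machine never halts
    return len(seen)

print(simulate(1000))  # 1000 distinct configurations, no halt, no repeat
```

The number of 1s grows every step, so no configuration can recur, yet the machine never halts: a counterexample to "finite states implies halt or loop" once the tape is counted as part of the state.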
> A Turing machine with finite states must eventually either halt or loop. Those are the only options, because there are only finitely many configurations it can be in, and each configuration completely determines the next.
The Turing machine writes and reads from an infinite tape, and as such, the number of configurations (the machine's state + tape) is countably infinite.
If you’re using ZFC, there is (TFA mentions the state of the art is BB(745); Yedida and Aaronson’s original work on BB(8000)[1] is quite fun to read from a programmer’s point of view). But the second form still exists (if you accept excluded middle)—you just can’t prove which one it is!
Specifically, ZFC is consistent iff ZFC+“Y&A’s machine does halt” is consistent iff ZFC+“Y&A’s machine never halts” is consistent (a theorem in a fairly weak ambient metalogic). So you can take a stronger set theory that does prove the answer, it’s just that thus far we have no reason to prefer theories that answer yes to theories that answer no.
(You don’t have to accept excluded middle, and it can on occasion be useful not to[2], but pragmatically you’re going to have a lot of difficulties even with first-year calculus unless you do.)
Whether it’s possible to prove it halts or not is irrelevant. It either does halt, or not. Whether a human can prove that a function has a particular value doesn’t change whether that function is computable in the technical sense being used here.