Andy's Math/CS page

Saturday, December 23, 2006

Trees and Infinity, part II

Previous: Part I




Next: Part III



Time for another short installment on trees and recursive combinatorics, one that will address a question raised by the previous puzzle and take us on a journey beyond the arithmetic hierarchy (cue your favorite sci-fi theme music...).

To recap, Konig's Lemma guarantees that a locally-finite infinite tree has an infinite path.

Suppose some machine M(x) decides membership of x in a locally-finite tree T and, if x is in T, outputs the parent and immediate children of x. Then even if T is infinite, there is no guarantee that T has a computably-describable infinite path; however, such a path can be computed, uniformly in M (assuming we know the root node), given oracle access to the Halting set.
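To make the oracle's role concrete, here is a little Python sketch of that path-finding procedure (my own illustration; the names are made up). The function 'children' plays the role of M, and 'subtree_is_finite' stands in for the one question we would put to the Halting-set oracle at each step: for a locally-finite computable tree, "the subtree below x is finite" can be verified by exhaustively expanding it, so it reduces to a halting question.

# Sketch: walking an infinite path through an infinite, locally finite
# computable tree, given a way to answer "is the subtree below this node
# finite?" -- the one question we'd put to the Halting-set oracle.

def subtree_is_finite(node, children, bound=10_000):
    # Placeholder for the oracle query.  For a locally finite computable
    # tree, a finite subtree can be verified by exhaustive expansion
    # (the expansion halts exactly when the subtree is finite), so the
    # question is decidable relative to the Halting set.  A node-count
    # bound stands in for that oracle here, purely for demonstration.
    seen, frontier = 0, [node]
    while frontier:
        seen += len(frontier)
        if seen > bound:
            return False                  # "oracle" answer: infinite
        frontier = [c for n in frontier for c in children(n)]
    return True

def infinite_path(root, children, steps):
    """Yield the first `steps` nodes of an infinite path from `root`."""
    node = root
    for _ in range(steps):
        yield node
        # Konig's Lemma: if the subtree at `node` is infinite and finitely
        # branching, some child also roots an infinite subtree.
        node = next(c for c in children(node)
                    if not subtree_is_finite(c, children))

# Toy run on the complete binary tree, nodes coded as bit-strings.
binary_children = lambda s: [s + '0', s + '1']
print(list(infinite_path('', binary_children, 5)))   # ['', '0', '00', '000', '0000']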

Konig's Lemma fails if nodes may have infinite degree, even if the tree is infinite at every height (Exercise). But consider the following computational problem:

INF: Given a Turing machine M describing a tree T (not necessarily locally finite), does T have an infinite path?
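One natural way to read "a Turing machine M describing a tree" (my convention for the sketches below, not something fixed above): nodes are finite sequences of naturals, M decides which finite sequences belong to T, T is closed under prefixes, and an infinite path is an infinite sequence all of whose finite prefixes M accepts. A tiny example in Python:

# One way to read "a Turing machine M describing a tree" (an assumption
# of mine, not fixed above): nodes are finite sequences of naturals, and
# M decides membership of a sequence in T, with T closed under prefixes.
# An infinite path is then an infinite sequence of naturals all of whose
# finite prefixes M accepts.

def M(node):
    # Example tree, possibly of infinite degree: all finite non-decreasing
    # sequences of naturals.  Every node has infinitely many children, and
    # (1, 1, 1, ...) is one of this tree's many infinite paths.
    return all(a <= b for a, b in zip(node, node[1:]))

print(M(()))            # True: the root (empty sequence) is in T
print(M((2, 5, 5, 9)))  # True
print(M((3, 1)))        # False: not non-decreasing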

This question has a superficial simplicity: it is purely existential. However, the object it seeks is infinite, and unlike in the case where Konig's Lemma applied, there is no obvious finite 'no' certificate. Here's the astounding truth of the matter:

Problem 1: There exists an algorithm which, given any sentence F of first-order arithmetic, outputs a polynomial-time Turing machine M describing a tree T such that T has an infinite path if and only if F is true.

Solving the problem boils down to this: suppose we've got a statement of the form Q: "there exists a number x such that P(x) is true", and we know how to convert each individual P(i) to a tree that has an infinite path if and only if P(i) is true; we now need to use this knowledge to build a tree corresponding to Q.
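For what it's worth, here is one natural way to handle the existential case under the convention above (a sketch of my own, not necessarily the intended construction): hang the tree for P(i) below the i-th child of a fresh root, so that the combined tree has an infinite path exactly when some tree for P(i) does.

# Sketch of the existential case, under the membership-test convention
# above.  Given trees for P(0), P(1), ... (each presented as a membership
# test on finite sequences), build a tree for "exists x, P(x)": a fresh
# root whose i-th child roots a copy of the tree for P(i).  The result
# has an infinite path iff some tree for P(i) does.

def exists_tree(subtree_for):
    # subtree_for(i) returns the membership test for the tree of P(i).
    def M(node):
        if node == ():                    # the fresh root
            return True
        i, rest = node[0], node[1:]
        return subtree_for(i)(rest)
    return M

# Toy use with P(i) = "i is even"; as hypothetical stand-ins, take the
# tree for P(i) to be the full tree of all finite sequences (plenty of
# infinite paths) when P(i) holds, and a bare root otherwise.
full = lambda node: True
bare = lambda node: node == ()
T = exists_tree(lambda i: full if i % 2 == 0 else bare)
print(T(()), T((2, 7, 7, 7)), T((3, 0)))  # True True False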

Only slightly harder is the other case that arises: Q of the form "for all numbers x, P(x) is true", where we again know how to build trees for each individual P(i). Still, this is pretty cool, since you are effectively reducing a universal condition to an existential one. Once you can handle both cases, you can systematically attack an arbitrary first-order sentence.

***Interlude: A Review***

If you're not up on first-order logic, just consider F to be a statement of the form

"the game G(X1, X2, ... Xk) is a win for player 1 (assuming perfect play)",

where G is a computable function from N^k --> {0, 1} that defines a game as follows: player 1 and player 2 take turns choosing the k values X1, X2, ..., Xk, and player 1 wins if and only if G evaluates to 1 on the values chosen.

This 'computable game' formulation turns out to be equivalent to first-order arithmetic; i.e., given any F we can produce a G such that G is a win for player 1 if and only if F is true, with the number of rounds of play in G equal to the depth of alternating quantifiers in F; there is also a reduction in the reverse direction.
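To make the game picture concrete, here's a brute-force evaluator of my own (with moves restricted to a finite range so that the recursion terminates; over all of N such an evaluation is not computable in general, which is the whole point). It just unwinds the alternating quantifiers: player 1 wins iff there exists X1 such that for all X2 there exists X3 ... with G(X1, ..., Xk) = 1.

# Sketch of the game/quantifier correspondence.  Moves are restricted to
# range(B) so that brute force terminates; over all of N this evaluation
# is not computable in general -- which is exactly why the tree encoding
# above has teeth.

def player1_wins(G, k, B):
    """True iff player 1 wins the k-round game G with moves in range(B).

    Unwinding the recursion gives the alternating-quantifier form:
    exists X1, forall X2, exists X3, ... such that G(X1, ..., Xk) == 1.
    """
    def wins(prefix):
        if len(prefix) == k:
            return G(*prefix) == 1
        outcomes = (wins(prefix + (m,)) for m in range(B))
        # Even-length prefix: it is player 1's turn (rounds 1, 3, ...),
        # an existential choice; otherwise player 2's, a universal one.
        return any(outcomes) if len(prefix) % 2 == 0 else all(outcomes)
    return wins(())

# Toy game: player 1 picks X1, player 2 picks X2, player 1 picks X3, and
# player 1 wins iff X1 + X3 > X2.  With moves in range(5), player 1 wins
# (open with X1 = 4, say).
G = lambda x1, x2, x3: 1 if x1 + x3 > x2 else 0
print(player1_wins(G, 3, 5))   # True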

***

So how hard is the computational problem of checking for infinite paths? If I've got my definitions right, it's complete for the first level of the analytic hierarchy, Sigma^1_1: existential second-order logic.

Now here's a tie-in to the 'censor puzzle' I discussed recently. Given a 'classification' C of finite words over {0, 1} as allowed or forbidden and an infinite 'text' bitstring D, both poly-time computable, let PARSE be the computational problem of determining whether D can be parsed, i.e. chopped into consecutive finite words, so that every word is allowed with respect to C.
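Spelled out (this is my reading of the setup from Part I): C maps each finite word to allowed or forbidden, D maps an index to a bit, and a parsing chops D into consecutive finite words. Here's a small sketch of that interface, together with a checker for a proposed parsing of a finite prefix; the problem itself, of course, concerns the entire infinite text.

# Sketch of the PARSE setup (my reading of the censor puzzle of Part I):
# C classifies finite bit-words as allowed or forbidden, D is an infinite
# bit-string presented as an index -> bit function, and a parsing chops D
# into consecutive finite words.  Deciding whether some parsing of the
# whole text is wholly allowed is the hard part; checking a proposed
# parsing of a finite prefix, as below, is easy.

def C(word):
    # Hypothetical classification: a word is allowed iff it has even parity.
    return sum(word) % 2 == 0

def D(i):
    # Hypothetical text: D = 0 1 1 0 1 1 0 1 1 ...
    return 0 if i % 3 == 0 else 1

def prefix_parse_allowed(cut_lengths):
    """Check that cutting D's prefix into words of these lengths yields
    only allowed words."""
    pos = 0
    for length in cut_lengths:
        word = [D(pos + j) for j in range(length)]
        if not C(word):
            return False
        pos += length
    return True

print(prefix_parse_allowed([3, 3, 3]))  # True:  011 | 011 | 011, all even parity
print(prefix_parse_allowed([2, 2, 2]))  # False: the first word 01 has odd parity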

Problem 2: Show that PARSE and INF are Turing-equivalent.

The challenge in reducing INF to PARSE, which is not too hard to overcome, is that the alphabet is finite, so the same words pop up again and again; the decisions we make early on in building up C keep constraining us later, whereas we want continual freedom of movement in encoding new and higher levels of the tree we are attempting to model in a PARSE instance.

In the reduction I came up with, it's always easy to find a wholly-forbidden parsing of the text string. Thus, as an open problem (update: solved! In the negative; see the discussion in the comments of Part I), I ask:

Problem 3: Exhibit a PARSE instance such that no parsing that is wholly allowed or wholly forbidden (with the possible exception of the first word in each case) is describable in the arithmetic hierarchy.


Conclusion

NP-completeness theory has transformed the way we regard finite optimization problems. NP-complete problems require hard search, but on a basic level it's 'the same' hard search in all cases.

Perhaps less widely known is that a similar phenomenon pervades higher mathematics. Of course, computer scientists know that most natural undecidable problems are so because they can be used to solve the halting problem. If they are much harder (as with PARSE), they are still likely to have a natural classification using the analytic hierarchy or some other 'mega'-complexity class.

But there is more. If a highly nonconstructive proof seems to require the axiom of choice, chances are the theorem being proved also constructively implies some version of the axiom of choice and can be classified as 'complete' for a certain logical class. This is a theme explored by 'reverse mathematics', an area I'm only beginning to learn about.

Revisiting familiar theorems with a view towards establishing their complexity-theoretic, informational, or logical content lends them new grandeur and enriches our sense of the interconnectedness of the mathematical world.
