### Fast but Useless

My thesis is about hard functions that are useless. But there's a natural structural condition that makes a function both useful and hard: growing faster than the Busy Beaver function. Such a function f can act as a timer for solving the halting problem: to check whether a given n-state machine halts, just run it for f(n) steps. Is this the end of the story?

A while ago Scott Aaronson mentioned the following question to me while discussing my thesis and a post of his: is there a function asymptotically dominating all recursive functions that *doesn't* let you solve the halting problem? He said he'd asked Carl Jockusch, who gave a 'high-powered' proof. I found an easy proof of something stronger:

Theorem: Let RAND be the Kolmogorov-random strings; then there's a function f dominating all recursive functions such that RAND is 'immune' (one can't compute an infinite subset of it) even given oracle access to f.

(HALT is Turing reducible to RAND--even truth-table reducible, which is bizarre--so it follows that HALT is not Turing reducible to f. Actually, the theorem remains true if you replace RAND with any immune set.)
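For reference, the standard definitions in play (written with plain Kolmogorov complexity C; the prefix-free variant changes constants, not the argument):

```latex
\mathrm{RAND} = \{\, x \in \{0,1\}^{*} : C(x) \ge |x| \,\},
\qquad
A \text{ immune} \iff A \text{ is infinite and no infinite r.e. } W \subseteq A.
```

The theorem relativizes the second notion: no machine with oracle access to f accepts an infinite subset of RAND.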

Proof: Let B[1](x), B[2](x), ... be a (nonconstructive) enumeration of all total recursive bounds. At each stage we set finitely many values of f to screw up the ith oracle machine M[i]'s attempt to decide an infinite subset of RAND. At stage i we restrict ourselves to setting new values f(x) > B[j](x) for all j <= i; this ensures the growth condition.
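To see why the stage-i restriction already yields domination, here's a toy sketch. The real enumeration of all total recursive bounds is nonconstructive, so the three sample bounds below are purely hypothetical stand-ins, and the toy sets f at every point rather than at finitely many points per stage; but the bookkeeping is the same: once B[j] enters the active list, every later value of f strictly exceeds it.

```python
# Toy illustration of the growth bookkeeping in the construction.
# Three sample bounds stand in for the (nonconstructive) enumeration
# B[1], B[2], ... of all total recursive bounds.

B = [lambda x: x + 5,          # stand-in for B[1]
     lambda x: 2 * x,          # stand-in for B[2]
     lambda x: x * x + 1]      # stand-in for B[3]

def build_f(n_points):
    """Assign f on 0..n_points-1, advancing one 'stage' per point:
    at point x we act as if at stage i = min(x+1, len(B)) and respect
    the constraint f(x) > B[j](x) for every active j <= i."""
    f = {}
    for x in range(n_points):
        i = min(x + 1, len(B))
        f[x] = max(b(x) for b in B[:i]) + 1   # strictly beat every active bound
    return f

f = build_f(50)

# Past the point where B[j] becomes active, f(x) > B[j](x) everywhere,
# so f dominates each B[j] (here: from x >= j onward, 0-indexed).
for j, b in enumerate(B):
    assert all(f[x] > b(x) for x in range(j, 50))
```

The point is that each bound only ever constrains finitely many *already-set* values from below, so domination with finitely many exceptions is automatic.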

Case I: There's a finite partial completion of f, subject to the constraints/commitments so far, causing M[i] to accept some non-random string x.

Then of course, fix these f-values, along with a big value f(i) to ensure convergence of our partial f to a total function.

Case II: No finite partial completion can induce M[i] to accept a non-random string.

Then M[i] is not a problem: no matter how we set the function hereafter, M[i] can only accept a finite subset of RAND. Otherwise, we could hard-code the f-values committed so far, along with programs for B[1](x), B[2](x), ..., B[i](x), into an oracle-free machine M' which, on input x, dovetails over all admissible partial completions of the oracle looking for one that makes M[i] accept x. Every string M' accepts is random (by the Case II assumption), and M' accepts everything M[i] accepts under the actual f--so M' would accept an infinite RAND subset with no oracle at all, impossible.

So in Case II just fix a big value for f(i) to help convergence. Continue over all i to get the desired f.
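The dovetailing search that M' performs in Case II can be sketched generically. Everything named below (toy_M, the completions enumerator, the step budgets) is an illustrative stand-in--the real construction ranges over genuine oracle Turing machines--but the (candidate, step-budget) scheduling is the actual dovetailing pattern:

```python
from itertools import count

def dovetail_accepts(M, completions, x, max_rounds=None):
    """Schematic dovetail: in round s, run M on input x against each of the
    first s candidate partial oracles, allowing s steps. Accept x as soon as
    any (candidate, budget) pair makes M accept. With max_rounds=None this
    searches forever on non-accepted inputs, which is exactly the r.e.
    (not recursive) behavior of M' in the proof."""
    rounds = count(1) if max_rounds is None else range(1, max_rounds + 1)
    for s in rounds:
        for k in range(s):
            if M(x, completions(k), step_budget=s) is True:
                return True
    return False   # only reachable when max_rounds is finite

# Toy stand-ins (hypothetical, purely to make the sketch runnable):
# the k-th "admissible completion" sets the oracle's value at 0 to k,
# and toy_M "accepts" x when that value matches x and it has been
# given at least 3 steps to look.
def toy_M(x, oracle, step_budget):
    if step_budget < 3:
        return None            # ran out of budget this round
    return oracle.get(0) == x

completions = lambda k: {0: k}

print(dovetail_accepts(toy_M, completions, 5, max_rounds=10))   # True
print(dovetail_accepts(toy_M, completions, 99, max_rounds=10))  # False
```

The round structure guarantees every (completion, budget) pair is eventually tried, so no single non-halting simulation can block the search.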

I've also explored a dual question (are there non-r.e.-hard infinite subsets of RAND?), with only partial success:

i) There is no algorithm that computes HALT when given oracle access to any infinite subset of RAND;

ii) There are infinite subsets of the 1/2-random strings (not compressible by a factor of two) that aren't r.e.-hard.

Happy to get questions or comments. Lance Fortnow and others have interesting results about the computational power of 'dense enough' subsets of 1/2-RAND.

Labels: computability