Andy's Math/CS page

Sunday, October 08, 2006

In Praise of Automata

I have a confession to make: I am inordinately fond of automata theory.

Yes automata, those unfashionable creations: once-proud denizens of Complexity Castle, they have been reduced to gatekeepers--frozen, homely gargoyles to frighten off undergraduates who have strayed into Theory Woods.

Well, they may not be the 'right' model of feasibility or tomorrow's hot topic; they may be innocent of the Unique Games Conjecture, and they may not know how to count past a constant. But to all who have written them off, or--worse--taken to pumping them for sport, I say automata are computational processes too, and as such worthy of respect--even love.

So join me now in a scenic stroll through some of the historic sites of automata theory. My aim is not to introduce the subject (for that see Wiki and its refs), or even to give vital theorems or unifying ideas you might have missed, but simply to point out the opportunities for intuitive problem-solving that automata provide. You can think about these things through the lens of logic, languages, games, pure visualization, or what have you, but in my experience they reward even casual thought.

All results mentioned are offered as puzzles, since that's generally the spirit in which I approach the subject, although difficulty varies hugely and some I've never solved. Oh, and I won't be giving many citations, but unless noted these results are older than I am.

Swingin' Both Ways

What is the least we can do to augment a finite automaton, while preserving its distinctive flavor? We could let it go back and forth on the tape. For decision problems, however, this gives us nothing. (As we'll see, the robustness of the basic model is remarkable; however, unlike Turing machines, the invariance only goes so far, so tooling around is actually worthwhile. Less robustness = more research opps.)
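
To make the model concrete, here's a minimal 2-way automaton simulator (the encoding is my own ad hoc toy, not a standard one): transitions name a new state and a direction, the tape is bracketed by endmarkers, and the machine rejects by getting stuck or by looping.

    # A minimal 2-way DFA simulator (the encoding is an ad hoc toy).
    # delta maps (state, symbol) -> (new_state, direction), direction in
    # {-1, +1}.  The tape is bracketed by endmarkers '<' and '>'.

    def run_2dfa(delta, start, accept, x):
        tape = '<' + x + '>'
        state, pos = start, 0
        # A halting 2-way DFA halts within (#states)*(tape length) steps;
        # beyond that, some configuration has repeated and it loops forever.
        bound = (len({q for (q, _) in delta}) + 1) * len(tape) + 1
        for _ in range(bound):
            if state in accept:
                return True
            if (state, tape[pos]) not in delta:
                return False             # stuck: reject by halting
            state, move = delta[(state, tape[pos])]
            pos = max(0, min(len(tape) - 1, pos + move))
        return False                     # looping: reject

    # A (needlessly) 2-way check that x starts and ends with '1':
    delta = {('s', '<'): ('s', +1),      # step off the left endmarker
             ('s', '1'): ('r', +1),      # first symbol is 1: run right
             ('r', '0'): ('r', +1), ('r', '1'): ('r', +1),
             ('r', '>'): ('b', -1),      # right end: back up one square
             ('b', '1'): ('acc', +1)}    # last symbol is 1: accept
    print(run_2dfa(delta, 's', {'acc'}, '101'))  # True
    print(run_2dfa(delta, 's', {'acc'}, '10'))   # False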

There is a much more subtle result, due to Hopcroft and Ullman, that shows how thoroughly automata dynamics can be comprehended. Take two FSAs, M and M', one of which scans left to right, the other right to left. Then there is a third FSA P using 2-way motion, with a subset of its states called 'display states', such that on any input x, when P enters a display state for the jth time, it is on square j and its state tells us the states of both M and M' upon arriving there (from their opposite directions). Chew on that!
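
To see exactly what P's display states are carrying, here is that information computed by brute force (this is emphatically not the clever 2-way construction itself, just its output; the 1-way automaton encoding is mine):

    # A 1-way DFA is a pair (delta, start); delta maps (state, symbol) -> state.

    def run_prefix(delta, start, symbols):
        q, states = start, [start]       # states[j] = state after j symbols
        for c in symbols:
            q = delta[(q, c)]
            states.append(q)
        return states

    def display_info(M, M_prime, x):
        n = len(x)
        left = run_prefix(*M, x)               # M, scanning left to right
        right = run_prefix(*M_prime, x[::-1])  # M', scanning right to left
        # Arriving at square j (1-indexed), M has read x[:j-1], while M'
        # has read x[j:] in reverse.
        return [(left[j - 1], right[n - j]) for j in range(1, n + 1)]

    # Example: M tracks the parity of 1s seen; M' tracks whether a 1 was seen.
    M = ({(p, c): p ^ (c == '1') for p in (0, 1) for c in '01'}, 0)
    Mp = ({(s, c): s or c == '1' for s in (False, True) for c in '01'}, False)
    print(display_info(M, Mp, '0110'))
    # [(0, True), (0, True), (1, False), (0, False)]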

Alright... guessing/nondeterminism? As you probably know, this gains us no new languages, though it can yield exponential savings in automaton size. (This is the only truly quantitative result I'll mention... automata-land is a place to forget your troubles with big-O and big-Theta, a place where all you need to know is that infinity is larger than N for any N.)
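
Both halves of that claim fall out of the subset construction: determinize an NFA by tracking the set of states it could currently occupy. A quick sketch, with the classic blowup witness:

    # Subset construction.  delta maps (state, symbol) -> set of states;
    # missing entries mean 'no move'.

    def determinize(delta, start, accept, alphabet):
        dfa_start = frozenset([start])
        dfa_delta, dfa_accept = {}, set()
        todo, seen = [dfa_start], {dfa_start}
        while todo:
            S = todo.pop()
            if S & accept:
                dfa_accept.add(S)
            for c in alphabet:
                T = frozenset(r for q in S for r in delta.get((q, c), ()))
                dfa_delta[(S, c)] = T
                if T not in seen:
                    seen.add(T)
                    todo.append(T)
        return dfa_delta, dfa_start, dfa_accept

    # Classic blowup witness: 'the nth symbol from the end is a 1'.  An
    # (n+1)-state NFA guesses where the end is; the equivalent DFA must
    # remember the last n bits, and indeed 2^n subsets become reachable.
    n = 5
    delta = {(0, '0'): {0}, (0, '1'): {0, 1}}
    for i in range(1, n):
        delta[(i, '0')] = {i + 1}
        delta[(i, '1')] = {i + 1}
    dfa_delta, _, _ = determinize(delta, 0, {n}, '01')
    print(len({S for (S, _) in dfa_delta}))  # prints 32 = 2**5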

How about giving the finite automaton a brief chance to imitate its big brother, the Turing machine? We'll allow it some constant number k > 0 of modifications to the input string. Will anything come of it?

Try throwing all of these augmentations in at once, along with unlimited alternating nondeterminism: one by one, the bells and whistles can be stripped away.

Pebble Pushers

What happens if you give a finite automaton a pebble? It's not quite on par with the mouse/cookie scenario, but outrageous hijinks are to be expected nonetheless.

Let's qualify that. The first pebble is a bit of a letdown: a one-pebble automaton can still decide only the regular languages. With the second, things get interesting (how interesting?), and with an arbitrary number of pebbles you can do logspace computation.
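
One direction of that logspace connection is just counting: a configuration of a k-pebble, 2-way automaton is a state plus a head position plus k pebble positions, and poly(n)-many configurations fit in O(log n) bits. (The converse simulation takes actual work.) A back-of-envelope check:

    # Why pebbles top out at logspace (one direction): there are only
    # |Q| * (n+2)^(k+1) configurations (the +2 is for endmarkers), which is
    # poly(n) for fixed k, so a logspace machine can store and step through
    # configurations directly.

    import math

    def config_count(num_states, n, k):
        return num_states * (n + 2) ** (k + 1)

    for k in (0, 1, 2):
        c = config_count(10, 1000, k)
        print(k, 'pebbles:', c, 'configs, i.e.', math.ceil(math.log2(c)), 'bits')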

How about function computation with a single pebble? This was a big concern of mine for much of a summer internship. First, what is our model? Say that the automaton can announce an output symbol anytime it wants, but the symbol gets added to the end of an output string that the automaton cannot reconsult. This kind of automaton is called a 'transducer'.

A 2-way transducer can compute functions that a 1-way one cannot; for example, the reversal of the input. It's also obvious that adding a pebble helps: with the state space enlarged, we can get quadratic run-time and hence quadratic-length output. But isn't this a little cheap? That's what I thought, and with the aid of the Hopcroft-Ullman result I eventually showed that if the automaton is required to have linearly bounded output, one pebble does not help.
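
Here's that reversal trick spelled out as a sketch (the transition encoding is my own): march to the right end, then sweep back left, emitting as you go.

    # A 2-way transducer computing reversal.  A transition is
    # (state, symbol) -> (new_state, direction, output), output '' or a symbol.

    def run_transducer(delta, start, halt, x):
        tape = '<' + x + '>'
        state, pos, out = start, 0, []
        while state != halt:
            state, move, emit = delta[(state, tape[pos])]
            out.append(emit)
            pos += move
        return ''.join(out)

    delta = {}
    for c in '01':
        delta[('fwd', c)] = ('fwd', +1, '')    # march right, silent
        delta[('back', c)] = ('back', -1, c)   # sweep left, emitting
    delta[('fwd', '<')] = ('fwd', +1, '')
    delta[('fwd', '>')] = ('back', -1, '')     # turn around at the right end
    delta[('back', '<')] = ('done', 0, '')     # left endmarker: halt

    print(run_transducer(delta, 'fwd', 'done', '1101'))  # '1011'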

Logic Chopping

I later found out that the result had been known for a few years; or rather, it followed easily from some more general result. What did those crafty theorists have that I was missing? I forget the specifics, but most likely they made a perspicuous transformation of the problem. I'd like to illustrate one such paradigm for transforming automata.

Let x be a bit string, and let PAR(x) = 1 iff the number of 1s in x is odd. Then, as the brute-force check below the list verifies, PAR(x) = 1 if and only if there exists a 2-coloring of the symbols of x such that

a) the first symbol is red if equal to 1, and blue otherwise;
b) for i > 1, x_i has the same coloration as x_{i-1} iff x_i = 0;
c) the final symbol is red.
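
Since conditions (a) and (b) force the coloring uniquely (red tracks 'odd parity so far'), a valid coloring exists exactly when the parity is odd. Here is the promised brute-force check:

    # Brute-force check that the coloring characterization matches parity.
    # A coloring is a tuple of 'R'/'B', one per symbol of x.

    from itertools import product

    def valid_coloring(x, col):
        ok = (col[0] == 'R') == (x[0] == '1')              # condition (a)
        for i in range(1, len(x)):
            ok &= (col[i] == col[i - 1]) == (x[i] == '0')  # condition (b)
        return ok and col[-1] == 'R'                       # condition (c)

    def par_by_coloring(x):
        return any(valid_coloring(x, col)
                   for col in product('RB', repeat=len(x)))

    for n in range(1, 7):
        for bits in product('01', repeat=n):
            x = ''.join(bits)
            assert par_by_coloring(x) == (x.count('1') % 2 == 1)
    print('coloring characterization agrees with parity')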

Looking at this from a formal perspective, we can write the above using position comparisons, bit-readings, and logical connectives, along with existential and universal quantification, where quantifiers range over either positions or subsets of positions. This is 'monadic second-order logic' (MSO) over strings (a set is a monadic relation). Clearly the same approach applies with PAR replaced by any regular language, with a color for each state and constraints embodying the state-update rule.

In fact, the converse is also true! Namely, given an MSO formula, the set L of strings satisfying it is regular. This isn't hard to show if you start by giving an automaton using alternation to mirror the quantifiers, along with other bells and whistles for the set-guessing, and then patiently eliminate each augmentation in turn.

Combining this result with the first half of our discussion, we find that every MSO formula is equivalent to one with only constant-depth quantifier alternation--cool. One could also probably use the FSA equivalence to study how quantifier use allows MSO formulae to be more concise.

Ad Infinitum

I recently learned about some very elegant models that give new life to FSAs. Take pumping to the limit--give them infinite strings! And let the automata perform infinite computations.

To define languages of infinite strings, we need a notion of acceptance. There are at least three major models available (a small acceptance checker for all three follows the list):

-Büchi: the automaton accepts if it enters one of its 'happy' states infinitely often.

-Muller: an automaton with state space Q accepts if the set S of infinitely recurring states belongs to a designated family S_1, ..., S_k of 'good' subsets of Q (individual states are not good or bad; only sets of them are).

-Rabin: there is a collection {(P_i, N_i)} of pairs of state sets, and the automaton accepts if for some i, P_i is visited infinitely often while N_i is visited only finitely often.
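
One way to get a feel for these conditions: on an ultimately periodic ('lasso') run, the states visited infinitely often are exactly the states on the loop, so each condition reduces to a finite check. A small sketch:

    # A lasso run is a finite stem followed by a loop repeated forever; the
    # stem is irrelevant to acceptance, so only the loop is passed in.

    def buchi_accepts(happy, loop):
        return bool(set(loop) & happy)

    def muller_accepts(good_family, loop):
        return frozenset(loop) in good_family

    def rabin_accepts(pairs, loop):
        inf = set(loop)
        return any(inf & P and not inf & N for (P, N) in pairs)

    loop = ['q2', 'q3']                  # e.g. the run q0 q1 (q2 q3)^infinity
    print(buchi_accepts({'q2'}, loop))                      # True
    print(muller_accepts({frozenset({'q2', 'q3'})}, loop))  # True
    print(rabin_accepts([({'q2'}, {'q1'})], loop))          # True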

Then if I've got it right, the following statements hold:

-Nondeterministic Büchi = co-nondeterministic Büchi = deterministic Muller = deterministic Rabin;
-deterministic Büchi is strictly weaker than nondeterministic Büchi;
-If L is a nondeterministic Büchi language, then L is a finite union of sets of the form {v w_1 w_2 w_3 ... : v in V, each w_i in W} with V, W regular. (For example, 'infinitely many 1s' arises from V = {empty string}, W = 0*1.) The converse holds too.

Also, Büchi automata capture MSO with the successor relation over infinite strings. All of these facts are worth thinking about; some are not hard.

Apparently these infinite models are relevant to practical nonterminating computer systems--bank tellers, operating systems, and so on. Whatever.

The Joy of Dimensionality

Remember when you could have thoughts like, "woah... what about a 2-dimensional Turing Machine?" Well, replace TMs with FSAs and this becomes interesting.

-Can a two-dimensionally wandering FSA look at a map of a maze and determine if escape is possible?

-What if it's not a map, but the FSA is in the maze?

I worked on this unsuccessfully for a while. Apparently the first impossibility proof was 175 pages! But the basic idea is to watch an FSA's behavior and design 'traps' for it. I should mention that the traps are much, much easier to design on a torus or in 3-space.
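
The counting heart of the trap idea is easy to see on a torus: a deterministic wanderer has only |Q| x (#cells) x (#headings) configurations, so if it hasn't escaped by then, it is cycling and never will. A toy demonstration (the wanderer and maze encoding are mine, not anyone's actual construction):

    DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]    # N, E, S, W

    def wander(width, height, walls, exit_cell):
        """A 'go straight, turn right at walls' wanderer on a torus."""
        cell, heading = (0, 0), 1                 # start at origin, facing E
        seen = set()
        while (cell, heading) not in seen:        # finitely many configurations
            seen.add((cell, heading))
            if cell == exit_cell:
                return 'escaped after %d steps' % (len(seen) - 1)
            dx, dy = DIRS[heading]
            ahead = ((cell[0] + dx) % width, (cell[1] + dy) % height)
            if ahead in walls:
                heading = (heading + 1) % 4       # blocked: turn right
            else:
                cell = ahead                      # clear: step forward
        return 'trapped: a (cell, heading) pair repeated, so it loops forever'

    # One wall suffices to trap this particular wanderer away from the exit:
    print(wander(4, 4, walls={(2, 0)}, exit_cell=(3, 0)))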

-What is the least 'leg up' we can give to make maze exploration possible? Forget laying down string: Blum and Kozen showed that having just 2 pebbles does the trick. I believe the 1-pebble case is still open.

Conclusion

A single tear cracked the surface and rolled down the gargoyle's face as it remembered what it had once been, and could still become...

Feel free to chime in with your own favorite results. And seriously, automata theory seems to be in very healthy shape, despite its relative neglect in the Complexity world--though it still pops up there, as in the following result (Barrington's?):

-There exist a regular language L and a nondeterministic polynomial-time machine N(x, y) (with |y| = |x|) such that it is PSPACE-complete, given x, to determine whether the string of acceptance bits (N(x, 00...00), N(x, 00...01), ..., N(x, 11...11)) is in L.
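
To unpack the statement: for each y in lexicographic order, record one bit for whether N accepts (x, y), then ask whether the resulting 2^|x|-bit 'leaf string' lies in L. A toy version of the plumbing (the machine and language below are stand-ins of my own, certainly not the hard instances):

    from itertools import product

    def accepts(N, x, y, witness_len=3):
        # Brute-force the nondeterminism: N(x, y, w) is a poly-time predicate.
        return any(N(x, y, ''.join(w)) for w in product('01', repeat=witness_len))

    def leaf_string(N, x):
        return ''.join('1' if accepts(N, x, ''.join(y)) else '0'
                       for y in product('01', repeat=len(x)))

    def dfa_accepts(delta, start, final, s):
        q = start
        for c in s:
            q = delta[(q, c)]
        return q in final

    # Toy N: accept iff some length-3 window of x+y matches the witness and
    # has at least two 1s.  Toy L: even number of 1s.
    N_toy = lambda x, y, w: w in (x + y) and w.count('1') >= 2
    parity = ({(q, c): q ^ (c == '1') for q in (0, 1) for c in '01'}, 0, {0})
    x = '000'
    s = leaf_string(N_toy, x)
    print(s, dfa_accepts(*parity, s))    # 00010111 True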

Cheers!
