Andy's Math/CS page

Sunday, October 29, 2006

WLOG

WLOG is mathematics-speak for 'without loss of generality', which means 'restricting ourselves to considering a special case, which is not really a restriction'. For example, is there a drive from Berkeley to Las Vegas that's less than 500 miles? WLOG we can look for one that doesn't involve visiting any one spot twice.

The above example illustrates two features of WLOG claims:

i) They are frequently commonsensical observations;

ii) They can eliminate a great deal of dross very easily, making efficient algorithms possible.

Using dynamic programming to solve construction problems is a prime example. Here we have construction tasks where the 'worth' of a partial solution can be completely summarized with a few well-chosen parameters; if two competing partial solutions have the same parameters, we can WLOG throw one of them away.

So, in our example, our idealized travel problem has length and stopping point as the sole relevant parameters for partial trips: if we think the shortest trip to Vegas might take us through Reno, and we've found two equally short paths to Reno, WLOG we can flip a coin and store only the winning path as we try to extend it from Reno to Vegas. Of course, we also need to consider paths that don't pass through Reno (which, as it turns out, is way too far north).
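
To make the bookkeeping concrete, here is a small Python sketch (the road network, the mileages, and the function name are made up for illustration): partial trips are kept in a table keyed by stopping point, and whenever two trips end at the same city only the shorter one survives--that's the WLOG step.

    import heapq

    def shortest_trip(roads, start, goal):
        """roads: dict mapping city -> list of (neighbor, miles).
        WLOG keeps only one best partial trip per stopping point."""
        best = {start: 0}                  # stopping point -> shortest known mileage
        frontier = [(0, start)]
        while frontier:
            miles, city = heapq.heappop(frontier)
            if miles > best.get(city, float('inf')):
                continue                   # a better trip to this city already survived
            if city == goal:
                return miles
            for nxt, d in roads[city]:
                if miles + d < best.get(nxt, float('inf')):
                    best[nxt] = miles + d  # discard the worse partial trip, WLOG
                    heapq.heappush(frontier, (miles + d, nxt))
        return None

    # hypothetical mileages: is there a sub-500-mile route?
    roads = {'Berkeley': [('Sacramento', 80), ('Bakersfield', 280)],
             'Sacramento': [('Reno', 130)],
             'Bakersfield': [('Las Vegas', 285)],
             'Reno': [('Las Vegas', 445)],
             'Las Vegas': []}
    dist = shortest_trip(roads, 'Berkeley', 'Las Vegas')
    print(dist is not None and dist < 500)   # False with these made-up numbers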

Note that, while the no-looping-paths restriction has a definite preference whenever it discards a candidate (keep the loop-free trip), this second, dynamic-programming restriction has a symmetric character--indifferent to which candidate it keeps--and is motivated purely by the desire to shrink the search space and save computation time. Both types of restriction are common.

As a challenge, try to apply domain-specific WLOG-reasoning to give a solution (not necessarily efficient) to the following geometry problem:

Circle Packing: Given a set of circles of rational radii r_1, r_2, ... r_n > 0, and the dimensions of a rectangle R, can the circles be packed into R without overlap or dissection?

Probably an old problem, and close to NP-complete problems if not NP-complete itself (?), but I cribbed it from this book which, although insubstantial, I recommend skimming for puzzles.


Saturday, October 28, 2006

Mathematics of the Worst-Case Scenario

Optimization is an obvious and major theme of applied mathematics. What may be less obvious to the public is the extent to which mathematics relies on a dual process, which one might call 'pessimization'.

Suppose we want to partially relate two functions, f and g, with a statement of the form

"f(x) >= M implies g(x) >= N". So, f and g are positively correlated, and we want to quantify this relationship.

How do we get the strongest possible relationship? Fixing M, we choose x to minimize g(x) subject to the constraint f(x) >= M. This value of g(x) is the tightest lower bound N we can get, and yields the most informative statement.
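
Over a finite domain, 'pessimization' is nothing but a constrained minimization. A minimal sketch in Python (the function name and the toy choices of f and g are mine):

    def tightest_bound(domain, f, g, M):
        """The strongest N for which "f(x) >= M implies g(x) >= N" holds:
        pessimize g over the set where f is at least M."""
        feasible = [g(x) for x in domain if f(x) >= M]
        return min(feasible) if feasible else None   # vacuously true if nothing is feasible

    # toy example: f counts the set bits of an 8-bit integer, g is the integer itself
    f = lambda x: bin(x).count('1')
    g = lambda x: x
    print(tightest_bound(range(256), f, g, M=7))     # -> 127, the pessimal x being 01111111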

Why is this 'pessimization'? We look for statements of the above form when we are interested in ensuring that g(x) is high (g is some measure of 'goodness', and f is some simpler measure that we can more easily estimate and which is heuristically related to g); but, to achieve this we have to spend much of the time thinking perversely about how low g could wriggle subject to the constraint that f is high.

This kind of thinking is ubiquitous. The subject is too general (really only a 'theme') to have truly distinctive methods or theoretical constructs (in particular, there is no fundamental difference between optimization and pessimization--the latter is just optimization from the devil's perspective), but where it treats discrete, finite objects and collections it is sometimes called 'extremal combinatorics'; I recommend Jukna's book by that name.

Sometimes the heuristic relationship between our two functions has a clear logical grounding. For instance, let f(G) measure the number of edges in a graph, and let h(G) measure the number of triangles; adding an edge to a graph can never decrease the number of triangles. However, f(G) > f(G') does not imply that h(G) >= h(G'); G' may be more 'pessimally' organized than G with respect to triangles. Turan's Theorem, giving exact conditions on f to guarantee that h > 0, is considered the founding result of extremal combinatorics. Can you prove such a theorem? What do triangle-pessimal graphs on m vertices look like?
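
If you would rather experiment before proving anything, here is a deliberately dumb brute-force sketch (my own encoding, exponential in n and only usable for n up to 6 or so) that finds the largest triangle-free edge count on a handful of vertices:

    from itertools import combinations

    def max_triangle_free_edges(n):
        """Brute force: the most edges an n-vertex graph can have with zero triangles."""
        pairs = list(combinations(range(n), 2))
        best = 0
        for mask in range(1 << len(pairs)):
            edges = {pairs[i] for i in range(len(pairs)) if mask >> i & 1}
            has_triangle = any((a, b) in edges and (a, c) in edges and (b, c) in edges
                               for a, b, c in combinations(range(n), 3))
            if not has_triangle:
                best = max(best, len(edges))
        return best

    for n in range(2, 7):
        print(n, max_triangle_free_edges(n))   # stare at the numbers and guess the pattern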

The preceding example is really a special case. Having a triangle and having at least a certain number of edges are both 'monotone properties' in the sense that they are monotone functions of the 0/1 adjacency matrix of a graph. A simple result of Kleitman, provable by induction, says that if A and B are two such properties and we uniformly generate a graph on n labeled vertices, Prob[A & B hold] >= Prob[A holds]*Prob[B holds].
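
Kleitman's inequality is easy to check exhaustively on tiny graphs; here is a sketch (the two monotone properties at the bottom are my own arbitrary picks):

    from itertools import combinations

    def check_kleitman(n, prop_a, prop_b):
        """Exhaustively verify Prob[A & B] >= Prob[A] * Prob[B] under the uniform
        distribution on graphs with n labeled vertices."""
        pairs = list(combinations(range(n), 2))
        total = count_a = count_b = count_ab = 0
        for mask in range(1 << len(pairs)):
            edges = {pairs[i] for i in range(len(pairs)) if mask >> i & 1}
            a, b = prop_a(edges, n), prop_b(edges, n)
            total += 1
            count_a += a
            count_b += b
            count_ab += a and b
        return count_ab * total >= count_a * count_b   # cross-multiplied to stay in integers

    # two monotone properties: "contains a triangle" and "has at least 4 edges"
    has_triangle = lambda e, n: any({(a, b), (a, c), (b, c)} <= e
                                    for a, b, c in combinations(range(n), 3))
    many_edges = lambda e, n: len(e) >= 4
    print(check_kleitman(4, has_triangle, many_edges))   # -> True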

So, e.g., being informed that our graph has diameter at least 10 cannot make it less likely to be planar, as should be intuitive ("diam >= 10" and planarity are antimonotone properties, so we apply the result to their negations and then rephrase).

A pleasant feature of investigations in extremal combinatorics is that they can be jump-started with simple questions; the exotic structure emerges naturally in exploring pessimal objects. Or, sometimes, the pessimal structures end up being very simple and familiar, but meeting them in the context of the investigation sheds new light on them. For example, we may heuristically observe that collections of 'many' distinct numbers contain 'many' distinct pairwise differences; even though the pessimal structure that pops out when we pursue this relationship is nothing more than n points in arithmetic progression, it's still a nice problem (which becomes more interesting when raised to 2 or more dimensions).
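
To play with the differences question, a few lines suffice (the candidate sets below are my own picks):

    from itertools import combinations

    def distinct_differences(nums):
        """Number of distinct (positive) pairwise differences in a set of numbers."""
        return len({abs(a - b) for a, b in combinations(nums, 2)})

    arithmetic = list(range(0, 50, 5))            # ten numbers in arithmetic progression
    geometric = [2 ** i for i in range(10)]       # ten numbers growing geometrically
    print(distinct_differences(arithmetic), distinct_differences(geometric))   # 9 vs. 45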

Some quite simple results of extremal combinatorics have important uses in elementary probability and in complexity theory. For example, take a square 0/1 matrix with 'many' ones; we'd like to conclude that it has many rows with many ones each; if we're willing to settle for either many 1-heavy rows or many 1-heavy columns, we can do much better (since pessimal examples against the 'many-rows' conclusion promote the 'many-columns' conclusion) when we quantify our various 'manys'. If we'd also settle for even a single 1-heavy row or column, we can do still better.

Such results have some immediate interpretations in terms of conditional probabilities. They also fuel a kind of robust reduction from any function f(x) on n bits to the function g(x, y) = (f(x), f(y)) (with the two matrix dimensions in our analysis corresponding to the two inputs to g). This is at the heart of a certain approach to hardness amplification, including a proof of Yao's XOR Lemma, given in this paper (see also my earlier post on the XOR Lemma). In fact, Jukna's book is rife with complexity apps, and was written with computer scientists in mind.

As I've said, no 'deep' unifying theory of extremal combinatorics exists, and I feel more inclined to share problems than to exposit. But this time, instead of belaboring my idiosyncratic preferences, let me point you to a great site with more problems than you'll ever solve: Kalva, a clearing-house of old Olympiads with many solutions. Many sterling combinatorics puzzles lie amidst the usual geometry and number theory. Olympiads are hard, but with this many problems to peruse you WILL find ones you can solve--and that feels good.


Sunday, October 08, 2006

In Praise of Automata

I have a confession to make: I am inordinately fond of automata theory.

Yes automata, those unfashionable creations: once-proud denizens of Complexity Castle, they have been reduced to gatekeepers--frozen, homely gargoyles to frighten off undergraduates who have strayed into Theory Woods.

Well, they may not be the 'right' model of feasibility or tomorrow's hot topic; they may be innocent of the Unique Games Conjecture, and they may not know how to count past a constant. But to all who have written them off, or--worse--taken to pumping them for sport, I say automata are computational processes too, and as such worthy of respect--even love.

So join me now in a scenic stroll through some of the historic sites of automata theory. My aim is not to introduce the subject (for that see Wiki and its refs), or even to give vital theorems or unifying ideas you might have missed, but simply to point out the opportunities for intuitive problem-solving that automata provide. You can think about these things through the lens of logic, languages, games, pure visualization, or what have you, but in my experience they reward even casual thought.

All results mentioned are offered as puzzles, since that's generally the spirit in which I approach the subject, although difficulty varies hugely and some I've never solved. Oh, and I won't be giving many citations, but unless noted these results are older than I am.

Swingin' Both Ways

What is the least we can do to augment a finite automaton, while preserving its distinctive flavor? We could let it go back and forth on the tape. For decision problems, however, this gives us nothing. (As we'll see, the robustness of the basic model is remarkable; however, unlike Turing machines, the invariance only goes so far, so tooling around is actually worthwhile. Less robustness = more research opps.)

There is a much more subtle result, due to Hopcroft and Ullman, that shows how thoroughly automata dynamics can be comprehended. Take two FSAs, M and M', one of which scans left to right, the other right to left. Then there is a third FSA P using 2-way motion, with a subset of its states called 'display states', such that on any input x, when P enters a display state for the jth time, it is on square j and its state tells us the states of both M and M' upon arriving there (from their opposite directions). Chew on that!

Alright... guessing/nondeterminism? As you probably know, this gains us no new languages, though it can yield exponential savings in automaton size. (This is the only truly quantitative result I'll mention... automata-land is a place to forget your troubles with big-O and theta, a place where all you need to know is that infinity is larger than N for any N.)
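
As a reminder of where that exponential lives, here is a sketch of the standard subset construction (my own encoding of automata as transition dictionaries), together with the classic 'third symbol from the end is a 1' language, whose minimal DFA is exponentially larger than its little NFA:

    def nfa_to_dfa(alphabet, delta, start, accepting):
        """Subset construction.  delta: (state, symbol) -> set of states.
        Returns the reachable DFA states (frozensets of NFA states), the DFA transitions,
        and the accepting DFA states.  Nondeterminism buys succinctness, not power."""
        start_set = frozenset([start])
        dfa_delta, seen, todo = {}, {start_set}, [start_set]
        while todo:
            cur = todo.pop()
            for a in alphabet:
                nxt = frozenset(q for s in cur for q in delta.get((s, a), set()))
                dfa_delta[(cur, a)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        dfa_accepting = {S for S in seen if S & accepting}
        return seen, dfa_delta, dfa_accepting

    # 4-state NFA for "the 3rd symbol from the end is a 1"
    delta = {(0, '0'): {0}, (0, '1'): {0, 1}, (1, '0'): {2}, (1, '1'): {2},
             (2, '0'): {3}, (2, '1'): {3}}
    dfa_states, _, _ = nfa_to_dfa('01', delta, 0, {3})
    print(len(dfa_states))   # 8 = 2^3 reachable subsets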

How about giving the finite automaton a brief chance to imitate its big brother the Turing machine? We'll let it make a constant number k > 0 of modifications to the input string. Will anything come of it?

Try throwing all of these augmentations in at once, along with unlimited alternating nondeterminism: one by one, the bells and whistles can be stripped away.

Pebble Pushers

What happens if you give a finite automaton a pebble? It's not quite on par with the mouse/cookie scenario, but outrageous hijinks are to be expected nonetheless.

Let's qualify that. The first pebble is a bit of a letdown; one-pebble automata can still only decide the regular languages. With the second things get interesting (how interesting?), and with an arbitrary number of pebbles you can do logspace computation.

How about function computation with a single pebble? This was a big concern of mine for much of a summer internship. First, what is our model? Say that the automaton can announce an output symbol anytime it wants, but the symbol gets added to the end of an output string that the automaton cannot reconsult. This kind of automaton is called a 'transducer'.

A 2-way transducer can compute functions that a 1-way one cannot; for example, the reversal of the input. Additionally, it's obvious that adding a pebble helps us: with our state space enlarged, we can get quadratic run-time, and hence quadratic output. But isn't this a little bit cheap? That's what I thought, and with the aid of the Hopcroft-Ullman result I eventually showed that, if the automaton is required to have linearly bounded output, 1 pebble does not help.
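
To pin down the transducer model, here is a toy simulator plus a two-state 2-way machine that outputs the reversal (the endmarker convention and the encoding are choices of mine):

    def run_2way_transducer(delta, start, input_str):
        """delta: (state, symbol) -> (new_state, move, emit), where move is -1, 0, or +1
        and emit is a (possibly empty) string appended to a write-only output tape.
        The input sits between endmarkers '^' and '$'; the machine halts on state None."""
        tape = '^' + input_str + '$'
        state, pos, out = start, 0, []
        while state is not None:
            state, move, emit = delta[(state, tape[pos])]
            out.append(emit)
            pos += move
        return ''.join(out)

    # reversal: sweep right to the '$' endmarker, then walk back left, emitting as you go
    def reverse_delta(alphabet='ab'):
        d = {('R', '^'): ('R', +1, ''), ('R', '$'): ('L', -1, ''), ('L', '^'): (None, 0, '')}
        for c in alphabet:
            d[('R', c)] = ('R', +1, '')   # phase R: find the right end
            d[('L', c)] = ('L', -1, c)    # phase L: emit while walking back
        return d

    print(run_2way_transducer(reverse_delta(), 'R', 'abbaab'))   # -> 'baabba'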

Logic Chopping

I later found out that the result had been known for a few years; or rather, it followed easily from some more general result. What did those crafty theorists have that I was missing? I forget the specifics, but most likely they made a perspicuous transformation of the problem. I'd like to illustrate one such paradigm for transforming automata.

Let x be a bitvector, and let PAR(x) = 1 iff the number of 1s in x is odd. Then PAR(x) = 1 if and only if there exists a 2-coloring of the symbols of x such that

a) the first symbol is red if equal to 1, and blue otherwise;
b) for i > 1, x_i has the same coloration as x_{i-1} iff x_i = 0;
c) the final symbol is red.
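
As a sanity check on this characterization, here is a short sketch (helper name mine) that builds the coloring forced by rules a) and b), then tests rule c):

    def parity_coloring_accepts(x):
        """True exactly when the forced 2-coloring of x ends in red,
        i.e. when x (a string of 0s and 1s) has an odd number of 1s."""
        if not x:
            return False
        colors = ['red' if x[0] == '1' else 'blue']                    # rule a)
        for i in range(1, len(x)):
            same = (x[i] == '0')                                       # rule b)
            prev = colors[-1]
            colors.append(prev if same else ('blue' if prev == 'red' else 'red'))
        return colors[-1] == 'red'                                     # rule c)

    # agrees with PAR on every 6-bit string
    print(all(parity_coloring_accepts(format(v, '06b')) ==
              (format(v, '06b').count('1') % 2 == 1) for v in range(64)))   # -> True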

Looking at this from a formal perspective, we can write it using position comparisons, bit-readings, and logical connectives, along with existential and universal quantification, where quantifiers are over either positions or subsets of the positions. This is 'monadic second-order logic (MSO)' over strings (a set is a monadic relation). Clearly the same approach can be applied with PAR replaced with any regular language, with colors for each state and constraints embodying the state-update rule.

In fact, the converse is also true! Namely, given an MSO formula, the set L of strings satisfying it is regular. This isn't hard to show if you start by giving an automaton using alternation to mirror the quantifiers, along with other bells and whistles for the set-guessing, and then patiently eliminate each augmentation in turn.

Combining this result with the first half of our discussion, we find that every MSO formula is equivalent to one with only constant-depth quantifier alternation--cool. Could also probably use the FSA equivalence to study how quantifier use can allow MSO formulae to be more concise.

Ad Infinitum

I recently learned about some very elegant models that give new life to FSA. Take pumping to the limit--give them infinite strings! And let automata perform infinite computations.

To define languages of infinite strings, we need a notion of acceptance. There are at least three major models available:

-Buchi: automaton accepts if it enters one of its 'happy' states infinitely often.

-Muller: an automaton with state-space Q accepts if the set S of infinitely recurring states is among a family S_1, ... S_k of 'good' subsets of Q (states themselves are not good or bad).

-Rabin: there's a collection {(P_i, N_i)} of pairs of state sets, and the automaton accepts if, for some i, some state of P_i is visited i.o. while every state of N_i is visited only finitely often.

Then if I've got it right, the following statements hold:

-Nondeterministic Buchi = conondeterministic Buchi = deterministic Muller = deterministic Rabin;
-deterministic Buchi is weaker than nondeterministic;
-If L is a nondeterministic Buchi language, there exist finitely many pairs of regular languages (V_1, W_1), ..., (V_k, W_k) such that an infinite string x is in L iff, for some i, x = v w_1 w_2 w_3 ... (where v is in V_i and each w_j is in W_i). The converse holds too.

Also, Buchi automata capture MSO with the successor relation over infinite strings. All of these facts are worth thinking about; some are not hard.
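
To make the Buchi condition concrete, here is a sketch that decides acceptance for a deterministic Buchi automaton on an 'ultimately periodic' input v w w w ... (the input encoding, and the restriction to deterministic machines, are simplifications of my own):

    def run(delta, q, s):
        """States entered, in order, while deterministically reading string s from state q."""
        seq = []
        for c in s:
            q = delta[(q, c)]
            seq.append(q)
        return seq

    def buchi_accepts(delta, start, happy, v, w):
        """Accepts the infinite word v w w w ... iff some happy state recurs forever.
        delta: (state, symbol) -> state; w must be nonempty."""
        state = run(delta, start, v)[-1] if v else start
        block_starts, blocks = [], []
        while state not in block_starts:      # the per-block start states must eventually cycle
            block_starts.append(state)
            blocks.append(run(delta, state, w))
            state = blocks[-1][-1]
        first = block_starts.index(state)     # blocks[first:] repeat forever
        recurring = set().union(*blocks[first:])
        return bool(recurring & happy)

    # toy automaton over {a, b}: the happy state 1 is entered exactly when an 'a' is read
    delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
    print(buchi_accepts(delta, 0, {1}, 'bb', 'ab'))   # True: infinitely many a's
    print(buchi_accepts(delta, 0, {1}, 'ab', 'b'))    # False: only finitely many a's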

Apparently these infinite models are relevant to practical nonterminating computer systems--bank tellers, operating systems, and so on. Whatever.

The Joy of Dimensionality

Remember when you could have thoughts like, "woah... what about a 2-dimensional Turing Machine?" Well, replace TMs with FSAs and this becomes interesting.

-Can a two-dimensionally wandering FSA look at a map of a maze and determine if escape is possible?

-What if it's not a map, but the FSA is in the maze?

I worked on this unsuccessfully for a while. Apparently the first impossibility proof was 175 pages! But the basic idea is to watch an FSA's behavior and design 'traps' for it. I should mention that the traps are much, much easier to design on a torus or in 3-space.

-What is the least 'leg up' we can give to make maze exploration possible? Forget laying down string: Blum and Kozen showed that having just 2 pebbles does the trick. I believe the 1-pebble case is still open.

Conclusion

A single tear cracked the surface and rolled down the gargoyle's face as it remembered what it had once been, and could still become...

Feel free to chime in with your own favorite results. And seriously, automata theory seems to be in very healthy shape, despite its relative neglect in the Complexity world, where it still pops up, as in the following result (Barrington's?):

-There exists a regular language L and a nondeterministic polynomial-time machine N(x, y) (with |x| = |y|) such that it is PSPACE-complete, given x, to determine whether the string
(N(x, 00...00), N(x, 00...01), ..., N(x, 11...11)) is in L.

Cheers!

Sunday, October 01, 2006

Squaresville

Given a function from some domain to itself, we can form its 'square' f^2(x) = f(f(x)); third, fourth powers likewise.

How about the inverse problem, 'fractional powers'? What conditions on f allow us to express our function as f(x) = g(g(x)) for some g?

Puzzle: in the case where the domain is a set of nodes and f is presented as a directed graph with outdegree 1 (each u points to f(u)), try to classify the problem of recognizing squares as in P or NP-complete.

(for a quick intro to P and NP, see the wiki page).
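
For the finite version, a brute-force recognizer is easy to write and hopelessly slow, which is part of the point; the list encoding of the functional graph and the two tiny examples are my own:

    from itertools import product

    def is_square(f):
        """f: a function on {0, ..., n-1}, given as a list (f[u] is where u points).
        Brute force over all candidate g: is there a g with g(g(x)) = f(x) for every x?"""
        n = len(f)
        return any(all(g[g[x]] == f[x] for x in range(n))
                   for g in product(range(n), repeat=n))

    print(is_square([2, 3, 0, 1]))   # True:  this is the square of the 4-cycle x -> x+1 mod 4
    print(is_square([1, 2, 3, 0]))   # False: no map on 4 points squares to a 4-cycle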

And what if the domain is the real or complex numbers? Here we're courting higher math, but Taylor expansions offer some guidance. There's a thought-provoking if hand-wavy discussion of the real case here; for more authoritative references (which I haven't read) this page looks promising.

Complex numbers have nice properties; do fractional iterates exist more frequently in this case? I haven't thought much about this, but am reminded of a nifty result from a college course: if f(z) is an entire function other than a translation z + c, then either f(z) has a fixed point OR h(z) = f(f(z)) has one. The proof is short and sweet once you assume the deep result of Picard, that any nonconstant entire function has image equal to the whole complex plane minus at most one point.
