Andy's Math/CS page

Friday, May 23, 2008

A Must-See Site

It's called Theorem of the Day. I just found it, so I can't judge how accurate the frequency claim is, but Robin Whitty has stacked up an impressive collection of short, illustrated introductions to major theorems. I like his taste... two that were news to me and that I especially liked in my first sampling: Nevanlinna's Five-Value Theorem and The Analyst's Traveling Salesman Problem.

Sign Robin's guestbook, spread the word, and recommend a theorem of CS theory! (I see at least 3 already, if you count unsolvability of Diophantine equations.)


Thursday, May 15, 2008

Random bits

This morning I was delighted to learn that Alan Baker, the Swarthmore professor whose Philosophy of Science class I took senior year, recently won the US Shogi championships--read his account here. Congrats, Prof. Baker!
I never learned Shogi, although Go--another Asian board game--was a big part of my youth and probably a decisive factor in my eventual interest in math/CS.

In other news, I am coauthor on a paper, 'The Power of Unentanglement', recently released on ECCC (and headed for CCC '08), jointly with Scott Aaronson, Salman Beigi, Bill Fefferman, and Peter Shor. It's about multiple-prover, single-round, quantum interactive proofs, and I encourage those whose cup of tea that is to take a look.

This is my first appearance in a conference paper, but not quite my first time in print--it happened once before, by accident. As a freshman in college, I asked my Linear Algebra professor, Steven Maurer, a question on an online class discussion board. Here it is, in the breathless and overwrought prose of my teenage years:

"This question has been haunting me, and I know I shouldn't expect definite answers. But how do mathematicians know when a theory is more or less done? Is it when they've reached a systematic classification theorem or a computational method for the objects they were looking for? Do they typically begin with ambitions as to the capabilities they'd like to achieve? I suppose there's nuanced interaction here, for instance, in seeking theoretical comprehension of vector spaces we find that these spaces can be characterized by possibly finite 'basis' sets. Does this lead us to want to construct algorithmically these new ensembles whose existence we weren't aware of to begin with? Or, pessimistically, do the results just start petering out, either because the 'interesting' ones are exhausted or because as we push out into theorem-space it becomes too wild and wooly to reward our efforts? Are there more compelling things to discover about vector spaces in general, or do we need to start scrutinizing specific vector spaces for neat quirks--or introduce additional structure into our axioms (or definitions): dot products, angles, magnitudes, etc.?

Also, how strong or detailed is the typical mathematician's sense of the openness or settledness of the various theories? And is there an alternative hypothesis I'm missing? "


Steve gave a thoughtful reply, years passed, and then a similar topic came up in conversation between him and the mathematician and popular math writer Philip J. Davis. The conversation sparked an essay by Davis in which he quoted Maurer and me (with permission), the essay recently became part of a philosophy-of-math book ('Mathematics and Common Sense: A Case of Creative Tension'), and I got mailed a free copy--sweet! Once again, I recommend the book to anyone who enjoys that kind of thing. The essay is online.


Saturday, May 10, 2008

Free on Friday?

You should come to Johnny D's in Davis Square and hear No Static, a 9-or-10-piece Steely Dan tribute band (standard preemptive clarification: Steely Dan is the band name, not a person).

No Static plays regularly in the area--I saw them in the fall and they were excellent, channeling the Dan's recordings with amazing care and fidelity.

Like most great artists, Steely Dan affords a unique perspective on the world, one best imbibed by listening to multiple albums in their weird entirety. This is exactly No Static's approach, so if you don't know Steely Dan, or have only heard a few catchy radio numbers, here's your chance to get hooked. Hope to see you there!

Thursday, May 01, 2008

Complexity Calisthenics (Part I)

Today I want to describe and recommend a paper I quite enjoyed: The Computational Complexity of Universal Hashing by Mansour, Nisan, and Tiwari (henceforth MNT). I think that this paper, while not overly demanding technically, is likely to stretch readers' brains in several interesting directions at once.

As a motivation, consider the following question, which has vexed some of us since the third grade: why is multiplication so hard?

The algorithm we learn in school takes time quadratic in the bit-length of the input numbers. This is far from optimal, and inspired work over the years has brought the running time ever closer to the conjecturally optimal n log n; Martin Furer published a breakthrough in STOC 2007, and there may have been improvements since then. But compare this with addition, which can easily be performed with linear time and logarithmic space (simultaneously). Could we hope for anything similar? (I don't believe that any of the fast multiplication algorithms run in sublinear space, although I could be mistaken. Here is Wiki's summary of existing algorithms for arithmetic.)
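To make the contrast concrete, here is a minimal Python sketch (my own toy code, not any of the fast algorithms mentioned above) of grade-school arithmetic on bit lists: addition uses a linear number of bit operations, while the schoolbook product performs up to n shifted additions, hence a quadratic number of bit operations.

```python
# Toy bit-level arithmetic, for illustration only.  Numbers are little-endian
# lists of bits; all function names here are mine.

def school_add(a_bits, b_bits):
    """Ripple-carry addition: O(n) bit operations."""
    n = max(len(a_bits), len(b_bits))
    a_bits = a_bits + [0] * (n - len(a_bits))
    b_bits = b_bits + [0] * (n - len(b_bits))
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s & 1)
        carry = s >> 1
    out.append(carry)
    return out

def school_multiply(a_bits, b_bits):
    """Grade-school multiplication: up to n shifted additions, O(n^2) bit operations."""
    result = [0]
    for i, bit in enumerate(b_bits):
        if bit:
            result = school_add(result, [0] * i + a_bits)  # shift by i, then add
    return result

def to_bits(x):
    """Little-endian bit list of a nonnegative integer."""
    return [int(c) for c in reversed(bin(x)[2:])] if x else [0]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

assert from_bits(school_multiply(to_bits(1234), to_bits(5678))) == 1234 * 5678
```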

As MNT observe, the question is made especially interesting when we observe that multiplication could just as easily be achieved in linear time/logspace... if we were willing to accept a different representation for our integer inputs! Namely, if we're given the prime factorizations of the inputs, we simply add corresponding exponents to determine the product. There are two hitches, though: first, we'd have to accept the promise that the prime factorizations given really do involve primes (it's not at all clear that we'd have the time/space to check this, even with the recent advances in primality testing); second, and more germane to our discussion, addition just got much harder!
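Here is a hedged sketch of that factored representation (the dictionary encoding and function name are my own): multiplication really is just coordinate-wise addition of exponents, nothing in the code checks that the keys are prime, and addition in this representation is nowhere to be seen.

```python
from collections import Counter

def multiply_factored(x_factors, y_factors):
    """Multiply two integers given as {prime: exponent} dicts (primality is taken on faith)."""
    product = Counter(x_factors)
    product.update(y_factors)  # add exponents of matching primes
    return dict(product)

# 12 = 2^2 * 3 and 45 = 3^2 * 5, so 12 * 45 = 2^2 * 3^3 * 5 = 540
assert multiply_factored({2: 2, 3: 1}, {3: 2, 5: 1}) == {2: 2, 3: 3, 5: 1}
```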

The situation is similar over finite fields of prime order (Z_p): in the standard representation, addition is easy and multiplication is less so, while if we represent nonzero elements as powers of a fixed primitive root, the reverse becomes true (a toy sketch of this swap appears just after the two 'stretches' below). This suggests a woeful but intriguing possibility: perhaps no matter how we represent numbers, one of the two operations must be computationally complex, even though we have latitude to 'trade off' between + and *. So we are invited to consider

Mental Stretch #1: Can we prove 'representation-independent' complexity lower bounds?

Stretch #2: Can we prove such lower bounds in the form of tradeoffs between two component problems, as seems necessary here?
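As promised, here is the toy sketch of the Z_p representation swap (p = 11 and the primitive root g = 2 are chosen by hand for illustration): with nonzero elements written as powers of g, multiplication is a single addition of exponents mod p-1, while addition has no obvious shortcut and falls back on the standard representation.

```python
p, g = 11, 2                                         # small prime, primitive root mod 11
power_of = {pow(g, k, p): k for k in range(p - 1)}   # discrete-log table: element -> exponent

def multiply_in_log_rep(k1, k2):
    """Multiply g^k1 * g^k2: just add exponents mod p-1."""
    return (k1 + k2) % (p - 1)

def add_in_log_rep(k1, k2):
    """Add g^k1 + g^k2: no shortcut here; convert back to standard form (fails if the sum is 0)."""
    return power_of[(pow(g, k1, p) + pow(g, k2, p)) % p]

# 3 = 2^8 and 4 = 2^2 mod 11; their product is 12 = 1 = 2^0 mod 11
assert multiply_in_log_rep(power_of[3], power_of[4]) == power_of[1]
```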

In the setting of finite-field arithmetic, MNT answer 'yes' to both problems. The lower bounds they give, however, are not expressed simply in terms of time usage or space usage, but instead as the product of these two measures. Thus we have

Stretch #3: Prove 'time-space tradeoffs' for computational problems.

To be clear, all three of these 'stretches' had been made in various forms in work prior to MNT; I'm just using their paper as a good example.

The combination of these three elements certainly makes the task seem daunting. But MNT have convinced me, and I hope to suggest to you, that with the right ideas it's not so hard. As the paper title indicates, their point of departure is Universal Hashing (UH)--an algorithmic technique in which finite fields have already proved useful. Use upper bounds to prove lower bounds. We can call this another Stretch, or call it wisdom of the ages, but it deserves to be stated.

So what is UH? Fix a domain D and a range R. A (finite) family H of functions from D to R is called a Universal Hash Family (UHF) if the following holds:

For every pair of distinct elements d, d' in D,

and for every pair of (not necessarily distinct) elements r, r' in R,

if we pick a function h at random from H,

Prob[h(d) = r, h(d') = r' ] = 1/|R|^2.

In other words, the randomly chosen h behaves just like a truly random function from D to R when we restrict attention to any two domain elements. (In typical applications we hope to save time and randomness, since H may be much smaller than the space of all functions.)
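For toy-sized D, R, and H, the definition can be checked by brute force. The sketch below (function name and structure are my own) counts, for each pair of distinct domain elements and each pair of targets, how many h in H hit both, and compares against the required |H|/|R|^2.

```python
from itertools import combinations, product

def is_universal_hash_family(H, D, R):
    """H: list of functions D -> R.  True iff every pair of distinct domain elements
    is mapped to every target pair by exactly |H| / |R|^2 members of H."""
    if len(H) % (len(R) ** 2) != 0:
        return False
    expected = len(H) // (len(R) ** 2)
    for d, d_prime in combinations(D, 2):
        for r, r_prime in product(R, repeat=2):
            hits = sum(1 for h in H if h(d) == r and h(d_prime) == r_prime)
            if hits != expected:
                return False
    return True
```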


Here is what MNT do: they prove that implementing any UHF is subject to a complexity lower bound in the form of a time-space product. (To 'implement' a UHF H is to compute the function f_H(h, x) = h(x).)

This is in fact the main message of the paper, but they obtain our desired application to + and * as a corollary, by citing the well-known fact that, fixing a prime field Z_p = D = R,

H = {h_{a, b}(x) = a*x + b mod p}

is a UHF, where a, b range over all field elements. (Left as an easy exercise.)
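For what it's worth, the exercise can at least be spot-checked with the brute-force test above over a small prime (p = 7 is an arbitrary choice); implementing the family, in the paper's sense, just means computing f_H((a, b), x) = a*x + b mod p.

```python
p = 7
Z_p = list(range(p))

# One closure per pair (a, b); together they implement f_H((a, b), x) = a*x + b mod p.
affine_family = [(lambda x, a=a, b=b: (a * x + b) % p) for a in Z_p for b in Z_p]

assert is_universal_hash_family(affine_family, Z_p, Z_p)
```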

Note the gain from this perspective: implementing a UHF is a 'representation-invariant' property of a function, so Stretch 1 becomes possible. Moreover, Stretch 2 now makes more sense: it is only jointly that + and * define a UHF, so a lower bound on any complexity measure M for UHFs implies only a tradeoff between + and *.

It remains only to sketch the proof of time-space tradeoffs for UHFs, which in fact is a manageable argument along classic lines (the basic form of the argument is attributed to earlier work by Borodin and Cook). The upshot for us will be that for any program computing (any representation of) f(a, b, x) = a*x + b over an n-digit prime modulus, if T denotes worst-case time usage and S worst-case space, T*S = Omega(n^2). Very nice! (Although what this actually implies about the larger of the individual time-space products of + and * under this representation is not clear to me at the moment.)

Let's adjourn for today... wouldn't want to pull a muscle.
