Andy's Math/CS page

Sunday, January 28, 2007

Going Live

Coming soon: irrefutable evidence that I exist in the physical world! To anyone in the San Diego area, I am giving a talk this Wednesday:

(Update: the talk has been delivered, and the PowerPoint is here. A PDF version is here, thanks to the mysterious S...)

Property Testing-- A talk by Andy Drucker

Property testing is a recently developed algorithmic paradigm inspired by the need for fast (even constant-time!) approximate solutions to problems with very large data sets. I will introduce the field, give representative examples of testing algorithms and their analysis, and describe lower bounds for the model.

At UCSD, in 4109 EBU3b (CSE building). I will link to my PowerPoint slides when they're complete; they should be essentially self-contained.

Despite its bland label, property testing is a fascinating area, and I am looking forward to sharing some of what I've learned about it--so be there!

Addendum: Here is the modest wisdom I have gained from this experience:

i) Don't put too much information on one slide.

ii) Do a practice run--it really helps, and in this case, also made me realize point i).

iii) Don't be afraid to focus on simple results in a talk.

iv) If you are preparing a talk partly to help yourself learn a subject (as I was), you should realistically expect to learn more about public speaking and pedagogy than about the subject itself.

Thursday, January 18, 2007


Today I'd like to present an exercise about PP and related complexity classes, requiring only basic familiarity with reductions in complexity theory. The result involved has almost certainly been discovered before, and I'd appreciate any references.

But first, why do I present so many exercises and puzzles? It's not just to skimp on exposition; it's because, for me anyway, doing puzzles is a joyful activity and the best way to really learn a field. When I pose a puzzle, you have my promise that the solution is both interesting and not too complicated, unless I say otherwise. I will give a sense of the basic background you need, so e.g. I won't ask you to reinvent the probabilistic method (as Kuhn says, puzzles are within and relative to a paradigm, whatever that means).

If theoretical computer science wants to compete with other areas of math in enticing and effectively training young problem-solvers, it needs to have a better sense of what its puzzles are. I hope to say more about this soon.

Here is a computational problem, which I'll call CONTEST:

GIVEN: A boolean circuit C on n input bits, outputting a single bit;

DECIDE: Is there a z in {0, 1}^n such that at least half of the outputs {C(y): y <= z in the lexicographic ordering} are 1's?

In other words (and supposing for illustration n = 4), if the outputs {C(0000), C(0001), C(0010), ... C(1111)} give votes for candidates 0 and 1, in the order in which they were received, was candidate 1 ever ahead or tied in the running tally? (i.e., does candidate 1 make a 'contest' of it?)
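To make the CONTEST condition concrete, here is a brute-force decider (my own illustration, exponential time in n, so useful only for toy "circuits" given as Python functions):

```python
from itertools import product

def contest(circuit, n):
    """Brute-force CONTEST decider: scan the inputs in lexicographic order,
    keep a running count of 1-outputs, and report whether candidate 1 is
    ever ahead of or tied with candidate 0 in the running tally."""
    ones = 0
    for t, bits in enumerate(product([0, 1], repeat=n), start=1):
        ones += circuit(bits)
        if 2 * ones >= t:  # 1's are at least half of the t votes so far
            return True
    return False

# Toy 4-bit circuit: outputs 1 iff the input has at least three 1-bits.
# The 1-votes arrive too late and too sparsely ever to tie the tally.
maj3 = lambda bits: int(sum(bits) >= 3)
print(contest(maj3, 4))  # False

# Toy circuit reading only the first bit: the second half of the inputs
# all vote 1, so candidate 1 ties exactly at the final vote.
first_bit = lambda bits: bits[0]
print(contest(first_bit, 4))  # True
```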

Problem 1: Show that CONTEST is hard for PP, whose standard complete problem is the following:

GIVEN: A boolean circuit C(x) on n input bits, outputting a single bit;

DECIDE: Are at least half of C's outputs 1's?

Problem 2: Show that CONTEST is in the class NP(PP), whose standard complete problem is the following:

GIVEN: A boolean circuit C(x, y) on 2n input bits (|x| = |y| = n), outputting a single bit;

DECIDE: Is there an x in {0, 1}^n such that at least half of the outputs {C(x, y): |y| = n} are 1's?
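For small n, the quantifier structure of this NP(PP)-complete problem can likewise be checked by brute force (again my own exponential-time sketch, just to pin down what is being asked):

```python
from itertools import product

def np_pp(circuit, n):
    """Brute-force decider for the NP(PP)-complete problem: guess an x,
    then check whether a majority (at least half) of the 2^n settings of
    y make circuit(x, y) output 1."""
    for x in product([0, 1], repeat=n):
        ones = sum(circuit(x, y) for y in product([0, 1], repeat=n))
        if 2 * ones >= 2 ** n:
            return True
    return False

# Toy circuit C(x, y) = 1 iff x = y: each x has exactly one good y,
# which is not a majority once n >= 2.
eq = lambda x, y: int(x == y)
print(np_pp(eq, 2))  # False

# Toy circuit C(x, y) = first bit of x: any x starting with 1 makes
# every y-output a 1.
first_of_x = lambda x, y: x[0]
print(np_pp(first_of_x, 2))  # True
```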

Problem 3: CONTEST is complete for either PP or NP(PP); which one?
The proof is simple and, I think, amusing.

For those who've been bitten by the computability bug (as I have been recently--witness some previous posts), there are many related problems:

GIVEN: A polynomial-time machine M outputting 0's and 1's, viewed as a sequence.

DECIDE: a) Do 1's ever outnumber 0's in an initial segment of the sequence?

b) Do 1's outnumber 0's infinitely often?

c) Does the sequence a_n = (number of 1's up to length n ) - (number of 0's up to length n) diverge properly to positive infinity?

These decision problems are listed in increasing order of unsolvability, and their exact complexity can be pretty readily determined using the k-round game characterization of level k of the Arithmetic Hierarchy that I mentioned in the post 'Trees and Infinity II'.
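To make the three properties concrete, here is a small sketch (my own toy example) that tracks the running difference a_n for a finite prefix of such a sequence. Only property a) is witnessed by a finite initial segment; b) and c) concern the infinite tail, so a simulation can illustrate but never decide them:

```python
import math

def running_difference(bits):
    """Return the sequence a_1, a_2, ... where a_n is
    (number of 1's) - (number of 0's) among the first n bits."""
    diff, out = 0, []
    for b in bits:
        diff += 1 if b else -1
        out.append(diff)
    return out

# Toy "machine": the n-th output bit is 1 iff n is not a perfect square
# (an arbitrary polynomial-time rule, chosen so 0's thin out over time).
seq = [0 if math.isqrt(n) ** 2 == n else 1 for n in range(1, 50)]
diffs = running_difference(seq)

# a) Do 1's ever outnumber 0's in an initial segment? A finite check
#    suffices as soon as some a_n is positive.
print(any(d > 0 for d in diffs))  # True

# For b) and c): since the 0's (squares) have density zero, a_n here in
# fact diverges properly to +infinity -- but no finite prefix proves it.
```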


Saturday, January 13, 2007

The Spirit of Modern Math: from Compactness to VC Dimension

Helly's Theorem was, I think, the first geometry result I ever loved. It was my first exposure to 'combinatorial geometry', a body of results with much looser hypotheses than those of classical geometry, exploring more large-scale scenarios.
(If you can find it, definitely check out the out-of-print problem book 'Combinatorial Geometry in the Plane' by Hadwiger & Debrunner, or ask me for sample problems.)

Here is the theorem: Suppose a finite collection C of convex sets in the plane is such that any 3 of them have a nonempty intersection; then there is a point common to every set in C.

There is a simple proof by induction on the number of sets in C, which directly implies a polynomial-time algorithm for finding the point claimed to exist, given only a list of points {p[i, j, k]}, where p[i, j, k] is common to sets i, j, and k and we range over all such triples. It'd be interesting to know the exact complexity for this search problem--is there an outright formula for the common point?--but I haven't given it much thought, although the problem is definitely amenable to parallel processing.

Many more general results are available. From the general perspective, the 2-dimensional version of Helly's Theorem above states that the 'Helly number' of R^2 is 3. Helly also proved in the same fashion that the Helly number of R^n is n + 1. There is a result for infinite collections of sets, too, which I invite you to think about.
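As a sanity check on the general statement, here is the simplest case, R^1, where the Helly number is 2: if every pair in a finite family of closed intervals meets, then the largest left endpoint is at most the smallest right endpoint, and so lies in every interval. (This one-dimensional sketch is my own illustration, not the inductive proof alluded to above.)

```python
import random

def helly_1d(intervals):
    """Given closed intervals (l, r) that pairwise intersect, return a
    point common to all of them.  Pairwise intersection forces
    max-of-lefts <= min-of-rights, and any point between them works."""
    lo = max(l for l, r in intervals)
    hi = min(r for l, r in intervals)
    assert lo <= hi, "some pair of intervals is disjoint"
    return lo

# Random intervals built around a common anchor point 0.5, so they
# certainly pairwise intersect.
random.seed(0)
ivals = [(0.5 - random.random(), 0.5 + random.random()) for _ in range(10)]
p = helly_1d(ivals)
print(all(l <= p <= r for l, r in ivals))  # True
```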

Think of a convex set in terms of its membership function; Helly's Theorem says that if any 3 of a finite set of these functions are simultaneously satisfiable, then they are all simultaneously satisfiable. This might sound familiar: recall the Compactness Theorem of Propositional Logic (see my 'Trees and Infinity' post), regarding simultaneous satisfiability of finite Boolean functions; Helly's Theorem says that something even stronger holds for convex membership functions, even though they have an infinite domain.

So we find that, even though the finite-domain condition seemed to play an essential role in our proof of the Compactness Theorem, there is a powerful thread of truth running beyond that theorem's scope. It was an important step to formalize abstract properties like compactness and Helly numbers on the basis of more concrete theorems, and then to seek out zones of applicability beyond the ones theorists were originally interested in. This spirit of generalization is part of the greatness of modern mathematics, although it can certainly be taken to extremes.

Note that the Helly property of collections of functions (simultaneous satisfiability of any size-k set of functions from a collection implies simultaneous satisfiability of the whole collection) and the weaker compactness property (simultaneous satisfiability of any finite set of functions from a collection implies simultaneous satisfiability of the whole collection) are more useful than natural. They're a mouthful to formulate, and they don't necessarily correspond to what we think 'ought to happen' most of the time; but whenever we're interested in satisfying a bunch of conditions at once in a construction (which is frequent throughout math), Helly and compactness properties can prove invaluable. Furthermore, they apply surprisingly often (especially compactness). This well justifies the study, in real analysis and topology courses, of compact metric and topological spaces, even though the definitions bewilder many students.

In future posts I hope to say more about another great (and more recent) concept that also arose out of generalization and is also more useful than natural: the VC dimension. Interestingly, VC dimension (don't worry about the definition for now) can be seen as a probabilistic analogue of the Helly property. Let me explain.

Suppose you've got a finite collection C of sets, and each set S[i] is big enough (relative to some probability distribution) that a randomly drawn element lies in S[i] with probability at least p > 0. Then if we draw a large number m of elements independently, each individual S[i] is hit at least once with very high probability 1 - e; by the union bound, every set gets hit at least once with probability at least 1 - |C|*e, and if we get the numbers right this will still be 'large enough' (m = O(log(|C|)/p) will do).
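For concreteness, here is a back-of-the-envelope check of the O(log(|C|)/p) claim, with numbers of my own choosing: each S[i] is missed by m independent draws with probability (1-p)^m <= e^(-pm), so taking m = ln(|C|/delta)/p drives the union-bound failure probability below delta.

```python
import math

def draws_needed(num_sets, p, delta):
    """Number of i.i.d. draws m so that, by the union bound, all num_sets
    events of probability >= p are each hit at least once with overall
    probability >= 1 - delta:  num_sets * (1-p)^m <= delta."""
    return math.ceil(math.log(num_sets / delta) / p)

m = draws_needed(num_sets=1000, p=0.05, delta=0.01)
fail = 1000 * (1 - 0.05) ** m  # union-bound failure probability at this m
print(m, fail <= 0.01)  # 231 True
```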

The idea sketched above is the core of the conventional probabilistic method in combinatorics. In applying the union bound, it seems to rely crucially on |C| being finite. However, finiteness turns out not to be essential: a much more relaxed condition on C--that its VC dimension be finite--will gain us the same conclusion, albeit with some differences in quantitative strength.

I am grateful to Professor Sanjoy Dasgupta of UCSD for teaching a wonderful course in Learning Theory that much improved my understanding of VC dimension. He showed us a very slick, sophisticated proof of the result about VC dimension I alluded to. However, I am still looking for a truly simple proof, and would happily accept suboptimal quantitative strength in order to be able to share this amazing phenomenon with a wider and more pressed-for-time audience. If anyone has references or ideas on this, please let me know.

For those familiar with the VC concept, I invite you to show that set/function families can have a finite VC dimension without compactness, and compactness (even the Helly property) without finite VC dimension.
