Helly's Theorem was, I think, the first geometry result I ever loved. It was my first exposure to 'combinatorial geometry', a body of results with much looser hypotheses than classical geometry, exploring larger-scale scenarios.
(If you can find it, definitely check out the out-of-print problem book 'Combinatorial Geometry in the Plane' by Hadwiger & Debrunner, or ask me for sample problems.)
Here is the theorem:
Suppose a finite collection C
of convex sets in the plane is such that any 3 of them have a nonempty intersection; then there is a point common to every set in C.
There is a simple proof by induction on the number of sets in
C, which directly implies a polynomial-time algorithm for finding the point claimed to exist, given only a list of points
{p[i, j, k]}, where
p[i, j, k] is common to sets
i, j, and
k and we range over all such triples. It'd be interesting to know the exact complexity of this search problem--is there an outright formula for the common point?--but I haven't given it much thought, although the problem is definitely amenable to parallel processing.
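To make the flavor of the theorem concrete, here is a minimal sketch of the 1-dimensional analogue (my own illustration, not the planar algorithm above): the Helly number of the real line is 2, so closed intervals that intersect pairwise share a common point--and in this case there really is an outright formula for one.

```python
# 1-D analogue of Helly's Theorem (an illustrative sketch, not the planar
# algorithm discussed above): the Helly number of R is 2, so if every pair
# of closed intervals [l, r] overlaps, all of them share a common point.
# Here the "outright formula" is explicit: max of the left endpoints.

def pairwise_intersect(intervals):
    """Check that every pair of closed intervals [l, r] overlaps."""
    return all(
        max(l1, l2) <= min(r1, r2)
        for i, (l1, r1) in enumerate(intervals)
        for (l2, r2) in intervals[i + 1:]
    )

def common_point(intervals):
    """If all pairs overlap, max(lefts) <= min(rights), and max(lefts)
    lies in every interval."""
    left = max(l for l, _ in intervals)
    right = min(r for _, r in intervals)
    assert left <= right, "requires pairwise intersection"
    return left

intervals = [(0, 5), (2, 7), (4, 9), (3, 6)]
assert pairwise_intersect(intervals)
print(common_point(intervals))  # 4, which lies in every interval
```

In the plane one needs triples rather than pairs, and the common point no longer has such a closed form, but the same "local intersections imply a global intersection" shape is already visible here.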
Many more general results are available. From this more abstract perspective, the 2-dimensional version of Helly's Theorem above states that the 'Helly number' of
R^2 is 3. Helly also proved in the same fashion that the Helly number of
R^n is
n + 1. There is a result for infinite collections of sets, too, which I invite you to think about.
Think of a convex set in terms of its membership function; Helly's Theorem says that if any 3 of a finite set of these functions are simultaneously satisfiable, then they are all simultaneously satisfiable. This might sound familiar: recall the Compactness Theorem of Propositional Logic (see my 'Trees and Infinity' post), regarding simultaneous satisfiability of finite Boolean functions; Helly's Theorem says that something even stronger holds for convex membership functions, even though they have an infinite domain.
So we find that, even though the finite-domain condition seemed to play an essential role in our proof of the Compactness Theorem, there is a powerful thread of truth running beyond that theorem's scope. It was an important step to formalize abstract properties like compactness and Helly numbers on the basis of more concrete theorems, and then to seek out zones of applicability beyond the ones theorists were originally interested in. This spirit of generalization is part of the greatness of modern mathematics, although it can certainly be taken to extremes.
Note that the Helly property of collections of functions (simultaneous satisfiability of any size-
k subset of functions from a collection implies simultaneous satisfiability of the whole collection) and the weaker compactness property (simultaneous satisfiability of any finite subset of functions from a collection implies simultaneous satisfiability of the whole collection) are more useful than natural. They're a mouthful to formulate, and they don't necessarily correspond to what we think 'ought to happen' most of the time, but whenever we're interested in satisfying a bunch of conditions at once in a construction (which happens frequently throughout math), Helly and compactness properties can prove invaluable. Furthermore, they apply surprisingly often (especially compactness). This well justifies the study, in real analysis and topology courses, of compact metric and topological spaces, even though the definitions bewilder many students.
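A classical cautionary example (my own addition, not from the discussion above) shows why these properties cannot be taken for granted: the open intervals S[n] = (0, 1/n) satisfy every finite subfamily simultaneously, yet no single point lies in all of them, so the compactness property fails once the sets are not closed.

```python
# A classical example of the compactness property FAILING (an assumed
# illustration): the open intervals S_n = (0, 1/n). Every finite
# subfamily has a common point, but the whole family does not.
from fractions import Fraction

def finite_intersection_nonempty(ns):
    """Any finite subfamily {S_n : n in ns} contains 1/(2*max(ns))."""
    witness = Fraction(1, 2 * max(ns))
    return all(0 < witness < Fraction(1, n) for n in ns)

# Finite subfamilies always intersect...
assert finite_intersection_nonempty([1, 2, 5, 100])

# ...but any candidate x > 0 is excluded by some S_n with 1/n < x:
x = Fraction(1, 1000)
assert not x < Fraction(1, 2000)  # x lies outside S_2000
```

The convex sets in Helly's Theorem escape this trap, which is part of what makes the theorem surprising.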
In future posts I hope to say more about another great (and more recent) concept that also arose out of generalization and is also more useful than natural: the VC dimension. Interestingly, VC dimension (don't worry about the definition for now) can be seen as a probabilistic analogue of the Helly property. Let me explain.
Suppose you've got a finite collection
C of sets, and each set
S[i] is big enough (relative to some probability distribution) that a randomly drawn element lies in
S[i] with probability
p > 0. Then if we draw a large number m of elements, each individual
S[i] is hit at least once with a very high probability
1 - e; then, using the union bound,
every set gets hit at least once with probability at least
1 - |C|*e, and if we get the numbers right this will still be 'large enough' (
m = O(log(|C|)/p) will do).
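The arithmetic of "getting the numbers right" can be sketched in a few lines. The parameters below (|C| = 1000 sets, p = 0.05, target failure probability 0.01) are made up for illustration; the point is just that m grows only logarithmically in |C|.

```python
import math

# A numeric sketch of the union-bound argument above, with assumed
# parameters: |C| sets, each of probability mass at least p. A fixed set
# is missed by m i.i.d. draws with probability (1-p)^m <= exp(-p*m);
# the union bound multiplies this by |C|.

def draws_needed(num_sets, p, delta):
    """Smallest m with num_sets * exp(-p*m) <= delta, i.e. m = O(log|C|/p)."""
    return math.ceil(math.log(num_sets / delta) / p)

num_sets, p, delta = 1000, 0.05, 0.01
m = draws_needed(num_sets, p, delta)
miss_bound = num_sets * (1 - p) ** m  # union bound on missing some set
print(m, miss_bound)  # m = 231 draws suffice; miss_bound < delta
```

Doubling the number of sets only adds about log(2)/p more draws, which is what makes the union bound so forgiving for finite families.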
The idea sketched above is the core of the conventional probabilistic method in combinatorics. In applying the union bound, it seems to rely crucially on
|C| being finite. However, this is not the case. A much more relaxed condition on
C--that its VC dimension be finite--will also yield the same conclusion, albeit with some differences in quantitative strength.
I am grateful to Professor Sanjoy Dasgupta of UCSD for teaching a wonderful course in Learning Theory that much improved my understanding of VC dimension. He showed us a very slick, sophisticated proof of the result about VC dimension I alluded to. However, I am still looking for a truly simple proof, and would happily accept suboptimal quantitative strength in order to be able to share this amazing phenomenon with a wider and more pressed-for-time audience. If anyone has references or ideas on this, please let me know.
For those familiar with the VC concept, I invite you to show that set/function families can have a finite VC dimension without compactness, and compactness (even the Helly property) without finite VC dimension.
Labels: general math, geometry