### So a Puzzle Walks into a Bar...

In the Boston area, two American values--higher education and beer--converge in a big way. Possibly as a result of this convergence, it's a good bet that on any night of the week, there's a bar where you can watch or participate in a trivia contest. (Or so friends tell me.)

These contests have two features that make them interesting from this blog's perspective. First, there's apparently a company that sends its quizmasters all over town, and presumably supplies them with fun questions. So let's assume that the questions are drawn from some fixed distribution D, unknown in advance.

Second, these are team games. So now--let's say you've got n trivia-loving friends, and you're trying to form a team of k < n of them to maximize your chances of winning. k might grow with n.

To help you choose, you can take your friends out for drinks and watch the contests for a few weeks or months before deciding on your team. You can record who out of your friends answered which questions correctly as they watched. (For simplicity, let's say that if someone answers correctly, they know they're right, and if they don't know the answer they keep their mouths shut.) But you can only afford poly(n) of these expensive outings before you decide your team and start trying to rake in gift certificates or stuffed bears or whatever.

So--what are your prospects in general for determining the best team?

(i.e., we're looking for algorithms that, for any group of friends and any D, perform well with high probability.)

If one has only been exposed to computational problems in which all the necessary information is presented as a single input, a problem like this can be disconcerting. There appears to be both a computational and a statistical puzzle to solve, simultaneously. (If the computational aspect doesn't seem challenging, keep in mind: your friends' various knowledge-bases may be similar, disjoint, or anywhere crazily in between...)

My challenge to readers is to think about how this problem fits into algorithms/complexity theory. Not necessarily to solve it--I've found it touches on published research, and I will give references in a future post--but to relate it to what you've seen before. I'm hoping this might be a good warm-up to thinking about ideas from computational learning theory which I also have been wanting to exposit.

I should note that even slight tweaks on this problem could lead to questions I haven't thought about, but which I encourage others to (e.g., what if player ability is affected by drinking?).

Labels: complexity, probability, puzzles

## 9 Comments:

If it were up to me, I'd use a weighted-mixture-of-experts algorithm with boosting, where I give a partial score to the members on the team for each question they get right, in order to compensate for the uncertainty. Then, I'd assign new teams round-robin based on the weighted mixture.

You would probably need O(sqrt(#players)) rounds.

By John, at 3:49 PM

Hi John,

interesting... and if it were up to you, what kind of optimality/approximation guarantees would you achieve after this many rounds?

(and what, if any, assumptions would you need to make about the distribution and your friends to derive it?)

By andy, at 3:58 PM

Well, the idea here is that you are exposing each player to players they've never seen before, if it matters. Of course, the number of groups is important here too, but if they are the same, that should be enough to split apart players who are masking each other... I could be wrong about this; it's just off the top of my head.

By John, at 6:05 PM

I think we have somewhat different modelling assumptions at the moment... in my model, during the training period, all of your friends can try to answer every question, and you record who got which question right, so there is no need to form and reform teams, unless it's as a conceptual aid.

Once the training period is over (after poly(n) samples from D), you've got to choose a team once and for all.

By the way, I haven't tried to model a strategic view of the teams you are up against. In practice this might matter, but my notion was just to choose a team with maximal expected group score in a contest (which amounts to maximal expected score per round, since D is fixed).

By andy, at 6:47 PM

Well, naively: you have a concept class of size < n^k (each set of k friends induces a possibly different classifier over the space of quiz questions). Standard uniform convergence bounds tell you that you only need to observe them for O((1/epsilon^2)(k*log(n) + log(1/delta))) rounds to be able to estimate the true error rate of each potential team to within +- epsilon (except with probability at most delta). So that's not much -- you will have beer money to spare.
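
Concretely, the bound follows from Hoeffding's inequality plus a union bound over the at most n^k candidate teams; a back-of-envelope sketch (the constants and the function name are illustrative, not from any particular source):

```python
import math

def sample_bound(n, k, eps, delta):
    """Rounds needed so that, with probability at least 1 - delta, every
    one of the <= n^k teams' empirical win rates is within eps of its
    true rate.  Hoeffding + union bound: m >= ln(2 * n^k / delta) / (2 * eps^2)."""
    log_classes = k * math.log(n)  # ln(n^k), the union-bound term
    return math.ceil((log_classes + math.log(2.0 / delta)) / (2 * eps ** 2))

# e.g. 50 friends, teams of 4, accuracy 0.05, failure probability 0.01
print(sample_bound(50, 4, 0.05, 0.01))  # -> 4190 rounds
```

So the number of outings needed is only logarithmic in the number of candidate teams, i.e. polynomial in n even when k grows.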

Now that doesn't answer the algorithmic question, unless k is constant (in that case, you can just keep track of every team's error rate and pick the lowest one). I imagine that the problem of learning a concept class defined as a weighted-majority vote over a concept class C for which we have a learner has been explored, but I haven't read about it.
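
For constant k, "keep track of every team" is just brute force over the (n choose k) subsets; a minimal sketch, where answered[i] is assumed to be the set of question indices friend i got right during training (in this model a team answers a question iff some member knew it, so maximizing coverage is the same as minimizing the team's empirical error rate):

```python
from itertools import combinations

def best_team(answered, k):
    """answered: list of sets; answered[i] = indices of observed questions
    friend i answered correctly.  Returns the k-subset of friends covering
    the most observed questions (ties broken by enumeration order)."""
    return max(combinations(range(len(answered)), k),
               key=lambda team: len(set().union(*(answered[i] for i in team))))

# toy data: friend 0 and friend 2 overlap heavily; friend 1 knows the rest
answered = [{0, 1, 2}, {3, 4}, {1, 2, 3}]
print(best_team(answered, 2))  # -> (0, 1): together they cover all 5 questions
```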

By Aaron, at 1:57 PM

Thanks, Aaron, that was the answer I was fishing for (which is not to dismiss John's ideas at all)... this problem is dominated by the computational task, not the sampling task. The computational problem is called Maximum Coverage, and though NP-hard, is approximable within a constant factor, see Problem 2.18 in Vazirani's book "Approximation Algorithms".
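
For reference, the standard approximation is the greedy algorithm, which achieves a (1 - 1/e) fraction of the optimal coverage; a sketch in the same notation as above (answered[i] is the set of question indices friend i got right, a convention of mine rather than anything from the references):

```python
def greedy_max_coverage(answered, k):
    """Greedy Maximum Coverage: repeatedly add the friend who covers the
    most not-yet-covered questions.  Guarantees a (1 - 1/e)-fraction of
    the coverage of the best k-person team."""
    chosen, covered = [], set()
    candidates = list(range(len(answered)))
    for _ in range(k):
        best = max(candidates, key=lambda i: len(answered[i] - covered))
        chosen.append(best)
        covered |= answered[best]
        candidates.remove(best)
    return chosen, covered

answered = [{0, 1, 2}, {3, 4}, {1, 2, 3}]
team, covered = greedy_max_coverage(answered, 2)
print(team, len(covered))  # [0, 1] covers all 5 questions
```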

There's another thing one could show, namely that our task is *at least as hard* as Maximum Coverage. This can be shown by a reduction that exactly specifies D and your friends according to some Max-Coverage instance; I'll leave it as an exercise to anyone interested.

By andy, at 2:05 PM

The problem becomes easier if you are not limited to k-person teams (but everyone else is). Following up on John's idea, a recent best-experts algorithm (http://www.cis.upenn.edu/~mkearns/papers/bestavg.pdf) not only guarantees no-regret with respect to the best friend, but also simultaneously with respect to the best mixture of friends (including mixtures of size k).

In a model in which your friends sometimes speak up even when they are wrong, it's not obvious that allowing your group to be larger than k should make the problem easier, since it's possible that all but k of your friends are drunken idiots, and allowing them all to be on your team would be a sure-fire strategy for losing.

By Aaron, at 2:19 PM

Thanks, Aaron...

The modeling questions are starting to proliferate.

-Speaking of 'regret' seems to indicate an online model where you revise the team or its decision-making process as the trivia contest proceeds. This is a cool setting, but no longer one in which the problem 'essentially' reduces to a Max-Coverage-like NP-hard problem (backgrounding the statistical aspect), as far as I can see.

(btw, I would love to know about any good references or, even better, problem sets for online algorithmics, to develop some basic 'chops' in the area.)

-If we can have wrong guesses, how do we adjudicate between guesses made by our team members? Is it assumed unweighted majority vote is used, or do we model differential and evolving authority levels?

By Andy D, at 7:00 PM

I've been working on a closely related problem, where you can select a different team for each contest and want to do well on average:

http://reports-archive.adm.cs.cmu.edu/anon/2007/CMU-CS-07-171.pdf

Anyway, if you can't alter the team once the observation rounds are over and the questions are drawn from a fixed distribution, then a natural strategy is to treat all previous contests as one huge contest, run your favorite max-k-coverage approximation algorithm on it, and return the solution.

By Daniel, at 1:49 AM
