The Impossible
Since this page is now linked to by at least one other site, I should say a few words by way of introduction to a general audience. What motivates the topics I choose for study and exposition? Let me give a loose criterion, which accounts for a good deal but not all of what excites me.
I'm interested in impossible tasks. I want to know where and how and with what frequency they arise, and I want to know what makes them impossible. To a lesser extent I want to know what we can do about them; but I worry that an exclusive concern with the possible makes people hubristic. We need to acknowledge some essential bounds on the range of possible human activity (mental and material) if we are to explore life within those bounds effectively.
What bounds exist obviously depends on who we are, and I am not too interested in entering this debate. Instead, I want robust mathematical tools for taking models of agents and models of situations and determining what is impossible for such agents in such situations.
A good impossibility result should paint a general picture in which we can recognize ourselves and many of our experiences (the match doesn't have to be perfect; the result should be at least potentially 'about people', but not narrowly so). It should model agents as having a significant range of strategic resources, so that the result isn't a triviality and applies in many cases. It should also be suggestive beyond the confines of its mathematical specifics.
Let me give five example results, which I believe illustrate several rich strands of research into related but distinctive types of impossibility. I think each is simple enough to be treated as a puzzle by those with mathematical background, yet important and interesting enough to reward even casual thought or unsuccessful attempts at solution. All are well-known.
I. Information-Theoretic Impossibility
This type of impossibility result assumes that information is distributed far and wide and has to be aggregated to solve a task, but that the amount of communication is limited, or the communication lines are imperfect, or both. See my post 'Information Theory and CS' for more details.
i) Two parties, Alice and Bob, each hold a private n-bit string. They want to determine if their strings are identical. Prove that any (deterministic) protocol to solve this problem needs n bits of communication in the worst case.
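For contrast, the matching upper bound is the trivial protocol: Alice sends her entire string and Bob announces the answer. A minimal Python sketch (the function names are my own, just for illustration):

```python
def alice_message(x):
    # Alice sends her whole n-bit string: n bits of communication.
    return x

def bob_decide(y, msg):
    # Bob compares and announces the (one-bit) answer.
    return msg == y

# Example with n = 4:
print(bob_decide('1010', alice_message('1010')))  # True
print(bob_decide('1010', alice_message('1011')))  # False
```

The lower bound in (i) says that, up to the single answer bit, no deterministic protocol can do better than this.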
ii) Two allied generals are hiding with their armies on opposite sides of a valley containing an enemy camp. The generals know that a surprise attack will only be successful if both armies have at least 100 soldiers and they both attack at exactly dawn on the same day. Neither knows the state of the other's army, so they must communicate, but to do this their messengers must cross the valley and risk being killed (this doesn't affect the enemy's readiness to defend, however).
Formalize this task in such a way that it is provably impossible (i.e. any protocol will yield failed attacks, and/or no attack when an attack is feasible), even if 'enough' messengers 'eventually' make it across in each direction.
The basic idea behind the proofs of information-theoretic impossibility results is usually to isolate two 'inequivalent' system states treated as 'equivalent' by any particular protocol due to the protocol's perceptual limitations. This mistake invalidates the protocol.
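For the Two Generals task (ii), the indistinguishability idea can be made concrete for one restricted family of protocols, in which each general attacks after receiving some fixed number k of messages. This is only a sketch of the adversary's move against that toy family, not the full inductive proof:

```python
def run(k, delivered):
    """Both generals follow the (hypothetical) rule 'attack once you
    have received k messages'.  Messages alternate A->B, B->A, ...;
    'delivered' is how many actually make it across the valley."""
    a_received = delivered // 2        # replies that got back to A
    b_received = (delivered + 1) // 2  # messages that reached B
    return a_received >= k, b_received >= k

# For every threshold k, the adversary can drop messengers so that the
# generals' views differ by exactly one message -- and coordination fails:
for k in range(1, 6):
    a_attacks, b_attacks = run(k, 2 * k - 1)
    assert a_attacks != b_attacks  # one general attacks alone
```

The two runs with `2*k - 1` and `2*k` deliveries are 'inequivalent' (coordination is at stake) but look identical to the general whose last messenger was killed.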
II. Game-Theoretic Impossibility
This kind of result assumes multiple agents with distinct and conflicting interests, and describes the way this conflict can preclude socially desirable outcomes.
This type of research is contentious: in general there is a lack of consensus over both how to predict games' outcomes and how to define what is 'socially desirable'. But given any model of these parameters, the basic proof idea is often the following: "Say folks are 'cooperating', yielding a socially desirable state; then there exists an opportunity for some selfish agent to shift his behavior and reap benefits, disrupting the pattern of cooperation." So instability is a key concept.
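The deviation argument is visible already in the one-shot Prisoner's Dilemma, here with the standard textbook payoff numbers:

```python
# Row player's payoff (C = cooperate, D = defect):
payoff = {('C', 'C'): 3, ('C', 'D'): 0,
          ('D', 'C'): 5, ('D', 'D'): 1}

# Mutual cooperation maximizes total welfare (3 + 3 = 6)...
assert payoff[('C', 'C')] + payoff[('C', 'C')] == 6
# ...but a unilateral shift to D always pays, whatever the other does:
assert payoff[('D', 'C')] > payoff[('C', 'C')]
assert payoff[('D', 'D')] > payoff[('C', 'D')]
# So only (D, D) is stable, though both players prefer (C, C):
assert payoff[('D', 'D')] < payoff[('C', 'C')]
```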
So on the one hand, there are results describing how a particular societal arrangement turns out poorly due to conflicts of interest. But there are also higher-level problems of designing institutions that minimize the costs of conflict. It's the latter that interests me more, and where interesting impossibility results (like Arrow's Theorem) seem to lie.

I hesitate to offer the Iterated Prisoner's Dilemma as a 'puzzle' (what's the solution?), and I am unsure whether Arrow's Theorem and the Chaos Theorem of voting theory should really be described as game-theoretic in nature. To be honest, my study of this type of result has been comparatively dilettantish, and I'd appreciate any suggestions as to unifying themes and representative puzzles. But for the time being I'll suggest in sketch form the Median Voter Theorem (itself a form of the Prisoner's Dilemma):
iii) Two competing ice-cream vendors are deciding where to park their carts along the beach. Customers dislike walking on the hot sand and will go to the closest vendor. Why can't we expect a socially optimal outcome?
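One way to build intuition for (iii) is to simulate best-response dynamics on a discretized beach. This is just an illustrative sketch; the grid, the starting positions, and the tie-breaking rule are my own choices:

```python
def best_response(opponent, positions):
    # Pick the spot maximizing your share of customers, who are
    # spread uniformly along the beach [0, 1] and walk to the
    # nearest cart (ties split evenly).
    def share(mine, theirs):
        if mine == theirs:
            return 0.5
        lo, hi = sorted([mine, theirs])
        mid = (lo + hi) / 2
        return mid if mine == lo else 1 - mid
    return max(positions, key=lambda p: share(p, opponent))

positions = [i / 10 for i in range(11)]  # beach grid: 0.0, 0.1, ..., 1.0
a, b = 0.1, 0.9
for _ in range(20):
    a = best_response(b, positions)
    b = best_response(a, positions)
print(a, b)  # both carts end up at 0.5
```

Both vendors converge to the middle of the beach, though total walking distance would be minimized with carts at 1/4 and 3/4: individually rational moves undermine the socially optimal arrangement.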
Game-theoretic research has had the least impact on computer science of the four categories I'm discussing, but its foothold in CS is growing. Any comments on the significance/desirability of this development? I don't yet have a strong opinion.
III. Intractability-based Impossibility
This is (in my view) by far the most interesting and mysterious form of impossibility: we are given all the information we need to solve a mathematically well-defined problem, but we can't do it, because--why? Because we're not smart enough? What does 'smart' mean, and how smart does 'smart' get? I was led into wondering about intractability by my youthful experience with the board game Go.
This type of impossibility is the object of study in 'computational complexity'. This theory aims to determine the intrinsic requirements of any algorithms solving particular computational problems. For example, in the 'traveling salesman problem' we are given n cities and distances between them, and asked to produce the minimum-length tour of all cities. We would like an algorithm that reliably produces the answer in n^2 steps, but we strongly suspect such an algorithm does not exist--smart doesn't get that smart. Still, our ignorance in these matters is drastic. See the archives of Lance Fortnow's website for a good online tutorial.
We don't have good tools to diagnose high but finite problem complexity. On the other hand, we're now pretty good at diagnosing *infinite* problem complexity; the following result of Turing is foundational. (Note: I hesitate to pose this as a puzzle; although simple in retrospect, and essentially the first proof I was able to really appreciate in this life, it was one of Hilbert's famous problems, open for decades!)
iv) Prove that there is no computer program P (pick your favorite standard programming model, with the idealization of unbounded memory space and runtime) that, given a description of another program Q, reliably determines whether Q ever halts on the input '0'.
As a hint, don't try to understand the internal dynamics of the program P that we've supposed exists. Only consider its input-output behavior, and use it as a subroutine (in a larger program P' that P cannot possibly analyze correctly).
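The construction can be mimicked in Python, treating function objects as program descriptions and a hypothetical `halts` oracle as the supposed program P. Both are stand-ins; no such oracle can exist, which is the point:

```python
def paradox(halts):
    """Given a supposed analyzer halts(q) -- claimed to return True
    iff program q halts -- build the program P' that defeats it."""
    def p_prime():
        if halts(p_prime):   # ask the oracle about ourselves...
            while True:      # ...and do the opposite: loop forever
                pass
        return               # or halt immediately
    return p_prime
    # Whatever halts() answers about p_prime, it is wrong: if it says
    # 'halts', p_prime loops; if it says 'loops', p_prime halts.

# Demo with a (wrong) oracle that always answers 'loops':
p = paradox(lambda q: False)
p()  # p_prime halts immediately, refuting that oracle's answer
```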
The 'black-box methodology' used in this proof is paradigmatic in much research on intractability. It underlines that, like it or not, our main tools for understanding intractable problems are the same constructive algorithmic techniques that we use to solve tractable problems. The only difference is that we work with imaginary components (like P above) and the goal of our algorithms is to show their own inconsistency.
IV. Cryptographic Impossibility
Modern cryptography considers such complex scenarios that no one schema is likely to fit all of them. Nor can I summarize the sources of impossibility here. But the basic idea is often that agents want to distribute certain types of information without 'leaking' anything more to eavesdroppers, and without revealing to each other more than they must.
Cryptography uses intractability to hide sensitive information (witness public-key cryptosystems like RSA). But it is also valuable to see how much of cryptography is possible without the use of intractability, and hopefully this approach reveals things about the 'rules of information' that are not elucidated by the more fully cooperative settings of type-I results. The following puzzle is in this vein.
v) Alice and Bob have unlimited computational resources. Each wants to know if they both have a sensitive medical condition so they can commiserate (each knows only his/her own status). They want a protocol that will allow them both to determine this, without leaking anything more. E.g. if only Bob has the condition, Alice shouldn't find this out (although Bob will necessarily learn Alice's negative status). Can this be done? What if the protocol is randomized, with small error probabilities and info leakages acceptable--how good can we get it?
(The label for this type of cryptographic task is 'secure multiparty computation'.)
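To pin down what 'leaking nothing more' means in (v), it helps to tabulate what each party unavoidably learns from the output alone; that is the ideal any protocol is measured against. A small sketch, with f the AND of the two status bits:

```python
from itertools import product

def f(a, b):
    # f(a, b) = a AND b: 'do we both have the condition?'
    return a & b

# What Alice can infer from (her own bit, the output) alone:
for a, b in product([0, 1], repeat=2):
    out = f(a, b)
    if a == 1:
        # The output equals b: an Alice who has the condition
        # unavoidably learns Bob's status.
        assert out == b
    else:
        # The output is 0 regardless of b: an Alice who doesn't
        # have the condition should learn nothing about Bob.
        assert out == 0
```

The puzzle asks whether any protocol can match this ideal, revealing exactly the output and nothing beyond it.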
Plenty of research remains in each of these four areas.
Or, one can jump on the bandwagon of 'Internet-centric' computer science, and deal with situations like e-commerce in which all of these issues are simultaneously present. I wonder, in such cases, is it still coherent or worthwhile to 'decompose' the impossible in a situation into these different factors? And are there important impossibility factors I'm leaving out, or unfairly conflating?
Comments welcome.
Labels: complexity, crypto, puzzles