Andy's Math/CS page<br />Sporadic notes on mathematical and non-mathematical topics, from a student of computational complexity.<br />Andy D<br /><br /><b>Making academic contacts (some thoughts for new researchers)</b><br />Sat, 18 Oct 2014<br /><br />Suppose you're an undergraduate hoping to go into academic research, or a beginning grad student. It could be very helpful to have academic contacts at other schools---such as professors, but also maybe postdocs and grad students. (I'm concerned here with starting scientific discussions and/or collaborations, not making contacts for graduate admissions per se. Admissions decisions will be made based on your application. But a successful collaboration and resulting publication, or letter of support, is one of the best things you can hope to add to your applications.)<br /><br />Say you've never written a paper, and may not yet have the time or readiness to solve major problems on your own; but you want to have real scientific conversations and perhaps even collaboration with a busy researcher you admire. What could you possibly talk about? What can you offer them? Below is my advice.<br /><br /><br />***<br /><br /><b>1) </b>In my view, the best approach to making contacts is closely related to how I'd recommend spending your own study time as a beginning researcher: (a) start reading research papers as soon as possible, and (b) try diligently to <i>ask interesting follow-up questions</i> to the papers you like.<br /><br />Asking questions is a central research skill. It is one you can start learning early on because, even in technically difficult fields, <i>simple patterns of questioning</i> often recur in many forms across papers. 
E.g., in the theory of computing, if there is a new model of computation being studied, it is likely to have randomized or nondeterministic versions that can be studied next. Or if there is an algorithmic problem, it might have variant problems to be solved in, say, the communication protocol or decision-tree models. And so on. <br /><br />Even though it is not so difficult to produce these variations, few people do so systematically. If you keep at it, and keep trying to learn and adopt more patterns of questioning, you will hit upon interesting and solvable problems. You will also start to develop a taste for which problems are likely to be most interesting, which are too hard, and so on. (Side note---I believe that keeping a research journal with your questions greatly helps this process.)<br /><br /><br /><b>2) </b>Now returning to the goal of making contacts, I think that one of the best kinds of email you can send is one that contains a <i>good research question</i> that's reasonably related to the person's research area. If you can do this and pique their interest, they may well enter into a dialogue that could become a full-fledged collaboration---regardless of your credentials on paper. After all, you've already brought something important to the table.<br /><br />A key advantage of this approach to making contacts is that you're aiming to attract the researcher's own interest and curiosity, rather than just asking them for something. Another advantage is that there is a lot of freedom in asking and considering research questions. If you ask someone an interesting, specific question about topic X, there is no presumption that X is your only interest or that future interactions will be limited to that. And in the course of your correspondence, you might realize there is a better question to ask about X, or you might get around to asking something about topic Y as well. 
That's how scientific interactions go.<br /><br />So you don't have to worry about choosing the exact right question to represent yourself, as long as it is good and leads to discussion. In this respect it's less stressful than trying to introduce yourself by defining your whole research outlook. And it's certainly more promising than suggesting a collaboration based on your GPA or work experience.<br /><br />Of course, asking good questions isn't easy and takes work. You should think carefully about them before sending, try your best to answer them yourself, and see if there are initial observations or partial solutions you can provide with your question to show you're serious. Maybe in the end it will be a question they can answer right away; but again, it could still lead to other questions. There is little risk in trying and it is likely to at least be a useful exercise.<br /><br /><b>A geometric-graphs offering</b><br />Wed, 28 Sep 2011<br /><br /><div><div><div><b><span class="Apple-style-span" style="font-size:large;">Introduction</span></b></div><div><br /></div><div>After a night of board games, I found myself thinking about the peculiarities of movement on discrete versions of the plane. This suggested a number of questions. 
As they would likely suffer neglect at my hands, I'm posting them here for others to enjoy---any ideas or references are very welcome.<div></div></div><div><br /></div><div>The basic structure that got me thinking was the <i>8-neighborhood</i> (or <i>Moore</i>) graph:</div></div><div><br /></div><a href="http://1.bp.blogspot.com/-0BmEIbJ7aH0/ToM3TWmmwQI/AAAAAAAAAEg/edzCUAU8s9I/s1600/8Neighbor.png"><img src="http://1.bp.blogspot.com/-0BmEIbJ7aH0/ToM3TWmmwQI/AAAAAAAAAEg/edzCUAU8s9I/s320/8Neighbor.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5657426362532020482" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; width: 320px; height: 317px; " /></a><div><div><br /></div><div>This graph (which we'll denote by $G_{8N}$) describes how a chess king moves on an infinite chessboard. It's often convenient for game designers, but there's something... <i>wrong</i> about it: it <i>distorts distances</i> in the plane.</div></div><div><br /></div><div>To make this formal, let $G$ be a (finite or countably infinite) undirected graph. Let $d_G(u, v)$ denote the shortest-path distance in $G$ between vertices $u, v$.</div><div><br /></div><div>We say $F: V(G) \rightarrow \mathbf{R}^2$ is an <i>embedding</i> of $G$ if $|| F(u) - F(v)||_2 \geq d_G(u, v)$ for all distinct vertices $u, v$. (This is just a normalization convention.) Define the <i>distortion</i> of $F$ as the maximum (supremum) of</div><div>\[ \frac{|| F(u) - F(v) ||_2}{d_G(u, v)} , \quad u \neq v . 
\]</div><div><br /></div><div>The study of low-distortion embeddings (which can be pursued in a more general setting) has been a highly-active TCS research topic, largely due to its role in designing efficient <i>approximation algorithms </i>for NP-hard problems<i>.</i> My initial focus here will be on embeddings for <i>periodic</i> and <i>highly-symmetric</i> graphs like $G_{8N}$.</div><div><br /></div><div>As an example, look at the usual embedding of the 8-neighborhood graph into the plane. This has a distortion of $\sqrt{2}$, witnessed by points along a diagonal.</div><div><br /></div><div><b>Warm-up:</b> Show that $\sqrt{2}$ is the minimum distortion of <i>any</i> embedding of the 8-neighborhood graph $G_{8N}$.</div><div><br /></div><div><b><span class="Apple-style-span" style="font-size:large;">Symmetries and Distortion</span></b></div><div><br /></div><div>Now a basic observation here is that, when we started with a graph with a high degree of inherent symmetry, we found that its optimal (distortion-minimizing) embedding was also highly-symmetric. I would like to ask whether this is <i>always</i> the case.</div><div><br /></div><div>For background, an <i>automorphism</i> of $G$ is a bijective mapping $\phi$ from $V(G)$ to itself, such that $(u, v) \in E(G) \Leftrightarrow (\phi(u), \phi(v)) \in E(G)$.</div><div>Let's say that a graph $G$ has <i>2D symmetry</i> if there's an embedding $F$ of $V(G)$ into the plane, and linearly independent vectors $\mathbf{p}, \mathbf{q} \in \mathbf{R}^2$, such that a translation of the plane by $\mathbf{p}$ or by $\mathbf{q}$ induces an automorphism of $G$ (in the obvious way). In this case we also say the embedding $F$ has 2D symmetry.</div><div><br /></div><div>So for example, with the usual embedding of $G_{8N}$, we can take $\mathbf{p} = (1, 0), \mathbf{q} = (0, 1)$.</div><div><br /></div><div><b>Question 1:</b> Suppose $G$ has 2D symmetry. 
Does this imply that there is a distortion-minimizing embedding of $G$ with 2D symmetry?</div><div><br /></div><div>Say that $G$ is <i>transitive</i> if "all points look the same": there's an automorphism of $G$ mapping any vertex $u$ to any other desired vertex $v$. Similarly, say that an embedding $F$ of $G$ is transitive if translating the plane by any vector of the form $(F(u) - F(v))$ induces an automorphism of $G$. (The usual embedding of $G_{8N}$ is transitive.)</div><div><br /></div><div><b>Question 2: </b>Suppose $G$ has 2D symmetry and is transitive. Is there a distortion-minimizing, transitive embedding of $G$ with 2D symmetry?</div><div><br /></div><div><b>Question 3:</b> Suppose $G$ has 2D symmetry, and is presented to us (in the natural way, by a finite description of a repeating "cell"). What is the complexity of determining the minimum distortion of any embedding of $G$? What about the case where $G$ is also transitive?</div><div><br /></div><div>It seems clear that the answers to Questions 1 and 2 are highly relevant to Question 3.</div><div><br /></div><div><b><span class="Apple-style-span" style="font-size:large;">Graph Metrics and Movement</span></b></div><div><br /></div><div>I want to shift focus to another type of question suggested by $G_{8N}$. Let's back up a bit, and think about the familiar <i>4-neighborhood</i> graph $G_{4N}$. It's not hard to see that the minimum-distortion embedding of $G_{4N}$ <i>also</i> has distortion $\sqrt{2}$. (You have to blow up the grid by a $\sqrt{2}$ factor to make the diagonals long enough.) Yet somehow $G_{4N}$ seems considerably more <i>natural</i> as a discrete representation of movement in the plane. Why?</div><div><br /></div><div>I think the answer is that, with the usual embedding of $G_{4N}$, the graph distances $d_G(u, v)$ correspond to actual Euclidean travel-distances, under the restricted form of paths in which we confine ourselves to the line segments between vertices. 
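Both $\sqrt{2}$ figures can be checked numerically. The short Python sketch below is my own illustration (not part of the original post); it uses the fact that under the usual embeddings, $d_G$ is Chebyshev distance for $G_{8N}$ and Manhattan distance for $G_{4N}$, and computes the worst-case Euclidean-to-graph-distance ratio over a finite patch of the grid.

```python
import math
from itertools import product

def max_ratio(graph_dist, scale, n=6):
    """Worst-case ratio ||F(u) - F(v)|| / d_G(u, v) over a (2n+1) x (2n+1)
    patch of the grid, where F scales the usual embedding by `scale`.
    Also checks the normalization ||F(u) - F(v)|| >= d_G(u, v)."""
    best = 0.0
    pts = list(product(range(-n, n + 1), repeat=2))
    for (x1, y1), (x2, y2) in product(pts, pts):
        if (x1, y1) == (x2, y2):
            continue
        dg = graph_dist(x2 - x1, y2 - y1)           # graph distance
        de = scale * math.hypot(x2 - x1, y2 - y1)   # Euclidean distance
        assert de >= dg - 1e-9                      # embedding convention
        best = max(best, de / dg)
    return best

# d_G in G_8N (king moves) is Chebyshev distance; in G_4N, Manhattan.
chebyshev = lambda dx, dy: max(abs(dx), abs(dy))
manhattan = lambda dx, dy: abs(dx) + abs(dy)

print(max_ratio(chebyshev, 1.0))            # usual embedding of G_8N
print(max_ratio(manhattan, math.sqrt(2)))   # G_4N, blown up by sqrt(2)
# both ratios come out to sqrt(2), attained on a diagonal (resp. an axis)
```

The $\sqrt{2}$ maximum is attained at diagonal pairs for $G_{8N}$ and at axis-aligned pairs for the scaled $G_{4N}$ embedding; a finite patch suffices since the ratio depends only on the displacement.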
(You can see why this metric is sometimes called "taxicab geometry.") By contrast, the usual embedding of $G_{8N}$ doesn't have this interpretation.</div><div><br /></div><div>However, consider the following system of paths connecting points in the plane:</div></div><div><br /></div><a href="http://1.bp.blogspot.com/-93zb4fnzF0A/ToM94KrjZgI/AAAAAAAAAEo/xb9fwfjV964/s1600/squiggly.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 172px;" src="http://1.bp.blogspot.com/-93zb4fnzF0A/ToM94KrjZgI/AAAAAAAAAEo/xb9fwfjV964/s320/squiggly.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5657433592056473090" /></a><div><br /></div><div>If we restrict ourselves to these paths, and if we make those squiggles the right length, then shortest Euclidean travel-distances actually <i>do</i> correspond to distances in the graph $G_{8N}$! This is so, <i>even if</i> we're allowed to switch paths at the crossover points.</div><div>So $G_{8N}$ is not totally weird as a discrete model of movement in the plane; it just corresponds to an odder restriction of movement.</div><div><br /></div><div> </div><div>More generally, say that a graph $G$, with nonnegative edge-weights ("lengths"), is an <i>obstructed-plane graph, </i>if there is an embedding of $G$ into $\mathbf{R}^2$ along with a set of "obstructions" (just a point-set in $\mathbf{R}^2$), such that shortest paths in $G$ correspond to shortest obstruction-avoiding paths in $\mathbf{R}^2$. </div><div><br /></div><div><b>Question 4: </b>What is the complexity of deciding whether a given graph (finite, say) is an obstructed-plane graph?</div><div><br /></div><div>It simplifies things a bit to realize that, in trying to find an obstructed-plane realization of a graph $G$, the obstructions may as well be all of the plane <i>except</i> the intended shortest paths between all pairs of points. 
Using this observation, we can at least show that our problem is in NP. Is it NP-complete?</div><div><br /></div><div>Any planar graph, with arbitrary nonnegative edge-weights, is clearly an obstructed-plane graph. But we've seen that $G_{8N}$, a non-planar graph, is also an obstructed-plane graph. (Quick---prove that $G_{8N}$ is non-planar!) The essence of the problem is to find systems of paths in the plane which, though they may cross, do not introduce any undesired "short-cuts" between vertices. </div><div><br /></div><div>Now suppose we draw $G$ in the plane, along with a collection of "intended" shortest paths between each pair of vertices. (That is, we will obstruct the rest of the plane, and hope that these paths are indeed shortest in what remains.) We expect that the intended $u$-$v$ path is of Euclidean length $d_G(u, v)$.</div><div><br /></div><div>A simple observation is that in order to avoid short-cuts, all 4-tuples of distinct vertices $u, u', v, v'$ must obey the following property: </div><div><br /></div><div>$\bullet$ If the "intended path" from $u$ to $v$ intersects the intended path from $u'$ to $v'$, then</div><div>\[ d_G(u, v) + d_G(u', v') \geq d_G(u, u') + d_G(v, v') \]</div><div>and</div><div>\[ d_G(u, v) + d_G(u', v') \geq d_G(u, v') + d_G(u', v) . \]</div><div><br /></div><div><b>Question 5:</b> Is the necessary condition above also sufficient?</div><div><br /></div><div>In a narrow sense, the answer to Question 5 is No: it's possible to draw a graph in this way and still introduce undesired short-cuts. My real question is whether, from a graph drawing with the property above, we can lengthen and shorten segments, <i>without</i> changing the topological structure of the drawing, in order to get the desired obstructed-plane realization.</div><div><br /></div><div>It may be foolish to hope for such a simple condition to be sufficient. 
Also, an affirmative answer to Question 5 wouldn't seem to imply any new complexity upper bound for our problem (except perhaps to speed up the NP verification a bit). I ask only because I find the question interesting, and wasn't able to cook up any counterexamples in my brief attempt.</div><br /><br /><b>Joint computational complexity, and the "buy-one-get-one-free conjecture"</b><br />Wed, 15 Jun 2011<br /><br /><div>Below is a simple-to-state open question, stemming from <a href="http://arxiv.org/abs/0808.2662">this</a> paper of mine from CCC'09. First, I'll state the question; then I'll give some background, explaining how it's an instance of a more general and significant problem.</div><div><br /></div><div><b><span class="Apple-style-span" style="font-size:large;">The question</span></b></div><div><br /></div><div>Let's consider the standard two-party model of communication complexity. Given inputs <i>x</i> and <i>y</i> to Alice and Bob respectively, suppose there are 3 functions the two parties are interested in evaluating on these inputs---let's call them <i>F(x, y)</i>, <i>G(x, y)</i>, <i>H(x, y)</i>.</div><div><br /></div><div><b>Question:</b> is there a collection of total functions <i>F</i>, <i>G</i>, <i>H</i>, and a positive value <i>T</i>, such that:</div><div><br /></div><div><b>(i)</b> any one of <i>F</i>, <i>G</i>, <i>H</i> requires at least <i>T</i> bits of communication to compute;</div><div><br /></div><div><b>(ii)</b> any <b>two</b> of <i>F</i>, <i>G</i>, <i>H</i> can be computed in <i>(1.01 T)</i> bits of communication, on a <b>common input</b> <i>(x, y)</i>;</div><div><br /></div><div><b>(iii)</b> but, computing all three of <i>F</i>, <i>G</i>, <i>H</i> on a common input requires at least <i>(1.99 T)</i> bits of communication.</div><div><br /></div><div>I believe such a collection exists. We can call this the <b>'buy-one-get-one-free conjecture'</b>: think of <i>T</i> as the individual 'price' of the 'items' <i>F</i>, <i>G</i>, <i>H</i>; we want to arrange a special 'deal' where the second item is essentially free, but one has to pay full price for the third item.</div><div><br /></div><div>Now if you think about it, what we're looking for is pretty strange. The function <i>F</i> should be efficiently computable in at least two 'essentially different' ways---one of which also gives us the value of <i>G</i>, and one of which gives <i>H</i>---yet there should be no efficient scheme to compute <i>F</i> that gives us <i>G</i> and <i>H</i> simultaneously. (This property seems easier to contrive when the inputs <i>x, y</i> are assumed to have a special, correlated form; I rule this out by insisting that <i>F, G, H</i> be <i>total</i> functions.)</div><div><br /></div><div>The question makes equal sense when posed for other models of computation. In my <a href="http://arxiv.org/abs/0808.2662">paper</a>, I proved the corresponding conjecture in the decision tree model of computation, as a special case of a more general result---see below. Communication complexity could be a reasonable model to attack next.</div><div><br /></div><div><b>Please note:</b> While this conjecture may require lower-bounds expertise to resolve, I believe that anyone with a creative spark could make an important contribution, by coming up with a good set of <i>candidate functions F, G, H</i>. Please feel encouraged to share any ideas you might have.</div><div><br /></div><div><b><span class="Apple-style-span" style="font-size:large;">Background on the question</span></b></div><div><br /></div><div>Let <i>cc(F)</i> denote the (deterministic) communication complexity of computing <i>F(x, y)</i>. Next, let <i>cc(F, G)</i> denote the communication complexity of computing <i>F(x, y)</i> and <i>G(x, y)</i>---on the <i>same</i> input-pair <i>(x, y)</i>. We define <i>cc(F, H)</i>, <i>cc(G, H)</i>, and <i>cc(F, G, H)</i> similarly.</div><div><br /></div><div>Together, we think of these various quantities as summarizing the 'joint complexity' of the collection <i>F</i>, <i>G</i>, <i>H</i>. Of course, this notion can be extended to collections of <i>k > 3</i> functions; the joint complexity is summarized by giving the communication complexity of all <i>2^k</i> subsets of the collection. Let <i>JC</i> denote the function that takes as input a <i>k</i>-bit vector, and returns the complexity of computing the corresponding subcollection. So, in our 3-function example, we have <i>JC(1, 1, 0) = cc(F, G)</i> and <i>JC(0, 0, 1) = cc(H)</i>.</div><div><br /></div><div>The question we want to ask is: what kinds of behavior are <i>possible</i> for the joint complexity, if we allow the functions <i>F</i>, <i>G</i>, <i>H</i>, etc. to be chosen arbitrarily? In other words, what different types of 'efficiencies' can arise in a collection of computational tasks (in the communication model)?</div><div><br /></div><div>A little thought reveals some obvious constraints:</div><div><br /></div><div><b>1.</b> the joint complexity function <i>JC</i> must always be nonnegative and integral-valued, with <i>JC(<b>0</b>) = 0</i>.</div><div><br /></div><div><b>2. monotonicity:</b> Enlarging the subset of the functions to be computed cannot decrease the complexity. For example, we always have <i>cc(F, G) >= cc(F)</i>, which translates to <i>JC(1, 1, 0) >= JC(1, 0, 0)</i>.</div><div><br /></div><div><b>3. subadditivity:</b> Taking the union of two subsets of functions to be computed cannot increase the complexity beyond the sum of the individual complexities of the subsets. For example, <i>cc(F, G, H) <= cc(F, G) + cc(H)</i>, since we can always compute <i>(F, G)</i> in an optimal fashion first, then compute <i>H</i> optimally afterwards.</div><div>(Technically, this assumes that in our model both players always know when a communication protocol halts, so that they can combine two protocols sequentially without any additional overhead.)
No big deal, though.)</span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Helvetica; min-height: 29.0px"><span class="Apple-style-span" style="font-size: large;"><br /></span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Georgia"><span class="Apple-style-span" style="font-size: large;">Now, a little further thought reveals that… well, there really </span><i><span class="Apple-style-span" style="font-size: large;">aren't</span></i><span class="Apple-style-span" style="font-size: large;"> any other obvious, general constraints on the joint complexity! Let's call </span><b><i><span class="Apple-style-span" style="font-size: large;">C</span></i></b><span class="Apple-style-span" style="font-size: large;"> an </span><i><span class="Apple-style-span" style="font-size: large;">Economic Cost Function (ECF)</span></i><span class="Apple-style-span" style="font-size: large;"> if it obeys constraints 1-3. We are tempted to conjecture that perhaps </span><i><span class="Apple-style-span" style="font-size: large;">every</span></i><span class="Apple-style-span" style="font-size: large;"> ECF is in fact equal to the joint complexity (in the communication model) of </span><i><span class="Apple-style-span" style="font-size: large;">some</span></i><span class="Apple-style-span" style="font-size: large;"> particular collection of functions.</span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Helvetica; min-height: 29.0px"><span class="Apple-style-span" style="font-size: large;"><br /></span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Georgia"><span class="Apple-style-span" style="font-size: large;">There are two things wrong with this conjecture. First, it's false, as can be seen by a simple counterexample: namely, the "buy-one-get-one-free" example, with </span><i><span class="Apple-style-span" style="font-size: large;">T</span></i><span class="Apple-style-span" style="font-size: large;"> set to 1. 
That's how I stumbled onto this example, and is one reason why I find it interesting.</span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Georgia"><span class="Apple-style-span" style="font-size: large;">However, if we relax the problem, and just ask to realize some </span><i><span class="Apple-style-span" style="font-size: large;">scalar multiple</span></i><span class="Apple-style-span" style="font-size: large;"> of </span><b><i><span class="Apple-style-span" style="font-size: large;">C</span></i></b><span class="Apple-style-span" style="font-size: large;"> as a joint complexity function, this counterexample loses its force.</span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Helvetica; min-height: 29.0px"><span class="Apple-style-span" style="font-size: large;"><br /></span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Georgia"><span class="Apple-style-span" style="font-size: large;">The second thing wrong with the conjecture (in its relaxed form) is that, even if true, it'd likely be impossible to prove. This is because determining the </span><i><span class="Apple-style-span" style="font-size: large;">exact</span></i><span class="Apple-style-span" style="font-size: large;"> computational cost of even modestly-complicated tasks is just way too hard. So I propose a doubly-relaxed form of the conjecture: I conjecture that if </span><b><i><span class="Apple-style-span" style="font-size: large;">C</span></i></b><span class="Apple-style-span" style="font-size: large;"> is an ECF, then there is a joint complexity function that is a good </span><i><span class="Apple-style-span" style="font-size: large;">pointwise approximation</span></i><span class="Apple-style-span" style="font-size: large;"> to some scalar multiple of </span><b><i><span class="Apple-style-span" style="font-size: large;">C</span></i></b><span class="Apple-style-span" style="font-size: large;">. 
(Here we allow a </span><i><span class="Apple-style-span" style="font-size: large;">(1 +- eps) </span></i><span class="Apple-style-span" style="font-size: large;">multiplicative error.)</span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Helvetica; min-height: 29.0px"><span class="Apple-style-span" style="font-size: large;"><br /></span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Georgia"><span class="Apple-style-span" style="font-size: large;">In my </span><a href="http://arxiv.org/abs/0808.2662"><span style="color:#002bee;"><span class="Apple-style-span" style="font-size: large;">paper</span></span></a><span class="Apple-style-span" style="font-size: large;">, I managed to prove the corresponding conjecture for the model of decision trees (aka deterministic query algorithms). Several interesting ingredients were needed for the proof. </span><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: large;">Now, why do I believe the conjecture should also hold true for the communication model? In a nutshell, I think it should be possible to 'embed' tasks in the query model into the communication model, by a suitable distributed encoding of each bit, in such a way that the relative costs of all computational tasks are approximately preserved. If this could be shown, the result in the communication model would follow from my result for decision trees. (See the paper for more details.)</span></span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Helvetica; min-height: 29.0px"><span class="Apple-style-span" style="font-size: large;"><br /></span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Georgia"><span class="Apple-style-span" style="font-size: large;">We may not be ready for an attack on the general conjecture, however. 
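(As an aside: constraints 1-3 are mechanical to verify for any candidate cost function on a finite collection. Below is a minimal checker — my own illustrative sketch, with a hypothetical "every nonempty collection costs 1" function standing in for the flavor of the buy-one-get-one-free construction; the post's actual counterexample may be defined differently.)

```python
from itertools import combinations, product

def is_ecf(C, k):
    """Check constraints 1-3 for a candidate cost function C,
    defined on frozensets of function-indices {0, ..., k-1}."""
    subsets = [frozenset(s) for r in range(k + 1)
               for s in combinations(range(k), r)]
    # 1. nonnegative, integral, and zero on the empty collection
    if C(frozenset()) != 0:
        return False
    if any(C(S) < 0 or C(S) != int(C(S)) for S in subsets):
        return False
    # 2. monotonicity: enlarging the collection cannot lower the cost
    if any(S <= T and C(S) > C(T) for S, T in product(subsets, repeat=2)):
        return False
    # 3. subadditivity: running two optimal protocols back-to-back
    #    bounds the cost of the union
    if any(C(S | T) > C(S) + C(T) for S, T in product(subsets, repeat=2)):
        return False
    return True

# Hypothetical stand-in: every nonempty collection costs exactly 1
# ("buy one function, get the rest free") -- it satisfies 1-3.
bogo = lambda S: 1 if S else 0
assert is_ecf(bogo, 3)
```

So a cost function of this flavor is a perfectly good ECF; the interest of the post's counterexample is that such an ECF need not be exactly realizable as a joint complexity function.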
In particular, we </span><i><span class="Apple-style-span" style="font-size: large;">seem</span></i><span class="Apple-style-span" style="font-size: large;"> to require a much better understanding of so-called 'Direct Sum problems' in communication complexity. Thus, I offer the 'buy-one-get-one-free conjecture' as a simpler, more concrete problem on which we can hope to make progress sooner.</span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Helvetica; min-height: 29.0px"><span class="Apple-style-span" style="font-size: large;"><br /></span></p> <p style="margin: 0.0px 0.0px 0.0px 0.0px; font: 24.0px Georgia"><span class="Apple-style-span" style="font-size: large;">In the decision tree model, my result allows us to realize an ECF of the 'buy-one-get-one-free' type as a joint complexity function; but I </span><i><span class="Apple-style-span" style="font-size: large;">don't</span></i><span class="Apple-style-span" style="font-size: large;"> know of any method for this that's significantly simpler than my general construction. Even finding such a simpler method in the decision tree model would be a very nice contribution, and might lead to new ideas for the more general problem.</span></p>http://andysresearch.blogspot.com/2011/06/joint-computational-complexity-and-buy_15.htmlnoreply@blogger.com (Andy D)7tag:blogger.com,1999:blog-19908808.post-7908260004777934009Mon, 09 May 2011 14:53:00 +00002011-05-09T11:43:50.602-04:00complexityAn exciting new textbook, and a request<span class="Apple-style-span" style=" border-collapse: collapse; font-family:arial, sans-serif;font-size:13px;"><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">Today I'd like to put out an appeal to readers. 
If you have a solid grasp of English and an interest in circuit complexity (no expertise required!), please consider helping proofread the forthcoming book "Boolean Function Complexity: Advances and Frontiers" by </span></span><a href="http://www.thi.informatik.uni-frankfurt.de/~jukna/"><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">Stasys Jukna</span></span></a><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">.</span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;"><br /></span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">Stasys (whom I recently had the pleasure to meet at an enjoyable Dagstuhl </span></span><a href="http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=11121"><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">seminar</span></span></a><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;"> in Germany) is a talented researcher and a kind, gracious person; he's also worked tirelessly to produce high-quality textbooks for our community. 
I'm a long-time fan of his </span></span><a href="http://www.amazon.com/Extremal-Combinatorics-Applications-Computer-Science/dp/3540663134"><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">"Extremal Combinatorics: With Applications in Computer Science"</span></span></a><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;"> which contains many gems of combinatorial reasoning in complexity theory.</span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;"><br /></span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">His latest manuscript, to be published soon, promises to be a leading reference for circuit lower-bounds research. Although I've only read parts, it clearly achieves both depth and breadth; I really think anyone in the field could learn something new and useful here.</span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;"><br /></span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">The main cause for concern is that Stasys (who hails from Lithuania) is not a native English speaker, and the text needs work in places to become grammatically correct and idiomatic. Also, it seems a full-time copy-editor is not available for this project. </span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">So readers: by volunteering to proofread a chapter or two, you'll be doing a valuable service for present and future students of complexity theory. 
Language editing is the essential thing--you can skim over the equations, so it really doesn't take that long. (Of course, mathematical feedback is also welcome.)</span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;"><br /></span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">The manuscript is available </span></span><a href="http://www.thi.informatik.uni-frankfurt.de/~jukna/BFC-book/"><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">here</span></span></a><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">; use the following login info:</span></span></div><div><span style=" border-collapse: collapse; "><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">User: friend<br />Password: catchthecat</span></span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;"><br /></span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">You can email Stasys your comments. The text is long (500+ pages), so if you do or are planning to do some editing, please enter your name, the chapters you'll edit, and a timeframe in the comments below. 
This will allow others to maximize our coverage of the draft.</span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;"><br /></span></span></div><div><span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="font-family:georgia;">Thanks in advance for your help!</span></span></div></span>http://andysresearch.blogspot.com/2011/05/exciting-new-textbook-and-request.htmlnoreply@blogger.com (Andy D)6tag:blogger.com,1999:blog-19908808.post-6454278364262544207Fri, 10 Dec 2010 05:38:00 +00002010-12-10T11:45:43.240-05:00Harassment Policies for Theory Conferences<span class="Apple-style-span" style="border-collapse: collapse; line-height: 18px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; "><span class="Apple-style-span"><p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><span class="apple-style-span"><span style="font-family: Georgia, serif; color: black; ">Following offline conversations and recent discussions on other blogs (hat-tip to Anna and <a href="http://11011110.livejournal.com/210162.html">David</a>), I want to promote the</span></span><span class="apple-converted-space"><span style="font-family:"Georgia","serif";color:black"> </span></span><span class="apple-style-span"><span style="font-family:"Georgia","serif";color:black"><a href="http://geekfeminism.org/2010/11/29/get-your-conference-anti-harassment-policy-here/">Geek Feminism Blog initiative</a></span></span><span class="apple-converted-space"><span style="font-family:"Georgia","serif";color:black"> </span></span><span class="apple-style-span"><span style="font-family:"Georgia","serif";color:black">asking computing conferences to adopt explicit policies against sexual harassment. Bringing such policies to theory conferences that don't yet have them is an important step. 
(Note that this would mean putting them on conference websites and preparing conference staff. Just having some boilerplate document hidden somewhere on the IEEE or ACM websites is not enough.)</span></span></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><span style="font-family:"Georgia","serif";color:black">What is the value of such a policy? Geek Feminism provides a<span class="apple-converted-space"> </span><a href="http://geekfeminism.wikia.com/index.php?title=Conference_anti-harassment_policy">policy template</a><span class="apple-converted-space"> </span>whose intro spells it out well. Such a policy</span></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><i><span style="font-family:"Georgia","serif";color:black">"sets expectations for behavior at the conference. Simply having an anti-harassment policy can prevent harassment all by itself...</span></i></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><i><span style="font-family:"Georgia","serif";color:black">"...it encourages people to attend who have had bad experiences at other conferences...</span></i></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><i><span style="font-family:"Georgia","serif";color:black">"...it gives conference staff instructions on how to handle harassment quickly, with the minimum amount of disruption or bad press for your conference."</span></i></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><span style="font-family:"Georgia","serif";color:black">Stating such a policy would cost nothing, and local conference staff could prepare for their roles using anti-harassment training materials, which abound on the web -- I invite others to suggest good 
ones.</span></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><span style="font-family:"Georgia","serif";color:black">So why would we hesitate to adopt such policies? I will suggest three possible reasons, and explain why they're unconvincing.</span></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><span style="font-family:"Georgia","serif";color:black">First, there is a certain tendency to deride anti-harassment training as "sensitivity training" and as stating the obvious. But whether or not most of us know how to treat others respectfully, responding to disrespectful treatment is another story. Conference staff need to know there are circumstances under which they can and should reprimand attendees or even eject them, and they need to mentally rehearse for these difficult tasks. Attendees need to know the staff are ready to help.</span></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><span style="font-family:"Georgia","serif";color:black">Second, some might object that while harassment may be a major problem in other parts of the computing/tech world, it's less of a problem in our mature, enlightened theory community. Of course, this would be a self-serving belief without empirical support. I'm not aware of any systematic efforts to track harassment incidents at theory conferences, although Geek Feminism maintains<span class="apple-converted-space"> </span><a href="http://geekfeminism.wikia.com/index.php?title=Timeline_of_incidents">wiki record</a><span class="apple-converted-space"> </span>of incidents in computing/tech more broadly -- I hope theory conference-goers will find and use it or something similar. 
</span><span class="Apple-style-span" style="color: rgb(0, 0, 0); font-family: Georgia, serif; ">But if we can agree that sexual harassment is seriously wrong -- harmful to individuals and the community when it occurs -- surely we can take the time to state this publicly and prepare ourselves to deal with it, whatever its frequency.</span></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><span style="font-family:"Georgia","serif";color:black">Third, might an anti-harassment policy inhibit our freedom of expression too much or make people afraid to interact? Let me turn this question around. Almost all universities and major employers have explicit anti-harassment policies (here's<span class="apple-converted-space"> </span><a href="http://studentlife.mit.edu/mindandhandbook/policiesandprocedures/harassment">MIT's</a>, for example). Most of us support these precautions and don't feel oppressed by the policies. Why should conferences, which are outgrowths of the academic system, be different? Do we believe there is some special spirit of lawlessness that we need to protect at conferences, and only at conferences?<o:p></o:p></span></p> <p class="MsoNormal" style="color: rgb(51, 51, 51); font-family: 'trebuchet ms', verdana, arial, sans-serif; "><span class="Apple-style-span" style="color: rgb(0, 0, 0); font-family: Georgia, serif; ">Of course not. So I support the harassment-policy initiative, and encourage others to do so as well.</span></p></span></span>http://andysresearch.blogspot.com/2010/12/harassment-policies-for-theory_09.htmlnoreply@blogger.com (Andy D)14tag:blogger.com,1999:blog-19908808.post-1974937762537878016Thu, 04 Nov 2010 17:38:00 +00002010-11-07T22:10:47.605-05:00ECCC: what authors should knowAnyone interested in computational complexity should be aware of <a href="http://www.eccc.uni-trier.de/">ECCC</a>, the most important and widely-used online repository for complexity papers. 
(Depending on your specific interests, various sections of <a href="http://arxiv.org/">arxiv</a>, along with the <a href="http://eprint.iacr.org/">Cryptology eprint Archive</a>, may be equally important to follow.)<br /><br />Unfortunately, the technical side of ECCC's submission process is arguably broken, and seems to trip up many if not most authors. Here are the issues I'm aware of:<br /><br /><span style="font-weight: bold;">1: no preview function.<br /></span><br />Unlike arxiv, ECCC offers Latex support for on-site abstracts, so you can have as many funky symbols as you want. Good thing? No, because the site offers no way to preview what the compiled math will look like. This results in bizarre spacing effects and compile errors. (The most frequent problem, I think, is authors trying to use their own custom macros in the abstract, or commands from outside the Latex fragment supported by the site.) <br /><br />Nor is it possible to preview what your document will look like (assuming it's accepted). This brings us to the second point:<br /><br /><span style="font-weight: bold;">2: hyperrefs are broken.<br /><br /></span>Many authors these days like to use internal hyperlinks in their document (provided by the hyperref package in Latex). This way, in case the reader forgets what Lemma 14.5 said, there's a link to it every time it's invoked. ECCC is happy to accept hyperref'd papers, and when they appear on the site, they'll have the same appearance you've chosen to give them. Unfortunately, in most cases the damn things <span style="font-style: italic;">won't do anything</span> when you click on them.<br /><br />Faulkner wanted to print <span style="font-style: italic;">The Sound and the Fury</span> <span style="text-decoration: underline;"><span style="font-style: italic;"></span></span>in multi-colored ink. 
I happen to like the look of colorful hyperrefs, even broken ones, but I still feel like a fool when they're all over a paper of mine.<br /><br /><span style="font-weight: bold;">3: keywords are nearly useless.<br /><br /></span>There is so little standardization in the use of keywords, and they're handled so rigidly, that clicking on them is often a waste of time. For example, the keywords 'algorithmic meta-theorems' and 'algorithmic meta theorems' bring up one paper each -- two lonely souls separated by a hyphen. (The search tool and the browse-able list of keywords are somewhat more useful, but still probably less so than googling.) Mistyped keywords are another danger. I've also seen author names that, when clicked, bring up a proper subset of that author's work for no apparent reason.<br /><br /><span style="font-weight: bold;"><span style="font-weight: bold;">Why does this matter?<br /><br /></span></span>I think all this constitutes a serious problem. But one possible objection to my view is that authors can always post revisions to their papers -- fixing abstracts, documents, and keywords in one stroke.<br /><br />This might be an OK solution, except for the empirical fact that almost nobody does this (myself included). I think, and others have also opined, that this is because <span style="font-weight: bold;">people are afraid to revise.</span> Presumably, they fear there's a widespread perception that <span style="font-weight: bold;">posting revisions = mistakes or sloppiness</span>.<br /><br />Whether or not this perception is actually widespread, we should stand visibly against it, to loosen the grip of pointless anxieties and to improve the quality of available papers. A more reasonable attitude is that having early access to preprints is a good thing, but that such papers will almost always have imperfections. 
Revising a paper within a few weeks or months of its release ought to be <span style="font-weight: bold;">a sign of conscientious authors, not sloppy ones. </span>This holds doubly in the context of a messed-up submission system.<span style="font-weight: bold;"><br /></span>Of course, there is such a thing as too many revisions, and it is possible to post a paper too early in the editing process. Where that line should be drawn is a tough topic that deserves its own discussion.<span style="font-weight: bold;"><span style="font-weight: bold;"><br /><br /><span style="font-weight: bold;">What's the solution?<br /><br /></span></span></span>We can and should expect more from ECCC. Specifically, a preview function for abstracts and documents would be a key improvement. But at the least, there should be a <span style="font-weight: bold;">clear list of warnings</span> to authors about common submission errors.<br /><br />The web administrators are aware of these issues, and have been for some time; but we as users can each do our part to communicate the importance and urgency of fixing these problems. This is especially true of users on the site's scientific board.<br /><br />In the meantime, what should you do if your submission doesn't turn out the way you expected? Last time this happened to me, I contacted a web admin, a friendly guy who was able to fix part of the problem for me, without resorting to the dreaded revision step. 
This might work for you as well.<br /><span style="font-weight: bold;"><span style="font-weight: bold;"><span style="font-weight: bold;"></span></span></span><span style="font-weight: bold;"></span>http://andysresearch.blogspot.com/2010/11/eccc-what-authors-should-know.htmlnoreply@blogger.com (Andy D)19tag:blogger.com,1999:blog-19908808.post-7301869883766789992Wed, 21 Jul 2010 23:27:00 +00002010-07-21T21:08:53.088-04:00general mathInjective polynomialsFrom a <a href="http://arxiv.org/abs/0902.3961">paper</a> of MIT's <a href="http://www-math.mit.edu/%7Epoonen/">Bjorn Poonen</a>, I learned of an amazingly simple open problem. I'll just quote the paper (here <span style="font-weight: bold;"><span style="font-style: italic;">Q</span></span> denotes the rational numbers):<br /><br />"Harvey Friedman asked whether there exists a polynomial <span style="font-style: italic;">f(x, y) </span>in <span style="font-style: italic;"><span style="font-weight: bold;">Q</span>(x, y)</span> such that the induced map <span style="font-style: italic;"><span style="font-weight: bold;">Q</span> </span>x<span style="font-style: italic;"><span style="font-weight: bold;"> Q </span>--><span style="font-weight: bold;"> Q</span></span> is injective. Heuristics suggest that most sufficiently complicated polynomials should do the trick. Don Zagier has speculated that a polynomial as simple as <span style="font-style: italic;">x^7 + 3y^7</span> might be an example. But it seems very difficult to prove that <span style="font-style: italic;">any</span> polynomial works. Both Friedman's question and Zagier's speculation are at least a decade old... but it seems that there has been essentially no progress on the question so far."<br /><br />Poonen shows that a certain other, more widely-studied hypothesis implies that such a polynomial exists. 
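(As a quick empirical sanity check — my own throwaway script, not anything from the post or Poonen's paper — one can confirm with exact rational arithmetic that Zagier's candidate has no collisions among rationals of very small height:)

```python
from fractions import Fraction
from itertools import product

def collisions(f, inputs):
    """Exact-arithmetic search for distinct pairs (x, y) on which f agrees."""
    seen = {}
    hits = []
    for x, y in product(inputs, repeat=2):
        v = f(x, y)
        if v in seen:
            hits.append((seen[v], (x, y)))
        else:
            seen[v] = (x, y)
    return hits

# Zagier's candidate x^7 + 3y^7, tested on all rationals p/q
# with |p| <= 2 and q in {1, 2}: the 7 values 0, +-1, +-2, +-1/2.
zagier = lambda x, y: x**7 + 3 * y**7
small = {Fraction(p, q) for p in range(-2, 3) for q in (1, 2)}
assert collisions(zagier, small) == []  # injective on this tiny box, at least
```

Of course this proves nothing about injectivity on all of **Q** x **Q** — that is exactly the open problem — but it illustrates how one would hunt for a counterexample.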
Of course, such a polynomial does not exist if we replace <span style="font-weight: bold;"><span style="font-style: italic;">Q</span></span> by <span style="font-style: italic;"><span style="font-weight: bold;">R<span style="font-style: italic;"><span style="font-weight: bold;"></span></span></span></span>, the reals. In fact any injection <span style="font-style: italic;"><span style="font-weight: bold;">R</span> </span>x<span style="font-style: italic;"><span style="font-weight: bold;"> R </span>--><span style="font-weight: bold;"> R </span></span>must be (very) discontinuous.<br /><br />Suppose an injective polynomial <span style="font-style: italic;">f </span>could be identified, answering Friedman's question; it might then be interesting to look at `recovery procedures' to produce <span style="font-style: italic;">x, y</span> given <span style="font-style: italic;">f(x, y)</span>. We can't hope for <span style="font-style: italic;">x, y</span> to be determined as polynomials in<span style="font-style: italic;"> f(x, y)</span>, but maybe an explicit, fast-converging power series or some similar recipe could be found.<div><br /></div><div>Finally, all of this should be compared with the study of injective polynomials from the Euclidean plane to <i>itself</i>; this is the subject of the famous <i>Jacobian Conjecture</i>. See Dick Lipton's excellent <a href="http://rjlipton.wordpress.com/2010/07/17/an-amazing-paper/">post</a> for more information.</div>http://andysresearch.blogspot.com/2010/07/injective-polynomials.htmlnoreply@blogger.com (Andy D)16tag:blogger.com,1999:blog-19908808.post-4237820042997243818Mon, 10 Aug 2009 15:29:00 +00002009-08-10T11:30:35.792-04:00general mathWit and Wisdom from KolmogorovI just learned about a result of Kolmogorov from the '50s that ought to interest TCS fans. 
Consider circuits operating on real-variable inputs, built from the following gates:<br /><br />-sum gates, of unbounded fanin;<br />-arbitrary continuous functions of a single variable.<br /><br />How expressive are such circuits? Kolmogorov informs us that they can compute <span style="font-style: italic;">any continuous function </span><span style="">on <span class="Apple-style-span" style="font-style: italic;">n</span> variables</span>. Wow! In fact, one can achieve this with poly(<span class="Apple-style-span" style="font-style: italic;">n</span>)-sized, constant-depth formulas alternating between sums and univariate continuous functions (two layers of each type, with the final output being a sum).<br /><br />Note that sums can also be computed in a tree of fanin-two sums, so this theorem (whose proof I've not yet seen) tells us that fanin-two continuous operations capture continuous functions in the same way that fanin-two Boolean operations capture Boolean computation (actually, even more strongly in light of the poly-size aspect).<br /><br />This result (strengthening earlier forms found by Kolmogorov and by his student Arnol'd) may, or may not, negatively answer one of <a href="http://en.wikipedia.org/wiki/Hilbert%27s_problems">Hilbert's problems</a>, the thirteenth--it's not entirely clear even to the experts what Hilbert had intended to ask. But a fun source to learn about this material is a <a href="http://www.google.com/url?sa=t&source=web&ct=res&cd=4&url=http%3A%2F%2Fwww.iop.org%2FEJ%2Farticle%2F0036-0279%2F59%2F1%2FR03%2FRMS_59_1_R03.pdf&ei=22d3SszPF-STtgfpi9mWCQ&usg=AFQjCNFmrpGNLkAr8aj-Gl7j_ChqP-Ms2A">survey/memoir</a> written by A. G. Vitushkin, a former student of Kolmogorov. <span style="font-weight: bold;">[a gated document, unfortunately...] </span><div><br /></div><div>The freewheeling article starts off with Hilbert's problem, but also contains interesting anecdotes about Kolmogorov and the academic scene he presided over in Moscow. 
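(For reference, the representation described above is usually stated in the following modern form — this is the standard textbook statement of the Kolmogorov superposition theorem, not a quotation from the survey. Reading from the inputs outward, the layers alternate univariate/sum/univariate/sum, with the final output a sum, matching the description above.)

```latex
% Kolmogorov superposition theorem (1957): every continuous
% f : [0,1]^n -> R can be written as
\[
  f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),
\]
% where the inner functions \phi_{q,p} : [0,1] -> R are continuous and can
% be chosen independently of f, and the outer functions \Phi_q : R -> R are
% continuous and depend on f. The formula uses O(n^2) univariate gates.
```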
Towards the end we get some amusing partisan invective against Claude Shannon, who, scandalously, "entirely stopped his research at an early stage and still kept his position of professor at the Massachusetts Institute of Technology". Vitushkin (who passed away in 2004) apparently bore a grudge over the insufficient recognition of Soviet contributions to information theory. My favorite story relates a not-so-successful visit by Shannon to meet Kolmogorov and features a deft mathematical put-down:<br /></div><div><div><br /><span style="font-style: italic;">"Kolmogorov was fluent in French and German and read English well, but his spoken English was not very good. Shannon, with some sympathy, expressed his regret that they could not understand each other well. Kolmogorov replied that there were five international languages, he could speak three of them, and, if his interlocutor were also able to speak three languages, then they would have no problems."</span><br /></div></div>http://andysresearch.blogspot.com/2009/08/wit-and-wisdom-from-kolmogorov_10.htmlnoreply@blogger.com (Andy D)24tag:blogger.com,1999:blog-19908808.post-5016404257375005779Wed, 20 May 2009 14:22:00 +00002009-05-20T10:23:05.710-04:00general mathAdditive combinatorics: a request for informationGiven a subset <span style="font-style: italic;">A</span> of the integers mod <span style="font-style: italic;">N</span>, we can ask, how many 4-element `patterns' appear in <span style="font-style: italic;">A</span>? 
A pattern is an equivalence class of size-4 subsets of <span style="font-style: italic;">A</span>, where two 4-sets <span style="font-style: italic;">S</span>, <span style="font-style: italic;">S'</span> are considered the same pattern if <span style="font-style: italic;">S = S' + j</span> (mod <span style="font-style: italic;">N</span>) for some <span style="font-style: italic;">j</span>.<br /><br />Clearly the number of patterns is at most <span style="font-style: italic;">|A|</span>-choose-4; but it can be much less: if <span style="font-style: italic;">A</span> is a consecutive block, or more generally an arithmetic progression, the number of patterns is on the order of <span style="font-style: italic;">|A|^3</span>.<br /><br />So my question is: if the number of patterns is `much less' than <span style="font-style: italic;">|A|^4</span>, what nice structure do we necessarily find in <span style="font-style: italic;">A</span>?<br /><br />I believe that similar questions for 2-patterns have satisfactory answers: then the hypothesis is just that the difference-set <span style="font-style: italic;">(A - A)</span> is small. 
In this case I believe A is 'close' to a (generalized) arithmetic progression, although actually I'm having trouble finding the relevant theorem here too (most references focus on sumsets <span style="font-style: italic;">(A + A)</span>, for which <a href="http://en.wikipedia.org/wiki/Freiman%27s_theorem">Freiman's Theorem</a> applies).<br /><br />Thanks in advance for any pointers!http://andysresearch.blogspot.com/2009/05/additive-combinatorics-request-for_20.htmlnoreply@blogger.com (Andy D)3tag:blogger.com,1999:blog-19908808.post-1606607225883034986Tue, 24 Mar 2009 19:45:00 +00002010-12-13T13:42:54.179-05:00general mathpuzzlesAlgebraic and TranscendentalA number is called <span style="font-style: italic;">algebraic </span>if it is the root of a nonzero polynomial with integral coefficients (or, equivalently, rational coefficients); otherwise it is <span style="font-style: italic;">transcendental.</span> <span style="font-weight: bold;"><span style="font-weight: bold;"><span style="font-weight: bold;"></span></span><br /><br />Item:</span> there exists a nonintegral <span style="font-style: italic;">c</span> > 1 such that <span style="font-style: italic;">c^n</span> 'converges to the integers': that is, for any <span style="font-style: italic;">eps</span> > 0 there's an <span style="font-style: italic;">N > 0</span> such that <span style="font-style: italic;">c^n</span> is within <span style="font-style: italic;">eps</span> of some integer, for all <span style="font-style: italic;">n > N. </span>(The golden mean <span style="font-style: italic;">c</span> is an example: <span style="font-style: italic;">c^n + (-1/c)^n</span> is the <span style="font-style: italic;">n</span>th Lucas number, an integer, and the term <span style="font-style: italic;">(-1/c)^n</span> tends to 0.)<br /><span style="font-style: italic;"><span style="font-weight: bold;"></span><br /></span><span style="font-weight: bold;">Item: </span>it is apparently open whether any such number can be transcendental.<br /><br />***<br /><br />Next up, two questions of my own on algebraic numbers. 
Possibly easy, possibly silly, but I don't know the answers.<br /><br />Define the <span style="font-style: italic;">interleave</span> of two numbers <span style="font-style: italic;"><br /><br />b =</span> <span style="font-style: italic;">0.b_1b_2b_3... </span>and <span style="font-style: italic;">c = 0.c_1c_2c_3...</span><br />(both in [0, 1], and using the 'correct' binary expansions) as<br /><br /><span style="font-style: italic;">b@c = 0.b_1c_1b_2c_2...</span><br /><br /><span style="font-weight: bold;">1)</span> Suppose <span style="font-style: italic;">b, c</span> are algebraic. Must <span style="font-style: italic;">b@c</span> be algebraic?<br /><br /><span style="font-weight: bold;">2)</span> The other direction. Suppose <span style="font-style: italic;">b@c</span> is algebraic. Must <span style="font-style: italic;">b, c</span> be algebraic?<br /><br />In both cases, I am inclined towards doubt.<br /><br />***<br /><br />Final thoughts (and these are observations others have made as well)... computational complexity theory seems to have certain things in common with transcendental number theory:<br />-an interest in impossibility/inexpressibility results;<br /><br />-markedly slow progress on seemingly basic<br />questions--is <span style="font-style: italic;">(pi + e)</span> irrational? does <span style="font-style: italic;">P = NP?</span><br /><br />-attempts to use statistical and otherwise 'constructive' properties as sieves to distinguish simple objects from complicated ones.<br /><br />So, could complexity theorists benefit from interacting and learning from transcendental number theorists? 
If nothing else, looking back on their long historical march might teach us patience and an appreciation for incremental progress.http://andysresearch.blogspot.com/2009/03/algebraic-and-transcendental.htmlnoreply@blogger.com (Andy D)21tag:blogger.com,1999:blog-19908808.post-4289865868591859707Wed, 03 Dec 2008 22:53:00 +00002008-12-03T18:12:47.432-05:00A Rather Elegant SolutionSo I'm co-writing a survey paper for a course project, and struggling with the conflicting demands of completeness and brevity. As if charmed, while taking a break I happen upon a '99 paper called <a href="http://www.maths.uq.edu.au/%7Euqowarna/pubs/Bailey50.pdf">"50 years of Bailey's Lemma"</a> by S. Warnaar whose abstract really resonates:<br /><br /><span style="font-size:85%;">"Half a century ago, The Proceedings of the London Mathematical Society published W. N. Bailey’s influential paper </span><span style="font-style: italic;font-size:85%;" >Identities of the Rogers–Ramanujan type</span><span style="font-size:85%;">... To celebrate the occasion of the lemma’s fiftieth birthday we present a history of Bailey’s lemma in 5 chapters...<br />Due to size limitations of this paper the higher rank [42, 40, 43, 41, 14, 60] and trinomial [11, 59, 19] generalizations of the Bailey lemma will be treated at the lemma’s centennial in 2049."<br /></span>http://andysresearch.blogspot.com/2008/12/rather-elegant-solution.htmlnoreply@blogger.com (Andy D)2tag:blogger.com,1999:blog-19908808.post-4732517512568740372Fri, 31 Oct 2008 14:36:00 +00002011-06-22T14:59:20.026-04:00general mathprobabilityExcitement and ProbabilityElections, sporting events, and other competitions can be exciting. But there is also a sense in which they are <span style="font-style: italic;">almost always dull</span>, and this can be proved rigorously. 
Allow me to explain.<br /><br />(What follows is an idea I hatched at UCSD and described to Russell Impagliazzo, who had, it turned out, discovered it earlier with some collaborators, for very different reasons. I wouldn't be surprised if it had been observed by others as well. The proof given here is similar to my original one, but closer to the more elegant one Russell showed me, and I'll cite the paper when I find it.)<br /><br />I want to suggest that a competition is dull to watch if one side is always heavily favored to win (regardless of whether it's the side we support), or if the two sides muddle along in a dead heat until some single deciding event happens. More generally, I'd like to suggest that excitement occurs only when there is a <span style="font-style: italic;">shift</span> in our subjective probabilities of the two (let's say just two) outcomes.<br /><br />Without defending this suggestion, I'll build a model around it. If anyone is upset that this notion of excitement doesn't include the smell of popcorn, the seventh-inning stretch, or any substantive modelling of the competition itself, there's no need to read any further.<br /><br />Assume we are a rational Bayesian spectator watching a competition unfold, and that we receive a regular, discrete sequence of update information \(X_1, X_2, ... X_n\) about it (these update variables could be bits, real numbers, etc.). Let $p_0, p_1, ... p_n$ be our subjective probabilities of 'victory' (fixing an outcome we prefer between the two) at each stage, where $p_t$ is a random variable obtained by conditioning on all the information $X_1, ... X_t$ we've received at time $t$.<br /><br />For $t > 0$, let's define the <span style="font-style: italic;">excitement at time </span><span>$t$</span> as $EXC_t = |p_{t} - p_{t - 1}|$. 
This random variable measures the 'jolt' we presume we'll get by the revision of our subjective probabilities on the $t$th update.<br /><br />Define the total excitement $EXC$ as the sum of all $EXC_t$'s. Now, we only want to watch this competition in the first place if the <i>expected</i> total excitement is high; so it's natural to ask, how high can it be?<br /><br />We needn't assume that our method for updating our subjective probability corresponds to the 'true' probabilities implied by the best possible understanding of the data. But let's assume it conforms at least internally to Bayesian norms: in particular, we should have $E[p_{t + 1} | p_{t} = p] = p$.<br /><br />An immediate corollary of this assumption, which will be useful, is that <div><span>\[E[p_t p_{t + 1}] = \sum_p Prob[p_t = p]\cdot p E[p_{t + 1}|p_t = p]\]<br />\[= \sum_p Prob[p_t = p]\cdot p^2 = E[p_t^2].\]</span><br />OK, now rather than look at the expected total excitement, let's look at the expected sum of squared excitements, an often-useful trick which allows us to get rid of those annoying absolute value signs:<br /><span>\[E[EXC_1^2 + EXC_2^2 + \ldots + EXC_n^2]\]<br /></span></div><div><span>\[= E[(p_1 - p_0)^2 + \ldots + (p_n - p_{n - 1})^2 ] \]<br /></span></div><div><span>\[= E[p_0^2 + p_n^2 + 2\left(p_1^2 + \ldots + p_{n - 1}^2\right) \]</span></div><div>\[ \quad{}- 2\left(p_1 p_0 + p_2 p_1 + \ldots + p_n p_{n - 1} \right) ] \]</div><div><span><br /></span></div><div><span>\[= E[p_0^2] + E[p_n^2] + 2(E[p_1^2] + \ldots + E[p_{n - 1}^2]) \]</span></div><div><span>\[ \quad{} - 2(E[p_0^2] + \ldots + E[p_{n - 1}^2])\]</span><br /></div><div>(using linearity of expectation and our previous corollary). Now we get a bunch of cancellation, leaving us with</div><div>\[= E[p_n^2] - E[p_0^2].\]<br /></div><div>This is at most 1. 
So if we measured excitement at each step by squaring the shift in subjective probabilities, we'd only expect a constant amount of excitement, no matter how long the game!<br /><br />Now, rather crudely, if $Y \geq 0$ and $E[Y] \leq 1$ then $E[\sqrt{Y}] \leq 2$. We also have the general '$\ell_1$ vs $\ell_2$' inequality<br />\[|Y_1| + ... + |Y_n| \leq \sqrt{ n \cdot (Y_1^2 + ... + Y_n^2)} .\]</div><div>Using both of these, we conclude that<br />\[E[EXC_1 + \ldots + EXC_n] \leq E\left[\sqrt{n \cdot \left(EXC_1^2 + \ldots + EXC_n^2\right)} \right] \leq 2 \cdot \sqrt{n} .\]<br />Thus we expect at most $2 \sqrt{n}$ total excitement, for an expected 'amortized excitement' of at most $2/\sqrt{n} = o(1)$.<br /><br /><span>Watch $n$</span> innings, for only $O(\sqrt{n})$ excitement? Give me a break! If $n$ is large, it's better to stay home--no matter what the game.<br /><br />I would love to see this theory tested against prediction markets like <a href="http://www.intrade.com/">Intrade</a>, which are argued to give a running snapshot of our collective subjective probability of various events. Are the histories as 'low-excitement' as our argument predicts? Even lower? (Nothing we've said rules it out, although one can exhibit simple games which have expected excitement on the order of $\sqrt{n}$.)<br /><br />And if the histories exhibit more excitement than we'd predict (some sign of collective irrationality, perhaps), is there a systematic way to take advantage of this in the associated betting market? Food for thought. I'd be grateful if anyone knows where to get a record of Intrade's raw day-by-day numbers.<br /><br />Finally, nothing said above rules out individual games playing out with high excitement, on the order of $n$, but it does say that such outcomes should be infrequent. 
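As a quick sanity check on these bounds, here is a simulation sketch (a toy model of my own, not from the paper Russell described): the game is $n$ fair coin flips, one side wins on a majority of heads, and the Bayesian spectator's $p_t$ is the exact binomial tail probability. This is one of the 'simple games' with total excitement on the order of $\sqrt{n}$, and the observed average stays under the $2\sqrt{n}$ bound.

```python
import math
import random

def win_prob(heads, flips_left, target):
    """P(final heads count >= target), given current heads and fair flips left."""
    need = target - heads
    if need <= 0:
        return 1.0
    if need > flips_left:
        return 0.0
    return sum(math.comb(flips_left, k)
               for k in range(need, flips_left + 1)) / 2 ** flips_left

def total_excitement(n, rng):
    """One game of n fair flips (win = majority heads); returns sum_t |p_t - p_{t-1}|."""
    target = n // 2 + 1              # take n odd, so there are no ties
    heads, exc = 0, 0.0
    p_prev = win_prob(0, n, target)  # exactly 1/2 at the start
    for t in range(n):
        heads += rng.random() < 0.5  # observe one flip
        p = win_prob(heads, n - 1 - t, target)
        exc += abs(p - p_prev)
        p_prev = p
    return exc

rng = random.Random(0)
n = 101
avg = sum(total_excitement(n, rng) for _ in range(200)) / 200
# Lower bound holds per game: total variation >= |p_n - p_0| = 1/2.
assert 0.5 <= avg <= 2 * math.sqrt(n)
```

The interesting quantity is how $avg/\sqrt{n}$ behaves as $n$ grows; comparing a few values of $n$ makes the $\Theta(\sqrt{n})$ scaling visible.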
I believe a more careful martingale approach would show an exponentially small possibility of such large deviations (Russell said their original proof used Azuma's inequality, which would probably suffice).</div>http://andysresearch.blogspot.com/2008/10/excitement-and-probability.htmlnoreply@blogger.com (Andy D)2tag:blogger.com,1999:blog-19908808.post-7496602774237656148Fri, 24 Oct 2008 18:08:00 +00002008-10-24T18:05:36.115-04:00NO on Prop 8If you are straight, would you join a club that disallowed gay people? Or keep your membership in a club that stopped admitting them? Would you feel distinguished by your membership? To the contrary, I think most people would feel embarrassed and cheapened by it.<br /><br />That's why <span style="font-style:italic;">everyone</span> who is or hopes to get married in California (or anywhere, really) should feel alarmed about <a href="http://en.wikipedia.org/wiki/California_Proposition_8_(2008)">Proposition 8</a>, why everyone whose tax dollars fund the Marriage Club should feel affronted by this attempt to make it an exclusionary one (by amending the state constitution). Even though we also fund the similar and inclusive Civil-Union Club next door (at least, until the next wacky voter initiative comes along), who can ignore the fence-builders' zeal for insisting on this petty distinction for heterosexual couples, or fail to grasp its underlying message?<br /><br />Nothing in the existing laws force clergy of any religion to give ceremonies or `recognize' marriages they don't accept. There remains, in fact, plenty of space in private life to speak and practice intolerance, but we can't let it be done in the name of all Californians.<br /><br />For thoughtful posts on the subject, see e.g. 
<a href="http://lucatrevisan.wordpress.com/2008/10/21/save-the-california-constitution/">Luca's</a>, <a href="http://ben.casnocha.com/2008/10/prop-8-on-calif.html#comments">Ben Casnocha's</a>, and the No on Prop 8 <a href="http://www.noonprop8.com/">website</a>. They are outspent by the opposition and need help to run TV spots up thru the election, to sway what seems like a very volatile public opinion on this issue.<br />For an amazing photo-essay on California's ever-expanding diversity, and a powerful argument for mutual acceptance and respect, check out the book <a href="http://www.underthedragon.com/">Under the Dragon</a>. (Hat-tip to Chaya!)http://andysresearch.blogspot.com/2008/10/no-on-prop-8.htmlnoreply@blogger.com (Andy D)9tag:blogger.com,1999:blog-19908808.post-7830663712508622560Sun, 14 Sep 2008 17:23:00 +00002008-09-25T09:24:56.491-04:00miscellaneousDavid Foster WallaceAbout a week ago, at great risk to my studies, I gave in to temptation and checked out <a href="http://en.wikipedia.org/wiki/Infinite_Jest">'Infinite Jest'</a> from the library. Last night, ~300 pages into rereading this wonderful novel, I learned that <a href="http://en.wikipedia.org/wiki/David_Foster_Wallace">David Foster Wallace</a>, its author, has died in an apparent suicide.<br /><br />I never met DFW, and know little about his personal life--probably as he wished it--although he seems to have been widely regarded as a kind and generous person. I also never connected too closely with his short fiction and journalistic writing, although I read most of it eagerly. My relationship to DFW centered on `Infinite Jest' (1996), an immense book that captivated me when I discovered it in high school.<br /><br />Briefly, IJ consists of about 1000 pages of chronologically free-form and narratively heterogeneous episodes from the lives of several characters, generally connected either to the Enfield Tennis Academy of metro Boston or to the nearby Ennet House, a drug rehabilitation center. 
It is supplemented with ~200 pages of footnotes--a generous helping of asides, extra scenes, and background info (it's set in the slight future, culminating around 2008, better known as the Year of the Depend Adult Undergarment in the era of Subsidised Time).<br /><br />IJ is at once: <br /><br />-a lush entertainment, addictive in ways I can't fully explain;<br /><br />-a barrage of observation, alternately expansive and minute, in which the struggle for readers and characters alike is not so much to find meaning as to hold on to it in the face of various compulsions and distractions, to exercise discernment in a world of spectacular banalities and banal truths;<br /><br />-a compendium of contemporary striving and suffering, by turns putting up for scrutiny: pleasure and addiction, competitive pursuit, narcissism and dysmorphic thinking, irony/withdrawal as survival strategies in a surreal political climate... and more, all in memorably original fashion;<br /><br />-a genuinely moving book, never dominated by its theses or formal experiments, with deeply rendered characters who, despite their glaring and costly mistakes along the way, become friends you wish would hang around for another 1000 pages.<br /><br /><br />It's a huge loss to learn that David Foster Wallace won't publish a follow-up to IJ. It's a blow to learn that the person who produced such a sustained meditation on suffering (and our resources to overcome it) has taken his own life. It's a sadness to know that the spirit that breathed into those pages has passed.<br /><br />What's left is his remarkable work, and his readership, which I hope will continue to grow. Pick up 'Infinite Jest' today!<br /><br /><span style="font-weight:bold;">Update:</span> <a href="http://www.aldaily.com/">Arts & Letters Daily</a> has collected a number of DFW retrospectives (see 'Essays and Opinion'). 
This site, by the way, is an excellent aggregator of new and noteworthy online writing.http://andysresearch.blogspot.com/2008/09/david-foster-wallace.htmlnoreply@blogger.com (Andy D)2tag:blogger.com,1999:blog-19908808.post-1732295815295460703Thu, 10 Jul 2008 21:50:00 +00002008-07-10T18:18:12.314-04:00complexityHeh...Funny <a href="http://www.requestcomics.com/images/strips/26.jpg">web comic</a> touching on complexity theory. <br /><br />From <a href="http://www.requestcomics.com/">Request Comics.</a> Handed across the room by Madhu to Brendan, who showed it to me.<br /><br />Earlier I'd seen another, slightly more reverent, <a href="http://theory.cs.uchicago.edu/merlin/">comics treatment</a> of Interactive Proofs, by Larry Gonick of 'Cartoon History of the Universe' fame.<br /><br /><br />To briefly discuss the Request Comics scenario: suppose an alien claims to play perfect chess (in all positions) on an n-by-n board; is it possible to efficiently test this in a poly(n)-length randomized interaction? <br /><br />If there's only one alien, this might be too hard, since the 'correct' generalization of Chess is EXPTIME-complete. If we instead play a PSPACE-complete game like Hex (or Chess with a truncated game length of poly(n)), things change: Shamir's Theorem tells us how a Verifier can be convinced that any particular board set-up is a win.<br /><br />But this is not the same as Prover convincing Verifier that Prover plays perfectly! However, additional ideas can help. Feigenbaum and Fortnow <a href="http://citeseer.ist.psu.edu/24505.html">showed </a>that every PSPACE-complete language L has a worst-case-to-average case reduction to some sampleable distribution D on instances. <br /><br />This means there exists a randomized algorithm A^B using a black-box subroutine B, such that for all problem instances x, and for all possible black-boxes B that correctly answer queries of form 'is y in L?' 
with high probability when y is drawn according to distribution D,<br /><br />A^B(x) accepts with high prob. if x is in L, and rejects w.h.p. if x is not in L.<br /><br />Thus for the Prover to convince Verifier that Prover is 'effectively able' to solve L in the worst case (and L may encode how to play perfect Hex), it's enough to prove that Prover can decide L w.h.p. over D. Since D is sampleable, Verifier may draw a sample y from D, and the two can run an interactive proof for L on it (or L-complement, if Prover claims y isn't in L). Repeat to increase soundness.http://andysresearch.blogspot.com/2008/07/heh.htmlnoreply@blogger.com (Andy D)8tag:blogger.com,1999:blog-19908808.post-6156820751556283692Fri, 23 May 2008 16:53:00 +00002008-05-23T13:08:17.787-04:00general mathA Must-See SiteIt's called <a href="http://www.theoremoftheday.org/">Theorem of the Day</a>. I just found it, so I can't judge how accurate the frequency claim is, but Robin Whitty has stacked up an impressive collection of short, illustrated introductions to major theorems. I like his taste... two that were news to me and I especially liked in my first sampling: <a href="http://myweb.lsbu.ac.uk/~whittyr/MathSci/TheoremOfTheDay/Analysis/Nevanlinna5/TotDNevanlinna5.pdf">Nevanlinna's Five-Value Theorem</a> and <a href="http://myweb.lsbu.ac.uk/~whittyr/MathSci/TheoremOfTheDay/GeometryAndTrigonometry/ATST/TotDATST.pdf">The Analyst's Traveling Salesman Problem</a>.<br /><br />Sign Robin's guestbook, spread the word, and recommend a theorem of CS theory! 
(I see at least 3 already, if you count unsolvability of Diophantine equations.)http://andysresearch.blogspot.com/2008/05/must-see-site.htmlnoreply@blogger.com (Andy D)2tag:blogger.com,1999:blog-19908808.post-1200567893229304987Thu, 15 May 2008 18:43:00 +00002008-05-15T15:12:26.307-04:00general mathRandom bitsThis morning I was delighted to learn that Alan Baker, the Swarthmore professor whose Philosophy of Science class I took senior year, recently won the US Shogi championships--read his account <a href="http://www.swarthmore.edu/x19080.xml">here</a>. Congrats, Prof. Baker!<br />I never learned Shogi, although Go--another Asian board game--was a big part of my youth and probably a decisive factor in my eventual interest in math/CS.<br /><br />In other news, I am coauthor on a paper, <a href="http://eccc.hpi-web.de/eccc-reports/2008/TR08-051/index.html">'The Power of Unentanglement'</a>, recently released on ECCC (and headed for CCC '08), jointly with Scott Aaronson, Salman Beigi, Bill Fefferman, and Peter Shor. It's about multiple-prover, single-round, quantum interactive proofs, and I encourage those whose cup of tea that is to take a look.<br /><br />This is my first appearance in a conference paper, but not quite my first time in print--it happened once before, by accident. As a freshman in college, I asked my Linear Algebra professor, Steven Maurer, a question on an online class discussion board. Here it is, in the breathless and overwrought prose of my teenage years:<br /><br /><span style="font-style:italic;">"This question has been haunting me, and I know I shouldn't expect definite answers. But how do mathematicians know when a theory is more or less done? Is it when they've reached a systematic classification theorem or a computational method for the objects they were looking for? Do they typically begin with ambitions as to the capabilities they'd like to achieve? 
I suppose there's nuanced interaction here, for instance, in seeking theoretical comprehension of vector spaces we find that these spaces can be characterized by possibly finite 'basis' sets. Does this lead us to want to construct algorithmically these new ensembles whose existence we weren't aware of to begin with? Or, pessimistically, do the results just start petering out, either because the 'interesting' ones are exhausted or because as we push out into theorem-space it becomes too wild and wooly to reward our efforts? Are there more compelling things to discover about vector spaces in general, or do we need to start scrutinizing specific vector spaces for neat quirks--or introduce additional structure into our axioms (or definitions): dot products, angles, magnitudes, etc.? <br /><br />Also, how strong or detailed is the typical mathematician's sense of the openness or settledness of the various theories? And is there an alternative hypothesis I'm missing? "</span><br /><br />Steve gave a thoughtful reply, years passed, and then a similar topic came up in conversation between him and the mathematician and popular math writer Philip J. Davis. The conversation sparked an essay by Davis in which he quoted Maurer and me (with permission), the essay recently became part of a philosophy-of-math book (<a href="http://www.amazon.com/Mathematics-Common-Sense-Creative-Tension/dp/1568812701">'Mathematics and Common Sense: A Case of Creative Tension'</a>), and I got mailed a free copy--sweet! Once again, I recommend the book to anyone who enjoys that kind of thing. 
The essay is <a href="http://64.233.169.104/search?q=cache:f25l0qiOq-YJ:people.ex.ac.uk/PErnest/pome19/Davis%2520-%2520When%2520is%2520a%2520Problem%2520Solved.doc+maurer+drucker+problem+solved&hl=en&ct=clnk&cd=1&gl=us&client=iceweasel-a">online</a>.http://andysresearch.blogspot.com/2008/05/random-bits.htmlnoreply@blogger.com (Andy D)1tag:blogger.com,1999:blog-19908808.post-8028785311549327459Sun, 11 May 2008 03:26:00 +00002008-05-13T19:39:16.001-04:00Free on Friday?You should come to <a href="http://www.johnnyds.com/">Johnny D's</a> in Davis Square and hear <a href="http://www.nostaticband.com/">No Static,</a> a 9-or-10-piece Steely Dan tribute band (standard preemptive clarification: Steely Dan is the band name, not a person). <br /><br />No Static plays regularly in the area--I saw them in the fall and they were excellent, channeling the Dan's recordings with amazing care and fidelity.<br /><br />Like most great artists, Steely Dan affords a unique perspective on the world, one best imbibed by listening to multiple albums in their weird entirety. This is exactly No Static's approach, so if you don't know Steely Dan, or have only heard a few catchy radio numbers, here's your chance to get hooked. Hope to see you there!http://andysresearch.blogspot.com/2008/05/free-on-friday.htmlnoreply@blogger.com (Andy D)0tag:blogger.com,1999:blog-19908808.post-3461807339876779003Thu, 01 May 2008 18:02:00 +00002008-05-01T16:33:59.171-04:00complexityComplexity Calisthenics (Part I)Today I want to describe and recommend a paper I quite enjoyed: <a href="http://citeseer.ist.psu.edu/mansour02computational.html">The Computational Complexity of Universal Hashing</a> by Mansour, Nisan, and Tiwari (henceforth MNT). 
I think that this paper, while not overly demanding technically, is likely to stretch readers' brains in several interesting directions at once.<br /><br />As a motivation, consider the following question, which has vexed some of us since the third grade: <span style="font-style:italic;">why is multiplication so hard?</span><br /><br />The algorithm we learn in school takes time quadratic in the bit-length of the input numbers. This is far from optimal, and inspired work over the years has brought the running time ever-closer to the conjecturally-optimal n log n; Martin Furer published a <a href="http://www.cse.psu.edu/~furer/Papers/mult.pdf">breakthrough</a> in STOC 2007, and there may have been improvements since then. But compare this with addition, which can easily be performed with linear time and logarithmic space (simultaneously). Could we hope for anything similar? (I don't believe that any of the fast multiplication algorithms run in sublinear space, although I could be mistaken. <a href="http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations">Here</a> is Wiki's summary of existing algorithms for arithmetic.)<br /><br />As MNT observe, the question is made especially interesting when we observe that multiplication could be just as easily achieved in linear time/logspace... <span style="font-style:italic;">if</span> we were willing to accept a different representation for our integer inputs! Namely, if we're given the prime factorizations of the inputs, we simply add corresponding exponents to determine the product. 
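To see the representational point concretely, here is a toy sketch (my own illustration, not from the paper): if integers arrive as prime-factorization exponent vectors, multiplication is a single pass of exponent additions.

```python
from collections import Counter

def multiply_factored(a, b):
    """Multiply two integers given as dicts {prime: exponent}.

    E.g. 12 = {2: 2, 3: 1}. The product is just the coordinate-wise sum
    of exponents: no carries, no convolution, and time linear in the
    size of the representation.
    """
    prod = Counter(a)
    prod.update(b)   # Counter.update adds counts rather than replacing them
    return dict(prod)

def to_int(factored):
    """Back to standard representation (for checking only)."""
    n = 1
    for p, e in factored.items():
        n *= p ** e
    return n

# 12 * 45 = 540: exponents {2:2, 3:1} + {3:2, 5:1} -> {2:2, 3:3, 5:1}
assert to_int(multiply_factored({2: 2, 3: 1}, {3: 2, 5: 1})) == 12 * 45
```

In this representation it is now addition that has no obvious fast algorithm, which is exactly the kind of tradeoff at issue here.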
There are two hitches, though: first, we'd have to accept the promise that the prime factorizations given really do involve primes (it's not at all clear that we'd have the time/space to check this, even with the recent advances in primality testing); second, and more germane to our discussion, <span style="font-style:italic;">addition</span> just got much harder!<br /><br />The situation is similar over finite fields of prime order (<span style="font-style:italic;">Z_p</span>): in standard representation, addition is easy and multiplication is less so, while if we represent numbers as powers of a fixed <a href="http://en.wikipedia.org/wiki/Primitive_root_modulo_n">primitive root</a>, the reverse becomes true. This suggests a woeful but intriguing possibility: perhaps no matter <span style="font-style:italic;">how</span> we represent numbers, one of the two operations must be computationally complex, even though we have latitude to 'trade off' between <span style="font-style:italic;">+</span> and <span style="font-style:italic;">*</span>. So we are invited to consider<br /><br /><span style="font-weight:bold;">Mental Stretch #1</span>: Can we prove 'representation-independent' complexity lower bounds?<br /><br /><span style="font-weight:bold;">Stretch #2</span>: Can we prove such lower bounds in the form of tradeoffs between two component problems, as seems necessary here?<br /><br />In the setting of finite-field arithmetic, MNT answer 'yes' to both problems. The lower bounds they give, however, are not expressed simply in terms of time usage or space usage, but instead as the <span style="font-style:italic;">product</span> of these two measures. 
Thus we have<br /><br /><span style="font-weight:bold;">Stretch #3</span>: Prove 'time-space tradeoffs' for computational problems.<br /><br />To be clear, all three of these 'stretches' had been made in various forms in work prior to MNT; I'm just using their paper as a good example.<br /><br />The combination of these three elements certainly makes the task seem daunting. But MNT have convinced me, and I hope to suggest to you, that with the right ideas it's not so hard. As the paper title indicates, their point of departure is Universal Hashing (UH)--an algorithmic technique in which finite fields have already proved useful. <span style="font-style:italic;">Use upper bounds to prove lower bounds.</span> We can call this another Stretch, or call it wisdom of the ages, but it deserves to be stated.<br /><br />So what is UH? Fix a domain <span style="font-style:italic;">D</span> and a range <span style="font-style:italic;">R</span>. A (finite) family <span style="font-style:italic;">H</span> of functions from <span style="font-style:italic;">D</span> to <span style="font-style:italic;">R</span> is called a Universal Hash Family (UHF) if the following holds: <br /><br />For every pair of distinct elements <span style="font-style:italic;">d, d'</span> in <span style="font-style:italic;">D</span>, <br /><br />and for every pair of (not necessarily distinct) elements <span style="font-style:italic;">r, r'</span> in <span style="font-style:italic;">R</span>,<br /><br />if we pick a function <span style="font-style:italic;">h</span> at random from <span style="font-style:italic;">H</span>,<br /><br />then Prob<span style="font-style:italic;">[h(d) = r, h(d') = r'] = 1/|R|^2</span>.<br /><br />In other words, the randomly chosen <span style="font-style:italic;">h</span> behaves just like a truly random function from <span style="font-style:italic;">D</span> to <span style="font-style:italic;">R</span> when we restrict attention to any two domain elements. 
(In typical applications we hope to save time and randomness, since <span style="font-style:italic;">H</span> may be much smaller than the space of all functions.)<br /><br /><br />Here is what MNT do: they prove that implementing <span style="font-style:italic;">any</span> UHF necessitates a complexity lower bound in the form of a time-space product. (To 'implement' a UHF <span style="font-style:italic;">H</span> is to compute the function <span style="font-style:italic;">f_H(h, x) = h(x)</span>.) <br /><br />This is in fact the main message of the paper, but they obtain our desired application to <span style="font-style:italic;">+</span> and <span style="font-style:italic;">*</span> as a corollary, by citing the well known fact that, fixing a prime field <span style="font-style:italic;">Z_p = D = R</span>,<br /><br /><span style="font-style:italic;">H = {h_{a, b}(<span style="font-weight:bold;">x</span>) = a*<span style="font-weight:bold;">x</span> + b mod p}</span><br /><br />is a UHF, where <span style="font-style:italic;">a, b</span> range over all field elements. (Left as an easy exercise.)<br /><br />Note the gain from this perspective: implementing a UHF is a 'representation-invariant' property of a function, so Stretch 1 becomes possible. Moreover, Stretch 2 now makes more sense: it is only jointly that <span style="font-style:italic;">+</span> and <span style="font-style:italic;">*</span> define a UHF, so whatever complexity measure <span style="font-style:italic;">M</span> we lower-bound for UHFs implies only a tradeoff between <span style="font-style:italic;">+</span> and <span style="font-style:italic;">*</span>.<br /><br />It remains only to sketch the proof of time-space tradeoffs for UHFs, which in fact is a manageable argument along classic lines (the basic form of the argument is attributed to earlier work by Borodin and Cook). 
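<br /><br />As an aside, the 'easy exercise' above is simple enough to check by machine. Here's a quick sketch (p = 7 chosen arbitrarily for the check):

```python
from itertools import product

# Empirical check of the exercise: over Z_p, the family
# H = { h_{a,b}(x) = (a*x + b) mod p : a, b in Z_p } is a UHF.
# For distinct d, d' and any targets r, r', exactly one of the
# p^2 functions maps (d, d') to (r, r'), so the probability is
# 1/p^2 = 1/|R|^2 as the definition requires.
p = 7  # small prime, arbitrary

for d, dp in product(range(p), repeat=2):
    if d == dp:
        continue
    for r, rp in product(range(p), repeat=2):
        hits = sum(1 for a, b in product(range(p), repeat=2)
                   if (a * d + b) % p == r and (a * dp + b) % p == rp)
        assert hits == 1  # Prob = hits / p^2 = 1/|R|^2

print("affine family over Z_%d is a universal hash family" % p)
```

The assertion is really just linear algebra: for d distinct from d', the system a*d + b = r, a*d' + b = r' has a unique solution (a, b), since d - d' is invertible mod p.<br /><br />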
The upshot for us will be that for any program computing (any representation of) <span style="font-style:italic;">f(a, b, x) = a*x + b</span> over an n-digit prime modulus, if <span style="font-style:italic;">T</span> denotes worst-case time usage and <span style="font-style:italic;">S</span> worst-case space, <span style="font-style:italic;">T*S = Omega(n^2)</span>. Very nice! (Although what this actually implies about the larger of the <span style="font-style:italic;">individual</span> time-space products of <span style="font-style:italic;">+</span> and <span style="font-style:italic;">*</span> under this representation is not clear to me at the moment.)<br /><br />Let's adjourn for today... wouldn't want to pull a muscle.http://andysresearch.blogspot.com/2008/05/complexity-calisthenics-part-i.htmlnoreply@blogger.com (Andy D)4tag:blogger.com,1999:blog-19908808.post-5186903159901195030Wed, 16 Apr 2008 23:50:00 +00002008-04-16T20:54:50.287-04:00grad lifeA simple plan to improve your graduate programIt's just this: Food is key. We need more food. (In what follows, I'm speaking not just for MIT theory students, but for all students everywhere.)<br /><br />Cancel the subscription to 'Journal of Timed Networked Multithreaded Aqueous Automata', and a few others. You've just saved about $20,000. <br /><br />Use the money to provide copious snacks for students and faculty. Weekly receptions help, but really we're hungry all the time. To elaborate on that point: <br /><br />-The student center is a tiresome 5-10 minutes away. <br /><br />-Graduate students are low on cash. We work strange hours that discourage grocery shopping (and may not own a car). Some of us are newly weaned from the meal plans of our undergrad days, and we're only slowly learning to provide for ourselves. 
The food around here is expensive.<br /><br />If the department budget is truly an issue, there is another way, practiced with admirable success by UC San Diego's CSE department: recruit grad student volunteers to maintain a stocked snack-room, with foods purchased cheaply in bulk and paid for on the honor system. Of course, a snack-room should also be a social space.<br /><br />Candy and tasty treats help, but it's too easy to over-rely on them and come crashing down. At some point we all wish there were fewer of these around the office. Consider in their stead:<br /><br />-bagels<br />-raisins<br />-apples and bananas<br />-peanuts and peanut butter<br /><br />...all cheap, real-tasting, and calorific.<br /><br />That's it! An easy, cost-effective intervention that will keep students and faculty working happily in their offices on their next theorem or patentable device.http://andysresearch.blogspot.com/2008/04/simple-plan-to-improve-your-graduate.htmlnoreply@blogger.com (Andy D)3tag:blogger.com,1999:blog-19908808.post-8597169660067038809Sat, 05 Apr 2008 02:23:00 +00002008-04-04T22:33:02.634-04:00general mathSome babies grow in a peculiar wayToday I bumped into an order of growth I hadn't seen before, and thought I'd share it for a modest bit of mental aerobics.<br /><br />Readers may well have seen functions of the form <br /><br />f(n) = (log n)^c,<br /><br />known as 'polylogarithmic'. These are important in, e.g., query complexity, where we like the number of queries to be much smaller than the input size n when possible. Also emerging from such studies are the 'quasipolynomial' functions, of the form<br /><br />g(n) = 2^{(log n)^c}.<br /><br />As a warmup--how fast do these things grow?<br /><br /><br />OK, now the main course tonight is the following:<br /><br />h(n) = 2^{2^{(log log n)^c}}.<br /><br />What do you make of these? And what questions need answering before we understand such a growth rate 'well enough'? 
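<br /><br />One way to start: plug in numbers. A rough numerical sketch (c = 2 and base-2 logarithms chosen arbitrarily; I print log_2 of each function, since the raw values overflow quickly):

```python
import math

# Compare the three growth rates at n = 2^k for c = 2 (both choices
# arbitrary).  We report log2 of each function:
#   log2 f(n) = c * log2(k)            for f(n) = (log n)^c = k^c
#   log2 g(n) = k^c                    for g(n) = 2^{(log n)^c}
#   log2 h(n) = 2^{(log2 k)^c}         for h(n) = 2^{2^{(log log n)^c}}
c = 2
for k in [10, 20, 40, 80]:
    log_f = c * math.log2(k)
    log_g = k ** c
    log_h = 2 ** (math.log2(k) ** c)
    print(f"n = 2^{k}: log2 f = {log_f:.1f}, log2 g = {log_g}, log2 h = {log_h:.1f}")
```

Already at these small values, h overtakes g; this matches the asymptotics, since for c > 1 we have (log log n)^c >> c * log log n eventually, hence 2^{(log log n)^c} >> (log n)^c. (For c = 1, by contrast, h(n) collapses to exactly n.)<br /><br />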
I'm unsure and would love to hear your thoughts.http://andysresearch.blogspot.com/2008/04/some-babies-grow-in-peculiar-way.htmlnoreply@blogger.com (Andy D)7tag:blogger.com,1999:blog-19908808.post-4706468474963281369Fri, 21 Mar 2008 18:30:00 +00002008-03-24T23:31:46.380-04:00computabilitygeneral mathNews, and some Number TheoryTime for a personal update: I'm enjoying myself greatly here in Cambridge, and was recently admitted as a transfer student to MIT's EECS department. The move has both personal and academic advantages for me, but let me emphasize that I think UCSD has a lot to offer prospective students, and in addition is home to many cool people whom I miss... I would be happy to offer advice about either school.<br /><br />When I started writing here as an undergrad, I knew very few people in the worlds of TCS and Complexity, and the blog was a way of reaching out to a community I wanted to join. Now, thanks in part to blogging (thru which I met Scott, my advisor), I'm immersed in that community at a school with a large, vibrant Theory group. This is to say that, though writing here still interests me, it no longer answers an urgent personal need, and I am most likely to post new material in response to a request for specific topics or post types.<br /><br /><br />Today I was perusing a favorite book of mine, 'Gems of Theoretical Computer Science' by U. Schöning & R. Pruim. One chapter partially treats the undecidability of determining whether systems of Diophantine equations (polynomial equations where solutions are required to be integral) have a solution. This was one of Hilbert's problems from 1900, solved in 1970 and full of deep math. <br /><br />The authors pose the following exercise: show that the version where variables must take nonnegative values is reducible to the integer version. 
(And vice-versa; also the rational-values version is reducible to the integral version; the converse appears to be open...)<br /><br />Think about it...<br /><br /><br /><br />The reduction they give: Given a set of polynomial equations {P_i(X_1, ... X_k)}, we want to determine if they're simultaneously satisfiable with nonnegative values. Introduce, for each j <= k, the variables Y_{j, 1}, Y_{j, 2}, Y_{j, 3}, Y_{j, 4}.<br /><br />Now add to the system, for each j <= k, the constraint that <br /><br />X_j = Y_{j, 1}^2 + Y_{j, 2}^2 + Y_{j, 3}^2 + Y_{j, 4}^2.<br /><br />Claim: the new system is satisfiable over the integers iff the original one is satisfiable over the nonnegative integers! The proof, of course, is an easy consequence of Lagrange's Theorem that every nonnegative integer is the sum of 4 integer squares.<br /><br />So, my question is, could a reduction be found that doesn't rely on Lagrange's Theorem? Or one relying only on its weaker variants, where the constant 4 is replaced with some constant c > 4? Or maybe for some constant c, the proof is really so simple that I will be satisfied that we are performing this reduction with the cheapest tools.<br /><br />If our plan in the reduction is to constrain the original variables in a form analogous to the above, namely<br /><br />X_j = Q_j(Y, Z, ...),<br /><br />where Y, Z, ... are new integral variables, is there any way around proving a version of Lagrange's Theorem? Generally we show that polynomials are identically nonnegative by expressing them as a sum of one or more squares, e.g., <br /><br />Y^2 + Z^2 - 2YZ = (Y - Z)^2.<br /><br />Using this template, we'd be reliant on Lagrange. However, in his thesis, Minkowski conjectured that there exist nonnegative real polynomials *not* so expressible, and this was proved by Hilbert. 
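<br /><br />Stepping back to the reduction for a moment, the four-square step is easy to sanity-check by brute force; a small Python sketch (the range tested is arbitrary):

```python
from itertools import product

# Brute-force check of the four-square step in the reduction above:
# every nonnegative integer n equals y1^2 + y2^2 + y3^2 + y4^2 for
# some integers y_i (Lagrange), so the added constraint forces
# X_j >= 0 without excluding any nonnegative value.
def four_squares(n):
    """Return some (y1, y2, y3, y4) with sum of squares n, or None."""
    bound = int(n ** 0.5) + 1  # each |y_i| <= sqrt(n); nonneg reps suffice
    for ys in product(range(bound), repeat=4):
        if sum(y * y for y in ys) == n:
            return ys
    return None

for n in range(100):
    assert four_squares(n) is not None  # Lagrange: never fails
```

Note the check would start failing immediately with three squares (already at n = 7, since 7 = 4 + 1 + 1 + 1 needs four), which is some reassurance that the constant 4 is doing real work.<br /><br />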
A simple example I found online is<br /><br />Q(X, Y, Z) = X^4*Y^2 + Y^4*Z^2 + Z^4*X^2 - 3X^2*Y^2*Z^2,<br /><br />and another counterexample is derived systematically in the classy and useful inequalities book 'The Cauchy-Schwarz Master Class' by J.M. Steele. (Artin proved, however, that every poly that's identically nonnegative is expressible as the sum of finitely many *rational* functions squared.)<br /><br />Could it be that one of these creatures is easier to use in the reduction (both to prove that it's nonnegative and that it ranges over all nonnegative values)? Somehow I doubt it. Anyways, I just wanted to point to an instance where a reduction one would expect to be straightforward seems to require fairly nontrivial understanding of the problem domain.http://andysresearch.blogspot.com/2008/03/news-and-some-number-theory.htmlnoreply@blogger.com (Andy D)7tag:blogger.com,1999:blog-19908808.post-3452396220135662238Fri, 07 Mar 2008 21:43:00 +00002008-03-07T18:14:06.772-05:00geometryprobabilityBeasts of Probability and Plane GeometrySay you're trying to predict whether some event <span style="font-style:italic;">E</span> occurs or not. There is another collection of events <span style="font-style:italic;">I_1, I_2, ... I_k</span>, which are positive predictors of <span style="font-style:italic;">E</span>: for every <span style="font-style:italic;">j, E</span> occurs with probability at least <span style="font-style:italic;">.99</span> conditioning on the event that <span style="font-style:italic;">I_j</span> occurs.<br /><br />Can we lower-bound the probability that <span style="font-style:italic;">E</span> occurs conditioning on the event that *at least one* <span style="font-style:italic;">I_j</span> occurs?<br /><br />(Think about it before reading on.)<br /><br /><br /><br />Here's a simple example of what can go wrong: let the underlying probability space be a sequence of <span style="font-style:italic;">n</span> unbiased coin flips. 
Let <span style="font-style:italic;">E</span> be the event that at least <span style="font-style:italic;">2/3</span> of the flips come up heads. For each subset <span style="font-style:italic;">S</span> of <span style="font-style:italic;">{1, 2, ... n}</span> of size exactly <br /><span style="font-style:italic;">.4n</span>, let <span style="font-style:italic;">I_S</span> be the event that all the coin flips indexed by <span style="font-style:italic;">S</span> come up heads.<br /><br />If <span style="font-style:italic;">n</span> is large enough, we have that <br /><br />i) <span style="font-style:italic;">E</span> is almost surely false, yet<br /><br />ii) Almost surely, some <span style="font-style:italic;">I_S</span> is satisfied--even though<br /><br />iii) Conditioning on any fixed event <span style="font-style:italic;">I_S, E</span> becomes almost surely true (since we then expect half of the remaining flips to come up heads, yielding about a .4 + .5*.6 = .7 fraction of heads total).<br /><br />One can also modify this example to make conditioning on the union of the <span style="font-style:italic;">I_j</span>'s actually <span style="font-style:italic;">decrease</span> the probability that <span style="font-style:italic;">E</span> occurs.<br /><br /><br />This kind of conclusion seems somewhat pathological and inconvenient, so it's natural to look for restrictions that prevent it from arising. The simplest would be to restrict the number, <span style="font-style:italic;">k</span>, of predictor variables: for the above hypotheses, we have that the probability of <span style="font-style:italic;">E</span> conditioning on the union of the <span style="font-style:italic;">I_j</span>'s is at least<br /><br /><span style="font-style:italic;">.99 / (1 + .01(k - 1))</span>. 
<br /><br />(To see this, think about the worst possible case, which resembles a 'sunflower' in probability space.)<br /><br />A more interesting direction is to restrict the structure of the predictor events within probability space. For instance, suppose that the probability space is a uniformly drawn point from the unit interval, <span style="font-style:italic;">E</span> is some arbitrary subset of the interval, and each <span style="font-style:italic;">I_j</span> is the indicator variable for some fixed subinterval. Then, regardless of the number of <span style="font-style:italic;">I_j</span>, we can conclude that <span style="font-style:italic;">E </span>occurs with high probability conditioning on the union of I_j's; not quite <span style="font-style:italic;">.99</span>, but close. See my closely related <a href="http://andysresearch.blogspot.com/2007/07/tricky-vote.html">earlier post</a> for details.<br /><br /><br /><br />It is natural to try to extend this result to higher dimensions, and for predictor indicator-sets given by axis-parallel rectangles this succeeds by an induction (although the effect gets exponentially weaker with the dimension). Similar results hold if the sets are required to be 'fat' convex bodies, in which case an argument much like the 1-dimensional one works.<br /><br />However, allowing rotated rectangles destroys the effect even in two dimensions. Here's one interpretation: consider the unit square as a city, and take the set <span style="font-style:italic;">S</span> to be the set of Democratic households.<br /><br />In pathological cases, it's possible to find a covering of the square by overlapping convex 'precincts', such that<br /><br />i) in each precinct, 99% of the households are Democratic, yet<br /><br />ii) overall, 99% of households are Republican!<br /><br />Such sets seem truly bizarre. 
For a long time I was convinced they couldn't exist, but after failing to prove this, I finally tracked down a construction in a Harmonic Analysis textbook by <a href="http://en.wikipedia.org/wiki/Elias_M._Stein">Elias Stein</a> (who, besides seeming impressive in his own right, was PhD advisor to Fields Medalists <a href="http://en.wikipedia.org/wiki/Charles_Fefferman">Charles Fefferman</a> and <a href="http://terrytao.wordpress.com/">Terence Tao</a>). These sets, whose construction resembles a kind of hydra spawning ever more and tinier heads, are related to the more well-known Besicovitch/Kakeya sets. One can even achieve a kind of fantastical limit, in the following theorem for which Stein provides a reference:<br /><br />There exists a subset <span style="font-style:italic;">S</span> of measure zero in the unit square, such that for every point <span style="font-style:italic;">x</span> on the square, there exists a line <span style="font-style:italic;">L</span> thru <span style="font-style:italic;">x</span>, such that <span style="font-style:italic;">S</span> contains all of <span style="font-style:italic;">L</span> except, possibly, <span style="font-style:italic;">x</span> itself!http://andysresearch.blogspot.com/2008/03/beasts-of-probability-and-plane.htmlnoreply@blogger.com (Andy D)0tag:blogger.com,1999:blog-19908808.post-7037671464078653209Tue, 25 Dec 2007 04:34:00 +00002007-12-25T11:16:46.075-05:00general mathgeometryThat's a WrapI had some gift-giving duties to attend to today... 
halfway into the wrapping phase, my mind wandered in a predictable direction:<br /><br /><span style="font-style:italic;">Given a set of rectangular box dimensions, what is the smallest amount of paper that will wrap the box?</span><br /><br />One would assume we want to cut out a rectangle of wrapping paper, since otherwise we get annoying scraps (and also the problem is mathematically trivial if we can cut a cross-type shape).<br /><br />I haven't had any time to toy with this problem, but I had a feeling one particular MIT dude might have... and I was right. True to form, Erik Demaine delivers a whole <a href="http://erikdemaine.org/wrapping/">page</a> about various wrapping problems he's explored with colleagues.<br /><br />To anyone reading who teaches a programming course, I would suggest that a fun algorithms assignment could be spun out of problems of the above type. If the program outputted folding instructions too, so much the better.<br /><br />Happy holidays, all!http://andysresearch.blogspot.com/2007/12/thats-wrap.htmlnoreply@blogger.com (Andy D)3tag:blogger.com,1999:blog-19908808.post-481322462006598511Mon, 17 Dec 2007 20:13:00 +00002007-12-17T18:46:20.671-05:00puzzlesthe infiniteCardinal Rules<span style="font-weight:bold;">1.</span> Pick your favorite countable set <span style="font-style:italic;">S</span>. Let <span style="font-style:italic;">F</span> be a 'nested' family of distinct subsets of <span style="font-style:italic;">S</span>; that is, if <span style="font-style:italic;">A, B</span> are members of <span style="font-style:italic;">F</span>, then either <span style="font-style:italic;">A</span> is contained in <span style="font-style:italic;">B</span> or <span style="font-style:italic;">B</span> is contained in <span style="font-style:italic;">A</span>.<br /><br />Then clearly <span style="font-style:italic;">F</span> can be at most countable... 
right?<br /><br /><br />A puzzle from Bollobas' recent <a href="http://andysresearch.blogspot.com/2007/10/another-heads-up-for-puzzlers.html">book</a>.<br /><br /><br /><span style="font-weight:bold;">2.</span> Given a collection <span style="font-style:italic;">C</span> of functions from <span style="font-style:italic;">N = {1, 2, 3, ...}</span> to <span style="font-style:italic;">N</span>, say that <span style="font-style:italic;">C</span> is <span style="font-style:italic;">unbounded</span> if for any function <span style="font-style:italic;">f</span> (in <span style="font-style:italic;">C</span> or not), there exists a function <span style="font-style:italic;">g</span> in <span style="font-style:italic;">C</span> such that <span style="font-style:italic;">g(i) > f(i)</span> for infinitely many <span style="font-style:italic;">i</span>.<br /><br />Clearly the class <span style="font-style:italic;">C</span> of <span style="font-style:italic;">all</span> functions from <span style="font-style:italic;">N</span> to <span style="font-style:italic;">N</span> is unbounded. Also, standard diagonalization techniques tell us that no countable collection <span style="font-style:italic;">C</span> can be unbounded (exercise).<br /><br />The question then becomes: what is the smallest cardinality of any unbounded collection <span style="font-style:italic;">C</span>? Assuming the <a href="http://en.wikipedia.org/wiki/Continuum_hypothesis">Continuum Hypothesis</a> is false, where does the threshold lie?<br /><br />Fortunately or unfortunately, there seems to be little we can say about this issue within the standard axioms of set theory. Assuming CH is false, the threshold could be as low as the first uncountable cardinal, or as large as the continuum, or in between--this I learned from Jech's encyclopedic book <a href="http://www.amazon.com/Set-Theory-Thomas-Jech/dp/3540440852/ref=pd_sim_b_2_img">Set Theory</a>. 
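<br /><br />For the diagonalization exercise above, here is the standard construction in miniature (the particular family gs is an arbitrary toy example): given a countable C = {g_1, g_2, ...}, set f(i) = 1 + max(g_1(i), ..., g_i(i)), so f exceeds each g_j at every i >= j, and hence no g_j beats f infinitely often.

```python
# Sketch of the diagonalization: against a countable family, the
# diagonal function f dominates each member at all but finitely
# many inputs, so the family is not unbounded.
gs = [lambda i, j=j: (i + j) ** j for j in range(1, 30)]  # toy family g_1..g_29

def f(i):
    # bound the first i functions at input i (gs[:i] is g_1, ..., g_i)
    return 1 + max(g(i) for g in gs[:i])

for j, g in enumerate(gs, start=1):
    # g_j(i) > f(i) can hold only for i < j, i.e., finitely often
    assert all(f(i) > g(i) for i in range(j, 30))
```

It's exactly this move that has no obvious analogue once C is uncountable: the pointwise maximum over uncountably many values need not exist.<br /><br />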
There are many questions of this flavor, where the key construction (in our case, constructing a 'bounding function' for a given 'small' collection <span style="font-style:italic;">C</span>) is easy to do in the countable setting, but essentially impossible to analyze when there are uncountably many requirements floating around. <br /><br />The need to satisfy uncountable collections of requirements in a construction is so common that new axioms for set theory are put forward expressly to assert the possibility of doing so (under some restrictions that make the axioms plausible); 'Martin's Axiom' is an especially widely used axiom of this type. Kunen's <a href="http://www.amazon.com/Theory-Studies-Logic-Foundations-Mathematics/dp/0444868399/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1197924132&sr=1-1">book</a> on set theory seems like a good reference for MA (though I'm just starting to learn about it).<br /><br />Of course, the Continuum Hypothesis itself makes it easier to satisfy collections of requirements of size smaller than continuum, since such requirements are then at most countable! This leads to some bizarre constructions, the possibility of many of which stands or falls with the CH itself. The excellent book <a href="http://www.amazon.com/Problems-Theorems-Classical-Problem-Mathematics/dp/038730293X">Problems and Theorems in Classical Set Theory</a> gives many of these, and Bill Gasarch recently <a href="http://weblog.fortnow.com/2007/11/it-was-stupid-question-or.html#comments">exposited</a> one such application in Euclidean Ramsey theory. On the other hand, Martin's Axiom, which is independent of CH, allows set theorists to prove many interesting results while remaining more 'agnostic' about cardinality issues.<br /><br /><br />I think that set theory is a blast, and that its logical structure resonates with issues in computer science. 
But I'm far from expert on the subject, so if I've said anything inaccurate please let me know.http://andysresearch.blogspot.com/2007/12/cardinal-rules.htmlnoreply@blogger.com (Andy D)2