Archive for November, 2011

Theory Day at Georgia Tech (and also in NYC)

2011-11-28 by Lev Reyzin. 0 comments

This post is by David Pritchard and Lev Reyzin.

Last Friday (11/11) was Theory Day at Georgia Tech. In fact, it was also Theory Day in NYC, although nobody seems to think the choice of date was coordinated or the start of a new national holiday. In Atlanta, Georgia Tech invited Avi Wigderson to give two talks on Thursday (making for two theory days in a row!), one for the broader public and then a math-on-whiteboard talk in the afternoon; and on Friday, there were four talks: by Thomas Dueholm Hansen, Mohit Singh, Alexander Mądry, and Ryan Williams, all aimed at theoreticians.

Dave: I was pretty impressed by the quality of the talks, which were about an hour long each and therefore gave the speakers a good chance to get into some of the technical details that are usually skipped at conferences. I had skimmed a couple of the papers before and got much more intuition from these in-person explanations. Avi’s second talk was probably the most self-contained, on aspects of coding and a generalized Erdős/Gallai/Melchior/Sylvester theorem. Also, we found that Alex may be a pyromaniac, since his examples for the k-server problem involved houses burning down (which you could think of as data requests). I was also pretty impressed by the quality of the lunch, which was a buffet of Indian food.

Lev: While I was a graduate student at Yale and during my year at Yahoo! Research in New York, I attended all the New York Theory Days for 5 years straight, so I was glad to see the tradition catch on in Atlanta too. Apparently, we have Zvi Galil (now Georgia Tech’s College of Computing dean) to thank for the idea of starting both the New York and the Atlanta theory days; so while the dates were not coordinated, the Theory Days, in some sense, were. All four talks were great. I especially enjoyed Alexander Mądry’s accessible talk on his recent progress on the k-server conjecture. Ryan Williams also gave a very nice talk on how algorithms for circuits can imply lower bounds; it gave me some intuitive understanding that I didn’t have before. Finally, I seem to remember the talks being filmed, so I’m hoping the videos will show up on Georgia Tech’s ARC website sometime.

If any readers attended the NYC Theory Day, please share your impressions in the comments!

Midwest Theory Day 2011

2011-11-23 by Aaron Sterling. 0 comments

I attended the 62nd Midwest Theory Day last Sunday, November 13th. Usually there are two such theory days each year, but this was the only one in 2011. The event is like an informal workshop: anyone who wishes may sign up to speak, there are also invited speakers, lunch is provided, and there are no proceedings. Nicole Immorlica organized the event, and did a great job: there was a full day of talks, the room was great, the food was as good as any conference I’ve been to, and the invited speakers were entertaining and educational.

Yael Tauman Kalai gave the first invited talk.  She gave a high-level overview of some hot topics in cryptography.  For example, she provided intuition for results she and her co-authors obtained on designing cryptosystems whose algorithms resist physical attacks.  A dramatic example of a physical attack: someone who wants to break into a SIM card cooks it in a microwave oven for a while; this can cause bits to flip in the secret key, and can even modify the circuitry of the SIM card itself.  (Some other examples of physical attacks are here.) Tauman Kalai presented a formalism in which the attacker could determine the system’s secret key, but would still be unable to produce a second secret key matching the system’s public key, unless the attacker used exponential resources or broke a reasonable cryptographic assumption.  For a system that can defend itself against an adversary capable of obtaining continual memory leakage of the system, see the FOCS 2010 paper Cryptography Resistant to Continual Memory Leakage (co-authors Brakerski, Katz, Vaikuntanathan).

Adam Kalai was the other invited speaker; he provided an overview of his ITCS 2011 paper Compression Without a Common Prior: an Information-Theoretic Justification for Ambiguity in Natural Language (co-authors Juba, Khanna, Sudan).  His talk was controversial: quite a few members of the audience either didn’t get his point, or got it and didn’t agree.  The idea, as I understand it, is to produce a mechanism two computers can use to compress and decompress messages they send to each other, even if their respective compression dictionaries (or “priors”) are different.  The motivation is how natural language works: you and I don’t have the same definitions for all words (and very different life experiences), and yet we can communicate pretty well most of the time (for purposes of the example, anyway).  If the message is large enough, one could of course use an algorithm like Lempel-Ziv, which approaches best-possible compression in the limit.  But perhaps we just want to send one image, instead of thousands, and we would like to take advantage of the fact that both computers have useful priors for what an image is, even if those priors are not the same.  The paper provides an information-theoretic framework for this, but there is, as yet, no implementable algorithm.

The rest of the day was filled with 15- and 20-minute talks by students, postdocs and faculty.  I will completely arbitrarily limit myself to the talks of the two people who sat on either side of me at lunch.  Fortunately, I thought both their talks were great.

Paolo Codenotti presented some brand new work on the group isomorphism problem: Polynomial-Time Isomorphism Test for Groups with no Abelian Normal Subgroups (co-authors Babai, Qiao).  This work extends the SODA 2011 paper Code Equivalence and Group Isomorphism (co-authors Babai, Grochow, Qiao).  The problem solved is the following: let $latex G$ be a group on $latex n$ elements, given to us as its multiplication table of size $latex n^2$.  Let $latex H$ be another group on $latex n$ elements, given in the same way.  If we know that $latex G$ and $latex H$ contain no normal subgroups that are abelian (i.e., commutative), then we can determine whether $latex G$ and $latex H$ are isomorphic in polynomial time.  One example of such a group is $latex A_5$, the alternating group on 5 letters, which has order $latex n=60$.  Another example would be any group built from direct products of copies of $latex A_5$.
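
To make the input format concrete, here is a minimal Python sketch (the names are mine, and this is only the factorial-time baseline, not the Babai-Codenotti-Qiao algorithm): a group on n elements is an n-by-n table, with entry [a][b] giving the product of elements a and b, and we search for a bijection carrying one table to the other.

from itertools import permutations

def is_isomorphism(pi, G, H):
    # pi is a tuple with pi[a] = the image of element a; check pi(ab) = pi(a)pi(b).
    n = len(G)
    return all(pi[G[a][b]] == H[pi[a]][pi[b]] for a in range(n) for b in range(n))

def isomorphic_brute_force(G, H):
    # Try all n! bijections; feasible only for tiny n.
    return any(is_isomorphism(pi, G, H) for pi in permutations(range(len(G))))

# Z_4 versus the Klein four-group: same order, not isomorphic.
Z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
K4 = [[a ^ b for b in range(4)] for a in range(4)]  # bitwise XOR realizes Z_2 x Z_2
print(isomorphic_brute_force(Z4, K4))  # False
print(isomorphic_brute_force(Z4, Z4))  # True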

Tyson Williams gave an overview of his ITCS 2011 paper, Gadgets and Anti-Gadgets Leading to a Complexity Dichotomy (co-authors Cai, Kowalczyk).  Williams has also posted his slides from this talk online.  This paper is in the general category of holographic algorithms, a remarkable subfield of counting complexity started by Les Valiant and explored by Jin-Yi Cai, Williams’s advisor.  Since the slides are online, and the intuition behind the paper’s approach is so visual, I will let the slides speak for themselves, and close this post by simply stating the paper’s main theorem.

Theorem. Over 3-regular graphs $latex G$, the counting problem for any binary, complex-weighted function $latex f$, $latex \displaystyle Z(G) = \sum_{\sigma:V \rightarrow \{0,1\}} \prod_{(u,v) \in E} f(\sigma(u),\sigma(v))$ is either polynomial-time computable or #P-hard.  Furthermore, the complexity is efficiently decidable.

The formalism of $latex f$ captures a wide variety of problems on 3-regular graphs, such as #VertexCover (how many vertex covers does the input graph have?), which is #P-hard.
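
To make the statement concrete, here is a brute-force Python sketch of $latex Z(G)$ (exponential in the number of vertices, of course; the dichotomy classifies when one can do better). Taking f to be 1 when at least one endpoint is chosen and 0 otherwise turns $latex Z(G)$ into the vertex cover counter.

from itertools import product

def Z(vertices, edges, f):
    # Sum over all 0/1 assignments sigma of the product of f over the edges.
    total = 0
    for bits in product([0, 1], repeat=len(vertices)):
        sigma = dict(zip(vertices, bits))
        term = 1
        for (u, v) in edges:
            term *= f(sigma[u], sigma[v])
        total += term
    return total

# f for #VertexCover: an edge contributes 1 iff at least one endpoint is chosen.
cover = lambda a, b: 1 if (a or b) else 0

V = [0, 1, 2, 3]
E = [(u, v) for u in V for v in V if u < v]  # K_4, which is 3-regular
print(Z(V, E, cover))  # 5: the full vertex set plus the four 3-element subsets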

Something you should know about: Quantifier Elimination (Part I)

2011-11-14 by Arnab Bhattacharyya. 12 comments

About a month ago, Ankur Moitra dropped by my office. We started chatting about what each of us was up to. He told me about a machine learning problem that he was working on with Sanjeev Arora, Rong Ge, and Ravi Kannan. On its face, it was not even clear that the problem (non-negative rank) was decidable, let alone solvable in polynomial time. On the other hand, they observed that previous work had already shown the existence of an algorithm using quantifier elimination. Ankur was a little taken aback by the claim and by the power of quantifier elimination. He knew of the theory somewhere in the back of his mind, in the same way that you probably know of Brownian motion or universal algebra (possible future topics in this “Something you should know about” series!), but he’d never had the occasion to really use it till then. On the train ride back home, he realized that quantifier elimination not only showed decidability of the problem but could also be helpful in devising a more efficient algorithm.

Quantifier elimination has a bit of a magical feel to it. After the conversation with Ankur, I spent some time revisiting the area, and this post is a consequence of that. I’ll mainly focus on the theory over the reals. It’s a remarkable result that you definitely should know about!

1. What is Quantifier Elimination?

A zeroth-order logic deals with declarative propositions that evaluate to either true or false. It is defined by a set of symbols, a set of logical operators, some inference rules and some axioms. A first-order logic adds functions, relations and quantifiers to the mix. Some examples of sentences in a first-order logic:

$latex \displaystyle \forall x~ \exists y~(y = x^2)&fg=000000$

$latex \displaystyle \forall y~ \exists x~(y = x^2)&fg=000000$

$latex \displaystyle \forall x,y~((x+y)^2 > 4xy \wedge x-y>0)&fg=000000$

The above are examples of sentences, meaning that they contain no free variables (i.e., variables not bound by a quantifier), whereas a formula $latex {\phi(x_1,\dots,x_n)}&fg=000000$ has $latex {x_1, \dots, x_n}&fg=000000$ as free variables. A quantifier-free formula is one in which no variable is quantified. Note that a quantifier-free sentence is simply a proposition.

Definition 1 A first-order logic is said to admit quantifier elimination if for any formula $latex {\phi(x_1,\dots,x_n)}&fg=000000$, there exists a quantifier-free formula $latex {\psi(x_1,\dots,x_n)}&fg=000000$ which is logically equivalent to $latex {\phi(x_1,\dots,x_n)}&fg=000000$.

If the quantifier elimination process can be described algorithmically, then decidability of sentences in the logic reduces to decidability of quantifier-free sentences, which is often a much easier question. (Note though that algorithmic quantifier elimination of formulas is a stronger condition than decidability of sentences.)
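
A classic worked example, over the reals (the setting of the next section): for the formula $latex {\phi(b,c) = \exists x~(x^2 + bx + c = 0)}&fg=000000$, quantifier elimination produces the familiar discriminant condition:

$latex \displaystyle \exists x~(x^2 + bx + c = 0) \iff b^2 - 4c \geq 0&fg=000000$

The right-hand side mentions only the free variables $latex {b}&fg=000000$ and $latex {c}&fg=000000$, so checking it requires no search over $latex {x}&fg=000000$ at all.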

2. Quantifier Elimination over the Reals

The real numbers, being an infinite structure, cannot be exactly axiomatized in first-order logic, because of the Löwenheim-Skolem Theorem. But the axioms of ordered fields, together with the intermediate value theorem for polynomials, yield a natural first-order theory, the theory of real closed fields; any real closed field has the same first-order properties as the reals.

Tarski (1951) showed that the real closed field admits quantifier elimination. As a consequence, one has the following (Seidenberg was responsible for popularizing Tarski’s result):

Theorem 2 (Tarski-Seidenberg) Suppose a formula $latex {\phi(y_1,\dots,y_m)}&fg=000000$ over the real closed field is of the following form:

$latex \displaystyle Q_1x_1~Q_2x_2~\cdots Q_nx_n~(\rho(y_1,\dots,y_m,x_1,\dots,x_n))&fg=000000$

where $latex {Q_i \in \{\exists,\forall\}}&fg=000000$ and $latex {\rho}&fg=000000$ is a boolean combination of equalities and inequalities of the form:

$latex \displaystyle f_i(y_1,\dots,y_m,x_1,\dots,x_n) = 0&fg=000000$

$latex \displaystyle g_i(y_1,\dots,y_m,x_1,\dots,x_n) > 0&fg=000000$

where each $latex {f_i}&fg=000000$ and $latex {g_i}&fg=000000$ is a polynomial with coefficients in $latex {{\mathbb R}}&fg=000000$, mapping $latex {{\mathbb R}^{m+n}}&fg=000000$ to $latex {{\mathbb R}}&fg=000000$. Then, one can explicitly construct a logically equivalent formula $latex {\phi'(y_1, \dots, y_m)}&fg=000000$ of the same form but quantifier-free. Moreover, there is a proof of the equivalence that uses only the axioms of ordered fields and the intermediate value theorem for polynomials.

I should be more explicit about what “explicitly construct” means. Usually, it is assumed that the coefficients of the polynomials $latex {f_i}&fg=000000$ and $latex {g_i}&fg=000000$ are integers, so that there is a bound on the complexity of the quantifier elimination algorithm in terms of the size of the largest coefficient. (If the coefficients are integers, all the computations only involve integers.) But, even when the coefficients are real numbers, there is a bound on the “arithmetic complexity” of the algorithm, meaning the number of arithmetic operations $latex {+, -, \times, \div}&fg=000000$ performed with infinite precision.

A quick example. Suppose we are given an $latex {r}&fg=000000$-by-$latex {s}&fg=000000$ matrix $latex {M = (M_{i,j})}&fg=000000$, and we’d like to find out whether the rows of $latex {M}&fg=000000$ are linearly dependent, i.e., whether the following condition holds:

$latex \displaystyle \exists \lambda_1, \dots, \lambda_r \left( \neg \left(\bigwedge_{i=1}^r (\lambda_i = 0)\right) \wedge \bigwedge_{j=1}^s (\lambda_1 M_{1,j}+\lambda_2 M_{2,j} + \cdots + \lambda_r M_{r,j} = 0) \right)&fg=000000$

Tarski-Seidenberg immediately yields an algorithm to transform the above formula into one that does not involve the $latex {\lambda_i}&fg=000000$’s, only the entries of the matrix. Of course, for this particular example, we have an extremely efficient algorithm (Gaussian elimination), but quantifier elimination gives a much more generic explanation for the decidability of the problem.
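
For this example, the quantifier-free equivalent is just a rank condition: the rows are dependent exactly when rank(M) < r, equivalently (for r ≤ s) when every r-by-r minor of M vanishes, which is a boolean combination of polynomial equations in the entries. Here is a small numpy check (numerical floating point, so a sanity check rather than an exact decision procedure):

import numpy as np

def rows_linearly_dependent(M):
    # Quantifier-free test: rank strictly below the number of rows.
    M = np.asarray(M, dtype=float)
    return np.linalg.matrix_rank(M) < M.shape[0]

print(rows_linearly_dependent([[1, 2, 3], [2, 4, 6]]))  # True
print(rows_linearly_dependent([[1, 0, 0], [0, 1, 0]]))  # False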

2.1. Semialgebraic sets

If $latex {m=0}&fg=000000$ in Theorem 2, then Tarski-Seidenberg produces a proposition with no variables that can then be evaluated directly, as in the linear dependency example. This shows that feasibility of semialgebraic sets is a decidable problem. A semialgebraic set $latex {S}&fg=000000$ is a finite union of sets of the form:

$latex \displaystyle \{x \in {\mathbb R}^n~|~f_i(x) = 0, g_j(x) > 0 \text{ for all }i=1,\dots, \ell_1, j= 1,\dots, \ell_2\}&fg=000000$

where $latex {f_1,\dots, f_{\ell_1}, g_1,\dots, g_{\ell_2}: {\mathbb R}^n \rightarrow {\mathbb R}}&fg=000000$ are $latex {n}&fg=000000$-variate polynomials over the reals.  The feasibility problem for a semialgebraic set $latex {S}&fg=000000$ is deciding whether $latex {S}&fg=000000$ is empty.  An algorithm for feasibility follows directly from quantifier elimination.
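
For instance, the closed unit disk in the plane is semialgebraic, as the union of two such sets:

$latex \displaystyle \{x \in {\mathbb R}^2~|~1 - x_1^2 - x_2^2 > 0\} \cup \{x \in {\mathbb R}^2~|~1 - x_1^2 - x_2^2 = 0\}&fg=000000$

and deciding its feasibility means deciding whether this union contains any point at all (here, obviously, it does).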

Note that this is in stark contrast to the first-order theory of the integers, for which Gödel’s Incompleteness Theorem shows undecidability. Also, over the rationals, Robinson proved undecidability (in her Ph.D. thesis, advised by Tarski) by showing how to express any first-order sentence over the integers as a first-order sentence over the rationals. This latter step now has several different proofs. Poonen has written a nice survey of the (un)decidability of first-order theories over various domains.

2.2. Efficiency

What about the complexity of quantifier elimination over the reals? The algorithm proposed by Tarski has complexity that is not even elementary recursive! The situation was much improved by Collins (1975), who gave a doubly-exponential algorithm using a technique called “cylindrical algebraic decomposition”. More precisely, the running time of the algorithm is $latex {poly(C,(\ell d)^{2^{O(n)}})}&fg=000000$, where $latex {C}&fg=000000$ measures the size of the largest coefficient in the polynomials, $latex {\ell}&fg=000000$ is the number of polynomials, $latex {d}&fg=000000$ is the maximum degree of the polynomials, and $latex {n}&fg=000000$ is the total number of variables.

A more detailed understanding of the complexity of quantifier elimination emerged from an important work of Ben-Or, Kozen and Reif (1986).  Their main contribution was an ingenious polynomial-time algorithm to test the consistency of univariate polynomial constraints: given a set of univariate polynomials $latex {\{f_i\}}&fg=000000$ and a system of constraints of the form $latex {f_i(x) \leq 0}&fg=000000$, $latex {f_i(x) = 0}&fg=000000$ or $latex {f_i(x) > 0}&fg=000000$, does the system have a solution $latex {x}&fg=000000$?  Such an algorithm exists despite the fact that it’s not known how to efficiently find an $latex {x}&fg=000000$ that makes the signs of the polynomials attain a given configuration.  Ben-Or, Kozen and Reif also claimed an extension of their method to multivariate polynomials, but that analysis was later found to be flawed.
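
To get a feel for the univariate question, here is a small sketch using sympy’s reduce_inequalities (assuming a recent sympy); this is a modern computer-algebra black box for the same problem, not the Ben-Or, Kozen and Reif algorithm.

from sympy import symbols, reduce_inequalities

x = symbols('x', real=True)

# Is there a real x with x^2 - 2 > 0 and x < 2?  The solver returns a
# quantifier-free description of the solution set, roughly:
# (x < -sqrt(2)) | ((sqrt(2) < x) & (x < 2))
print(reduce_inequalities([x**2 - 2 > 0, x < 2]))

# An inconsistent system is reported as False.
print(reduce_inequalities([x**2 < 0]))  # False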

Nevertheless, subsequent works have found clever ways to reduce to the univariate case.  These more recent papers have shown that the complexity of the quantifier elimination algorithm can be made only singly exponential if the number of quantifier alternations (the number of switches between $latex {\exists}&fg=000000$ and $latex {\forall}&fg=000000$) is bounded, or if the model of computation is parallel.  See Basu (1999) for the current record and for references to prior work.

As for lower bounds, Davenport and Heintz (1988) showed that doubly-exponential time is required for quantifier elimination, by explicitly constructing formulas for which the length of the quantifier-free expression blows up doubly-exponentially. Brown and Davenport (2007) showed that the doubly exponential dependence is necessary even when all the polynomials in the first order formula are linear and there is only one free variable. I do not know if a doubly exponential lower bound is known for the decision problem when there are no free variables.


Thanks to Ankur for helpful suggestions. Part II of this post will contain a relatively quick proof of Tarski’s theorem! Stay tuned.

 

Alfred Tarski (1951). A Decision Method for Elementary Algebra and Geometry. RAND Corporation.

Quantum computing questions on new Theoretical Physics site

2011-11-07 by Aaron Sterling. 1 comment

Many CSTheory people probably know there is a new StackExchange Q&A site, Theoretical Physics, which is now in public beta and was originally started by Joe Fitzsimons, co-editor of this blog.  My purpose for this post is to give TCS people a heads-up that there are now 16 questions on that site under the quantum-computing and quantum-information tags that may be of interest to theoretical computer scientists.

My favorite of these questions is Rigorous Security Proof for Wiesner’s Quantum Money?  In this question, Scott Aaronson asks for an explicit upper bound on a value that is “known to exist” according to folklore, but that he and a co-author were unable to find in the literature or derive. Master’s student Abel Molina solves the problem, using a formalism of Gutoski and Watrous.  John Watrous then verifies the solution’s correctness.  There are also contributions by Dan Gottesman and Peter Shor.

In related news, there is now a proposal for a Quantum Information question and answer site on the Stack Exchange Area 51.  This proposal is (mildly) controversial, though, because some people are concerned it would duplicate topics already available on Theoretical Physics.