Archive for September, 2011

It only looks like a homework problem…

2011-09-23 by Aaron Sterling. 5 comments

Almost exactly one year ago (10 Sept 2010), Cem Say asked the following deceptively simple question:

Is the language $latex \{ a^{i}b^{j}c^{k} \mid  i \neq j, i \neq k, j \neq k \}$ non-context-free?

Recall that a language is context-free if it can be accepted by a pushdown automaton.  A pushdown automaton, in turn, is a finite automaton with access to a stack (so it has access to an unbounded amount of memory).  A standard example of a context-free language that cannot be recognized by a finite automaton without a stack is $latex \lbrace a^n b^n \mid n \geq 1 \rbrace$.  If a finite automaton has no access to a stack, it cannot store the value of $latex n$ once $latex n$ gets too large, so it has no way to be certain that the number of $latex b$’s appearing exactly matches the number of $latex a$’s.  This intuition is formalized in the famous pumping lemma, and there is a similar tool, the pumping lemma for context-free languages, that can be used to prove that a language is not context-free.
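
To make the role of the stack concrete, here is a minimal sketch (in Python, purely for illustration; it is not part of the original discussion) of a recognizer for $latex \lbrace a^n b^n \mid n \geq 1 \rbrace$ that uses a stack as its only unbounded memory, in the spirit of a pushdown automaton.

```python
def accepts_anbn(s: str) -> bool:
    """Accept strings of the form a^n b^n with n >= 1, using a stack."""
    stack = []
    i = 0
    # Push one symbol for every leading 'a'.
    while i < len(s) and s[i] == "a":
        stack.append("a")
        i += 1
    if not stack:
        return False  # we need at least one 'a'
    # Pop one symbol for every 'b'; the counts must match exactly.
    while i < len(s) and s[i] == "b" and stack:
        stack.pop()
        i += 1
    # Accept only if the input is exhausted and the stack is empty.
    return i == len(s) and not stack

print(accepts_anbn("aaabbb"), accepts_anbn("aaabb"))  # True False
```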

It is easy to apply a standard technique — the pumping lemma — to show the language Cem Say asked about is not regular.  (We make the exponents large enough that a finite automaton has to repeat a state, and take advantage of that confusion.)  However, there doesn’t appear to be a good way to apply the pumping lemma for context-free languages to show this particular language is not context-free.  Initially, though, some site members thought the question was easy enough to be undergraduate homework, since the definition of the language is so close to that of other languages whose status can be resolved with a pumping lemma.
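
To give a flavour of the easy direction, here is one way the non-regularity argument might go (a sketch of a textbook-style argument, not necessarily the exact one from the discussion).  Suppose the language were regular with pumping constant $latex p$, and consider $latex w = a^{p} b^{p+p!} c^{p+2 \cdot p!}$, which is in the language because its three exponents are distinct.  The pumping lemma writes $latex w=xyz$ with $latex |xy| \leq p$ and $latex |y| \geq 1$, so $latex y = a^{n}$ for some $latex 1 \leq n \leq p$.  Pumping with exponent $latex 1 + p!/n$ (an integer, since $latex n$ divides $latex p!$) yields a string with exactly $latex p + p!$ $latex a$’s, matching the number of $latex b$’s, so the pumped string is not in the language, a contradiction.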

Fortunately, Frank Weinberg knew about Ogden’s Lemma, which is an extension of the pumping lemma for context-free languages (the pumping lemma is a special case of Ogden’s Lemma).  With this tool, he was able to prove that, indeed, the language Cem Say asked about is not context-free.  The proof in this blog entry comes from Frank Weinberg’s answer; I will present the version polished by Tsuyoshi Ito in the comments to that answer.

First, here is a statement of Ogden’s Lemma.

 Ogden’s Lemma: If $latex L$ is a context-free language, then there exists some number $latex p>0$ such that for any string $latex w$ of length at least $latex p$ in $latex L$, and every way of “marking” $latex p$ or more of the positions in $latex w$, $latex w$ can be written as $latex w=uxyzv$ with strings $latex u$, $latex x$, $latex y$, $latex z$, $latex v$, such that
  1. $latex xz$ has at least one marked position,
  2. $latex xyz$ has at most $latex p$ marked positions, and
  3. $latex ux^iyz^iv$ is in $latex L$ for every $latex i \geq 0$.

The pumping lemma for context-free languages is the special case where every position is marked.  Armed with this tool, we can now prove that $latex \lbrace a^{i}b^{j}c^{k} \mid  i \neq j, i \neq k, j \neq k \rbrace$ is not context-free.

Proof (that Cem Say’s language is not context-free): Suppose the language is context-free, and let $latex p$ be the constant given by Ogden’s Lemma.  We will choose a string of the form $latex a^i b^p c^k$ and mark all the $latex b$’s (and nothing else).  We pick $latex i$ and $latex k$ so that, no matter how many of the marked $latex b$’s actually get pumped, there is one pumping exponent that makes the number of $latex b$’s equal to $latex i$ and another that makes it equal to $latex k$.  For this, $latex i$ and $latex k$ have to come from the set $latex S= \bigcap_{1 \leq n \leq p} \lbrace p-n + mn \mid m \in \mathbb{N}_0\rbrace$, where $latex \mathbb{N}_0$ is the set of non-negative integers.  The set $latex S$ is infinite, because it contains $latex p+ t\ell$ for all $latex t \geq 0$, where $latex \ell$ is the least common multiple of $latex \lbrace 1, \ldots, p \rbrace$.  So we can pick $latex i$ and $latex k$ in $latex S$ that are distinct and different from $latex p$, which makes $latex a^i b^p c^k$ a member of the language.  Now apply Ogden’s Lemma to this string.  Neither of the pumped pieces $latex x$ and $latex z$ can contain two different letters, since pumping such a piece would destroy the $latex a^{*}b^{*}c^{*}$ form while condition (3) keeps the string in $latex L$.  So the pumped $latex b$’s sit in single-letter pieces, and at most one of the $latex a$-block and the $latex c$-block is pumped along with them; say the $latex c$-block is untouched (the other case is symmetric).  Since $latex k \in S$, some pumping exponent makes the number of $latex b$’s equal to $latex k$, the number of $latex c$’s.  By condition (3) of Ogden’s Lemma the resulting string is in $latex L$, contradicting the original definition of $latex L$.  So the language cannot be context-free after all.
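
To see what the set $latex S$ looks like, here is a tiny worked example (mine, not from the original answer): for $latex p = 3$ the three sets being intersected are $latex \lbrace 2, 3, 4, \ldots \rbrace$ (for $latex n=1$), $latex \lbrace 1, 3, 5, \ldots \rbrace$ (for $latex n=2$), and $latex \lbrace 0, 3, 6, \ldots \rbrace$ (for $latex n=3$), so $latex S = \lbrace 3, 9, 15, 21, \ldots \rbrace$, which is exactly $latex \lbrace p + t \cdot \mathrm{lcm}(1,\ldots,p) \mid t \geq 0 \rbrace = \lbrace 3 + 6t \mid t \geq 0 \rbrace$.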

For questions and answers on related topics, click on the formal languages tag.

Conference Reports: EuroComb, Katona 70, ESA

2011-09-07 by daveagp. 2 comments

From August 28 to September 7 I participated in the following three meetings:

  1. The 6th EuroComb, in Budapest. This conference happens every 2nd year. There were 5 days, 4 parallel sessions, and discrete mathematics of every flavour, especially graphs, randomness, geometry, set systems, and algorithms.
  2. Gyula Katona’s 70th birthday conference, also in Budapest at the Renyi institute.
  3. The 19th European Symposium on Algorithms, in Saarbrucken. This was 3 days, with 2 parallel sessions plus more for satellite conferences, and the algorithmic topics included approximation algorithms, parameterized complexity, strings, geometry, graphs, SAT, and practical experimental work.
  4. (Saarbrucken continued with two more days of the ALGO meta-conference, including ATMOS, WAOA, IPEC, and AlgoSensors. I usually attend WAOA, but this year after seeing about 80 talks in 10 days, I left for home.)

EuroComb

Here is a little bit of info about the conference. EuroComb has a small level of refereeing, allowing people to present anything, including results already published elsewhere. It was a very social conference, and this reminded me of other rarer-than-annual conferences like ISMP and CANADAM. You get to meet many people, and they generally present their best recent results, which I liked. There were a lot of people of PhD/PostDoc age but also a lot of professors, and there were participants from all over the world. (It was even a nice chance for me to make a few new acquaintances from Canadian universities.) I talked about a paper in the realm of discrete geometry that used computational methods, joint with Filip Morić.

One major plenary was given by Nets Hawk Katz. He and Larry Guth recently solved* Erdős’ distinct distances problem: show that for any n points in the plane, the number of distinct distances among these points is at least Ω(n/√log n). The * is because they only got Ω(n/log n); still, it improves polynomially on all previous bounds. I have heard some discussion of the result and its proof; overall it is elegant, although the talk was not well-prepared.

In more algorithmic news, Dan Král won a major award, and his plenary talk was mostly about his work on fast algorithms for variants of small-treewidth problems, extending Courcelle’s work on FOL/MSOL languages expressing very general problems that can be solved in such families. David Conlon also won the same award, and his talk was about better understanding of the tower and “wowzer” functions arising in proofs of the Szemerédi Regularity Lemma and relatives (e.g., the Graph Removal Lemma). I learned that while the tower function is T(k) = 2^(2^…^(2^2)…) where k 2’s appear, the wowzer function is W(k) = T(T(…T(2)…)) where there are k T’s.
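
To unpack those definitions with small values: T(1) = 2, T(2) = 4, T(3) = 16, T(4) = 65536, while already W(2) = T(T(2)) = T(4) = 65536, and W(3) = T(65536) is a tower of height 65536.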

Several interesting open problems arose at EuroComb, and here is one of them.  The rainbow connection number of a graph is the minimum number of colours one needs to apply to the edges so that every pair of vertices is connected by a rainbow path (one with no repeated colours). Is there a constant-factor approximation algorithm for the rainbow connection number? The concept was defined only a few years ago, but there has already been an explosion of papers on the topic.
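
To get a feel for the definition (my examples, not from the talk): a path with n edges has rainbow connection number exactly n, since the unique path between its endpoints must use all n edges with distinct colours, while a complete graph has rainbow connection number 1, since every pair of vertices is joined by a single edge.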

Another highlight: Ken-ichi Kawarabayashi put his cat on one of his slides.

Katona 70

There is a strong tradition of birthday conferences for researchers in Budapest, and apparently Gyula Katona was even one of the forces behind this tradition; his friends had a lot of very nice things to say and presents to give him. For example, he received an Indian decoration usually reserved for elephants. He is perhaps most famous for the Kruskal-Katona theorem, and indeed the majority of talks were about extremal problems in hypergraphs. My former EPFL colleague Dömötör Pálvölgyi gave a very entertaining talk about determining max/min elements with a lie: it takes about 87n/32 queries and this crazy number is tight. From another talk by Alex Sidorenko I learned a nice trick which gave me some intuition about “virtual valuations” in algorithmic game theory, which I have posted on my blog.

ESA

This is my 4th year in a row attending ESA, and it is one of my favourite conferences because many talks are about work that can be explained to a general algorithmic audience. This year, for the first time, they have given each person a USB key with electronic proceedings, which are truly awesome; not simply a .pdf dump, but a fantastic browser-navigable html/pdf hybrid.

The best paper went to Nikhil Bansal and Joel Spencer for work on a deterministic version of Spencer’s “6 Standard Deviations Suffice” work on discrepancy. This is the 2nd year in a row the best paper went to an approximation-algorithms paper, which may not be so much of a surprise given that it’s also the 2nd year in a row it went to a paper co-authored by Nikhil.

The business meeting was unusual, despite (unlike some previous years) the absence of beer. There was one serious proposal for ESA 2013, in Sophia-Antipolis, France, and one half-joking proposal in Muscat, Oman. The presenter of the latter moved there last week and basically just wanted to ensure there was some “choice” for the voters (he admitted as much and also had no price estimates). The constituency picked France over Oman by a 33-22 vote, but I would not say it was a close call, as everyone could see how everyone else voted and some people seemed to vote just to make the tally closer. I’m not sure it was a useful exercise, since it made the business meeting non-trivially longer. It is hard to find people willing to host conferences, and I am glad the final result was the one I preferred. Vive la France!

From a technical perspective, I really liked a few talks in different areas:

  1. a result about MAX-SAT (one of 5 in total at ALGO!) by Roth, Azar, and Gamzu, since it showed a matroid arising naturally out of nowhere,
  2. a constructive algorithm for “ray shooting depth” by Ray (who gave the talk and avoided making the obvious joke), Mustafa, and Shabbir, a plane geometry problem that previously had only a non-constructive topological proof,
  3. the best student paper, by Gawrychowski, on text searching in compressed files,
  4. a paper by Verschae and Wiese about scheduling unrelated machines, giving an elegant integrality gap construction and a simpler version of an existing algorithmic result from FOCS 2009,
  5. nice approximation results in a model of graph vehicle routing by Schupbach and Zenklusen,
  6. a talk by Justin Ward (joint w/ Feldman, Naor, Schwartz) explaining/improving many local search algorithms… for me it finally made clear what’s special about claw-free graphs.

Last but not least, I gave the very last talk at ESA, on a colouring problem for set systems (joint w/ Bollobás, Rothvoß, and Scott). I also by chance gave the last 2 talks at WAOA last year! Mercifully, the audience was kind/caffeinated enough to stay awake.

What does it really mean to prove the Graph Minor Theorem?

2011-09-01 by Aaron Sterling. 5 comments

In previous posts about graph minors, we reviewed the definitions of graph minors and topological minors and looked at a relationship between the two, and we explored whether there was a relationship between the computational hardness of a set of forbidden subgraphs and the graph class defined as all graphs that avoid that set of forbidden subgraphs.  Both of those questions are fundamentally related to the Graph Minor Theorem of Robertson and Seymour.  Today, I’d like to take a step back and take a big-picture look at the Graph Minor Theorem and its relationship to computer science, by featuring an answer Timothy Chow gave to a question Ryan Williams asked about the role of the Axiom of Choice in TCS.

Ryan’s question, which he asked in October 2010, was, “What interesting theorems in TCS rely on the Axiom of Choice (or, alternatively the Axiom of Determinacy)?” The Axiom of Choice, of course, is a (sometimes controversial) axiom extending Zermelo-Fraenkel set theory, and at this point it is generally accepted as part of the foundations of mathematics.  Intuitively, the Axiom of Choice says that whenever you have a mathematical construction involving a collection of nonempty bins, it is legal to produce a set that contains exactly one item from each bin.  While this might seem “obviously true,” once this intuition is applied to infinitely many bins, there are nonconstructive and perhaps counterintuitive consequences.  So it is natural to wonder, in the world of TCS, where the sets of bins we encounter are unlikely to be “pathologically infinite,” whether we really need the Axiom of Choice to prove anything.  (I won’t discuss the Axiom of Determinacy, except to say that it is believed to be consistent with the Zermelo-Fraenkel axioms but is inconsistent with the Axiom of Choice; set theorists sometimes put it forth as a “better” axiom, but it has not been widely accepted by mathematicians outside of logic.)
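
In symbols (just to make the bins picture precise), the axiom says that for every family $latex (A_j)_{j \in J}$ of nonempty sets there is a choice function $latex f$ with $latex f(j) \in A_j$ for every $latex j \in J$.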

Janne Korhonen answered Ryan’s question by noting that the proof of the Graph Minor Theorem fundamentally uses Kruskal’s Tree Theorem, which in turn uses the Axiom of Choice.  A fine answer, and one might think that was the end of the story — but one would be wrong, as Timothy Chow demonstrated in a followup answer which I will now quote from.

What about arithmetical statements whose proof requires something like Koenig’s lemma or Kruskal’s tree theorem? Don’t these require a weak form of the axiom of choice? The answer is that it depends on exactly how you state the result in question. For example, if you state the graph minor theorem in the form, “given any infinite set of unlabeled graphs, there must exist two of them such that one is a minor of the other,” then some amount of choice is needed to march through your infinite set of data, picking out vertices, subgraphs, etc. However, if instead you write down a particular encoding by natural numbers of the minor relation on labeled finite graphs, and phrase the graph minor theorem as a statement about this particular partial order, then the statement becomes arithmetical and doesn’t require AC in the proof.

So the question comes down to, “What do we really mean in theoretical computer science when we talk about a mathematical proof of an infinitary statement?”  To quote from Chow’s answer again:

Most people feel that the “combinatorial essence” of the graph minor theorem is already captured by the version that fixes a particular encoding, and that the need to invoke AC to label everything, in the event that you’re presented with the general set-theoretic version of the problem, is sort of an irrelevant artifact of a decision to use set theory rather than arithmetic as one’s logical foundation. If you feel the same way, then the graph minor theorem doesn’t require AC. (See also this post by Ali Enayat to the Foundations of Mathematics mailing list, written in response to a similar question that I once had.)

So, in the final analysis, at least in Chow’s opinion, there appears to be no example of a TCS theorem that requires the Axiom of Choice to prove.  His final suggestion to Ryan is that it may be more fruitful to ask whether there are TCS statements that require large cardinal axioms to prove, instead of the Axiom of Choice.  Harvey Friedman has apparently demonstrated some artificial examples of finitary graph theory statements that require the consistency of large cardinals to prove.  So if artificial examples exist already, perhaps natural examples are around the corner.