On Endre Szemerédi’s Gifts to Computer Science

Personally, I was so very pleased to hear that Endre Szemerédi won the 2012 Abel Prize. In my eyes, this sentiment should be shared by all mathematicians and certainly by all who study the theory of computation. Szemerédi’s contributions to computer science are immense. The first examples that come to mind are most probably Szemerédi’s regularity lemma and the AKS sorting networks due to Miklós Ajtai, János Komlós, and Endre Szemerédi. On this occasion, we (Boaz, Parikshit and I) thought we would point out some of the other reasons computer science should be thankful for the research of Endre Szemerédi. Our examples should by no means be taken as an exhaustive list.

Space-bounded derandomization

Omer Reingold

A question close to my heart is whether randomness can save memory, or whether, for every randomized algorithm, there exists a deterministic algorithm that solves the same computational problem and does not use much more memory. A prime example of a computational problem that has a very memory-efficient randomized algorithm is connectivity of undirected graphs (graphs where edges can be traversed in both directions). The randomized algorithm is simply a random walk: start at any node of the graph and at each time step move to a random neighbor of the current node. Aleliunas, Karp, Lipton, Lovász and Rackoff showed in 1979 that on an n-node connected, undirected graph the random walk (with an arbitrary start node) will visit every node of the graph with very high probability after a polynomial (in n) number of steps. The memory needed to carry out this algorithm is logarithmic (that is, proportional to the memory needed to memorize the name of a single node). At the time, the best deterministic algorithm was Savitch’s algorithm, which requires \log^2 n memory (this is still the best algorithm for connectivity in directed graphs). The first improvement of Savitch’s algorithm (for undirected graphs) was an algorithm due to Noam Nisan, Endre Szemerédi and Avi Wigderson (NSW), which requires only \log^{3/2} n memory.
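To make the algorithm concrete, here is a minimal Python sketch (my own illustration, not taken from any of the papers) of the random-walk test, specialized to deciding whether two given nodes s and t are connected. Only the current node and a step counter need to be stored, which is where the logarithmic memory bound comes from; the step budget below is an illustrative polynomial, not the tight bound.

    import random

    def st_connected_by_random_walk(adj, s, t, steps=None):
        # Walk from s for a polynomial number of steps; accept if t is ever reached.
        # `adj` maps each node to the list of its neighbors.
        n = len(adj)
        if steps is None:
            steps = 10 * n ** 3  # illustrative polynomial budget
        current = s
        for _ in range(steps):
            if current == t:
                return True
            current = random.choice(adj[current])
        return current == t  # with high probability False when s and t are in different components

    # Example: a 4-cycle (0 and 2 connected) vs. two disjoint edges (0 and 2 disconnected).
    cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    two_edges = {0: [1], 1: [0], 2: [3], 3: [2]}
    print(st_connected_by_random_walk(cycle, 0, 2))      # True (with high probability)
    print(st_connected_by_random_walk(two_edges, 0, 2))  # False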

I read and enjoyed the NSW paper as a grad student, and it is certainly one of Szemerédi’s contributions that directly influenced my research. But a different contribution that was potentially more important for my own research (as well as for Complexity Theory as a whole) is “the other” (less known) AKS paper (Ajtai, Komlós, and Szemerédi), “Deterministic simulation in LOGSPACE.” This paper initiated the study of pseudorandom generators that fool memory-bounded computations, a concept that turned out to be immensely useful. The model of memory-bounded computations used by AKS (layered read-once branching programs) is the model we still use today. The AKS generator takes \log n uniformly distributed bits and expands them to \log^2 n / \log\log n bits that fool algorithms with \log n bits of memory (polynomial-width branching programs), up to polynomially small error. Interestingly, these modest parameters are (to the best of my knowledge) not subsumed by more recent work.
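For readers unfamiliar with the model, here is a toy Python sketch (my own illustration, not from the AKS paper) of a width-w layered read-once branching program and of its acceptance probability under uniform bits; a generator fools such programs if the acceptance probability on its outputs is within a small error of the probability computed below.

    import itertools

    # A layered read-once branching program of width w over n bits: at step i the
    # program is in one of w states and, after reading the i-th input bit, moves to
    # transition[i][state][bit].  It accepts if the final state is accepting.

    def evaluate_robp(transition, accepting_states, bits):
        state = 0
        for i, bit in enumerate(bits):
            state = transition[i][state][bit]
        return state in accepting_states

    def acceptance_probability(transition, accepting_states, n):
        # Exact acceptance probability under truly uniform bits
        # (exponential-time enumeration; fine for a tiny toy example).
        accept = sum(evaluate_robp(transition, accepting_states, bits)
                     for bits in itertools.product((0, 1), repeat=n))
        return accept / 2 ** n

    # Example: a width-2 program on 4 bits that computes the parity of its input.
    n, w = 4, 2
    parity = [[[state ^ bit for bit in (0, 1)] for state in range(w)] for _ in range(n)]
    print(acceptance_probability(parity, {1}, n))  # 0.5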

More on these two papers coauthored by Szemerédi (NSW and the AKS generator paper) can be found in this (excellent, though by now outdated) survey paper and potentially also in future posts.

Coding theory

Parikshit Gopalan

Endre Szemerédi is famous for his work on the size of subsets of the integers that do not contain arithmetic progressions, building on earlier work of Roth. Questions about the asymptotic size of these subsets are of interest even over finite fields, and are in fact related to a fundamental question in coding theory (so, somewhat surprisingly, there are practical applications).

The question in coding theory is the following: How much information can be encoded in an error-correcting code of length n and distance d?

For simplicity, let us focus on linear codes. We are asking how many of the n symbols need to be parity check symbols (that is, added for redundancy). Let us call this number h. The Hamming bound uses a sphere-packing argument to show that (ignoring floors, ceilings and -1s) h \geq \frac{d}{2}\log n. Over the binary alphabet, the famous (and ubiquitous) BCH codes give a matching upper bound. Over a q-ary alphabet, though, the BCH bound is h \leq d(1 - 1/q)\log n. This only matches the Hamming bound when q=2. (This is a recurring theme in coding theory: 2 really is a very special number.) Even when d=4 and q=3, we don’t know the right answer. Surely this has to be easy? And what does it have to do with arithmetic progressions?
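To see the gap concretely, here is a quick toy calculation (my own, using the approximate formulas above with logarithms to base q):

    import math

    # Compare the approximate Hamming lower bound h >= (d/2) log n with the
    # approximate q-ary BCH upper bound h <= d(1 - 1/q) log n (logs to base q;
    # floors and small additive terms are ignored, as in the text).

    def hamming_lower(n, d, q):
        return (d / 2) * math.log(n, q)

    def bch_upper(n, d, q):
        return d * (1 - 1 / q) * math.log(n, q)

    d = 4
    for q in (2, 3):
        n = q ** 10
        print(f"q={q}: Hamming lower bound {hamming_lower(n, d, q):.0f}, "
              f"BCH upper bound {bch_upper(n, d, q):.0f}")
    # q=2: the bounds coincide (20 vs. 20); q=3: there is a gap (20 vs. ~27).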

Suppose we want to construct a subset of F_3^h with no 3-term arithmetic progressions. Assume we have a length-n linear code C of distance 4 with h parity checks. Write down its parity check matrix H, and think of its columns as points in F_3^h. Since there are no codewords of weight 3, there are no three columns x, y, z that satisfy the equation x + y = 2z, which is the same as x + y + z = 0 over F_3. So we get n points with no 3-term AP in F_3^h.

Moreover, there is a converse. Assume that someone gave us a large set S \subseteq F_3^h containing n elements with no 3-term AP. We could use its elements as the columns of a parity check matrix H. This guarantees that no three columns sum to 0. Of course, it could be that x + y - z = 0 for some columns x, y, z. To eliminate such possibilities, we simply add the all-1s vector as a row of H. We now get a code of length n and distance 4 with h+1 parity checks.
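Both directions are easy to check by brute force on tiny examples. Here is a hedged Python sketch (my own toy code) of the converse: start from a 3-term-AP-free set in F_3^2, append an all-1s coordinate to form the columns of H, and verify that every nontrivial F_3-linear combination of at most three columns is nonzero, i.e., that the resulting code has distance at least 4.

    import itertools

    def is_ap_free(S):
        # No three distinct points x, y, z with x + y + z = 0 coordinate-wise over F_3
        # (equivalently, no 3-term arithmetic progression).
        return not any(all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))
                       for a, b, c in itertools.combinations(S, 3))

    def distance_at_least_4(columns):
        # Distance >= 4 iff no nontrivial F_3-combination of at most 3 columns vanishes.
        length = len(columns[0])
        for k in (1, 2, 3):
            for cols in itertools.combinations(columns, k):
                for coeffs in itertools.product((1, 2), repeat=k):
                    if all(sum(c * v[i] for c, v in zip(coeffs, cols)) % 3 == 0
                           for i in range(length)):
                        return False
        return True

    # A 3-term-AP-free subset of F_3^2 (a small "cap set"), so h = 2.
    S = [(0, 0), (0, 1), (1, 0), (1, 1)]
    assert is_ap_free(S)

    # Columns of H: each element of S with an all-1s coordinate appended (h+1 = 3 rows).
    H_columns = [s + (1,) for s in S]
    print(distance_at_least_4(H_columns))  # True: a length-4 ternary code of distance >= 4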

Bibliographic notes: The first upper bound on the size of AP-free sets is due to Klaus Roth from the 1950s. It holds both for the integers and for F_3^n. Szemerédi strengthened Roth’s theorem for the integers. The first improvement upon Roth in F_3^n came just last year, due to Michael Bateman and Nets Katz.

Additive combinatorics and randomness extraction

Boaz Barak

Many of my papers used or drew on Szemerédi’s research, probably in more ways than I’m aware of. One example comes from joint works with Russell Impagliazzo, Avi Wigderson, and others on the randomness extraction problem. This is the problem where we wish to produce output that behaves like a sequence of independent coin tosses (perhaps for use in a randomized algorithm, or to generate cryptographic keys), but we have access only to a weak source {X} of randomness that has some entropy in it, but is not distributed like the uniform distribution (e.g., measurements of user typing patterns, or network latencies). Unfortunately, this is generally impossible, but luckily this impossibility goes away if we assume that we can get several independent samples from the source {X} (or even samples from several independent sources of different high-entropy distributions). In fact, one can show that a random function can extract essentially all the entropy from only two independent samples from {X}, and hence such a function yields what we call a two-source extractor. However, one cannot efficiently compute a random function, and hence the problem was raised of finding an explicit such extractor. (I should note that although this problem sounds very practical, its main appeal and applications are in theoretical computer science and combinatorics, since in practice people are happy to apply some cryptographic hash function to a single sample from the source, and this seems to work well, unless there is not much entropy in the source in the first place.)

Works of Chor-Goldreich and Vazirani in the 1980s gave an explicit two-source extractor for an {n}-bit source {X} that contains more than {n/2} bits of entropy, but there were no such extractors for sources of {\delta n} entropy with {\delta < 1/2}, even if we allow a larger constant number of samples than two. In our paper, we gave such an extractor, and the tool we used was Bourgain-Katz-Tao’s finite field analog of the Erdős-Szemerédi Sum-Product Theorem, which (slightly rephrased) says that for an {N}-sized set {X} over the reals, {|X\cdot X + X| \geq N^{1+\epsilon}} for some absolute constant {\epsilon>0}, where for sets {A,B} we define {A+B = \{ a+b : a\in A,b\in B \}} and {A \cdot B = \{ a\cdot b : a\in A,b\in B \}}. This is a beautiful result, for there is in fact a simple “book proof” (that indeed can be found on page 285 of the book, see also here). The proof outline is that, assuming otherwise, one considers the {\leq N^{2+\epsilon}} points in {\mathbb{R}^2} of the form {(a,ab + c)}, with {a\in X} and {ab+c \in X\cdot X+X}, and the {N^2} lines of the form {t \mapsto tb+c} with {b,c \in X}. One can show that if we draw these points in the plane as vertices and draw these lines between them, we get a graph of {N^{2+\epsilon}} vertices and roughly {N^3} edges that is “somewhat planar”, in the sense that there are only {N^4} points where two edges cross one another. If we now subsample and keep each vertex in the graph with probability {1/(10\sqrt{N})}, then it is likely that most of these crossings will disappear, and we’ll get an (essentially) planar graph with {\sim N^{1.5 + \epsilon}} vertices but {\sim N^2} edges, a contradiction to Euler’s formula.
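For completeness, here is the expectation calculation behind that last subsampling step (a sketch of my own, plugging {p = 1/(10\sqrt{N})} into the bounds above): the expected number of surviving vertices is {p \cdot N^{2+\epsilon} = \tfrac{1}{10}N^{1.5+\epsilon}}, the expected number of surviving edges is {p^2 \cdot N^3 = \tfrac{1}{100}N^2}, and the expected number of surviving crossings is at most {p^4 \cdot N^4 = \tfrac{1}{10^4}N^2}. Deleting one edge per surviving crossing thus leaves a planar graph with roughly {N^{1.5+\epsilon}/10} vertices and still {\Omega(N^2)} edges, which violates the planar bound {e \leq 3v - 6} once {\epsilon < 1/2}.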

What does the Sum-Product Theorem have to do with extracting randomness?

The construction of an extractor based on the finite field sum-product theorem is actually quite simple to describe. It’s also simple to analyze, at least if we take the (overly simplistic) view that all random variables we deal with are uniform distributions over sets, and hence their entropy is the logarithm of the size of these sets. Now, we can interpret {n}-bit strings as elements of the finite field of size {2^n}, and so if we have some source {X} over {n} bits with entropy {\delta n}, in our view it corresponds to a {2^{\delta n}}-sized set of field elements. Now the Sum-Product theorem says that the source {X\cdot X +X}, which we can simulate using three independent samples of {X}, will correspond to a set of size at least {2^{\delta n (1+\epsilon)}}. In other words, by spending three samples we increased the entropy from {\delta n} to {(1+\epsilon)\delta n}, and if we repeat this enough times we’ll get to entropy {n}, at which point we get the uniform distribution (or we can stop at entropy {n/2} and use the prior works mentioned before).
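Here is a toy numerical experiment (my own illustration, over a small prime field rather than the field of size {2^n}, with {X} uniform over a random set) showing this set-size growth:

    import math
    import random

    # One step of the map X -> X*X + X over a small prime field F_p.  In the
    # simplistic "uniform over a set" view above, log2 of the set size plays the
    # role of entropy, so the growth in set size mirrors the claimed entropy gain.

    p = 1009                                  # a small prime, standing in for the field of size 2^n
    random.seed(0)
    X = set(random.sample(range(p), 32))      # a random 32-element "source"

    XX_plus_X = {(a * b + c) % p for a in X for b in X for c in X}
    print(len(X), len(XX_plus_X))                         # the set grows substantially
    print(math.log2(len(X)), math.log2(len(XX_plus_X)))   # "entropy" before and after
    # (For such a tiny example the new set nearly fills F_p; the theorem concerns the
    #  regime where X is much smaller than the field.)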

However, to translate the set-size argument into entropy, we needed to use Gowers’s quantitative version of what I call a “Magic Lemma” by Balog and (yes, again) Szemerédi. This lemma says that if, for {N}-sized sets {X,Y}, the distribution {X+Y} has entropy less than {(1+\epsilon)\log N}, then there are subsets {X',Y'} of size roughly {N}, such that {|X'+Y'| < N^{1+10\epsilon}}. (The same statement holds for the operation {\cdot} instead of {+}.) This turns out to be exactly what we need, since, roughly speaking, it says that if we have the guarantee that either {+} or {\cdot} significantly increases the size of the sets we’re dealing with, then we know that it also increases the entropy, thus allowing us to argue that {X\cdot X + X} has larger entropy than {X}.

Why do I call it a “Magic Lemma”? Let’s think about the Lemma’s statement. If {X+Y} has lowish entropy, this roughly means that there is a not too big set {S} (say of size less than {2^{(1+\epsilon)\delta n}}) such that with some noticeable probability {p =\Omega(1)}, if we pick {x} from {X} and {y} from {Y}, we’ll get that {x+y} is in {S}. We can picture this by drawing a bipartite graph, with left vertices corresponding to members of {X} and right vertices corresponding to members of {Y}, and adding the edge {(x,y)} for every pair where {x+y \in S}. The problem is that the support of {X+Y} can be huge, since we have no control over what happens to the majority of the pairs {(x,y)} where {x+y} is not in {S}. Intuitively, what the Lemma seems to say is that we could find subsets {X'\subseteq X} and {Y' \subseteq Y} of size {\Omega(N)} such that the induced graph would be complete. This is generally impossible, since a {2N}-vertex bipartite graph could have average degree {N/10} but no induced bipartite clique of superlogarithmic size. So we seem stuck, but the magic is that something slightly weaker turns out to be true: we can find subsets {X',Y'} of {\Omega(N)} size where every {x\in X',y\in Y'}, even if not directly connected, still have roughly the “right” number (i.e., {\Omega(N^2)}) of length-{3} paths between them. This means that we can express {x+y} as a combination {a-b+c} with {a,b,c \in S} in {\Omega(N^2)} different ways, which upper bounds {|X'+Y'|} by {O(|S|^3/N^2)}. The proof of this magic lemma is also quite simple (e.g., see Lemma 4 in this blog post). Roughly speaking, if you choose {X'} to be the neighborhood of a random point {y\in Y}, then we almost get what we want, in the sense that, after some pruning, every vertex {x\in X'} has, say, at least {p^5N^2 = \Omega(N^2)} paths of length {2} to all but, say, a {p^2/100} fraction of the other vertices in {X'}. Now, if we let {Y'} be the set of vertices that have at least {p^2N/10} neighbors in {X'}, we’re done.
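To make the final counting step concrete, here is a toy Python sketch (my own illustration, on a trivially structured example where every pair is an edge) of the bound {|X'+Y'| \leq |S|^3 / t} when every pair in {X'\times Y'} has at least {t} length-{3} walks:

    import itertools

    # In the bipartite graph with an edge (x, y) whenever x + y lies in S, every
    # length-3 walk x - y1 - x1 - y gives a representation
    #     x + y = (x + y1) - (x1 + y1) + (x1 + y) = a - b + c   with a, b, c in S,
    # and the map (y1, x1) -> (a, b, c) is one-to-one.  Hence if every pair in
    # X' x Y' has at least t such walks, then |X' + Y'| <= |S|^3 / t.

    N = 16
    X = Y = list(range(N))
    S = {x + y for x in X for y in Y}   # trivial example: every pair (x, y) is an edge

    def length_3_walks(x, y):
        return sum(1 for y1 in Y for x1 in X
                   if x + y1 in S and x1 + y1 in S and x1 + y in S)

    t = min(length_3_walks(x, y) for x, y in itertools.product(X, Y))
    sumset = {x + y for x in X for y in Y}
    print(len(sumset), len(S) ** 3 / t)   # |X'+Y'| on the left is bounded by the quantity on the right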

3 thoughts on “On Endre Szemerédi’s Gifts to Computer Science”

    1. Thank you Andy! You are so right, I was confused and this proof is not due to Noam Elkies. Actually, looking deeper into Terry Tao’s post I linked above, I see that the credit is a bit more complicated. The proof I outlined combines:

      1) Elekes’s argument deriving the sum-product theorem from the Szemerédi-Trotter bound on line/point incidences,

      2) Székely’s proof of the Szemerédi-Trotter theorem from the bound on crossing numbers due to Ajtai-Chvátal-Newborn-Szemerédi / Leighton, and

      3) the simple proof of the crossing number bound, which according to Proofs from the Book arose in email conversations between Bernard Chazelle, Micha Sharir and Emo Welzl.
