
Childcare at STOC 2018 “TheoryFest”

April 16, 2018

(Announcement from Ilias Diakonikolas and David Kempe)

We are pleased to announce that we will provide pooled, subsidized
child care at STOC 2018. The cost will be $40 per day per child for
regular conference attendees, and $20 per day per child for students.
For more detailed information, including how to register for STOC 2018
childcare, see http://acm-stoc.org/stoc2018/childcare.html

Ilias Diakonikolas and David Kempe (local arrangements chairs)

Lecture notes on DKKMS

April 15, 2018

Mitali Bafna, Chi-Ning Chou, and Zhao Song wrote scribe notes for my lectures on the Dinur et al. proof of the 2-to-2 conjecture (see the DKKMS and KMS papers, though this presentation follows a different, and in my view simpler, approach).

“Scribe notes” is really an understatement. In a heroic effort, Mitali, Chi-Ning and Zhao supplied many more details and proofs (and much better exposition, including figures and comments) than I actually did in my talks. (One proof is still missing, but hopefully the notes will be updated to include it in a few weeks or so.)

The notes assume no background in prior works on PCP and unique games, and can be very useful for anyone interested in these recent breakthroughs.

Consider signing a pledge for inclusiveness in TCS

April 2, 2018

Edith Cohen, Vitaly Feldman, Omer Reingold and Ronitt Rubinfeld wrote a pledge for inclusiveness in TCS. I think it is mostly common sense: saying that not just our universities, but also our conferences and workshops, are part of our workplace, and that we should strive for them to be free of harassment. But sometimes it is very useful to state common sense explicitly, as doing so sets the right level of expectations.

So, if you agree with the pledge’s statement and are a member of the TCS community, regardless of whether you are a student, postdoc, faculty, in industry, or in any other role, please do consider signing it here.

Statistical physics dictionary

March 14, 2018

I’ve always been curious about the statistical physics approach to problems from computer science. The physics-inspired algorithm survey propagation is the current champion for random 3SAT instances, statistical-physics phase transitions have been suggested as explaining computational difficulty, and statistical physics has even been invoked to explain why deep learning algorithms seem to often converge to useful local minima.

Unfortunately, I have always found the terminology of statistical physics ("spin glasses", "quenched averages", "annealing", "replica symmetry breaking", "metastable states", etc.) rather daunting. I’ve recently decided to start reading up on this, trying to figure out what these terms mean. Some sources I found useful are the fun survey by Cris Moore, the longer one by Zdeborová and Krzakala, the somewhat misnamed survey by Castellani and Cavagna, and the wonderful book by Mezard and Montanari. (So far, I’m still early in the process of looking at these sources. I would appreciate pointers to any others!)

This post is mainly for my own benefit, recording some of my thoughts.
It probably contains many inaccuracies and I’d be grateful if people point them out.

 

First of all, it seems that statistical physicists are very interested in the following problem: take a random instance J of a constraint satisfaction problem and some number 0 \leq \alpha \leq 1, and sample from the uniform distribution \mu(\alpha) over assignments x to the variables that satisfy at least an \alpha fraction of the constraints. More accurately, they are interested in the distribution \hat{\mu}(\beta) where the probability of an assignment x is proportional to \exp(-\beta J(x)), where \beta is some parameter and J(x) is the number of constraints of J that x violates. However, for an appropriate relation between \beta and \alpha, this is more or less the same thing, and so you can pick whichever of the two is easier to think about or to calculate based on the context.

Physicists call the distribution \hat{\mu}(\beta) the Boltzmann distribution, and they call J(x) the energy of x. Often they write J(x) as H(x;J), where H, known as the Hamiltonian, is the general shape of the constraint satisfaction problem and J are its parameters. The free energy is -\tfrac{1}{\beta}\log Z, where Z = \sum_x \exp(-\beta J(x)) is the partition function; the dominant contributions to Z come from assignments with energy roughly (1-\alpha)|J|, where |J| is the total number of constraints in J and \alpha (as before) is the fraction of satisfied constraints.
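To make this concrete, here is a minimal Python sketch (my own toy example with made-up parameters, not taken from any of the sources above) that builds a small random 3SAT instance J, computes the Boltzmann distribution \hat{\mu}(\beta) by brute force over all assignments, and tracks how the typical fraction of satisfied constraints and the entropy change as we increase \beta:

    # Toy sketch: Boltzmann distribution of a small random 3SAT instance,
    # computed by brute force over all 2^n assignments.
    import itertools
    import math
    import random

    n, m = 12, 48                       # hypothetical sizes: 12 variables, 48 clauses
    random.seed(0)
    clauses = [[(v, random.choice([False, True]))      # (variable, negated?) literals
                for v in random.sample(range(n), 3)] for _ in range(m)]

    def violated(x):
        """The energy J(x): number of clauses the assignment x violates."""
        return sum(1 for c in clauses if not any(x[v] != neg for v, neg in c))

    assignments = list(itertools.product([False, True], repeat=n))
    energies = [violated(x) for x in assignments]

    for beta in [0.0, 0.5, 1.0, 2.0, 4.0]:
        weights = [math.exp(-beta * e) for e in energies]
        Z = sum(weights)                                 # partition function
        avg_violated = sum(w * e for w, e in zip(weights, energies)) / Z
        entropy = -sum((w / Z) * math.log(w / Z) for w in weights)
        print(f"beta={beta:3.1f}  satisfied fraction={(m - avg_violated) / m:.3f}  "
              f"entropy={entropy:5.2f} nats")

At \beta=0 this is just the uniform distribution (maximal entropy); as \beta grows, the satisfied fraction climbs and the entropy drops, which is the energy/entropy tradeoff discussed below.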

 

Why would statistical physicists care about constraint satisfaction problems?

My understanding is that they typically study a system with a large number n of particles, and these particles have various forces acting on them. (These forces are local, and so each one involves a constant number of particles, just like the constraints in constraint satisfaction problems.) These forces push the particles to satisfy the constraints, while temperature (which you can think of as 1/\beta) pushes the particles to have as much entropy as possible. (It is entropy in a Bayesian sense, as far as I understand; it is not that the particles’ state is sampled from this distribution at any given point in time, but rather that our (or Maxwell’s demon’s) knowledge about the state of the particles can be captured by this distribution.) When the system is hot, \beta is close to zero and the distribution has maximal entropy; when the system is cold, \beta tends to infinity and the fraction \alpha of satisfied constraints grows closer to the maximum possible. Sometimes there is a phase transition, where an o(1) change in \beta can lead to an \Omega(1) change in \alpha. (These o(1) and \Omega(1) factors are with respect to the number of particles n, which we think of as tending to infinity.) That is, a small change in the temperature leads to a qualitatively different distribution. This happens for example when a system changes from solid to liquid.

Ideally (or maybe naively), we would think of the state of the system as being just a function of the temperature, and not dependent on the history of how we got to this temperature, but that’s not always the case. Just like an algorithm can get stuck in a local minimum, so can the system get stuck in a so-called “metastable state”. In particular, even if we cool the system to absolute zero, its state might not satisfy the globally maximal fraction of constraints.

A canonical example of a system stuck in a metastable state is glass. For example, a windowpane at room temperature is in a very different state than a bag of sand at the same temperature, due to the windowpane’s history of first being heated to very high temperatures and then cooled down. In statistical physics lingo, the static prediction of the state of the system is the calculation based on its temperature alone, while a dynamic analysis takes into account the history and the possibility of getting stuck in metastable states for a long time.

Why do statistical physicists care about random constraint satisfaction problems?

So far we can see why statistical physicists would want to understand a constraint satisfaction problem J that corresponds to some physical system, but why would they think that J is random?

As far as I can tell, there are two reasons for this. First, some physical systems are “disordered” in the sense that when they are produced, the coefficients of the various forces on the particles are random. A canonical example is a spin glass. Second, they hope that the insights from studying random CSPs could potentially be relevant for understanding the actual, non-random (“structured” as opposed to “disordered”) CSPs that arise in practice.

In any case, a large body of literature is devoted to computing various quantities of the state of a random CSP J at a given temperature, or after following certain dynamics.
For such a quantity X(J), we would want to know the typical value of X(J), the one obtained with high probability over J. Physicists call this value the “quenched” average (which is obtained by fixing J to a typical value). Often it is easier to compute the expectation of X(J) over J (called the “annealed” average in physicist lingo), which of course gives the right answer if the quantity X(J) is well concentrated, a property physicists call “self averaging”.
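As a toy numerical illustration (my own, reusing the brute-force 3SAT setup from the sketch above, with X(J) taken to be the partition function Z(J)=\sum_x \exp(-\beta J(x))), here is a comparison of the quenched average \mathbb{E}_J \log Z(J) with the annealed one \log \mathbb{E}_J Z(J):

    # Toy comparison of quenched vs. annealed averages for the partition function
    # Z(J) of small random 3SAT instances (hypothetical sizes, brute force).
    import itertools
    import math
    import random
    import statistics

    n, m, beta, trials = 10, 40, 2.0, 100
    random.seed(1)
    assignments = list(itertools.product([False, True], repeat=n))

    def random_instance():
        return [[(v, random.choice([False, True]))
                 for v in random.sample(range(n), 3)] for _ in range(m)]

    def violated(x, clauses):
        return sum(1 for c in clauses if not any(x[v] != neg for v, neg in c))

    def log_Z(clauses):
        return math.log(sum(math.exp(-beta * violated(x, clauses)) for x in assignments))

    logs = [log_Z(random_instance()) for _ in range(trials)]
    quenched = statistics.mean(logs)                                   # E_J[log Z(J)]
    annealed = math.log(statistics.mean(math.exp(l) for l in logs))    # log E_J[Z(J)]
    print(f"quenched average E[log Z] = {quenched:.3f}")
    print(f"annealed average log E[Z] = {annealed:.3f}  (>= quenched, by Jensen)")

The two agree when Z(J) is well concentrated (“self averaging”), and the annealed average overshoots otherwise.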

 

Often if a quantity X(J) is not concentrated, the quantity \log X(J) will be concentrated, and so we would want to compute the expectation \mathbb{E}_J \log X(J). The “replica trick” uses the fact that \log X is the derivative at k=0 of the function f(k)= X^k, i.e., \log X = \lim_{k \rightarrow 0}\tfrac{1}{k}(X^k-1). So, \mathbb{E}_J \log X(J) = \mathbb{E}_J \lim_{k \rightarrow 0}\tfrac{1}{k} (X(J)^k-1). (Physicists would usually use n instead of k, but I already used n for the number of variables, which they seem to often call N.) Now we can hope to exchange the limit and expectation and write this as \lim_{k \rightarrow 0}\tfrac{1}{k} (\mathbb{E}_J X(J)^k - 1). For an integer k, it is often the case that we can compute the quantity f(k)=\mathbb{E}_J X(J)^k. The reason is that typically X(J) is itself the expectation of some random variable that is parameterized by J, and hence X(J)^k corresponds to taking the expectation of the product of k independent copies (or “replicas” in physicist-speak) of this random variable. So the hope is that we get some nice analytical expression for f(k), and then simply plug in f'(0) and hope this gives us the right answer (which, surprisingly enough, always seems to work in the contexts they care about).
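Here is a toy numerical check (my own example) of this recipe, taking X to be lognormal, X = e^{\mu + \sigma G} for a standard Gaussian G, so that f(k)=\mathbb{E} X^k = e^{k\mu + k^2\sigma^2/2} has a closed form that we can continue to non-integer k; the limit \lim_{k\to 0}(f(k)-1)/k = f'(0) indeed recovers \mathbb{E}\log X = \mu, while the annealed value \log \mathbb{E} X = \mu + \sigma^2/2 does not:

    # Toy check of the replica-trick identity E[log X] = lim_{k->0} (E[X^k] - 1)/k
    # for a lognormal X = exp(mu + sigma*G), where E[X^k] = exp(k*mu + k^2*sigma^2/2).
    import math
    import random
    import statistics

    mu, sigma = 1.0, 2.0
    random.seed(2)
    samples = [math.exp(mu + sigma * random.gauss(0, 1)) for _ in range(200_000)]

    print("E[log X] (Monte Carlo):    ", statistics.mean(math.log(x) for x in samples))
    print("log E[X] (annealed, exact):", mu + sigma ** 2 / 2)

    def f(k):                                        # closed form for E[X^k]
        return math.exp(k * mu + k ** 2 * sigma ** 2 / 2)

    for k in [1.0, 0.1, 0.01, 0.001]:
        print(f"(f({k}) - 1) / {k} = {(f(k) - 1) / k:.4f}")   # -> mu = 1.0 as k -> 0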

 

What are the qualitative phase transitions that physicists are interested in?

Let us fix some “typical” J and consider the distribution \mu=\mu(J;\alpha) = \hat{\mu}(J;\beta), which is uniform over all assignments satisfying an \alpha fraction of J’s constraints (or, essentially equivalently, where the probability of x is proportional to \exp(-\beta J(x))). We will be particularly interested in the second moment matrix M of \mu, which is the n\times n matrix such that M_{i,j} = \mathbb{E} x_ix_j where x is sampled from \mu. (Let’s think of x as a vector in \{ \pm 1 \}^n.)

We can often compute the quantity \mathbb{E}_J Tr(M(J)^k) for every k.
Opening up the parentheses, this corresponds to computing \mathbb{E}_J \mathbb{E}_{x_1,\ldots,x_k} \langle x_1 , x_2 \rangle \langle x_2, x_3 \rangle \cdots \langle x_{k-1}, x_k \rangle \langle x_k , x_1 \rangle for k independent samples x_1,\ldots,x_k from \mu(J) (i.e., “replicas”). (Physicists often think of the related “overlap matrix” Q, which is the k\times k matrix with Q_{a,b} = \langle x_a,x_b \rangle.) This means that we can completely determine the typical spectrum of M as a function of the parameters \alpha or \beta.
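Here is a toy sanity check (my own, with the same kind of brute-force 3SAT setup as above, and using numpy) that Tr(M^k) equals the expected product of overlaps \langle x_1,x_2\rangle\langle x_2,x_3\rangle\cdots\langle x_k,x_1\rangle over k independent replicas drawn from the Boltzmann distribution of a fixed instance J:

    # Toy check: Tr(M^k) vs. the replica estimate E[<x_1,x_2><x_2,x_3>...<x_k,x_1>]
    # for the Boltzmann distribution of a small random 3SAT instance (fixed J).
    import itertools
    import random
    import numpy as np

    n, m, beta, k = 10, 40, 2.0, 3
    random.seed(3)
    clauses = [[(v, random.choice([False, True]))
                for v in random.sample(range(n), 3)] for _ in range(m)]

    def violated(x):
        return sum(1 for c in clauses if not any(x[v] != neg for v, neg in c))

    X = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)  # all +-1 vectors
    energies = np.array([violated([xi > 0 for xi in row]) for row in X])
    p = np.exp(-beta * energies)
    p /= p.sum()                                      # Boltzmann probabilities

    M = (X * p[:, None]).T @ X                        # second moment matrix E[x x^T]
    print("Tr(M^k) exactly: ", np.trace(np.linalg.matrix_power(M, k)))

    rng = np.random.default_rng(0)
    idx = rng.choice(len(X), size=(20000, k), p=p)    # 20000 draws of k replicas each
    reps = X[idx]
    prod = np.ones(len(idx))
    for a in range(k):                                # chain of overlaps <x_a, x_{a+1}>
        prod *= np.einsum('ij,ij->i', reps[:, a], reps[:, (a + 1) % k])
    print("replica estimate:", prod.mean())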

 

The typical behavior when \beta is small (i.e., the system is “hot”) is that the solutions form one large “cluster” around a particular vector x^*. This is known as the “replica symmetric” phase, since all copies (“replicas”) chosen independently from \mu have roughly the same behavior, and the matrix Q above has n on the diagonal and the same value q off the diagonal.
This means that M is close to the rank one matrix x^*(x^*)^\top. This is for example the case in the “planted” or “hidden” model (related to the “Nishimori condition” in physics) if we plant a solution x^* in a sufficiently “easy” regime in terms of the relation between the number of constraints and the number of variables. In such a case, it is easy to “read off” the planted solution from the second moment matrix, and it turns out that many algorithms succeed in recovering it.
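Here is a toy stand-in for this phenomenon (my own example, using a noisy rank-one matrix rather than an actual planted CSP): when the second moment matrix is close to x^*(x^*)^\top plus weak noise, the top eigenvector “reads off” the planted \pm 1 vector x^*:

    # Toy spectral "read off": recover a planted +-1 vector from a second-moment-like
    # matrix that is a rank-one spike plus fairly weak symmetric noise.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 400
    x_star = rng.choice([-1.0, 1.0], size=n)               # planted +-1 solution
    noise = rng.standard_normal((n, n))
    noise = (noise + noise.T) / 2                           # symmetric noise
    M = np.outer(x_star, x_star) / n + 0.3 * noise / np.sqrt(n)

    eigvals, eigvecs = np.linalg.eigh(M)                    # eigenvalues in ascending order
    x_hat = np.sign(eigvecs[:, -1])                         # top eigenvector, rounded to +-1
    overlap = abs(x_hat @ x_star) / n
    print(f"overlap with planted solution: {overlap:.2f}")  # close to 1 in this easy regime

In the clustered regime described next, this kind of rank-one signal in the second moment matrix disappears and spectral recovery breaks down.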

As we cool down the system (or, correspondingly, increase \alpha), the distribution breaks into exponentially many clusters. In this case, even if we planted a solution x^*, the second moment matrix will look like the identity matrix (since the centers of the clusters could very well be in isotropic positions). Since exponentially many clusters dominate the distribution, if we sample from it we are likely to fall into any one of them. For this reason we will not see the cluster behavior in a static analysis, but we will get stuck in the cluster we started from in any dynamic process. This phase is called “1-dRSB”, for “one step dynamic replica symmetry breaking”, in the physics literature. (The “one step” corresponds to the distribution being simply clustered, as opposed to having some hierarchy of sub-clusters within clusters. The “one step” behavior is the expected one for CSPs, as far as I can tell.)

As far as I can tell, the statistical physics prediction is that this is the parameter regime where sampling from the distribution \mu is hard, as well as the regime where recovering an appropriately planted assignment is hard. However, perhaps in some cases (e.g., in those arising from deep learning?) we are happy with any one of those clusters.

If we cool the system further, then one or a few clusters will dominate the distribution (this is known as condensation), in which case we get to the “static replica symmetry breaking” phase, where the second moment matrix changes and has rank roughly corresponding to the number of clusters. If we continue cooling the system, then the static distribution will concentrate on the global optimum (the planted assignment, if we had one), though dynamically we will probably still be stuck in one of those exponentially many clusters.

The figure below, from Zdeborová and Krzakala, illustrates these transitions for the 5-coloring constraint satisfaction problem (the red dot is the planted solution). The figure is plotted as a function of the average degree c of the graph, where increasing the degree can be thought of as making the system “colder” (as we are adding more constraints the system needs to satisfy).

[Figure from the survey of Zdeborová and Krzakala: phase diagram for planted 5-coloring as a function of the average degree c.]

What about algorithms?

I hope to read more to understand belief propagation, survey propagation, various Markov Chain sampling algorithms, and how they might or might not relate to the sum of squares algorithm and other convex programs. Will update when I do.

On the recent proof of the 2-to-2 conjecture

February 26, 2018

Update (4/15): Scribe notes are now up thanks to Mitali Bafna, Chi-Ning Chou, and Zhao Song.

As I posted before, Khot, Minzer and Safra recently posted a manuscript that is the culmination of a beautiful line of work, initiated by the same authors, and that completes the proof of (the imperfect completeness variant of) Khot’s 2-to-2 conjecture.

An immediate corollary is that for every \epsilon>0, it is NP hard to distinguish between a unique games instance of value at least 1/2-\epsilon and an instance of value at most \epsilon. (The value of a constraint satisfaction problem is the maximum fraction of constraints one can satisfy.) This is interesting because in this parameter regime there is a subexponential time algorithm (of Arora, me, and Steurer), and hence this is a proof of the intermediate complexity conjecture. It is also very strong evidence that the full unique games conjecture is true.

The proof of the 2-to-2 conjecture is obtained by combining (i) a reduction of Dinur, Khot, Kindler, Minzer and Safra (DKKMS), (ii) a (soon to be posted) manuscript of mine with Kothari and Steurer showing that the soundness of the DKKMS reduction holds modulo a conjecture characterizing non-expanding sets in the Grassmann graph (or equivalently, the degree two short code graph), and (iii) this latest KMS manuscript, which proves the latter conjecture. (Most of the “heavy lifting” is done in the works (i) and (iii) of DKKMS and KMS respectively.)

On Friday I gave the first in a two-part series of talks with an exposition of this result in our reading group. I plan to post the full scribe notes of these talks here, but thought I’d start with a short “teaser”. In particular, if I’m not mistaken (and it is possible I am), one can describe the actual reduction underlying the proof in about 5 lines (see below). Of course the analysis takes much longer…

In my exposition I do not follow the approach of the above papers. Rather, I present a combined version of the DKKMS paper and our manuscript which bypasses the need to talk about Grassmann graphs altogether. Also, while the original DKKMS paper uses a particular reduction from 3XOR to label cover as a starting point, I tried to “abstract away” the properties that are really needed. I should note that the reduction I describe below is “morally related” but not identical to the one used by DKKMS, and I have not written down a full proof of its analysis, so it is possible that I am making a mistake. Still, I find it much easier to describe and understand, and so I prefer this exposition to the one in the papers.

 

The reduction

The Khot, Minzer and Safra manuscript completes the proof of the following theorem:

Theorem: For every \epsilon>0, it is NP hard to distinguish between a unique games instance where at least 1/2-\epsilon fraction of the constraints can be satisfied, and an instance where at most an \epsilon fraction can be satisfied.

Any NP hardness result is composed of three parts:

  1. A reduction from a previously known NP hard problem.
  2. A completeness analysis, showing that a “yes instance” of the original problem corresponds (in our case) to an instance with value at least 1/2-\epsilon of unique games.
  3. A soundness analysis, showing that one can decode an assignment of value at least \epsilon for the unique games instance into an assignment for the original problem's instance, certifying that it was a yes instance.

In this blog post I will show “two thirds” of the analysis by presenting parts 1 and 2. To get some sense of proportion, part 1 took me about ten minutes to present in my talk, part 2 about two minutes, and the remainder of the six hours is dedicated to part 3 (and even that is an underestimate, since I will probably not get to cover much of KMS’s proof that the combinatorial conjecture is true).

The initial problem

The NP hard problem we start from is the label cover problem that has been used time and again for hardness of approximations results. Specifically, we consider the following game:

  • Verifier chooses a pair (i,j) of indices at random from some G \subseteq [n]\times [m].
  • She sends i to the first prover and j to the second prover, and gets back answers x_i and y_j respectively.
  • She accepts the answers if y_j = f_{i,j}(x_i) for some particular function f_{i,j}.

We will look at the case of an affine label cover, where for every i,j, f_{i,j} is an affine function from GF(2)^D to GF(2)^{D-d}, for some constant integers D>d>0. We can think of an instance of this problem as a graph G where an edge i\sim j is labeled by f_{i,j}. It is known that for every \delta>0, there are large enough D,d such that it is NP hard to distinguish between an instance where the provers can cause the verifier to accept with probability at least 1-\delta and an instance where every strategy causes it to accept with probability at most \delta.

In our context we will need two extra properties of “smoothness” and “robust soundness” (or “soundness with respect to advice”) which I will not formally define here, but can be achieved by a “noisy variant” of the standard parallel repetition from the 3XOR problem.

The reduction

Given an instance \{ y_j = f_{i,j}(x_i) \}_{(i,j)\in G}, we construct a unique game instance over alphabet GF(2)^\ell as follows:

  • The verifier chooses (i,j) \in G and f_{i,j} as before.
  • She chooses a random affine function v:GF(2)^{D-d} \rightarrow GF(2)^\ell, a random rank one linear function e: GF(2)^D \rightarrow GF(2)^\ell and a random invertible affine function g:GF(2)^\ell \rightarrow GF(2)^\ell.
  • She sends j,v to the second prover and i,u to the first prover, where u=g(vf_{i,j}+e) (we use product notation for function composition here; note that u is an affine function from GF(2)^D to GF(2)^\ell).
  • She gets back the answers \tilde{x} and \tilde{y} in GF(2)^\ell from the first and second prover respectively, and accepts if and only if \tilde{y} = g^{-1}(\tilde{x}). (Note that the constraint \tilde{y}= g^{-1}(\tilde{x}) is indeed a unique constraint.)

 

A rank one linear function e:GF(2)^D \rightarrow GF(2)^\ell is a function of the form e(x) = \langle e',x \rangle e'' where e' \in GF(2)^D and e'' \in GF(2)^\ell. (In matrix notation this is the rank one \ell\times D matrix e=e''(e')^\top.)

 

As promised, completeness is not hard to show:

Lemma: If there is a prover strategy convincing the verifier with probability at least 1-\delta for the original label cover game, then there is a strategy convincing the verifier with probability at least 1/2-\delta/2 in the resulting unique game.

Proof: Suppose there is such a strategy \{ x_i,y_j \} for the original label cover game. The provers for the unique game will answer (i,u) with u(x_i) and (j,v) with v(y_j) respectively. Now suppose (as will be the case with probability at least 1-\delta) that y_j = f_{i,j}(x_i). For every fixed x_i\in GF(2)^D, if we choose e(x) = \langle e',x \rangle e'' to be a random rank one function, then with probability at least 1/2 we have \langle e',x_i \rangle =0, in which case e(x_i) =0^\ell. In such a case u(x_i)=g(vf_{i,j}+e)(x_i)= gvf_{i,j}(x_i)=g v(y_j) under our assumption that y_j = f_{i,j}(x_i), and hence applying g^{-1} to the first prover’s answer gives the second prover’s answer and the verifier accepts. Overall, the verifier accepts with probability at least (1-\delta)\cdot\tfrac{1}{2} \geq 1/2-\delta/2.

(To get the full unique games conjecture we would need completeness close to 1, but unfortunately it’s not easy to construct the field GF(1.1) or a rank 0.1 matrix…)
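Below is a small sanity-check simulation (my own, with tiny hypothetical parameters D, d, \ell, and a single label cover constraint y_j = f_{i,j}(x_i)) of the reduction and of the completeness argument: honest provers answering u(x_i) and v(y_j) make the verifier accept exactly when e(x_i)=0, i.e., with probability about 1/2 (slightly more, since e''=0 also works):

    # Toy simulation of the reduction's verifier over GF(2) and of the honest
    # prover strategy from the completeness proof (tiny made-up parameters).
    import numpy as np

    rng = np.random.default_rng(5)
    D, d, ell, trials = 8, 3, 4, 10000

    def gf2_invertible(A):
        """Check invertibility over GF(2) by Gaussian elimination with XOR."""
        A = A.copy()
        for col in range(A.shape[0]):
            pivot = next((r for r in range(col, A.shape[0]) if A[r, col]), None)
            if pivot is None:
                return False
            A[[col, pivot]] = A[[pivot, col]]
            for r in range(A.shape[0]):
                if r != col and A[r, col]:
                    A[r] ^= A[col]
        return True

    # One label cover constraint y = f(x) with f affine, and an honest pair (x, y).
    F, c = rng.integers(0, 2, (D - d, D)), rng.integers(0, 2, D - d)
    x = rng.integers(0, 2, D)
    while not x.any():                        # avoid the degenerate all-zero x
        x = rng.integers(0, 2, D)
    y = (F @ x + c) % 2

    accept = 0
    for _ in range(trials):
        V, bv = rng.integers(0, 2, (ell, D - d)), rng.integers(0, 2, ell)  # random affine v
        e1, e2 = rng.integers(0, 2, D), rng.integers(0, 2, ell)            # rank one e = e'' (e')^T
        while True:                                                        # random invertible g
            G = rng.integers(0, 2, (ell, ell))
            if gf2_invertible(G):
                break
        bg = rng.integers(0, 2, ell)
        # u = g o (v f + e), written out as an affine map from GF(2)^D to GF(2)^ell
        U = (G @ ((V @ F + np.outer(e2, e1)) % 2)) % 2
        bu = (G @ ((V @ c + bv) % 2) + bg) % 2
        x_tilde = (U @ x + bu) % 2                       # first prover answers u(x)
        y_tilde = (V @ y + bv) % 2                       # second prover answers v(y)
        # verifier accepts iff y_tilde = g^{-1}(x_tilde), i.e. iff g(y_tilde) = x_tilde
        accept += np.array_equal((G @ y_tilde + bg) % 2, x_tilde)

    print("acceptance rate:", accept / trials)           # about 1/2 + 2^(-ell-1)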

Two comments

There is plenty more to say about this reduction, its analysis, and all the open questions and research directions that it opens, and I do hope to post the full lecture notes soon. But for now let me make two quick comments.

Quantity makes quality

One can think of this reduction as following the standard paradigm of taking an original label cover game with alphabet GF(2)^D and encoding each symbol using an error correcting code. The most common code for this is the long code, which has codewords of length 2^{2^D}, and a more efficient version is the short code, which has codewords of length 2^{D^{O(1)}}. The reduction of DKKMS can be thought of as using a “tensored Hadamard code” or “unbalanced degree two short code” over alphabet GF(2)^\ell with codewords of length 2^{\ell\cdot D+\ell}, where a string x\in GF(2)^D is mapped to its evaluations under all affine functions v:GF(2)^D \rightarrow GF(2)^\ell (there are 2^{\ell D + \ell} such functions, since each of the \ell output coordinates is an affine function of D bits). (More accurately, DKKMS use a “folded” version of this code where one has a coordinate for every \ell dimensional subspace L \subseteq GF(2)^D; this does not make much difference for the current discussion.) For constant \ell, this means the codewords are of length 2^{O(D)}, rather than 2^{D^{O(1)}} as in the short code.
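As a tiny concrete illustration (my own, with made-up toy parameters), here is the encoding of a string x \in GF(2)^D by its evaluations under all affine maps v:GF(2)^D \rightarrow GF(2)^\ell, together with the codeword-length count 2^{\ell D + \ell}:

    # Toy illustration of the "evaluations under all affine maps" encoding and
    # of the codeword length 2^(ell*D + ell) (tiny made-up parameters).
    import itertools
    import numpy as np

    D, ell = 3, 2
    x = np.array([1, 0, 1])

    codeword = []
    for entries in itertools.product([0, 1], repeat=ell * D + ell):
        A = np.array(entries[:ell * D]).reshape(ell, D)      # linear part of v
        b = np.array(entries[ell * D:])                       # affine shift of v
        codeword.append(tuple((A @ x + b) % 2))               # coordinate v(x)

    print(len(codeword), "==", 2 ** (ell * D + ell))          # 256 == 256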

While this quasipolynomial difference between the short code and the “tensored Hadamard code” might seem mild, it turns out to be absolutely crucial for the reduction to go through. In fact, I believe that finding a code that behaves similarly to the degree c short code but with codewords of length 2^{f(c)D} (as opposed to 2^{D^c}) would result in a proof of the full unique games conjecture.

Hardness from easiness

It turns out that the analysis of the reduction rests on characterizing the non-expanding subsets of the graph on affine functions u:GF(2)^D \rightarrow GF(2)^\ell, where there is an edge between u and u' if u=u'+e for a rank one e. By now researchers have developed an intuition that if we stare at such natural graphs hard enough, we can figure out all the small non-expanding sets, and indeed this intuition was verified by the KMS manuscript for this particular graph. But this intuition might seem somewhat at odds with the competing intuition that the small set expansion problem (a close variant of the unique games problem) should be hard. One way to resolve this conundrum is that while the unique games problem may well be hard in the worst case, it is extremely hard to come up with actual hard instances for it. Like Trump supporters with Ph.D.’s, such instances might exist, but are rarely seen in the wild.

Research masters

February 20, 2018

In the U.S., we have almost no research masters programs; we only admit students directly into a Ph.D. Overall this works well, but it requires us to be very conservative in our admissions, since we are committing to having the student for 5 years or so. This can be a particular issue for students who discover their interest in research late in their studies, at which point their application might not be as strong.

When students like that seek my advice, I often suggest they look into applying for research masters programs in places such as my alma mater, the Weizmann Institute, or other universities in Europe, Canada, or elsewhere. However, I realize that there may be other good options I don’t know about.

If you know of a good research masters program for theoretical computer science, could you post about it in the comments?

Update: Some good information in the comments – thank you! If you post, please say whether this is a program where students have to pay tuition, or one where there is a chance that tuition might be waived and/or students could get a stipend.

#metootcs

February 15, 2018

In an earlier post I asked if we have TCS “Harvey Weinsteins”. Unfortunately, academia is hardly immune from sexual harassment, and now a TCS researcher has posted about her experiences with sexual harassment and assault in our community. While this is not pleasant reading, it is important, and I urge you to read the full post (see also here).

It takes courage to talk about such painful experiences, even anonymously, in a community as small as ours. This researcher deserves our thanks for bringing up this topic, and hopefully starting a conversation that will make theoretical computer science more welcoming and inclusive. I do not know who this person is, but I urge people not to try to guess her identity, and rather to focus on asking what we can do, both men and women, to make things better.

The blog post already contains some good suggestions. As the overwhelming majority in our field, we men enjoy many structural advantages, and it is especially up to us to step up to this challenge. Research is not a 9 to 5 job: conferences, workshops, and informal interactions are extremely important. But we should remember that we are there for the sake of scientific collaboration. A woman shouldn’t have to worry about the motivations behind every invitation for a discussion or meeting.

Given our skewed gender ratio, it is enough for a small minority of men to behave badly to essentially guarantee that almost all women will encounter such behavior at some point. That is, unless other men step up and call out unprofessional (or worse) behavior when we observe or are made aware of it. I know this is not easy – we are not selected for our ability to handle awkward social situations, and I can’t say I myself have stepped up or been very aware of such issues in the past. But I will try to do better, and I hope others do too.