Update (4/15): Scribe notes are now up thanks to Mitali Bafna, Chi-Ning Chou, and Zhao Song.
As I posted before, recently Khot, Minzer and Safra posted a manuscript which is the culmination of a beautiful line of work, initiated by the same authors, and completes the proof of (the imperfect completeness variant of) Khot’s 2 to 2 conjecture.
An immediate corollary is establishing, for every $\epsilon>0$, the NP hardness of distinguishing between a unique games instance of value at least $\frac{1}{2}-\epsilon$ and an instance of value at most $\epsilon$. (The value of a constraint satisfaction problem is the maximum fraction of constraints one can satisfy.) This is interesting because in this parameter regime there is a subexponential time algorithm (of Arora, me and Steurer), and hence this is a proof of the intermediate complexity conjecture. It is also very strong evidence that the full unique games conjecture is true.
The proof of the 2 to 2 conjecture is obtained by combining (i) a reduction of Dinur, Khot, Kindler, Minzer and Safra (DKKMS), (ii) a (soon to be posted) manuscript of me with Kothari and Steurer showing that the soundness analysis of the DKKMS reduction works modulo a conjecture characterizing non-expanding sets in the Grassmann graph (or equivalently, the degree two short code graph), and (iii) this latest KMS manuscript, which proves the latter conjecture. (Most of the “heavy lifting” is done in the works (i) and (iii) of DKKMS and KMS respectively.)
On Friday I gave the first of a two-part series of talks with an exposition of this result in our reading group. I plan to post here the full scribe notes of these talks, but thought I’d start with a short “teaser”. In particular, if I’m not mistaken (and it is possible I am) then one can describe the actual reduction underlying the proof in about 5 lines (see below). Of course the analysis takes much longer..
In my exposition I do not follow the approach of the above papers. Rather, I present a combined version of the DKKMS paper and our manuscript which bypasses the need to talk about Grassmann graphs altogether. Also, while the original DKKMS paper uses a particular reduction from 3XOR to label cover as a starting point, I tried to “abstract away” the properties that are really needed. I should note that the reduction I describe below is “morally related” but not identical to the one used by DKKMS, and I have not written down a full proof of its analysis, so it is possible that I am making a mistake. Still, I find it much easier to describe and understand, and so I prefer this exposition to the one in the papers.
The Khot, Minzer and Safra manuscript completes the proof of the following theorem:
Theorem: For every $\epsilon>0$, it is NP hard to distinguish between a unique games instance where at least a $\frac{1}{2}-\epsilon$ fraction of the constraints can be satisfied, and an instance where at most an $\epsilon$ fraction can be satisfied.
Any NP hardness result is composed of three parts:
1. A reduction from a previously known NP hard problem.
2. A completeness analysis, showing that a “yes instance” of the original problem corresponds (in our case) to a unique games instance with value at least $\frac{1}{2}-\epsilon$.
3. A soundness analysis, showing that one can decode an assignment of value at least $\epsilon$ for the unique games instance into an assignment for the instance of the original problem certifying that it was a yes instance.
In this blog post I will show “two thirds” of the analysis by presenting parts 1 and 2. To get some sense of proportion, part 1 took me about ten minutes to present in my talk, part 2 about two minutes, and the rest of the six hours is dedicated to part 3 (though even that is an underestimate, since I will probably not get to cover much of KMS’s proof that the combinatorial conjecture is true..).
The initial problem
The NP hard problem we start from is the label cover problem that has been used time and again for hardness of approximation results. Specifically, we consider the following game:
- The verifier chooses a pair of indices $(i,j)$ at random from some set $E$ of constraints.
- She sends $i$ to the first prover and $j$ to the second prover, and gets back answers $a$ and $b$ respectively.
- She accepts the answers if $b = f_{i,j}(a)$ for some particular function $f_{i,j}$ depending on the pair $(i,j)$.
We will look at the case of affine label cover, where for every pair $(i,j)$, $f_{i,j}$ is an affine function from $\mathbb{F}_2^k$ to $\mathbb{F}_2^\ell$ for some constant integers $k \geq \ell$. We can think of an instance of this problem as a graph where the edge $(i,j)$ is labeled by $f_{i,j}$. It is known that for every $\epsilon,\delta>0$, there are large enough $k,\ell$ such that it is NP hard to distinguish between an instance where the provers can cause the verifier to accept with probability at least $1-\delta$ and an instance where every strategy would cause it to accept with probability at most $\epsilon$.
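For concreteness, here is a minimal sketch (in my own notation, not taken from any of the papers) of how such an affine constraint can be represented and checked; the matrix and shift are arbitrary toy values:

```python
def affine_apply(M, s, x):
    """Apply the affine map x -> Mx + s over GF(2); M is a list of rows."""
    return [(sum(m * xi for m, xi in zip(row, x)) + si) % 2
            for row, si in zip(M, s)]

def verifier_accepts(f, a, b):
    """The label cover check: accept iff b = f_{i,j}(a)."""
    M, s = f
    return b == affine_apply(M, s, a)

# A toy edge constraint f_{i,j}: F_2^3 -> F_2^2 (arbitrary matrix and shift)
f = ([[1, 0, 1],
      [0, 1, 1]], [1, 0])

a = [1, 1, 0]
print(verifier_accepts(f, a, [0, 1]))  # True: this is the honest answer f(a)
print(verifier_accepts(f, a, [1, 1]))  # False
```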
In our context we will need two extra properties of “smoothness” and “robust soundness” (or “soundness with respect to advice”), which I will not formally define here, but which can be achieved by a “noisy variant” of the standard parallel repetition reduction from the 3XOR problem.
Given an instance of affine label cover, we construct a unique game instance over alphabet $\mathbb{F}_2^c$ (for some constant integer $c$) as follows:
- The verifier chooses $i$ and $j$ as before.
- She chooses a random affine function $B:\mathbb{F}_2^\ell\to\mathbb{F}_2^c$, a random rank one linear function $R:\mathbb{F}_2^k\to\mathbb{F}_2^c$, and a random invertible affine function $M:\mathbb{F}_2^c\to\mathbb{F}_2^c$.
- She sends $(j,B)$ to the second prover and $(i,A)$ to the first prover, where $A = M(Bf_{i,j}+R)$ (we use here product notation for function composition; note that $Bf_{i,j}+R$ is an affine function from $\mathbb{F}_2^k$ to $\mathbb{F}_2^c$).
- She gets back the answers $z_1$ and $z_2$ in $\mathbb{F}_2^c$ from the first and second prover respectively, and accepts if and only if $z_1 = M(z_2)$. (Note that since $M$ is invertible, the constraint $z_1 = M(z_2)$ is indeed a unique constraint.)
A rank one linear function is a function of the form $x \mapsto \langle u,x\rangle v$ where $u \in \mathbb{F}_2^k \setminus \{0\}$ and $v \in \mathbb{F}_2^c \setminus \{0\}$. (In matrix notation this is the rank one matrix $vu^\top$.)
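As a small sanity check of my own (toy parameters), one can verify by enumeration that for a fixed nonzero $a$, the inner product $\langle u,a\rangle$ vanishes for exactly half of all vectors $u$, so a random rank one $R$ satisfies $Ra = 0$ about half the time; this is the source of the factor $1/2$ in the completeness analysis below:

```python
import itertools

def rank_one_apply(u, v, x):
    """The rank one linear map x -> <u,x> v over GF(2) (the matrix v u^T)."""
    dot = sum(ui * xi for ui, xi in zip(u, x)) % 2
    return [dot * vj for vj in v]

k = 4
a = [1, 0, 1, 1]  # a fixed nonzero point of F_2^4

# <u,a> = 0 for exactly half of all 2^k vectors u, so Ra = 0 half the time
zeros = sum(1 for u in itertools.product([0, 1], repeat=k)
            if rank_one_apply(u, [1], a) == [0])
assert zeros == 2 ** (k - 1)  # 8 of the 16 choices of u
# (restricting to nonzero u, as rank exactly one requires, changes this
#  only marginally: 7 of 15)
```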
As promised, completeness is not hard to show:
Lemma: If there is a prover strategy convincing the verifier with probability at least $1-\delta$ in the original label cover game, then there is a strategy convincing the verifier with probability at least $\frac{1}{2}-\delta$ in the resulting unique game.
Proof: Suppose the provers have a strategy $i \mapsto a(i)$, $j \mapsto b(j)$ convincing the verifier with probability at least $1-\delta$ in the original label cover game. The provers for the unique games will answer the question $(i,A)$ with $A(a(i))$ and the question $(j,B)$ with $B(b(j))$ respectively. Now suppose (as will be the case with probability at least $1-\delta$) that $b = f_{i,j}(a)$, where $a = a(i)$ and $b = b(j)$. For every fixed nonzero $a$, if we choose $R = vu^\top$ to be a random rank one matrix then with probability $\frac{2^{k-1}-1}{2^k-1} \approx \frac{1}{2}$ we have $\langle u,a\rangle = 0$, in which case $Ra = 0$ and hence $A(a) = M(Bf_{i,j}(a) + Ra) = M(B(f_{i,j}(a)))$. In such a case $A(a) = M(B(b))$ under our assumption that $b = f_{i,j}(a)$, and hence if we apply $M^{-1}$ to the first prover’s answer we get the second prover’s answer.
(To get the full unique games conjecture we would need completeness close to $1$, but unfortunately it’s not easy to construct a field with fewer than two elements or a matrix of rank less than one..)
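The completeness calculation can be checked by brute force on a single edge. Below is a toy instantiation of my own (with $k=3$, $\ell=c=2$, and arbitrary choices of $f_{i,j}$, $B$ and $M$) that enumerates all rank one $R = vu^\top$ and counts how often the honest strategy satisfies the unique constraint:

```python
import itertools

def affine_apply(M, s, x):
    """x -> Mx + s over GF(2)."""
    return [(sum(m * xi for m, xi in zip(row, x)) + si) % 2
            for row, si in zip(M, s)]

k, c = 3, 2

# Arbitrary edge constraint f_{i,j}: F_2^3 -> F_2^2, affine B and invertible affine M
f_M, f_s = [[1, 0, 1], [0, 1, 1]], [0, 1]
B_M, B_s = [[1, 1], [0, 1]], [1, 0]
M_M, M_s = [[1, 1], [1, 0]], [0, 1]  # [[1,1],[1,0]] is invertible over GF(2)

a = [1, 1, 0]                  # honest first prover's label (nonzero)
b = affine_apply(f_M, f_s, a)  # honest second prover's label b = f_{i,j}(a)

accepts = trials = 0
for u in itertools.product([0, 1], repeat=k):
    for v in itertools.product([0, 1], repeat=c):
        if not any(u) or not any(v):
            continue  # rank exactly one: u and v nonzero
        dot = sum(ui * ai for ui, ai in zip(u, a)) % 2
        Ra = [dot * vj for vj in v]
        # First prover answers z1 = A(a) where A = M(B f_{i,j} + R)
        Bb = affine_apply(B_M, B_s, b)
        z1 = affine_apply(M_M, M_s, [(x + y) % 2 for x, y in zip(Bb, Ra)])
        # Second prover answers z2 = B(b); the verifier checks z1 = M(z2)
        accepts += z1 == affine_apply(M_M, M_s, Bb)
        trials += 1

print(accepts, "/", trials)  # 9 / 21 = 3/7, tending to 1/2 as k grows
```

Since $M$ is injective, the check passes exactly when $Ra = 0$, i.e. when $\langle u,a\rangle = 0$, which matches the count above.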
There is plenty more to say about this reduction, its analysis, and all the open questions and research directions that it opens, and I do hope to post the full lecture notes soon. But for now let me make two quick comments.
Quantity makes quality
One can think of this reduction as following the standard paradigm of taking an original label cover game with alphabet $\mathbb{F}_2^k$ and encoding each symbol using an error correcting code. The most common code for this is the long code, which has codewords of length $2^{2^k}$, and a more efficient version is the short code, which has codewords of length $2^{k^{O(1)}}$. The reduction of DKKMS can be thought of as using a “tensored Hadamard code” or “unbalanced degree two short code” over alphabet $\mathbb{F}_2^c$ with codewords of length $2^{c(k+1)}$, where a string $a \in \mathbb{F}_2^k$ is mapped to its evaluations $A(a)$ by all affine functions $A:\mathbb{F}_2^k\to\mathbb{F}_2^c$. (More accurately, DKKMS use a “folded version” of this code where one has a coordinate for every $c$ dimensional subspace of $\mathbb{F}_2^k$; this does not make much difference for the current discussion.) For constant $c$, this means the codewords are of length $2^{O(k)}$ rather than $2^{k^{O(1)}}$ as in the short code.
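To see the gap concretely, one can simply count coordinates for a small $k$; the counts below are my own back-of-the-envelope figures (all Boolean functions for the long code, degree $\leq 2$ polynomials for the degree two short code, affine maps to $\mathbb{F}_2^c$ for the tensored Hadamard code):

```python
from math import comb

k, c = 10, 2

long_code = 2 ** (2 ** k)                    # a coordinate per function F_2^k -> F_2
short_code_deg2 = 2 ** (1 + k + comb(k, 2))  # a coordinate per degree <= 2 polynomial
tensored_hadamard = 2 ** (c * (k + 1))       # a coordinate per affine map F_2^k -> F_2^c

print(len(str(long_code)))   # 309 decimal digits: doubly exponential in k
print(short_code_deg2)       # 2^56 ~ 7.2e16: exponential in k^2
print(tensored_hadamard)     # 2^22 ~ 4.2e6: exponential in k
```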
While this quasipolynomial difference between the short code and the “tensored Hadamard code” might seem mild, it turns out to be absolutely crucial for the reduction to go through. In fact, I believe that finding a code that behaves similarly to the degree two shortcode but with codewords of length $k^{O(1)}$ (as opposed to $2^{O(k)}$) would result in a proof of the full unique games conjecture.
Hardness from easiness
It turns out that the analysis of the reduction rests on characterizing the non-expanding subsets of the graph on affine functions from $\mathbb{F}_2^k$ to $\mathbb{F}_2^c$, where there is an edge between $A$ and $A'$ if $A = A' + R$ for a rank one $R$. By now researchers have developed an intuition that if we stare at such natural graphs hard enough, we can figure out all the non-expanding small sets, and indeed this intuition was verified by the KMS manuscript for this particular graph. But this intuition might seem somewhat at odds with the competing intuition that the small set expansion problem (a close variant of the unique games problem) should be hard. One way to resolve this conundrum is to say that while the unique games problem may well be hard in the worst case, it is extremely hard to come up with actual hard instances for it. Like Trump supporters with Ph.D.s, such instances might exist, but are rarely seen in the wild.
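To make the notion of a non-expanding set concrete, here is a toy computation of my own (tiny parameters, and my own guess at the natural candidate): the “zoom-in” set of all affine functions taking a fixed value $y_0$ at a fixed point $x_0$ has small measure $2^{-c}$, yet a constant fraction of the edges leaving it stay inside:

```python
import itertools

k, c = 3, 2

def affine_maps(k, c):
    """All affine maps F_2^k -> F_2^c, as (matrix, shift) pairs."""
    rows = list(itertools.product([0, 1], repeat=k))
    for mat in itertools.product(rows, repeat=c):
        for shift in itertools.product([0, 1], repeat=c):
            yield mat, shift

def apply_map(A, x):
    mat, shift = A
    return tuple((sum(m * xi for m, xi in zip(row, x)) + s) % 2
                 for row, s in zip(mat, shift))

x0, y0 = (1, 0, 0), (1, 1)
S = [A for A in affine_maps(k, c) if apply_map(A, x0) == y0]

# S has measure 2^{-c} among all 2^{c(k+1)} affine maps
assert len(S) * 2 ** c == 2 ** (c * (k + 1))

# For any A in S and rank one R = <u,.> v, the neighbor A + R satisfies
# (A + R)(x0) = A(x0) + <u,x0> v, so it stays in S iff <u,x0> = 0.
stay = total = 0
for u in itertools.product([0, 1], repeat=k):
    for v in itertools.product([0, 1], repeat=c):
        if not any(u) or not any(v):
            continue  # rank exactly one
        stay += sum(ui * xi for ui, xi in zip(u, x0)) % 2 == 0
        total += 1

print(stay, "/", total)  # 9 / 21 = 3/7: tending to 1/2, not to 1, as c grows
```

By contrast, a random set of the same measure would have almost all of its edges leave the set, which is what makes such “zoom-in” sets the interesting examples to characterize.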