
Sanjeev Arora: Potential changes to STOC/FOCS: report from special FOCS session

November 11, 2014

As Boaz advertised, FOCS had a panel-led discussion on “How might FOCS and STOC evolve?” Here is a summary of that session by Sanjeev Arora:

——————–

This blog post is a report about a special 80 min session on the future shape of STOC/FOCS, organized by David Shmoys (IEEE TCMF Chair) and Paul Beame (ACM Sigact Chair) on the Saturday before FOCS in Philadelphia. Some 100+ people attended.

The panelists: Boaz Barak, Tim Roughgarden, and me. Joan Feigenbaum couldn’t attend but sent a long email that was read aloud by David. Avi Wigderson had to cancel last minute.

For those who don’t want to read further (spoiler alert): The panelists all agreed about the need to create an annual week-long event to be held during a convenient week in summer, which would hopefully attract a larger crowd than STOC/FOCS currently do. The decision was to study how to organize such an annual event, likely starting June 2017. Now read on.

Sole ground rule from David and Paul was: no discussion of open access/copyright, nor of moving STOC/FOCS out of ACM/IEEE. (Reason: these are orthogonal to the other issues and would derail the discussion.)

Boaz and Omer’s proposal in a nutshell (details are here): Fold STOC/FOCS into this annual event. Submissions and PC work for these two would work just as now, with the same timetable. Actual presentations would happen at this annual event. But the annual event would be planned by a third PC that would decide how much time to allocate to each paper’s presentation; not all papers would be treated equally. This PC would also plan a multi-day program of plenary talks: invited speakers, and selected papers drawn from theory conferences of the past year, including STOC/FOCS. (Some people expressed discomfort with creating different classes of STOC-FOCS papers. See Boaz and Omer’s blog post for more discussion, and also my proposal below.)

Tim’s ideas: It’s very beneficial to have such a mega event in some form. Logistics may be formidable and need discussing, but it would be good for the field to have a single clearing point for major results and place to catch up with others (for which it is important that the event is attractive enough to draw everybody). His other main point: the event should give a large number of people “something to do” by which he meant “something to present.” (Could be poster presentations, talks, workshops, etc.) This helps draw people into the event rather than make them feel like bystanders.

Joan’s email: Started off by saying that we should not be afraid of experimentation. Case in point: she tried a 2-tier PC a few years ago and while many people railed against it, nobody could pinpoint any impact on the quality of the final program. She thinks STOC/FOCS currently focus too much on technical wizardry. While this has its place, other aspects should be valued as well. With this preamble, her main proposal was: There should be an inclusive annual mega event that showcases good work in many different aspects of TCS, possibly trading off some mathematical depth for inclusiveness and intellectual breadth. Secondary proposal: to fix somehow the problem of incomplete papers. (She mentioned the VLDB model, where the conference is also a journal.) Interestingly, I don’t detect such a crisis in TCS today; most people post full versions on arxiv. I do support looking at the VLDB model, but for a different reason: it’s our journal process that seems broken.

My proposal: Though it was a panel discussion, I prepared PowerPoint slides, which are available here. My proposal has evolved from my earlier blog post, which turned into a B. EATCS article. A guiding principle: “Add rather than subtract; build upon structures that already work well.” The STOC/FOCS PC process works well, with efficient reviewing and decision-making, though not everybody is happy with the decisions themselves. But the journal process is sclerotic and possibly broken, so proposals (such as Fortnow’s) that replace conferences with journals seem risky. Finally, let’s design any new system to maximize buy-in from our community.

So here’s the plan in brief: Keep STOC/FOCS as they are now, possibly increasing the number of acceptances to 100-ish, which would still fit in 3 days with 2 parallel sessions but no plenary talks. (“If you are content with your current STOC/FOCS, you don’t need to change anything.”) Then add 3-4 days of activity around STOC, including workshops, poster sessions, and lots of plenary sessions. Encourage other theory conferences to co-locate with this event.

See my article and slides for further details.

A Few Meta Points

Here are a few meta points that I made, which are interrelated:

We are a part of computer science. I hope to be a realist here, not controversial. Our work involves and touches upon other disciplines: math, economics, physics, sociology, statistics, biology, operations research, etc. But most of our students will find jobs in CS departments or industrial labs, and practically none in these allied disciplines. CS is also the field (biology possibly excepted) with the most growth and new jobs in the foreseeable future. Our system should be most attuned to the CS way of doing things. To shoot down an obvious straw man, we should avoid the Math mode of splitting into small sub-communities and addressing papers and research to a small group of experts. Our papers and talks should remain comprehensible and interesting to a broad TCS audience, and a significant fraction of our collective work should look interesting to a general CS audience. (Joan’s email made a similar point about the danger of what she calls “mathematization.”)

Senior people in TCS have been dropping out of the STOC/FOCS system. I am, at 46 years of age, a regular attendee, but most people my age and older aren’t. I have talked to them, and they often feel that STOC/FOCS values specialization: technical improvements to past work, and that sort of thing. Any reform should try to address their concerns, and I hope the mega event will bring them back. (My advice to these senior people: if you want to change STOC/FOCS, be willing to serve on the PC, and speak up.)

Short papers are better. There’s a strong trend towards preferring long papers with full proofs. I consider this the “Math model” because it rewards research topics and presentation aimed at a handful of experts. I favor an old-fashioned approach that’s still in fashion at top journals like Nature and Science: force authors to explain their ideas in 8 double-column pages (or some other reasonable page limit). No appendices allowed, though reviewers who need more details should be able to look up a time-stamped detailed version on arxiv. In other words, use arxiv to the fullest, but force authors to also write clean, self-contained and terse versions. This is my partial answer to the question “What is the value added by conferences?” (NB: I don’t sense a crisis of incorrect papers in STOC/FOCS right now. Plus it’s not the end of the world if a couple percent of conference papers turn out to be wrong; Science and Nature have a worse track record and are doing OK!)

Towards the end of the session David and Paul solicited further ideas from the audience. Sensing general approval of the June mega event, they announced that they will further study this idea, and possibly implement it starting in 2017, without waiting for other theory conferences to co-locate. Paul pointed to logistical hurdles, which necessitate careful planning. David observed that putting the spotlight on STOC may cause FOCS to wither away. Personally, I think FOCS will do fine and may even find a devoted audience of those who prefer a more intimate event.

So dear readers, please comment away with your reactions and thoughts. This issue creates strong opinions, but let’s keep it civilized. If you have a counter proposal, please put it on the web and send us the link; Paul and David are following this debate.

ps: I am skeptical of the value of anonymous comments and will tend to ignore them (and hope that the other commenters will too).

 

FOCS 2014 is starting

October 18, 2014

Hope everyone has a great FOCS! In the previous post we mentioned the two workshops on different aspects of the Fourier transforms occurring today. I also wanted to mention the Tutorial on obfuscation today with talks by  Amit Sahai, Allison Lewko and Dan Boneh. The new constructions of obfuscation and their applications form one of the most exciting and rapidly developing research topics in cryptography (and all of theoretical CS) today, and this would be a great opportunity for non-specialists to catch up on some of these advances.

Applied mathematicians vs Theoretical Computer Scientists

October 12, 2014

[Guest post by Anna Gilbert, who is co-organizing with Piotr Indyk and Dina Katabi a FOCS 2014 workshop on The Sparse Fourier Transform: Theory and Applications, this Saturday 9am-3:30pm] After reading Boaz’s post on Updates from the ICM and in particular his discussion of interactions between the TCS and applied math communities, I thought I’d contribute a few observations from my interactions with both, as I consider myself someone who sits right at the intersection. My formal training is in (applied) mathematics and I am currently a faculty member in the Mathematics Department at the University of Michigan. I have spent many years working with TCS people on streaming algorithms and sparse analysis, and I worked at AT&T Labs (where the algorithms group was much larger than the “math” group).

There are definitely other TCS researchers who are quite adept and interested in collaborations with applied mathematicians, electrical engineers, computational biologists, etc. There are also venues where both communities come together and try to understand what each other is doing. The workshop that Piotr Indyk, Dina Katabi, and I are organizing at FOCS this year is a good example and I encourage anyone interested in learning more about these areas to come. The speakers span a range of areas from TCS, applied math, and electrical engineering! What’s especially fascinating is the juxtaposition of our workshop on the sparse Fourier transform with another FOCS workshop that day on Higher-order Fourier Analysis. There are two workshops on Fourier analysis, a topic that is central to applied and computational mathematics, at a conference ostensibly on the Foundations of Computer Science!

Here are my observations of both communities (with a large bias towards examples in sparse approximation, compressed sensing, and streaming/sublinear algorithms):

1) Applied mathematicians are not nearly as mathematical as TCS researchers. By which I mean, the careful formal problem statements, the rigorous definitions, the proofs of correctness for an algorithm, the analysis of the use of resources, the definition of resources, etc. are not nearly as developed nor as important to applied mathematicians.

Here are two examples on the importance of clear, formal problem statements and the definition of resources. There are a number of different ways to formulate sparse approximation problems; some instantiations are NP-complete and some are not. Some are amenable to convex relaxation and others aren’t. For example, exact sparse approximation of an arbitrary input vector over an arbitrary redundant dictionary is NP-complete, but if we draw a dictionary at random and seek a sparse approximation of an arbitrary input vector, this problem is essentially the compressed sensing problem, for which we do have efficient algorithms (for suitable distributions on random matrices). Stated this way, it’s clear to TCS researchers what the difference is between the problem formulations, but this is not the way many applied mathematicians think about these problems.

To the credit of the TCS community, it recognized that randomness is a resource: generating the random matrix in the above example costs something and, should one want to design a compressed sensing hardware device, generating or instantiating that matrix “in hardware” will cost you resources beyond simple storage. Pseudo-random number generators are a central part of TCS and yet, for many applied mathematicians, they are a small implementation detail easily handled by a function call. Similarly, electrical engineers well-versed in hardware design will use a linear feedback shift register (LFSR) to build such random matrices without making any “use” of the theory of pseudo-random number generators. The gap between the mathematics of random matrices and the LFSR is precisely where pseudo-random number generators, small space constructions of pseudo-random variables, random variables with limited independence, etc. fit, but forming that bridge and, more importantly, convincing both sides that they need a bridge rather than a simple function call or a simple hardware circuit, is a hard thing to do and not one the TCS community has been successful at. (Perhaps it’s not even something they are aware of.)
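To make the LFSR remark concrete, here is a minimal sketch (my own illustration, not from Anna's post) of how pseudorandom ±1 entries for a small measurement matrix might be generated by a linear feedback shift register; the register width, tap positions, and normalization are arbitrary placeholders rather than any specific hardware design.

```python
import numpy as np

def lfsr_bits(seed, taps, n_bits):
    """Fibonacci-style LFSR: the feedback bit is the XOR of the tapped positions.
    `seed` is a nonzero integer state; `taps` are 0-indexed bit positions."""
    state = seed
    width = max(taps) + 1
    for _ in range(n_bits):
        yield state & 1                      # output the low bit
        fb = 0
        for t in taps:                       # feedback = XOR of tapped bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))

# Fill a toy 6 x 8 "measurement matrix" with +/-1 entries driven by the LFSR.
m, n = 6, 8
bits = lfsr_bits(seed=0b1011, taps=[0, 3], n_bits=m * n)
A = np.array([1.0 if b else -1.0 for b in bits]).reshape(m, n) / np.sqrt(m)
print(A)
```

The point of the observation above is that, from the TCS side, replacing truly random Gaussian or Bernoulli entries by such a structured generator is exactly where notions like limited independence and small-space pseudorandomness enter the analysis.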

2) Many TCS papers, whether they provide algorithms or discuss models of computation that could/should appeal to applied mathematicians, are written or communicated in a way that applied mathematicians can’t/don’t/won’t understand. And, sometimes, the problems that TCS folks address do not resonate with applied mathematicians because they are used to asking questions differently.

My biggest example here is sparse signal recovery as done by TCS versus compressed sensing. For TCS, it is very natural to ask to design both a measurement matrix and a decoding algorithm so that the algorithm returns a good approximation to the sparse representation of the measured signal. For mathematicians, it is much more natural to ask what conditions are sufficient (or, even better, necessary) for the measurement matrix and some existing algorithm (as opposed to one crafted specifically for the problem) to recover the sparse approximation. Applied mathematicians do not, in general, ask questions about how to generate such matrices algorithmically and how to compute with them, unless they are serious about implementing these algorithms; and then, typically, these questions are software questions rather than mathematical ones. They are low-level details, not abstract questions to be addressed formally.

As an even higher-level example of a difference in goals, the notion of an approximation algorithm is foreign to applied mathematicians: that concept does not appear in numerical analysis. Typically, convergence rates or error analyses for numerical algorithms are expressed as a function of the step size (for numerical integration, solving differential equations, etc.) or the number of iterations (for any iterative algorithm). It’s standard to seek a bound on the number of iterations one needs to guarantee an error (or relative error) of {\epsilon} rather than {(1+\epsilon) \cdot \mathrm{OPT}}. The idea that for the given input there is an optimal solution, and we want our algorithm to return a solution that is close to that optimal one, is not a standard way of analyzing numerical algorithms. After all, that optimal solution may have terrible error and it’s not easy to determine what the optimal error is.
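To make the contrast in guarantees concrete (the notation below is mine, not Anna’s), the numerical-analysis style bound is of the form

\displaystyle \mathrm{err}(x^{(T)}) \leq \epsilon \quad \text{whenever } T \geq T(\epsilon),

i.e., the error is driven below a tolerance {\epsilon} by taking enough iterations (or a small enough step size), while the TCS-style approximation guarantee reads

\displaystyle \mathrm{err}(\hat{x}) \leq (1+\epsilon)\cdot \mathrm{OPT}, \qquad \mathrm{OPT} = \min_{x \text{ feasible}} \mathrm{err}(x),

i.e., the error is measured relative to the best achievable error on that particular input.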

3) Finally, for many applied mathematicians, computation is a means to an end (e.g., solve the problem, better, faster) as opposed to an equal component of the mathematical problem, one to be studied rigorously for its own sake. And, for a number of TCS researchers, actually making progress on a complicated, real-world problem takes a back seat to the intricate mathematical analysis of the computation. In order for both communities to talk to one another, it helps to understand what matters to each of them.

I think that Michael Mitzenmacher’s response to Boaz’s post is similar to my last point, when he says “I think the larger issue is the slow but (over long periods) not really subtle shift of the TCS community away from algorithmic work and practical applications.” That said, I am not sure either model is better. TCS research can be practical, applied math isn’t as useful as we’d like to think, and solving a problem better and faster can be done only after the kind of thorough, deep understanding that TCS excels at.

Evolving FOCS – mobile edition

October 10, 2014

Not unrelated to our last post, the upcoming FOCS will have a panel-led discussion on “How might FOCS and STOC evolve?” (organized by David Shmoys) on Saturday 10/18 at 6pm. If you are interested in the future of FOCS and STOC, and haven’t yet registered or made your travel plans, I urge you to do so and attend the meeting.

Here is one way this FOCS has evolved: FOCS now has an app –  you can use it to keep track of the schedule, add personal reminders to the talks you want to see, and more. In particular I am thankful to Nadia Heninger and Aaron Roth for the restaurant recommendations – apparently that area of Philadelphia has an amazing selection of great places. (If you don’t see me around during the talks, now you know why…)

To get the app, install the “Guidebook” app on your phone and then search for the “FOCS 2014” guide. While the Guidebook app prompts you to sign up, this is not mandatory, and you can use most of the features without it, though some features, such as connecting with other participants or seeing your reminders and schedule through web access, require it. You may want to download the app and the guide prior to the trip.

I hope to see many of you starting Saturday in Philadelphia!

Boaz

FOCS/STOC: Protect the Venue, Reform the Meeting

October 8, 2014

by Boaz Barak and Omer Reingold

————————————————————————————

The debate about the future of FOCS/STOC has been long and heated. A wide range of criticism (at times containing contradictory complaints) was answered with one simple truth: FOCS/STOC have played and still play an invaluable role for the TOC community. Indeed, the authors of this proposal have a deep connection to FOCS/STOC. Nevertheless, though they are often exaggerated, we do acknowledge the validity of many of the concerns regarding FOCS/STOC. As the community evolves, we feel the need to evolve its central meeting place. So while FOCS/STOC are not broken, and not in urgent need of a fix, we put forth a proposal to improve (perhaps revive) them.

FOCS/STOC play a dual role in our community: as publication venues and as meeting places. In the former role, FOCS/STOC have been incredibly successful: every year many of the best papers in TOC appear in these conferences, and FOCS/STOC papers (including recent ones) have led to major awards, including ACM dissertation awards, the Grace Murray Hopper Award, the Rolf Nevanlinna Prize, the MacArthur fellowship and the Turing Award.

Thus, while undoubtedly FOCS/STOC are not perfect publication venues, our thesis is that their main shortcomings are as meeting places. Indeed, attendance has been flat over the last decade or so, even as the field has seen significant growth, and specialized workshops (such as those at the Simons Institute) often draw an audience half the size of FOCS/STOC. In particular, we feel that while FOCS/STOC provide an opportunity for social meetings and small-group collaborations, they fall short in terms of the wider-range exchange of ideas (specifically, the ideas in the papers published in these conferences). We believe it is possible to revise STOC/FOCS to make them a significantly more attractive event (in a sense, a “must-attend” item on every theoretician’s schedule), and a better forum for exchanging ideas across subfields of TOC, while preserving their nature as publication venues (in particular, no dramatic changes in the number of accepted papers, nor in the selection process).

The crux of our proposal is a single combined FOCS/STOC meeting that will be longer (and scheduled appropriately with respect to the academic year) and that will be specifically designed to allow the spread of ideas of appeal to the general community (thus countering the fragmentation of the community) as well as forums for sub-communities to exchange more specialized ideas. While many details can be open to tweaking, in a nutshell we suggest to have an annual weeklong “Theory Festival”. This theory festival would contain presentations of the STOC and FOCS papers, as well as many other activities, including invited talks, tutorials, mini-courses, workshops, and more. The organizers of the theory festival, which would be logically separate from the FOCS/STOC PC’s, would take as input the paper selection by the PCs, but would have considerable latitude in using this input to assemble an attractive program, including a mix of plenary and highly parallel sessions, or any other way they see fit.

Our Proposal in More Detail

The core of our proposal is to collocate FOCS and STOC (and possibly additional events) into a single, somewhat longer event at an appropriate time of the year (for example, after the end of the academic year). At least at first, the two PCs of FOCS and STOC will operate similarly to their current operation. In particular, a list of accepted papers (including links to online versions) will be made public in a timely fashion. In addition, a separate organizing committee will be responsible for the selection and scheduling of the joint event. The committee will have representation from the two PCs but will have a separate agenda: to create the most effective program, optimizing for the TOC audience rather than the authors. In particular, it is natural to expect that part of the program will be in a plenary session whereas the rest will be organized as a collection of sub-conferences/workshops in multiple parallel sessions.

Attendees of the joint conference should get an opportunity to catch up on the most exciting developments in TOC (research trends, results and techniques) that are ready for a general TOC audience, as well as a more complete perspective on their specialized area of research. For this purpose, in either the plenary session or the parallel sessions, the organizing committee will not be limited to talks by authors of FOCS/STOC accepted papers. Important results that appeared elsewhere should be represented. In addition, surveys of collections of papers may at times be more effective than talks on individual results.

Let us emphasize that at this point, we are suggesting merely to change the event and make no changes to the paper selection process. That is, there will be two separate FOCS and STOC PC’s that will work on a similar schedule as they do today, where at the end of each PC’s process, the list of accepted papers and the electronic proceedings will be published. The only difference would be that the paper presentations would be deferred to the annual “Theory Festival” that is organized by a third committee. Of course, we are not ruling out making changes to the selection process as well. In fact we believe that decoupling to some extent the event from the selection might open some possibilities for improving the latter that would not be otherwise possible.

Advantages, Concerns and Possible Future Extensions

  • As mentioned, the only change necessitated by this proposal is to the meetings, but FOCS/STOC can keep their character as publication venues (both for the authors as well as for external committees that evaluate TOC researchers). On the other hand, the organizing committee will be free to optimize the meetings for the audience experience and for the exchange of ideas. The meetings could also evolve and reflect developments in TOC as a growing research field.
  • FOCS/STOC PCs will not need to select papers in multiple tiers. In addition, the organizing committee will also be freed from choosing the “strongest” papers. The plenary session (while hopefully a prestigious talk opportunity) will not be intended as an award for papers (as again, the focus is on the audience, not the authors).
  • The scheduling choices will be intended to be “ephemeral”. The organizing committee will be free to use non-scientific considerations, including diversity of areas or speakers, in making these choices. It can be conveyed to the speakers that all FOCS/STOC papers were equally selected by the PC, and that it would be “poor form” to note on one’s CV or webpage that a talk was presented in one session rather than another. (Of course one can worry that people will still do that, but the risk of alienating potential evaluators will probably outweigh any benefit, and in any case we believe we should not make our events unattractive just to protect against the possibility of abuse.)
  • The quantity and high quality of the papers accepted to FOCS and STOC together go a long way towards the effect desired from a federated theory conference (which may not be easy to obtain otherwise; also, FOCS/STOC together may provide enough “critical mass” to encourage other conferences to colocate).
  • A single event could significantly increase attendance. In particular, it could be easier for researchers with limited travel budgets or other travel constraints (e.g. young children) to remain part of the community. Moreover, by “network effects”, with people knowing that this is the place they will meet most theorists, it may well be that the number of attendees at this event would be larger than the union of STOC and FOCS.
  • Sub-areas that have grown distant from FOCS/STOC could be welcomed back. As a first step, they could be incorporated as invited talks without asking authors to give up on their more specialized venues. With time, one may hope that more papers from these sub-areas will be submitted to FOCS/STOC. Similarly, papers submitted to venues with inconsistent publication rules (e.g., some ECON journals) could be easily incorporated in the major meeting of the TOC community.
  • A major concern is increased fragmentation of the community due to additional parallel sessions in the non-plenary part of the program. We argue that having a substantial part of the program (say half) in a single session more than compensates for this effect.
  • The organizing committee will also have the flexibility of scheduling plenary survey talks to expose attendees to ideas from outside their area. Furthermore, attendees who used to focus on a few areas of interest (which, in our opinion, characterizes most attendees) are more likely to be exposed to talks outside of their area, given the more selective filtering offered by the plenary session.
  • Some areas (for example Cryptography and Quantum Computing) are more likely to see increased attendance in talks, as at least some of the papers will be in the plenary session, and in any case, we believe there will be increased attendance over the current state. An important concern is the attention to papers that are in more isolated areas, and are not of wide enough appeal to appear in the plenary session. Care should be given to such papers in the program design. It is important to note that these kinds of papers suffer from lack of audience in the current system as well.
  • One could worry that by moving to an annual publication cycle, papers presented will be more “stale” than in the current model. We agree that this is a concern. However, we posit that FOCS and STOC are primarily meant to educate researchers about progress outside their immediate area. While even a few months could be too long a wait to hear about the latest improvement on the problem you’re working on (which is one more reason to be grateful to the arXiv), waiting 6 months to a year to hear a (perhaps more mature and well digested) talk about exciting results in another area may well be acceptable (note that even specialized workshops find value in presentations of papers that are one or two years old). The organizing committee will have considerable latitude in selecting the program; in particular, if the conferences contained a sequence of papers that improved on one another, it may decide to schedule a single talk that surveys all these papers.

On changes to FOCS/STOC as a publishing venue

We acknowledge that, despite their success, many of the critiques of FOCS/STOC are of them as publication venues, including suggestions that they have become too selective, or not selective enough, that papers are too specialized, or too shallow, that the deadline-driven process yields “half-baked” papers, and more. These issues deserve discussion, but we note that our proposal is largely independent of any modifications to the selection process to address them, and we believe it would yield a more attractive event regardless. Moreover, as we mentioned, decoupling the selection from the event naturally allows some modifications, such as selecting more papers or having more deadlines, that may be infeasible in the current model.

Sum of Squares: Upper bounds, lower bounds, and open questions

September 30, 2014

[Note: As I commented on Omer’s touching post, I too was shocked by the sudden closure of the amazingly successful MSR Silicon Valley lab. I hope that this blog, whose contents had very little to do with MSR itself and everything to do with the great group of people that was there, will continue to flourish, independently of its former MSR connection.]

I am teaching a seminar series at MIT with the title above, and thought I would post the introduction to the course here. For the complete notes on the first lecture (and notes of future lectures), please see the course webpage, where you can also sign up for the mailing list to get future updates. While the SOS algorithm is widely studied and used for a great many applications (see for example the courses of Parrilo and Laurent and the book of Lasserre), this seminar will offer a different perspective: that of theoretical computer science. One tongue-in-cheek tagline for this course could be

Rescuing the Sum-of-Squares algorithm from its obscurity as an algorithm that keeps planes up in the sky, and turning it into a way to refute computational complexity conjectures.

Prelude

Consider the following questions:

  1. Do we need a different algorithm to solve every computational problem, or can a single algorithm give the best performance for a large class of problems?
  2. In statistical physics and other areas, many people believe in the existence of a computational threshold effect, where a small change in the parameters of a computational problem seems to lead to a huge change in its computational complexity. Can we give rigorous evidence for this intuition?
  3. In machine learning there often seem to be tradeoffs between sample complexity, error probability, and computation time. Is there a way to map the curve of this tradeoff?
  4. Suppose you are given a 3SAT formula {\varphi} with a unique (but unknown) satisfying assignment {x}. Is there a way to make sense of statements such as “The probability that {x_{17}=1} is {0.6}” or “The entropy of {x} is {1000}” (even though of course information-theoretically {x} is completely determined by {\varphi}, and hence that probability is either {0} or {1} and {x} has zero entropy)?
  5. Is Khot’s Unique Games Conjecture true?

If you learn the answers to these questions by the end of this seminar series, then I hope you’ll explain them to me, since I definitely don’t know them. However, we will see that, although these questions a priori have nothing to do with Sums of Squares, the SOS algorithm can yield a powerful lens to shed light on some of them, and perhaps be a step towards providing some of their answers.

Introduction

Theoretical computer science studies many computational models for different goals. There are some models, such as bounded-depth (i.e. {AC_0}) circuits, that we can prove unconditional lower bounds on, but that do not aim to capture all relevant algorithmic techniques for a given problem. (For example, we don’t view the results of Furst-Saxe-Sipser and Håstad as evidence that computing the parity of {n} bits is a hard problem.) Other models, such as bilinear circuits for matrix multiplication, are believed to be strong enough to capture all known algorithmic techniques for some problems, but then we often can’t prove lower bounds on them.

The Sum of Squares (SOS) algorithm (discovered independently by researchers from different communities, including Shor, Parrilo, Nesterov and Lasserre) can be thought of as another example of a concrete computational model. On one hand, it is sufficiently weak for us to know at least some unconditional lower bounds for it. In fact, there is a sense in which it is weaker than {AC_0}, since for a given problem and input length, SOS is a single algorithm (as opposed to an exponential-sized family of circuits). Despite this fact, proving lower bounds for SOS is by no means trivial, even for a single distribution over instances (such as random graphs) or even a single instance. On the other hand, while this deserves more investigation, it does seem that for many interesting problems, SOS does encapsulate all the algorithmic techniques we are aware of, and there is some hope that SOS is an optimal algorithm for some interesting family of problems, in the sense that no other algorithm with similar efficiency can beat SOS’s performance on these problems. (I should note that we currently have more in the way of intuitions than hard evidence for this, though the unique games conjecture, if true, would imply that SOS is an optimal approximation algorithm for every constraint satisfaction problem and many other problems as well; that said, the SOS algorithm also yields the strongest evidence to date that the UGC may be false…)

The possibility of the existence of such an optimal algorithm is very exciting. Even if at the moment we can’t hope to prove its optimality unconditionally, this means that we can (modulo some conjectures) reduce analyzing the difficulty of a problem to analyzing a single algorithm, and this has several important implications. For starters, it reduces the need for creativity in designing the algorithm, making it required only for the algorithm’s analysis. In some sense, much of the progress in science can be described as attempting to automate and make routine and even boring what was once challenging. Just as we today view as commonplace calculations that past geniuses such as Euler and Gauss spent much of their time on, it is possible that in the future much of algorithm design, which now requires an amazing amount of creativity, would be systematized and made routine. Another application of optimality is automating hardness results— if we prove the optimal algorithm can’t solve a problem X then that means that X can’t be solved by any efficient algorithm.

Beyond just systematizing what we already can do, optimal algorithms could yield qualitatively new insights on algorithms and complexity. For example, in many problems arising in statistical physics and machine learning, researchers believe that there exist computational phase transitions, where a small change in the parameters of a problem causes a huge jump in its computational complexity. Understanding these phase transitions is of great interest both for researchers in these areas and for theoretical computer scientists. The problem is that these problems involve random inputs (i.e., average case complexity) and so, based on the current state of the art, we have no way of proving the existence of such phase transitions based on assumptions such as {\mathbf{P}\neq \mathbf{NP}}. In some cases, such as the planted clique problem, the problem has been so well studied that the existence of a computational phase transition has been proposed as a conjecture in its own right, but we don’t know of good ways to reduce such problems to one another, and we clearly don’t want to have as many conjectures as there are problems. If we assume that an algorithm is optimal for a class of problems, then we can prove a computational phase transition by analyzing the running time of this algorithm as a function of the parameters. While by no means trivial, this is a tractable approach to understanding this question and getting very precise estimates as to the location of the threshold where the phase transition occurs. (Note that in some sense the existence of a computational phase transition implies the existence of an optimal algorithm, since in particular it means that there is a single algorithm {A} such that beating {A}‘s performance by a little bit requires an algorithm taking much more resources.)

Perhaps the most exciting thing is that an optimal algorithm gives us a new understanding of just what it is about a problem that makes it easy or hard, and a new way to look at efficient computation. I don’t find explanations such as “Problem A is easy because it has an algorithm” or “Problem B is hard because it has a reduction from SAT” very satisfying. I’d rather get an explanation such as “Problem A is easy because it has property P” and “Problem B is hard because it doesn’t have P”, where P is some meaningful property (e.g., being convex, supporting some polymorphisms, etc.) such that every problem (in some domain) with P is easy and every problem without it is hard. For that, we would want an algorithm that solves all problems with property P and a proof (or other evidence) that it is optimal. Such an understanding of computation could bear other fruits as well. For example, as we will see in this seminar series, if the SOS algorithm is optimal in a certain domain, then we can use this to build a theory of “computational Bayesian reasoning” that can capture the “computational beliefs” of a bounded-time agent about a certain quantity, just as traditional Bayesian reasoning captures the beliefs of an unbounded-time agent about quantities on which it is given partial information.

I should note that while much of this course is very specific to the SOS algorithm, not all of it is, and it is possible that even if the SOS algorithm is superseded by another one, some of the ideas and tools we develop will still be useful. Also, note that I have deliberately ignored the question of what family of problems the SOS algorithm would be optimal for. This is clearly a crucial issue: every computational model (even {AC_0}) is optimal for some problems, and every model falling short of general polynomial-time Turing machines would not be optimal for all problems. It definitely seems that some algebraic problems, such as integer factoring, have very special structure that makes it hard to conjecture that any generic algorithm (and definitely not the SOS algorithm) would be optimal for them. (See also my previous blog post on this topic.) The reason I don’t discuss this issue is that we still don’t have a good answer for it, and one of the research goals in this area is to understand what should be the right conjecture about the optimality of SOS. However, we do have some partial evidence and intuition, including those arising from the SOS algorithm’s complex (and not yet fully determined) relation to Khot’s Unique Games Conjecture, that leads us to believe that SOS could be an optimal algorithm for a non-trivial and interesting class of problems.

In this course we will see:

  1. A description of the SOS algorithm from different viewpoints— the traditional semidefinite programming/convex optimization view, as well as the proof system view, and the “pseudo-distribution” view.
  2. Discussion of positive results (aka “upper bounds”) using the SOS algorithm to solve graph problems such as sparsest cut, and problems in machine learning.
  3. Discussion of known negative results (aka “lower bounds” / “integrality gaps”) for this algorithm.
  4. Discussion of the interesting (and not yet fully understood) relation of the SOS algorithm to Khot’s Unique Games Conjecture (UGC). On one hand, the UGC implies that the SOS algorithm is optimal for a large class of problems. On the other hand, the SOS algorithm is currently the main candidate to refute the UGC.

2. Polynomial optimization

The SOS algorithm is an algorithm for solving a computational problem. Let us now define what this problem is:

Definition 1 A polynomial equation is an equation of the form {\{ P(x) \geq 0 \}} (in which case it is called an inequality) or an equation of the form {\{ P(x)=0 \}} (in which case it is called an equality), where {P} is a multivariate polynomial mapping {x\in\mathbb{R}^n} to {\mathbb{R}}. The equation {\{ P(x) \geq 0 \}} (resp. {\{ P(x)=0\}}) is satisfied by {x\in\mathbb{R}^n} if {P(x)\geq 0} (resp. {P(x)=0}).

A set {\mathcal{E}} of polynomial equations is satisfiable if there exists an {x} that satisfies all equations in {\mathcal{E}}.

The polynomial optimization problem is to output, given a set {\mathcal{E}} of polynomial equations as input, either an {x} satisfying all equations in {\mathcal{E}} or a proof that {\mathcal{E}} is unsatisfiable.

(Note: throughout this seminar we will ignore all issues of numerical accuracy— assume the polynomials always have rational coefficients with bounded numerator and denominator, and all equalities/inequalities can be satisfied up to some small error {\epsilon>0}.)

Here are some examples of polynomial optimization problems:

  • Linear programming: If all the polynomials are linear then this is of course linear programming, which can be solved in polynomial time.
  • Least squares: If the equations consist of a single quadratic then this is the least squares problem. Similarly, one can capture computing eigenvalues by two quadratics.
  • 3SAT: One can encode a 3SAT formula as degree-3 polynomial equations: the equation {x_i^2 = x_i} is equivalent to {x_i \in \{0,1\}}, and the equation {x_i x_j (1-x_k) = 0} is equivalent to the clause {\overline{x_i} \vee \overline{x_j} \vee x_k} (which by De Morgan equals {\overline{x_i \wedge x_j \wedge \overline{x_k}}}).
  • Clique: Given a graph {G=(V,E)}, the following equations encode that {x} is a {0/1} indicator vector of a {k}-clique: {x_i^2 = x_i}, {\sum x_i = k}, and {x_ix_j = 0} for all {(i,j)\not\in E}.

The SOS algorithm is designed to solve the polynomial optimization problem. As we can see from these examples, the full polynomial optimization problem is NP-hard, and hence we can’t expect SOS (or any other algorithm) to efficiently solve it on every instance.

Exercise 1: Prove that this is the case even if all polynomials are quadratic, i.e. of degree at most {2}.

Understanding how close the SOS algorithm gets in particular cases is the main technical challenge we will be dealing with.

These examples also show that polynomial optimization is an extremely versatile formalism, and many other computational problems (including SAT and CLIQUE) can be directly and easily phrased as instances of it. Henceforth we will ignore the question of how to formalize a problem as a polynomial optimization, and either assume the problem is already given in this form, or use the simplest, most straightforward translation if it isn’t. While there are examples where choosing between different natural formulations could make a difference in complexity, this is not the case (to my knowledge) in the questions we will look at.
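As a concrete illustration (my own sketch, not part of the lecture notes, using the sympy package; the graph and the value of k are arbitrary placeholders), here is how one might generate the clique system from the example above symbolically:

```python
import sympy as sp

def clique_equations(n, edges, k):
    """Polynomial expressions whose common zero set is exactly the set of 0/1
    indicator vectors of k-cliques in the graph with vertices 0..n-1."""
    x = sp.symbols(f"x0:{n}")
    eqs = [xi**2 - xi for xi in x]          # x_i in {0, 1}
    eqs.append(sum(x) - k)                  # exactly k vertices are chosen
    edge_set = {frozenset(e) for e in edges}
    eqs += [x[i] * x[j]                     # chosen vertices must be adjacent
            for i in range(n) for j in range(i + 1, n)
            if frozenset((i, j)) not in edge_set]
    return eqs                              # each expression is constrained to equal 0

# A triangle with a pendant vertex; ask for a 3-clique.
print(clique_equations(4, [(0, 1), (1, 2), (0, 2), (2, 3)], 3))
```

Each returned expression is to be read as “= 0”, matching the Clique encoding in the bullet list above.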

Note: We can always assume without loss of generality that all our equations are equalities, since we can always replace the equation {P(x) \geq 0} by {P(x) - y^2 = 0} where {y} is some new auxiliary variable.
Also, we sometimes will ask the question of minimizing (or maximizing) a polynomial {P(x)} subject to {x} satisfying equations {\mathcal{E}}, which can be captured by looking for the largest {\mu} such that {\mathcal{E} \cup \{ P \geq \mu \}} is satisfiable.

3. The SOS algorithm

The Sum of Squares algorithm is an algorithm to solve the polynomial optimization problem. Given that it is NP-hard, the SOS algorithm cannot run in polynomial time on all instances. The main focus of this course is trying to understand in which cases the SOS algorithm takes a small (say polynomial or quasipolynomial) amount of time, and in which cases it takes a large (say exponential) amount. An equivalent form of this question (which is the one we’ll mostly use) is that, for some small {\ell} (e.g. a constant or logarithmic), we want to understand in which cases the “{n^\ell}-capped” version of SOS succeeds in solving the problem and in which cases it doesn’t, where the “{T(n)}-capped” version of the SOS algorithm halts in time {T(n)} regardless of whether or not it solved the problem.

In fact, we will see that for every value of {\ell}, the SOS algorithm always returns some type of meaningful output. The main technical challenge is to understand whether that output can be transformed into an exact or approximate solution for the polynomial optimization problem.

Definition 2 (Sum of Squares – informal definition)
The SOS algorithm gets a parameter {\ell} and a set of equations {\mathcal{E}}, runs in time {n^{O(\ell)}} and outputs either:

  • An object we will call a “degree-{\ell} pseudo solution” (or more accurately, a degree-{\ell} pseudo-distribution over solutions), or
  • A proof that a solution doesn’t exist.

We will later make this more precise: what exactly is a degree-{\ell} pseudo solution, what exactly is the form of the proof, and how the algorithm works.

History. [Note: this is mostly from memory and not from the primary sources, so double-check this before quoting elsewhere. The introduction of this paper of O’Donnell and Zhou is a good starting point for the history.] The SOS algorithm has its roots in questions raised in the late 19th century by Minkowski and Hilbert of whether every non-negative polynomial can be represented as a sum of squares of other polynomials. Hilbert realized that except for some special cases (most notably univariate polynomials and quadratic polynomials), the answer is negative, and that there is an example (which he established by non-constructive means) of a non-negative polynomial that cannot be represented in this way. It was only in the 1960s that Motzkin gave a very concrete example of such a polynomial:

\displaystyle 1 + x^4y^2 + x^2y^4 - 3x^2y^2 \ \ \ \ \ (1)

In his famous 1900 address, Hilbert asked as his 17th problem whether every non-negative polynomial can be represented as a sum of squares of rational functions. (For example, Motzkin’s polynomial (1) can be shown to be the sum of squares of (I think) four rational functions of denominator and numerator degree at most {6}.) This was answered affirmatively by Artin in 1927. His approach can be summarized as, given a hypothetical polynomial {P} that cannot be represented in this form, using the fact that the rational functions are a field to extend the reals into a “pseudo-real” field {\Tilde{\mathbb{R}}} in which there would actually be an element {\Tilde{x} \in \Tilde{\mathbb{R}}} such that {P(\Tilde{x})<0}, and then using a “transfer principle” to show that there is an actual real {x\in\mathbb{R}} such that {P(x)<0}. (This description is not meant to be understandable, but to make you curious enough to look it up.) Later, in the 1960s and 70s, Krivine and Stengle extended this result to show that any unsatisfiable system of polynomial equations can be certified to be unsatisfiable via a Sum of Squares proof, a result known as the Positivstellensatz.
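As an aside (a standard observation, not from the lecture notes): one quick way to see that Motzkin’s polynomial (1) is non-negative is the arithmetic-geometric mean inequality applied to the three monomials {1}, {x^4y^2} and {x^2y^4}:

\displaystyle \frac{1 + x^4y^2 + x^2y^4}{3} \;\geq\; \left(1 \cdot x^4y^2 \cdot x^2y^4\right)^{1/3} \;=\; x^2y^2,

so {1 + x^4y^2 + x^2y^4 - 3x^2y^2 \geq 0} everywhere, even though, as mentioned above, it is not a sum of squares of polynomials.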

In the late 1990s / early 2000s, there were two separate efforts to get quantitative or algorithmic versions of this result. On one hand, Grigoriev and Vorobjov asked the question of how large the degree of an SOS proof needs to be, and in particular Grigoriev proved several lower bounds on this degree for some interesting polynomials. On the other hand, Parrilo and Lasserre (independently) came up with hierarchies of algorithms for polynomial optimization based on the Positivstellensatz, using semidefinite programming. (Something along those lines was also described by Naum Shor in a 1987 Russian paper, and mentioned by Nesterov as well.)

It took some time for people to realize the connection between all these works; in particular, the relation between Grigoriev-Vorobjov’s work and the works from the optimization literature took some time to be discovered, and even 10 years later it was still the case that some results of Grigoriev were rediscovered and reproven in the Lasserre language.

Applications of SOS. SOS has applications to: equilibrium analysis of dynamics and control (robotics, flight controls, …), robust and stochastic optimization, statistics and machine learning, continuous games, software verification, filter design, quantum computation and information, automated theorem proving, packing problems, etc.

[Images: hornet nest; sphere packing]

Remark: the TCS vs Mathematical Programming view of SOS

While the SOS algorithm is intensively studied in several communities, there are some differences in emphasis between them. While I am not an expert on all SOS works, my impression is that the main characteristics of the TCS viewpoint, as opposed to others, are:

  1. In the TCS world, we typically think of the number of variables {n} as large and tending to infinity (as it corresponds to our input size), and the degree {d} of the SOS algorithm as being relatively small, a constant or logarithmic. In contrast, in the optimization and control world, the number of variables can often be very small (e.g. around ten or so, maybe even smaller) and hence {d} may be large compared to it. Note that since both time and space complexity of the general SOS algorithm scale roughly like {n^d}, even {d=6} and {n=100} would take something like a petabyte of memory (in practice, though we didn’t try to optimize too much, David Steurer and I had a hard time executing a program with {n=16} and {d=4} on a Cornell cluster). This may justify the optimization/control view of keeping {n} small, although if we show that SOS yields a polynomial-time algorithm for a particular problem, then we can hope that we would be able to optimize further and obtain an algorithm that doesn’t require a full-fledged SOS solver.
  2. Typically in TCS our inputs are discrete and the polynomials are simple, with integer coefficients etc. Often we have constraints such as {x_i^2 = x_i} that restrict attention to the Boolean cube, and so we are less concerned with issues of numerical accuracy, boundedness, etc..
  3. Traditionally, people have been concerned with exact convergence of the SOS algorithm: when does it yield an exact solution to the optimization problem. This often precludes {d} from being much smaller than {n}. In contrast, as TCS’ers we would often want to understand approximate convergence: when does the algorithm yield an “approximate” solution (in some problem-dependent sense). Since the output of the algorithm in this case is not actually in the form of a solution to the equations, this raises the question of obtaining rounding algorithms, which are procedures to translate the output of the algorithm to an approximate solution.

4. Several views of the SOS algorithm

We now describe the SOS algorithm more formally. For simplicity, we consider the case that the set {\mathcal{E}} consists only of equalities (which is without loss of generality, as we mentioned before). When convenient, we will assume all the equalities involve homogeneous polynomials of degree {d}. (This can always be arranged by multiplying the constraints.) You can restrict attention to {d=4}; this will capture all of the main issues of the general case.

4.1. SOS Algorithm: convex optimization view

We start by presenting one view of the SOS algorithm, which technically might be the simplest, though perhaps at first not conceptually insightful.

Definition 3 Let {\mathbb{R}^n_d} denote the set of {n}-variate polynomials of degree at most {d}. Note that this is a linear subspace of dimension roughly {n^d}.

We will sometimes also write this as {\mathbb{R}[x]_d} when we want to emphasize that these polynomials take the formal input {x=x_1\ldots x_n}.

Definition 4 Let {\mathcal{E} = \{ p_1 = \cdots = p_m = 0 \}} be a set of polynomial equations where {p_i \in \mathbb{R}^n_d} for all {i}. Let {\ell \in \mathbb{N}} be some integer multiple of {2d}. The degree-{\ell} SOS algorithm either outputs ‘fail’ or a bilinear operator {M:\mathbb{R}^n_{\ell/2} \times \mathbb{R}^n_{\ell/2} \rightarrow \mathbb{R}} satisfying:

  • Normalization: {M(1,1)=1} (where {1} is simply the polynomial {p(x) = 1}).
  • Symmetry: If {p,q,r,s \in \mathbb{R}_{\ell/2}^n} satisfy {pq = rs} then {M(p,q)=M(r,s)}.
  • Non-negativity (positive semidefiniteness): For every {p}, {M(p,p) \geq 0}.
  • Feasibility: For every {i\in[m]}, {p\in \mathbb{R}^n_{\ell/2-d}}, and {q\in \mathbb{R}^n_{\ell/2}}, {M(p_ip,q)=0}.

Exercise 2: Show that if the symmetry and feasibility constraints hold for monomials they hold for all polynomials as well.

Exercise 3: Show that the set of {M}‘s satisfying the conditions above is convex and has an efficient separation oracle.

Indeed, such an {M} can be represented as an {n^{\ell/2} \times n^{\ell/2}} PSD matrix satisfying some linear constraints. (Can you see why?) Thus, by semidefinite programming, finding such an {M}, if it exists, can be done in {n^{O(\ell)}} time (throughout this seminar we ignore issues of precision, etc.).
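To make the “it’s just an SDP” claim concrete, here is a minimal sketch (my own, using the cvxpy package; the toy graph, variable names, and objective are placeholders, not part of the lecture notes). It keeps only the pairwise moments, i.e. the entries playing the role of {M(x_i, x_j)} under the constraints {x_i^2 = 1}, and adds a linear objective on top; this lowest level of the hierarchy is exactly the classic Goemans-Williamson Max-Cut SDP.

```python
import cvxpy as cp

# Toy instance: maximize sum over edges of (1 - x_i x_j)/2 over x in {+-1}^n,
# relaxed by replacing the products x_i x_j with entries of a PSD matrix.
edges = [(0, 1), (1, 2), (2, 0)]   # a triangle
n = 3

M = cp.Variable((n, n), PSD=True)                 # M[i, j] plays the role of M(x_i, x_j)
constraints = [M[i, i] == 1 for i in range(n)]    # the constraints x_i^2 = 1
objective = cp.Maximize(sum((1 - M[i, j]) / 2 for i, j in edges))

prob = cp.Problem(objective, constraints)
prob.solve()
print("relaxation value:", prob.value)            # about 2.25; the true max cut is 2
```

The gap between 2.25 and 2 on this tiny instance is a first taste of the “how close does SOS get” question that the rest of the course is about.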

The question is why this has anything to do with solving our equations; one answer is given by the following lemma:

Lemma 5 Suppose that {\mathcal{E}} is satisfiable. Then there exists an operator {M} satisfying the conditions above.

Proof: Let {x^0} be a solution to the equations and let {M(p,q) = p(x^0)q(x^0)}. Note that {M} clearly satisfies all the conditions. ∎

Since the set of such operators {M} is convex, for every distribution {\mu} over solutions of {\mathcal{E}}, the operator {M(p,q) = \mathbb{E}_{x\sim \mu} p(x)q(x)} also satisfies the conditions. As {\ell} grows, eventually the only operators that satisfy the condition will be of this form.

For this reason we will call {M} a degree-{\ell} pseudo-expectation operator. For a polynomial {p} of degree at most {\ell}, we define {M(p)} as follows: we write {p = \sum \alpha_i p_i} where each {p_i} is a monomial of degree at most {\ell}, and then decompose {p_i = p'_ip''_i} where the degree of {p'_i} and {p''_i} is at most {\ell/2} and then define {M(p) = \sum \alpha_i M(p'_i,p''_i)}. We will often use the suggestive notation {\Tilde{\mathbb{E}} p} for {M(p)}.
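For example (my own illustration), with {\ell = 4} and the monomial {p = x_1x_2x_3x_4}, the two decompositions {(x_1x_2)(x_3x_4)} and {(x_1x_3)(x_2x_4)} give the values {M(x_1x_2, x_3x_4)} and {M(x_1x_3, x_2x_4)}, which agree by the symmetry condition since the two products are the same polynomial; this is the content of the next exercise.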

Exercise 4: Show that {M(p)} is well defined and does not depend on the decomposition.

4.2. Intuition- the Boolean cube

To get some intuition, we now focus attention on the special case where our goal is to maximize some polynomial {p_0(x)} over the Boolean cube {\{ \pm 1 \}^n} (i.e., the set of {x}’s satisfying {x_i^2 = 1}).
This case is not so special in the sense that (a) it captures much of what we want to do in TCS and (b) the intuition it yields largely applies to more general settings.

Recall that we said that for every distribution {\mu} over {x}‘s satisfying the constraints, we can get an operator {M} as above by looking at {\mathbb{E}_{x\sim \mu} p(x)q(x)}. We now show that in some sense every operator has this form, if, in a manner related to and very reminiscent of quantum information theory, we allow the probabilities to go negative.

Definition 6 A function {\mu:\{ \pm 1 \}^n\rightarrow \mathbb{R}} is a degree-{\ell} pseudo-distribution if it satisfies:

  • Normalization: {\sum_{x\in\{\pm 1\}^n} \mu(x) = 1}.
  • Restricted non-negativity: For every polynomial {p} of degree at most {\ell/2}, {\Tilde{\mathbb{E}}_{x\sim \mu}\, p(x)^2 \geq 0}, where we define {\Tilde{\mathbb{E}}_{x\sim \mu} f(x)} as {\sum_{x\in \{\pm 1 \}^n} \mu(x) f(x)}.

Note that if {\mu} were actually pointwise non-negative then it would be an actual distribution on the cube. Thus an actual distribution over the cube is always a pseudo-distribution.
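Here is a small sketch (mine, not from the notes) of what checking the restricted non-negativity condition amounts to at degree {\ell = 2}: form the moment matrix indexed by the monomials {1, x_1, \ldots, x_n} and test that it is positive semidefinite, which is equivalent to {\Tilde{\mathbb{E}}_{x\sim\mu}\, p(x)^2 \geq 0} for every {p} of degree at most {1}.

```python
import itertools
import numpy as np

def is_degree2_pseudodistribution(mu, n, tol=1e-9):
    """mu maps points of {+-1}^n (as tuples) to real 'probabilities'. Checks
    normalization and PSD-ness of the moment matrix indexed by 1, x_1, ..., x_n."""
    cube = list(itertools.product([1, -1], repeat=n))
    if abs(sum(mu.get(x, 0.0) for x in cube) - 1.0) > tol:
        return False
    def pE(f):  # pseudo-expectation of a function on the cube
        return sum(mu.get(x, 0.0) * f(x) for x in cube)
    monomials = [lambda x: 1.0] + [(lambda i: (lambda x: x[i]))(i) for i in range(n)]
    M = np.array([[pE(lambda x, a=a, b=b: a(x) * b(x)) for b in monomials]
                  for a in monomials])
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# The uniform distribution over the cube is (trivially) a pseudo-distribution.
n = 3
uniform = {x: 1 / 2**n for x in itertools.product([1, -1], repeat=n)}
print(is_degree2_pseudodistribution(uniform, n))   # True
```

The interesting objects, of course, are the {\mu}’s that take negative values somewhere and still pass this test.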

Exercise 5: Show that a degree {2n} pseudo-distribution is an actual distribution.

Exercise 6: Show that if {\mu} is a degree-{\ell} pseudo-distribution, then there exists a degree-{\ell} pseudo-distribution {\mu'} such that {\Tilde{\mathbb{E}}_{x\sim \mu} p(x) = \Tilde{\mathbb{E}}_{x\sim \mu'} p(x)} for every polynomial {p} of degree at most {\ell}, and such that {\mu'(x)} is a degree-{\ell} polynomial in the variables of {x}. (Hence for our purposes we can always represent such pseudo-distributions with {n^{O(\ell)}} numbers.)

Exercise 7: Show that for every polynomial {p_0} of degree at most {\ell/2}, there exists a degree {\ell} pseudo-distribution {\mu} on the cube satisfying {\Tilde{\mathbb{E}}_{x\sim \mu} p_0(x) \geq \lambda} if and only if there exists a degree {\ell} pseudo-expectation operator {M} as above satisfying {\{ x_i^2 = 1: i=1..n \}} such that {M(p_0) \geq \lambda}.

Therefore, we can say that the degree-{\ell} SOS algorithm outputs either a degree-{\ell} pseudo-distribution over the solutions to {\mathcal{E}} or ‘fail’ and only outputs the latter if the former doesn’t exist. In particular if it outputs ‘fail’ then there isn’t any actual distribution over the solutions, and so the fact that the algorithm outputs ‘fail’ is a proof that the original equations are unsatisfiable. We will see that by convex duality, the algorithm actually outputs an
explicit proof of this fact that has a natural interpretation.

Exercise 8: (optional, for people who have heard about the Sherali-Adams linear programming hierarchy) Consider the variant of pseudo-distributions in which the condition that the expectation is non-negative on all squares of degree-{\ell/2} polynomials is replaced with the condition that it is non-negative on all non-negative functions depending on at most {\ell} variables. Show that this variant can be optimized over using linear programming and is equivalent to {\ell} rounds of the Sherali-Adams LP.

Are all pseudo-distributions distributions?

For starters, we can always find a distribution matching all the quadratic moments.

Lemma 7 (Gaussian Sampling Lemma) Let {M} be a degree-{\ell} pseudo-expectation operator for {\ell\geq 2}.
Then there exists a distribution {(y_1,\ldots,y_n)} over {\mathbb{R}^n} such that for every polynomial {p}
of degree at most {2}, {M(p) = \mathbb{E} p(y)}. Moreover, {y} is a (correlated) Gaussian distribution.

Note that even if {M} comes from a pseudo-distribution {\mu} over the cube, the output {y} will consist of real numbers that, although satisfying {\mathbb{E} y_i^2 = 1}, will generally not be in {\{ \pm 1 \}}.
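As a toy illustration of the lemma (with made-up pseudo-moment values, not coming from any particular {M}), one can sample such a {y} by taking a Gaussian whose mean and covariance are read off from the degree-{1} and degree-{2} pseudo-moments:

import numpy as np

# Toy degree-2 pseudo-moments on two variables: M(x1) = M(x2) = 0,
# M(x1^2) = M(x2^2) = 1, M(x1*x2) = 0.5.
mean = np.array([0.0, 0.0])
second = np.array([[1.0, 0.5],
                   [0.5, 1.0]])
cov = second - np.outer(mean, mean)       # covariance matching the second pseudo-moments

rng = np.random.default_rng(0)
y = rng.multivariate_normal(mean, cov, size=100_000)

print(y.mean(axis=0))                     # approximately (0, 0)
print((y[:, 0] * y[:, 1]).mean())         # approximately 0.5 = M(x1*x2)
print((y[:, 0] ** 2).mean())              # approximately 1 = M(x1^2), yet y1 is not in {+-1}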

Unfortunately, we don’t have an analogous result for higher moments:

Exercise 9: Prove that if there were an analog of the Gaussian Sampling Lemma for every polynomial {p} of degree at most {6} then P=NP. (Hint: show that you could solve 3SAT. Can you improve the degree to {4}? Maybe {3}?)

Unfortunately, this will not be our way to get fame and fortune:

Exercise 10: Prove that there exists a degree {4} pseudo-distribution {\mu} over the cube such that there does not exist any actual distribution {\nu} that matches its expectation on all polynomials of degree at most {4}. (Can you improve this to {3}?)

5. Sum of Squares Proofs

As we said, when the SOS algorithm outputs ‘fail’ this can be interpreted as a proof that the system of equations is unsatisfiable. However, it turns out that this proof actually has a special form, known as an SOS proof or Positivstellensatz proof.
An SOS proof uses the following rules of inference:

\displaystyle \begin{array}{rl}  p \geq 0 , q \geq 0 &\models p +q \geq 0 \\  p \geq 0 , q \geq 0 &\models pq \geq 0 \\  &\models p^2 \geq 0  \end{array}

They should be interpreted as follows. If you know that a set of conditions {\mathcal{E} = \{ p_1 \geq 0 , \ldots , p_m \geq 0 \}} is satisfied on some set {S}, then any condition derived by the rules above would hold on that set as well. (Note that we only mentioned inequalities above, but of course {\{ p = 0 \}} is equivalent to the conditions {\{ p \geq 0 , -p \geq 0 \}}.)
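For example, from the single constraint {x \geq 0} one can derive {x^3 - 2x^2 + x \geq 0} in two steps:

\displaystyle \begin{array}{rl}  &\models (x-1)^2 \geq 0 \\  x \geq 0 ,\; (x-1)^2 \geq 0 &\models x(x-1)^2 \geq 0  \end{array}

and {x(x-1)^2 = x^3-2x^2+x}. (All intermediate polynomials here have degree at most {3}; the role of the degree is formalized next.)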

Definition 8
Let {\mathcal{E}} be a set of equations. We say that {\mathcal{E}} implies {p\geq 0} via a degree-{\ell} SOS proof,
denoted {\mathcal{E} \models_\ell p \geq 0}, if {p \geq 0} can be inferred from the constraints in {\mathcal{E}} via
a sequence of applications of the rules above where all intermediate polynomials are of syntactic degree {\leq \ell}.

The syntactic degree of the polynomials in {\mathcal{E}} is their degree, while the syntactic degree of
{p+q} (resp. {pq}) is equal to the maximum (resp. the sum) of the syntactic degrees of {p,q}.
That is, the syntactic degree tracks the degrees of the intermediate polynomials without accounting for
cancellations.
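For example, if {p = x^3 + x} and {q = -x^3}, then {p + q = x} has degree {1} but syntactic degree {3}, since the cancellation of the cubic terms is not accounted for.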

(Note: If we kept track of the actual degree instead of the syntactic degree we would get a much stronger proof system, for which we don't have a static equivalent form, and which can prove some things that the static system cannot. See the paper of Grigoriev, Hirsch and Pasechnik for a discussion of this other system.)

Definition 9
Let {\mathcal{E}} be a set {\{ p_1 = \cdots = p_m = 0 \}} of polynomial equalities.
We say that {\mathcal{E}} has a degree-{\ell} SOS refutation if {\mathcal{E} \models_\ell 0 \geq 1}.

It turns out that a degree-{\ell} refutation can always be put in a particular compact static form.

Exercise 11: For every {d < \ell}, prove that {\mathcal{E}= \{ p_1 = \cdots = p_m = 0\}} (where all the {p_i}‘s are of degree {d})
has a degree-{\ell} SOS refutation if and only if there exist polynomials
{q_1,\ldots,q_m} of degree at most {\ell' = O(\ell)} and {r_1,\ldots,r_{m'}} of degree at most {\ell'/2} such that

\displaystyle  \sum q_i p_i = 1 + s  \ \ \ \ \ (2)


where {s = \sum_{i=1}^{m'} r_i^2}, i.e., {s} is a sum of squares.
(It’s OK if you lose a bit in each direction, i.e., in the if direction it could be that {\ell' = 2\ell} while in the only if direction it could be that {\ell'=\ell/2}.)
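As a toy illustration of this static form, the single (clearly unsatisfiable) equation {x^2 + 1 = 0} has a degree-{2} refutation: taking {q_1 = 1} and {s = x^2} gives

\displaystyle 1\cdot(x^2+1) = 1 + x^2,

which is exactly of the form (2).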

Exercise 12: Show that we can take {m'} to be at most {n^{2\ell}}.

Exercise 13: Show that the set {(p_1,\ldots,p_m,s)} satisfying (2) is a convex set with an efficient separation oracle.

Positivstellensatz (Krivine 64, Stengle 74): For every unsatisfiable system {\mathcal{E}} of equalities there exists a finite {\ell} such that {\mathcal{E}} has a degree-{\ell} SOS refutation.

Exercise 14: Prove the Positivstellensatz for systems that include the constraint {x_i^2 = x_i} for all {i}. In this case, show that {\ell} needs to be at most {2n} (where {n} is the number of variables). As a corollary, we get that the SOS algorithm does not need more than {n^{O(n)}} time to solve polynomial equations on {n} Boolean variables. (Not a very impressive bound, but good to know.
In all TCS applications I am aware of, it's easy to show that the SOS algorithm will solve the problem in exponential time.)

Exercise 15: Show that if there exists a degree-{\ell} SOS proof that {\mathcal{E}} is unsatisfiable then there is no
degree-{\ell} pseudo-distribution consistent with {\mathcal{E}}.

SOS Theorem (Shor, Nesterov, Parrilo, Lasserre): Under some mild conditions (see Theorem 2.7 in my survey with Steurer),
there is an {n^{O(\ell)}} time algorithm that, given a set {\mathcal{E}} of polynomial equalities, either outputs:

  • A degree-{\ell} pseudo-distribution {\mu} consistent with {\mathcal{E}} or
  • A degree-{\ell} SOS proof that {\mathcal{E}} is unsatisfiable.

6. Discussion

The different views of pseudo-distributions: The notion of a pseudo-distribution is somewhat counter-intuitive and takes a bit of time to get used to. It can be viewed from the following perspectives:

  • A pseudo-distribution is simply a fancy name for a PSD matrix satisfying some linear constraints, which is the dual object to an SOS proof.
  • SOS proofs of unbounded degree form a sound and complete proof system, in the sense that they can prove
    any true fact (phrased in terms of polynomial equations) about actual distributions over {\mathbb{R}^n}.
  • SOS proofs of degree {d} form a sound but not complete proof system for actual distributions; however, they are a (sound and) complete system for degree-{d} pseudo-distributions, in the sense that any fact that holds not merely for actual distributions but also for degree-{d} pseudo-distributions has a degree-{d} SOS proof.
  • In statistical learning problems (and economics) we often capture our knowledge (or lack thereof) by a distribution.
    If an unknown quantity {X} is selected and we are given observations {y} about it, we often describe our knowledge of {X}
    by the conditional distribution {X|y}.
    In computational problems the observations {y} often completely determine the value of {X}, but a pseudo-distribution
    can still capture our “computational knowledge”.
  • The proof system view can also be considered as a way to capture our limited computational abilities.
    In the example above, a computationally unbounded observer can deduce from the observations {y} all the true facts they imply
    and hence completely determine {X}. One way to capture the limits of a computationally bounded observer is that it can only deduce facts using a more limited, sound but not complete, proof system.

Lessons from History: It took about 80 years from the time Hilbert showed non-constructively that there exist non-negative polynomials that are not sums of squares until Motzkin came up with an explicit example, and even that example has a low-degree SOS proof of non-negativity. One lesson from this is the following:

“Theorem”: If a polynomial {P} is non-negative and “natural” (i.e., constructed by methods known to Hilbert,
which do not include the probabilistic method), then there should be a low-degree SOS proof of this fact.

Corollary (Marley, 1980): If you analyze the performance of an SOS-based algorithm pretending that pseudo-distributions
are actual distributions, then, unless you used Chernoff + union bound type arguments, every little thing gonna be alright.

We will use Marley’s corollary extensively in analyzing SOS algorithms. That is, we will pretend that the pseudo-distributions are actual distributions, and then cross our fingers and hope that the analysis carries over when the algorithm actually works with pseudo-distributions. Thus one can think of pseudo-distributions as a “non type safe” notation that is perhaps not always sound, but makes it easier to phrase and prove theorems that we might not be able to prove otherwise.

There is a recurring theme in mathematics of “power from weakness”. For example, we can often derandomize certain algorithms by observing that they fall in some restricted complexity class and hence can be fooled by a certain pseudorandom generator. Another example, perhaps closer to ours, is that even though the original way people defined calculus, with “infinitesimal” quantities, was based on false premises, much of what they deduced was still correct. One way to explain this is that they used a weak proof system that cannot prove all true facts about the real numbers, and in particular cannot detect if the real numbers are replaced with an object that does have such an “infinitesimal” quantity added to it. In a similar way, if you analyze an algorithm using a weak proof system (e.g., one that is captured by a small-degree SOS proof), then the analysis will still hold even if we replace actual distributions with a pseudo-distribution of sufficiently large degree.

Riding the Wheel of Samsara

September 23, 2014

Dozens of comments following Omer’s post, which confirmed the closure of the lab, would make you believe that it was a magical place where amazing things happened. And indeed, it was. But it was also more than just a place – the lab was a community with its own values, identity, voice and will. In short, it was a living being.

Since the news broke on Thursday, I’ve been searching for the right model to apply to the lab’s sudden demise: Shall we sit shiva? Hold a wake? Eulogize? Stumble through denial, anger, bargaining, and depression towards acceptance? Different cultures deal with loss in remarkably varied ways. Which one is the most applicable to ours? Reflecting back on the history of the lab, which by some accounts goes back more than thirty years and spans several companies, I realized that the right answer had been staring at me all along: the document authored by Naughton and Taylor that beautifully summarized the main principles on which our lab was founded was called “Zen and the Art of Research Management” [1]. How very true! One cycle of many reincarnations and rebirths has just completed.

Buddhists process loss in a manner that may look insensitive and heartless. Instead of grieving they celebrate the chance for a new beginning. Even if no one entity may eventually claim to be MSR SVC’s rightful heir, its spirit of cross-area collaboration, mutual respect, commitment to fundamental research and support of technology transfer will live on.

I am truly grateful for the opportunities afforded to me by 11 years with MSR SVC, the most cherished of which is the list of collaborators that includes dear friends, bright Ph.D. students, some of the strongest minds in CS, and all around excellent fellows. Thank you and good luck!

[1] An elaboration of Naughton and Taylor’s principles, in a piece by the last lab director Roy Levin, can be found here.
