Now that the important event of the STOC deadline has passed, we can talk about trivial matters such as the future of the free world (but of course, like a broken record, I will come back to talking about sum of squares by the end of the post).

A priori, predicting the result of the election seems like an unglamorous and straightforward exercise: you ask $n$ people for their opinions $x_1,\ldots,x_n \in \{0,1\}$ on whether they prefer candidate $0$ or candidate $1$, and you predict that the result will be the majority opinion, with probability about $1-\exp(-|\sum x_i - n/2|^2/n)$. This means that if the two candidates are at least 2 percent apart, then you should get extremely high confidence if you ask some constant factor times 2,500 people.
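As a sanity check on this back-of-the-envelope calculation, here is a small Monte Carlo sketch (the 51/49 split and the sample sizes are illustrative assumptions): it estimates how often a poll's sample majority matches the true majority.

```python
import random

def poll(n, p, trials=1000, seed=0):
    """Fraction of simulated polls of n voters whose sample majority
    matches the true majority, when a p-fraction of the population
    (with p > 1/2) supports candidate 1."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(1 for _ in range(n) if rng.random() < p)
        if votes > n / 2:  # sample majority agrees with true majority
            correct += 1
    return correct / trials
```

With a 2-point gap ($p = 0.51$), polling $n = 2{,}500$ people gets the majority right only roughly 80-85% of the time, while a constant factor more (say $n = 25{,}000$) pushes the confidence very close to 1, consistent with the $1-\exp(-|\sum x_i - n/2|^2/n)$ estimate.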

Yet somehow, different analysts looking at the polls come up with very different numbers for the probability that Trump will win. As of today, 538 says it is 34.2%, the NY Times' Upshot says it is 14%, David Rothschild's PredictWise says it is 16%, and Sam Wang's PEC says it is 2%.

There are several reasons for this discrepancy, including the fact that the U.S. election is not won based on popular vote (though they almost always agree), that we need to estimate the fraction among actual voters as opposed to the general population, that polls could have systematic errors, and of course there is genuine uncertainty in the sense that some people might change their minds.

But at least one of the reasons seems to stem from a problem that TCS folks are familiar with, and that arises in the context of rounding algorithms for convex optimization: understanding higher-order correlations. For example, essentially all these predictors think that there are a few states, such as Florida, New Hampshire, Nevada, and North Carolina, that have a reasonable chance of going either way, but that Trump cannot afford to lose even one of them. Clearly these are neither perfectly independent nor perfectly correlated events, but understanding the correlations between them seems hard, and appears to account for much of the huge discrepancy between the topline predictions.

Even if you do understand the correlations, using them to come up with predictions can be a computationally hard task. The typical approach these forecasters take is to come up with an efficiently samplable distribution that matches the given marginals and correlations, and then run many simulations on this distribution to predict the outcome.
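To illustrate why the correlations matter so much for the topline number, here is a toy simulation (the probabilities and the shared-shock model below are made-up assumptions for illustration, not any forecaster's actual model): a candidate must carry all four swing states, each of which he wins with marginal probability 0.6, and the states share a common national shock with weight $\rho$.

```python
import random
from statistics import NormalDist

def win_all(n_states=4, p_state=0.6, rho=0.5, trials=20000, seed=1):
    """Estimate P(candidate carries all n_states swing states) when each
    state's outcome is driven by a shared national shock (weight sqrt(rho))
    plus independent state noise (weight sqrt(1-rho)); each state is won
    with marginal probability p_state."""
    rng = random.Random(seed)
    t = NormalDist().inv_cdf(1 - p_state)  # threshold giving P(win) = p_state
    a, b = rho ** 0.5, (1 - rho) ** 0.5
    wins = 0
    for _ in range(trials):
        z = rng.gauss(0, 1)  # national shock shared by every state
        if all(a * z + b * rng.gauss(0, 1) > t for _ in range(n_states)):
            wins += 1
    return wins / trials
```

With $\rho = 0$ (independent states) the topline probability is about $0.6^4 \approx 0.13$, while as $\rho \to 1$ (perfectly correlated states) it approaches $0.6$; identical state-level marginals, wildly different topline numbers.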

But coming up with efficiently samplable distributions that match even simple pairwise or three-wise correlations is a computationally hard task. (For constrained pairwise moments this is related to MAX-CUT, while for unconstrained higher moments it is related to SAT; it is possible to do this for unconstrained pairwise moments using the quadratic sampling lemma, also known as the Gaussian copula, which is related to the hyperplane rounding technique of Goemans and Williamson.) For an empirical demonstration, see this blog post of David Rothschild on how problematic it is to find a distribution that matches both the statewise and topline marginals of prediction markets (despite the fact that such a distribution should exist under the efficient market hypothesis). The fact that the moment-matching problem is hard in general means that people use different heuristics to achieve this, and I don't know if they have a good understanding of how the choice of heuristic affects the quality of prediction.
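For the unconstrained pairwise case, the quadratic sampling lemma can be sketched as follows (a toy illustration under stated assumptions, not any forecaster's code): to get $\pm 1$ variables with target correlations $\rho_{ij}$, sample a Gaussian vector with covariance $\sin(\pi \rho_{ij}/2)$ and take signs. By Grothendieck's identity, $\mathbb{E}[\mathrm{sign}(g_i)\,\mathrm{sign}(g_j)] = \tfrac{2}{\pi}\arcsin(\mathbb{E}[g_i g_j])$, so the signs then match the target, provided the adjusted matrix is still PSD.

```python
import numpy as np

def sample_signs(rho, n_samples=100_000, seed=0):
    """Quadratic sampling lemma / Gaussian copula sketch: draw +-1
    variables whose pairwise correlations match the target matrix rho
    (assumed such that sin(pi*rho/2), taken entrywise, is PSD)."""
    rng = np.random.default_rng(seed)
    # Pre-adjust the covariance so that after taking signs,
    # E[sign(g_i) sign(g_j)] = (2/pi) arcsin(adj_ij) = rho_ij.
    adj = np.sin(np.pi * np.asarray(rho) / 2)
    L = np.linalg.cholesky(adj)
    g = rng.standard_normal((n_samples, len(rho))) @ L.T
    return np.sign(g)

# Toy target: three +-1 variables, every pair correlated at 0.3.
rho = np.array([[1.0, 0.3, 0.3],
                [0.3, 1.0, 0.3],
                [0.3, 0.3, 1.0]])
x = sample_signs(rho)
emp = x.T @ x / len(x)  # empirical correlation matrix, approx rho
```

This is exactly the hyperplane-rounding picture: each variable is the sign of a projection of a shared Gaussian vector onto a unit vector, with the adjustment $\sin(\pi\rho/2)$ compensating for the nonlinearity of taking signs.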

In our sum of squares lecture notes we discuss (with a javascript-powered figure!) this issue with respect to the clique problem. Given a graph $G$ with a hidden $k$-clique $S$, if we are computationally bounded and want to make predictions on events such as $A \subseteq S$, we cannot find an efficiently samplable distribution over sets $S'$ that avoids nonsensical predictions such as $\Pr[ |S'| \neq k ] > 0$ or $\Pr[ \{ i,j \} \subseteq S' ] > 0$ for some non-neighbors $i,j$. The sos algorithm (and other convex optimization frameworks) can be thought of as a way to come up with predictions without a matching efficiently samplable distribution, by generalizing to the notion of a pseudo-distribution.

As mentioned before on this blog, STOC 2017 will be part of an expanded “Theory Fest” (http://acm-stoc.org/stoc2017/) which is being planned by a small committee (Sanjeev Arora, Paul Beame, Avrim Blum, and Ryan Williams, as well as SIGACT chair Michael Mitzenmacher and STOC’17 PC chair Valerie King).

One component of Theory Fest will be a series of presentations highlighting some of the best theoretical work of the past two years from other conferences or journals. (A non-exhaustive list of sample venues: ICALP, SODA, CRYPTO, LICS, PODS, PODC, QIP, SPAA, KDD, CCC, SIGMETRICS, Transactions on IT, WWW, ICML/NIPS, Science/Nature, etc.) Invited papers from these venues will be presented in short (20-30 minute) plenary presentations at Theory Fest.

We (the sub-committee in charge of this component) seek suggestions of theoretical papers that have made breakthrough advances, opened up new questions or areas, made unexpected connections, or had significant impact on practice or other sciences. Anyone is welcome to contact us, but we especially invite members of PCs or editorial boards in various venues to send us suggestions.

If you can think of any recent result that satisfies these criteria, or have any related questions, please email stoc17presentations@boazbarak.org (please CC my personal email as well). Suggestions for presentations should include the following information:

1. Name of the paper and authors.
2. Publication venue (conference / journal), with publication date no earlier than January 1, 2015.
3. Short (1-3 paragraph) explanation of the paper and its importance.
4. (Optional) Names of 1-3 knowledgeable experts on the area of this paper.

To ensure maximum consideration, please send us all these suggestions by December 12, 2016. Self-nominations are discouraged.

Thank you,

Theory Fest 2017 short plenaries committee:

Dorit Aharonov (Hebrew University)
Boaz Barak (Harvard University, committee chair)
Paul Beame (University of Washington)
Kamalika Chaudhuri (UC San Diego)
Ronald Fagin (IBM Research – Almaden)
Piotr Indyk (MIT)
Friedhelm Meyer auf der Heide (University of Paderborn)
Eva Tardos (Cornell)
Suresh Venkatasubramanian (University of Utah)

The success of TOC is due not only to the intrinsic intellectual merit of our field but also to many fruitful connections: connections between subfields of TOC (see Avi Wigderson’s depth through breadth), connections with mathematics, connections with other fields in science and the humanities via the computational lens, and connections with industry. All of these powerful connections contribute to the impact and vitality of TOC.

In this spirit, the theory group at Stanford and our theory colleagues all around Silicon Valley are establishing a forum for continuous collaboration (through meetings and electronic means) between theoreticians in industry and academia in the SF Bay Area. We name it TOCA-SV. TOCA stands for Theory of Computing Associated and also, quite appropriately, means “touches” in Spanish. Like others calling for a revolution, I hope that TOCA groups will surface all over.

Our first meeting will take place at Stanford on November 4th. Please join us if you are around and please don’t forget to register.

Students who want to present at this event (in a lightning-talk session) are asked to email us at tocasvalley@gmail.com. Please specify your name, affiliation, adviser (if you have one), year, and email address. We will try to accommodate as many as possible (and future opportunities are planned).

Occupy TOC, power to the people, onwards and upwards or whatever seems appropriate here 🙂

I hope to see many readers of this blog in person tomorrow for the workshop in honor of Avi Wigderson’s 60th birthday (Wed-Sat), which will feature a collection of great speakers talking about a variety of areas in theoretical computer science and mathematics.

But if you can’t make it in person, the talks will be streamed live on the workshop’s website. These talks can be a great resource for anyone, but I especially encourage beginning graduate students to watch them. Watching the kind of high-level talks featured in this workshop (as well as in other venues, including the Theoretically Speaking and open lectures series of the Simons Institute for the Theory of Computing) can be extremely useful for students trying to figure out what area of research to focus on.

As I mentioned before, this term Pablo Parrilo, David Steurer, Pravesh Kothari, and I are teaching two sister seminars at Harvard/MIT and Princeton on the Sum of Squares algorithm. See the website http://sumofsquares.org for details, lecture notes, as well as links to lecture videos and how to sign up to follow the course on Piazza.

But if you want the short, in-person, better-weather version, you might want to sign up for the winter course David Steurer and I will teach on this topic January 4-7, 2017 at UC San Diego.

We hope to touch on how the sos algorithm interacts with questions in computational complexity, approximation algorithms, machine learning, quantum information theory, extremal combinatorics, and more.

This fall I will be teaching a graduate seminar on the Sum of Squares algorithm. Actually, it will be two “sister seminars”. In the Cambridge/Boston area, I (with possibly some guest lectures by Pablo Parrilo) will be teaching the course on Fridays 10am-1pm, alternating between Harvard and MIT. In Princeton, David Steurer and Pravesh Kothari will teach the course on Mondays.

If you are interested in attending the course, or even following it remotely, please see the course web page, and also sign up for its Piazza page (people without a Harvard email can use this form). David and I will be trying to write fairly complete lecture notes, which we will post on the website, and I might also post some summaries on this blog.

Here is the introduction for this course:

Following the proud tradition of previous STOC/FOCS conferences, FOCS’16 will also have a (half) day of workshops/tutorials on Saturday, October 8th, right before the conference starts.

You are invited to submit your workshop or tutorial proposal by August 31st; see details here.

In short: you just need to propose an exciting theme and arrange the speakers. We will take care of logistical details like rooms, AV, coffee breaks, etc.

Note that this is only a half-day event (2:30pm-6pm) since in the morning there will be another not-to-be-missed event: A Celebration of Mathematics and Computer Science: Celebrating Avi Wigderson’s 60th birthday (which actually starts already on Thursday, October 5th). See Boaz’s announcement here.

If you have any questions about the FOCS workshops, feel free to get in touch with the coordinators: Aleksander Madry and Alexandr Andoni.