The success of TOC is due not only to the intrinsic intellectual merit of our field but also to many fruitful connections: connections between subfields of TOC (see Avi Wigderson’s “depth through breadth”), connections with mathematics, connections with other fields in science and the humanities via the computational lens, and connections with industry. All of these powerful connections contribute to the impact and vitality of TOC.

In this spirit, the theory group at Stanford and our theory colleagues from around Silicon Valley are establishing a forum for ongoing collaboration (through meetings and electronic means) between theoreticians in industry and academia in the SF Bay Area. We name it TOCA-SV. TOCA stands for Theory of Computing Associated and also, quite appropriately, means “touches” in Spanish. Like others calling for a revolution, I hope that TOCA groups will surface all over.

Our first meeting will take place at Stanford on November 4th. Please join us if you are around and please don’t forget to register.

Students who want to present at this event (in a lightning-talk session) are asked to email us at tocasvalley@gmail.com. Please specify your name, affiliation, adviser (if you have one), year, and email address. We will try to accommodate as many as possible (and future opportunities are planned).

Occupy TOC, power to the people, onwards and upwards or whatever seems appropriate here 🙂

I hope to see many readers of this blog in person tomorrow at the workshop in honor of Avi Wigderson’s 60th birthday (Wed-Sat), which will feature a collection of great speakers talking about a variety of areas in theoretical computer science and mathematics.

But if you can’t make it in person, the talks will be streamed live on the workshop’s website. These talks can be a great resource for anyone, but I especially encourage beginning graduate students to watch them. Watching the kind of high-level talks featured in this workshop (as well as in other venues, including the Theoretically Speaking and Open Lectures series of the Simons Institute) can be extremely useful for students trying to figure out what area of research to focus on.

As I mentioned before, this term Pablo Parrilo, David Steurer, Pravesh Kothari, and I are teaching two sister seminars at Harvard/MIT and Princeton on the Sum of Squares algorithm. See the website http://sumofsquares.org for details, lecture notes, and links to lecture videos, as well as instructions for signing up to follow the course on Piazza.

But if you want the short, in-person, better-weather version, you might want to sign up for the winter course David Steurer and I will teach on this topic January 4-7, 2017 at UC San Diego.

We hope to touch on how the SOS algorithm interacts with questions in computational complexity, approximation algorithms, machine learning, quantum information theory, extremal combinatorics, and more.

This fall I will be teaching a graduate seminar on the Sum of Squares algorithm. Actually, it will be two “sister seminars”. In the Cambridge/Boston area, I (with possibly some guest lectures by Pablo Parrilo) will be teaching the course on Fridays 10am-1pm, alternating between Harvard and MIT. In Princeton, David Steurer and Pravesh Kothari will teach the course on Mondays.

If you are interested in attending the course, or even following it remotely, please see the course web page, and also sign up for its Piazza page (people without a Harvard email can use this form). David and I will be trying to write fairly complete lecture notes, which we will post on the website, and I might also post some summaries on this blog.

Here is the introduction for this course:

Following the proud tradition of previous STOC/FOCS conferences, FOCS’16 will also have a (half) day of workshop/tutorials on Saturday, October 8th, right before the conference starts.

You are invited to submit your proposal of workshop or tutorial by August 31st; see details here.

In short: you just need to propose an exciting theme and arrange the speakers. We will take care of the logistical details like rooms, AV, coffee breaks, etc.

Note that this is only a half-day event (2:30pm-6pm) since in the morning there will be another not-to-be-missed event: A Celebration of Mathematics and Computer Science: Celebrating Avi Wigderson’s 60th birthday (which actually starts already on Thursday, October 5th). See Boaz’s announcement here.

If you have any questions about the FOCS workshops, feel free to get in touch with the coordinators: Aleksander Madry and Alexandr Andoni.

STOC Theory Fest 2017 (Montreal June 19-23)

Sanjeev Arora, Paul Beame, Avrim Blum, Ryan Williams

SIGACT Chair Michael Mitzenmacher announced at the STOC’16 business meeting that starting in 2017, STOC will turn into a 5-day event, a Theory Fest. This idea was discussed at some length in a special session at FOCS 2014 and the business meeting at STOC 2015. Now the event is being planned by a small group (Sanjeev Arora, SIGACT ex-chair Paul Beame, Avrim Blum, and Ryan Williams; we also get guidance from Michael Mitzenmacher and STOC’17 PC chair Valerie King). We’re  setting up committees to oversee various aspects of the event.

Here are the major changes (caveat: subject to tweaking in coming years):

(i) STOC talks go into three parallel sessions instead of two, with a slight increase in the number of accepted papers to 100-110.

(ii) STOC papers will also be presented in evening poster sessions (beer/snacks served).

(iii) About 9 hours of plenary sessions, which will include: (a) three keynote 50-minute talks (usually by prominent researchers from theory and nearby fields); (b) plenary 20-minute talks selected from the STOC program by the STOC PC (best papers, and a few others); (c) plenary 20-minute talks on notable papers from the broader theory world in the past year (including but not limited to FOCS, ICALP, SODA, CRYPTO, QIP, COMPLEXITY, SoCG, COLT, PODC, SPAA, KDD, SIGMOD/PODS, SIGMETRICS, WWW, ICML/NIPS), selected by a committee from a pool of nominations. (Many nominees may be invited instead to the poster session.)

(iv) 2-hour tutorials (three in parallel).

(v) Some community-building activities, including grad student activities, networking, career advice, funding, recruiting, etc.

(vi)   A day of workshops; 3 running in parallel. (Total of 18 workshop-hours.)

Our hope is that workshop day(s) will over time develop into a separate ecosystem of regular meetings and co-located conferences (short or long). In many other CS fields the workshop days generate as much energy as the main conference, and showcase innovative, edgy work.

Poster sessions have been largely missing at STOC, but they have advantages: (a) attendees can quickly get a good idea of all the work presented at the conference; (b) grad students and young researchers get more opportunities to present their work and to interact/network, fueled by beer and snacks; (c) attendees get an easy way to make up for having missed a talk during the day, or to ask follow-up questions; (d) posters on work from other theory conferences broaden the intellectual scope of STOC.

We invite other theory conferences to consider co-locating with the Theory Fest. To allow such coordination, in the future the location/dates for the Theory Fest will be announced at least 18 months in advance, preferably 2 years. Even for 2017 it is not yet too late.

Finally, we see the Theory Fest as a work in progress. Feedback from attendees will be actively sought and used to refashion the event.

STOC 2016 just ended, and it included many great results, one highlight of course being Laci Babai’s quasipolynomial-time algorithm for graph isomorphism. But today I wanted to mention another paper that I found quite interesting, one that reminded me of the famous Tolstoy quote:

Happy families are all alike; every unhappy family is unhappy in its own way.

I am talking about the work Reed-Muller Codes Achieve Capacity on Erasure Channels by Shrinivas Kudekar, Santhosh Kumar, Marco Mondelli, Henry D. Pfister, Eren Sasoglu and Rudiger Urbanke. We are used to thinking of some error-correcting codes as being “better” than others in the sense that they have fewer decoding errors. But it turns out that, in some sense, all codes of a given rate have the same average number of errors. The only difference is that “bad” codes (such as the repetition code) have a fairly “smooth” error profile, in the sense that the probability of decoding success decays essentially like a low-degree polynomial in the fraction of erasures, while for “good” codes the decay is like a step function: one succeeds with probability $1$ when the fraction of erasures is smaller than some threshold $\tau$, but this probability quickly decays to half once the fraction passes $\tau$.

Specifically, if $C\subseteq GF(2)^n$ is a linear code of dimension $k$ and $p \in [0,1]$, we let $Y$ be the random variable over $(GF(2)\cup\{\star\})^n$ obtained by sampling a random codeword $X$ in $C$ and erasing (i.e., replacing with $\star$) each coordinate independently with probability $p$. We then define $h_C(p)$ to be the average over all $i$ of the conditional entropy of $X_i$ given $Y_{-i}=(Y_1,\ldots,Y_{i-1},Y_{i+1},\ldots,Y_n)$. Note that for a linear code, the coordinate $X_i$ is either completely fixed by $Y_{-i}$ or is a completely uniform bit, and hence $h_C(p)$ can be thought of as the expected fraction of coordinates that we cannot decode with probability better than half from the unerased remaining coordinates (a random subset in which each of the other coordinates appears independently with probability $1-p$).
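To make this definition concrete, here is a small Python sketch (my own illustration, with made-up helper names, not code from the paper) that computes $h_C(p)$ exactly for a tiny code by enumerating erasure patterns. For a linear code, $X_i$ is determined by the unerased coordinates exactly when the $i$-th column of the generator matrix lies in the GF(2) span of the unerased columns, so the conditional entropy reduces to a rank computation.

```python
import itertools

def gf2_rank(vectors):
    """Rank over GF(2) of bitmask-encoded vectors."""
    pivots = {}  # leading-bit position -> basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in pivots:
                v ^= pivots[lead]
            else:
                pivots[lead] = v
                break
    return len(pivots)

def in_span(v, vectors):
    return gf2_rank(vectors + [v]) == gf2_rank(vectors)

def exit_function(cols, p):
    """h_C(p): average over i of Pr[X_i is undetermined by the unerased
    coordinates among the rest], each other coordinate being erased
    independently with probability p."""
    n = len(cols)
    total = 0.0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for erased in itertools.product([False, True], repeat=n - 1):
            weight = 1.0
            revealed = []
            for j, e in zip(others, erased):
                weight *= p if e else (1 - p)
                if not e:
                    revealed.append(cols[j])
            if not in_span(cols[i], revealed):
                total += weight
    return total / n

# Length-3 repetition code (k=1): every generator-matrix column is [1].
print(exit_function([1, 1, 1], 0.5))  # p^{n-1} = 0.25
```

The enumeration is exponential in $n$, so this is only for building intuition on toy codes.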

One formalization of the notion that all codes have the same average number of errors is known as the Area Law for EXIT functions, which states that for every code $C$ of dimension $k$ and length $n$, the integral $\int_0^1 h_C(p) dp$ depends only on the rate $k/n$ and not on any other structure of $C$. For example, if $C$ is the simple “repetition code” where we simply repeat every symbol $n/k$ times, then the probability that we can’t decode some coordinate from the remaining ones (in which case the entropy is one) is exactly $p^{n/k-1}$, where $p$ is the erasure probability. Hence in this case we can easily compute the integral $\int_0^1 h_C(p)dp = \int_0^1 p^{n/k-1}dp = k/n$, which is exactly the rate of the code. The area law tells us that this holds in general: the average entropy is always equal to the rate of the code. A code is said to be capacity achieving if there is some function $\epsilon=\epsilon(n)$ that goes to zero with $n$ such that $h_C(p)<\epsilon$ whenever $p< 1-k/n-\epsilon$. The area law immediately implies that in this case $h_C(p)$ must be close to one when $p>1-k/n$ (since otherwise the total integral would be smaller than $k/n$), and hence a code is capacity achieving if and only if the function $h_C(p)$ has a threshold behavior. (See figure below).
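The area law can be verified exactly on toy examples: since $\int_0^1 p^e(1-p)^{m-e}\,dp = \frac{e!\,(m-e)!}{(m+1)!}$ (a Beta integral), the integral of $h_C$ is a finite sum over erasure patterns and can be computed in exact rational arithmetic. The sketch below (my own illustration; the helper names are made up) checks it for a repetition code and a single-parity-check code of length 3:

```python
import itertools
from fractions import Fraction
from math import factorial

def gf2_rank(vectors):
    """Rank over GF(2) of bitmask-encoded vectors."""
    pivots = {}  # leading-bit position -> basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in pivots:
                v ^= pivots[lead]
            else:
                pivots[lead] = v
                break
    return len(pivots)

def in_span(v, vectors):
    return gf2_rank(vectors + [v]) == gf2_rank(vectors)

def exit_area(cols):
    """Exact integral of h_C over [0,1], summing the Beta integral
    e!(m-e)!/(m+1)! over the erasure patterns where X_i is undetermined."""
    n, m = len(cols), len(cols) - 1
    total = Fraction(0)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for erased in itertools.product([False, True], repeat=m):
            e = sum(erased)
            revealed = [cols[j] for j, er in zip(others, erased) if not er]
            if not in_span(cols[i], revealed):
                total += Fraction(factorial(e) * factorial(m - e), factorial(m + 1))
    return total / n

rep3 = [1, 1, 1]            # [3,1] repetition code, rate 1/3
spc3 = [0b01, 0b10, 0b11]   # [3,2] single-parity-check code, rate 2/3
print(exit_area(rep3), exit_area(spc3))  # 1/3 and 2/3: the rate, in both cases
```

Two codes with very different error profiles end up with the same area whenever their rates agree, which is exactly the point of the law.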

The paper above uses this observation to show that the Reed-Muller code is capacity achieving for the binary erasure channel. The only property they use is the symmetry of this code, which means that we might as well have defined $h_C(p)$ with respect to some fixed coordinate (e.g., the first one). In this case, using linearity, we can see that for every erasure pattern $\alpha$ on the coordinates $2,\ldots,n$, the entropy of $X_1$ given $Y_2,\ldots,Y_n$ is a Boolean monotone function of $\alpha$. (Booleanity follows because in a linear subspace the entropy of the remaining coordinate is either zero or one; monotonicity follows because in the erasure channel, erasing more coordinates cannot help you decode.) One can then use the sharp-threshold results of Friedgut or Friedgut-Kalai for symmetric monotone Boolean functions to establish the threshold behavior. (The Reed-Muller code has the additional, stronger property of double transitivity, which allows one to deduce that when the fraction of erasures is smaller than the capacity, one can decode not just most coordinates but all coordinates with high probability.)
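The Booleanity and monotonicity claims are easy to check by brute force on a small code: for erasure patterns $\alpha \subseteq \beta$ on coordinates $2,\ldots,n$, if $X_1$ is already undetermined under $\alpha$ it must stay undetermined under $\beta$. A toy Python check (my own sketch; the helper names are made up):

```python
from itertools import product

def gf2_rank(vectors):
    """Rank over GF(2) of bitmask-encoded vectors."""
    pivots = {}  # leading-bit position -> basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in pivots:
                v ^= pivots[lead]
            else:
                pivots[lead] = v
                break
    return len(pivots)

def in_span(v, vectors):
    return gf2_rank(vectors + [v]) == gf2_rank(vectors)

def undetermined(cols, erased):
    """True iff X_1 is not fixed by the unerased coordinates among 2..n."""
    revealed = [cols[j] for j in range(1, len(cols)) if j not in erased]
    return not in_span(cols[0], revealed)

cols = [0b01, 0b10, 0b11]  # generator-matrix columns of a [3,2] code
patterns = [frozenset(j for j in range(1, 3) if pick[j - 1])
            for pick in product([False, True], repeat=2)]
for a in patterns:
    for b in patterns:
        if a <= b and undetermined(cols, a):
            assert undetermined(cols, b)  # erasing more never helps decoding
print("undetermined(X_1) is a monotone Boolean function of the erasure pattern")
```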

How do you prove this area law? The idea is simple. Because of linearity, we can think of the following setting: suppose we take the all-zero codeword, permute its coordinates randomly, and reveal the first $t=(1-p)n$ of them. Then the probability that the $(t+1)^{st}$ coordinate is determined to be zero as well is $1-h_C(p)$. Another way to say this is that if we permute the columns of the $k\times n$ generating matrix $A$ of $C$ randomly, then the probability that the $(t+1)^{st}$ column is linearly independent of the first $t$ columns is $h_C(1-t/n)$. In other words, if we keep track of the rank of the first $t$ columns, then at step $t$ the probability that the rank increases by one is $h_C(1-t/n)$. But since we know that the rank of all $n$ columns is $k$, it follows that $\sum_{t=1}^n h_C(1-t/n) = k$, and approximating the sum by an integral gives $n \int_0^1 h_C(p)dp = k$, which is what we wanted to prove. QED
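The one deterministic fact this argument rests on, that along any ordering of the columns the rank increases exactly $k$ times, can be confirmed directly (again a toy sketch of my own, not from the paper):

```python
from itertools import permutations

def gf2_rank(vectors):
    """Rank over GF(2) of bitmask-encoded vectors."""
    pivots = {}  # leading-bit position -> basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in pivots:
                v ^= pivots[lead]
            else:
                pivots[lead] = v
                break
    return len(pivots)

cols = [0b01, 0b10, 0b11]  # columns of a generator matrix of a [3,2] code
k = gf2_rank(cols)         # dimension: 2
for perm in permutations(cols):
    jumps = sum(gf2_rank(list(perm[:t + 1])) > gf2_rank(list(perm[:t]))
                for t in range(len(cols)))
    assert jumps == k      # every ordering sees exactly k rank increments
print("rank increments always sum to the dimension k =", k)
```

Since the count of increments is $k$ for every ordering, its expectation over a random permutation is also $k$, and decomposing that expectation step by step gives the sum that turns into the integral above.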

p.s. Thanks to Yuval Wigderson, whose senior thesis is a good source for these questions.