## Two years ahead of schedule?

When I taught my crypto course in the spring of 2016, I motivated the study of lattice-based cryptography by presenting the following spoofed NYTimes headline from four years into the future:

It seems like Google is trying to achieve this much earlier:

If and when a convincing “quantum supremacy” demonstration emerges, it will be interesting to see the effect on classical public-key cryptosystems. If communication needs to be kept secret for X years, and it takes Y years to transition to a new standard, then the transition away from RSA, Diffie-Hellman, and elliptic-curve cryptography needs to start the moment we suspect that we are X+Y years from quantum computers large enough to break them, which is probably why the NSA has already started this process.
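The X+Y timing argument above is simple enough to state as code. A minimal sketch (the function name and all numbers are my hypothetical placeholders, not predictions):

```python
# Sketch of the timing argument from the paragraph above.
# All numbers are made-up placeholders, not predictions.

def must_start_migration_by(quantum_year, secrecy_years, transition_years):
    """Latest year to begin transitioning away from pre-quantum crypto.

    quantum_year:     year we suspect large quantum computers will exist
    secrecy_years:    X, how long communications must stay secret
    transition_years: Y, how long it takes to deploy a new standard
    """
    return quantum_year - (secrecy_years + transition_years)

# Hypothetical example: if quantum computers arrive in 2035, secrets must
# hold for 10 years, and a transition takes 5 years, migration must begin
# by 2020.
print(must_start_migration_by(2035, 10, 5))  # prints 2020
```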

(BTW, the “president Trump” part in the spoofed headline was a joke (or at least I thought so at the time): this was two months before Mr. Trump became the GOP presumptive nominee.)

## TOCA-SV: May 12

Our second TOCA-SV meeting will take place at Google, Mountain View, on Friday May 12. We had a very good first meeting, and we are hoping to continue the momentum. Please come and register.

## TheoryFest update

*[Guest post by Sanjeev Arora. Action items: register, book your hotel and flight, and consider submitting a poster.]*

The ACM Symposium on Theory of Computing is morphing this year into a 5-day Theory Fest, as announced earlier. This is going to be a new and exciting kind of event for the theory community, with talks covering a larger set of topics, and many more opportunities to learn new things and to network with colleagues. Registration is now open, and the early registration rate is quite low considering that the event now stretches to 5 days. For more details, visit the conference webpage.

Early registration deadline is **May 21**; Hotel Reservation deadline is **May 19**.

**Event highlights**

- 103 STOC talks in 3 parallel sessions.
- Three keynote speakers:
  **Orna Kupferman** (Graph-theoretic methods in verification), **Avi Wigderson** (Nature and future of TCS), and the **2017 Knuth Prize winner**.
- Three tutorial speakers (2 hours each, in parallel): **Julia Chuzhoy** (Graph minors and algorithms), **Russ Salakhutdinov** (Deep learning), **Aviv Zohar** (Cryptocurrencies).
- 15 plenary short talks, including the STOC prize-winning papers and 11 other papers selected from a host of “theory+” conferences and journals. Areas represented include **databases, programming languages, machine learning, networking, computational economics, security, and theoretical physics**.
- A 1-hour panel discussion on **“Theoretical CS: The next decade”**, moderated by Anna Karlin, with panelists Cynthia Dwork, Russell Impagliazzo, Ankur Moitra, Dan Spielman, Tim Roughgarden, and Andy Yao.
- **Evening poster sessions** on Days 2 and 4, where a total of about 150 papers will be presented; about 50 of these appeared in other theory+ venues (the deadline for being considered is May 1).
- Five exciting workshops on Day 5, touching upon **machine learning, continuous optimization, mechanism design, PCPs, and Sum of Squares algorithms**.

As you can see, it is going to be a fantastic event. It is brought to you thanks to the hard work of several dozen people (listed here), plus hundreds of STOC reviewers. Please come to Montreal to see the amazing event they’ve put together!

Theory Fest 2018 will be in or around Santa Monica, CA, June 23-27, 2018. We welcome other theory meetings to co-locate, or at least to choose a date and place close to Theory Fest.

See you in Montreal!

## Celebrating TCS at STOC 2017

STOC 2017 is going to be part of an expanded *“Theory Festival”* which will include not just the paper presentations, but a host of other activities such as plenary talks and tutorials, workshops, and more.

One of the components I am most excited about is a sequence of **invited plenary short talks**, where we will get a chance to hear about some exciting recent theoretical works from areas as disparate as theoretical physics and network programming languages, and many others in between.

As the chair of the committee that selected these talks, I am very grateful for the work of the committee members, as well as for the many nominations we received from leading researchers across a great many fields. I am also grateful to all the speakers who agreed to come despite the fact that in most cases STOC is not their “home conference”. The end result is a collection of talks that is sure to contain new and interesting content for every theoretical computer scientist, and I encourage everyone who can make it to register for the conference and come to Montreal in June.

Here is some information about the talks (in the order of scheduling).

The short descriptions of the talks below are mine and not the authors’: more formal and informative (and maybe even correct 🙂 ) abstracts will be posted closer to the event.

**Tuesday, June 20, 3:50pm-5:30pm**

**Alon Orlitsky**: *Competitive Distribution Estimation: Why is Good-Turing Good*

Estimating a distribution from samples is one of the most basic questions in information theory and data analysis, going back at least to Pearson’s work in the 1800s. In Alon’s wonderful NIPS 2015 paper with Ananda Theertha Suresh (which also won the NIPS best paper award), they showed that a somewhat mysterious but simple estimator is nearly optimal, in the sense of providing good competitive guarantees even against ideal offline estimators that have more information.
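For readers who have not seen it, here is a minimal sketch of the textbook Good-Turing estimator (the classical version, not the competitive variants analyzed in the paper):

```python
from collections import Counter

def good_turing_missing_mass(samples):
    """Good-Turing estimate of the total probability of *unseen* symbols:
    the fraction of samples that are singletons (symbols seen exactly once)."""
    counts = Counter(samples)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(samples)

def good_turing_estimate(samples):
    """Classical Good-Turing: a symbol observed r times gets probability
    proportional to (r + 1) * N_{r+1} / n, where N_r is the number of
    distinct symbols observed exactly r times. (Raw Good-Turing assigns
    zero to the highest-count symbols; practical versions smooth this.)"""
    n = len(samples)
    counts = Counter(samples)
    freq_of_freq = Counter(counts.values())
    return {sym: (r + 1) * freq_of_freq.get(r + 1, 0) / n
            for sym, r in counts.items()}

# "abracadabra" has two singletons (c and d) among 11 samples,
# so the estimated mass of unseen letters is 2/11.
print(good_turing_missing_mass(list("abracadabra")))
```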

**John Preskill**: *Is spacetime a quantum error-correcting code?*

20th-century physics’ quest for a “theory of everything” encountered a “slight hitch”: its two most successful theories, general relativity and quantum mechanics, are inconsistent with one another. Perhaps the most promising approach toward reconciling this mismatch is a 20-year-old conjectured isomorphism between two physical theories, known as the “AdS/CFT correspondence”. A great many open questions relating to this approach remain; over the past several years, we have learned that quantum information science might shed light on these fundamental questions. John will discuss some of the most exciting developments in this direction, and in particular will present his recent Journal of High Energy Physics paper with Pastawski, Yoshida, and Harlow, which connects quantum gravity (and black holes in particular) to issues in quantum information theory, and specifically to quantum error-correcting codes.

**Tim Roughgarden**: *Why Prices Need Algorithms*

In recent years we have seen many results showing computational hardness of computing equilibria. But in Tim’s EC 2015 paper with Inbal Talgam-Cohen (which won the best student paper award) they showed a surprising connection between computational complexity and the question whether an equilibrium *exists at all*. It is the latter type of question that is often of most interest to economists, and the paper also gives some “barrier results” to resolving open questions in economics.

**Wim Martens**: *Optimizing Tree Pattern Queries: Why Cutting Is Not Enough*

*Tree patterns* are a natural (and practically used) formalism for queries about tree-shaped data such as XML documents. Wim will talk about some new insights on these patterns. It is rare that the counterexample to a 15-year-old conjecture is small enough to print on a T-shirt, but in Wim’s PODS 2016 paper with Czerwinski, Niewerth, and Parys (which was presented in the awards session and also chosen as a SIGMOD highlight) they were able to do just that. (Wim did not tell me whether the shirts will be available for sale at the conference…)

**Wednesday, June 21, 4:15pm-5:30pm**

**Atri Rudra**: *Answering FAQs in CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix operations*

The Functional Aggregate Query (FAQ) problem generalizes many tasks studied in a variety of communities including solving constraint-satisfaction problems, evaluating database queries, and problems arising in probabilistic graphical models, coding theory, matrix chain computation, and the discrete Fourier transform. In Atri’s PODS 2016 paper with Abo Khamis and Ngo (which won the best paper award and was selected as SIGMOD highlight), they unified and recovered many old results in these areas, and also obtained several new ones.
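To give a flavor of the kind of task FAQ generalizes, here is a minimal variable-elimination sketch for a tiny sum-product instance (my illustrative example; the paper’s algorithms are far more general):

```python
import itertools

# Toy sum-product instance: compute sum over x1,x2,x3 of f(x1,x2)*g(x2,x3)
# by eliminating one variable at a time instead of brute force.
# A factor is (scope, table): a tuple of variable names plus a dict
# mapping assignments (tuples of 0/1) to values.

def brute_force(factors, variables):
    total = 0.0
    for assign in itertools.product([0, 1], repeat=len(variables)):
        env = dict(zip(variables, assign))
        prod = 1.0
        for scope, table in factors:
            prod *= table[tuple(env[v] for v in scope)]
        total += prod
    return total

def eliminate(factors, var):
    """Sum out `var`: multiply all factors mentioning it, then marginalize."""
    touching = [(s, t) for s, t in factors if var in s]
    rest = [(s, t) for s, t in factors if var not in s]
    new_scope = tuple(sorted({v for s, _ in touching for v in s} - {var}))
    new_table = {}
    for assign in itertools.product([0, 1], repeat=len(new_scope)):
        env = dict(zip(new_scope, assign))
        total = 0.0
        for x in (0, 1):
            env[var] = x
            prod = 1.0
            for scope, table in touching:
                prod *= table[tuple(env[v] for v in scope)]
            total += prod
        new_table[assign] = total
    return rest + [(new_scope, new_table)]

f = (("x1", "x2"), {(a, b): a + b + 1 for a in (0, 1) for b in (0, 1)})
g = (("x2", "x3"), {(b, c): 2 * b + c + 1 for b in (0, 1) for c in (0, 1)})
factors = [f, g]
for v in ("x1", "x2", "x3"):
    factors = eliminate(factors, v)
# After eliminating everything, one factor with empty scope remains.
print(factors[0][1][()])                          # both print 44.0
print(brute_force([f, g], ("x1", "x2", "x3")))
```

The point of the exercise: elimination touches only the factors mentioning each variable, so with a good elimination order the cost depends on the structure of the instance rather than on the full joint assignment space.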

**Vasilis Syrgkanis**: *Fast convergence of learning in games*

Vasilis will talk about some recent works on the interface of learning theory and game theory. Specifically, he will discuss how natural learning algorithms converge much faster than expected (e.g., at a rate of 1/T instead of the classical 1/√T) to the optimum of various games. This is based on his NIPS 2015 paper with Agarwal, Luo, and Schapire, which won the best paper award.
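To show the flavor of algorithm involved, here is an “optimistic” variant of the multiplicative-weights/Hedge update, which augments the cumulative loss with a prediction of the next loss vector (here, simply the most recent one). This is my illustrative sketch, not necessarily the exact algorithm or parameters analyzed in the paper:

```python
import math

def optimistic_hedge(loss_rounds, eta=0.5):
    """Hedge over k actions with an optimistic twist: weights are computed
    from cumulative losses *plus* a guess of the next loss vector (the
    previous round's losses). Returns the distribution played in the
    final round."""
    k = len(loss_rounds[0])
    cum = [0.0] * k    # cumulative losses
    last = [0.0] * k   # prediction of the next loss = the previous loss
    probs = None
    for loss in loss_rounds:
        scores = [cum[i] + last[i] for i in range(k)]
        m = min(scores)  # shift for numerical stability
        w = [math.exp(-eta * (s - m)) for s in scores]
        z = sum(w)
        probs = [x / z for x in w]
        for i in range(k):
            cum[i] += loss[i]
        last = loss
    return probs

# Action 0 is always better; the played distribution concentrates on it.
p = optimistic_hedge([[0.0, 1.0]] * 100)
print(p)
```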

**Chris Umans****:** *On cap sets and the group-theoretic approach to matrix multiplication*

Chris will discuss the recent breakthroughs on the “cap set problem” and how they led to surprising insights on potential matrix-multiplication algorithms. Based on this Discrete Analysis paper with Blasiak, Church, Cohn, Grochow, Naslund, and Sawin.

**Thursday, June 22, 3:50pm-5:30pm**

**Christopher Ré**: *Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling*

Gibbs sampling is one of the most natural Markov chains, arising in many practical and theoretical contexts, but running the algorithm in practice is very expensive. The Hogwild! framework of Chris and co-authors is a way to run such algorithms in parallel without locks, but it is unclear whether the output distribution is still correct. In Chris’s ICML 2016 paper with De Sa and Olukotun (which won the best paper award) they gave the first theoretical analysis of this algorithm.
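For concreteness, here is a plain *sequential* Gibbs sampler on a toy Ising chain (my sketch of the textbook chain; the paper’s subject is what happens when updates like these are run asynchronously by many threads without locks):

```python
import math
import random

def gibbs_ising_chain(n=10, beta=0.5, steps=5000, seed=0):
    """Sequential Gibbs sampling on an Ising chain of n spins: repeatedly
    pick a spin and resample it from its conditional distribution given
    its neighbors. The asynchronous ("HOGWILD!") setting runs many such
    updates concurrently, without locking the shared state."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        field = 0  # sum of neighboring spins
        if i > 0:
            field += spins[i - 1]
        if i < n - 1:
            field += spins[i + 1]
        # Conditional probability that spin i is +1 given its neighbors.
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        spins[i] = 1 if rng.random() < p_up else -1
    return spins

print(gibbs_ising_chain())
```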

**Nate Foster**: *The Next 700 Network Programming Languages*

I never expected to see Kleene Algebra, straight from the heart of Theory B, used for practical packet processing in routers, but this is exactly what was done by this highly influential POPL 2014 paper of Nate with Anderson, Guha, Jeannin, Kozen, Schlesinger, and Walker.

**Mohsen Ghaffari**: *An Improved Distributed Algorithm for Maximal Independent Set*

*Maximal Independent Set* is the “crown jewel of distributed symmetry breaking problems”, to use the words from the 2016 Dijkstra Prize citation for the works showing an O(log n)-time distributed algorithm. In Mohsen’s SODA 2016 paper (which won the best paper award) he improved on those works to give a local algorithm where each vertex v finishes the computation in time O(log deg(v)) + 2^O(√(log log n)). Moreover, in graphs with maximum degree Δ, all nodes terminate within O(log Δ) + 2^O(√(log log n)) rounds, faster than the prior algorithms, in particular almost matching the known Ω(min{log Δ / log log Δ, √(log n / log log n)}) lower bound.
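For reference, here is a synchronous simulation of the classic Luby-style algorithm that the prize citation refers to (my sketch of the textbook algorithm, not Mohsen’s improved one):

```python
import random

def luby_mis(adj, seed=0):
    """Synchronous simulation of Luby-style distributed MIS: each round,
    every live vertex picks a random value; local minima join the MIS and
    their neighbors drop out. This terminates in O(log n) rounds with
    high probability."""
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        r = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners
        removed = set(winners)
        for v in winners:           # neighbors of MIS vertices drop out
            removed |= (adj[v] & live)
        live -= removed
    return mis

# 5-cycle: any maximal independent set has exactly 2 vertices.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = luby_mis(adj)
print(sorted(mis))
```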

**Valeria Nikolaenko**: *Practical post-quantum key agreement from generic lattices*

With increasing progress in quantum computing, both the NSA and commercial companies are getting increasingly nervous about the security of RSA, Diffie-Hellman, and elliptic-curve crypto. Unfortunately, lattice-based crypto, which is the main candidate for “quantum resistant” public-key encryption, was traditionally not efficient enough to be used in real-world web security. This has been changing with recent works. In particular, in Valeria’s ACM CCS 2016 paper with Bos et al., they gave a practical scheme based on *standard* computational assumptions on lattices. This is a follow-up to the New Hope cryptosystem, which is currently implemented in Chrome Canary.
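To illustrate why LWE-style key agreement can work at all, here is a toy approximate agreement (entirely my illustrative sketch with insecure toy parameters; real schemes such as the one above use carefully chosen parameters and a reconciliation step to turn the approximately-equal values into identical key bits):

```python
import random

# Toy LWE-style approximate key agreement. INSECURE toy parameters,
# for illustration only.
q = 2 ** 15   # modulus
n = 8         # dimension

def small(rng, k):
    """Small secret/noise vector with entries in {-1, 0, 1}."""
    return [rng.randint(-1, 1) for _ in range(k)]

rng = random.Random(42)
A = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]  # public matrix

s, e = small(rng, n), small(rng, n)   # Alice's secret and noise
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(n)]

t, e2 = small(rng, n), small(rng, n)  # Bob's secret and noise
u = [(sum(t[i] * A[i][j] for i in range(n)) + e2[j]) % q for j in range(n)]

# Both sides compute (approximately) t^T A s:
k_alice = sum(u[j] * s[j] for j in range(n)) % q  # = t^T A s + e2.s
k_bob = sum(t[i] * b[i] for i in range(n)) % q    # = t^T A s + t.e

# The two values differ only by the small noise term e2.s - t.e,
# so rounding to high-order bits (with reconciliation) yields a shared key.
diff = (k_alice - k_bob) % q
diff = diff if diff <= q // 2 else diff - q
print(abs(diff))
```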

## Why I dislike TeX (a pre-deadline rant)

TeX and LaTeX are, in many ways, amazing pieces of software. Their contribution to improving and enabling scientific communication cannot be questioned, and I have been a (mostly) grateful user. But sometimes even grateful users have to rant a bit…

My main issue with TeX is that, at its heart, it is a programming language. A document is like a program, and it either compiles or doesn’t. This is really annoying when working on large projects, especially towards a deadline when multiple people are editing the document at the same time.

The issue is that a document is not like a program: if I made a typo in line 15, that’s not an excuse not to show me the rest of the document. In that sense, I much prefer markdown, as it will always produce *some* output, even if I made some formatting errors. Even the dreaded Microsoft Word will not refuse to produce a document just because I forgot to match a curly brace. (Not that I’d ever use Word over LaTeX!)

In fact, in this day and age, maybe it’s time for programs to behave more like documents rather than the other way around. Wouldn’t it be nice if we could always run a program, and instead of halting at the first sign of inconsistency, the interpreter would just try to guess the most reasonable way to continue with the execution? After all, with enough data one could imagine that it could guess correctly much of the time.

## On “external” definitions for computation

I recently stumbled upon a fascinating talk by the physicist Nima Arkani-Hamed on The Morality of Fundamental Physics. (“Moral” here is in the sense of “morally correct”, as opposed to understanding the impact of science on society. Perhaps “beauty” would have been a better term.)

In this talk, Arkani-Hamed describes the quest for finding scientific theories in much the same terms as solving an optimization problem, where the solution is easy-to-verify (or “inevitable”, in his words) once you see it, but the problem is that you might get stuck in a local optimum:

> The classical picture of the world is the top of a local mountain in the space of ideas. And you go up to the top and it looks amazing up there and absolutely incredible. And you learn that there is a taller mountain out there. Find it, Mount Quantum…. they’re not smoothly connected … you’ve got to make a jump to go from classical to quantum … This also tells you why we have such major challenges in trying to extend our understanding of physics. We don’t have these knobs, and little wheels, and twiddles that we can turn. We have to learn how to make these jumps. And it is a tall order. And that’s why things are difficult.

But what actually caught my attention in this talk is his description that part of what enabled progress beyond Newtonian mechanics was a different, dual, way to look at classical physics. That is, instead of the Newtonian picture of an evolution of particles according to clockwork rules, we think that

> The particle takes every path it could between A and B, every possible one. And then imagine that it just sort of sniffs them all out; and looks around; and says, I’m going to take the one that makes the following quantity as small as possible.
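The quantity being minimized here is the classical action; in standard textbook notation (my addition, not from the talk):

```latex
S[q] \;=\; \int_{t_A}^{t_B} L\big(q(t), \dot q(t)\big)\, dt,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot q} \;-\; \frac{\partial L}{\partial q} \;=\; 0 .
```

Requiring the action to be stationary recovers the Euler-Lagrange equations, and hence Newtonian mechanics; Noether’s theorem then associates to every continuous symmetry of S (time translation, spatial translation, rotation) a conserved quantity (energy, momentum, angular momentum).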

I know almost no physics and a limited amount of math, but this seems to me to be an instance of moving to an **external**, as opposed to **internal**, definition, in the sense described by Tao. (Please correct me if I’m wrong!) As Arkani-Hamed describes, a hugely important paper of Emmy Noether showed how this viewpoint immediately implies the conservation laws, and that this second viewpoint, in his words, is

> simple, and deep, and will always be the right way of thinking about these things.

Since determinism is not “hardwired” into this second viewpoint, it is much easier to generalize it to incorporate quantum mechanics.

This talk got me thinking about whether we can find an “external” definition for **computation**. That is, our usual notion of computation via Turing machines or circuits involves a “clockwork”-like view of a state evolving via the composition of basic steps. Perhaps one of the reasons we can’t make progress on lower bounds is that we don’t have a more “global” or “external” definition that would capture the property that a function F is “easy” without giving an explicit way to compute it.

Alas, there is a good reason we lack such a definition. The *natural proofs* barrier tells us that any property that is efficiently computable from the truth table and contains all the “easy” functions (which are an exponentially small fraction of all functions) must contain many other functions as well (in fact, more than 99.99% of all functions). It is sometimes suggested that the way to bypass this barrier is to drop the “largeness” condition: for example, even a property that contains all functions except a single function G would be useful for proving a lower bound for G, if we could prove that it contains all the easy functions. However, I think that to obtain a true understanding of computation, and not just a lower bound for a single function, we will need completely new types of nonconstructive arguments.
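For reference, here is my paraphrase of the Razborov-Rudich definitions behind the barrier (the standard statement, not tailored to any particular lower bound):

```latex
% A property P = \{P_n\} of n-variable Boolean functions is \emph{natural} if it is
%   (i) constructive: membership f \in P_n is decidable in time 2^{O(n)},
%       i.e., polynomial in the truth-table size N = 2^n; and
%  (ii) large: \Pr_f[\, f \in P_n \,] \ge 2^{-O(n)}.
% P is \emph{useful against} \mathrm{P/poly} if every f \in P_n requires
% superpolynomial circuit size. The theorem: if exponentially hard
% pseudorandom functions exist, then
\text{no constructive, large property is useful against } \mathrm{P/poly}.
```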