
Celebrating TCS at STOC 2017

April 18, 2017

STOC 2017 is going to be part of an expanded “Theory Festival” which will include not just the paper presentations, but a host of other activities such as plenary talks and tutorials, workshops, and more.

One of the components I am most excited about is a sequence of invited plenary short talks, where we will get a chance to hear about some exciting recent theoretical works from areas as disparate as theoretical physics and network programming languages, and many others in between.

As the chair of the committee that selected these talks, I was very fortunate to have the help of the committee members, as well as the many nominations we received from leading researchers across a great many fields. I am also grateful to all the speakers who agreed to come despite the fact that in most cases STOC is not their “home conference”. The end result is a collection of talks that is sure to contain interesting and new content for every theoretical computer scientist, and I encourage everyone who can make it to register for the conference and come to Montreal in June.

Here is some information about the talks (in the order of scheduling).

The short descriptions of the talks below are mine and not the authors’: more formal and informative (and maybe even correct 🙂 ) abstracts will be posted closer to the event.

 

Tuesday, June 20, 3:50pm-5:30pm

Alon Orlitsky: Competitive Distribution Estimation: Why is Good-Turing Good

Estimating a distribution from samples is one of the most basic questions in information theory and data analysis, going back at least to Pearson's work in the 1800s. In Alon's wonderful NIPS 2015 paper with Ananda Theertha Suresh (which won the NIPS best paper award), they showed that a somewhat mysterious but simple estimator is nearly optimal, in the sense of providing good competitive guarantees even against ideal offline estimators that have more information.
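
The classical Good-Turing estimator itself fits in a few lines. Here is a toy, unsmoothed Python sketch just to fix ideas; it is my own illustration rather than code from the paper, and it includes neither the smoothing used in practice nor the competitive analysis that the paper is actually about.

from collections import Counter

def good_turing(samples):
    # Toy Good-Turing estimate: symbols seen r times share the probability
    # mass suggested by how many distinct symbols were seen r + 1 times.
    n = len(samples)
    counts = Counter(samples)                  # how often each symbol appeared
    freq_of_freq = Counter(counts.values())    # N_r = number of distinct symbols seen exactly r times

    def adjusted_count(r):
        # Good-Turing adjusted count r* = (r + 1) * N_{r+1} / N_r.
        # Real implementations smooth the N_r's; this toy version just falls
        # back to the raw count r when no symbol was seen exactly r + 1 times.
        if freq_of_freq.get(r + 1, 0) == 0:
            return r
        return (r + 1) * freq_of_freq[r + 1] / freq_of_freq[r]

    probs = {x: adjusted_count(r) / n for x, r in counts.items()}
    unseen_mass = freq_of_freq.get(1, 0) / n   # mass reserved for never-seen symbols
    return probs, unseen_mass

probs, unseen = good_turing("abracadabra")
print(probs, unseen)   # note: the toy estimates are deliberately not renormalized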

 

John Preskill: Is spacetime a quantum error-correcting code?

20th century physics' quest for a “theory of everything” encountered a “slight hitch”: the two most successful theories, general relativity and quantum mechanics, are inconsistent with one another. Perhaps the most promising approach towards reconciling this mismatch is a 20-year-old conjectured isomorphism between two physical theories, known as the “AdS/CFT correspondence”. A great many open questions relating to this approach remain; over the past several years, we have learned that quantum information science might shed light on these fundamental questions. John will discuss some of the most exciting developments in this direction, and in particular will present his recent Journal of High Energy Physics paper with Pastawski, Yoshida, and Harlow, which connects quantum gravity (and black holes in particular) to issues in quantum information theory, and specifically to quantum error-correcting codes.

 

Tim Roughgarden: Why Prices Need Algorithms

In recent years we have seen many results showing the computational hardness of computing equilibria. But in Tim's EC 2015 paper with Inbal Talgam-Cohen (which won the best student paper award), they showed a surprising connection between computational complexity and the question of whether an equilibrium exists at all. It is the latter type of question that is often of most interest to economists, and the paper also gives some “barrier results” for resolving open questions in economics.

 

Wim Martens: Optimizing Tree Pattern Queries: Why Cutting Is Not Enough

Tree patterns are a natural (and practically used) formalism for queries about tree-shaped data such as XML documents. Wim will talk about some new insights on these patterns. It is rare that the counterexample to a 15-year-old conjecture is small enough to print on a T-shirt, but in Wim's PODS 2016 paper with Czerwinski, Niewerth, and Parys (which was presented in the awards session and also chosen as a SIGMOD highlight), they were able to do just that. (Wim did not tell me whether the shirts would be available for sale at the conference…)

 

Wednesday, June 21, 4:15pm-5:30pm

Atri Rudra: Answering FAQs in CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix operations

The Functional Aggregate Query (FAQ) problem generalizes many tasks studied in a variety of communities, including solving constraint-satisfaction problems, evaluating database queries, and problems arising in probabilistic graphical models, coding theory, matrix chain computation, and the discrete Fourier transform. In Atri's PODS 2016 paper with Abo Khamis and Ngo (which won the best paper award and was selected as a SIGMOD highlight), they unified and recovered many old results in these areas, and also obtained several new ones.
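
To get a feel for the flavor of the problem, here is a toy sketch (my own, not from the paper) of the simplest sum-product special case, solved by variable elimination; the FAQ framework systematizes this kind of computation across many different aggregates and semirings. The factors f, g and the elimination order below are made up for illustration.

from itertools import product

# Toy variable elimination for the sum-product query
#   Z = sum over x, y, z of f(x, y) * g(y, z).
# Eliminating one variable at a time avoids ever materializing the full
# three-dimensional table.
DOM = [0, 1]                                              # a tiny domain for every variable
f = {(x, y): 1.0 + x + 2 * y for x, y in product(DOM, DOM)}
g = {(y, z): 1.0 + 3 * y * z for y, z in product(DOM, DOM)}

h = {y: sum(g[(y, z)] for z in DOM) for y in DOM}         # eliminate z
k = {y: sum(f[(x, y)] for x in DOM) for y in DOM}         # eliminate x
Z = sum(k[y] * h[y] for y in DOM)                         # eliminate y

brute = sum(f[(x, y)] * g[(y, z)] for x, y, z in product(DOM, DOM, DOM))
assert abs(Z - brute) < 1e-9                              # sanity check against brute force
print(Z)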

 

Vasilis Syrgkanis: Fast convergence of learning in games

Vasilis will talk about some recent works on the interface of learning theory and game theory. Specifically, he will discuss how natural learning algorithms converge much faster than expected (e.g., at a rate of O(T^{-3/4}) instead of the classical O(1/\sqrt{T})) to the optimum of various games. This is based on his NIPS 2015 paper with Agarwal, Luo, and Schapire, which won the best paper award.
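
As a very rough illustration (my own toy sketch, not the exact algorithm or analysis from the paper), here is an “optimistic” variant of Hedge in a two-player zero-sum matrix game, where each player acts as if the most recent loss vector will repeat; this kind of recency bias is the mechanism behind the faster rates.

import numpy as np

def optimistic_hedge_play(A, T=2000, eta=0.1):
    # Row player minimizes x^T A y, column player maximizes it.
    n, m = A.shape
    Lx, Ly = np.zeros(n), np.zeros(m)            # cumulative losses
    last_lx, last_ly = np.zeros(n), np.zeros(m)  # most recent losses, used as predictions
    avg_x, avg_y = np.zeros(n), np.zeros(m)
    for _ in range(T):
        # Optimistic step: weight actions as if the last loss vector repeats.
        zx = -eta * (Lx + last_lx); zx -= zx.max(); x = np.exp(zx); x /= x.sum()
        zy = -eta * (Ly + last_ly); zy -= zy.max(); y = np.exp(zy); y /= y.sum()
        lx = A @ y                               # row player's per-action loss
        ly = -(A.T @ x)                          # column player's per-action loss
        Lx += lx; Ly += ly
        last_lx, last_ly = lx, ly
        avg_x += x; avg_y += y
    return avg_x / T, avg_y / T

A = np.array([[0.0, 1.0], [1.0, 0.0]])           # a matching-pennies-like game
x, y = optimistic_hedge_play(A)
print(x, y, x @ A @ y)                           # average strategies should approach (1/2, 1/2) and value 1/2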

 

Chris Umans: On cap sets and the group-theoretic approach to matrix multiplication

Chris will discuss the recent breakthroughs on the “cap set problem” and how they led to surprising insights on potential matrix-multiplication algorithms. Based on his Discrete Analysis paper with Blasiak, Church, Cohn, Grochow, Naslund, and Sawin.

 

Thursday, June 22, 3:50pm-5:30pm

Christopher Ré: Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling

Gibbs sampling is one of the most natural Markov chains, arising in many practical and theoretical contexts, but running the algorithm in practice is very expensive. The Hogwild! framework of Chris and his co-authors is a way to run such algorithms in parallel without locks, but it is unclear whether the output distribution is still correct. In Chris's ICML 2016 paper with De Sa and Olukotun (which won the best paper award), they gave the first theoretical analysis of this algorithm.
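
For concreteness, here is a minimal sequential Gibbs sampler for a toy Ising-type model on a cycle (my own sketch, not the paper's setting); the Hogwild!-style question is what happens when many threads perform such conditional updates concurrently, without locks.

import math, random

def gibbs_ising(n=20, beta=0.5, steps=10000, seed=0):
    # Sequential Gibbs sampling on a cycle of n spins in {-1, +1} with coupling beta.
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        s = spins[(i - 1) % n] + spins[(i + 1) % n]        # sum of the two neighboring spins
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))   # P(spin i = +1 | its neighbors)
        spins[i] = 1 if rng.random() < p_plus else -1
    return spins

sample = gibbs_ising()
print(sum(sample) / len(sample))   # magnetization of one (approximate) sample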

 

Nate Foster: The Next 700 Network Programming Languages

I never expected to see Kleene algebra, straight from the heart of Theory B, used for practical packet processing in routers, but this is exactly what was done in this highly influential POPL 2014 paper of Nate with Anderson, Guha, Jeannin, Kozen, Schlesinger, and Walker.
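
To give a flavor of the idea, here is a toy Python sketch of Kleene-algebra-style policy combinators (my own illustration, not the NetKAT language or its actual semantics): a policy maps a packet to a set of packets, and tests, modifications, union, sequential composition, and star are operations on such policies.

def pkt(**fields):
    # Represent a packet as a hashable tuple of sorted (field, value) pairs.
    return tuple(sorted(fields.items()))

def test(field, value):
    # Filter: keep the packet if field == value, otherwise drop it.
    return lambda p: {p} if dict(p).get(field) == value else set()

def assign(field, value):
    # Modification: set field to value.
    def pol(p):
        d = dict(p); d[field] = value
        return {tuple(sorted(d.items()))}
    return pol

def seq(pol1, pol2):
    # Sequential composition: run pol1, then pol2 on each resulting packet.
    return lambda p: {q for r in pol1(p) for q in pol2(r)}

def union(pol1, pol2):
    # Union (the "+" of the algebra): send a copy of the packet through both policies.
    return lambda p: pol1(p) | pol2(p)

def star(pol):
    # Kleene star: apply pol zero or more times, collecting everything reachable.
    def pol_star(p):
        reached, frontier = {p}, {p}
        while frontier:
            frontier = {q for r in frontier for q in pol(r)} - reached
            reached |= frontier
        return reached
    return pol_star

# Forward packets on port 1 to port 2 and vice versa, then see what star reaches.
policy = union(seq(test("port", 1), assign("port", 2)),
               seq(test("port", 2), assign("port", 1)))
print(star(policy)(pkt(port=1, dst="10.0.0.1")))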

 

Mohsen Ghaffari:  An Improved Distributed Algorithm for Maximal Independent Set

Maximal Independent Set is the “crown jewel of distributed symmetry breaking problems”, to use the words of the 2016 Dijkstra Prize citation for the works showing an O(\log n)-time distributed algorithm. In Mohsen's SODA 2016 paper (which won the best paper award), he improved on those works to give a local algorithm in which each vertex finishes the computation in time O(\log d), where d is its degree. Moreover, in graphs of degree n^{o(1)}, all nodes terminate faster than in the prior algorithms, in particular almost matching the known lower bound.
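
For context, here is the classical Luby-style randomized step that the O(\log n)-round algorithms are built on, simulated sequentially round by round (my own sketch of the baseline, not Mohsen's improved algorithm, which lets low-degree vertices finish much earlier).

import random

def luby_mis(adj, seed=0):
    # adj: dict mapping each vertex to the set of its neighbors.
    rng = random.Random(seed)
    live = set(adj)                      # vertices that have not yet decided
    mis = set()
    while live:
        # Each live vertex picks a fresh random priority this round.
        priority = {v: rng.random() for v in live}
        # A vertex joins the MIS if it beats all of its live neighbors.
        winners = {v for v in live
                   if all(priority[v] < priority[u] for u in adj[v] if u in live)}
        mis |= winners
        # Winners and their neighbors are removed from the graph.
        removed = set(winners)
        for v in winners:
            removed |= adj[v]
        live -= removed
    return mis

n = 6
cycle = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}   # a 6-cycle as a toy example
print(luby_mis(cycle))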

 

Valeria Nikolaenko:  Practical post-quantum key agreement from generic lattices

With increasing progress in quantum computing, both the NSA and commercial companies are getting increasingly nervous about the security of RSA, Diffie-Hellman, and elliptic-curve cryptography. Unfortunately, lattice-based crypto, which is the main candidate for “quantum resistant” public-key encryption, was traditionally not efficient enough to be used in real-world web security. This has been changing with recent works. In particular, in Valeria's ACM CCS 2016 paper with Bos et al., they gave a practical scheme based on standard computational assumptions on lattices. This is a follow-up to the New Hope cryptosystem, which is currently implemented in Chrome Canary.
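
To give a very rough sense of the shape of such schemes, here is a toy “noisy Diffie-Hellman over matrices” sketch of LWE-style key agreement (my own illustration with made-up toy parameters; it is not the scheme from the paper, it omits the reconciliation step that real schemes use to fix the occasional disagreeing bit, and it should of course never be used for actual security).

import numpy as np

rng = np.random.default_rng(0)
n, k, q = 256, 4, 2**15                        # toy parameters, far from any vetted parameter set
A = rng.integers(0, q, size=(n, n))            # public uniformly random matrix shared by both parties

def small(shape):
    return rng.integers(-1, 2, size=shape)     # "small" secrets and errors with entries in {-1, 0, 1}

S_a, E_a = small((n, k)), small((n, k))        # Alice's secret and error
B_a = (A @ S_a + E_a) % q                      # Alice's public value
S_b, E_b = small((n, k)), small((n, k))        # Bob's secret and error
B_b = (A.T @ S_b + E_b) % q                    # Bob's public value

K_a = (B_b.T @ S_a) % q                        # Alice computes ~ S_b^T A S_a + small noise
K_b = ((B_a.T @ S_b) % q).T                    # Bob computes   ~ S_b^T A S_a + small noise

# Extract one key bit per entry by rounding to the nearest multiple of q/2.
bits_a = np.round(2.0 * K_a / q).astype(int) % 2
bits_b = np.round(2.0 * K_b / q).astype(int) % 2
print("fraction of agreeing key bits:", np.mean(bits_a == bits_b))

With these toy parameters almost all bits agree; the rare boundary cases are exactly what the reconciliation mechanisms in real schemes are designed to handle.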

Why I dislike TeX (a pre-deadline rant)

April 6, 2017

TeX and LaTeX are, in many ways, amazing pieces of software. Their contribution to improving and enabling scientific communication cannot be questioned, and I have been a (mostly) grateful user. But sometimes even grateful users have to rant a bit…

My main issue with TeX is that, at its heart, it is a programming language. A document is like a program, and it either compiles or doesn’t. This is really annoying when working on large projects, especially towards a deadline when multiple people are editing the document at the same time.

But a document is not like a program: if I make a typo in line 15, that's not an excuse not to show me the rest of the document. In that sense, I much prefer markdown, as it will always produce some output, even if I make some formatting errors. Even the dreaded Microsoft Word will not refuse to produce a document just because I forgot to match a curly brace. (Not that I'd ever use Word over LaTeX!)

In fact, in this day and age, maybe it’s time for programs to behave more like documents rather than the other way around. Wouldn’t it be nice if we could always run a program, and instead of halting at the first sign of inconsistency, the interpreter would just try to guess the most reasonable way to continue with the execution? After all, with enough data one could imagine that it could guess correctly much of the time.

On “external” definitions for computation

March 14, 2017

I recently stumbled upon a fascinating talk by the physicist Nima Arkani-Hamed on The Morality of Fundamental Physics. (“Moral” here is in the sense of “morally correct”, as opposed to understanding the impact of science on society. Perhaps “beauty” would have been a better term.)

In this talk, Arkani-Hamed describes the quest for finding scientific theories in much the same terms as solving an optimization problem, where the solution is easy to verify (or “inevitable”, in his words) once you see it, but where you might get stuck in a local optimum:

The classical picture of the world is the top of a local mountain in the space of ideas. And you go up to the top and it looks amazing up there and absolutely incredible. And you learn that there is a taller mountain out there. Find it, Mount Quantum…. they’re not smoothly connected … you’ve got to make a jump to go from classical to  quantum … This also tells you why we have such major challenges in trying to extend our understanding of physics. We don’t have these knobs, and little wheels, and twiddles that we can turn. We have to learn how to make these jumps. And it is a tall order. And that’s why things are difficult

But what actually caught my attention in this talk is his description that part of what enabled progress beyond Newtonian mechanics was a different, dual way to look at classical physics. That is, instead of the Newtonian picture of particles evolving according to clockwork rules, we think that

The particle takes every path it could between A and B, every possible one. And then imagine that it just sort of sniffs them all out; and looks around; and says, I’m going to take the one that makes the following quantity as small as possible.
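
In more standard notation (this is textbook material and my own gloss, not part of the talk), this is the principle of least, or rather stationary, action: among all paths x(t) from A to B, the one actually taken makes the action

S[x] = \int_{t_A}^{t_B} L(x(t), \dot{x}(t))\, dt

stationary, i.e. \delta S = 0, which yields the Euler-Lagrange equation \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x}. For the usual Lagrangian L = \tfrac{1}{2} m \dot{x}^2 - V(x) this is exactly Newton's m\ddot{x} = -V'(x), so the two pictures agree on classical mechanics.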

 

I know almost no physics and a limited amount of math, but this seems to me to be an instance of moving to an external, as opposed to internal, definition, in the sense described by Tao. (Please correct me if I'm wrong!) As Arkani-Hamed describes, a hugely important paper of Emmy Noether showed how this viewpoint immediately implies the conservation laws, demonstrating that this second viewpoint, in his words, is

simple, and deep, and will always be the right way of thinking about these things.

 

Since determinism is not “hardwired” into this second viewpoint, it is much easier to generalize it to incorporate quantum mechanics.

This talk got me thinking about whether we can find an “external” definition for computation. That is, our usual notion of computation, via Turing machines or circuits, involves a “clockwork”-like view of an evolving state obtained by composing basic steps. Perhaps one of the reasons we can't make progress on lower bounds is that we don't have a more “global” or “external” definition that would somehow capture the property that a function F is “easy” without giving an explicit way to compute it. Alas, there is a good reason that we lack such a definition. The natural proofs barrier tells us that any property that is efficiently computable from the truth table and contains all the “easy” functions (which are an exponentially small fraction of all functions) must contain many, many other functions (in fact, more than 99.99% of all functions); roughly speaking, this is because otherwise the complement of the property would be “natural” in the Razborov-Rudich sense (efficiently recognizable, reasonably dense, and containing only hard functions), and could then be used to distinguish pseudorandom functions from truly random ones. It is sometimes suggested that the way to bypass this barrier is to avoid the “largeness” condition: for example, even a property that contains all functions except a single function G would be useful for proving a lower bound for G, if we could prove that it contains all easy functions. However, I think that to obtain a true understanding of computation, and not just a lower bound for a single function, we will need to find completely new types of nonconstructive arguments.

The immigration ban is still antithetical to scientific progress

March 7, 2017

By Boaz Barak and Omer Reingold

President Trump has just signed a new executive order revising the prior ban on visitors from seven (now six) Muslim-majority countries. It is fundamentally the same order, imposing a blanket 90-day ban on entry of people from six countries, with the conditions for lifting the ban depending on the cooperation of these countries' governments.

One good analysis of the original order called it “malevolence tempered by incompetence”, and indeed the fact that it was incompetently drafted is the main reason the original ban did not survive court challenges. The new version has obviously been crafted with more input from competent people, but it does not change anything about the points we wrote before.

Every country has a duty to protect its citizens, and we have never advocated “open borders”. Indeed, as many people who have visited or immigrated to the U.S. know, the visa process is already very arduous and involves extensive vetting. A blanket policy does not make the U.S. safer. In fact, unlike individual vetting, it actually removes an element of unpredictability for any group that is planning to carry out a terror attack in the U.S. Moreover, this policy (whose first, hastily drafted version was produced without much input from the intelligence community) is not the result of a careful balancing of risks and benefits, but rather an attempt to fulfill an obviously unconstitutional campaign promise of a “Muslim ban” while tailoring it to try to get it past the courts.

This ban hurts the U.S. and hurts science. Much of the progress in science during the 20th century can be attributed to the U.S. becoming a central hub for scientists, welcoming them even from countries it was in conflict with (including 1930s Germany, the cold-war Soviet Union, and, more recently, Iran). This has benefited the whole world, but in particular the U.S., which as a result became the world leader in science and technology during the 20th century. Science is not a zero-sum game, and collaborations and interactions are better for all of us. We continue to strenuously object to this ban, and call on all scientists to do the same.

 

Immigration ban is antithetical to scientific progress

January 26, 2017

By Boaz Barak and Omer Reingold

Update (1/28): If you are an academic who opposes this action, please consider signing the following open letter.

Today, leaked drafts of planned executive actions showed that President Trump apparently intends to issue an order suspending (and possibly permanently banning) entry to the U.S. of citizens of seven countries: Iran, Iraq, Libya, Somalia, Sudan, Syria, and Yemen. As Scott Aaronson points out, one consequence of this is that students from these countries would not be able to study or do research in U.S. universities.

The U.S. has mostly known how to separate its treatment of foreign governments from its treatment of their citizens, whether Cubans, Russians, or others. Based on past records, the danger of terrorism by lawful visitors to the U.S. from the seven countries above is slim to none. But over the years, visitors and immigrants from these countries have contributed immensely to U.S. society, its economy, and the scientific world at large.

We personally have collaborated with, and built on the scientific works of, colleagues from these countries. In particular, both of us are originally from Israel, but have collaborated with scientists from Iran who knew that the issues between the two governments should not stop scientific cooperation.

This new proposed policy is not just misguided; it also directly contradicts the interests of the U.S. and the advancement of science. We call on all our fellow scientists to express their strong disagreement with it, and their solidarity and gratitude for the contributions of visiting and immigrant scientists, without which the U.S., and the state of human knowledge, would not have been the same.

On exp(exp(sqrt(log n))) algorithms.

January 5, 2017

Update: I made a bit of a mess in my original description of the technical details, which was based on my faulty memory from Laci’s talk a year ago at Harvard. See Laci’s and Harald’s posts for more technical information.

 

Laci Babai has posted an update on his graph isomorphism algorithm. While preparing a Bourbaki talk on the work, Harald Helfgott found an error in the original running-time analysis of one of the subroutines. Laci fixed the error, but with a running time that is quantitatively weaker than originally stated, namely \exp(\exp(\sqrt{\log n})) time (hiding poly log log factors). Harald has verified that the modified version is correct.

This is a large quantitative difference, but I think it makes very little actual difference to the (great) significance of this paper. It is tempting to judge theoretical computer science papers by the “headline result”, and hence be more impressed with an algorithm that improves 2^n time to n^3 than with one that improves, say, n^3 to n^{2.9}. However, this is almost always the wrong metric to use.

Improving quantitative parameters such as running time or approximation factors is very useful for creating intermediate challenge problems that force us to come up with new ideas, but ultimately the important contribution of a theoretical work is the ideas it introduces, and not the actual numbers. In the context of algorithmic results, for me the best way to understand what a bound such as \exp(\exp(\sqrt{\log n})) says about the inherent complexity of a problem is whether you meet it “on the way up” or “on the way down”.

Often, if you have a hardness result showing (for example, based on the exponential time hypothesis) that some problem (e.g., shortest codeword) cannot be solved faster than \exp(\exp(\sqrt{\log n})), then you could expect that eventually this hardness will improve further, and that the true complexity of the problem is \exp(n^c) for some c>0 (maybe even c=1). That is, when you meet such a bound “on your way up”, it sometimes makes sense to treat \exp(\sqrt{\log n}) as a function that is “almost polynomial”.

On the other hand, if you meet this bound “on your way down” in an algorithmic result, as in this case, or in cases where, for example, you improve an n^2 algorithm to n\exp(\sqrt{\log n}), then one expects further improvements, and so in that context it sometimes makes sense to treat \exp(\sqrt{\log n}) as “almost polylogarithmic”.
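
To get a concrete sense of where \exp(\exp(\sqrt{\log n})) sits, here is a quick numeric comparison (the choices of n are arbitrary, and I am ignoring the poly log log factors).

import math

for n in [10**3, 10**6, 10**9, 10**12]:
    logn = math.log(n)
    almost_polylog = math.exp(math.sqrt(logn))   # exp(sqrt(log n)): sits between polylog(n) and n^eps
    gi_bound = math.exp(almost_polylog)          # exp(exp(sqrt(log n)))
    print(f"n = {n:>14,}:  exp(sqrt(log n)) ~ {almost_polylog:8.1f}   "
          f"exp(exp(sqrt(log n))) ~ {gi_bound:.2e}   n^2 = {float(n)**2:.2e}")

The inner function \exp(\sqrt{\log n}) stays small (a few hundred even at n = 10^{12}), which is what makes the “almost polylogarithmic” reading natural for bounds like n\exp(\sqrt{\log n}), even though the doubly iterated \exp(\exp(\sqrt{\log n})) is still huge at practical input sizes.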

Of course it could be that for some problems this kind of bound is actually their inherent complexity and not simply the temporary state of our knowledge. Understanding whether and how “unnatural” complexity bounds can arise for natural problems is a very interesting problem in its own right.

Motwani Postdoctoral Fellowship at Stanford

December 25, 2016
I’m happy to invite (in the name of the Stanford theory group) applications for the inaugural Motwani Postdoctoral Fellowship in Theoretical Computer Science, made possible by a gift from the Motwani-Jadeja foundation. Please see application instructions . Please apply by Jan 6, 2017 for full consideration.