
Johan Håstad wins Knuth prize

August 16, 2018

Congratulations to Johan Håstad for winning the 2018 Knuth prize! Johan has of course done groundbreaking work, from constructing pseudorandom generators based on one-way functions, through his famous switching lemma, to his PCP theorem, which continues to this day to be the blueprint for much of the work in hardness of approximation. A most deserving winner!

Johan will be presented with the award at the upcoming FOCS.

Book Review: “Factor Man”

August 14, 2018

At the recommendation of Craig Gentry, I recently read the book “Factor Man” by Matt Ginsberg. The book is about a computer scientist who discovers an efficient algorithm for SAT, which sets off an international game of intrigue involving the FBI, the NSA, Chinese spies, Swiss banks, and even some characters we know, such as Steven Rudich and Scott Aaronson.
(However there is no mention of Scott’s role as the criminal mastermind behind the great Philadelphia Airport Heist.)

While it’s by no means “great literature”, Factor Man is a fun page-turner. I think it can be a particularly enjoyable read for computer scientists, as it might prompt you to come up with your own scenarios as to how things would play out if someone discovered such an algorithm. Unsurprisingly, the book is not technically perfect. The technical error that annoyed me the most was that the protagonist demonstrates his algorithm by factoring integers of sizes that can in fact be fairly easily factored today (the book refers to factoring 128- or 256-bit numbers as impressive, while 768-bit integers of general form have been factored; see also this page and this paper). If you just imagine that when the book says “n bit” numbers it actually means n byte numbers, then this is fine. Network security researchers might also take issue with other points in the book (such as the ability of the protagonist to use gmail and blogspot without being identified by either the NSA or Google, as well as using a SAT algorithm to provide a “final security patch” for a product).

Regardless of these technical issues, I recommend reading this book if you’re the type of person who enjoys both computer science and spy thrillers, and I do plan to mention it to students taking my introduction to theoretical CS course.

Physics Envy

August 6, 2018

There is something cool about physics. Black holes, anti-matter, “God’s particle”: it all sounds so exciting. While our TCS “mental experiments” typically involve restricting the inputs of constant-depth circuits, physicists talk about jumping into black holes while holding a dictionary. Physicists also have a knack for names: notions such as “uncertainty principle” or “monogamy of entanglement” sound so much cooler than “Cauchy-Schwarz Inequality” or “Distributive Law”.

But a deeper reason to envy physicists is that (with certain important exceptions) they often have fairly good intuitions about how their systems of interest behave, even if they can’t always prove them. In contrast, we theoretical computer scientists are more often than not completely “in the dark”. A t-step algorithm to compute the mapping x \mapsto f(x) can be modeled as t applications of some simple local update rule to the initial state x. Studying the evolution of systems is the bread-and-butter of physics, but many physical intuitions fail for the progression of an algorithm’s computation. For example, for general algorithms, we do not have a natural sense in which the state of the system after 0.9t steps is “closer” to f(x) than it is to x. The intermediate state of a general algorithm is rather uninformative. Similarly, we do not have a nice, even conjectural, way to characterize the set of functions that can be computed in t steps: the lack of such clean “complexity measures” is strongly related to the natural proofs barrier for proving circuit lower bounds.
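To make the “local update rule” picture concrete, here is a toy sketch (entirely my own, with an arbitrary choice of rule; nothing below is specific to any real algorithm): the state is a bit string, a step applies a simple local rule at every position, and the Hamming distance from the input gives no meaningful sense of “progress” toward the output.

# A toy model of a t-step computation: t applications of a local update rule
# to the initial state x. The rule here (elementary cellular automaton rule 110)
# is just a stand-in for a generic local rule.

def step(state, rule=110):
    # each bit is updated as a function of its 3-bit neighborhood
    n = len(state)
    return [(rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
            for i in range(n)]

x = [0] * 20 + [1] + [0] * 20   # the initial state x
state, t = list(x), 30
for i in range(t):
    state = step(state)
    # the Hamming distance from x is not a useful "progress measure" toward f(x)
    print(f"step {i + 1:2d}: Hamming distance from x = {sum(a != b for a, b in zip(state, x))}")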

But there are some algorithms for which better “physical intuition” exists. Many optimization algorithms have a “potential function” that improves at every step, and other algorithms, such as Markov chain Monte Carlo (MCMC) sampling, are inspired by and can be analyzed using physics intuition. These connections have recently been explored, resulting both in new algorithms and in a better understanding of algorithmic techniques for optimization and learning, as well as of the regimes in which they apply.
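As one small, hedged illustration of the physics-inspired side (a toy example of mine, not any of the specific works alluded to above): a Metropolis-style simulated-annealing heuristic for Max-Cut, where the “energy” is minus the cut value and a slowly decreasing “temperature” controls how willing the walk is to accept moves that make the cut worse.

import math, random

# Simulated annealing for Max-Cut on a tiny toy graph. Each vertex gets a +/-1
# "spin"; the cut consists of edges whose endpoints have opposite spins.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
spins = [random.choice([-1, 1]) for _ in range(n)]

def cut_value(s):
    return sum(1 for u, v in edges if s[u] != s[v])

T = 2.0                                   # temperature
for _ in range(5000):
    i = random.randrange(n)               # propose flipping one vertex
    before = cut_value(spins)
    spins[i] *= -1
    delta = cut_value(spins) - before     # change in cut value
    # Metropolis rule: always accept improvements; accept a worsening move
    # with probability exp(delta / T), which shrinks as the system "cools".
    if delta < 0 and random.random() > math.exp(delta / T):
        spins[i] *= -1                    # reject the move
    T *= 0.999

print("cut value found:", cut_value(spins), "edges out of", len(edges))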

 
There are also cases where computer-science intuition can help in analyzing physical systems. Quantum computers are of course one example, but apparently there are other interesting physical systems (maybe even black holes? see also this summer school) which are “disordered” enough that the best way of thinking of them might be to treat them as random circuits of a certain complexity. More generally, in recent years physicists have begun to view information and computation as an increasingly useful lens through which to understand physics. The “it from qubit” perspective, whereby spacetime emerges from information rather than the other way around, is growing in popularity.

 
Finally, on a more “meta” level, the task of doing theoretical physics itself can be thought of as a computational problem. In a fascinating talk, the physicist Nima Arkani-Hamed discusses the problem of finding a theory of physics as essentially solving an optimization problem in the “space of ideas”. Specifically, it is a non-convex problem, and so a local optimum is not necessarily a global one. Arkani-Hamed calls classical physics “the top of a local mountain in the space of ideas”, i.e., a local optimum, while quantum mechanics is the top of a “taller mountain”. The reason it was hard to make the leap to quantum mechanics is exactly because “they’re not smoothly connected”.

By classical physics being a local optimum, we mean that if you try to “tweak” classical physics by turning (in his words) “knobs, and little wheels, and twiddles”, you will only get a theory that is less beautiful and has less explanatory power. To get to the better theory of quantum mechanics, one needs to make a conceptual jump, rather than a series of small tweaks. Just like classical physics, quantum mechanics itself is a local optimum, for which every small “tweak” will only make it less beautiful and less predictive. This is one explanation as to why it has been so difficult to find the grander theory that unifies general relativity and quantum mechanics. As Arkani-Hamed says, to find such a theory “there’s going to have to be a jump of a comparable magnitude, in the jump that people have to make in going from classical to quantum”. In a point related to “it from qubit”, he also says that “many, many of us suspect that the notion of spacetime can’t be fundamental and it has to be replaced by something else.”
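Arkani-Hamed’s metaphor maps neatly onto the familiar picture of local search on a non-convex landscape. A throwaway sketch (mine, with a made-up two-peak “landscape” that has nothing to do with the actual space of physical theories): greedy small tweaks climb the nearer, lower peak and get stuck there, while the taller peak can only be reached by a jump.

import math

def height(x):
    # a made-up landscape with a lower peak near x = 1 and a taller one near x = 4
    return 2.0 * math.exp(-(x - 1) ** 2) + 3.0 * math.exp(-(x - 4) ** 2)

x, tweak = 0.0, 0.01
while True:
    # try small tweaks in both directions; move only if the height improves
    best = max((x - tweak, x + tweak), key=height)
    if height(best) <= height(x):
        break
    x = best

print(f"small tweaks stop at x = {x:.2f}, height {height(x):.2f}")
print(f"the taller mountain near x = 4 has height {height(4.0):.2f}")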

 
Interestingly, in certain settings convex optimization can be applied to explore the “space of ideas”. In particular, some works on the “bootstrap method” use semidefinite programming to explore the space of quantum field theories that satisfy certain symmetries. It turns out that sometimes the constraints one can derive from these symmetries are so powerful that they completely determine the theory.
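I know very little about the bootstrap, but the computational core is easy to caricature: a semidefinite feasibility problem, where positive semidefiniteness plays the role of unitarity and linear constraints play the role of crossing symmetry; if the SDP is infeasible, the candidate theory is ruled out. A minimal sketch in cvxpy, with constraints that are entirely made up for illustration:

import cvxpy as cp

# Is there a positive semidefinite matrix consistent with a few linear constraints?
# (Real bootstrap SDPs are vastly larger and encode actual physics; this is only a
# cartoon of the "positivity + linear constraints" structure.)
X = cp.Variable((3, 3), PSD=True)        # positivity ("unitarity") constraint
constraints = [
    X[0, 0] == 1,                        # a made-up normalization condition
    X[0, 1] + X[1, 2] == 0.5,            # made-up linear ("crossing-like") constraints
    cp.trace(X) <= 2,
]
prob = cp.Problem(cp.Minimize(0), constraints)   # a pure feasibility problem
prob.solve()
print("status:", prob.status)            # "optimal" here just means: not ruled out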

 

A fall seminar

The bottom line is that we’re seeing a more and more interesting exchange of ideas between physics and theoretical computer science. As I’m sure I already demonstrated in some cringe-worthy statements above, I know very little about this interface, but am interested in finding out more.

So, Tselil Schramm and I will be running a graduate seminar this fall. We will be learning together with the seminar participants about some of these connections, and hopefully by the end of the term we will all be a little less ignorant.

Some of the topics we will discuss include:

  • Connections between statistical physics and algorithms, understanding the physics predictions for hard and easy regimes via phase transitions.

  • Quantum information theory: quantum-inspired classical results, as well as classical algorithms for quantum problems.

  • The conformal bootstrap: exploring the space of possible physical theories using semidefinite programming.

  • Black holes, bulk/boundary correspondence, and computational complexity.

  • Quantum superiority – understanding the current proposals for demonstrating exponential speedups for quantum computers, and the evidence for their classical difficulty.

  • Quantum Hamiltonian Complexity – the quantum analog of constraint satisfaction problems, with questions such as the existence of a “Quantum PCP Theorem”.

Each one of these is probably worth a semester course on its own, and is typically presented to people with significant physics background. But we are hoping we can create a “tasting menu” and manage to take away some insights and ideas from each of those areas, even if we can’t cover the whole ground.

Participants in the seminar will not only present papers or surveys, but also write a blog post about them, which I will post here, so stay tuned for more information.

Beyond CRYPTO workshop: August 19

August 1, 2018

[Unrelated note: Huge congratulations to Costis Daskalakis – winner of the 2018 Nevanlinna Prize!]

As part of the CRYPTO 2018 conference (August 19-23, Santa Barbara, CA), there is a set of affiliated events. The conference organizers (Tal Rabin, Elette Boyle, and Fabrice Benhamouda) asked me to advertise the workshop Beyond Crypto: A TCS Perspective (itself organized by Yuval Ishai and Guy Rothblum) on August 19th, which may be of particular interest to theoretical computer scientists.

Among the speakers will be:

  • Aleksander Madry (MIT) will talk about “Machine Learning and Security: The Good, the Bad, and the Hopeful”
  • Cynthia Dwork (Harvard) will talk about “Theory for Society: Crypto on Steroids”.
  • Virginia Vassilevska Williams (MIT) will talk about “A Fine-Grained Approach to Complexity”
  • Mary Wootters (Stanford) will talk about “Cryptography, Local Decoding, and Distributed Storage”
  • Scott Aaronson (UT Austin) will talk about “Certified Randomness from Quantum Supremacy”.

(I am also speaking in this workshop, and my talk is titled “On Optimal Algorithms and Assumption Factories”).

 

Theoryfest recap and FOCS call for workshops

July 1, 2018

I just came back from a wonderful TheoryFest in LA. There was a fantastic program, including not just the paper presentations but also tutorials, keynote talks, plenary short papers, and workshops, as well as other events including the junior/senior lunches, the STOC 50th birthday celebration, and probably others that I am forgetting right now.

Still, while we are all taking down our TheoryFest trees, we can remind ourselves that there are other holidays on the theory calendar. In particular, FOCS 2018 will be held in October in Paris, and it too contains a “workshop and tutorial day”. If you want to organize a workshop or tutorial, see the call for proposals. Key points are:

  • Workshop and Tutorial Day: Saturday, October 6, 2018 (Paris)
  • Workshop and Tutorial Co-Chairs: Robert Kleinberg and James R. Lee
  • Submission deadline: August 1st, 2018
  • Notification: August 6th, 2018
  • Send proposals and questions to focs2018workshops@gmail.com 

There are worse things in life than organizing a workshop in Paris...

 

Awesome Speakers at TheoryFest Computational Thresholds Workshop Tomorrow

June 29, 2018

Guest post by Sam Hopkins

I just got back from dinner with some of the great speakers who will be at our TheoryFest workshop tomorrow afternoon on computational thresholds for average-case problems, and I am very excited for what’s coming! Since I didn’t get much chance to introduce the speakers in my last post, and not all of them are the “usual suspects” for a STOC workshop, I’d like to take a few paragraphs to do so here, and discuss a little further what they might speak about.

The talks will run the gamut from statistical physics to machine learning to good old theory of computing, and in particular will aim to address questions at their 3-wise intersection. This intersection is full of open problems which are both interesting and approachable. I hope to see lots of people there!

On to the speakers:

Florent Krzakala: Florent is a statistical physicist at the Sorbonne in Paris. For more than 10 years he has been one of the leaders in studying high-dimensional statistical inference and average-case computational problems through the lens of statistical physics.

One of my favorite lines of Florent’s work was the construction (with several others) of random measurement matrices A and corresponding sparse-signal reconstruction algorithms which can recover a random sparse vector x \in \mathbb{R}^n with \rho n nonzero entries from the measurement Ax, where the dimension of A is only \rho n \times n. (That is, algorithms which recover a \rho n-sparse vector from only \rho n measurements, not \rho n \log n or even 10 \rho n!) This involved bringing together insights from compressed sensing and spin-glass theory in a rather remarkable way.
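For readers who haven’t seen compressed sensing, here is a generic sketch of the basic setting (this is vanilla \ell_1 minimization, not the spatially coupled message-passing scheme from that work, which is what pushes the number of measurements all the way down to \rho n): recover a sparse x \in \mathbb{R}^n from m \ll n random linear measurements y = Ax.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 80, 10                         # dimension, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
y = A @ x_true                                # the m measurements

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])   # basis pursuit
prob.solve()
print("recovery error:", float(np.linalg.norm(x.value - x_true)))   # small if recovery succeeded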

Lately, Florent tells me he has been interested in the physics of matrix factorization and completion problems, and of neural networks. Tomorrow he is going to discuss, from a statistical physics perspective, computational-versus-statistical gaps for a variety of high-dimensional inference problems, addressing the question of why polynomial-time inference algorithms don’t necessarily achieve information-theoretically optimal results.

Nike Sun: Nike is a probabilist and computer scientist at UC Berkeley, before which she was the Schramm postdoctoral fellow at MIT and MSR. Many of you may know her from the tour-de-force work (joint with Jian Ding and Allan Sly) establishing the k-SAT threshold for large k; this was the biggest progress in years on random CSPs.

Her research spans many topics in probability, but tomorrow she is discussing random CSPs, a topic on which she is among the world’s experts. In this area her work has made great strides in rigorous-izing the predictions of statistical physics regarding the geometry of the space of solutions to a random CSP instance. This geometry is rich: at some clause densities random CSPs seem to have well-connected spaces of solutions, while at others the solution space is shattered. In between lies a wealth of phase transitions whose algorithmic implications remain poorly understood (read: a great source of open problems!).
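To get a feel for the kind of threshold being described, here is a toy experiment (mine, brute-forcing tiny instances, nowhere near the regime of the Ding-Sly-Sun result): random 3-SAT formulas go from almost always satisfiable to almost always unsatisfiable as the clause density m/n crosses a threshold (around 4.27 for 3-SAT).

import itertools, random

def random_3sat(n, m, rng):
    # m random clauses over n variables; a literal is (variable, sign)
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), 3)] for _ in range(m)]

def satisfiable(n, clauses):
    # brute force over all 2^n assignments (only sensible for tiny n)
    for assignment in itertools.product((False, True), repeat=n):
        if all(any(assignment[v] == sign for v, sign in clause) for clause in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 12, 20
for density in (3.0, 4.0, 4.3, 5.0):
    m = int(density * n)
    sat = sum(satisfiable(n, random_3sat(n, m, rng)) for _ in range(trials))
    print(f"clause density {density}: {sat}/{trials} random formulas satisfiable")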

Jacob Steinhardt: Jacob is a graduating PhD student at Stanford and a rising star in provable approaches to machine learning. One of his focuses of late has been the design of algorithms for learning problems in the presence of untrustworthy data, model misspecification, and other challenges that the real world imposes on the idealized learning settings we often see here at STOC.

His paper with Moses Charikar and Greg Valiant on list-learning has attracted a lot of attention and sparked several follow-up works at this year’s STOC. In that paper, Jacob and his coauthors realized that the problem of learning the parameters of a distribution when only a tiny fraction (say 0.001 percent) of your samples come from that distribution provides a generalization of, and a useful perspective on, many classic learning problems, like learning mixture models. (In the list-learning setting, one aims to output a short list of candidate parameters such that at least one candidate in the list is close to the parameters of the distribution you wanted to learn.)
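A toy illustration of the list-learning setting (this is only the setting, with a crude histogram-based candidate list; it is not the Charikar-Steinhardt-Valiant algorithm, and I use a much friendlier 5 percent of good samples rather than 0.001 percent): a single estimate such as the overall mean is useless, but a short list of candidates can contain a good one.

import numpy as np

rng = np.random.default_rng(0)
true_mean = 5.0
good = rng.normal(true_mean, 1.0, size=50)    # only 5% of the samples are "real"
junk = rng.normal(0.0, 30.0, size=950)        # the rest come from somewhere else entirely
samples = np.concatenate([good, junk])

print("single estimate (overall mean):", round(float(samples.mean()), 2))  # typically far from 5

# A crude candidate list: histogram the data and report the centers of the densest bins.
# One of the reported candidates should land near the true mean.
counts, bin_edges = np.histogram(samples, bins=100)
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
candidates = centers[np.argsort(counts)[-5:]]
print("candidate list:", np.round(candidates, 2))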

Jacob’s work has explored the notion that polynomial-time tractability of learning problems is intimately related to robustness. A caricature: if any learning problem solvable in polynomial time can also be solved in polynomial time in the presence of some untrustworthy data, then polynomial-time algorithms cannot solve inference problems which are statistically impossible in the face of untrustworthy data. This offers an intriguing explanation for the existence of average-case/statistical problems which are unsolvable by polynomial-time algorithms, in spite of their information-theoretic (i.e. exponential-time) solvability.

I will also give a talk.

See you there!

Edit: corrected some attributions.

Workshops at TheoryFest (including shameless advertisement)

June 27, 2018

Guest post by Sam Hopkins

TheoryFest is in full swing in Los Angeles! There is lots to be written about the wealth of STOC talks, invited papers, keynotes, and (maybe most importantly) the excellent food hall across the street which is swarming with theorists and mathematicians. But for now I want to bring to your attention the workshops planned for Friday.

As always, the schedule is packed with interesting talks! There are workshops on several major themes in theory. Since I helped organize one of them, the workshop on computational thresholds for average-case problems, let me say a bit about it.

Recently we have seen a surge of interest in algorithms and lower bounds for average-case problems. One driver of this surge has been theory’s increasing connection with machine learning, which has suggested a number of high-dimensional and noisy statistical inference problems to study. Another driver has been the substantial progress on long-standing problems regarding random instances—for example, a series of recent works proving many old conjectures on random k-SAT instances, and another series of works analyzing algorithms for community detection and recovery in very sparse random graphs.

A common theme in average-case computational problems is that their statistical difficulty (that is, whether they are solvable by any algorithm at all, regardless of running time) is not predictive of their algorithmic difficulty (that is, whether they are solvable by polynomial-time algorithms).

A now-standard example is the k-community stochastic blockmodel. In this problem, n nodes are randomly partitioned into k communities, and then a sparse random graph is sampled on those vertices by including each edge \{i,j\} with probability (d/n)(1 - \epsilon/k) if i,j are in differing communities, and with probability (d/n)(1 + (1-1/k)\epsilon) otherwise (this produces a graph of average degree d).
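Just to make the model concrete, a quick sketch of sampling from it (my own code, with arbitrary parameter choices):

import numpy as np

rng = np.random.default_rng(1)
n, k, d, eps = 1000, 4, 20.0, 0.8

labels = rng.integers(k, size=n)                    # random partition into k communities
p_same = (d / n) * (1 + (1 - 1 / k) * eps)          # edge probability within a community
p_diff = (d / n) * (1 - eps / k)                    # edge probability across communities

edges = []
for i in range(n):
    for j in range(i + 1, n):
        p = p_same if labels[i] == labels[j] else p_diff
        if rng.random() < p:
            edges.append((i, j))

print("average degree:", 2 * len(edges) / n)        # concentrates around d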

It turns out that so long as d > C k \log k / \epsilon^2 for a big-enough constant C, in exponential time one may recover the underlying communities from the graph (approximately, in the sense that it is possible to partition the graph into k classes such that this partitioning has nonvanishing correlation with the ground-truth partition).

However, it is conjectured that this community recovery is possible in polynomial time if and only if (!!) d > k^2/\epsilon^2. This threshold is conjectured to be sharp, in the sense that if d < 0.99 k^2 / \epsilon^2 no polynomial-time algorithm should exist, while one is known to exist when d > 1.01 k^2 / \epsilon^2.

This kind of threshold phenomenon is ubiquitous for high-dimensional inference problems, and remains largely unexplained. Of course, we are used to algorithmic intractability! But the appearance of intractability for average-case problems seems not to be explained by worst-case theories of hardness (like NP-completeness). And very little is known about which features of an average-case problem are predictive of computational tractability, and which might predict intractability (while in the worst case we know that, say, NP-completeness is an excellent predictor of intractability). The sharpness of a threshold like d = k^2/\epsilon^2 is striking and mysterious in a field where ignoring logarithmic factors is practically an (inter)national pastime.

We will have four talks, spanning algorithms and complexity, with (at least) four perspectives on computational thresholds for average-case problems and related matters. Florent Krzakala will lead off with a statistical-physics perspective on algorithms and hardness for matrix and tensor factorization problems. Nike Sun will give us a taste of the wild world of random constraint satisfaction problems. Jacob Steinhardt will talk about the relationship of outlier robustness and inaccurate/partial problem specification to efficient algorithms for statistical inference. Finally, I will survey some recent developments at the intersection of convex programming, the Sum of Squares method, and high-dimensional inference problems.

I hope to see you there!