
Deep Double Descent (cross-posted on OpenAI blog)

December 5, 2019

By Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever

This is a lightly edited and expanded version of the following post on the OpenAI blog about the following paper. While I usually don’t advertise my own papers on this blog, I thought this might be of interest to theorists, and a good follow up to my prior post. I promise not to make a habit out of it. –Boaz

TL;DR: Our paper shows that double descent occurs in conventional modern deep learning settings: visual classification in the presence of label noise (CIFAR 10, CIFAR 100) and machine translation (IWSLT’14 and WMT’14). As we increase the number of parameters in a neural network, initially the test error decreases, then increases, and then, just as the model is able to fit the train set, it undergoes a second descent, again decreasing as the number of parameters increases. This behavior also extends over train epochs, where a single model undergoes double descent in test error over the course of training. Surprisingly (at least to us!), we show that these phenomena can lead to a regime where “more data hurts”—training a deep network on a larger train set actually performs worse.

Introduction

Open a statistics textbook and you are likely to see warnings against the danger of “overfitting”: If you are trying to find a good classifier or regressor for a given set of labeled examples, you would be well-advised to steer clear of having so many parameters in your model that you are able to completely fit the training data, because you risk not generalizing to new data.

The canonical example for this is polynomial regression. Suppose that we get n samples of the form (x, p(x)+noise) where x is a real number and p(x) is a cubic (i.e. degree 3) polynomial. If we try to fit the samples with a degree 1 polynomial (a linear function), we would get many points wrong. If we try to fit it with just the right degree, we would get a very good predictor. However, as the degree grows, the fit gets worse, until the degree is large enough to fit all the noisy training points, at which point the regressor is terrible, as shown in this figure:

It seems that the higher the degree, the worse things are, but what happens if we go even higher? It seems like a crazy idea: why would we increase the degree beyond the number of samples? But it corresponds to the practice of having many more parameters than training samples in modern deep learning. Just like in deep learning, when the degree is larger than the number of samples, there is more than one polynomial that fits the data, but we choose a specific one: the one found by running gradient descent.

Here is what happens if we do this for degree 1000, fitting a polynomial using gradient descent (see this notebook):

We still fit all the training points, but now we do so in a more controlled way that actually tracks the ground truth quite closely. We see that despite what we learn in statistics textbooks, sometimes overfitting is not that bad, as long as you go “all in” rather than “barely overfitting” the data. That is, overfitting doesn’t hurt us if we take the number of parameters to be much larger than what is needed to just fit the training set — and in fact, as we see in deep learning, larger models are often better.
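For concreteness, here is a minimal sketch of the kind of experiment described above; it is not the linked notebook, and the Legendre feature basis, zero initialization, and step size are assumptions made for numerical stability:

```python
# Minimal sketch (assumptions: Legendre features, zero init, squared loss):
# fit noisy samples of a cubic with a degree-1000 polynomial via gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n, degree = 20, 1000

x = np.sort(rng.uniform(-1, 1, n))
y = x**3 - x + 0.1 * rng.standard_normal(n)            # noisy samples of a cubic

Phi = np.polynomial.legendre.legvander(x, degree)      # n x (degree+1) feature matrix
theta = np.zeros(degree + 1)                           # start from zero

lr = 1.0 / np.linalg.norm(Phi, 2) ** 2                 # step size below 1/L for stability
for _ in range(200_000):                               # full-batch gradient descent
    theta -= lr * (Phi.T @ (Phi @ theta - y))

x_grid = np.linspace(-1, 1, 200)
pred = np.polynomial.legendre.legvander(x_grid, degree) @ theta
print("train MSE:", np.mean((Phi @ theta - y) ** 2))
print("gap to ground truth:", np.mean((pred - (x_grid**3 - x_grid)) ** 2))
```

Starting from zero and running plain gradient descent biases the solution towards the minimum-norm interpolant in this feature basis, which is one reason the high-degree fit can remain well behaved.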

The above is not a novel observation. Belkin et al. called this phenomenon “double descent”, and it goes back to even earlier works. In this new paper we (Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever) extend the prior works and report on a variety of experiments showing that “double descent” is widely prevalent across several modern deep neural networks and for several natural tasks such as image recognition (for the CIFAR 10 and CIFAR 100 datasets) and language translation (for the IWSLT’14 and WMT’14 datasets). As we increase the number of parameters in a neural network, initially the test error decreases, then increases, and then, just as the model is able to fit the train set, it undergoes a second descent, again decreasing as the number of parameters increases. Moreover, double descent also extends beyond the number of parameters to other measures of “complexity”, such as the number of training epochs of the algorithm.

The take-away from our work (and the prior works it builds on) is that neither the classical statisticians’ conventional wisdom that “too large models are worse” nor the modern ML paradigm that “bigger models are always better” always holds. Rather, it all depends on whether you are on the first or second descent. Furthermore, these insights also allow us to generate natural settings in which even the age-old adage of “more data is always better” is violated!

In the rest of this blog post we present a few sample results from this recent paper.

Model-wise Double Descent

We observed many cases in which, just like in the polynomial interpolation example above, the test error undergoes a “double descent” as we increase the complexity of the model. The figure below demonstrates one such example: we plot the test error as a function of the complexity of the model for ResNet18 networks. The complexity of the model is the width of the layers, and the dataset is CIFAR10 with 15% label noise. Notice that the peak in test error occurs around the “interpolation threshold”: when the models are just barely large enough to fit the train set. In all cases we’ve observed, changes which affect the interpolation threshold (such as changing the optimization algorithm, changing the number of train samples, or varying the amount of label noise) also affect the location of the test error peak correspondingly.

We found that the double descent phenomenon is most prominent in settings with added label noise—without it, the peak is much smaller and easy to miss. But adding label noise amplifies this general behavior and allows us to investigate it easily.
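As a concrete illustration of the label-noise setup, here is a minimal sketch; the convention below (with probability p a label is replaced by a uniformly random class) is our assumption of a standard recipe, not code from the paper:

```python
# Sketch: inject p-fraction label noise into a classification training set
# (assumed convention: with probability p, a label becomes a uniformly random class).
import numpy as np

def add_label_noise(labels, p=0.15, num_classes=10, seed=0):
    rng = np.random.default_rng(seed)
    noisy = np.asarray(labels).copy()
    flip = rng.random(len(noisy)) < p                  # which examples to corrupt
    noisy[flip] = rng.integers(0, num_classes, flip.sum())
    return noisy
```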

Sample-Wise Nonmonotonicity

Using the model-wise double descent phenomenon we can obtain examples where training on more data actually hurts. To see this, let’s look at the effect of increasing the number of train samples on the test error vs. model size graph. The below plot shows Transformers trained on a language-translation task (with no added label noise):

On the one hand, (as expected) increasing the number of samples generally shifts the curve downwards towards lower test error. On the other hand, it also shifts the curve to the right: since more samples require larger models to fit, the interpolation threshold (and hence, the peak in test error) shifts to the right. For intermediate model sizes, these two effects combine, and we see that training on 4.5x more samples actually hurts test performance.

Epoch-Wise Double Descent

There is a regime where training longer reverses overfitting. Let’s look closer at the experiment from the “Model-wise Double Descent” section, and plot Test Error as a function of both model size and number of optimization steps. In the plot below to the right, each column tracks the Test Error of a given model over the course of training. The top horizontal dotted line corresponds to the double descent of the first figure. But we can also see that for a fixed large model, as training proceeds the test error goes down, then up, and then down again—we call this phenomenon “epoch-wise double descent.”

Moreover, if we plot the Train error of the same models and the corresponding interpolation contour (dotted line) we see that it exactly matches the ridge of high test error (on the right).

In general, the peak of test error appears systematically when models are just barely able to fit the train set.
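For readers who want to reproduce the shape of such a curve, here is a toy sketch of logging test error over training for an over-parameterized model trained on noisy labels; the synthetic data, the two-layer network, and the hyperparameters are all assumptions, and whether the second descent is visible depends on the model size, noise level, and training time:

```python
# Toy sketch: track test error per "epoch" for an over-parameterized model
# trained on noisy labels -- the kind of curve behind epoch-wise double descent.
import torch
from torch import nn

torch.manual_seed(0)
d, n_train, n_test, noise = 20, 200, 2000, 0.15

def make_data(n, noise_level):
    X = torch.randn(n, d)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()               # clean ground-truth rule
    flip = torch.rand(n) < noise_level
    y[flip] = torch.randint(0, 2, (int(flip.sum()),))      # corrupt a noise_level fraction
    return X, y

X_tr, y_tr = make_data(n_train, noise)
X_te, y_te = make_data(n_test, 0.0)                        # clean test set

model = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(1, 2001):
    opt.zero_grad()
    loss_fn(model(X_tr), y_tr).backward()
    opt.step()
    if epoch % 100 == 0:
        with torch.no_grad():
            test_err = (model(X_te).argmax(dim=1) != y_te).float().mean().item()
        print(f"epoch {epoch:5d}   test error {test_err:.3f}")
```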

Our intuition is that for models at the interpolation threshold, there is effectively only one model that fits the train data, and forcing it to fit even slightly-noisy or mis-specified labels will destroy its global structure. That is, there are no “good models”, which both interpolate the train set, and perform well on the test set. However in the over-parameterized regime, there are many models that fit the train set, and there exist “good models” which both interpolate the train set and perform well on the distribution. Moreover, the implicit bias of SGD leads it to such “good” models, for reasons we don’t yet understand.

The above intuition is theoretically justified for linear models, via a series of recent works including [Hastie et al.] and [Mei-Montanari]. We leave fully understanding the mechanisms behind double descent in deep neural networks as an important open question.


Commentary: Experiments for Theory

The experiments above are especially interesting (in our opinion) because of how they can inform ML theory: any theory of ML must be consistent with “double descent.” In particular, one ambitious hope for what it means to “theoretically explain ML” is to prove a theorem of the form:

“If the distribution satisfies property X and architecture/initialization satisfies property Y, then SGD trained on ‘n’ samples, for T steps, will have small test error with high probability”

For values of X, Y, n, T, “small” and “high” that are used in practice.

However, these experiments show that these properties are likely more subtle than we may have hoped for, and must be non-monotonic in certain natural parameters.

This rules out even certain natural “conditional conjectures” that we may have hoped for, for example the conjecture that

“If SGD on a width W network works for learning from ‘n’ samples from distribution D, then SGD on a width W+1 network will work at least as well”

Or the conjecture

“If SGD on a certain network and distribution works for learning with ‘n’ samples, then it will work at least as well with n+1 samples”

It also appears to conflict with a “2-phase” view of the trajectory of SGD, as an initial “learning phase” and then an “overfitting phase” — in particular, because the overfitting is sometimes reversed (at least, as measured by test error) by further training.

Finally, the fact that these phenomena are not specific to neural networks, but appear to hold fairly universally for natural learning methods (linear/kernel regression, decision trees, random features) gives us hope that there is a deeper phenomenon at work, and we are yet to find the right abstraction.

We especially thank Mikhail Belkin and Christopher Olah for helpful discussions throughout this work. The polynomial example is inspired in part by experiments in [Muthukumar et al.].

HALG 2020 call for nominations (guest post by Yossi Azar)

November 27, 2019

[Guest post by Yossi Azar – I attended HALG once and enjoyed it quite a lot; I highly recommend people make such nominations –Boaz]

Call for Invited Talk Nominations: 5th Highlights of Algorithms conference (HALG 2020)

ETH Zurich, June 3-5, 2020

http://2020.highlightsofalgorithms.org/

The HALG 2020 conference seeks high-quality nominations for invited talks that will highlight recent advances in algorithmic research. Similarly to previous years, there are two categories of invited talks:

A. survey (60 minutes): a survey of an algorithmic topic that has seen exciting developments in the last couple of years.

B. paper (30 minutes): a significant algorithmic result appearing in a paper in 2019 or later.

To nominate, please email halg2020.nominations@gmail.com the following information:

  1. Basic details: speaker name + topic (for survey talk) or paper’s title, authors, conference/arxiv + preferred speaker (for paper talk).
  2. Brief justification: Focus on the benefits to the audience, e.g., quality of results, importance/relevance of topic, clarity of talk, speaker’s presentation skills.

All nominations will be reviewed by the Program Committee (PC) to select speakers that will be invited to the conference.

Nominations deadline: December 20, 2019 (for full consideration).

Harvard opportunity: lecturing / advising position

November 23, 2019

Harvard Computer Science is seeking a Lecturer/Assistant Director of Undergraduate Studies. A great candidate would be someone passionate about teaching and mentoring and excited to build a diverse and inclusive Undergraduate Computer Science community at Harvard. The position requires a Ph.D and is open to all areas of computer science and related fields, but of course personally I would love to have a theorist fill this role.

Key responsibilities are:

* Teach (or co-teach) one undergraduate Computer Science course per semester.

* Join and help lead the Computer Science Undergraduate Advising team (which includes mentoring and advising undergraduate students and developing materials, initiatives, and events to foster a welcoming and inclusive Harvard Computer Science community.)

The job posting with all details is at https://tiny.cc/harvardadus

For any questions about this position, feel free to contact me or Steve Chong (the co-directors of undergraduate studies for CS at Harvard) at cs-dus at seas.harvard.edu

Puzzles of modern machine learning

November 15, 2019

It is often said that "we don’t understand deep learning" but it is not as often clarified what exactly it is that we don’t understand. In this post I try to list some of the "puzzles" of modern machine learning, from a theoretical perspective. This list is neither comprehensive nor authoritative. Indeed, I only started looking at these issues last year, and am very much in the position of not yet fully understanding the questions, let alone potential answers. On the other hand, at the rate ML research is going, a calendar year corresponds to about 10 "ML years"…

Machine learning offers many opportunities for theorists; there are many more questions than answers, and it is clear that a better theoretical understanding of what makes certain training procedures work or fail is desperately needed. Moreover, recent advances in software frameworks have made it much easier to test out intuitions and conjectures. While in the past running training procedures might have required a Ph.D in machine learning, recently the "barrier to entry" was reduced first to undergraduates, then to high school students, and these days it’s so easy that even theoretical computer scientists can do it 🙂

To set the context for this discussion, I focus on the task of supervised learning. In this setting we are given a training set S of n examples of the form (x_i,y_i) where x_i \in \mathbb{R}^d is some vector (think of it as the pixels of an image) and y_i \in \{ \pm 1 \} is some label (think of y_i as equaling +1 if x_i is the image of a dog and -1 if x_i is the image of a cat). The goal in supervised learning is to find a classifier f such that f(x)=y will hold for many future samples (x,y).

The standard approach is to consider some parameterized family of classifiers, where for every vector \theta \in \mathbb{R}^m of parameters, we associate a classifier f_\theta :\mathbb{R}^d \rightarrow \{ \pm 1 \}. For example, we can fix a certain neural network architecture (depth, connections, activation functions, etc.) and let \theta be the vector of weights that characterizes every network in this architecture. People then run some optimization algorithm such as stochastic gradient descent to find the vector \theta \in \mathbb{R}^m that minimizes a loss function L_S(\theta). This loss function can be the fraction of labels that f_\theta gets wrong on the set S, or a more continuous loss that takes into account the confidence level or other parameters of f_\theta as well. By now this general approach has been successfully applied to many classification tasks, in many cases achieving near-human to super-human performance. In the rest of this post I want to discuss some of the questions that arise when trying to obtain a theoretical understanding of both the powers and the limitations of the above approach. I focus on deep learning, though there are still some open questions even for over-parameterized linear regression.
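To make the notation concrete, here is a minimal sketch of this recipe for the simplest parameterized family, a linear classifier f_\theta(x) = sign(\langle \theta, x \rangle), trained by stochastic gradient descent on a logistic surrogate of the 0/1 loss; the toy synthetic data and the hyperparameters are assumptions, not anything from the post:

```python
# Minimal sketch of the recipe above: a linear classifier f_theta(x) = sign(<theta, x>)
# trained by SGD on a logistic surrogate of the empirical 0/1 loss L_S(theta).
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 500
theta_star = rng.standard_normal(d)                    # "ground truth" direction
X = rng.standard_normal((n, d))
y = np.sign(X @ theta_star)                            # labels in {+1, -1}

theta = np.zeros(d)
lr = 0.1
for step in range(20_000):
    i = rng.integers(n)                                # sample one training example
    margin = np.clip(y[i] * (X[i] @ theta), -30, 30)   # clip to keep exp() stable
    theta -= lr * (-y[i] * X[i] / (1 + np.exp(margin)))  # grad of log(1 + exp(-margin))

train_err = np.mean(np.sign(X @ theta) != y)           # empirical 0/1 loss L_S(theta)
print("train error:", train_err)
```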

The generalization puzzle

The approach outlined above has been well known and analyzed for many decades in the statistical learning literature. There are many cases where we can prove that a classifier obtained in this way has a small generalization gap, in the sense that if the training set S was obtained by sampling n independent and identically distributed samples from a distribution D, then the performance of a classifier f_\theta on new samples from D will be close to its performance on the training set.

Ultimately, these results all boil down to the Chernoff bound. Think of the random variables X_1,\ldots,X_n where X_i=1 if the classifier makes an error on the i-th training example. The Chernoff bound tells us that the probability that \sum X_i deviates by more than \epsilon n from its expectation is something like \exp(-\epsilon^2 n), and so as long as the total number of classifiers is less than 2^k for k < \epsilon^2 n, we can use a union bound over all possible classifiers to argue that if we make a p fraction of errors on the training set, the probability we make an error on a new example is at most p+\epsilon. We can of course "bunch together" classifiers that behave similarly on our distribution, and so it is enough if there are at most 2^{\epsilon^2 n} of these equivalence classes. Another approach is to add a "regularizing term" R(\theta) to the objective function, which amounts to restricting attention to the set of all classifiers f_\theta such that R(\theta) \leq \mu for some parameter \mu. Again, as long as the number of equivalence classes in this set is less than 2^{\epsilon^2 n}, we can use this bound.
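Spelling out the union-bound calculation just sketched (constants elided; the bound of 2^k on the number of classifiers, or equivalence classes of classifiers, is the assumption):

\Pr\left[ \exists\, f_\theta : \left| \tfrac{1}{n}\sum_{i=1}^n X_i(\theta) - \mathbb{E}[X_1(\theta)] \right| > \epsilon \right] \;\leq\; 2^k \cdot 2e^{-2\epsilon^2 n},

which is vanishingly small whenever k \ll \epsilon^2 n, so in that regime the train error of every classifier in the family is within \epsilon of its test error simultaneously.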

To a first approximation, the number of classifiers (even after "bunching together") is roughly exponential in the number m of parameters, and so these results tell us that as long as the number m of parameters is smaller than the number of examples, we can expect to have a small generalization gap and can infer future performance (known as "test performance") from the performance on the set S (known as "train performance"). Once the number of parameters m becomes close to or even bigger than the number of samples n, we are in danger of "overfitting", where we could have excellent train performance but terrible test performance. Thus according to classical statistical learning theory, the ideal number of parameters would be some number between 0 and the number of samples n, with the precise value governed by the so-called "bias variance tradeoff".

This is a beautiful theory, but unfortunately the classical theorems yield vacuous results in the realm of modern machine learning, where we often train networks with millions of parameters on mere tens of thousands of examples. Moreover, Zhang et al showed that this is not just a question of counting parameters better. They showed that modern deep networks can in fact "overfit" and achieve 100% success on the training set even if you give them random or arbitrary labels.

The results above in particular show that we can find classifiers that perform great on the training set but terribly on future tests, as well as classifiers that perform terribly on the training set but pretty well on future tests. Specifically, consider an architecture that has the capacity to fit 20n arbitrary labels, and suppose that we train it on a set S of n examples. Then we can find a setting of parameters \theta that both fits the training set exactly (i.e., satisfies f_\theta(x)=y for all (x,y)\in S) and also satisfies the additional constraint that f_\theta(x)= -y (i.e., the negation of the label y) for every (x,y) in some additional set T of 19n pairs. (The set T is not part of the actual training set, but rather an "auxiliary set" that we simply use for the sake of constructing this counterexample; note that we can use T as a means to generate the initial network, which can then be fed into standard stochastic gradient descent on the set S.) The network f_\theta fits its training set perfectly, but since it effectively corresponds to training with 95% label noise, it will perform worse than even a coin toss.

In an analogous way, we can find parameters \theta that completely fail on the training set, but correctly fit the additional "auxiliary set" T. This corresponds to the case of standard training with 5% label noise, which typically yields about 95% of the performance on the noiseless distribution.
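A schematic of this construction (placeholder random data; in the actual argument S and T would come from the real data distribution, and the interpolating network would be found by training):

```python
# Schematic: build a combined training set that fits S exactly while also fitting
# an auxiliary set T of size 19n with negated labels (behaving like ~95% label noise).
import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 1_000

X_S, y_S = rng.standard_normal((n, d)), np.sign(rng.standard_normal(n))
X_T, y_T = rng.standard_normal((19 * n, d)), np.sign(rng.standard_normal(19 * n))

X_fit = np.concatenate([X_S, X_T])
y_fit = np.concatenate([y_S, -y_T])        # negate the labels on the auxiliary set T

# A model with capacity to fit 20n arbitrary labels can interpolate (X_fit, y_fit):
# it is then perfect on S, yet its behavior off S is dictated by the flipped labels.
print(X_fit.shape, y_fit.shape)            # (20n, d) and (20n,)
```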

The above insights break the separation of concerns, or separation of computational problems from algorithms, which we theorists like so much. Ideally, we would like to phrase the "machine learning problem" as a well defined optimization objective, such as finding, given a set S, the vector \theta \in \mathbb{R}^m that minimizes L_S(\theta). Once phrased in this way, we can try to find an algorithm that achieves this goal as efficiently as possible.

Unfortunately, modern machine learning does not currently lend itself to such a clean partition. In particular, since not all optima are equally good, we don’t actually want to solve the task of minimizing the loss function in a "black box" way. In fact, many of the ideas that make optimization faster, such as acceleration, lower learning rates, second-order methods and others, yield worse generalization performance. Thus, while the objective function is somewhat correlated with generalization performance, it is neither necessary nor sufficient for it. This is a clear sign that we don’t really understand what makes machine learning work, and there is still much left to discover. I don’t know what machine learning textbooks in the 2030’s will contain, but my guess is that they would not prescribe running stochastic gradient descent on one of these loss functions. (Moritz Hardt counters that what we teach in ML today is not that far from the 1973 book of Duda and Hart, and that by some measures ML moved slower than other areas of CS.)

The generalization puzzle of machine learning can be phrased as the question of understanding what properties of procedures that map a training set S into a classifier \theta lead to good generalization performance with respect to certain distributions. In particular, we would like to understand what properties of natural distributions and of stochastic gradient descent make the latter into such a map.

The computational puzzle

Yet another puzzle in modern machine learning arises from the fact that we are able to find the minimum of L_S(\theta) in the first place. A priori this is surprising since, apart from very special cases (e.g., linear regression with a square loss), the function \theta \mapsto L_S(\theta) is in general non-convex. Indeed, for almost any natural loss function, the problem of finding \theta that minimizes L_S(\theta) is NP-hard. However, if we look at the computational question in the context of the generalization puzzle above, it might not be as mysterious. As we have seen, the fact that the \theta we output is a global minimizer (or close to a minimizer) of L_S(\cdot) is in some sense accidental, and by far not the most important property of \theta. There are many minima of the loss function that generalize badly, and many non-minima that generalize well.

So perhaps the right way to phrase the computational puzzle is as

“How come we are able to use stochastic gradient descent to find the vector \theta that is output by stochastic gradient descent?”

which, when phrased like that, doesn’t seem like much of a puzzle after all.

The off-distribution performance puzzle

In the supervised learning problem, the training samples S are drawn from the same distribution as the final test sample. But in many applications of machine learning, classifiers are expected to perform on samples that arise from very different settings. The image that the camera of a self-driving car observes is not drawn from ImageNet, and yet it still needs to (and often can) detect whether or not it is seeing a dog or a cat (at which point it will brake or accelerate, depending on whether the programmer was a dog or a cat lover). Another insight into this question comes from a recent work of Recht et al. They generated a new set of images that is very similar to the original ImageNet test set, but not identical to it. One can think of it as generated from a distribution D' that is close to but not the same as the original distribution D of ImageNet. They then checked how well neural networks that were trained on the original ImageNet distribution D perform on D'. They saw that while these networks performed significantly worse on D' than they did on D, their performance on D' was highly correlated with their performance on D. Hence doing better on D did correspond to being better in a way that carried over to the (very closely related) D'. (However, the networks did perform worse on D', so off-distribution performance is by no means a full success story.)

Coming up with a theory that can supply some predictions for learning in a way that is not as tied to the particular distribution is still very much open. I see it as somewhat akin to finding a theory for the performance of algorithms that is somewhere between average-case complexity (which is highly dependent on the distribution) and worst-case complexity (which does not depend on the distribution at all, but is not always achievable).

The robustness puzzle

If the previous puzzles were about understanding why deep networks are surprisingly good, the next one is about understanding why they are surprisingly bad. Images of physical objects have the property that if we modify them in some ways, such as perturbing a small number of pixels, shifting them by a few shades, or rotating them by a small angle, they still correspond to the same object. Deep neural networks do not seem to "pick up" on this property. Indeed, there are many examples of how tiny perturbations can cause a neural net to think that one image is another, and people have even printed a 3D turtle that most modern systems recognize as a rifle. (See this excellent tutorial, though note that an "ML decade" has already passed since it was published.) This "brittleness" of neural networks can be a significant concern when we deploy them in the wild. (Though perhaps mixing up turtles and rifles is not so bad: I can imagine some people who would normally resist regulations to protect the environment but would support them if they confused turtles with guns.) Perhaps one reason for this brittleness is that neural networks can be thought of as a way of embedding a set of examples in dimension d into dimension \ell (where \ell is the number of neurons in the penultimate layer) in a way that makes the positive examples linearly separable from the negative examples. Amplifying small differences can help in achieving such a separation, even if it hurts robustness.

Recent works have attempted to rectify this by using variants of the loss function where L_S(\theta) corresponds to the maximum error under all possible such perturbations of the data. A priori you would think that while robust training might come at a computational cost, statistically it would be a "win-win", with the resulting classifiers not only being more robust but also overall better at classifying. After all, we are providing the training procedure with the additional information (i.e., "updating its prior") that the label should be unchanged by certain transformations, which should be equivalent to supplying it with more data. Surprisingly, the robust classifiers currently perform worse than standard trained classifiers on unperturbed data. Ilyas et al argued that this may be because even if humans ignore information encoded in, for example, whether the intensity level of a pixel is odd or even, it does not mean that this information is not predictive of the label. Suppose that (with no basis whatsoever – just as an example) cat owners are wealthier than dog owners and hence cat pictures tend to be taken with higher quality lenses. One could imagine that a neural network would pick up on that, and use some of the fine grained information in the pixels to help in classification. When we force such a network to be robust it would perform worse. The Distill journal published six discussion pieces on the Ilyas et al paper. I like the idea of such "paper discussions" very much and hope it catches on in machine learning and beyond.
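One standard way to write down such a robust objective (the choice of an \ell_\infty ball of radius \epsilon as the set of allowed perturbations is an assumption made for concreteness):

L_S^{\mathrm{rob}}(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \max_{\|\delta\|_\infty \leq \epsilon} \mathrm{loss}\big(f_\theta(x_i + \delta),\, y_i\big),

and robust training runs (stochastic) gradient descent on L_S^{\mathrm{rob}} instead of L_S, typically approximating the inner maximum by a few steps of projected gradient ascent.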

The interpretability puzzle

Deep neural networks are inspired by our brain, and it is tempting to try to understand their internal structure just like we try to understand the brain and see if it has a "grandmother neuron". For example, we could try to see if there is a certain neuron (i.e., gate) in a neural network that "fires" only when it is fed images with certain high level features (or more generally find vectors that have large correlation with the state at a certain layer only when the image has some features). This is also of practical importance, as we increasingly use classifiers to make decisions such as whether to approve or deny bail, whether to prescribe to a patient treatment A or B, or whether a car should steer left or right, and we would like to understand what is the basis for such decisions. There are beautiful visualizations of neural networks’ decisions and internal structures, but given the robustness puzzle above, it is unclear if these really capture the decision process. After all, if we could change the classification from a cat to a dog by perturbing a tiny number of pixels, in what sense can we explain why the network made one decision or the other?

The natural distributions puzzle

Yet another puzzle (pointed out to me by Ilya Sutskever) is to understand what it is about "natural" distributions such as images, text, etc., that makes them so amenable to learning via neural networks, even though such networks can have a very hard time learning even simple concepts such as parities. Perhaps this is related to the "noise robustness" of natural concepts, which is related to being correlated with low degree polynomials. Another suggestion could be that, at least for text, human languages are implicitly designed to fit neural networks. Perhaps on some other planets there are languages where the meaning of a sentence completely changes depending on whether it has an odd or an even number of letters…

Summary

The above are just a few puzzles that modern machine learning offers us. Not all of those might have answers in the form of mathematical theorems, or even well stated conjectures, but it is clear that there is still much to be discovered, and plenty of research opportunities for theoretical computer scientists. In this post I focused on supervised learning, where at least the problem is well defined, but there are other areas of machine learning, such as transfer learning and generative modeling, where we don’t even yet know how to phrase the computational task, let alone prove that any particular procedure solves it. In several ways, the state of machine learning today seems to me similar to the state of cryptography in the late 1970’s. After the discovery of public key cryptography, researchers had highly promising techniques and great intuitions, but still did not really understand even what security means, let alone how to achieve it. In the decades since, cryptography has turned from an art into a science, and I hope and believe the same will happen to machine learning.

Acknowledgements: Thanks to Preetum Nakkiran, Aleksander Mądry, Ilya Sutskever and Moritz Hardt for helpful comments. (In particular, I dropped an interpretability experiment suggested in an earlier version of this post since Moritz informed me that several similar experiments have been done.) Needless to say, none of them is responsible for any of the speculations and/or errors above.

Rabin postdoc fellowship

November 8, 2019

Hi, once again it is the time of the year to advertise the Michael O. Rabin postdoctoral fellowship at Harvard, see https://toc.seas.harvard.edu/rabin-postdoc for more details. The deadline to apply is December 2, 2019. For any questions please email theory-postdoc-apply (at) seas dot harvard dot edu

Boaz’s inferior classical inferiority FAQ

October 24, 2019

(For better info, see Scott’s Supreme Quantum Superiority FAQ and also his latest post on the Google paper; also this is not really an FAQ but was inspired by a question about the Google paper from a former CS 121 student)

“Suppose aliens invade the earth and threaten to obliterate it in a year’s time unless human beings can find the Ramsey number for red five and blue five. We could marshal the world’s best minds and fastest computers, and within a year we could probably calculate the value. If the aliens demanded the Ramsey number for red six and blue six, however, we would have no choice but to launch a preemptive attack.”

Paul Erdős (as quoted by Graham and Spencer, 1990, hat tip: Lamaze Tishallishmi)

In a Nature paper published this week, a group of researchers from John Martinis’s lab at Google announced arguably the first demonstration of “quantum supremacy” – a computational task carried out by a 53 qubit quantum computer that would require a prohibitive amount of time to simulate classically.

Google’s calculations of the “classical computation time” might have been overly pessimistic (from the classical point of view), and there has been work from IBM as well as some work of Johnnie Gray suggesting that there are significant savings to be made. Indeed, given the lessons that we learned from private key cryptography, where techniques such as linear and differential cryptanalysis were used to “shave factors from exponents”, we know that even if a problem requires exponential time in general, this does not mean that by being very clever we can’t make significant savings over the naive brute force algorithm. This holds doubly so in this case, where, unlike the designers of block ciphers, the Google researchers were severely constrained by factors of geometry and the kind of gates they can reliably implement.

I would not be terribly surprised if we see more savings and even an actual classical simulation of the same sampling task that Google achieved. In fact, I very much hope this happens, since it will allow us to independently verify the reliability of Google’s chip and whether it did in fact sample from the distribution it is supposed to have sampled from (or at least rule out some “null hypothesis”). But this would not change the main point that the resources for classical simulation, as far as we know, scale exponentially with the number of qubits and their quality. While we could perhaps with great effort simulate a 53 qubit, depth 20 circuit classically, once we reach something like 100 qubits and comparable depth, all current approaches will be hopelessly behind.

In the language of my essay on quantum skepticism, I think this latest result, and the rest of the significant experimental progress that has been going on, all but rules out the possibility of “Skepticland” where there would be some fundamental physical reason why it is not possible to build quantum computers that offer exponential advantage in the amount of resources to achieve certain tasks over classical computers.

While the worlds of “Popscitopia” (quantum computers can do everything) and “Classicatopia” (there is an efficient classical algorithm to simulate BQP) remain mathematical possibilities (just as P=NP is), most likely we live in “Superiorita”, where quantum computers do offer exponential advantage for some computational problems.

Some people question whether these kinds of “special purpose” devices, which might be very expensive to build, are worth the investment. First of all (and most importantly for me), as I argued in my essay, exploring the limits of physically realizable computation is a grand scientific goal in its own right, worthy of investment regardless of applications. Second, technology is now a 3.8 trillion dollar per year industry, and quantum computers are in a very real sense the first qualitatively different computing devices since the days of Babbage and Turing. Spending a fraction of a percent of the industry’s worth to the economy on exploring the potential for quantum computing seems like a good investment, even if there will be no practical application in the next decade or two. (By the same token, spending a fraction of a percent on exploring algorithm design and the limitations of classical algorithms is a very good investment as well.)

Is quantum supremacy here?

September 23, 2019

See Scott Aaronson’s blog. It seems like researchers in John Martinis’s group at Google might have managed to demonstrate that a quantum computer can produce samples passing a certain statistical test for which we know of no efficient classical algorithm that can do the same.

Of course I can’t help but posting again the fake nytimes headline I produced for my 2016 crypto course when I wanted to motivate the study of so called “quantum-resistant cryptography”: