
Black holes, paradoxes, and computational complexity

August 22, 2018

(Thanks so much to Scott Aaronson for giving me many pointers, insights, explanations, and corrections that greatly improved this post. As I’m a beginner to physics, the standard caveat holds doubly here: Scott is by no means responsible for any of my remaining technical mistakes and philosophical misconceptions.)

One of the interesting features of physics is the prevalence of “thought experiments”, including Maxwell’s demon, Einstein’s Train, Schrödinger’s cat, and many more. One could think that these experiments are merely “verbal fluff” which obscures the “real math” but there is a reason that physicists return time and again to these types of mental exercises. In a nutshell, this is because while physicists use math to model reality, the mathematical model is not equal to reality.

For example, in the early days of quantum mechanics, several calculations of energy shifts seemed to give infinite answers. While initially this was viewed as a sign that something was deeply wrong with quantum mechanics, ultimately it turned out that these infinities canceled each other out, as long as one only tried to compute observable quantities. One lesson that physicists drew from this is that while such mathematical inconsistencies may (and in this case quite possibly do) indicate some issue with a theory, they are not a reason to discard it. It is OK if a theory involves mathematical steps that do not make sense, as long as this does not lead to an observable paradox: i.e., an actual “thought experiment” with a nonsensical outcome.

A priori, this seems rather weird. An outsider impression of the enterprise of physics is that it is all about explaining the behavior of larger systems in terms of smaller parts. We explain materials by molecules, molecules by atoms, and atoms by elementary particles. Every term in our mathematical model is supposed to correspond to something “real” in the world.

However, with modern physics, and in particular quantum mechanics, this connection breaks down. In quantum mechanics we model the state of the world using a vector (or “wave function”) but the destructiveness of quantum measurements tells us that we can never know all the coordinates of this vector. (This is also related to the so called “uncertainty principle”.) While physicists and philosophers can debate whether these wave functions “really exist”, their existence is not the reason why quantum mechanics is so successful. It is successful because these wave functions yield a mathematically simple model to predict observations. Hence we have moved from trying to explain bigger physical systems in terms of smaller physical systems to trying to explain complicated observations in terms of simpler mathematical models. (Indeed the focus has moved from “things” such as particles to concepts such as forces and symmetries as the most fundamental notions.) These simpler models do not necessarily correspond to any real physical entities that we’d ever be able to observe. Hence such models can in principle contain weird things such as infinite quantities, as long as these don’t mess up our predictions for actual observations.

Nevertheless, there are still real issues in physics that people have not been able to settle. In particular the so called “standard model” uses quantum mechanics to explain the strong force, the weak force, and the electromagnetic force, which dominate over short (i.e., subatomic) distances, but it does not incorporate the force of gravity. Gravity is explained by the theory of general relativity which is inconsistent with quantum mechanics but is predictive for phenomena over larger distances.

By and large physicists believe that quantum mechanics will form the basis for a unified theory, which will involve incorporating gravity by putting general relativity on quantum mechanical foundations. One of the most promising approaches in this direction is known as the AdS/CFT correspondence of Maldacena, which we describe briefly below.
Alas, in 2012, Almheiri, Marolf, Polchinski, and Sully (AMPS) described a thought experiment, known as the “firewall paradox”, that exposed a significant issue with any quantum-mechanical description of gravity, including the AdS/CFT correspondence. Harlow and Hayden (see also chapter 6 of Aaronson’s notes and this overview by Susskind) proposed a way to resolve this paradox using computational complexity.

In this post I will briefly discuss these issues. Hopefully someone in Tselil’s and my upcoming seminar will present this in more detail and also write a blog post about it.

The bulk/boundary correspondence

Edwin Abbott’s 1884 novel “Flatland” describes a world in which people live in only two dimensions. At some point a sphere visits this world and opens the eyes of one of its inhabitants (the narrator, who is a square) to the fact that his two-dimensional world was merely an illusion and the “real world” actually has more dimensions.

 


Cover of first edition of “Flatland”. Image from Wikipedia, *EC85 Ab264 884f, Houghton Library, Harvard University.

 

However, modern physics suggests that things might be the other way around: we might actually be living in flatland ourselves. That is, it might be that the true description of our world has one less spatial dimension than what we perceive. For example, though we think we live in three dimensions, perhaps we are merely shadows (or a “hologram”) of a two dimensional description of the world. One can ask: how could this be? After all, if our world is “really” two dimensional, what happens when I climb the stairs in my house? The idea is that the geometry of the two-dimensional world is radically different, but it contains all the information that would allow us to decode the state of our three dimensional world. You can imagine that when I climb the stairs in my house, my flatland analog goes from the first floor to the second floor in (some encoding of) the two-dimensional blueprint of my house. (Perhaps this lower-dimensional representation is the reason the Wachowskis called their movie “The Matrix” as opposed to “The Tensor”?)

 

The main idea is that in this “flat” description, gravity does not really exist and physics has a pure quantum mechanical description which is scale free in the sense that the theory is the same independently of distance. Gravity and our spacetime geometry emerge in our world via the projection from this lower dimensional space. (This projection is supposed to give rise to some kind of string theory.) As far as I can tell, at the moment physicists can only perform this projection (and even this at a rather heuristic level) under the assumption that our universe is contracting, or in physics terminology is an “anti de-Sitter (AdS) space”. This is the assumption that the geometry of the universe is hyperbolic and hence one can envision spacetime as being bounded in some finite region of space: some kind of a d+1 dimensional cylinder that has a d-dimensional boundary. The idea is that all the information on what’s going on in the inside or bulk of the cylinder is encoded in this boundary. One caveat is that our physical universe is actually expanding rather than contracting, but as the theory is hard enough to work out for a contracting space, at the moment physicists sensibly focus on this more tractable setting. Since the quantum mechanical theory on the boundary is scale free (and also rotation invariant) it is known as a Conformal Field Theory (CFT). Thus this one-to-one mapping between the boundary and the bulk is also known as the “AdS/CFT correspondence”.

The firewall paradox

If this correspondence can be carried out, then in terms of information it would be possible to describe the universe in purely quantum mechanical terms. One can imagine that the universe starts at some quantum state |x_0 \rangle, and at each step in time progresses to the state |x_{i+1} \rangle = U|x_i \rangle, where U is some unitary transformation.
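Here is a minimal numpy sketch (my own toy illustration, nothing specific to gravity) of this picture: the “laws of physics” are a fixed unitary U, the state evolves by repeatedly applying U, and since U is invertible no information about the initial state is ever lost.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 2 ** 3  # a toy "universe" of 3 qubits

# A random unitary U (from the QR decomposition of a random complex matrix),
# standing in for one time step of the dynamics.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(M)

# Start in some state |x_0> and evolve: |x_{i+1}> = U |x_i>.
x0 = np.zeros(dim, dtype=complex)
x0[0] = 1.0
x = x0
for _ in range(5):
    x = U @ x

# Unitarity means the evolution is reversible: applying U^dagger five times
# recovers |x_0> exactly, so no information was lost along the way.
recovered = x
for _ in range(5):
    recovered = U.conj().T @ recovered
print(np.allclose(recovered, x0))  # True
```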

In particular this means that information is never lost. However, black holes pose a conundrum to this view since they seem to swallow all information that enters them. Recall that the “escape velocity” of Earth – the speed needed to escape the gravitational field and go to space – is about 25,000 mph or Mach 33. In a black hole the “escape velocity” is the speed of light, which means that nothing, not even light, can escape it. More specifically, there is a certain region in spacetime which corresponds to the event horizon of a black hole. Once you are inside this event horizon you have passed the point of no return, since even if you travel at the speed of light, you will not be able to escape. Though it might take a very long time, eventually you will perish in the black hole’s so called “singularity”.

 


New Yorker magazine, August 27, 2018

 

Entering the event horizon should not feel particularly special (a condition physicists colorfully refer to as “no drama”). Indeed, as far as I know, it is theoretically possible that 10 years from now a black hole will be created in our solar system with a radius larger than 100 light years. If this future event happens, it means that we are already inside a black hole’s event horizon even though we don’t know it.

The above seems to mean that information that enters the black hole is irrevocably lost, contradicting unitarity. However, physicists now believe that through a phenomenon known as Hawking radiation black holes might actually emit the information that was contained in them. That is, if the n qubits that enter the event horizon are in the state |x\rangle then (up to a unitary transformation) the qubits that are emitted in the radiation would be in the state |x \rangle as well, and hence no information is lost. Indeed, Hawking himself conceded the bet he made with Preskill on information loss.
Nevertheless, there is one fly in this ointment. If we drop n qubits in the state |x\rangle into this black hole, then they are eventually radiated (in the same state, up to an invertible transformation), but the original n qubits never come out. (It is a black hole after all.) Since we now have two copies of these qubits (one inside the black hole and one outside it), this seems to violate the famous “no cloning principle” of quantum mechanics, which says that you can’t copy a qubit. Luckily however, this seemed to be one more of those cases where an issue with the math could never affect an actual observer. The reason is that an observer inside the black hole event horizon can never come out, while an observer outside can never peer inside. Thus, even if the no cloning principle is violated in our mathematical model of the whole universe, no such violation would be seen by either an outside or an inside observer. In fact, even if Alice – a brave observer outside the event horizon – obtained the state |x\rangle of the Hawking radiation and then jumped with it into the event horizon so that she could see a violation of the no-cloning principle, it wouldn’t work. The reason is that by the time all the n qubits are radiated, the black hole has fully evaporated and inside the black hole the original qubits have already entered the singularity. Hence Alice would not be able to “catch the black hole in the act” of cloning qubits.
What AMPS noticed is that a more sophisticated (yet equally brave) observer could actually obtain a violation of quantum mechanics. The idea is the following. Alice will wait until almost all (say 99 percent) of the black hole has evaporated, which means that at this point she can observe 0.99n of the qubits of the Hawking radiation |R \rangle, while there are still about 0.01n qubits inside the event horizon that have not yet reached the singularity. So far, this does not seem to be any violation of the no cloning principle, but it turns out that entanglement (which you can think of as the quantum analog of mutual information) plays a subtle role. Specifically, for information to be preserved the radiation must be in a highly entangled state, which means in particular that if we look at the qubit |A \rangle that has just been radiated from the event horizon, then it will be highly entangled with the 0.99n qubits |R \rangle we observed before.

On the other hand, from the continuity of spacetime, if we look at a qubit |B\rangle that is just adjacent to |A \rangle but inside the event horizon then it will be highly entangled with |A \rangle as well. To our classical intuition, this seems fine: a \{0,1\}-valued random variable A could have large (say at least 0.9) mutual information with two distinct random variables R and B. But quantum entanglement behaves differently: it satisfies a notion known as monogamy of entanglement, which implies that the sum of entanglement of a qubit |A \rangle with two disjoint registers can be at most one. (Monogamy of entanglement is actually equivalent to the no cloning principle, see for example slide 14 here.)
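Here is a small numerical illustration of monogamy at work (a toy three-qubit example I am adding for illustration, not the actual black-hole setup): if a qubit A is maximally entangled with a register R, then its joint state with any other qubit B is necessarily a product state, so A shares no entanglement – indeed no correlation at all – with B.

```python
import numpy as np

def entropy_bits(rho):
    """Von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Three qubits ordered (A, B, R): A and R form a Bell pair, B is in |0>.
psi = np.zeros(8, dtype=complex)
psi[0b000] = psi[0b101] = 1 / np.sqrt(2)   # (|0_A 0_B 0_R> + |1_A 0_B 1_R>)/sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)  # axes (a,b,r,a',b',r')

rho_A  = np.einsum('abrxbr->ax', rho)                  # trace out B and R
rho_B  = np.einsum('abrayr->by', rho)                  # trace out A and R
rho_AB = np.einsum('abrxyr->abxy', rho).reshape(4, 4)  # trace out R only

print(entropy_bits(rho_A))                         # 1.0: A is maximally entangled with R
print(np.allclose(rho_AB, np.kron(rho_A, rho_B)))  # True: A is completely uncorrelated with B
```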

Specifically, Alice could use a unitary transformation to “distill” from |R\rangle a qubit |C\rangle that is highly entangled with |A\rangle and then jump with |A\rangle and |C\rangle into the event horizon to observe there a triple of qubits (|A\rangle, |B\rangle, |C \rangle) which violates the monogamy of entanglement.

One potential solution to the AMPS paradox is to drop the assumption that spacetime is continuous at the event horizon. This would mean that there is a huge energy barrier (i.e., a “firewall”) at the event horizon. Alas, a huge wall of fire is as close as one can get to the definition of “drama” without involving Omarosa Manigault Newman.

The “firewall paradox” is a matter of great debate among physicists. (For example, after the AMPS paper came out, a “rapid response workshop” was organized for people to suggest possible solutions.) As mentioned above, Daniel Harlow and Patrick Hayden suggested a fascinating way to resolve this paradox. They observed that to actually run this experiment, Alice would have to apply a certain “entanglement distillation” unitary D to the 0.99n qubits of the Hawking radiation. However, under reasonable complexity assumptions, computing D would require an exponential number of quantum gates! This means that by the time Alice is done with the computation, the black hole is likely to have completely evaporated, and hence there would be nothing left to jump into!

 

The above is by no means the last word of this story. Other approaches for resolving this paradox have been put forward, as well as ways to poke holes in the Harlow-Hayden resolution. Nor is it the only appearance of complexity in the AdS/CFT correspondence or quantum gravity at large. Indeed, the whole approach places much more emphasis on the information content of the world as opposed to the more traditional view of spacetime as the fundamental “canvas” for our universe. Hence information and computation play a key role in understanding how our spacetime can emerge from the conformal picture.

 

In the fall seminar, we will learn more about these issues, and will report here as we do so.

Johan Håstad wins Knuth prize

August 16, 2018

Congratulations to Johan Håstad for winning the 2018 Knuth prize! Johan of course has done groundbreaking work, from constructing pseudorandom generators based on one-way functions, through his famous switching lemma, to his PCP theorem, which continues to this day to be the blueprint for much of the work on hardness of approximation. A most deserving winner!

Johan will be presented with the award at the upcoming FOCS.

Book Review: “Factor Man”

August 14, 2018

At the recommendation of Craig Gentry, I recently read the book “Factor Man” by Matt Ginsberg. This book is about a computer scientist who discovers an efficient algorithm for SAT, which sets off an international game of intrigue involving the FBI, the NSA, Chinese spies, Swiss banks, and even some characters we know such as Steven Rudich and Scott Aaronson.
(However there is no mention of Scott’s role as the criminal mastermind behind the great Philadelphia Airport Heist.)

While it’s by no means “great literature”, Factor Man is a fun page-turner. I think it can be a particularly enjoyable read for computer scientists, as it might prompt you to come up with your own scenarios as to how things would play out if someone discovers such an algorithm. Unsurprisingly, the book is not technically perfect. The technical error that annoyed me the most was that the protagonist demonstrates his algorithm by factoring integers of sizes that can in fact be fairly easily factored today (the book refers to factoring 128 or 256 bit numbers as impressive, while 768 bit integers of general form have been factored, see also this page and this paper). If you just imagine that when the book says “n bit” numbers it actually means n byte numbers then this is fine. Network security researchers might also take issue with other points in the book (such as the ability of the protagonist to use gmail and blogspot without being identified by either the NSA or Google, as well as using a SAT algorithm to provide a “final security patch” for a product).

Regardless of these technical issues, I recommend reading this book if you’re the type of person that enjoys both computer science and spy thrillers, and I do plan to mention it to students taking my introduction to theoretical CS course.

Physics Envy

August 6, 2018

There is something cool about physics. Black holes, anti-matter, “God’s particle”: it all sounds so exciting. While our TCS “mental experiments” typically involve restricting the inputs of constant-depth circuits, physicists talk about jumping into black holes while holding a dictionary. Physicists also have a knack for names: notions such as “uncertainty principle” or “monogamy of entanglement” sound so much cooler than “Cauchy-Schwarz Inequality” or “Distributive Law”.

But a deeper reason to envy physicists is that (with certain important exceptions) they often have fairly good intuitions into how their systems of interest behave, even if they can’t always prove them. In contrast, we theoretical computer scientists are more often than not completely “in the dark”. A t-step algorithm to compute the mapping x \mapsto f(x) can be modeled as t applications of some simple local update rule to the initial state x. Studying the evolution of systems is the bread-and-butter of physics, but many physical intuitions fail for the progression of an algorithm’s computation. For example, for general algorithms, we do not have a natural sense in which the state of the system after 0.9t steps is “closer” to f(x) than it is to x. The intermediate state of a general algorithm is rather uninformative. Similarly, we do not have a nice, even conjectural, way to characterize the set of functions that can be computed in t steps: the lack of such clean “complexity measures” is strongly related to the natural proofs barrier for proving circuit lower bounds.
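As a toy illustration of that last point (using a simple “chaotic” local rule as a stand-in for a generic computation, which is of course only an analogy), the state after 0.9t steps typically looks about as far from the input as it does from the final output:

```python
import random

def step(bits):
    """One application of a simple local rule (elementary CA rule 30, periodic boundary)."""
    n = len(bits)
    return [bits[(i - 1) % n] ^ (bits[i] | bits[(i + 1) % n]) for i in range(n)]

def dist(u, v):
    """Normalized Hamming distance."""
    return sum(a != b for a, b in zip(u, v)) / len(u)

random.seed(0)
n, t = 256, 200
x = [random.randint(0, 1) for _ in range(n)]
states = [x]
for _ in range(t):
    states.append(step(states[-1]))

mid, out = states[int(0.9 * t)], states[-1]
# Both distances typically hover around 1/2: the intermediate state carries
# no obvious signal of where the computation started or where it is heading.
print(dist(mid, x), dist(mid, out))
```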

But there are some algorithms for which better “physical intuition” exists. Many optimization algorithms have a “potential function” that improves at every step, and other algorithms, such as Markov-Chain Monte-Carlo sampling, are inspired by and can be analyzed using physics intuition. These connections have been explored recently, resulting both in new algorithms and in a better understanding of algorithmic techniques for optimization and learning, as well as of the regimes in which they apply.
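For instance, here is a minimal sketch of the Metropolis algorithm sampling from the Gibbs distribution of a one-dimensional Ising chain – the kind of physics-inspired Markov-Chain Monte-Carlo algorithm alluded to above (a generic textbook-style sketch, not tied to any particular paper).

```python
import math
import random

def metropolis_ising(n=50, beta=0.5, steps=20000, seed=0):
    """Toy Metropolis sampler for a 1D Ising chain with periodic boundary conditions."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change from flipping spin i (nearest-neighbor couplings only).
        dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        # Accept the flip with the Metropolis probability min(1, exp(-beta * dE)).
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins

sample = metropolis_ising()
print(sum(sample))  # net magnetization of one (approximate) Gibbs sample
```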
There are also cases where the computer-science intuition can help in analyzing physical systems. Quantum computers are of course one example, but apparently there are other interesting physical systems (maybe even black holes? see also this summer school) which are “disordered” enough that the best way of thinking of them might be to treat them as random circuits of certain complexity. More generally, in recent years physicists have begun to view information and computation as an increasingly useful lens through which to understand physics. The “it from qubit” perspective, whereby spacetime emerges from information rather than the other way around, is growing in popularity.
Finally, on a more “meta” level, the task of doing theoretical physics itself can be thought of as a computational problem. In a fascinating talk, physicist Nima Arkani-Hamed discusses the problem of finding a theory of physics as essentially solving an optimization problem in the “space of ideas”. Specifically, it is a non convex problem, and so a local optimum is not necessarily a global one. Arkani-Hamed calls classical physics “the top of a local mountain in the space of ideas”, i.e., a local optimum, while quantum mechanics is the top of a “taller mountain”. The reason it was hard to make the leap to quantum mechanics is exactly because “they’re not smoothly connected”.

By classical physics being a local optimum, we mean that if you try to “tweak” classical physics by turning (in his words) “knobs, and little wheels, and twiddles”, you will only get a theory that is less beautiful and with less explanatory power. To get to the better theory of quantum mechanics, one needs to make a conceptual jump, rather than a series of small tweaks. Just like classical physics, quantum mechanics itself is a local optimum, for which every small “tweak” will only make it less beautiful and predictive. This is one explanation as to why it has been so difficult to find the grander theory that unifies general relativity and quantum mechanics. As Arkani-Hamed says, to find such a theory “there’s going to have to be a jump of a comparable magnitude, in the jump that people have to make in going from classical to quantum”. In a related point to “it from qubit”, he also says that “many, many of us suspect that the notion of spacetime can’t be fundamental and it has to be replaced by something else.”
Interestingly, in certain settings convex optimization can be applied to explore the “space of ideas”. In particular, some works on the “bootstrap method” use semidefinite programming to explore the space of quantum field theories that satisfy certain symmetries. It turns out that sometimes the constraints that one can derive from these symmetries are so powerful that they completely determine the theory.
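As a loose toy analogue (my own illustration, emphatically not an actual conformal bootstrap computation), here is a tiny semidefinite program in the same spirit: we ask which fourth moments are consistent with prescribed lower moments of a probability distribution on the real line, using the fact that valid moments form a positive semidefinite Hankel matrix. The extreme point the solver finds corresponds to a unique distribution (uniform on ±1), much as bootstrap constraints can sometimes pin down a unique theory.

```python
import cvxpy as cp

m1, m2 = 0.0, 1.0   # prescribed first and second moments of a hypothetical distribution

# M is the 3x3 moment (Hankel) matrix [[1, m1, m2], [m1, m2, m3], [m2, m3, m4]];
# the entries m3 = M[1,2] and m4 = M[2,2] are left free to explore.
M = cp.Variable((3, 3), symmetric=True)
constraints = [M >> 0,            # moments of a real distribution must form a PSD matrix
               M[0, 0] == 1,
               M[0, 1] == m1,
               M[0, 2] == m2,
               M[1, 1] == m2]

prob = cp.Problem(cp.Minimize(M[2, 2]), constraints)
prob.solve()
print("smallest consistent fourth moment:", prob.value)  # ~1.0, attained by the +/-1 distribution
```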

 

A fall seminar

The bottom line is that we’re seeing a more and more interesting exchange of ideas between physics and theoretical computer science. As I’m sure I already demonstrated in some cringe-worthy statements above, I know very little about this interface, but am interested in finding out more.

So, Tselil Schramm and I will be running a graduate seminar this fall. We will be learning together with the seminar participants about some of these connections, and hopefully by the end of the term we will all be a little less ignorant.

Some of the topics we will discuss include:

  • Connections between statistical physics and algorithms, understanding the physics predictions for hard and easy regimes via phase transitions.
  • Quantum information theory: quantum-inspired classical results, as well as classical algorithms for quantum problems.
  • The conformal bootstrap: exploring the space of possible physical theories using semidefinite programming.
  • Black holes, bulk/boundary correspondence, and computational complexity.
  • Quantum superiority – understanding the current proposals for demonstrating exponential speedups for quantum computers, and the evidence for their classical difficulty.
  • Quantum Hamiltonian Complexity – the quantum analog of constraint satisfaction problems, with questions such as the existence of a “Quantum PCP Theorem”.

Each one of these is probably worth a semester course on its own, and is typically presented to people with significant physics background. But we are hoping we can create a “tasting menu” and manage to take away some insights and ideas from each of those areas, even if we can’t cover the whole ground.

Participants in the seminar will not only present papers or surveys, but also write a blog post about them, which I will post here, so stay tuned for more information.

Beyond CRYPTO workshop: August 19

August 1, 2018

[Unrelated note: Huge congratulations to Costis Daskalakis – winner of the 2018 Nevanlinna medal!]

As part of the CRYPTO 2018 conference (August 19-23, Santa Barbara, CA), there is a set of affiliated events. The conference organizers (Tal Rabin, Elette Boyle, and Fabrice Benhamouda) asked me to advertise the workshop Beyond Crypto: A TCS Perspective (itself organized by Yuval Ishai and Guy Rothblum) on August 19th, which can be of particular interest to theoretical computer scientists.

Among the speakers will be:

  • Aleksander Madry (MIT) will talk about “Machine Learning and Security: The Good, the Bad, and the Hopeful”
  • Cynthia Dwork (Harvard) will talk about “Theory for Society: Crypto on Steroids”.
  • Virginia Vassilevska Williams (MIT) will talk about “A Fine-Grained Approach to Complexity”
  • Mary Wootters (Stanford) will talk about “Cryptography, Local Decoding, and Distributed Storage”
  • Scott Aaronson (UT Austin) will talk about “Certified Randomness from Quantum Supremacy”.

(I am also speaking in this workshop, and my talk is titled “On Optimal Algorithms and Assumption Factories”).

 

Theoryfest recap and FOCS call for workshops

July 1, 2018

I just came back from a wonderful TheoryFest in LA. There was a fantastic program, including not just the paper presentations, but also tutorials, keynote talks, plenary short papers, and workshops, as well as other events including the junior/senior lunches, STOC 50th birthday, and probably others that I am forgetting right now.

Still, while we are all taking down our TheoryFest trees, we can remind ourselves that there are other holidays on the theory calendar. In particular FOCS 2018 will be in October in Paris, and it also contains a “workshop and tutorial day”.  If you want to organize a workshop or tutorial, see the call for proposals. Key points are:

  • Workshop and Tutorial Day: Saturday, October 6, 2018 (Paris)
  • Workshop and Tutorial Co-Chairs: Robert Kleinberg and James R. Lee
  • Submission deadline: August 1st, 2018
  • Notification: August 6th, 2018
  • Send proposals and questions to focs2018workshops@gmail.com 

There are worse things in life than organizing a workshop in Paris…

 

Awesome Speakers at TheoryFest Computational Thresholds Workshop Tomorrow

June 29, 2018

Guest post by Sam Hopkins

I just got back from dinner with some of the great speakers who will be at our TheoryFest workshop tomorrow afternoon on computational thresholds for average-case problems, and I am very excited for what’s coming! Since I didn’t get much chance to introduce the speakers in my last post, and not all of them are the “usual suspects” for a STOC workshop, I’d like to take a few paragraphs to do so here, and discuss a little further what they might speak about.

The talks will run the gamut from statistical physics to machine learning to good old theory of computing, and in particular will aim to address questions at their 3-wise intersection. This intersection is full of open problems which are both interesting and approachable. I hope to see lots of people there!

On to the speakers:

Florent Krzakala: Florent is a statistical physicist at the Sorbonne in Paris. For more than 10 years he has been one of the leaders in studying high-dimensional statistical inference and average-case computational problems through the lens of statistical physics.

One of my favorite lines of Florent’s work was the construction (with several others) of random measurement matrices A and corresponding sparse-signal reconstruction algorithms which can recover a random sparse vector x \in \mathbb{R}^n with \rho n nonzero entries from the measurement Ax, where the dimension of A is only \rho n \times n. (That is, algorithms which recover a \rho n-sparse vector from only \rho n measurements — not \rho n \log n, or even 10 \rho n!) This involved bringing together insights from compressed sensing and spin-glass theory in a rather remarkable way.
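To give a flavor of the problem (though emphatically not of their construction – this is just a generic sketch added for illustration), here is sparse recovery by plain L1 minimization from Gaussian measurements. This generic approach needs noticeably more than \rho n measurements; the point of the spatially coupled matrices and message-passing algorithms of Krzakala et al. is to get all the way down to \rho n.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, k, m = 200, 10, 80            # ambient dimension, sparsity, number of measurements

# A random k-sparse signal and a generic Gaussian measurement matrix.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# Basis pursuit: among all vectors consistent with the measurements,
# find the one of smallest L1 norm.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```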

Lately, Florent tells me he has been interested in the physics of matrix factorization and completion problems, and of neural networks. Tomorrow he is going to discuss computational-versus-statistical gaps for a variety of high-dimensional inference problems, addressing the question of why polynomial-time inference algorithms don’t necessarily achieve information-theoretically-optimal results from a statistical physics perspective.

Nike Sun: Nike is a probabilist & computer scientist at UC Berkeley, before which she was the Schramm postdoctoral fellow at MIT and MSR. Many of you may know her from the tour de force work (joint with Jian Ding and Allan Sly) establishing the k-SAT threshold for large k — this was the biggest progress in years on this central problem in random CSPs.

Her research spans many topics in probability, but tomorrow she is discussing random CSPs, on which she is among the world experts. In this area her work has made great steps in rigorous-izing the predictions of statistical physics regarding the geometry of the space of solutions to a random CSP instance. This geometry is rich: at some clause densities random CSPs seem to have well-connected spaces of solutions, and at others the solution spaces are shattered. In between are a wealth of phase transitions whose algorithmic implications remain poorly understood (read: a great source of open problems!).

Jacob Steinhardt: Jacob is a graduating PhD student at Stanford, and a rising star in provable approaches to machine learning. One of his focuses of late has been the design of algorithms for learning problems in the presence of untrustworthy data, model misspecification, and other challenges the real world imposes on the idealized learning settings we often see here at STOC.

His paper with Moses Charikar and Greg Valiant on list-learning has attracted a lot of attention and sparked several follow-up works at this year’s STOC. In that paper, Jacob and his coauthors realized that the problem of learning parameters of a distribution when only a tiny fraction (say 0.001 percent) of your samples come from that distribution provides a generalization of and useful perspective on many classic learning problems, like learning mixture models. (In the list learning setting, one aims to output a list of candidate parameters such that one of the sets of parameters is close to the parameters of the distribution you wanted to learn.)

Jacob’s work has explored the notion that polynomial-time tractability of learning problems is intimately related to robustness. A caricature: if any learning problem solvable in polynomial time can also be solved in polynomial time in the presence of some untrustworthy data, then polynomial-time algorithms cannot solve inference problems which are statistically impossible in the face of untrustworthy data. This offers an intriguing explanation for the existence of average-case/statistical problems which are unsolvable by polynomial-time algorithms, in spite of their information-theoretic (i.e. exponential-time) solvability.

I will also give a talk.

See you there!

Edit: correct some attributions.