The firewall paradox (introduced here) is a bewitching thought experiment that mandates a deeper understanding of our reality. As luck would have it, QFT predictions seem sound, GR calculations appear valid, and semi-classical approximations look reasonable: no one is willing to budge! To save Alice from burning in the miserable firewall, therefore, we must come up with a radically new proposal. This blog post aims to map what seems to be a hard physics dilemma onto a Computer Science problem that we can, channeling the grace of a lazy programmer, argue is hard to solve. In particular, we present an overview of the Harlow-Hayden decoding task and show how it maps the firewall paradox to a hard computation on a quantum computer. We end with a rigorous definition of quantum circuit complexity, Aaronson's improved proof, a note on the AdS/CFT correspondence, and some fascinating homework (open) problems.
Have you ever confessed to yourself that you don't quite understand Black Hole complementarity? In the past decade or so, physicists realized they did not grasp the concept thoroughly either. The firewall paradox is the natural result of bewildered physicists trying to make sense of reality. Thus far, no proposed physical explanation has reached consensus. Nevertheless, Daniel Harlow and Patrick Hayden [HH13] proposed a tempting resolution to the firewall paradox using Computational Complexity (CC). Concretely, they showed the following.
We elaborate on this deep connection throughout this post.
The notion of a `conjecture' has different implications in each field. In Physics, a wrong conjecture often delights physicists, since there is more work left to do and a better theory is required to explain the physical phenomenon under study. For complexity theorists, however, if, say, the famous $\mathsf{P} \neq \mathsf{NP}$ conjecture were proved false, a few consequences would follow. First, the authors of the proof would win a million dollars (see the Millennium problems). Second, such a result would break almost all the foundations of computational complexity and cryptography. That is, refuting an (important) conjecture in computational complexity is tantamount to a real-world catastrophe! Table 1 gives a short summary.
| | Theoretical Physics | Theoretical Computer Science |
|---|---|---|
| Object | Are the mathematical models for our physical world correct? | Is our intuition about the mathematical models we defined correct? |
| Consequences of disproving | After a few days/months/years, physicists will come up with a new model and try to falsify it. | The belief system of complexity theorists collapses. Some super algorithms might show up and shake the world. |
| How to prove/disprove | Checking mathematical consistency, doing both thought and empirical experiments. | Using fancy mathematics or designing super algorithms. |

Table 1: "Conjecture", as used in Physics and Computer Science.
We labour above to convince the reader of these differences because the Harlow-Hayden decoding task has vital implications for both Physics and Computer Science. The connections between Black Holes and Computational Complexity can be thought of as a new testbench for physical models.
In Quantum Computation, gates are unitary operators. Some common gates used in the Quantum Information literature are as follows:
For more details, please refer to [NC02]. Interestingly enough, single-qubit and two-qubit gates are sufficient to construct any $n$-qubit gate! Such a set of operators is said to be universal. For example, $\mathrm{CNOT}$ together with $U$ is universal for almost every single-qubit operator $U$. Furthermore, Solovay and Kitaev gave a quantitative version of the universality theorem by showing that an $\epsilon$-approximation (in trace norm) to any operator exactly constructible over a universal set can be obtained using only $\mathrm{polylog}(1/\epsilon)$ times as many gates. A final remark on unitary operators: an $n$-qubit operator is actually a matrix of size $2^n$ by $2^n$. Namely, it requires $4^n$ complex numbers to describe an $n$-qubit operator. (Note the difference between $n$ and $2^n$.)
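As a quick sanity check on these dimensions (a sketch of ours, not from the post), the snippet below composes single- and two-qubit gates with Kronecker products in NumPy and confirms that a 2-qubit operator is a $4 \times 4$ unitary:

```python
import numpy as np

# Compose single- and two-qubit gates via Kronecker products and check that
# an n-qubit operator really is a 2^n x 2^n matrix (here n = 2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# A 2-qubit operator: H on the first qubit, then CNOT.
U = CNOT @ np.kron(H, I2)
assert U.shape == (4, 4)                        # 2^2 x 2^2
assert np.allclose(U.conj().T @ U, np.eye(4))   # unitary

# Applying U to |00> prepares the Bell pair (|00> + |11>)/sqrt(2).
bell = U @ np.array([1, 0, 0, 0])
print(np.round(bell, 3))
```

Even this tiny example makes the exponential blow-up visible: ten qubits would already require a $1024 \times 1024$ matrix.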
A quantum circuit has inputs consisting of $n$ qubits, potentially with ancilla qubits. The computation is done by interior gates from some universal gate set, e.g., $\{H, T, \mathrm{CNOT}\}$. The outputs are qubits, potentially together with garbage qubits. See the following example of a quantum circuit for the $n$-qubit Hadamard operator in Figure 1.
Similarly, the size of a quantum circuit is defined as the number of interior gates. In Figure 1, for example, the size of the circuit is $n$.
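For concreteness, here is a small sketch of ours (assuming, as in Figure 1, that the $n$-qubit Hadamard is built from one $H$ gate per wire): the circuit has only $n$ gates even though the operator it implements is a $2^n \times 2^n$ matrix.

```python
import numpy as np
from functools import reduce

# Build H^{⊗n} from n single-qubit Hadamard gates (one per wire):
# circuit size n, matrix size 2^n x 2^n.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_n(n):
    """Tensor n Hadamard gates together."""
    return reduce(np.kron, [H] * n)

n = 3
Hn = hadamard_n(n)
assert Hn.shape == (2**n, 2**n)

# Acting on |0...0> gives the uniform superposition over all 2^n basis states.
state = Hn @ np.eye(2**n)[0]
print(np.round(state, 4))
```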
Let $f : \{0,1\}^n \to \{0,1\}$ be a boolean function. Define its quantum circuit complexity as the size of the smallest quantum circuit $C$ such that for any $x \in \{0,1\}^n$,

$$\Pr\left[ C(x) = f(x) \right] \geq \frac{2}{3}.$$
Let $\mathsf{QSIZE}[s(n)]$ denote the class of boolean functions of quantum circuit complexity at most $s(n)$. The complexity class $\mathsf{BQP/poly}$ is defined as $\bigcup_{c \in \mathbb{N}} \mathsf{QSIZE}[n^c]$. It immediately follows from the definition that $\mathsf{P/poly} \subseteq \mathsf{BQP/poly}$. As proving a lower bound for $\mathsf{P/poly}$ (i.e., finding a problem that is not in $\mathsf{P/poly}$) is a long-standing, extremely difficult problem, it is believed to be hard to prove lower bounds against $\mathsf{BQP/poly}$ as well.
As $\mathsf{BQP/poly}$ is too powerful to work with, one might want to define a weaker version of the quantum complexity measure. A natural choice is to consider a uniform computational model.
In the classical setting, a uniform computational model is defined using a Turing machine. However, it is not clear how to define the corresponding quantum version, a quantum Turing machine. One way around this is via uniform circuits, defined as follows. We say a circuit family $\{C_n\}_{n \in \mathbb{N}}$ is $\mathsf{P}$-uniform if there exists a polynomial-time Turing machine such that on input $1^n$, it outputs a description of $C_n$.
Let $f : \{0,1\}^n \to \{0,1\}$ be a boolean function. Define its uniform quantum circuit complexity as the size of the smallest $\mathsf{P}$-uniform quantum circuit family $\{C_n\}$ such that for any $x \in \{0,1\}^n$,

$$\Pr\left[ C_n(x) = f(x) \right] \geq \frac{2}{3}.$$
Let $\mathsf{uniform\text{-}QSIZE}[s(n)]$ denote the class of boolean functions of uniform quantum circuit complexity at most $s(n)$. The complexity class $\mathsf{BQP}$ is defined as $\bigcup_{c \in \mathbb{N}} \mathsf{uniform\text{-}QSIZE}[n^c]$. It immediately follows from the definitions that $\mathsf{BQP} \subseteq \mathsf{BQP/poly}$.
Let $U$ be an $n$-qubit unitary matrix. Define its unitary complexity $\mathsf{C}(U)$ as the size of the smallest quantum circuit $C$ such that

$$\left\| C - U \right\| \leq \frac{1}{3},$$

where $\|\cdot\|$ denotes the operator norm.
This unitary complexity can be thought of as a relaxation of quantum circuit complexity, because a unitary matrix need not compute a boolean function. Thus, proving a lower bound on the quantum circuit complexity of boolean functions implies a lower bound on unitary complexity, while the converse is not clear. Namely, proving a super-polynomial lower bound on unitary complexity might be an easier task.
However, no non-trivial^{1} lower bound on unitary complexity is known, and there is, unfortunately, no formal barrier result explaining why this is difficult to prove.
We defined quantum circuits above, and we hope you find them exotic – at least start-up investors do. But given how fundamental quantum circuits are to the Harlow-Hayden decoding task, we ask: is it possible to efficiently (classically) simulate a quantum circuit made up of a restricted but non-trivial set of quantum gates? We show below a restricted variant of the popular Gottesman-Knill Theorem:
Theorem (Gottesman-Knill).
1. Given: a Clifford circuit $C$ made up of gates from $\{\mathrm{CNOT}, H, S\}$, where $C$ is measured on its first output line.
2. Task: Show that it is possible to (classically) efficiently sample the output distribution of $C$.
Proof:
The probability of measuring $1$ on the first output line is

$$p_1 = \langle 0^n | \, C^\dagger \, \Pi \, C \, | 0^n \rangle,$$

where $\Pi = |1\rangle\langle 1| \otimes I^{\otimes (n-1)}$. Since the projector can be written as $\Pi = \frac{I - Z_1}{2}$ (with $Z_1 = Z \otimes I^{\otimes (n-1)}$), we get

$$p_1 = \frac{1}{2} - \frac{1}{2} \langle 0^n | \, C^\dagger Z_1 C \, | 0^n \rangle,$$

where $Z_1$ suffices since we only measure the first output line of $C$. At first glance, $\langle 0^n | C^\dagger Z_1 C | 0^n \rangle$ might look like a monstrous computation to perform since, in general, the operator in the middle is a $2^n \times 2^n$ matrix, so calculating the inner product would require exponential time classically. However, recognizing that Clifford gates are normalizers of the Pauli group on $n$ qubits, note that $C^\dagger Z_1 C = \pm P_1 \otimes P_2 \otimes \cdots \otimes P_n$, where each $P_i$ is a Pauli matrix. It is straightforward to show that these update rules can be computed efficiently, gate by gate. We thus have

$$\langle 0^n | \, C^\dagger Z_1 C \, | 0^n \rangle = \pm \prod_{i=1}^{n} \langle 0 | P_i | 0 \rangle,$$

which is a product of $n$ terms. We have thus reduced the (exponentially large) burden of computing with a giant $2^n \times 2^n$ matrix to computing with $n$ matrices of size $2 \times 2$, so we can sample the output distribution efficiently. $\square$
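The Pauli update rules invoked in the proof are easy to check numerically. The snippet below (our own sketch, not part of the original proof) verifies a few conjugation identities for the Clifford gates $H$, $S$, and CNOT:

```python
import numpy as np

# Check that conjugating a Pauli by a Clifford gate yields a (signed) Pauli.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

assert np.allclose(H @ Z @ H.conj().T, X)            # H Z H^† = X
assert np.allclose(S @ X @ S.conj().T, 1j * X @ Z)   # S X S^† = Y = iXZ
# CNOT is self-inverse, so conjugation is CNOT · P · CNOT:
assert np.allclose(CNOT @ np.kron(Z, I2) @ CNOT, np.kron(Z, I2))  # Z⊗I fixed
assert np.allclose(CNOT @ np.kron(I2, Z) @ CNOT, np.kron(Z, Z))   # I⊗Z -> Z⊗Z
print("Clifford conjugation keeps Paulis Pauli")
```

Tracking these $2 \times 2$ updates gate by gate is exactly what lets a classical simulator avoid the $2^n \times 2^n$ matrix.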
All of the black hole physics covered in the previous blog post leads to the moment (we hope) you have been waiting for: a charming resolution of the firewall paradox. Consider an old, rusty black hole that has radiated away more than half of its matter. Let $R$ be the old Hawking radiation, let $B$ represent the fresh Hawking radiation coming right out of the boundary of the black hole, and let $H$ denote $B$'s partner mode behind the horizon. Alice is our canonical Ph.D. student who is brave enough to risk her life for physics. Since the black hole is a giant information scrambler, we expect to find entanglement between $B$ and $R$ with overwhelming probability. We know from QFT that there are Bell pairs straddling the event horizon of the black hole, so $B$ and $H$ should be maximally entangled. But this is a problem because $B$ cannot be entangled with both $R$ and $H$! The AMPS argument shows that if Alice is able to distill a Bell pair between $B$ and $R$, then we should see a firewall of photons at the event horizon, thus violating the no-drama postulate. See Figure 2 for more intuition about the setup. (Note that the arcs represent Bell pairs, consistent with the 3D Quon language.) If we take Black Hole complementarity seriously, then we have an answer! If Alice does not distill a Bell pair between $B$ and $R$, then nothing really happens. However, if Alice does manage to distill the entanglement between $B$ and $R$, then we witness a firewall. Isn't this answer deeply unsatisfactory? Why should the existence of a firewall depend on Alice's ability to distill entanglement? What is so special about this decoding task?
The H-H decoding task answers precisely this question. Intuitively, it says that if Alice manages to distill a Bell pair between $B$ and $R$, she could also invert a one-way function, a task we believe is very hard to perform! We conjecture that Alice would take exponential time to decode the entanglement, so the black hole would evaporate long before Alice even makes a dent in the problem! Before we provide an in-depth resolution of the paradox through the H-H decoding task, let us (as good philosophers do) briefly review our assumptions:
Let us jump into the definition of the Harlow-Hayden decoding task.
Definition (Harlow-Hayden decoding task).
Given a (polynomial-size) quantum circuit $C$ as input such that $C|0^n\rangle = |\psi\rangle_{HBR}$, where $H$, $B$, $R$ are three disjoint parts of the qubits. Furthermore, it is guaranteed that there exists a unitary operator $U$ acting only on the qubits in $R$ such that after applying $U$, the rightmost qubit of $B$ and the leftmost qubit of $R$ form a Bell pair $\frac{1}{\sqrt{2}}\left( |00\rangle + |11\rangle \right)$. The goal of the Harlow-Hayden decoding task is then to find a quantum circuit for such a $U$ on the qubits in $R$. See Figure 3.
A necessary condition for the firewall paradox to make sense is that the Harlow-Hayden decoding task should be easy. If Alice cannot distill the entanglement efficiently, the black hole will evaporate before Alice is ready to witness the firewall!
To refute the firewall paradox, Harlow and Hayden proved the following theorem.
Theorem 1.
If the Harlow-Hayden decoding task can be done in $\mathsf{BQP}$, then $\mathsf{SZK} \subseteq \mathsf{BQP}$.
We won't formally define the complexity class $\mathsf{SZK}$ (statistical zero-knowledge) here. However, it is important to know that the foundation of lattice-based cryptography, a promising quantum-secure crypto framework, rests on the hardness of certain problems that lie in $\mathsf{SZK}$. If $\mathsf{SZK} \subseteq \mathsf{BQP}$, then all lattice-based cryptosystems could be broken by polynomial-time quantum algorithms!
Instead of a proof for Theorem 1, which is more involved, we give a proof for an improvement of the Harlow-Hayden theorem due to Scott Aaronson. (Aaronson also showed that there might not even exist quantum-secure cryptography if the Harlow-Hayden decoding task can be efficiently solved!)
In Aaronson’s lecture notes [Aar16], he showed the following improvement on Theorem 1.
Theorem 2.
If the Harlow-Hayden decoding task can be done in $\mathsf{BQP}$, then quantum-secure injective one-way functions do not exist.
Before formally defining a one-way function, it is paramount to understand its impact: modern cryptosystems are built from some variant of a one-way function. Intuitively, primitives with the one-way property are (i) easy to compute (e.g., to encrypt) but (ii) hard to invert (e.g., to attack). As a result, if there are no quantum-secure injective one-way functions, then that is strong evidence that quantum-secure cryptography might not exist.
Now, let us formally define what a quantum-secure injective one-way function is and give a formal proof of Theorem 2.
Definition 1 (Quantum-secure injective one-way function).
A boolean function $f : \{0,1\}^n \to \{0,1\}^m$ is a quantum-secure injective one-way function if

- $f$ is injective,
- $f$ is computable by a polynomial-size circuit, and
- for any polynomial-time quantum algorithm $A$,

$$\Pr_{x \leftarrow \{0,1\}^n}\left[ f\big(A(f(x))\big) = f(x) \right] \leq \mathrm{negl}(n).$$

Note that since $f$ is injective, the last condition can actually be phrased as $\Pr_x\left[ A(f(x)) = x \right] \leq \mathrm{negl}(n)$. Also, the condition $f(A(f(x))) = f(x)$ should be read as "on input $f(x)$, the quantum algorithm $A$ outputs a preimage of $f(x)$", namely, $A$ inverts $f$.
Suppose the Harlow-Hayden decoding task is in $\mathsf{BQP}$; we are going to show that for any injective $f$ computable by some polynomial-size quantum circuit, there is a polynomial-time quantum algorithm that inverts $f$. Namely, $f$ is not a quantum-secure injective one-way function.

To get an efficient inverting algorithm for $f$, let us first prepare a special circuit $C_f$ from $f$ and treat it as an input to the Harlow-Hayden decoding task. The circuit $C_f$ will simply map the all-zero state to the following state:

$$|\psi\rangle_{HBR} = \frac{1}{\sqrt{2^{n+1}}} \sum_{x \in \{0,1\}^n} |x\rangle_H \otimes \Big( |0\rangle_B \, |x, 0\rangle_R + |1\rangle_B \, |f(x), 1\rangle_R \Big).$$

Note that as $f$ has a polynomial-size quantum circuit, the circuit $C_f$ can also be implemented in polynomial size.

Next, the easiness of the Harlow-Hayden decoding task guarantees the existence of a unitary operator $U_R$ acting only on the qubits in $R$ such that for any $x \in \{0,1\}^n$,

$$U_R \, |x, 0\rangle_R = |0\rangle |g_x\rangle \quad \text{and} \quad U_R \, |f(x), 1\rangle_R = |1\rangle |g_x\rangle$$

for some garbage state $|g_x\rangle$; this is exactly the condition for $B$ and the leftmost qubit of $R$ to end up in a Bell pair. Thus, the algorithm that, on input $y = f(x)$, prepares $|y, 1\rangle_R$, applies $U_R$, flips the leftmost qubit with an $X$ gate, and applies $U_R^\dagger$, inverts $f$, because for any $x \in \{0,1\}^n$,

$$U_R^\dagger \, (X \otimes I) \, U_R \, |f(x), 1\rangle = U_R^\dagger \, (X \otimes I) \, |1\rangle |g_x\rangle = U_R^\dagger \, |0\rangle |g_x\rangle = |x, 0\rangle.$$

Furthermore, as we are guaranteed that the Harlow-Hayden decoding task is in $\mathsf{BQP}$, $U_R$ (and hence $U_R^\dagger$) has a polynomial-size quantum circuit. Namely, $f$ can be efficiently inverted by a quantum algorithm and thus is not a quantum-secure injective one-way function. $\square$
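To make the reduction concrete, here is a toy simulation of ours (not from the paper): we hand-build a distiller $U_R$ for a small bijective $f$. All states involved are computational basis states, so $U_R$ is just a permutation, and the inverter $U_R^\dagger (X \otimes I) U_R$ can be run directly:

```python
# Toy instance of the reduction: if a distilling unitary U_R exists, then
# composing U_R, a bit flip, and U_R^dagger inverts f. We take f to be a
# small bijection on 3-bit strings, so U_R is a permutation of R's basis
# states |y, b> (y is n bits; b is the qubit destined to pair up with B).
N = 8                                      # 2^n for n = 3
f = {x: (3 * x + 1) % N for x in range(N)} # a toy injective f
f_inv = {y: x for x, y in f.items()}
assert len(f_inv) == N                     # f is injective

def U_R(y, b):
    """Distiller: |y,0> -> |0,y> (garbage y) and |f(x),1> -> |1,x>.
    Both branches produce the SAME garbage x, which is what creates
    the Bell pair with B."""
    return (0, y) if b == 0 else (1, f_inv[y])

def U_R_dag(b, g):
    """Inverse of the permutation U_R."""
    return (g, 0) if b == 0 else (f[g], 1)

def invert(y):
    """Alice's inverter: apply U_R, flip the distilled qubit, undo U_R."""
    b, g = U_R(y, 1)         # |f(x),1> -> |1,x>
    b = 1 - b                # X gate on the distilled qubit -> |0,x>
    x, _ = U_R_dag(b, g)     # U_R^dagger |0,x> = |x,0>
    return x

for x in range(N):
    assert invert(f[x]) == x
print("inverted f on all", N, "inputs")
```

The hard part in reality is, of course, that Alice is handed no such explicit $U_R$; she only knows one exists.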
The Harlow-Hayden decoding task, as well as Aaronson's improvement, can be interpreted as (strong) evidence that distilling the $B$-$R$ Bell pair is hard (in the worst case^{2}). One might hope for average-case hardness for the Harlow-Hayden decoding task and thus infer that most black holes are difficult to distill. However, even if such average-case hardness results existed, physicists would still remain dissatisfied! The foremost grievance a physicist may have is the lack of a coherent causal framework to model reality. That is, what happens if, in the very small but non-zero chance, a black hole is easy to distill? Does that mean that a firewall exists in such a black hole? How can a unifying theory explain such a situation coherently? An ideal theory for theoretical physicists should work for every black hole, not just for most black holes! Second, physicists seem to dislike the abstract, process-theoretic approach undertaken by computer scientists. Here, we have completely ignored the internal dynamics of a black hole, or even a full description of its evolving Hilbert space. They would, for instance, like to see a differential equation that captures the difficulty of distilling a black hole throughout its evolution. Resolutions to the firewall paradox, or efforts towards building a theory of quantum gravity, should be somewhat explicit in the sense that one can really instantiate some (toy) examples from the theory, see how the system evolves, and examine whether this fits real experience of the world. In other words, a theory with a black box (i.e., a complexity conjecture) might not be regarded as a resolution.
^{1}Non-trivial here means the unitary matrix is explicit, in the sense that given indices $(i, j)$, one can efficiently compute the entry $U_{i,j}$.
^{2}Hard in the worst case means that there does not exist an efficient algorithm that works on every input. Another hardness notion is hardness on average, by which we mean there does not exist an efficient algorithm that works for most inputs. Showing average-case hardness is in general a more difficult task than proving worst-case hardness.
^{3}Does the following hold: for any $n$-qubit unitary matrix $U$, there exists a classical oracle $\mathcal{O}$ such that $\mathsf{C}^{\mathcal{O}}(U) \leq \mathrm{poly}(n)$, where $\mathsf{C}^{\mathcal{O}}(U)$ is the minimum size of a quantum circuit that approximates $U$ with oracle access to $\mathcal{O}$?
In 2013, Harlow and Hayden drew an unexpected connection between theoretical computer science and theoretical physics when they proposed a potential resolution to the famous black hole Firewall paradox using computational complexity arguments. This blog post attempts to lay out the Firewall paradox and other (at first) peculiar properties associated with black holes that make them such intriguing objects to study. This post is inspired by Scott Aaronson's [1] and Daniel Harlow's [2] excellent notes on the same topic. The notes accompanying this post provide a thorough and self-contained introduction to theoretical physics from a CS perspective. Furthermore, for a quick and intuitive summary of the Firewall paradox and its link to computational complexity, refer to this blog post by Professor Barak from last summer.
Black holes are fascinating objects. Very briefly, they are regions of spacetime where the matter-energy density is so high, and hence the gravitational effects are so strong, that no particle (not even light!) can escape. More specifically, we define a particular distance called the "Schwarzschild radius," and anything that passes within the Schwarzschild radius (also known as the "event horizon") can never escape from the black hole. General relativity predicts that such a particle is bound to hit the "singularity," where spacetime curvature becomes infinite. In the truest sense of the word, black holes represent the "edge cases" of our Universe. Hence, perhaps, it is fitting that physicists believe that through thought experiments at these edge cases, they can investigate the true behavior of the laws that govern our Universe.
Once you know that such an object exists, many questions arise: what would it look like from the outside? Could we already be within the event horizon of a future black hole? How much information does it store? Would something special be happening at the Schwarzschild radius? How would the singularity manifest physically?
The journey of trying to answer these questions can aptly be described by the term “radical conservatism.” This is a phrase that has become quite popular in the physics community. A “radical conservative” would be someone that tries to modify as few laws of physics as possible (that’s the conservative part) and through their dogmatic refusal to modify these laws and go wherever their reasoning leads (that’s the radical part) is able to derive amazing things. We radically use the given system of beliefs to lead to certain conclusions (sometimes paradoxes!) and then conservatively update the system of beliefs to resolve the created paradox and iterate. We shall go through a few such cycles and end at the Firewall paradox. Let’s begin with the first problem: how much information does a black hole store?
A black hole is a physical object. Hence, it could be able to store some information. But how much? In other words, what should the entropy of a black hole be? There are two simple ways of looking at this problem:
The first answer troubled Jacob Bekenstein. He was a firm believer in the Second Law of Thermodynamics: the total entropy of an isolated system can never decrease over time. However, if the entropy of a black hole is 0, it provides us with a way to reduce the entropy of any system: just dump objects with non-zero entropy into the black hole.
Bekenstein drew connections between the area of the black hole and its entropy. For example, the way in which a black hole's area could only increase (according to classical general relativity) seemed reminiscent of entropy. Moreover, when two black holes merge, the area of the final black hole will always exceed the sum of the areas of the two original black holes. This is surprising, as for two ordinary spheres merged at fixed total volume, the area/radius of the merged sphere is always less than the sum of the areas/radii of the two individual spheres:

$$r = \left( r_1^3 + r_2^3 \right)^{1/3} \;\Longrightarrow\; r < r_1 + r_2 \quad \text{and} \quad r^2 < r_1^2 + r_2^2.$$
Most things we're used to, like a box of gas, have an entropy that scales linearly with volume. However, black holes are not like most things. Bekenstein predicted that the entropy of a black hole should be proportional to its area $A$, not its volume. We now believe that Bekenstein was right, and it turns out that the entropy of the black hole can be written as:

$$S_{BH} = \frac{k_B \, A}{4 \, \ell_P^2},$$

where $k_B$ is Boltzmann's constant and $\ell_P$ is the Planck length, a length scale where physicists believe a quantum theory of gravity will be required. Interestingly, it seems as though the entropy of the black hole is (one-fourth times) the number of Planck-length-sized squares it would take to tile the horizon area. (Perhaps the microstates of the black hole are "stored" on the horizon?) Using "natural units" where we set all constants to 1, we can write this as

$$S_{BH} = \frac{A}{4},$$
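Plugging in SI constants (a back-of-the-envelope check of ours, not from the post) shows just how large this area-law entropy is for a solar-mass black hole:

```python
import math

# Evaluate S = k_B * A / (4 * l_P^2) for a solar-mass black hole,
# reported in units of k_B (i.e., dimensionless entropy).
G    = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
c    = 2.998e8        # m/s, speed of light
hbar = 1.055e-34      # J s, reduced Planck constant
M    = 1.989e30       # kg, one solar mass

r_s   = 2 * G * M / c**2       # Schwarzschild radius (~3 km)
A     = 4 * math.pi * r_s**2   # horizon area
l_p2  = G * hbar / c**3        # Planck length squared
S_over_kB = A / (4 * l_p2)     # entropy in units of k_B
print(f"S / k_B = {S_over_kB:.2e}")
```

The result is of order $10^{77}$, astronomically larger than the entropy of an ordinary star of the same mass.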
which is very pretty. Even though this number is not infinite, it is very large. Here are some numerical estimates from [2]. The entropy of the universe (minus all the black holes) mostly comes from the cosmic microwave background radiation and is about $10^{88}$ in natural units. Meanwhile, in the same units, the entropy of a solar-mass black hole is about $10^{77}$. The entropy of our sun, as it is now, is a much smaller $10^{58}$. The entropy of the supermassive black hole in the center of our galaxy is about $10^{90}$, larger than the rest of the universe combined (minus black holes). The entropy of any of the largest known supermassive black holes would be roughly $10^{97}$. Hence, there is a simple "argument" which suggests that black holes are the most efficient information storage devices in the universe: if you wanted to store a lot of information in a region smaller than a black hole horizon, it would probably have to be so dense that it would just be a black hole anyway.
However, this resolution to “maintain” the second law of thermodynamics leads to a radical conclusion: if a black hole has non-zero entropy, it must have a non-zero temperature and hence, must emit thermal radiation. This troubled Hawking.
Hawking did a semi-classical computation looking at energy fluctuations near the horizon and actually found that black holes do radiate! They emit energy in the form of very low-energy particles. This is a unique feature of what happens to black holes when you take quantum field theory into account and is very surprising. However, the Hawking radiation from any actually existing black hole is far too weak to have been detected experimentally.
One simplified way to understand Hawking radiation is to think of highly coupled modes (think "particles") being formed continuously near the horizon. As this formation must obey conservation of energy, one of these particles has negative energy and the other has the same energy with a positive sign; hence, they are maximally entangled (if you know the energy of one of the particles, you know the energy of the other): we will be referring to this as short-range entanglement. The one with negative energy falls into the black hole while the one with positive energy comes out as Hawking radiation. The maximally-entangled state of the modes looks like:

$$\frac{1}{\sqrt{2}} \left( |0\rangle_{\text{in}} |0\rangle_{\text{out}} + |1\rangle_{\text{in}} |1\rangle_{\text{out}} \right).$$
Here is a cartoon that represents the process:
Because energetic particles are leaving the black hole and negative-energy particles are falling in, the black hole itself will actually shrink, which would never happen classically! Eventually, the black hole will disappear. In fact, the evaporation time of the black hole scales polynomially in its radius, as $r^3$. The black holes that we know about are simply too big and are shrinking far too slowly: a stellar-mass black hole would take about $10^{67}$ years to disappear from Hawking radiation.
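Using the standard evaporation-time formula $t \approx 5120\,\pi\, G^2 M^3 / (\hbar c^4)$ (a numerical check of ours; the post itself only quotes the scaling), one can verify the absurdly long lifetime of a stellar-mass black hole:

```python
import math

# Hawking evaporation time for a solar-mass black hole,
# t = 5120 * pi * G^2 * M^3 / (hbar * c^4), converted to years.
G    = 6.674e-11      # m^3 kg^-1 s^-2
c    = 2.998e8        # m/s
hbar = 1.055e-34      # J s
M    = 1.989e30       # kg, one solar mass

t_sec   = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
t_years = t_sec / 3.156e7     # seconds per year
print(f"evaporation time ~ {t_years:.1e} years")
```

Note the $M^3$ (equivalently $r^3$) scaling: halving the mass shortens the lifetime by a factor of eight.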
However, the fact that black holes disappear does not play nicely with another core belief in physics: reversibility.
A core tenet of quantum mechanics is unitary evolution: every operation that happens to a quantum state must be reversible (invertible). That is, if we know the final state and the set and order of operations performed, we should be able to invert the operations and get back the initial state. No information is lost. However, something weird happens with an evaporating black hole. First, let us quickly review pure and mixed quantum states. A pure state is a quantum state that can be described by a single ket vector, while a mixed state represents a classical (probabilistic) mixture of pure states and can be expressed using a density matrix. For example, in both the pure state $\frac{1}{\sqrt{2}}\left( |0\rangle + |1\rangle \right)$ and the mixed state $\frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1|$, one would measure $|0\rangle$ half the time and $|1\rangle$ half the time. However, in the latter one would not observe any quantum effects (think interference patterns of the double-slit experiment).
People outside of the black hole will not be able to measure the objects (quantum degrees of freedom) that are inside the black hole. They will only be able to perform measurements on a subset of the information: the part available outside of the event horizon. So, the state they would measure would be a mixed state. As a simple example, if the state of the particles near the horizon is:

$$\frac{1}{\sqrt{2}} \left( |0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B \right),$$

tracing over the qubit $A$ leaves us with the density matrix:

$$\rho_B = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix},$$

which is a classical mixed state (50% of measurements result in 0 and 50% result in 1). The off-diagonal entries of the density matrix encode the "quantum interference" of the quantum state. Here they are 0: in some sense, we have lost the "quantum" aspect of the information.
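The partial trace above is easy to reproduce numerically. This sketch of ours traces out qubit A of a Bell pair and checks that the off-diagonal interference terms vanish:

```python
import numpy as np

# Trace out qubit A of the Bell pair (|00> + |11>)/sqrt(2) and observe
# that the remaining state of B is the classical 50/50 mixture I/2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(bell, bell.conj())                 # density matrix on AB

# Partial trace over A: reshape rho into a (a, b, a', b') tensor and
# contract the A indices.
rho_B = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)
print(np.round(rho_B, 3))
```

The off-diagonal entries of `rho_B` come out exactly zero, which is the "loss of quantumness" described above.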
In fact, Hawking went and traced over the field degrees of freedom hidden behind the event horizon, and found something surprising: the mixed state was thermal! It acted "as if" it were being emitted by some object with temperature $T$, which does not depend on what formed the black hole and depends solely on the black hole's mass. Now we have the information paradox: a black hole that formed from a pure state appears to evaporate into a thermal mixed state, a manifestly non-unitary evolution.
What gives? If the process of black hole evaporation is truly "non-unitary," it would be a first for physics. We have no way to make sense of quantum mechanics without the assumption of unitary operations and reversibility; hence, it does not seem very conservative to get rid of it.
Physicists don't know exactly how information is conserved, but they think that assuming it is will help them figure out something about quantum gravity. Most physicists believe that the process of black hole evaporation should indeed be unitary: the information about what went into the black hole is being released via the radiation in a way too subtle for us to currently understand. What does this mean?
However, this causes yet another unwanted consequence: the violation of the no-cloning theorem!
The no-cloning theorem simply states that an arbitrary quantum state cannot be copied. In other words, if you have one qubit representing some initial state, no matter what operations you do, you cannot end up with two qubits with the same state you started with. How do our assumptions violate this?
Say you are outside the black hole and send in a qubit carrying some information (the input). You collect the radiation corresponding to that qubit (the output) as it comes out. Now you decode this radiation (the output) to determine the state of the infalling matter (the input). Aha! You have violated the no-cloning theorem, as you have two copies of the same state: one inside and one outside the black hole.
So wait, again, what gives?
One possible resolution is to postulate that the inside of the black hole just does not exist. However, that doesn’t seem very conservative. According to Einstein’s theory of relativity, locally speaking, there is nothing particularly special about the horizon: hence, one should be able to cross the horizon and move towards the singularity peacefully.
The crucial observation is that for the person who jumped into the black hole, the outside universe may as well not exist; they cannot escape. Extending this further: perhaps somebody on the outside does not believe the interior of the black hole exists, somebody on the inside does not believe the exterior exists, and they are both right. This hypothesis, formulated in the early 1990s, has been given the name Black Hole Complementarity. The word "complementarity" comes from the fact that the two observers give different yet complementary views of the world.
In this view, according to someone on the outside, instead of entering the black hole at some finite time, the infalling observer will instead be stopped at some region very close to the horizon, which is quite hot when you get up close. Then, the Hawking radiation coming off of the horizon will hit the observer on its way out, carrying the information about them which has been plastered on the horizon. So the outside observer, who is free to collect this radiation, should be able to reconstruct all the information about the person who went in. Of course, that person will have burned up near the horizon and will be dead.
From the infalling observer's perspective, however, they pass peacefully through the horizon and sail on toward the singularity. So from their perspective they live, while from the outside it looks like they died. No contradiction can be reached, though, because nobody has access to both realities.
But why is that? Couldn't the outside observer see the infalling observer die and then rocket straight into the black hole to meet the alive person once again before they hit the singularity, thus producing a contradiction?
The core idea is that it must take some time for the infalling observer to "thermalize" (equilibrate) on the horizon: enough time for the infalling observer to reach the singularity and hence become completely inaccessible. Calculations do show this to be true. In fact, we can already sense a taste of complexity theory even in this argument: we are assuming that some process is slower than some other process.
In summary, according to the BHC worldview, the information outside the horizon is redundant with the information inside the horizon.
But, in 2012, a new paradox, the Firewall paradox, was introduced by AMPS [3]. This paradox seems to be immune to BHC: the paradox exists even if we assume everything we have discussed till now. The physics principle we violate, in this case, is the monogamy of entanglement.
Before we state the Firewall paradox, we must introduce two key concepts.
Monogamy of entanglement is a statement about the maximum entanglement a particle can share with other particles. More precisely, if two particles A and B are maximally entangled with each other, they cannot be at all entangled with a third particle C. Two maximally entangled particles have saturated both of their "entanglement quotas". In order for them to have correlations with other particles, they must decrease their entanglement with each other.
Monogamy of entanglement can be understood as a static version of the no-cloning theorem. Here is a short proof sketch of why polygamy of entanglement implies the violation of no-cloning theorem.
Let’s take a short detour to explain quantum teleportation:
Say you have three particles A, B, and C, with A and B maximally entangled (a Bell pair), and C in an arbitrary quantum state:

$$|\Phi^+\rangle_{AB} = \frac{1}{\sqrt{2}}\left( |00\rangle_{AB} + |11\rangle_{AB} \right), \qquad |\psi\rangle_C = \alpha|0\rangle + \beta|1\rangle.$$

We can write their total state as:

$$|\Phi^+\rangle_{AB} \otimes |\psi\rangle_C = \frac{1}{\sqrt{2}}\left( |00\rangle_{AB} + |11\rangle_{AB} \right) \otimes \left( \alpha|0\rangle_C + \beta|1\rangle_C \right).$$

Re-arranging and pairing A and C, the state simplifies to:

$$\frac{1}{2} \Big[ |\Phi^+\rangle_{AC} \left( \alpha|0\rangle + \beta|1\rangle \right)_B + |\Phi^-\rangle_{AC} \left( \alpha|0\rangle - \beta|1\rangle \right)_B + |\Psi^+\rangle_{AC} \left( \alpha|1\rangle + \beta|0\rangle \right)_B + |\Psi^-\rangle_{AC} \left( \beta|0\rangle - \alpha|1\rangle \right)_B \Big],$$

which means that if one does a Bell measurement on A and C, then based on the measurement outcome we know exactly which state B is projected to, and by using rotations we can make the state of B equal to the initial state of C. Hence, we have teleported quantum information from C to B.
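Here is a small NumPy check of ours of the teleportation identity: projecting A and C onto each of the four Bell states leaves B in a Pauli rotation of C's original state, which the matching correction undoes.

```python
import numpy as np

# Verify: for each Bell-measurement outcome on (A, C), a fixed Pauli
# correction on B recovers C's original state.
rng = np.random.default_rng(1)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi_C = v / np.linalg.norm(v)                     # arbitrary state on C

phi_AB = np.array([1, 0, 0, 1]) / np.sqrt(2)      # Bell pair on A, B
state = np.kron(phi_AB, psi_C).reshape(2, 2, 2)   # tensor indices (a, b, c)

X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
bells = {                                          # Bell basis on (A, C)
    "Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "Phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "Psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}
corr = {"Phi+": np.eye(2), "Phi-": Z, "Psi+": X, "Psi-": X @ Z}

for k, bell_k in bells.items():
    # Contract <bell_k|_{AC} against indices (a, c); B remains.
    B = np.einsum("ac,abc->b", bell_k.reshape(2, 2).conj(), state)
    B = corr[k] @ B * 2       # undo the 1/2 amplitude, apply correction
    assert np.allclose(B, psi_C)
print("teleportation identity verified for all four outcomes")
```

Each of the four outcomes occurs with probability $1/4$, and no outcome alone reveals anything about $\alpha$ or $\beta$.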
Now, assume that A was maximally entangled to both B and D. Then by doing the same procedure, we could teleport quantum information from C to both B and D and hence, violate the no-cloning theorem!
Named after Don Page, the “Page time” refers to the time when the black hole has emitted enough of its energy in the form of Hawking radiation that its entropy has (approximately) halved. Now the question is, what’s so special about the Page time?
First note that the rank of the density matrix is closely related to its purity (or mixedness). For example, on $n$ qubits the completely mixed state is the diagonal matrix:
$$\rho = \frac{1}{2^n} I = \mathrm{diag}\Big(\frac{1}{2^n}, \ldots, \frac{1}{2^n}\Big),$$
which has maximal rank ($2^n$). Furthermore, a completely pure state can always be represented as (if we just change the basis and make the first column/row represent $|\psi\rangle$):
$$\rho = \mathrm{diag}(1, 0, \ldots, 0),$$
which has rank 1.
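These two rank statements can be verified directly; a small numpy sketch (with $n = 3$ qubits, a choice made here for illustration):

```python
import numpy as np

n = 3          # number of qubits
d = 2**n

# Maximally mixed state: rho = I/d, which has full rank d = 2^n.
rho_mixed = np.eye(d) / d

# A pure state rho = |psi><psi| has rank 1, for any normalized |psi>.
psi = np.random.default_rng(1).normal(size=d) + 0j
psi /= np.linalg.norm(psi)
rho_pure = np.outer(psi, psi.conj())

rank_mixed = np.linalg.matrix_rank(rho_mixed)   # -> 8
rank_pure = np.linalg.matrix_rank(rho_pure)     # -> 1
```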
Imagine we have watched a black hole form and begin emitting Hawking radiation. Say we start collecting this radiation. The density matrix of the radiation will have the form:
$$\rho = \sum_{i=1}^{2^{n-k}} p_i\, |\psi_i\rangle\langle\psi_i|,$$
where $n$ is the total number of qubits in our initial state, $k$ is the number of qubits outside (in the form of radiation), and $p_i$ is the probability of each state. We are simply tracing over the degrees of freedom inside the black hole (as there are $n-k$ degrees of freedom inside the black hole, the dimensionality of this space is $2^{n-k}$).
Don Page proposed the following graph of what he thought entanglement entropy of this density matrix should look like. It is fittingly called the “Page curve.”
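The qualitative shape of the Page curve can be reproduced in a toy model. The sketch below assumes (as is standard in this line of argument, and an assumption of this sketch) that the full $n$-qubit state is Haar-random, and computes the entanglement entropy of the first $k$ qubits for each $k$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                      # total qubits in the "black hole + radiation" system
d = 2**n

# A Haar-random pure state stands in for the full black-hole microstate.
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

def entanglement_entropy(psi, k, n):
    """Von Neumann entropy (in bits) of the first k qubits."""
    M = psi.reshape(2**k, 2**(n - k))        # split: radiation | interior
    s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Entropy of the collected radiation as a function of how much has come out.
page_curve = [entanglement_entropy(psi, k, n) for k in range(n + 1)]
```

The resulting list rises roughly linearly, peaks near $k = n/2$ (the Page time), and falls back to zero once all qubits are collected, matching the shape of the Page curve.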
The entanglement entropy of the outgoing radiation finally starts decreasing, as we are finally able to start seeing entanglements between all this seemingly random radiation we have painstakingly collected. Some people like to say that if one could calculate the Page curve from first principles, the information paradox would be solved. Now we are ready to state the firewall paradox.
Say Alice collects all the Hawking radiation coming out of a black hole. Sometime after the Page time, Alice is able to see significant entanglement in all the radiation she has collected. Alice then dives into the black hole and sees an outgoing Hawking mode escaping. Given the Page curve, we know that observing this outgoing mode must decrease the entropy of our observed mixed state. In other words, it must make our observed density matrix purer, and hence be entangled with the particles we have already collected.
(Another way to think about this: let’s say that a random quantum circuit at the horizon scrambles the information in a non-trivial yet injective way in order for radiation particles to encode the information regarding what went inside the black hole. The output qubits of the circuit must be highly entangled due to the random circuit.)
However, given our discussion on Hawking radiation about short-range entanglement, the outgoing mode must be maximally entangled with an in-falling partner mode. This contradicts monogamy of entanglement! The outgoing mode cannot be entangled both with the radiation Alice has already collected and also maximally entangled with the nearby infalling mode!
So, to summarize, what did we do? We started with the existence of black holes and through our game of conservative radicalism, modified how physics works around them in order to make sure the following dear Physics principles are not violated by these special objects:
And finally, ended with the Firewall paradox.
So, for the last time in this blog post, what gives?
In this post, we will talk about detecting phase transitions using
Approximate-Message-Passing (AMP), which is an extension of
Belief-Propagation to “dense” models. We will also discuss the Replica
Symmetric trick, which is a heuristic method of analyzing phase
transitions. We focus on the Rademacher spiked Wigner model (defined
below), and show how both these methods yield the same phase transition
in this setting.
The Rademacher spiked Wigner model (RSW) is the following. We are given
observations $Y = \frac{\lambda}{n} x x^\top + W$, where $x \in \{\pm 1\}^n$
(sampled uniformly) is the true signal and $W$ is a
Gaussian-Orthogonal-Ensemble (GOE) matrix:
$W_{ij} = W_{ji} \sim \mathcal{N}(0, 1/n)$ for $i \neq j$ and
$W_{ii} \sim \mathcal{N}(0, 2/n)$. Here $\lambda$ is the signal-to-noise
ratio. The goal is to approximately recover $x$.
The question here is: how small can $\lambda$ be such that it is
impossible to recover anything reasonably correlated with the
ground-truth $x$? And what do the approximate-message-passing algorithm
and the replica method have to say about this?
To answer the first question, one can think of the task here as
distinguishing the null model $Y = W$ from the planted model
$Y = \frac{\lambda}{n} x x^\top + W$. One approach to distinguishing these
distributions is to look at the spectrum of the observation matrix $Y$. (In
fact, it turns out that this is an asymptotically optimal distinguisher [1].)
The spectrum of the noise alone follows the semicircle law, supported on
$[-2, 2]$ ([2]). When $\lambda > 1$, we start to see an outlier eigenvalue in the
planted model.
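This spectral transition is easy to see numerically. A sketch, assuming the normalization $Y = \frac{\lambda}{n}xx^\top + W$ with GOE entries of variance $1/n$ (under which the bulk edge sits at $2$ and, for $\lambda > 1$, the outlier appears near $\lambda + 1/\lambda$):

```python
import numpy as np

def top_eigenvalue(lam, n, rng):
    """Largest eigenvalue of Y = (lam/n) x x^T + W, with W ~ GOE(1/n)."""
    x = rng.choice([-1.0, 1.0], size=n)
    A = rng.normal(size=(n, n)) / np.sqrt(n)
    W = (A + A.T) / np.sqrt(2)               # symmetric, off-diag variance 1/n
    Y = (lam / n) * np.outer(x, x) + W
    return np.linalg.eigvalsh(Y)[-1]

rng = np.random.default_rng(0)
n = 1500
lam_top_below = top_eigenvalue(0.5, n, rng)  # lam < 1: stuck at the bulk edge ~2
lam_top_above = top_eigenvalue(1.5, n, rng)  # lam > 1: outlier near lam + 1/lam
```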
This section approximately follows the exposition in [3].
First, note that in the Rademacher spiked Wigner model, the posterior
distribution of the signal $x$ conditioned on the observation $Y$ is:
$\Pr[x \mid Y] \propto \prod_{i < j} \exp(\lambda Y_{ij} x_i x_j)$. This
defines a graphical-model (or “factor-graph”), over which we can perform
Belief-Propagation to infer the posterior distribution of $x$.
However, in this case the factor-graph is dense (the distribution is a
product of potentials $\exp(\lambda Y_{ij} x_i x_j)$ for all
pairs $i < j$).
In the previous blog post, we saw that belief propagation works great when the underlying interaction
graph is sparse. Intuitively, this is because the graph is locally tree-like,
which allows us to assume the incoming messages are independent random
variables. In dense models, this no longer holds. One can think of a dense
model as one in which each node receives a weak signal from all of its neighbors.
In the dense-model setting, a class of algorithms called Approximate
Message Passing (AMP) has been proposed as an alternative to BP. We will
define AMP for the RSW model in terms of its state evolution.
Recall that in BP, we wish to infer the posterior distribution of
$x$, and the messages we pass between nodes correspond to marginal
probability distributions over values on nodes. In our setting, since the
distributions are over $\{\pm 1\}$, we can represent distributions by
their expected values. Let $m_{i \to j}^t$ denote the
message from $i$ to $j$ at time $t$. That is, $m_{i \to j}^t$ corresponds
to the expected value of $x_i$.
To derive the BP update rules, we want to compute the expectation
of a node $x_i$, given the incoming
messages $m_{k \to i}^t$ for $k \neq j$. We can
do this using the posterior distribution of the RSW model, $\Pr[x \mid Y]$,
which we computed above; a similar computation handles the complementary
probability.
From the above, we can take expectations over $x_i$, and express
$m_{i \to j}^{t+1}$ in terms of the messages
$\{m_{k \to i}^t\}_{k \neq j}$. Doing this (and
using the heuristic assumption that the distribution of the incoming messages is a
product distribution), we find that the BP state update can be written
as:
$$m_{i \to j}^{t+1} = \tanh\Big(\sum_{k \neq i, j} \tanh^{-1}\big(\tanh(\lambda Y_{ik})\, m_{k \to i}^t\big)\Big),$$
where $\lambda Y$ plays the role of the interaction matrix.
Now, Taylor expanding around $0$, we find
$$m_{i \to j}^{t+1} \approx \tanh\Big(\lambda \sum_{k \neq i, j} Y_{ik}\, m_{k \to i}^t\Big),$$
since the terms $\lambda Y_{ik}$ are of order $1/\sqrt{n}$.
At this point, we could try dropping the “non-backtracking” condition
$k \neq j$ from the above sum (since the node $j$ contributes at most $O(1/\sqrt{n})$
to the sum anyway), to get the state update:
$$m_i^{t+1} = \tanh\Big(\lambda \sum_{k \neq i} Y_{ik}\, m_k^t\Big)$$
(note the messages no longer
depend on the receiver – so we write $m_i^t$ in place of $m_{i \to j}^t$).
However, this simplification turns out not to work for estimating the
signal. The problem is that the “backtracking” terms which we added
amplify over two iterations.
In AMP, we simply perform the above procedure, except we add a
correction term to account for the backtracking issue above. Given $m^t$ and $m^{t-1}$,
for all $i$, the AMP update is:
$$m_i^{t+1} = \tanh\Big(\lambda \sum_{k} Y_{ik}\, m_k^t \;-\; \lambda^2 \Big(\frac{1}{n}\sum_k \big(1 - (m_k^t)^2\big)\Big)\, m_i^{t-1}\Big).$$
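As a concrete illustration, here is a small numpy sketch of an AMP iteration of this flavor for the Rademacher spiked Wigner model. The normalization ($Y = \frac{\lambda}{n}xx^\top + W$ with GOE entries of variance $1/n$), the Onsager coefficient $\lambda^2 \cdot \mathrm{mean}(1 - \tanh^2)$, and the spectral initialization are assumptions of this sketch, following one standard formulation from the literature rather than the exact equations of this post:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, T = 1000, 1.5, 30

# Rademacher spiked Wigner observation Y = (lam/n) x x^T + W, W ~ GOE(1/n).
x = rng.choice([-1.0, 1.0], size=n)
A = rng.normal(size=(n, n)) / np.sqrt(n)
W = (A + A.T) / np.sqrt(2)
Y = (lam / n) * np.outer(x, x) + W

# Spectral initialization (common practice): start from the top eigenvector.
vals, vecs = np.linalg.eigh(Y)
m = vecs[:, -1] * np.sqrt(n)          # "field" iterate, entries O(1)

# AMP on the field: m^{t+1} = lam * Y f(m^t) - [lam^2 mean(f'(m^t))] f(m^{t-1}),
# with f = tanh, so f' = 1 - tanh^2 (the Onsager correction coefficient).
f_prev = np.zeros(n)
for _ in range(T):
    f = np.tanh(m)
    onsager = lam**2 * np.mean(1 - f**2)
    m = lam * (Y @ f) - onsager * f_prev
    f_prev = f

x_hat = np.sign(m)
overlap = abs(np.dot(x_hat, x)) / n   # well above 1/2 when lam > 1
```

Since the sign of $x$ is unidentifiable, the overlap is measured up to a global sign flip.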
The correction term corresponds to the error introduced by the backtracking
terms. Suppose everything is good until step $t$. We will examine
the influence of the backtracking terms on a node $i$ through length-2 loops.
At time $t$, $i$ exerts additional influence on
each of its neighbors $k$. At time $t+1$, $i$ receives roughly
this influence back. Since each such term has magnitude $O(1/n)$
and we need to sum over all $n$ of $i$'s neighbors,
this error term is too large to ignore. To characterize the exact form of the
correction, we simply do a Taylor expansion.
In this section we attempt to obtain the phase transition of the Rademacher
spiked Wigner model by looking at the AMP iterates.
We assume that each message can be written as a sum of a signal term and a
noise term. To study the dynamics of AMP (and find its phase
transition), we need to look at how the signal and the noise
evolve with $t$.
We make the following simplification: we ignore the correction term and
assume that at each step we obtain a fresh, independent noise term. Under this
assumption, the signal and noise components can be tracked separately across
iterations.
Note that the signal term is essentially proportional to the overlap between the
ground truth and the current belief, since the $\tanh$ function keeps the
magnitude of the current beliefs bounded.
For the noise term, each coordinate of the noise is a Gaussian random
variable with mean $0$ and a variance that we track across iterations.
It was shown in [4] that we can introduce a single new
parameter summarizing the signal-to-noise ratio of the iterates.
To study the behavior of AMP as $t \to \infty$, it is enough to track the
evolution of this one parameter.
This heuristic analysis of AMP actually gives a phase transition at
$\lambda = 1$ (in fact, the analysis of AMP can be done rigorously, as in [5]):
for $\lambda > 1$, AMP's solution has some correlation with the ground truth.
(Figure from [6])
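The one-parameter recursion can be iterated directly. The sketch below assumes the standard form of the state evolution for this model, $\gamma_{t+1} = \lambda^2\, \mathbb{E}_{Z \sim \mathcal{N}(0,1)}[\tanh(\gamma_t + \sqrt{\gamma_t}\, Z)]$, and shows the transition at $\lambda = 1$ numerically (the expectation is computed by deterministic Gauss-Hermite quadrature):

```python
import numpy as np

# Deterministic Gauss-Hermite quadrature for E_{Z ~ N(0,1)}[g(Z)].
nodes, weights = np.polynomial.hermite_e.hermegauss(101)
weights = weights / np.sqrt(2 * np.pi)     # normalize: sum(weights) == 1

def state_evolution(lam, T=100, gamma0=1e-3):
    """Iterate gamma_{t+1} = lam^2 * E[tanh(gamma_t + sqrt(gamma_t) Z)]."""
    gamma = gamma0
    for _ in range(T):
        gamma = lam**2 * np.dot(weights, np.tanh(gamma + np.sqrt(gamma) * nodes))
        gamma = max(gamma, 0.0)            # guard against tiny negative round-off
    return gamma

gamma_below = state_evolution(0.8)   # lam < 1: the signal dies out (gamma -> 0)
gamma_above = state_evolution(1.5)   # lam > 1: a nontrivial positive fixed point
```

For $\lambda < 1$ the only fixed point is $\gamma = 0$ (no correlation with the truth); for $\lambda > 1$ the zero fixed point becomes unstable and the iteration settles at a positive value.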
Another way of obtaining the phase transition is via a non-rigorous
analytic method called the replica method. Although non-rigorous, this
method from statistical physics has been used to predict the fixed points
of many message-passing algorithms and has the advantage of being easy
to simulate. In our case, we will see that we obtain the same phase-transition
threshold as with AMP above. The method is non-rigorous due to
several assumptions made during the computation.
Recall that we are interested in minimizing the free energy of a given
system, defined in terms of the partition function $Z$ as before.
In the replica method, $Y$ is not fixed but a random variable. The
assumption is that as $n \to \infty$, the free energy doesn't vary with $Y$
too much, so we will look at the mean over $Y$ to approximate the free
energy of the system.
The normalized quantity $\frac{1}{n}\mathbb{E}_Y[\log Z]$ is called the free
energy density, and the goal now is to compute it as a function of only the
temperature of the system.
The replica method was first proposed as a simplification of the
computation of $\mathbb{E}[\log Z]$.
It is in general a hard quantity to compute directly. A
naive attempt at approximating it is to simply pull the log out of the
expectation. Unfortunately, $\mathbb{E}[\log Z]$ (the “quenched” average) and
$\log \mathbb{E}[Z]$ (the “annealed” average) are quite different quantities,
at least when the temperature is low. Intuitively, $\mathbb{E}[\log Z]$ looks at the
system with a fixed $Y$, while in $\log \mathbb{E}[Z]$, $x$ and $Y$ are allowed to
fluctuate together. When the temperature is high, $Y$ doesn't play a big
role in the system, so the two can be close. However, when the temperature is
low, there can be problems.
While $\mathbb{E}[\log Z]$ is hard to compute,
$\mathbb{E}[Z^n]$ for integer $n$ is a much easier quantity. The
replica trick starts from rewriting $\mathbb{E}[\log Z]$ with moments of $Z$:
recall that $Z^n = e^{n \log Z}$ and
$\log Z = \lim_{n \to 0} \frac{Z^n - 1}{n}$; using this we can rewrite
$\mathbb{E}[\log Z]$ in the following way:
Claim 1. $\mathbb{E}[\log Z] = \lim_{n \to 0} \frac{1}{n} \log \mathbb{E}[Z^n]$.
The idea of the replica method is quite simple: compute $\mathbb{E}[Z^n]$ for
positive integer $n$, extend analytically to all real $n$, and take the limit
$n \to 0$.
The second step may sound crazy, but for some unexplained reason, it has
been surprisingly effective at making correct predictions.
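The replica limit $\frac{1}{n}\log \mathbb{E}[Z^n] \to \mathbb{E}[\log Z]$ as $n \to 0$ can be sanity-checked numerically on a toy random variable. The sketch below uses a log-normal "partition function" $Z = e^g$, $g \sim \mathcal{N}(0,1)$ (a choice made here for illustration), for which the quenched and annealed averages are visibly different:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "partition function": Z = exp(g) with g ~ N(0,1) (log-normal).
g = rng.normal(size=2_000_000)
Z = np.exp(g)

E_log_Z = np.mean(np.log(Z))      # quenched average: E[log Z] = 0
log_E_Z = np.log(np.mean(Z))      # annealed average: log E[Z] = 1/2 (different!)

# Replica limit: (1/n) log E[Z^n] -> E[log Z] as n -> 0.
n_small = 1e-4
replica_estimate = np.log(np.mean(Z**n_small)) / n_small
```

With a tiny replica number the estimate lands on the quenched value, not the annealed one, which is exactly what makes the trick useful.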
The term “replica” comes from the way $\mathbb{E}[Z^n]$ is computed
in Claim 1: we expand the $n$-th moment
in terms of $n$ replicas of the system.
In this section, we will see how one can apply the replica trick to
obtain the phase transition in the Rademacher spiked Wigner model. Recall
that given a hidden $x \in \{\pm 1\}^n$, the observable is
$Y = \frac{\lambda}{n} x x^\top + W$, where $W$ is the GOE matrix defined
above.
We are interested in finding the smallest $\lambda$ at which we can still
recover a solution with some correlation to the ground truth $x$. Note
that the sign of the solution is not so important here, as it doesn't carry
any information in this case.
Given the posterior $\Pr[x \mid Y]$, the system we
set up corresponding to the Rademacher spiked Wigner model is the following:
we treat the signal-to-noise ratio $\lambda$ as the inverse temperature
$\beta$.
Following the steps above, we begin by computing $\mathbb{E}[Z^n]$
for integer $n$. Denote by $x^a$ the
$a$-th replica of the system.
We then simplify the above expression with a technical claim.
Claim 2. Let $M$ be a fixed symmetric matrix and
$W$ the GOE matrix defined as above. Then
$\mathbb{E}_W\big[\exp(\langle W, M \rangle)\big] = \exp\big(c\,\|M\|_F^2\big)$
for some constant $c$ depending on the distribution of $W$.
Applying Claim 2 to the term above, we have
To understand the term inside the exponent better, we can rewrite the inner
sum in terms of overlaps between replicas:
where the last equality follows from rearranging and switching the inner
and outer summations.
Using a similar trick, we can view the other term as
Note that the first quantity represents the overlap between two distinct
replicas, and the second represents the
overlap between a replica and the ground-truth vector.
In the end, we get for any integer , (Equation 1):
Our goal becomes to approximate this quantity. Intuitively, if we think
of the replica indices as indexing a matrix of overlaps,
then each overlap is the average of $n$ i.i.d. terms, so we
expect it to concentrate w.h.p. In the
remaining part, we find the correct value by rewriting Equation 1.
Observe that we can introduce a new variable for each overlap term,
using the following property of the Gaussian integral (Equation 4):
Replacing each overlap by such an integral, we
have (Equation 2):
where the constant is the one given by introducing the Gaussian integrals.
To compute the integral in (Equation 2), we need to cheat a little bit and take
the system size to infinity before letting the number of replicas go to zero.
Note that the free energy density is defined as
This is the second assumption made in the replica method and it is
commonly believed that switching the order is okay here. Physically,
this is plausible because we believe intrinsic physical quantities
should not depend on the system size.
Now the Laplace method tells us when , the integral in (Equation 2) is dominated by the max of the exponent.
Theorem 1 (Laplace Method). Let $f : \mathbb{R}^d \to \mathbb{R}$ be smooth with a unique global maximum at $x^*$. Then
$$\int e^{N f(x)}\, dx = e^{N f(x^*)} \left(\frac{2\pi}{N}\right)^{d/2} \frac{1}{\sqrt{\det(-H)}}\,\big(1 + o(1)\big),$$
where $H$ is the Hessian of $f$ evaluated at the point $x^*$.
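The Laplace approximation is easy to test numerically in one dimension. The sketch below picks, as an arbitrary example, $f(x) = \sin(x) - x^2/2$ (whose unique maximizer solves $\cos x = x$) and compares the integral against the Laplace formula:

```python
import numpy as np

# f has a unique global maximum at x* solving cos(x) = x.
f = lambda x: np.sin(x) - x**2 / 2

# Locate the maximum by Newton's method on f'(x) = cos(x) - x.
x_star = 0.5
for _ in range(50):
    x_star -= (np.cos(x_star) - x_star) / (-np.sin(x_star) - 1)
f2 = -np.sin(x_star) - 1                  # f''(x*) < 0

N = 200
xs = np.linspace(-10, 10, 400_001)
# Numerical value of \int e^{N(f(x) - f(x*))} dx (the e^{N f(x*)} factor
# is divided out of both sides to avoid overflow).
exact = np.sum(np.exp(N * (f(xs) - f(x_star)))) * (xs[1] - xs[0])
# Laplace prediction: sqrt(2*pi / (N * |f''(x*)|)).
laplace = np.sqrt(2 * np.pi / (N * (-f2)))
rel_err = abs(exact - laplace) / laplace  # shrinks like O(1/N)
```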
Fixing the auxiliary variables and applying the Laplace method,
what is left to do is to find the critical point of the exponent. Taking the
derivatives gives
where
.
We now need to find a saddle point where the Hessian is PSD. To
do that, we choose to assume that the order of the replicas does not matter,
which is referred to as the replica-symmetric case. The simplest such
ansatz is the following: all overlaps between distinct replicas are equal to
some common value, which also implies that the overlaps between each replica
and the ground truth are all equal.
Plugging this back into (Equation 2) gives (Equation 3):
To obtain the limit, we only need to deal with the last term in
(Equation 3) as the number of replicas goes to $0$. Using the same trick of
introducing a new Gaussian integral as in (Equation 4), we have
Since we want the solution that minimizes the free energy,
taking the derivative of the current expression with respect to the overlap
parameter gives a fixed-point equation
which matches the fixed point of AMP. Plugging this back in gives us
the free energy density. The curve looks like the figure below, where
the solid line is the curve with the given overlap and the
dotted line is the curve given by setting all variables to $0$.
References
What’s rather tricky about showing such a result is that, rather than a direct argument about the capability of quantum computers, what we really need to demonstrate is the incapability of classical computers to achieve tasks that can be done with quantum computers.
One of the major leaps forward in demonstrating quantum supremacy was taken by Terhal and DiVincenzo in their 2004 paper “Adaptive quantum computation, constant depth quantum circuits and Arthur-Merlin games”. Their approach was to appeal to a complexity-theoretic argument: they gave evidence that there exists a certain class of quantum circuits that cannot be simulated classically, by proving that if a classical simulation existed, certain complexity classes strongly believed to be distinct would collapse to the same class. While this doesn’t quite provide a proof of quantum supremacy – since the statement about the distinction between complexity classes upon which it hinges is not a proven fact – because the complexity statement appears overwhelmingly likely to be true, so too does the proposed existence of non-classically-simulatable quantum circuits. The Terhal and DiVincenzo paper is a complex and highly technical one, but in this post I hope to explain a little bit and give some intuition for the major points.
Now, let’s start at the beginning. What is a quantum circuit? I’m going to go ahead and assume you already know what a classical circuit is – the extension to a quantum circuit is rather straightforward: it’s a circuit in which all gates are quantum gates, where a quantum gate can be thought of as a classical gate whose output is, rather than a deterministic function of the inputs, instead a probability distribution over all possible outputs given the size of the inputs. For example, given two single-bit inputs, a classical AND gate outputs 0 or 1 deterministically given the inputs. A quantum AND gate on the analogous single-qubit inputs would output 0 with some probability and 1 with some probability . Similarly, a classical AND gate on two 4-bit inputs outputs the bitwise AND, while the quantum analog has associated with it a 4-qubit output: some probability distribution over all 4-bit binary strings. A priori there is no particular string that is the “output” of the computation by the quantum gate; it’s only after taking a quantum measurement of the output that we get an actual string that we can think of as the outcome of the computation done by the gate. The actual string we “observe” upon taking the measurement follows the probability distribution computed by the gate on its inputs. In this way, a quantum circuit can then be thought of as producing, via a sequence of probabilistic classical gates (i.e., quantum gates) some probability distribution over possible outputs given the input lengths. It’s not hard to see that in this way, we can compose circuits: suppose we have a quantum circuit and another quantum circuit . Let have an input of qubits and an output of qubits; suppose we measure of the output qubits of – then we can feed the remaining unmeasured qubits as inputs into (assuming that those qubits do indeed constitute a valid input to ).
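The "circuit computes a distribution; measurement samples it" picture can be made concrete with a tiny statevector simulator. The sketch below (gate choices and seed are arbitrary) builds a Bell state and then samples measurement outcomes from the distribution the circuit defines:

```python
import numpy as np

# Single-qubit gates.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector."""
    op = np.array([[1.0]])
    for i in range(n):
        op = np.kron(op, gate if i == q else np.eye(2))
    return op @ state

n = 2
state = np.zeros(2**n); state[0] = 1.0          # start in |00>
state = apply_1q(state, H, 0, n)                # Hadamard on qubit 0

# CNOT(0 -> 1): together with the Hadamard this prepares (|00> + |11>)/sqrt(2).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = CNOT @ state

# The circuit's "output" is this distribution over bit strings;
# taking a measurement means drawing a sample from it.
dist = np.abs(state)**2                         # [0.5, 0, 0, 0.5]
rng = np.random.default_rng(0)
samples = rng.choice(2**n, size=10_000, p=dist)
```

Only the strings 00 and 11 ever appear, each about half the time, illustrating that the observed output is a sample from the circuit's distribution rather than a deterministic value.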
Consider, then, the following sort of quantum circuit: it’s a composition of quantum circuits, such that after the $i$-th circuit we take a measurement of some of its output qubits (so that the remaining unmeasured qubits become inputs to the $(i+1)$-th circuit), and then the structure of the $(i+1)$-th circuit is dependent on this measurement. That is, it’s as though, given a quantum circuit, we’re checking every so often at intermediate layers over the course of the circuit’s computation what the value of some of the variables are (leaving the rest to keep going along through the circuit to undergo more computational processing), and based on what we measure is the current computed value, the remainder of the circuit “adapts” in a way determined by that measurement. Aptly enough, this is called an “adaptive circuit”. But since the “downstream” structure of the circuit depends on the outcomes of all the measurements made “upstream”, each adaptive circuit actually comprises a family of circuits, each of which is specified by the sequence of intermediate measurement outcomes. That is, we can alternatively characterize an adaptive circuit as a set of ordinary quantum circuits that is parameterized by a list of measurement outcomes. Terhal and DiVincenzo call this way of viewing an adaptive circuit, as a family of circuits parametrized by a sequence of measurement values, a “non-adaptive circuit” – since we replace the idea that the circuit “adapts” to intermediate measurements with the idea that there are just many regular circuits, one for each possible sequence of measurements. It’s this non-adaptive circuit concept that’ll be our main object of study going forward.
Now, the result we wanted to demonstrate about quantum circuits had to do with their efficient simulatability by classical circuits – and so we should establish some notion of what we mean when we talk about an “efficient simulation”.
Terhal and DiVincenzo offer the following notion of a classical simulation – which in their paper they call an “efficient density computation”: consider a quantum circuit with some output of length qubits. Recall that to actually obtain an output value, we need to take a measurement of the circuit output – imagine doing this in disjoint subsets of qubits at a time. That is, we can break up the qubits into disjoint subsets and consider the entire output measurement as a process of taking measurements, subset by subset. An efficient density computation exists if there’s a classical procedure for computing, in time polynomial in the width and depth of the quantum circuit, the conditional probability distribution over the set of possible measurement outcomes of a particular subset of qubits, given any subset of the other measurement outcomes. Intuitively, this is a good notion of what a classical simulation should consist of, or at least what data it should contain, since if you know the conditional probabilities given any (possibly empty) subset of the other measurements, you can just flip coins for the outputs according to the conditional probabilities as a way of actually exhibiting a “working” simulation.
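The "flip coins according to the conditional probabilities" remark is just chain-rule sampling, and can be sketched concretely. Below, a toy joint distribution over three single-bit "subsets" stands in for a circuit's output distribution, and the `conditional` function stands in for the queries an efficient density computation is assumed to answer (the distribution, subset structure, and names are all illustrative assumptions):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
# Toy joint distribution over 3 output bits, standing in for the output
# distribution of some quantum circuit.
joint = {bits: p for bits, p in zip(product([0, 1], repeat=3),
                                    rng.dirichlet(np.ones(8)))}

def conditional(i, fixed):
    """P(bit i = 1 | the bits in `fixed`) -- the query that an
    'efficient density computation' is assumed to answer."""
    num = sum(p for b, p in joint.items()
              if b[i] == 1 and all(b[j] == v for j, v in fixed.items()))
    den = sum(p for b, p in joint.items()
              if all(b[j] == v for j, v in fixed.items()))
    return num / den

def sample():
    """Sample a full output, one subset at a time, by the chain rule."""
    fixed = {}
    for i in range(3):
        fixed[i] = int(rng.random() < conditional(i, fixed))
    return tuple(fixed[i] for i in range(3))

n_samples = 20_000
counts = {b: 0 for b in joint}
for _ in range(n_samples):
    counts[sample()] += 1
```

The empirical frequencies match the joint distribution, confirming that conditional-probability access is enough to exhibit a working simulation.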
It’s with this notion of simulation, along with our concept of an adaptive quantum circuit as a family of regular circuits parameterized by a sequence of intermediate measurement outcomes, we may now arrive at the main result of Terhal and DiVincenzo’s paper. Recall that what we wanted to show from the very beginning is that there exists some quantum circuit that can’t be simulated classically. The argument for this proceeds like a proof by contradiction: suppose the contrary, and that all quantum circuits can be simulated classically. We want to show that we can find, then, a quantum circuit which, if it were possible to be simulated classically (as per our assumption), we’d wind up with some strange consequences that we believe are false, leading us to conclude that those circuits probably can’t be simulated classically.
Thus, we shall now exhibit such a quantum circuit whose classical simulatability leads (as far as we believe) to a contradiction. Consider a special case of adaptive quantum circuits, considered as a parameterized family of regular circuits, in which the circuit’s output distribution is independent of the intermediate measurement outcomes; that is, the case in which the entire family of circuits corresponding to an adaptive circuit is logically the same – that is, is the same logical circuit on input qubits independent of intermediate measurements. I’d like to point out, just for clarification’s sake, the subtlety here, which makes this consideration non-redundant, and not simply a reduction of an adaptive quantum circuit (again, thought of as a family) to a single fixed circuit (i.e., a family of one): the situation in which the family is reduced to a single fixed circuit occurs when the structure of the circuit is independent of the intermediate measurement outcomes. If the structure were independent of the measurements, then no matter what we observed in the measurements, we’d get the same circuit – hence a trivial family of one. What we’re considering instead is the case in which the structure of the circuit is still dependent on the intermediate measurements (and so the circuit is still adaptive), but where the distribution over the possible outputs of the circuit is identical no matter what the intermediate measurements are. In this case, the circuit can still be considered as a parameterized and in general non-trivial family of circuits, but for which each member produces the same distribution over outputs – hence, a family of potentially structurally different circuits, but which are logically identical.
Suppose there’s some set of such circuits that’s universal – that is, that’s sufficient to implement all polynomial-time quantum computations. (This is a reasonable assumption to make, since there do in fact exist universal quantum gate sets.) But now if a simulation of the kind we defined (an efficient density computation) existed for every circuit in this set, then we could calculate the outcome probability of any polynomial-depth quantum circuit, since any polynomial-depth quantum circuit could be realized as some composition of circuits in this universal set (and in particular as a composition of particular family members of each adaptive circuit in the universal set), and an efficient density computation, as we mentioned above, precisely gives us a way to compute the output distribution.
But now here is where our believed contradiction lies:
Theorem: Suppose there exists a universal set of adaptive quantum circuits whose output distributions are independent of intermediate measurements. If there is an efficient density computation for each family member of each adaptive circuit in this universal set, then for the polynomial hierarchy PH we have PH = BPP = BQP.
The proof goes something like this: if we can do our desired efficient density computations (as we assumed, for the sake of contradiction, we could for all quantum circuits), this is equivalent to being able to determine the acceptance probability of a quantum computation, which was shown in the paper “Determining Acceptance Possibility for a Quantum Computation is Hard for the Polynomial Hierarchy” by Fenner, Green, Homer and Pruim to be equivalent to the class . Thus, we have that . But it’s known that and so . That is, we have , and so the polynomial hierarchy would collapse to since (for more on these more obscure complexity classes, see here). Again, this is our “contradiction”: while it hasn’t been quite proven, it is widely believed, with strong supporting evidence, that the polynomial hierarchy does not collapse as would be the case if all quantum circuits were classically simulatable. Thus this provides a strong argument that not all quantum circuits are classically simulatable, which was precisely what we were looking to demonstrate.
Terhal and DiVincenzo actually go even further and show that there is a certain class of constant-depth quantum circuits that are unlikely to be simulatable by classical circuits – this, indeed, seems to provide even stronger evidence for quantum supremacy. This argument, which is somewhat more complex, uses the idea of teleportation and focuses on a particular class of circuits implementable by a certain restricted set of quantum gates. If you’re interested, I highly recommend reading their paper, where this is explained.
(The following blog post serves as an introduction to the following notes:)
Black Holes, Hawking Radiation, and the Firewall
There are many different types of “theoretical physicists.” There are theoretical astrophysicists, theoretical condensed matter physicists, and even theoretical biophysicists. However, the general public seems to be most interested in the exploits of what you might call “theoretical high energy theorists.” (Think Stephen Hawking.)
The holy grail for theoretical high energy physicists (who represent only a small fraction of all physicists) would be to find a theory of quantum gravity. As it stands now, physicists have two theories of nature: quantum field theory (or, more specifically, the “Standard Model”) and Einstein’s theory of general relativity. Quantum field theory describes elementary particles, like electrons, photons, quarks, gluons, etc. General relativity describes the force of gravity, which is really just a consequence of the curvature of spacetime.
Sometimes people like to say that quantum field theory describes “small stuff” like particles, while general relativity describes “big stuff” like planets and stars. This is maybe not the best way to think about it, though, because planets and stars are ultimately just made out of a bunch of quantum particles.
Theoretical physicists are unhappy with having two theories of nature. In order to describe phenomena that depend on both quantum field theory and general relativity, the two theories must be combined in an “ad hoc” way. A so-called “theory of everything,” another name for the currently unknown theory of “quantum gravity,” would hypothetically be able to describe all the phenomena we know about. Just so we’re all on the same page, physicists don’t even have a fully worked out hypothesis. (“String theory,” a popular candidate, is still not even a complete “theory” in the normal sense of the word, although it could become one eventually.)
So what should these high energy theoretical physicists do if they want to discover what this theory of quantum gravity is? For the time being, nobody can think up an experiment that would be sensitive to quantum gravitation effects which is feasible with current technology. We are limited to so-called “thought experiments.”
This brings us to Hawking radiation. In the 1970s, Stephen Hawking considered what would happen to a black hole once quantum field theory was properly taken into account. (Of course, this involved a bit of “ad hoc” reasoning, as mentioned previously.) Hawking found that, much to everybody’s surprise, the black hole evaporated, releasing energy in the form of “Hawking radiation” (mostly low energy photons). More strangely, this radiation comes out exactly in the spectrum you would expect from something “hot.” For example, imagine heating a piece of metal. At low temperatures, it emits low energy photons invisible to the human eye. Once it gets hotter, it glows red, then yellow, then perhaps eventually blue. The spectrum of light emitted follows a very specific pattern. Amazingly, Hawking found that the radiation which black holes emit follows the exact same pattern. By analogy, they have a temperature too!
This is more profound than you might realize. This is because things which have a temperature should also have an “entropy.” You see, there are two notions of “states” in physics: “microstates” and “macrostates.” A microstate gives you the complete physical information of what comprises a physical system. For example, imagine you have a box of gas, which contains many particles moving in a seemingly random manner. A “microstate” of this box would be a list of all the positions and momenta of every last particle in that box. This would be impossible to measure in practice. A “macrostate,” on the other hand, is a set of microstates. You may not know what the exact microstate your box of gas is in, but you can measure macroscopic quantities (like the total internal energy, volume, particle number) and consider the set of all possible microstates with those measured quantities.
The “entropy” of a macrostate is the logarithm of the number of possible microstates. If black holes truly are thermodynamic systems with some entropy, that means there should be some “hidden variables” or microstates to the black hole that we currently don’t understand. Perhaps if we understood the microstates of the black hole, we would be much closer to understanding quantum gravity!
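The "entropy is the logarithm of the number of microstates" definition can be made concrete with a toy system (the choice of system here, $N$ two-state particles, is mine):

```python
import math

# Toy system: N two-state "particles".  A macrostate fixes only the number k
# of particles in state 1; the number of compatible microstates is C(N, k).
N = 100
entropy = [math.log(math.comb(N, k)) for k in range(N + 1)]

# k = 0 has a single microstate, so entropy log(1) = 0; the entropy is
# maximized at the "most disordered" macrostate k = N/2.
```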
However, Hawking also discovered something else. Because the black hole is radiating out energy, its mass will actually decrease as time goes on. Eventually, it should disappear entirely. This means that the information of what went into the black hole will be lost forever.
Physicists did not like this, however, because it seemed to them that the information of what went into the black hole should never be lost. Many physicists believe that the information of what went into the black hole should somehow be contained in the outgoing Hawking radiation, although they do not currently understand how. According to Hawking’s original calculation, the Hawking radiation only depends on a few parameters of the black hole (like its mass) and has nothing to do with the many finer details of what went in, the exact “microstate” of what went in.
However, physicists eventually realized a problem with the idea that the black hole releases its information in the form of outgoing Hawking radiation. The problem has to do with quantum mechanics. In quantum mechanics, it is impossible to clone a qubit. That means that if you threw a qubit in a black hole and then waited for it to eventually come out in the form of Hawking radiation, then the qubit could no longer be “inside” the black hole. However, if Einstein is to be believed, you should also be able to jump into the black hole and see the qubit on the inside. This seems to imply that the qubit is cloned, as it is present on both the inside and outside of the black hole.
Physicists eventually came up with a strange fix called “Black Hole Complementarity” (BHC). According to BHC, for observers outside the black hole, the interior does not exist; for observers who have entered the black hole, the outside ceases to exist. Both descriptions of the world are “correct” because once someone has entered the black hole, they will be unable to escape and compare notes with the person on the outside.
Of course, it must be emphasized that BHC remains highly hypothetical. People have been trying to poke holes in it for a long time. The largest hole is the so-called “Firewall Paradox,” first proposed in 2012. Essentially, the Firewall Paradox tries to show that the paradigm of BHC is self-contradictory. In fact, the argument uses basic quantum mechanics to show that, under some reasonable assumptions, the interior of the black hole truly doesn’t exist, and that anyone who tries to enter would be fried at an extremely hot “firewall!” Now, I don’t think most physicists actually believe that black holes really have a firewall (although this might depend on what day of the week you ask them). The interesting thing about the Firewall Paradox is that it derives a seemingly crazy result from seemingly harmless starting suppositions. So these suppositions would have to be tweaked in a theory of quantum gravity in order to get rid of the firewall.
This is all to say that all this thinking about black holes really might help physicists figure out something about quantum gravity. (Then again, who can really say for sure.)
If you would like to know more about the Firewall paradox, I suggest you read my notes, pasted at the top of this post!
The goal of the notes was to write an introduction to the Black Hole Information Paradox and Firewall Paradox that could be read by computer scientists with no physics background.
The structure of the notes goes as follows:
Because the information paradox touches on all areas of physics, I thought it was necessary to take a “zero background, infinite intelligence” approach, introducing all the necessary branches of physics (GR, QFT, Stat Mech) in order to understand what the deal with Hawking radiation really is, and why physicists think it is so important. I think it is safe to say that if you read these notes, you’ll learn a non-trivial amount of physics.
Daniel Alabi alabid@g.harvard.edu
Mitali Bafna mitalibafna@g.harvard.edu
Emil Khabiboulline ekhabiboulline@g.harvard.edu
Juspreet Sandhu jus065@g.harvard.edu
Two-prover one-round (2P-1R) games have been the subject of intensive study in classical complexity theory and quantum information theory. In a 2P-1R game, a verifier sends questions privately to each of two collaborating provers, who then aim to respond with a compatible pair of answers without communicating with each other. Sharing quantum entanglement allows the provers to improve their strategy without any communication, illustrating an apparent paradox of the quantum postulates. These notes aim to give an introduction to the role of entanglement in nonlocal games, as they are called in the quantum literature. We see how nonlocal games have rich connections within computer science and quantum physics, giving rise to theorems ranging from hardness of approximation to the resource theory of entanglement.
, (*)
where the bar represents entrywise complex conjugation, is the entrywise dot product of matrices, and the entrywise complex inner product (Hilbert-Schmidt inner product). We now choose the measurements. Given question , Alice measures in the PVM with . Similarly, Bob on question applies the PVM with . Condition 3 in Definition 6 ensures that for any question , the vectors are orthogonal, so that this is a valid PVM. The measurement outcome “” is interpreted as “fail”, and upon getting this outcome the player attempts the measurement again on their share of a fresh copy of . This means that the strategy requires many copies of the entangled state to be shared before the game starts. It also leads to the complication of ensuring that, with high probability, the players measure the same number of times before outputting their answers, so that the outputs come from measuring the same entangled state. By (*), at a given round of measurements the conditional distribution of answers is given by . We wish to relate the LHS to , so to handle the factor each prover performs repeated measurements, each time on a fresh copy of , until getting an outcome . Moreover, to handle the factor , each prover consults public randomness and accepts the answer with probability and respectively, or rejects and starts over depending on the public randomness. Under a few simplifying conditions (more precisely, assuming that the game is uniform, meaning that an optimal strategy exists in which the marginal distribution on each prover’s answers is uniform), we can let for all , and one can ensure that the conditional probabilities of the final answers satisfy: At this stage it is important that we are dealing with a unique game. Indeed, by (4) we have for every and , where the last inequality follows from concavity. Taking the expectation over and implies the bound (3), thus concluding the proof of Theorem 4.
Theorem 8 establishes the CSP variant of the classical PCP theorem: distinguishing between and is NP-hard for some -CSP. Here, denotes the maximum fraction of clauses that are simultaneously satisfiable. Theorem 9 relates the general game obtained from the CSP to a two-player one-round game , in terms of the value (winning probability) of the game. The first inequality, equivalently saying , holds since the players can answer the questions in the game so as to satisfy the clauses in . These theorems together imply that is NP-hard to approximate to within constant factors. Allowing the two players to share entanglement can increase the game value to . Classical results do not necessarily carry over, but exploiting monogamy of entanglement allows us to limit the power of entangled strategies. One can show the following lemma, which is weaker than what we have classically.
where is the number of variables. Combining Theorem 9 and Lemma 10, we have Using Theorem 8, approximating is NP-hard to within inverse polynomial factors. Proving Lemma 10 takes some work in keeping track of approximations. For simplicity, we will show a less quantitative statement and indicate where the approximations come in.
| Person | Strategy | Bound (entangled bits) |
|---|---|---|
| Slofstra | (Possibly) Non-Clifford | |
| Tsirelson | Clifford | |
[1] David Avis, Sonoko Moriyama, and Masaki Owari. From Bell inequalities to Tsirelson’s theorem. IEICE Transactions, 92-A(5):1254–1267, 2009.
[2] Lance Fortnow, John Rompel, and Michael Sipser. On the power of multi-prover interactive protocols. Theoretical Computer Science, 134(2):545 – 557, 1994.
[3] T. Ito, H. Kobayashi, and K. Matsumoto. Oracularization and Two-Prover One-Round Interactive Proofs against Nonlocal Strategies. ArXiv e-prints, October 2008.
[4] J. Kempe, H. Kobayashi, K. Matsumoto, B. Toner, and T. Vidick. Entangled games are hard to approximate. ArXiv e-prints, April 2007.
[5] Julia Kempe, Oded Regev, and Ben Toner. Unique games with entangled provers are easy. SIAM Journal on Computing, 39(7):3207– 3229, 2010.
[6] S. Khanna, M. Sudan, L. Trevisan, and D. Williamson. The approximability of constraint satisfaction problems. SIAM Journal on Computing, 30(6):1863–1920, 2001.
[7] Anand Natarajan and Thomas Vidick. Two-player entangled games are NP-hard. arXiv e-prints, page arXiv:1710.03062, October 2017.
[8] Anand Natarajan and Thomas Vidick. Low-degree testing for quantum states, and a quantum entangled games PCP for QMA. arXiv e-prints, page arXiv:1801.03821, January 2018.
[9] William Slofstra. Lower bounds on the entanglement needed to play xor non-local games. CoRR, abs/1007.2248, 2010.
[10] B.S. Tsirelson. Quantum analogues of the bell inequalities. the case of two spatially separated domains. Journal of Soviet Mathematics, 36(4):557–570, 1987.
[11] Thomas Vidick. Three-player entangled XOR games are NP-hard to approximate. arXiv e-prints, page arXiv:1302.1242, February 2013.
[12] Thomas Vidick. CS286.2 Lecture 15: Tsirelson’s characterization of XOR games. Online, December 2014. Lecture Notes.
[13] Thomas Vidick. CS286.2 Lecture 17: NP-hardness of computing . Online, December 2014. Lecture Notes.
Author: Beatrice Nash
Abstract
In this blog post, we give a broad overview of quantum walks and some quantum-walk-based algorithms, including traversal of the glued-trees graph, search, and element distinctness [3; 7; 1]. Quantum walks can be viewed as a model for quantum computation, providing an advantage over classical algorithms and over quantum algorithms not based on walks for certain applications.
We begin our discussion of quantum walks by introducing the quantum analog of the continuous random walk. First, we review the behavior of the classical continuous random walk in order to develop the definition of the continuous quantum walk.
Take a graph with vertices and edges . The adjacency matrix of is defined as follows:
And the Laplacian is given by:
The Laplacian determines the behavior of the classical continuous random walk, which is described by a length vector of probabilities, p(t). The th entry of p(t) represents the probability of being at vertex at time . p(t) is given by the following differential equation:
which gives the solution .
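As a concrete sketch of these definitions, here is a minimal numerical example (assuming the standard sign convention dp/dt = -Lp; the 8-vertex cycle is an arbitrary choice for illustration):

```python
import numpy as np

# Build the adjacency matrix and Laplacian of an n-vertex cycle and evolve the
# classical continuous-time walk p(t) = exp(-L t) p(0).
n = 8
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[v, (v - 1) % n] = 1
L = np.diag(A.sum(axis=1)) - A        # Laplacian: degree matrix minus adjacency

w, V = np.linalg.eigh(L)              # L is symmetric, so diagonalize it
p0 = np.zeros(n); p0[0] = 1.0         # start at vertex 0 with certainty

def p(t):
    """Classical distribution at time t via the spectral decomposition of L."""
    return V @ (np.exp(-w * t) * (V.T @ p0))

assert np.isclose(p(2.0).sum(), 1.0)  # probabilities stay normalized
assert np.allclose(p(100.0), 1.0 / n) # the walk mixes to uniform
```

For large t the walk mixes to the uniform distribution, as expected for a connected graph.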
Recalling the Schrödinger equation , one can see that by inserting a factor of on the left hand side of the equation for p(t) above, the Laplacian can be treated as a Hamiltonian. One can see that the Laplacian preserves the normalization of the state of the system. Then, the solution to the differential equation:
,
which is , determines the behavior of the quantum analog of the continuous random walk defined previously. A general quantum walk does not necessarily have to be defined by the Laplacian; it can be defined by any operator that “respects the structure of the graph,” that is, one that only allows transitions between neighboring vertices in the graph or remaining stationary [7]. To get a sense of how the behavior of the quantum walk differs from the classical one, we first discuss the example of the continuous-time quantum walk on the line, before moving on to the discrete case.
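To see the difference in spreading behavior numerically, the following sketch evolves both walks on a small path graph (the graph size and evolution time are arbitrary choices):

```python
import numpy as np

# Compare the spread of the classical (exp(-L t)) and quantum (exp(-i L t))
# continuous-time walks on a path graph, both generated by the Laplacian.
n = 41
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
w, V = np.linalg.eigh(L)                   # spectral decomposition of L

mid = n // 2
e_mid = np.zeros(n); e_mid[mid] = 1.0      # start in the middle of the path

t = 5.0
p_cl = V @ (np.exp(-w * t) * (V.T @ e_mid))       # classical p(t)
psi = V @ (np.exp(-1j * w * t) * (V.T @ e_mid))   # quantum |psi(t)>
p_q = np.abs(psi) ** 2

x = np.arange(n)
std = lambda p: np.sqrt(p @ (x - mid) ** 2)
std_cl, std_q = std(p_cl), std(p_q)

assert np.isclose(p_q.sum(), 1.0)          # unitary evolution stays normalized
assert std_q > 1.5 * std_cl                # ballistic vs diffusive spreading
```

The quantum walk spreads ballistically (width linear in t), while the classical walk spreads diffusively (width of order the square root of t), matching the discussion of the infinite line below.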
An important example of the continuous time quantum walk is that defined on the infinite line. The eigenstates of the Laplacian operator for the graph representing the infinite line are the momentum states with eigenvalues , for in range . This can be seen by representing the momentum states in terms of the position states and applying the Laplacian operator:
Hence the probability distribution at time , , with initial position is given by:
The probability distribution for the classical continuous-time random walk on the same graph approaches, for large , , a Gaussian of width . One can see that the quantum walk has its largest peaks at the extrema, with oscillations in between that decrease in amplitude as one approaches the starting position at . This is due to the destructive interference between states of different phases that does not occur in the classical case. The probability distribution of the classical walk, on the other hand, has no oscillations and instead a single peak centered at , which widens and flattens as increases.
A glued tree is a graph obtained by taking two binary trees of equal height and connecting each of the leaves of one of the trees to exactly two leaves of the other tree so that each node that was a leaf in one of the original trees now has degree exactly . An example of such a graph is shown in Figure 2.
The time for the quantum walk on this graph to reach the right root from the left one is exponentially faster than in the classical case. Consider the classical random walk on this graph. While in the left tree, the probability of transitioning to a node in the level one to the right is twice that of transitioning to a node in the level one to the left. However, while in the right tree, the opposite is true. Therefore, one can see that in the middle of the graph, the walk will get lost, as, locally, there is no way to determine which node is part of which tree. It will instead get stuck in the cycles of identical nodes and will have exponentially small probability of reaching the right node.
To construct a continuous time quantum walk on this graph, we consider the graph in terms of columns. One can visualize the columns of Figure 2 as consisting of all the nodes equidistant from the entrance and exit nodes. If each tree is height , then we label the columns , where column contains the nodes with shortest path of length from the leftmost root node. We describe the state of each column as a superposition of the states of each node in that column. The number of nodes in column , , will be for and for . Then, we can define the state as:
The prefactor ensures that the state is normalized. Since the adjacency matrix of the glued tree is Hermitian, we can treat as the Hamiltonian of the system determining the behavior of the quantum walk. By acting on this state with the adjacency matrix operator , we get the result (for ):
Then for , we get the same result, because of symmetry.
For :
The case of is symmetric. One can see that the walk on this graph is equivalent to the quantum walk on the finite line with nodes corresponding to the columns. All of the edges, excluding that between columns and , have weight . The edge between column and has weight .
The probability distribution of the quantum walk on this line can be roughly approximated using the infinite line. In the case of the infinite line, the probability distribution can be seen as a wave propagating with speed linear in the time . Thus, in time linear in , the probability that the state is measured at distance from the starting state is . In [3] it is shown that the fact that the line is finite and has a single edge weighted differently from the others (that between and ) does not change the fact that in polynomial time the quantum walk will travel from the left root node to the right one, although in this case there is no limiting distribution, as the peaks oscillate. This was the first result giving an exponential speedup over the classical case using quantum walks.
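The reduced column walk described above can be simulated directly. This sketch (with tree height h = 8 chosen arbitrarily) builds the weighted path, with hopping sqrt(2) on every edge except the middle one joining the two sets of leaves, which gets weight 2, and checks that the walk reaches the exit column with non-negligible probability in time linear in the number of columns:

```python
import numpy as np

# Reduced "column" Hamiltonian of the glued-trees graph: trees of height h
# give 2h+2 columns, forming a weighted path.
h = 8
m = 2 * h + 2
H = np.zeros((m, m))
for j in range(m - 1):
    # middle edge (between the two leaf columns) has weight 2, others sqrt(2)
    H[j, j + 1] = H[j + 1, j] = 2.0 if j == h else np.sqrt(2.0)

w, V = np.linalg.eigh(H)
psi0 = np.zeros(m); psi0[0] = 1.0          # start at the left root (column 0)

def exit_prob(t):
    """Probability of being found at the right root (last column) at time t."""
    psi_t = V @ (np.exp(-1j * w * t) * (V.T @ psi0))
    return abs(psi_t[-1]) ** 2

best = max(exit_prob(t) for t in np.linspace(0, 2 * m, 200))
assert best > 0.05        # far above the classical ~2**-h chance of escaping
```

The thresholds here are illustrative, not the bounds proved in [3]; the point is that the exit probability is polynomially large in time linear in the graph depth, whereas the classical walk's is exponentially small.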
In this section, we will first give an introduction to the discrete quantum walk, including the discrete quantum walk on the line and the Markov chain quantum walk, as defined in [7]. Next, we discuss how Grover search can be viewed as a quantum walk algorithm, which leads us into Ambainis’s quantum-walk-based algorithm from [1] for the element distinctness problem, which gives a speedup over classical algorithms and over quantum algorithms not based on walks.
The discrete time quantum walk is defined by two operators: the coin flip operator, and the shift operator. The coin flip operator determines the direction of the walk, while the shift operator makes the transition to the new state conditioned on the result of the coin flip. The Hilbert space governing the walk is , where corresponds to the space associated with the result of the coin flip operator, and corresponds to the locations in the graph on which the walk is defined.
For example, consider the discrete time walk on the infinite line. Since there are two possible directions (left or right), then the Hilbert space associated with the coin flip operator is two dimensional. In the unbiased case, the coin flip is the Hadamard operator,
and shift operator that produces the transition from state to or ,
conditioned on the result of the coin flip, is .
Each step of the walk is determined by an application of the unitary
operator . If the walk starts at position
, then measuring the state after one application of gives with probability and with probability . This is exactly the same as the case of the classical random walk on the infinite line; the difference between the two walks becomes apparent after a few steps.
For example, the result of the walk starting at state after 4 steps gives:
One can see that the distribution is becoming increasingly skewed
towards the right, while in the classical case the distribution will be
symmetric around the starting position. This is due to the destructive
interference discussed earlier. The distribution after time
steps is shown in Figure 4.
Now, consider the walk starting at state :
The distribution given by this walk is the mirror image of the first.
To generate a symmetric distribution, consider the start state . The resulting distribution after steps will be , where is the probability distribution after steps resulting from the start state and is the probability distribution after steps resulting from the start state . The result will be symmetric, with peaks near the extrema, as we saw in the continuous case.
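The coin-and-shift structure described above is easy to simulate. The sketch below (with our own indexing conventions) runs the Hadamard walk for 50 steps from a single coin state and exhibits the skewed distribution:

```python
import numpy as np

# Discrete-time Hadamard walk on the line: a Hadamard coin flip followed by a
# coin-conditioned shift, repeated for `steps` steps.
steps = 50
npos = 2 * steps + 1                       # positions -steps..steps
state = np.zeros((2, npos), dtype=complex) # state[c, x]: coin c, position x
state[1, steps] = 1.0                      # start at the origin, coin |right>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin

for _ in range(steps):
    state = H @ state                      # coin flip on every position
    shifted = np.zeros_like(state)
    shifted[0, :-1] = state[0, 1:]         # coin 0 moves one step left
    shifted[1, 1:] = state[1, :-1]         # coin 1 moves one step right
    state = shifted

prob = (np.abs(state) ** 2).sum(axis=0)
assert np.isclose(prob.sum(), 1.0)
# Starting from a single coin state skews the walk: in these conventions,
# more mass ends up on one side of the origin.
assert prob[steps + 1:].sum() > prob[:steps].sum()
```

Starting instead from the balanced superposition of coin states discussed above symmetrizes the distribution, since it evolves as the average of the two mirror-image walks.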
A reversible, ergodic Markov chain with states can be represented by a transition matrix with equal to the probability of transitioning from state to state and . Then, , where is an initial probability distribution over the states, gives the distribution after one step.
Since for all , is stochastic and thus preserves normalization.
There are multiple ways to define a discrete quantum walk, depending on the properties of the transition matrix and the graph on which it is defined (overview provided in [4]). Here we look at the quantum walk on a Markov chain as given in [2]. For the quantum walk on this graph, we define state as the state that represents currently being at position and facing in the direction of . Then, we define the state as a superposition of the states associated with position :
The unitary operator,
,
acts as a coin flip for the walk on this graph. Since is reversible, we can let the shift operator be the unitary operator:
.
A quantum walk can also be defined for a non-reversible Markov chain using a pair of reflection operators (the coin flip operator is an example of a reflection operator). This corresponds to the construction given in [7].
Given a black box function and a set of inputs with , say we want to find whether an input exists for which equals some output value. We refer to the set of inputs for which this is true as marked. Classically, this requires queries, for nonempty . Using the Grover search algorithm, this problem requires quantum queries. In this section, we give a quantum-walk-based algorithm that also solves this problem in time. If we define a doubly stochastic matrix with uniform transitions, then we can construct a new transition matrix from as:
Then, when the state of the first register is unmarked, the operator defined in the previous section acts as a diffusion over its neighbors. When the state in the first register is marked, then will act as the operator , and the walk stops, as a marked state has been reached. This requires two queries to the black box function: one to check whether the input is marked, and then another to uncompute. By rearranging the order of the columns in so that the columns corresponding to the non-marked elements come before the columns corresponding to the marked elements, we get:
where gives the transitions between non-marked elements and gives the transitions from non-marked to marked elements.
We now look at the hitting time of the classical random walk. Assume
that there is zero probability of starting at a marked vertex. Then, we
can write the starting distribution , where the last elements of , corresponding to the marked elements, are zero, as
, where are the eigenvalues of , and are the corresponding eigenvectors, with the last entries zero. Let be the principal (largest) eigenvalue. Then, the probability that, after steps, a marked element has not yet been reached will be . Then, the
probability that a marked element has been reached in that time will be
. Setting
gives probability that a marked element will be reached in that time.
The eigenvalues of will be and
. Then, the classical hitting time will be:
It can be shown that for a walk defined by a Markov chain, the
classical hitting time will be , where , the spectral gap, and [2].
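To make the 1/(δε) scaling concrete, here is a toy instance of our own choosing: the complete graph with uniform transitions (P = J/N), for which the spectral gap δ equals 1, so the hitting time should scale like 1/ε:

```python
import numpy as np

# Classical hitting time on the complete graph with uniform transitions:
# every step resamples a uniformly random vertex.
N, M = 1000, 10                     # N vertices, M of them marked
eps = M / N                         # fraction of marked vertices (= 1/100)

# Sub-matrix of P restricted to the unmarked vertices (all entries 1/N).
P_U = np.full((N - M, N - M), 1.0 / N)
p0 = np.full(N - M, 1.0 / (N - M))  # start uniform over unmarked vertices

# Survival probability after t steps is ||P_U^t p0||_1 = (1 - eps)^t.
t = int(2 / eps)                    # run for ~2/eps steps
surv = p0.copy()
for _ in range(t):
    surv = P_U @ surv
assert np.isclose(surv.sum(), (1 - eps) ** t)
# After ~2/eps steps, a marked vertex has been hit with probability > 1 - 1/e.
assert surv.sum() < np.exp(-1)
```

In this toy chain the survival probability is exactly (1 - ε) per step, so the 1/(δε) bound reduces to the familiar coupon-collecting estimate 1/ε.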
Magniez et al. proved in [6] that for a reversible, ergodic
Markov chain, the quantum hitting time for a walk on this chain is
within a factor of the square root of the classical hitting time. Since
the walk on this input acts as a walk on a reversible Markov chain until
a marked element is reached, then this is also true for a walk defined
by our transition matrix . This arises from the fact that the
spectral gap of the matrix describing the quantum walk corresponding to
stochastic matrix is quadratically larger than the spectral gap of
the matrix describing the classical random walk corresponding to , the proof of which is given in [2]. Thus, the quantum hitting time
is , which exactly matches the quantum query complexity of Grover search.
Now, we describe Ambainis’s algorithm given in [1] for solving
the element distinctness problem in time, which
produces a speedup over the classical algorithm, which requires queries, and also over other known quantum algorithms that do not make use of quantum walks, which require queries. The element distinctness problem is defined as follows: given a function on a size set of inputs
,…,,
determine whether there exists a pair for which . As in the search problem defined in the previous section, this is a decision problem; we are not concerned with finding the values of these pairs, only whether at least one exists.
The algorithm is similar to the search algorithm described in the previous section, except we define the walk on a Hamming graph. A Hamming graph is defined as follows: each vertex corresponds to an -tuple, (,…,), where for all and repetition is allowed (that is, may equal for ), and is a parameter we will choose. Edges will exist between vertices that differ in exactly one coordinate (order matters in this graph). We describe the state of each vertex as:
,…,,…,
Then, moving along each edge that replaces the th coordinate with such that requires two queries to the black box function: one to erase and one to compute . In the case, the marked vertices will be those that contain some for . Since the function values are stored in the description of the state, no additional queries to the black box are required to check whether the walk is at a marked vertex.
The transition matrix is given by . is the all-ones matrix, and the superscript denotes the operator acting on the th coordinate. The factor of normalizes the degree, since the graph is regular. We can compute the spectral gap of this graph to be (for details of this computation, see [2]). Then, noting that the fraction of marked vertices, , is
, classically, the query complexity is , where is the number of queries required to construct the initial state. Setting the parameters equal to minimize with respect to gives classical query complexity , as expected.
Then in the quantum case, queries are still required to set up the state. queries are required to perform the walk until a marked state is reached, by [6]. Setting parameters equal gives queries, as desired.
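A quick numeric check of this parameter choice, using only the asymptotic scalings stated above with constants ignored (r setup queries plus roughly N/sqrt(r) walk queries, since delta ~ 1/r and eps ~ (r/N)^2):

```python
import numpy as np

# Quantum query count for the walk on the Hamming graph, as a function of the
# tuple size r: ~ r (setup) + N/sqrt(r) (walk phase).
N = 10**6
r = np.arange(1, N // 2, dtype=float)
quantum = r + N / np.sqrt(r)

best_r = r[np.argmin(quantum)]
# The trade-off is optimized at r ~ N^(2/3), giving ~ N^(2/3) total queries.
assert 0.5 * N ** (2 / 3) < best_r < 2 * N ** (2 / 3)
assert quantum.min() < 3 * N ** (2 / 3)
```

Minimizing r + N/sqrt(r) analytically gives r = (N/2)^(2/3), so the numeric optimum landing near N^(2/3) is exactly the parameter setting used in Ambainis's algorithm.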
[1] Ambainis, A. Quantum walk algorithm for element distinctness, SIAM Journal on Computing 37(1):210-239 (2007). arXiv:quant-ph/0311001
[2] Childs, A. Lecture Notes on Quantum Algorithms (2017). https://www.cs.umd.edu/~amchilds/qa/qa.pdf
[3] Childs, A., Farhi, E. Gutmann, S. An example of the difference between
quantum and classical random walks. Journal of Quantum Information
Processing, 1:35, 2002. Also quant-ph/0103020.
[4] Godsil, C., Hanmeng, Z. Discrete-Time Quantum Walks and Graph Structures
(2018). arXiv:1701.04474
[5] Kempe, J. Quantum random walks: an introductory overview, Contemporary
Physics, Vol. 44 (4) (2003) 307:327. arXiv:quant-ph/0303081
[6] Magniez, F., Nayak, A., Richter, P.C. et al. On the hitting times of
quantum versus random walks, Algorithmica (2012) 63:91.
https://doi.org/10.1007/s00453-011-9521-6
[7] Szegedy, M. Quantum Speed-up of Markov Chain Based Algorithms, 45th
Annual IEEE Symposium on Foundations of Computer Science (2004).
https://ieeexplore.ieee.org/abstract/document/1366222
[8] Portugal, R. Quantum Walks and Search Algorithms. Springer, New York, NY (2013).
This is part of a series of blog posts for CS 229r: Physics and Computation. In this post, we will talk about progress made towards resolving the quantum PCP conjecture. We’ll briefly talk about the progression from the quantum PCP conjecture to the NLTS conjecture to the NLETS theorem, and then settle on providing a proof of the NLETS theorem. This new proof, due to Nirkhe, Vazirani, and Yuen, makes it clear that the Hamiltonian family used to resolve the NLETS theorem cannot help us in resolving the NLTS conjecture.
We are all too familiar with NP problems. Consider now an upgrade to NP problems, where an omniscient prover (we’ll call this prover Merlin) can send a polynomial-sized proof to a BPP (bounded-error probabilistic polynomial-time) verifier (we’ll call this verifier Arthur). This gives us more decision problems in another complexity class, MA (Merlin-Arthur). Consider again the analogue in the quantum realm, where now the prover sends over qubits instead and the verifier is in BQP (bounded-error quantum polynomial-time). And now we have QMA (quantum Merlin-Arthur).
We can show that there is a hierarchy to these classes: NP ⊆ MA ⊆ QMA.
Our goal is to talk about progress towards a quantum PCP theorem (and since nobody has proved it in the positive or negative, we’ll refer to it as a quantum PCP conjecture for now), so it might be a good idea to first talk about the PCP theorem. Suppose we take a Boolean formula, and we want to verify that it is satisfiable. Then someone comes along and presents us with a certificate — in this case, a satisfying assignment — and we can check in polynomial time that either this is indeed a satisfying assignment to the formula (a correct certificate) or it is not (an incorrect certificate).
But this requires that we check the entire certificate that is presented to us. Now, in comes the PCP Theorem (for probabilistically checkable proofs), which tells us that a certificate can be presented to us such that we can read a constant number of bits from the certificate, and have two things guaranteed: one, if this certificate is correct, then we will never think that it is incorrect even if we are not reading the entire certificate, and two, if we are presented with an incorrect certificate, we will reject it with high probability [1].
In short, one formulation of the PCP theorem tells us that, puzzlingly, we might not need to read the entirety of a proof in order to be convinced with high probability that it is a good proof or a bad proof. But a natural question arises: is there a quantum analogue of the PCP theorem?
The answer is, we’re still not sure. But to make progress towards resolving this question, we will present the work of Nirkhe, Vazirani, and Yuen in providing an alternate proof of an earlier result of Eldar and Harrow on the NLETS theorem.
Before we state the quantum PCP conjecture, it would be helpful to review information about local Hamiltonians and the -local Hamiltonian problem. A previous blog post by Ben Edelman covers these topics. Now, let’s state the quantum PCP conjecture:
(Quantum PCP Conjecture): It is QMA-hard to decide whether a given local Hamiltonian (where each ) has ground state energy at most or at least when for some universal constant .
Recall that MAX--SAT being NP-hard corresponds to the -local Hamiltonian problem being QMA-hard when . (We can refer to Theorem 4.1 in these scribed notes of Ryan O’Donnell’s lecture, and more specifically to Kempe-Kitaev-Regev’s original paper for proof of this fact.) The quantum PCP conjecture asks if this is still the case when the gap is .
Going back to the PCP theorem, an implication of the PCP theorem is that it is NP-hard to approximate certain problems to within some factor. Just like its classical analogue, the qPCP conjecture can be seen as stating that it is QMA-hard to approximate the ground state energy to a factor better than .
Let’s make the observation that, taking to be the ground state energy, the qPCP conjecture sort of says that there exists a family of Hamiltonians for which there is no trivial state (a state generated by a low depth circuit) such that the energy is at most above the ground state energy.
Freedman and Hastings came up with an easier goal called the No Low-Energy Trivial States conjecture, or NLTS conjecture. We expect that ground states of local Hamiltonians are sufficiently hard to describe (if NP QMA). So low-energy states might not be generated by a quantum circuit of constant depth. More formally:
(NLTS Conjecture): There exists a universal constant and a family of local Hamiltonians where acts on particles and consists of local terms, s.t. any family of states satisfying requires circuit depth that grows faster than any constant.
To reiterate, if we did have such a family of NLTS Hamiltonians, then we wouldn’t be able to give “easy proofs” of the minimal energy of a Hamiltonian, because we couldn’t just give a small circuit that produces a low-energy state.
-error states are states that differ from the ground state in at most qubits. Now, consider -error states (which “agree” with the ground state on most qubits). Then for bounded-degree local Hamiltonians (analogously in the classical case, those where each variable participates in a bounded number of clauses), these states are also low energy. So any theorem which applies to low energy states (such as the NLTS conjecture), should also apply to states with -error (as in the NLETS theorem).
To define low-error states more formally:
Definition 2.1 (-error states): Let (the space of positive semidefinite operators of trace norm equal to 1 on ). Let be a local Hamiltonian acting on . Then:
Here, see that is just the partial trace on some subset of integers , like we’re tracing out or “disregarding” some subset of qubits.
In 2017, Eldar and Harrow showed the following result which is the NLETS theorem.
Theorem 1 (NLETS Theorem): There exists a family of 16-local Hamiltonians s.t. any family of -error states for requires circuit depth where .
In the next two sections, we will provide background for an alternate proof of the NLETS theorem due to Nirkhe, Vazirani, and Yuen. After this, we will explain why the proof of NLETS cannot be used to prove NLTS, since the local Hamiltonian family we construct for NLETS can be linearized. Nirkhe, Vazirani, and Yuen’s proof of NLETS makes use of the Feynman-Kitaev clock Hamiltonian corresponding to the circuit generating the cat state (Eldar and Harrow make use of the Tillich-Zemor hypergraph product construction; refer to section 8 of their paper). What is this circuit? It is this one:
First, we apply the Hadamard gate (drawn as ) which maps the first qubit . Then we can think of the CNOT gates (drawn as ) as propagating whatever happens to the first qubit to the rest of the qubits. If we had the first qubit mapping to 0, then the rest of the qubits map to 0, and likewise for 1. This generates the cat state , which is highly entangled.
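A direct statevector simulation confirms this description (a sketch: we take qubit 0 to be the most significant bit of the basis-state index, and use the CNOT-chain variant where qubit i controls qubit i+1):

```python
import numpy as np

# Statevector simulation of the circuit generating the n-qubit cat (GHZ)
# state: a Hadamard on qubit 0 followed by a chain of CNOTs.
n = 4
dim = 2 ** n
state = np.zeros(dim, dtype=complex)
state[0] = 1.0                                     # |00...0>

# Hadamard on qubit 0: reshape so that qubit 0 is its own axis.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = (H @ state.reshape(2, -1)).reshape(-1)

# CNOT chain: qubit i controls qubit i+1, copying the branch down the register.
for i in range(n - 1):
    c, t = 1 << (n - 1 - i), 1 << (n - 2 - i)      # bit masks: control, target
    perm = np.array([x ^ t if x & c else x for x in range(dim)])
    state = state[perm]                            # a CNOT permutes basis states

# Result: the cat state (|00...0> + |11...1>)/sqrt(2).
assert np.isclose(abs(state[0]) ** 2, 0.5)
assert np.isclose(abs(state[-1]) ** 2, 0.5)
assert np.isclose(np.abs(state[1:-1]).max(), 0.0)
```

All the amplitude sits on the all-zeros and all-ones strings, which is exactly the highly entangled cat state described above.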
Why do we want a highly entangled state? Roughly our intuition for using the cat state is this: if the ground state of a Hamiltonian is highly entangled, then any quantum circuit which generates it has non-trivial depth. So if our goal is to show the existence of local Hamiltonians which have low energy or low error states that need deep circuits to generate, it makes sense to use a highly entangled state like the cat state.
(We’ll write that the state of a qudit – a generalization of a qubit to more than two dimensions, and in this case dimensions – is a vector in . In our diagram above, we’ll see 4 qudits, labelled appropriately.)
Let’s briefly cover the definitions for the quantum circuits we’ll be using.
Let $U = U_m \cdots U_1$ be a unitary operator acting on a system of $n$ qudits (in other words, acting on $(\mathbb{C}^d)^{\otimes n}$). Here, each $U_i$ is a unitary operator (a gate) acting on at most two qudits, and $U$ is a product of $m$ such operators.
If there exists a partition of $U$ into products of non-overlapping two-qudit unitaries (we call these layers and denote them as $L_1, \ldots, L_\ell$, where each $U_i$ belongs to exactly one layer) such that $U = L_\ell \cdots L_1$, then we say $U$ has $\ell$ layers.
In other words, $U$ has size $m$ and circuit depth $\ell$.
Consider $U = L_\ell \cdots L_1$ and an operator $\mathcal{O}$.
For $j = \ell, \ell - 1, \ldots, 1$, define $K_j$ as the set of gates in layer $j$ whose supports overlap the support of $\mathcal{O}$ or of any gate in $K_{j+1}, \ldots, K_\ell$.
Definition 3.1 (lightcone): The lightcone of $\mathcal{O}$ with respect to $U$ is the union of the $K_j$: $L_U(\mathcal{O}) = \bigcup_{j=1}^{\ell} K_j$.
So we can think of the lightcone as the set of gates spreading out of $\mathcal{O}$ all the way to the first layer of the circuit. In our diagram, the lightcone of $\mathcal{O}$ is the dash-dotted region, and the sets $K_j$ are its gates in each layer.
We also want a definition for what comes back from the lightcone: the set of gates from the first layer (the widest part of the cone) back to the last layer.
Define $E_1 = K_1$. For $j = 2, \ldots, \ell$, let $E_j$ be the set of gates in layer $j$ whose supports overlap with any gate in $E_1, \ldots, E_{j-1}$.
Definition 3.2 (effect zone): The effect zone of $\mathcal{O}$ with respect to $U$ is the union $E_U(\mathcal{O}) = \bigcup_{j=1}^{\ell} E_j$.
In our diagram, the effect zone of $\mathcal{O}$ is the dotted region, and the sets $E_j$ are its gates in each layer.
Definition 3.3 (shadow of the effect zone): The shadow of the effect zone of $\mathcal{O}$ with respect to $U$ is the set of qudits acted on by the gates in the effect zone.
In our diagram, the first three qudits are affected by gates in the effect zone. So the shadow is $\{1, 2, 3\}$.
Given all of these definitions, we make the following claim which will be important later, in a proof of a generalization of NLETS.
Claim 3.1 (Disjoint lightcones): Let $U$ be a circuit and $\mathcal{O}_1, \mathcal{O}_2$ operators. If the qudits $\mathcal{O}_2$ acts on are disjoint from the shadow of the effect zone of $\mathcal{O}_1$, then the lightcones of $\mathcal{O}_1$ and $\mathcal{O}_2$ in $U$ are disjoint.
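These definitions are purely combinatorial, so they are easy to play with in code. Below is a toy implementation of ours (all function names are made up) that computes lightcones, effect zones, and shadows for a circuit given as layers of gate supports (tuples of qudit indices):

```python
def lightcone(layers, support):
    """Gates reached scanning from the last layer back to the first."""
    cone, reached = [], set(support)
    for layer in reversed(layers):
        hit = [g for g in layer if reached & set(g)]
        for g in hit:
            reached |= set(g)
        cone.append(hit)
    return list(reversed(cone))   # cone[j] = the lightcone's gates in layer j

def effect_zone(layers, support):
    """From the first layer of the lightcone, sweep forward again."""
    cone = lightcone(layers, support)
    zone, reached = [], set()
    for g in cone[0]:
        reached |= set(g)
    for layer in layers:
        hit = [g for g in layer if reached & set(g)]
        for g in hit:
            reached |= set(g)
        zone.append(hit)
    return zone

def shadow(layers, support):
    """Qudits acted on by gates in the effect zone."""
    return {q for layer in effect_zone(layers, support)
              for g in layer for q in g}

# Claim 3.1 illustration: in the one-layer circuit [[(0,1), (2,3)]],
# qudit 3 lies outside shadow(..., {0}) = {0, 1}, and indeed the
# lightcones of operators on qudit 0 and on qudit 3 share no gates.
```

For example, in a three-layer circuit `[[(0,1), (2,3)], [(1,2)], [(0,1), (2,3)]]`, an operator on qudit $0$ already has the whole circuit in its lightcone's effect zone.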
Now we’ll give some definitions that will become necessary when we make use of the Feynman-Kitaev Hamiltonian in our later proofs.
Let’s define a unary clock. It will basically help us determine what happened at any time $t$ along the total time $T$. Let $|\mathrm{unary}_d(t)\rangle$ denote a clock state on $T$ qudits of dimension $d$. For our purposes today, we won’t worry about higher-dimensional clocks. So we’ll write $|\mathrm{unary}_d(t)\rangle$, but we’ll really only consider the case where $d = 2$, which corresponds to the qubit clock state $|1^t 0^{T-t}\rangle$. For simplicity’s sake, we will henceforth just write $|\mathrm{unary}(t)\rangle$.
Our goal is to construct something similar to the tableau in the Cook-Levin theorem, so we also want to define a history state:
Definition 4.1 (History state): Let $C$ be a quantum circuit that acts on a witness register and an ancilla register. Let $U_1, \ldots, U_T$ denote the sequence of two-local gates in $C$. Then a state $|\Psi\rangle$ is a history state of $C$ if:
$$|\Psi\rangle = \frac{1}{\sqrt{T+1}} \sum_{t=0}^{T} |\mathrm{unary}(t)\rangle \otimes |\psi_t\rangle,$$
where we have the clock state $|\mathrm{unary}(t)\rangle$ to keep track of time and $|\psi_t\rangle$ is some state such that $|\psi_0\rangle = |\xi\rangle \otimes |0 \cdots 0\rangle$ and $|\psi_t\rangle = U_t |\psi_{t-1}\rangle$. With this construction, we should be able to make a measurement of the clock register to get back the state at time $t$.
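Here is a quick numpy sketch of ours (names like `history_state` are made up) that builds the history state of the $n = 3$ cat-state circuit, with the unary clock state $|1^t 0^{T-t}\rangle$ in the first register:

```python
import numpy as np

Hg = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def on_qubits(gate, start, k, n):
    """Embed a k-qubit gate on qubits start..start+k-1 into n qubits."""
    return np.kron(np.kron(np.eye(2 ** start), gate),
                   np.eye(2 ** (n - start - k)))

def history_state(n):
    """History state of the n-qubit cat circuit (T = n gates)."""
    gates = [on_qubits(Hg, 0, 1, n)] + \
            [on_qubits(CNOT, i, 2, n) for i in range(n - 1)]
    T = len(gates)
    psi, terms = np.zeros(2 ** n), []
    psi[0] = 1.0                                       # |psi_0> = |0^n>
    for t in range(T + 1):
        clock = np.zeros(2 ** T)
        clock[int('1' * t + '0' * (T - t), 2)] = 1.0   # |unary(t)>
        terms.append(np.kron(clock, psi))
        if t < T:
            psi = gates[t] @ psi                       # |psi_(t+1)> = U_(t+1)|psi_t>
    return sum(terms) / np.sqrt(T + 1)

Psi = history_state(3)
```

For $n = 3$ the four branches $|111\rangle \otimes |\psi_3\rangle, \ldots, |000\rangle \otimes |\psi_0\rangle$ are orthogonal, so the state is normalized, and the $t = T$ branch carries the cat state.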
We provide a proof of (a simplified case of) the NLETS theorem proved by Nirkhe, Vazirani, and Yuen in [2].
Theorem 2 (NLETS): There exists a family of $3$-local Hamiltonians $\{H_n\}$ on a line (each $H_n$ can be defined on $n$ particles arranged on a line such that each local term acts on a particle and its two neighbors) such that for any constant $\epsilon < 1$ and all $n$, the circuit depth of any $\epsilon$-error ground state for $H_n$ is at least logarithmic in $n$.
First, we’ll show the circuit lower bound. Then we’ll explain why these Hamiltonians can act on particles on a line and what this implies about the potential of these techniques for proving NLTS.
Proof: We will use the Feynman-Kitaev clock construction to construct a $5$-local Hamiltonian for the circuit $C_n$ generating the cat state:
$$H_n = H_{\mathrm{in}} + \sum_{t=1}^{T} H_{\mathrm{prop},t} + H_{\mathrm{out}} + H_{\mathrm{clock}}.$$
Fix $n$ and let $C_n$ have size $T$. The Hamiltonian $H_n$ acts on $T + n$ qubits ($T$ clock qubits and $n$ state qubits) and consists of several local terms depending on $C_n$.
We can think of a $(T+n)$-qubit state as representing a $T$-step computation on $n$ qubits (i.e. for each time $t$, we have an $n$-bit computation state of $C_n$). Intuitively, a $(T+n)$-qubit state has energy $0$ with respect to $H_n$ iff it is the history state of $C_n$. This is because $H_{\mathrm{in}}$ checks that at time $0$, the state register consists of the input to $C_n$. Each $H_{\mathrm{prop},t}$ checks that time $t$ proceeds correctly from time $t-1$ (i.e. that the $t$th gate of $C_n$ is applied correctly). Then $H_{\mathrm{out}}$ checks that at time $T$, the output is as prescribed. Finally, $H_{\mathrm{clock}}$ checks that the $(T+n)$-qubit state is a superposition only over states where the first $T$ qubits represent “correct times” (i.e. a unary clock state $|1^t 0^{T-t}\rangle$ for some time $t$).
Therefore, $H_n$ has a unique ground state, the history state of $C_n$, with energy $0$:
$$|\Psi\rangle = \frac{1}{\sqrt{T+1}} \sum_{t=0}^{T} |1^t 0^{T-t}\rangle \otimes |\psi_t\rangle, \qquad |\psi_t\rangle = U_t \cdots U_1 |0^n\rangle.$$
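As a sanity check, one can verify this numerically for a tiny instance. The sketch below (ours) builds the clock Hamiltonian for the $n = 2$ cat circuit, using global projectors $|\mathrm{unary}(t)\rangle\langle\mathrm{unary}(t)|$ in place of the local clock terms and omitting the output check for brevity; the ground space is the same:

```python
import numpy as np

Hg = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
n, T = 2, 2
gates = [np.kron(Hg, np.eye(2)), CNOT]           # C_2: Hadamard, then CNOT

def clock_ket(t):                                # |unary(t)> = |1^t 0^(T-t)>
    c = np.zeros(2 ** T)
    c[int('1' * t + '0' * (T - t), 2)] = 1.0
    return c

def cproj(s, t):                                 # |unary(s)><unary(t)| (x) identity
    return np.kron(np.outer(clock_ket(s), clock_ket(t)), np.eye(2 ** n))

ket1 = np.diag([0.0, 1.0])                       # |1><1|
# H_in: penalize a state qubit being 1 at time 0.
H_in = sum(np.kron(np.outer(clock_ket(0), clock_ket(0)),
                   np.kron(np.eye(2 ** i),
                           np.kron(ket1, np.eye(2 ** (n - i - 1)))))
           for i in range(n))
# H_clock: penalize non-unary clock states.
P_legal = sum(np.outer(clock_ket(t), clock_ket(t)) for t in range(T + 1))
H_clock = np.kron(np.eye(2 ** T) - P_legal, np.eye(2 ** n))
# H_prop: standard Feynman-Kitaev propagation terms (gates here are real).
H_prop = sum(0.5 * (cproj(t, t) + cproj(t - 1, t - 1)
                    - np.kron(np.outer(clock_ket(t), clock_ket(t - 1)), gates[t - 1])
                    - np.kron(np.outer(clock_ket(t - 1), clock_ket(t)), gates[t - 1].T))
             for t in range(1, T + 1))
H_total = H_in + H_clock + H_prop

# The history state of C_2 on input |00>.
psi, terms = np.zeros(2 ** n), []
psi[0] = 1.0
for t in range(T + 1):
    terms.append(np.kron(clock_ket(t), psi))
    if t < T:
        psi = gates[t] @ psi
hist = sum(terms) / np.sqrt(T + 1)

energy = hist @ H_total @ hist                   # should be 0
gap = np.linalg.eigvalsh(H_total)[1]             # second-smallest eigenvalue
```

`energy` comes out $0$ and `gap` is strictly positive, confirming that the history state is the unique zero-energy ground state.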
Later we will show how to transform $H_n$ into a Hamiltonian $H_n^{\mathrm{line}}$ on qutrits on a line. Intuitively, the structure of $C_n$ allows us to fuse the time qubits and state qubits and represent unused state qubits by $|\perp\rangle$. For the Hamiltonian $H_n^{\mathrm{line}}$, the ground state becomes
$$|\Psi'\rangle = \frac{1}{\sqrt{T+1}}\left(|\perp^T\rangle + \sum_{t=1}^{T} \frac{|\bar{0}^t \perp^{T-t}\rangle + |\bar{1}^t \perp^{T-t}\rangle}{\sqrt{2}}\right).$$
For the rest of this proof, we work with respect to $H_n^{\mathrm{line}}$ and its ground state $|\Psi'\rangle$.
Let $\rho$ be an $\epsilon$-error state and let $S$ be a subset of at most $\epsilon n$ qutrits such that $\mathrm{Tr}_S(\rho) = \mathrm{Tr}_S(|\Psi'\rangle\langle\Psi'|)$. We define two projection operators which, when applied to $\rho$ individually, produce nontrivial measurement outcomes, but when applied to $\rho$ together, produce trivial ones.
Definition 5.1: For any $i$, the projection operator
$$\Pi_i^{\perp} = |\perp\rangle\langle\perp|_i$$
projects onto the subspace spanned by $|\perp\rangle$ on the $i$th qutrit.
For any $i$, the projection operator
$$\Pi_i^{01} = \left(|\bar{0}\rangle\langle\bar{0}| + |\bar{1}\rangle\langle\bar{1}|\right)_i$$
projects onto the subspace spanned by $|\bar{0}\rangle$ and $|\bar{1}\rangle$ on the $i$th qutrit.
Claim 5.1: For $i \notin S$, $\mathrm{Tr}(\Pi_i^{\perp}\rho) = \frac{i}{T+1}$. For $i \notin S$, $\mathrm{Tr}(\Pi_i^{01}\rho) = \frac{T+1-i}{T+1}$. Note that these values are positive for any $i \in \{1, \ldots, T\}$.
Proof: If $i \notin S$, then measurements on the $i$th qutrit are the same for $\rho$ and $|\Psi'\rangle$.
If we measure the $i$th qutrit of $|\Psi'\rangle$, then any branch of the superposition cannot have nonzero weight under both $\Pi_i^{\perp}$ and $\Pi_i^{01}$ (every branch ends in some number of $|\perp\rangle$s, which tells us which projector (if any) it can contribute to). Projecting onto $|\perp\rangle$ on the $i$th qutrit succeeds exactly on the branches with $t < i$. Therefore,
$$\mathrm{Tr}(\Pi_i^{\perp}\rho) = \mathrm{Tr}(\Pi_i^{\perp}|\Psi'\rangle\langle\Psi'|) = \frac{i}{T+1}.$$
Similarly, $\mathrm{Tr}(\Pi_i^{01}\rho) = \frac{T+1-i}{T+1}$.
Claim 5.2: For $i, j \notin S$ such that $i < j$, $\mathrm{Tr}(\Pi_i^{\perp}\Pi_j^{01}\rho) = 0$.
Proof: As before, since $i, j \notin S$, we can calculate the expectation on $|\Psi'\rangle$ instead of $\rho$. If $t < j$, then the $j$th qutrit of the branch at time $t$ is $|\perp\rangle$, so $\Pi_j^{01}$ annihilates it. If $t \geq j$, then the first $t$ qutrits of the branch contain the cat state, so under any measurement the $i$th and $j$th qutrits must be the same; in particular, if the $j$th qutrit is $\bar{0}$ or $\bar{1}$, then the $i$th qutrit is not $|\perp\rangle$, and $\Pi_i^{\perp}$ annihilates the branch.
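Both claims are easy to check numerically on the linearized ground state. In the sketch below (representation and names ours), the qutrit symbols are `'_'` for $|\perp\rangle$ and `'0'`/`'1'` for the two set levels:

```python
import numpy as np

def linearized_ground_state(T):
    """Amplitudes of |Psi'> for the cat circuit with T gates (T qutrits).
    Keys are strings over {'_', '0', '1'}; '_' stands for the unused level."""
    amp = {'_' * T: 1 / np.sqrt(T + 1)}                  # the t = 0 branch
    for t in range(1, T + 1):
        for b in '01':
            amp[b * t + '_' * (T - t)] = 1 / np.sqrt(2 * (T + 1))
    return amp

def prob(amp, i, symbols):
    """Pr[measuring qutrit i (0-indexed) in the given set of symbols]."""
    return sum(a * a for k, a in amp.items() if k[i] in symbols)

def joint_prob(amp, i, j, sym_i, sym_j):
    return sum(a * a for k, a in amp.items()
               if k[i] in sym_i and k[j] in sym_j)

amp = linearized_ground_state(5)
# Claim 5.1 (0-indexed): Pr[qutrit i = '_'] = (i+1)/(T+1) and
#                        Pr[qutrit i in {0,1}] = (T-i)/(T+1).
# Claim 5.2: for i < j, Pr[qutrit i = '_' and qutrit j in {0,1}] = 0.
```

For $T = 5$ this reproduces the $\frac{i}{T+1}$ and $\frac{T+1-i}{T+1}$ probabilities from Claim 5.1 (shifted by one because the code indexes qutrits from $0$) and a vanishing joint probability for $i < j$.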
Now we use these claims to prove a circuit lower bound. Let $W$ be a circuit generating (a state with density matrix) $\rho$. Let $d$ be the depth of $W$.
Consider some $i \notin S$. For any operator acting on the $i$th qutrit, its lightcone consists of at most $2^d$ gates, so its effect zone consists of at most $2^{2d}$ gates, which act on at most $2^{2d+1}$ qudits (called the shadow of the effect zone).
Assume towards contradiction that $2^{2d+1} < (1-\epsilon)n$. Then the shadow of any operator acting only on the $i$th qutrit has size at most $2^{2d+1} < (1-\epsilon)n$. So there is some qutrit $j$ outside of the shadow which is also in the complement of $S$ (as $|S| \leq \epsilon n$). By Claim 3.1, we have found two indices $i, j$ such that any pair of operators acting on the $i$th and $j$th qutrits have disjoint lightcones in $W$. WLOG let $i < j$. The lightcones of $\Pi_i^{\perp}$ and $\Pi_j^{01}$ are disjoint, which implies
$$\mathrm{Tr}(\Pi_i^{\perp}\Pi_j^{01}\rho) = \mathrm{Tr}(\Pi_i^{\perp}\rho) \cdot \mathrm{Tr}(\Pi_j^{01}\rho).$$
By the two claims above, we get a contradiction: Claim 5.1 says the right-hand side is positive, while Claim 5.2 says the left-hand side is zero.
Therefore, $2^{2d+1} \geq (1-\epsilon)n$, i.e. $d = \Omega(\log n)$. We can take any constant epsilon: letting $\epsilon = 1/2$, we get $d \geq \frac{1}{2}\log\frac{n}{4}$.
This analysis relies crucially on the fact that any -error state matches the groundstate on most qudits. However, NLTS is concerned with states which may differ from the groundstate on many qudits, as long as they have low energy.
Remark 2.1: The paper of Nirkhe, Vazirani, and Yuen [2] actually proves more: the logarithmic circuit-depth lower bound holds not only for low-error states but also for noisy ground states, i.e. ground states subjected to local noise.
So far, we’ve shown a local Hamiltonian family for which all low-error (in “Hamming distance”) states require logarithmic quantum circuit depth to compute, thus resolving the NLETS conjecture. Now, let’s try to tie this back into the NLTS conjecture. Since it’s been a while, let’s recall the statement of the conjecture:
Conjecture (NLTS): There exists a universal constant $\epsilon > 0$ and a family of local Hamiltonians $\{H_n\}$, where $H_n$ acts on $n$ particles and consists of $m_n$ local terms, s.t. any family of states $\{|\psi_n\rangle\}$ satisfying $\langle \psi_n | H_n | \psi_n \rangle \leq \epsilon m_n + \lambda_{\min}(H_n)$ requires circuit depth that grows faster than any constant.
In order to resolve the NLTS conjecture, it thus suffices to exhibit a local Hamiltonian family for which all low-energy states require logarithmic quantum circuit depth to compute. We might wonder if the local Hamiltonian family we used to resolve NLETS, which has “hard ground states”, might also have hard low-energy states. Unfortunately, as we shall show, this cannot be the case. We will start by showing that Hamiltonian families that lie on constant-dimensional lattices (in a sense that we will make precise momentarily) cannot possibly be used to resolve NLTS, and then show that the Hamiltonian family we used to prove NLETS can be linearized (made to lie on a one-dimensional lattice!).
Definition 6.1: A local Hamiltonian acting on qubits is said to lie on a graph if there is an injection of qubits into vertices of the graph such that the set of qubits in any interaction term corresponds to a connected subgraph of the graph.
Theorem 2: If $\{H_n\}$ is a local Hamiltonian family that lies on a $d$-dimensional lattice for constant $d$, then $\{H_n\}$ has a family of low-energy states with low circuit complexity. In particular, if $H$ is a local Hamiltonian on a $d$-dimensional lattice acting on $n$ qubits for large enough $n$, then for any $\epsilon > 0$, there exists a state $|\psi\rangle$ that can be generated by a circuit of constant depth and such that $\langle \psi | H | \psi \rangle \leq \lambda_{\min}(H) + \epsilon m$, where $\lambda_{\min}(H)$ is the ground-state energy and $m$ is the number of local terms.
Proof: In what follows, we’ll omit some of the more annoying computational details in the interest of communicating the high-level idea.
Start by partitioning the $d$-dimensional lattice (the one that $H$ lives on) into hypercubes of side length $B$. We can “restrict” $H$ to a given hypercube (let’s call it $c$) by throwing away all local terms containing a qubit not in $c$. This gives us a well-defined Hamiltonian $H_c$ on the qubits in $c$. Define $|\psi_c\rangle$ to be the $B^d$-qubit ground state of $H_c$, and define
$$|\psi\rangle = \bigotimes_{c} |\psi_c\rangle,$$
where $|\psi\rangle$ is an $n$-qubit state. Each $|\psi_c\rangle$ can be generated by a circuit with at most $2^{O(B^d)}$ gates, hence at most $2^{O(B^d)}$ depth. Then, $|\psi\rangle$ can be generated by putting all of these individual circuits in parallel – this doesn’t violate any sort of no-cloning condition because the individual circuits act on disjoint sets of qubits. Therefore, $|\psi\rangle$ can be generated by a circuit of depth at most $2^{O(B^d)}$. $B$ and $d$ are both constants, so $|\psi\rangle$ can be generated by a constant-depth circuit.
We claim that, for the right choice of $B$, $|\psi\rangle$ is also a low-energy state. Intuitively, this is true because $|\psi\rangle$ can only be “worse” than a true ground state of $H$ on local Hamiltonian terms that do not lie entirely within a single hypercube (i.e. the boundary terms), and by choosing $B$ appropriately we can make these a vanishingly small fraction of the local terms of $H$. Let’s work this out explicitly.
Each hypercube has surface area $O(B^{d-1})$, and there are $n/B^d$ hypercubes in the lattice. Thus, the total number of qubits on boundaries is at most $O(n/B)$. The number of size-$k$ connected subgraphs containing a given point in a $d$-dimensional lattice is a function of $k$ and $d$ alone. Both of these are constants. Therefore, the number of size-$k$ connected subgraphs containing a given vertex, and hence the number of local Hamiltonian terms containing a given qubit, is constant. Thus, the total number of violated local Hamiltonian terms is at most $O(n/B)$. Taking $B$ to be $\Theta(1/\epsilon)$, we get the desired bound. Note that to be fully rigorous, we need to justify that the boundary terms don’t blow up the energy, but this is left as an exercise for the reader.
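The $1/B$ scaling of the boundary terms can be seen with a short count (a sketch of ours, using nearest-neighbour $2$-local terms for concreteness; the fraction of cut terms is the same along each axis, so one axis suffices):

```python
def cut_fraction(L, B):
    """Fraction of nearest-neighbour (2-local) terms along one axis of an
    L-per-side grid that cross the walls of the B-per-side blocks.
    An edge x -> x+1 is cut iff x+1 is a multiple of B."""
    cut = sum(1 for x in range(L - 1) if (x + 1) % B == 0)
    return cut / (L - 1)

# e.g. for L = 100: blocks of side 10 cut 9 of the 99 edges per axis,
# blocks of side 20 cut only 4 of them -- the fraction scales like 1/B.
```

So taking the block side $B \sim 1/\epsilon$ leaves only an $O(\epsilon)$ fraction of terms on block boundaries, matching the counting argument above.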
Now that we have shown that Hamiltonians that live on constant-dimensional lattices cannot be used to prove NLTS, we will put the final nail in the coffin by showing that our NLETS Hamiltonian (the Feynman-Kitaev clock Hamiltonian on the circuit $C_n$) can be made to lie on a line (a $1$-dimensional lattice). To do so, we will need to understand the details of $H_n$ a bit better.
Proposition 6.1: $H_n$ for the circuit $C_n$ is $5$-local.
Proof: Recall that we defined
$$H_n = H_{\mathrm{in}} + \sum_{t=1}^{T} H_{\mathrm{prop},t} + H_{\mathrm{out}} + H_{\mathrm{clock}}.$$
Let’s go through the right-hand-side term-by-term. We will use $c_i$ to denote the $i$th qubit of the time register and $s_i$ to denote the $i$th qubit of the state register. In the standard clock construction, $H_{\mathrm{in}}$ consists of the terms $|0\rangle\langle 0|_{c_1} \otimes |1\rangle\langle 1|_{s_i}$, penalizing a nonzero state qubit at time $0$ ($2$-local); $H_{\mathrm{clock}}$ consists of the terms $|01\rangle\langle 01|_{c_i c_{i+1}}$, penalizing non-unary clock states ($2$-local); $H_{\mathrm{out}}$ checks the output against the final clock qubit $c_T$ ($2$-local); and each $H_{\mathrm{prop},t}$ acts on the clock qubits $c_{t-1}, c_t, c_{t+1}$ and the at most two state qubits on which the gate $U_t$ acts ($5$-local).
Now, we follow an approach of [3] to embed $H_n$ into a line.
Theorem 3: The Feynman-Kitaev clock Hamiltonian $H_n$ can be manipulated into a $3$-local Hamiltonian acting on qutrits on a line.
Proof: Rather than having $H_n$ act on $T + n$ total qubits ($T$ time qubits and $n$ state qubits, with $T = n$ for our circuit $C_n$), let’s fuse each $c_i$ and $s_i$ pair into a single qudit of dimension $4$. If we view $H_n$ as acting on the space of particles $(\mathbb{C}^4)^{\otimes n}$, we observe that, following Proposition 6.1, each local term needs to check at most the particles corresponding to times $t-1$, $t$, and $t+1$. Therefore, $H_n$ is $3$-local and on a line, as desired.
To see that we can have $H_n$ act on particles of dimension $3$ (qutrits) rather than particles of dimension $4$, note that the degree of freedom corresponding to $|c_i = 0, s_i = 1\rangle$ is unused, as the $i$th qubit of the state is never nonzero until timestep $i$, at which point $c_i = 1$. Thus, we can take the vectors
$$|\perp\rangle := |c_i = 0, s_i = 0\rangle, \quad |\bar{0}\rangle := |c_i = 1, s_i = 0\rangle, \quad |\bar{1}\rangle := |c_i = 1, s_i = 1\rangle$$
as a basis for each qutrit.
Even though we’ve shown that the clock Hamiltonian for our original circuit $C_n$ cannot be used to prove NLTS (which is itself still weaker than the original Quantum PCP conjecture), this does not necessarily rule out the use of this approach for other “hard” circuits, which might then allow us to prove NLTS. Furthermore, NLETS is independently interesting, as the notion of being low “Hamming distance” away from a set of vectors is exactly what is used in error-correcting codes.