
Quantum circuits and their role in demonstrating quantum supremacy

January 25, 2019

There’s a lot of discussion and (possibly well-deserved) hype nowadays about quantum computation and its potential for computation at speeds we simply can’t reach with the classical computers we’re used to today. The excitement about this has been building for years, even decades, but it’s only very recently that we’ve really been approaching a solid proof that quantum computers do have an advantage over classical computers.

What’s rather tricky about showing such a result is that, rather than a direct argument about the capability of quantum computers, what we really need to demonstrate is the incapability of classical computers to achieve tasks that can be done with quantum computers.

One of the major leaps forward in demonstrating quantum supremacy was taken by Terhal and DiVincenzo in their 2004 paper “Adaptive quantum computation, constant depth quantum circuits and Arthur-Merlin games”. Their approach was complexity-theoretic: they gave evidence that there exists a certain class of quantum circuits that cannot be simulated classically, by proving that if such a classical simulation existed, certain complexity classes strongly believed to be distinct would collapse to the same class. This doesn’t quite amount to a proof of quantum supremacy, since the separation of complexity classes on which it hinges is not a proven fact – but because that separation appears overwhelmingly likely to hold, so too does the proposed existence of non-classically-simulatable quantum circuits. The Terhal and DiVincenzo paper is complex and highly technical, but in this post I hope to explain a little of it and give some intuition for the major points.

Now, let’s start at the beginning. What is a quantum circuit? I’m going to assume you already know what a classical circuit is – the extension to a quantum circuit is fairly straightforward: it’s a circuit in which all gates are quantum gates, where a quantum gate can be thought of (for our purposes here) as a classical gate whose output is, rather than a deterministic function of the inputs, a probability distribution over all possible outputs of that size. For example, given two single-bit inputs, a classical AND gate outputs 0 or 1 deterministically. A quantum analog of the AND gate on the corresponding single-qubit inputs would output 0 with some probability p and 1 with probability 1-p. Similarly, a classical AND gate on two 4-bit inputs outputs the bitwise AND, while the quantum analog has associated with it a 4-qubit output: some probability distribution over all 4-bit binary strings. A priori there is no particular string that is “the output” of the computation by the quantum gate; it’s only after taking a quantum measurement of the output that we get an actual string we can think of as the outcome of the computation, and the string we “observe” upon measurement follows the probability distribution the gate computes on its inputs. In this way, a quantum circuit can be thought of as producing, via a sequence of such probabilistic gates, a probability distribution over possible outputs given the input lengths.

It’s not hard to see that we can compose circuits in this picture: suppose we have quantum circuits c_1 and c_2, where c_1 takes an input of n qubits and produces an output of m qubits. If we measure k of the output qubits of c_1, we can feed the remaining m-k unmeasured qubits as inputs into c_2 (assuming those m-k qubits do indeed constitute a valid input to c_2).
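To make the “circuit as a distribution over outputs” picture concrete, here’s a minimal numpy sketch of my own (not from the paper – I use a Bell-state circuit rather than an AND-like gate, since it’s the simplest circuit with a genuinely non-deterministic output):

```python
import numpy as np

# Single-qubit gates as 2x2 unitaries.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
I2 = np.eye(2)

# CNOT on two qubits (control = qubit 0, target = qubit 1),
# with basis states ordered |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def output_distribution(state, gates):
    """Apply a sequence of unitaries to a state vector and return the
    probability of each measurement outcome (the Born rule)."""
    for U in gates:
        state = U @ state
    return np.abs(state) ** 2

# Start in |00>, apply H on qubit 0 and then CNOT: a Bell-state circuit.
psi0 = np.zeros(4)
psi0[0] = 1.0
probs = output_distribution(psi0, [np.kron(H, I2), CNOT])

for bits, p in zip(["00", "01", "10", "11"], probs):
    print(bits, round(p, 3))  # 00 and 11 each occur with probability 0.5
```

The circuit itself computes no single string; measuring the two output qubits yields 00 or 11, each half the time, exactly as the distribution `probs` prescribes.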

Consider, then, the following sort of quantum circuit: a composition of N quantum circuits, such that after the i-th circuit we measure some of its output qubits (so that the remaining unmeasured qubits become inputs to the (i+1)-th circuit), and the structure of the (i+1)-th circuit depends on this measurement. That is, it’s as though, every so often at intermediate layers of the circuit’s computation, we check the values of some of the variables (leaving the rest to continue through the circuit for further processing), and based on the measured values, the remainder of the circuit “adapts”. Aptly enough, this is called an “adaptive circuit”. But since the “downstream” structure of the circuit depends on the outcomes of all the measurements made “upstream”, each adaptive circuit actually comprises a family of circuits, each of which is specified by a sequence of intermediate measurement outcomes. That is, we can alternatively characterize an adaptive circuit as a set of ordinary quantum circuits parameterized by a list of measurement outcomes. Terhal and DiVincenzo call this way of viewing an adaptive circuit – as a family of circuits parameterized by a sequence of measurement values – a “non-adaptive circuit”, since we replace the idea that the circuit “adapts” to intermediate measurements with the idea that there are simply many regular circuits, one for each possible sequence of measurements. It’s this non-adaptive circuit concept that will be our main object of study going forward.
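The re-indexing from “one adapting circuit” to “one ordinary circuit per outcome sequence” can be sketched in a few lines of Python. The adaptive rule here (apply a Pauli-X correction when an intermediate measurement reads 1) is a hypothetical stand-in of my own, just to show the enumeration:

```python
import itertools

import numpy as np

X = np.array([[0, 1], [1, 0]])  # Pauli-X
I2 = np.eye(2)

def stage_gate(outcome):
    """Hypothetical adaptive rule: apply X if the intermediate
    measurement outcome was 1, otherwise do nothing."""
    return X if outcome == 1 else I2

def nonadaptive_family(num_stages):
    """The 'non-adaptive' view: one ordinary circuit (here, a plain
    list of gates) for each possible sequence of intermediate
    measurement outcomes."""
    family = {}
    for outcomes in itertools.product([0, 1], repeat=num_stages):
        family[outcomes] = [stage_gate(b) for b in outcomes]
    return family

family = nonadaptive_family(2)
print(len(family))  # 4 ordinary circuits: one per outcome sequence
```

With two intermediate measurements there are 2^2 = 4 possible outcome sequences, hence a family of four ordinary circuits, indexed by those sequences.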

Simulating quantum circuits

Now, the result we wanted to demonstrate about quantum circuits had to do with their efficient simulatability by classical circuits – and so we should establish some notion of what we mean when we talk about an “efficient simulation”.

Terhal and DiVincenzo offer the following notion of a classical simulation, which in their paper they call an “efficient density computation”: consider a quantum circuit whose output is n qubits long. Recall that to actually obtain an output value, we need to take a measurement of the circuit output – imagine doing this in disjoint subsets of qubits at a time. That is, we can break up the n qubits into k disjoint subsets and treat the entire output measurement as a process of taking k measurements, subset by subset. An efficient density computation exists if there’s a classical procedure for computing, in time polynomial in the width and depth of the quantum circuit, the conditional probability distribution over the possible measurement outcomes of a particular subset of qubits, given any subset of the k-1 other measurement outcomes. Intuitively, this is a good notion of what a classical simulation should consist of, or at least what data it should contain: if you know the conditional probabilities given any (possibly empty) subset of the other measurements, you can just flip coins for the outputs according to those conditional probabilities as a way of actually exhibiting a “working” simulation.
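Here is a toy illustration of the data an efficient density computation provides: a conditional distribution over one subset of output qubits given observed values on another subset. Note the caveat – this sketch enumerates all 2^n outcomes, which is exponential; the whole point of the definition is that the real procedure must run in time polynomial in the circuit’s width and depth:

```python
import numpy as np

def conditional_distribution(joint, n, target, observed):
    """Given a joint distribution `joint` over n-bit strings (indexed
    by integer value, most significant bit = qubit 0), return
    P(bits at `target` positions | bits at `observed` positions).
    `target` is a list of positions; `observed` maps positions to bits."""
    dist = {}
    for x in range(2 ** n):
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
        if any(bits[i] != v for i, v in observed.items()):
            continue  # inconsistent with the observed measurements
        key = tuple(bits[i] for i in target)
        dist[key] = dist.get(key, 0.0) + joint[x]
    total = sum(dist.values())
    return {k: p / total for k, p in dist.items()}

# Toy joint output distribution on 2 qubits: half 00, half 11
# (e.g. the output of a Bell-state circuit).
joint = np.array([0.5, 0.0, 0.0, 0.5])

# P(qubit 1 | qubit 0 measured as 0): deterministically 0.
print(conditional_distribution(joint, 2, target=[1], observed={0: 0}))
```

Sampling the k subsets one at a time, each conditioned on the outcomes already drawn, is exactly the coin-flipping simulation described above.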

With this notion of simulation, along with our concept of an adaptive quantum circuit as a family of regular circuits parameterized by a sequence of intermediate measurement outcomes, we may now arrive at the main result of Terhal and DiVincenzo’s paper. Recall that what we wanted to show from the very beginning is that there exists some quantum circuit that can’t be simulated classically. The argument proceeds like a proof by contradiction: suppose the contrary, that all quantum circuits can be simulated classically. We then want to exhibit a quantum circuit whose classical simulation (which our assumption says exists) would entail consequences we strongly believe to be false, leading us to conclude that such circuits probably can’t be simulated classically.

Thus, we shall now exhibit such a quantum circuit, whose classical simulatability leads (as far as we believe) to a contradiction. Consider a special case of adaptive quantum circuits, viewed as parameterized families of regular circuits, in which the circuit’s output distribution is independent of the intermediate measurement outcomes – that is, in which every member of the family is logically the same circuit. For clarification’s sake, I’d like to point out the subtlety that makes this consideration non-redundant, and not simply a reduction of an adaptive circuit (again, thought of as a family) to a single fixed circuit, i.e., a family of one. That trivial reduction occurs when the structure of the circuit is independent of the intermediate measurement outcomes: no matter what we observed, we’d get the same circuit. What we’re considering instead is the case in which the structure of the circuit still depends on the intermediate measurements (so the circuit is still adaptive), but the distribution over the possible outputs is identical no matter what those measurements are. Such a circuit is still a parameterized and in general non-trivial family, but one in which every member produces the same output distribution – a family of potentially structurally different, but logically identical, circuits.
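A tiny numpy example of this distinction, of my own devising (the paper’s actual constructions are teleportation-based): two family members with different gates whose output distributions nevertheless coincide, because the Pauli-X “correction” leaves the |+> state unchanged:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])  # Pauli-X
I2 = np.eye(2)

plus = np.array([1, 1]) / np.sqrt(2)  # the |+> state

# Two structurally different "family members", as if selected by an
# intermediate measurement outcome: one applies X, the other does nothing.
family = {0: [I2], 1: [X]}

for outcome, gates in family.items():
    state = plus.copy()
    for U in gates:
        state = U @ state
    # Both branches print the same output distribution: [0.5 0.5]
    print(outcome, np.round(np.abs(state) ** 2, 3))
```

The gates differ, so the family is non-trivial, but since X|+> = |+>, the measurement statistics of the two members are identical – structurally different, logically the same.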

Suppose there’s some set of such circuits that’s universal – that is, sufficient to implement all polynomial-time quantum computations. (This is a reasonable assumption to make, since there do in fact exist universal quantum gate sets.) But now, if a simulation of the kind we defined (an efficient density computation) existed for every circuit in this set, then we could calculate the outcome probability of any polynomial-depth quantum circuit, since any polynomial-depth quantum circuit can be realized as some composition of circuits in this universal set (in particular, as a composition of particular family members of each adaptive circuit in the set), and an efficient density computation, as we mentioned above, gives us precisely a way to compute the output distribution.

But now here is where our believed contradiction lies:

Theorem: Suppose there exists a universal set of adaptive quantum circuits whose output distributions are independent of intermediate measurements. If there is an efficient density computation for each family member of each adaptive circuit in this universal set, then for the polynomial hierarchy PH we have PH = BPP = BQP.

The proof goes something like this: the ability to perform our desired efficient density computations (which we assumed, for the sake of contradiction, we have for all quantum circuits) is equivalent to the ability to determine the acceptance probability of a quantum computation, which was shown in the paper “Determining Acceptance Possibility for a Quantum Computation is Hard for the Polynomial Hierarchy” by Fenner, Green, Homer and Pruim to be hard for the class \text{coC=P}. Thus we would have \text{coC=P} \subseteq \text{BPP}. But it’s known that \text{PH} \subseteq \text{BPP}^{\text{coC=P}}, and so \text{PH} \subseteq \text{BPP}^{\text{BPP}} = \text{BPP}. That is, we would have \text{PH} = \text{BPP} = \text{BQP}, and the polynomial hierarchy would collapse to \Sigma^P_2, since \text{BPP} \subseteq \Sigma^P_2 (for more on these more obscure complexity classes, see here). Again, this is our “contradiction”: while it hasn’t been proven, it is widely believed, with strong supporting evidence, that the polynomial hierarchy does not collapse, as would be the case if all quantum circuits were classically simulatable. This provides a strong argument that not all quantum circuits are classically simulatable, which is precisely what we set out to demonstrate.
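Chained together, the inclusions in the argument read as a single derivation: the assumed simulation puts \text{coC=P} inside \text{BPP}, which pulls the whole polynomial hierarchy down with it:

```latex
\text{coC=P} \subseteq \text{BPP}
\quad \Longrightarrow \quad
\text{PH} \;\subseteq\; \text{BPP}^{\text{coC=P}}
          \;\subseteq\; \text{BPP}^{\text{BPP}}
          \;=\; \text{BPP}
          \;\subseteq\; \Sigma^P_2 .
```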

Conclusion

Terhal and DiVincenzo actually go even further and show that there is a certain class of constant-depth quantum circuits that are unlikely to be simulatable by classical circuits – this, indeed, seems to provide even stronger evidence for quantum supremacy. This argument, which is somewhat more complex, uses the idea of teleportation and focuses on a particular class of circuits implementable by a certain restricted set of quantum gates. If you’re interested, I highly recommend reading their paper, where this is explained.

Recommended reading

  • Terhal, Barbara M. and David P. DiVincenzo. “Adaptive quantum computation, constant depth quantum circuits and Arthur-Merlin games.” Quantum Info. Comput. 4, 2 (2004): 134–145.
  • Bravyi, Sergey, David Gosset, and Robert König. “Quantum advantage with shallow circuits.” Science 362, 6412 (2018): 308–311.
  • Harrow, Aram W. and Ashley Montanaro. “Quantum computational supremacy.” Nature 549 (2017): 203–209.
  • Boixo, Sergio et al. “Characterizing quantum supremacy in near-term devices.” Nature Physics 14 (2018): 595–600.

References


  • Terhal, Barbara M. and David P. DiVincenzo. “Adaptive quantum computation, constant depth quantum circuits and Arthur-Merlin games.” Quantum Info. Comput. 4, 2 (2004): 134–145.
  • Fenner, Stephen et al. “Determining Acceptance Possibility for a Quantum Computation is Hard for the Polynomial Hierarchy.” https://arxiv.org/abs/quant-ph/9812056 (1998).
  • Bravyi, Sergey, David Gosset, and Robert König. “Quantum advantage with shallow circuits.” Science 362, 6412 (2018): 308–311.
