Achieving quantum supremacy with sparse and noisy commuting quantum computations

The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.


Introduction
Over the last few years there has been significant attention devoted to devising experimental demonstrations of quantum supremacy [33]: namely a quantum computer solving a computational task that goes beyond what a classical machine could achieve. This is, in part, driven by the hope that a clear demonstration of quantum supremacy can be performed with a device that is intermediate between the small quantum circuits that can currently be built and a full-scale quantum computer. The theoretical challenge that this poses is twofold: firstly we must identify the physically least expensive quantum computations that are classically unachievable; and we must also determine if this advantage can be maintained in the presence of physical noise.
There are several intermediate quantum computing models which could be used to demonstrate quantum supremacy, including simple linear-optical circuits (the boson sampling problem [1]); the one clean qubit model [31]; and commuting quantum circuits, a model known as "IQP" [41,10]. In each of these cases, it has been shown that efficient classical simulation of the simple quantum computations involved is not possible, assuming that the polynomial hierarchy does not collapse. However, these results only prove hardness of simulating the ideal quantum computations in question up to a small relative error in each output probability.
Any quantum experiment will be subject to noise, and the noisy experiment could be substantially easier to simulate than the noise-free experiment. In an attempt to address this, it was shown in [1,11] that, assuming certain additional complexity-theoretic conjectures, the probability distributions resulting from boson sampling and IQP circuits are still hard to sample from classically, even up to small total variation distance. For example, in [11] the following two conjectures were made, one native to condensed-matter physics, the other to computer science:

Conjecture 1. Consider the partition function of the general Ising model,

Z(ω) = Σ_{z ∈ {±1}^n} ω^(Σ_{i<j} w_ij z_i z_j + Σ_k v_k z_k),

where the exponentiated sum is over the complete graph on n vertices, the w_ij and v_k are real edge and vertex weights, and ω ∈ C. Let the edge and vertex weights be picked uniformly at random from the set {0, . . . , 7}.
Then it is #P-hard to approximate |Z(e^{iπ/8})|² up to multiplicative error 1/4 + o(1) for a 1/24 fraction of instances, over the random choice of weights.

It was shown in [11] that, if we assume either Conjecture 1 or Conjecture 2, together with the widely-believed complexity-theoretic assumption that the polynomial hierarchy does not collapse, then there is no polynomial-time classical algorithm for approximately sampling from the output distributions of IQP circuits. That is, if p is the distribution that the noise-free quantum circuit would produce, it is hard for a classical machine to sample from any distribution p̃ such that ‖p̃ − p‖₁ ≤ ε, for some small ε, where the size of ε depends on the conjectures one is willing to assume. These results imply that a fault-tolerant implementation of IQP sampling or boson sampling can be made resilient to noise while (potentially) maintaining a quantum advantage.
Although this was a significant step towards the near-term possibility of quantum supremacy, these results still suffer from some shortcomings:

1. They do not yet resolve the question of whether realistically noisy, non-fault-tolerant quantum experiments are likely to be hard to simulate classically. Indeed, applying a small amount of independent noise to each qubit can readily lead to a distribution p̃ which is much further from p than the regime in which the results of [1,11] are applicable.
2. The results of [11] assume that all pairs of qubits are able to interact. Such long-range interactions incur significant physical resource overheads for most computational architectures.

Our results
Here we study the behaviour of IQP circuits which are implemented on hardware with spatial locality constraints, and in the presence of noise. These are critical questions for any realistic experimental implementation.
An IQP ("instantaneous quantum polynomial-time") circuit is a quantum circuit of the form C = H^⊗n D H^⊗n, where H is the Hadamard gate and D is a diagonal matrix produced from poly(n) diagonal gates. The IQP sampling problem is to sample from the distribution p on n-bit strings produced by applying C to the initial state |0⟩^⊗n, then measuring each qubit in the computational basis. (Throughout, p denotes this original noise-free distribution.) Our first main result is the following:

Theorem 3 (informal). There is a family of commuting quantum circuits on n qubits where: with high probability, a random circuit picked from the family contains O(n log n) 2-qubit gates and can be implemented on a 2d square lattice in depth O(√n log n); and a constant fraction of circuits picked from the family cannot be simulated classically unless the polynomial hierarchy collapses to the third level, assuming a "sparse" version of Conjecture 1. Here "simulate" means to approximately sample from the output distribution of the circuit, up to ℓ₁ distance ε, for some constant ε > 0.
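The definition above can be made concrete with a brute-force simulation, useful only for very small n. The sketch below is our own illustration (not from the paper): it computes the noise-free output distribution p of C = H^⊗n D H^⊗n directly from the diagonal function f, using the identity ⟨y|C|0^n⟩ = 2^{−n} Σ_x (−1)^{x·y} f(x).

```python
import itertools

def iqp_output_distribution(n, f):
    """Noise-free output distribution of C = H^(tensor n) D H^(tensor n)
    applied to |0^n>, where D is diagonal with <x|D|x> = f(x).
    Brute force over all 2^n basis states: for illustration only."""
    xs = list(itertools.product([0, 1], repeat=n))
    dist = {}
    for y in xs:
        # Amplitude <y|C|0^n> = 2^{-n} sum_x (-1)^{x.y} f(x)
        amp = sum(f(x) * (-1) ** sum(a * b for a, b in zip(x, y))
                  for x in xs) / 2 ** n
        dist[y] = abs(amp) ** 2
    return dist

# Example: D is a single CZ gate on qubits 0 and 1 (f(x) = -1 iff x = 11),
# which gives the uniform distribution on two bits.
dist = iqp_output_distribution(2, lambda x: -1 if x == (1, 1) else 1)
```

The exponential cost of this enumeration is, of course, exactly the point: the sampling problem is only interesting because no such brute-force approach scales.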
In the above we use "2d square lattice" as shorthand for an architecture consisting of a square lattice of √n × √n qubits, where all gates are performed across neighbours in the lattice. To prove Theorem 3 we proceed as follows. First, we show that the "dense" IQP circuits from [11], which contained O(n²) gates, can be reduced to "sparse" circuits of O(n log n) long-range gates while still likely being hard to simulate. Second, we show that a random circuit of this form can be parallelised to depth O(log n) with high probability. Third, we apply the results of [5] to show that sorting networks can be used to implement an arbitrary quantum circuit of depth t on a 2d square lattice in depth O(t√n). (Note that this final circuit is no longer an IQP circuit, as it contains SWAP gates.) While it might seem that sparse IQP sampling should be more likely to be classically simulable, it is possible that the opposite is true. The complexity-theoretic hardness arguments rely on the conjecture that complex-temperature partition functions of the Ising model retain #P-hardness on random graphs [11]. It is known that there is a range of related #P-hard and NP-hard graph problems that admit an efficient approximation for random dense graphs [4,2], while retaining their hardness on sparse graphs. However, it should be stressed that there are no known efficient approximation methods for the complex-temperature partition functions associated with sparse- and dense-IQP sampling.
It remains to be seen whether a more sparse version of IQP sampling can be devised while retaining its classical hardness. Standard tensor network contraction techniques would allow any output probability of the above circuits on a square lattice to be classically computed in time O(2^(D√n)) for circuits of depth D, so achieving a similar hardness result for D = o(√n) would violate the counting exponential time hypothesis [9,15]. The challenge remains to remove a factor of log n from the depth while maintaining the anticoncentration requirements of [1,11].
It is worth comparing Theorem 3 with results of Brown and Fawzi [12,13]. In [13], these authors show that random noncommuting quantum circuits with O(n log² n) gates are good decouplers (a somewhat similar notion of randomisation), and that such circuits can be parallelised to depth O(log³ n) with high probability. Using a sorting network construction, these circuits could be implemented on a 2d square lattice in depth O(√n log³ n). Our result thus saves an O(log² n) factor over [13]. One reason for this is that the commutative nature of IQP circuits makes them easier to parallelise. However, in [12], Brown and Fawzi also study an alternative model for random circuits, where gates are applied at each timestep according to a random perfect matching on the complete graph, and show that this achieves a weaker notion of "scrambling" in depth O(log n). Although it is not clear that this notion would in itself be sufficient for a complexity-theoretic hardness argument, it is thus plausible that our results could be extended to noncommuting circuits. It should also be noted that recent numerical evidence [7] suggests that anticoncentration can be achieved on a square lattice with circuits of depth O(√n), potentially allowing for random circuit sampling quantum supremacy experiments.
Next we study the effect of noise on IQP circuits. We consider a very simple noise model: independent depolarising noise applied to every qubit at the end of the circuit. First the IQP circuit is applied to |0⟩^⊗n as normal; let |ψ⟩ be the resulting state. Then the qubit depolarising channel D_ε with noise rate ε is applied to each qubit of |ψ⟩. This channel is defined by D_ε(ρ) = (1 − ε)ρ + ε I/2 for any mixed state ρ of a single qubit; with probability 1 − ε, the input state is retained, and with probability ε, it is discarded and replaced with the maximally mixed state. Finally, each qubit is measured in the computational basis to give a distribution p̃. This process is equivalent to sampling x from the noise-free distribution p and then flipping each bit of x with independent probability ε/2. To see this, first note that the operation of measuring a qubit in the computational basis commutes with D_ε: if we write M(ρ) = |0⟩⟨0|ρ|0⟩⟨0| + |1⟩⟨1|ρ|1⟩⟨1| for this measurement operation, then M ∘ D_ε = D_ε ∘ M. Second, when applied to |0⟩⟨0|, D_ε replaces it with the state (1 − ε/2)|0⟩⟨0| + (ε/2)|1⟩⟨1|, i.e. applies a NOT operation to the state with probability ε/2; the same is true when applied to |1⟩⟨1|.
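As a sanity check of this classical view of the noise, the following sketch (ours, not from the paper) implements it directly: sample from the noise-free distribution, then flip each measured output bit independently with probability ε/2.

```python
import random

def add_output_noise(samples, eps, rng=None):
    """Classical view of end-of-circuit depolarising noise with rate eps:
    flip each measured output bit independently with probability eps/2."""
    rng = rng or random.Random(0)
    return [tuple(b ^ (rng.random() < eps / 2) for b in x) for x in samples]

# eps = 0 leaves samples untouched; any eps > 0 flips bits at rate eps/2.
clean = [(0, 1, 0)] * 1000
noisy = add_output_noise(clean, 0.2)
```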
We also remark that, for IQP circuits, this notion of noise is equivalent to applying depolarising noise to the qubits at the start of the computation. This is because noise at the start of the computation is equivalent to replacing the initial state |0^n⟩ with a state |y⟩, where y is distributed as a noisy version of 0^n, and ⟨x|C|y⟩ = ⟨x ⊕ y|C|0^n⟩.
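This input/output equivalence is easy to verify numerically for a toy circuit. In the sketch below (our own check, with arbitrarily chosen diagonal phases), basis-state indices are interpreted as bit strings, so the integer XOR x ^ y is the bitwise sum x ⊕ y.

```python
import numpy as np

# A toy 2-qubit IQP circuit C = (H tensor H) D (H tensor H), with an
# arbitrary diagonal D (phases chosen arbitrarily for this check).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)
D = np.diag(np.exp(1j * np.pi / 8 * np.array([0, 3, 5, 6])))
C = H2 @ D @ H2

# <x|C|y> depends only on x XOR y, so flipping input bits (noise at the
# start) has the same effect as flipping output bits (noise at the end).
for x in range(4):
    for y in range(4):
        assert np.isclose(C[x, y], C[x ^ y, 0])
```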
We first show that if fault-tolerance techniques are not used, then "most" IQP circuits can be classically simulated approximately if any constant amount of noise is applied in this model. The notion of approximate simulation we use is sampling up to accuracy δ in ℓ₁ norm, i.e. sampling from some distribution p′ such that Σ_x |p′_x − p̃_x| ≤ δ. (Throughout, p′ denotes any distribution that is close to p̃ in ℓ₁ norm.) We will show:

Theorem 4. Consider a unitary circuit C = H^⊗n D H^⊗n whose diagonal part D is defined by ⟨x|D|x⟩ = f(x) for some f : {0, 1}^n → C such that f(x) can be computed in time poly(n) for any x. Let the probability of receiving output x after applying C to input |0⟩^⊗n be p_x, and assume that Σ_x p_x² ≤ α · 2^{−n} for some α. Further assume C experiences independent depolarising noise on each qubit with rate ε as defined above. Then T samples can be generated from a distribution which approximates the noisy output probability distribution up to δ in ℓ₁ norm, in time n^{O(log(α/δ)/ε)} + T · poly(n).
The parameter α occurring in Theorem 4 measures how spread out the output probability distribution of C is. It is shown in [11] that, for random IQP circuits picked from some natural distributions, the expected value of α is O(1). Hence, for an average IQP circuit picked from one of these distributions, and for fixed δ and ε, the runtime of the classical algorithm is polynomial in n. The circuits that were proven hard to simulate in [11] (assuming some conjectures in complexity theory) have α = O(1). So Theorem 4 shows that precisely those circuits which are hard to simulate in the absence of noise become easy in the presence of noise.
This theorem actually covers cases more general than IQP, since computing f could even require ancilla qubits that are not available in the usual IQP model. Indeed, as well as the application to IQP, the ideas behind Theorem 4 can also be used to show that, in the absence of fault-tolerance, Simon's algorithm [42] can be simulated classically if an arbitrarily small amount of depolarising noise is applied to each qubit. The proof of Theorem 4 uses Fourier analysis over Z_2^n to show that the noisy output probability distribution p̃ can be approximated well given knowledge of only a small number of its Fourier coefficients, because the high-order coefficients are exponentially suppressed by the noise.
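The mechanism behind this proof can be illustrated directly for tiny n. Writing p̂(s) = Σ_x p(x)(−1)^{s·x} for the Fourier coefficients of p over Z_2^n, bit-flip noise with rate ε/2 per bit damps each coefficient to (1 − ε)^{|s|} p̂(s), so truncating at some degree d introduces only an exponentially small error. The sketch below is ours and is brute-force (it is not the paper's algorithm, which gets by with only polynomially many coefficients); it reconstructs the noisy distribution from low-degree coefficients.

```python
import itertools

def fourier_coeffs(p, n):
    """p_hat(s) = sum_x p(x) * (-1)^{s.x}, the Fourier transform over Z_2^n."""
    xs = list(itertools.product([0, 1], repeat=n))
    return {s: sum(p[x] * (-1) ** sum(a * b for a, b in zip(s, x))
                   for x in xs) for s in xs}

def noisy_truncated(p, n, eps, d):
    """Approximate the eps-noisy version of p using only Fourier
    coefficients of degree <= d: bit-flip noise with rate eps/2 damps
    p_hat(s) by (1 - eps)^{|s|}, so high degrees barely contribute."""
    xs = list(itertools.product([0, 1], repeat=n))
    ph = fourier_coeffs(p, n)
    return {x: sum((1 - eps) ** sum(s) * ph[s]
                   * (-1) ** sum(a * b for a, b in zip(s, x))
                   for s in xs if sum(s) <= d) / 2 ** n
            for x in xs}

# Deterministic p concentrated on 00; with eps = 0.5 the exact noisy
# distribution assigns (1 - eps/2)^2 = 0.5625 to output 00.
p = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.0}
approx = noisy_truncated(p, 2, eps=0.5, d=2)
```

Taking d = n reproduces the noisy distribution exactly; the content of Theorem 4 is that a much smaller d = O(log(α/δ)/ε) already suffices up to ℓ₁ error δ.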
Our final result is that this notion of noise can be fought using simple ideas from classical error correction, while still remaining within the framework of IQP. We show that for any IQP circuit C on n qubits, we can produce a new IQP circuit C′ on O(n) qubits in polynomial time such that, if depolarising noise is applied to every qubit of the output of C′, we can nevertheless sample from a distribution which is close to p up to arbitrarily small ℓ₁ distance. This holds for any noise rate ε < 1, contrasting with standard fault-tolerance thresholds. (However, the notion of noise here is different from, and substantially simpler than, the usual models.) Crucially, this noise-tolerance can be combined with the notion of approximation used in [11] to show that, under the same complexity assumptions as [11], it is hard for a classical algorithm to approximately sample from the noisy output distribution of C′, up to small ℓ₁ distance.

Theorem 5. Assume either Conjecture 1 or Conjecture 2. Let C = H^⊗n D H^⊗n be an IQP circuit which experiences independent depolarising noise on each qubit with rate ε as defined above, for some ε < 1. Then there exists δ > 0 such that, if there is a polynomial-time classical algorithm which samples from the output probability distribution of all noisy IQP circuits C of this form up to accuracy δ in ℓ₁ norm, the polynomial hierarchy collapses to its third level.
Local noise more general than that arising from single-qubit depolarising channels may also be dealt with via our method of classical error correction. Writing x for a sample from the noise-free distribution p, and x ⊕ e for a sample from p̃, we see that e is distributed such that Pr[e = e′] = (ε/2)^{|e′|} (1 − ε/2)^{n−|e′|}. But in fact we show in Section 5 that any local noise model that makes e overwhelmingly likely to have small Hamming weight would equally well be tolerated by the incorporation of classical error correction.
Thinking of an IQP circuit as a Hamiltonian which is diagonal in the X basis, the error-correction approach we use can be viewed as encoding the terms in the Hamiltonian with a classical error-correcting code. The idea of encoding a Hamiltonian in this way with a classical or quantum code has previously been used to protect adiabatic quantum algorithms against noise (see [30] and references therein). In the setting of IQP, the analysis becomes particularly clean and simple.

Related work and perspective
Circuit depth and optimal sparse IQP sampling. Below we improve on the results of [11], extending the hardness results for IQP sampling introduced there to sparsely connected circuits. The motivation for this is both theoretical and practical: we want both to improve the likelihood that the hardness conjectures made are correct, and to decrease the physical requirements of the IQP sampling protocol.
The complexity of dense IQP sampling depends on the conjecture that the average-case complexity of complex-temperature Ising model partition functions over dense graphs is #P-hard; that is, that the average- and worst-case complexities coincide for a large fraction of randomly chosen graphs. It is natural to assume that the complexity of combinatorial problems on graphs increases with the density of the graph instances; however, this is known not always to be the case. A number of key combinatorial problems that do not in general admit (classical) polynomial-time approximation schemes do admit such approximations on dense instances [4,2]. While these results do not apply to the hardness conjectures made in [11], they are a clear incentive to determine to what extent the IQP sampling argument can be applied to Ising models on sparse graphs.
In [10] it was shown that IQP sampling, up to relative errors, could not be performed efficiently classically without a collapse of the polynomial hierarchy. It was also noted in [10] that this result still holds for IQP circuits with only nearest-neighbour gates arranged on a 2d lattice. If this result could be extended to apply to classical simulations that are reasonably close in total variation distance, it would be a massive improvement over the results of [11]: such circuits could be implemented in constant depth with nearest-neighbour interactions, suggesting an exceptional target for quantum supremacy experiments. Unfortunately, the techniques used in [1,11] to argue for hardness of simulation up to small total variation distance require the output probability distribution of the circuit to "anticoncentrate" with high probability, i.e. to be spread out, and it does not appear that IQP circuits on a square lattice display sufficient anticoncentration for these techniques to be applicable. Therefore, Theorem 3 is proven by showing that sparse circuits of O(n log n) long-range gates anticoncentrate, and then showing that such circuits can be implemented on a 2d square lattice of √n × √n qubits in depth O(√n log n).

Recent results relating lower bounds for computing sparse Tutte polynomials to the exponential time hypothesis demonstrate that this is likely close to the optimal depth. Last year it was shown that precise evaluations of Tutte polynomials on sparse graphs cannot be performed in time exp(o(n)) without a violation of the counting equivalent of the exponential time hypothesis [9,15]. That is, if there were a sub-exponential runtime for Tutte polynomials on sparse graphs at all #P-hard points, then key NP-hard problems such as 3SAT could also be solved in subexponential time. The Ising models studied here are examples of #P-hard points of complex-variable Tutte polynomials [40].
Tensor network contraction techniques can be used to show that any output probability of any quantum circuit of depth D implemented on a 2d square lattice can be evaluated precisely classically in time O(2^(D√n)) [29], suggesting that if it were possible to implement arbitrary sparse IQP circuits in depth o(√n), then the (counting) exponential time hypothesis would likely be violated.
The question remains whether it is possible to identify a sampling problem that matches the O(√n) depth bound while also remaining classically difficult to simulate. Recent numerical studies indicate that it might be possible to find random circuits, drawn from universal gate sets, that anticoncentrate at depth O(√n) on a 2d square lattice [7]. However, an analytic proof that this is possible remains an open question. Finally, it should be noted that a recent paper has suggested that hardness of approximate IQP sampling up to small total variation distance could be proven for IQP circuits that do not necessarily satisfy the anticoncentration property [22]. In that work, the anticoncentration property is replaced with the assumption that most amplitudes corresponding to the results of measurements applied to a 2d "brickwork" state, which is universal for measurement-based quantum computing, are hard to approximately compute. The approach of [22] leads to a lower-depth circuit than ours, but with a polynomial increase in the number of qubits; and as the hardness assumption used is somewhat different, the results are not directly comparable with ours. Subsequently to the first version of this paper, Bermejo-Vega et al. [6] have described several other constant-depth architectures which have a similar polynomial increase in size, but whose hardness is based on conjectures closer to those we use here.
Hardness results for noisy IQP. It was recently shown by Fujii and Tamate [21], using the theory of quantum fault-tolerance, that the distributions produced by IQP circuits are classically hard to simulate, even under a small amount of noise. That is, a quantum channel N is applied to each qubit of the output state such that ‖N − id‖ ≤ ε for a sufficiently small constant ε. Fujii and Tamate [21] show that the resulting distribution cannot be sampled from classically unless the polynomial hierarchy collapses to the third level. Theorem 4 may appear to be in conflict with this result; however, this is not the case. Fujii and Tamate's result shows that it is classically hard to sample from the noisy output distribution p̃ of arbitrary IQP circuits up to a small relative error in each probability. Theorem 4 shows that for random IQP circuits these distributions can nevertheless be sampled from approximately, if the notion of approximation used is ℓ₁ distance.
Note that the notion of multiplicative approximation used in [21] (and also [10]) is a very strong one: for example, if any of the output probabilities are 0, this 0 must be reproduced exactly in the sampled distribution. By contrast, the ℓ₁ distance is a physically realistic measure of distance: for example, if ‖p̃ − p‖₁ ≤ ε, then Ω(1/ε) samples are required to distinguish between p̃ and p. The framework of relative-error approximation appears naturally in [21] because that work applies standard quantum fault-tolerance within a postselected version of IQP, and approximation up to small ℓ₁ error does not combine well with postselection. In order to show that noisy versions of IQP circuits are hard to simulate up to small ℓ₁ error, it appears necessary to use a notion of fault-tolerance which is itself

                Multiplicative approximation            Additive approximation
Noise-free      Hard (if PH does not collapse) [10]     Hard (with stronger complexity assumptions) [11]
Noisy           Hard (if PH does not collapse) [21]     Hard (general circuits, similar assumptions) / polynomial-time (random circuits)

Table 1: Comparison of hardness results for simulating IQP circuits classically. "Multiplicative approximation" means the task of sampling from the output distribution up to small relative error in each probability; "additive approximation" is the task of sampling from the output distribution up to small ℓ₁ distance. "Noisy" means depolarising noise with rate ε applied to each qubit of the output state, for some small fixed ε > 0. PH is short for "polynomial hierarchy".
native to IQP, as we do in Theorem 5.
It was shown in [11] that classical sampling from the output distribution of random IQP circuits up to ℓ₁ distance smaller than a universal constant c is hard, assuming either of two reasonable average-case hardness conjectures. Again, this is not in conflict with the classical simulation results given here: applying noise to the output distributions of the circuits which are hard to simulate in [11] could change them dramatically. Indeed, if depolarising noise with rate ε is applied to each qubit of an n-qubit quantum state, the distance between the resulting state and the original state could be as high as Ω(εn). So a constant amount of noise on each qubit is easily sufficient to leave the regime which was shown to be hard in [11].
The results obtained here are compared with previously known results in Table 1.
Classical simulation of general quantum circuits. The theory of quantum fault-tolerance states that there is a constant noise threshold below which universal quantum computation is possible. A number of authors have provided converses to this, in a variety of different models [35,44,14,26]. These works show that, if a quantum circuit experiences sufficient noise, either it is simulable classically, or its output is essentially independent of its input. Perhaps the most relevant of these results to the case of IQP circuits is that of Razborov [35], which considers arbitrary quantum circuits containing gates of fan-in at most k, and a model where depolarising noise with rate larger than 1 − 1/k is applied to each qubit after each layer of gates in the circuit. It is shown in [35] that, after O(log n) layers of gates, the output state of n qubits produced by the circuit essentially does not depend on the input to the circuit. Theorem 4 is a rare case where there is no threshold noise rate: there is a classical algorithm which approximately samples from the output distribution for any noise rate ε > 0. This does not contradict standard fault-tolerance results, because fault-tolerance techniques have not been applied to the IQP circuits which are classically simulable.
Boson sampling. The boson sampling problem of Aaronson and Arkhipov [1] is defined as follows. For an m × n column-orthonormal matrix U, approximately sample from the distribution D_bs on sequences S = (s_1, . . . , s_m), where the s_i are nonnegative integers which sum to n, given by

Pr_{D_bs}[S] = |perm(U_S)|² / (s_1! · · · s_m!),     (2)

where U_S is the n × n submatrix of U containing s_i copies of the i'th row of U, for all i = 1, . . . , m, and perm(U_S) is the permanent of U_S.
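For concreteness, the distribution D_bs can be computed directly for very small instances. The sketch below is our own illustration of the definition above (the permanent is computed naively, which is only feasible for tiny n).

```python
import itertools
import math

def permanent(M):
    """Permanent via direct sum over permutations (tiny n only)."""
    n = len(M)
    return sum(math.prod(M[i][pi[i]] for i in range(n))
               for pi in itertools.permutations(range(n)))

def bosonsampling_prob(U, S):
    """Pr[S] = |perm(U_S)|^2 / (s_1! ... s_m!), where U_S contains s_i
    copies of row i of the m x n column-orthonormal matrix U."""
    U_S = [row for row, s in zip(U, S) for _ in range(s)]
    return abs(permanent(U_S)) ** 2 / math.prod(math.factorial(s) for s in S)

# One photon (n = 1) in two modes (m = 2): the outcome probabilities are
# just the squared magnitudes of the single column's entries.
U = [[0.6], [0.8]]
```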
Kalai and Kindler have given evidence suggesting that, for small errors in the matrix U, boson sampling is classically simulable [25] (see also [24], and [34] for a recent study of more physically-motivated noise models). To be precise, they show the following. Let X and U be random Gaussian matrices (n × n matrices whose entries are picked from a normalised Gaussian distribution), and consider the ε-noisy matrix U_ε = √(1 − ε) U + √ε X. They show that, for any constant ε > 0, perm(U_ε) is well approximated by a low-degree polynomial in the entries of U, and the corresponding probabilities p can be efficiently approximated classically to within a constant.
It was also shown by Leverrier and García-Patrón [28], and independently Kalai and Kindler [25], that for considerably smaller levels of imperfection (e.g. ε = 1/n), the output of the boson sampling circuit is far from the intended output. Note that, in the intermediate regime where ε = o(1) but ε = ω(1/n), the output of the circuit could still be hard to approximate while being far from the intended output. On the other hand, it was shown by Arkhipov [3] (see also [39]) that if ε = o(1/n²), the ℓ₁ distance between the noisy distribution and the original distribution is o(1).
As discussed in [25], the results of Kalai and Kindler do not quite imply that the boson sampling problem as described in [1] can be solved classically with a sufficiently large (but constant) amount of noise. The results of [25] cannot simply be averaged over S to obtain a similar low-degree polynomial approximation to D_bs, as they take into account neither the normalisation term in (2) nor the possibility of repeated columns in S. Nevertheless, they provided the first rigorous evidence that boson sampling in the presence of noise could be classically simulable. Based on this evidence, it was conjectured in [24] that "small noisy quantum circuits and other similar quantum systems" could be approximated by low-degree polynomials. The present work proves this conjecture for the first time for a nontrivial class of quantum circuits, using "noise sensitivity" ideas similar to those of [25].
The noise model. Noise models are deeply specific to any given implementation of a quantum computation. The noise model considered in this paper is relatively simple: a perfect implementation of the desired circuit is followed by independent depolarising noise on each qubit. As this noise occurs at the end of the circuit, it results in independent bit-flip noise on each measured output bit.
Despite its simplicity, it is a reasonable testbed for several physically relevant scenarios. A common noise model, and the model in which the fault-tolerance threshold theorem is proven, would have noise applied before and after every gate in the circuit, rather than just at the beginning or end as here.
If the intermediate errors are dephasing errors, then this scenario is equivalent to the model studied in this paper. This follows from two facts. First, sequential dephasing maps compose into another dephasing map, albeit one with a higher probability of error. Second, dephasing maps commute with the diagonal gates in an IQP circuit, so they can be "commuted through" to the Hadamard gates, where they become bit-flip channels. Finally, dephasing at the beginning and end of the circuit is not observable.
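Both facts are easy to check numerically for a single qubit. The sketch below is our own verification: it checks that two sequential dephasing maps compose into a single dephasing map with error probability p₁(1 − p₂) + p₂(1 − p₁), and that dephasing commutes with an arbitrary diagonal gate.

```python
import numpy as np

def dephase(rho, p):
    """Single-qubit dephasing channel: apply Z with probability p."""
    Z = np.diag([1.0, -1.0])
    return (1 - p) * rho + p * Z @ rho @ Z

rho = np.array([[0.5, 0.3], [0.3, 0.5]])
p1, p2 = 0.1, 0.2

# Fact 1: two sequential dephasing maps compose into a single dephasing
# map, with combined error probability p' = p1 (1 - p2) + p2 (1 - p1).
pp = p1 * (1 - p2) + p2 * (1 - p1)
assert np.allclose(dephase(dephase(rho, p1), p2), dephase(rho, pp))

# Fact 2: dephasing commutes with any diagonal gate D.
D = np.diag([1.0, np.exp(1j * np.pi / 8)])
assert np.allclose(dephase(D @ rho @ D.conj().T, p1),
                   D @ dephase(rho, p1) @ D.conj().T)
```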
The results of Section 5 demonstrate that IQP circuits can be made fault-tolerant to dephasing errors using only marginally larger IQP circuits. However, it is not clear that more general noise models, for example those allowing for depolarising errors between gates, can be made correctable within IQP (unless, of course, the entire IQP circuit is trivially regarded as a single gate acting on the whole system). For example, consider a circuit made up of CZ gates, each of which has depolarising noise applied to both of its qubits before and after the gate (call these NCZ gates). NCZ gates do not commute with one another, even when applied to the initial state |+⟩^⊗n. This opens up the intriguing possibility that noise could actually increase the power of IQP circuits, by allowing them to sample from otherwise inaccessible distributions.
Perspective on these results and quantum supremacy. We feel that our results highlight the challenges for quantum supremacy experiments in the presence of noise, and also the challenges for skeptics attempting to prove that quantum supremacy is impossible. On the one hand, IQP circuits that are apparently hard to simulate classically become simulable in polynomial time under even a very small amount of noise, if no fault-tolerance is used. On the other hand, correcting noise of a rather natural form can be achieved using only classical ideas, with no need for the full machinery of quantum fault-tolerance, and only a small increase in the size of the circuit. The setting of IQP serves as a simple laboratory in which to explore these issues, which we expect will also apply to other proposed experiments. Another important challenge, as for all sampling problems, is to find a simple method for verifying that an experimental implementation of IQP sampling has been carried out correctly. An IQP verification procedure was proposed in [23], but it requires the preparation of states going beyond the IQP model.
We finally remark that, although our classical simulation of noisy IQP circuits runs in polynomial time, it is not remotely efficient in practice for reasonable noise rates (e.g. ε ≈ 0.01), as the runtime exponent depends linearly on 1/ε. A suitable experiment could therefore still demonstrate quantum supremacy over this algorithm, even without an exponential separation being possible.

Sparse IQP circuits
In this section we discuss how to parallelise IQP circuits and implement them on a square lattice. The first step is to replace the "dense" IQP circuits studied in [11] with a sparser type of circuit, which will be easier to parallelise. We consider the following method of choosing the diagonal part of a random IQP circuit C on n qubits:

• For each possible choice of a pair (i, j) of distinct qubits, include a gate across those qubits with probability p = γ(ln n)/n, for some fixed γ > 0.
• Each qubit has a 1-qubit gate acting on it, which is picked uniformly at random from a fixed set of diagonal gates.

Call an IQP circuit picked from this distribution sparse. Sparse IQP circuits contain O(n log n) gates with high probability and are a variant of the "Ising-like" class of IQP circuits considered in [11]. Indeed, for any circuit C of the above form, ⟨0|C|0⟩ can be written in terms of integer weights w_{ij}, v_k, and is easily seen to correspond to an Ising model partition function Z_C(ζ) (cf. (1)). We will need the following key technical lemma, a sparse counterpart of anticoncentration results proven in [11]; we prove it in Appendix A.

Lemma 6. Let C be a random sparse IQP circuit. Then E_C[|⟨0|C|0⟩|^2] = 2^{-n} and, for a large enough constant γ, E_C[|⟨0|C|0⟩|^4] ≤ 5 · 2^{-2n}.

By the Paley-Zygmund inequality, which states that Pr[R ≥ α E[R]] ≥ (1 − α)^2 E[R]^2 / E[R^2] for any random variable R with finite variance and any 0 < α < 1, we have that, for a large enough constant γ, Pr[|⟨0|C|0⟩|^2 ≥ α · 2^{-n}] ≥ (1 − α)^2/5. We use this within the following result from [11] (slightly rephrased):

Corollary 7. Let F be a family of IQP circuits on n qubits. Pick a random circuit C by choosing a circuit from F at random according to some distribution, then appending X gates on a uniformly random subset of the qubits. Assume that there exist universal constants α, β > 0 such that Pr_C[|⟨0|C|0⟩|^2 ≥ α · 2^{-n}] ≥ β. Further assume there exists a classical polynomial-time algorithm A which, for any IQP circuit C of this form, can sample from a probability distribution which approximates the output probability distribution of C up to additive error ε = αβ/8 in ℓ_1 norm. Then there is a FBPP^NP algorithm which, given access to A, approximates |⟨0|C|0⟩|^2 up to relative error 1/4 + o(1) on at least a β/2 fraction of circuits C.
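The random choice of 2-qubit gates above is simply the sampling of an Erdős-Rényi graph. The following sketch (illustrative only; the value of γ is an arbitrary choice) samples the gate positions and checks that the number of 2-qubit gates concentrates around C(n,2) · p = γ(n − 1)(ln n)/2 = O(n log n).

```python
import math
import random

def sample_sparse_pairs(n, gamma, rng):
    """Include each of the C(n,2) possible 2-qubit gates independently
    with probability p = gamma * ln(n) / n."""
    p = gamma * math.log(n) / n
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def expected_gate_count(n, gamma):
    """E[#gates] = C(n,2) * p = gamma * (n-1) * ln(n) / 2 = O(n log n)."""
    return math.comb(n, 2) * gamma * math.log(n) / n

rng = random.Random(0)
n, gamma = 200, 3.0
pairs = sample_sparse_pairs(n, gamma, rng)
# For these values the count concentrates around roughly 1580.
```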
In this corollary, FBPP^NP is the complexity class corresponding to polynomial-time classical randomised computation, equipped with an oracle to solve NP-complete problems. By inserting the parameters from Lemma 6, we see that there are universal constants 0 < ε, c < 1 such that the following holds: if there is a classical algorithm A which can sample from a probability distribution approximating the output probability distribution of any such circuit C up to additive error ε in ℓ_1 norm, then there is a FBPP^NP algorithm which, given access to A, approximates |⟨0|C|0⟩|^2 up to relative error 1/4 + o(1) on at least a c fraction of sparse IQP circuits C. Note that, by a union bound, we can weaken the requirement that the algorithm A works for all such circuits C to the requirement that it works for a large constant fraction of them, at the expense of reducing the constant c.
We conjecture that this latter problem is #P-hard. This corresponds to approximating the partition function of the Ising model up to small relative error, for random graphs that are relatively sparse (yet still connected with high probability), which is a similar hardness assumption to one considered in [11]. If this conjecture holds, then the existence of such a classical sampler would imply collapse of the polynomial hierarchy [43], a complexity-theoretic consequence considered very unlikely; see [11] for more.

Conjecture 8.
There is a universal constant c < 1/80 such that it is #P-hard to approximate |Z_C(ζ)|^2 up to relative error 1/4 + o(1) for a c fraction of instances C picked from the above distribution.
It should be noted that finding such a relative-error approximation to |Z_C(ζ)|^2 is #P-hard in the worst case, even for constant-depth IQP circuits. Note that the precise value of c is not very significant. The decrease in the bound on c compared with Conjecture 1 is because the constant in Lemma 6 is somewhat larger than in the equivalent result in [11].

Parallelising IQP circuits
Next we show that sparse IQP circuits can be parallelised efficiently, assuming that long-range interactions are allowed. An arbitrary IQP circuit whose gates act on at most 2 qubits can be implemented by first implementing the 2-qubit gates (combining multiple gates acting on the same qubits into one gate), and then implementing the 1-qubit gates in one additional layer. So consider an IQP circuit C on n qubits, where each gate acts on 2 qubits, and such that there is at most one gate acting across each pair of qubits.
C can be implemented in depth t if the gates can be partitioned into t sets such that, within each set, no pair of gates "collide" (act on the same qubit). Let G C be the corresponding graph on n vertices which has an edge between vertices i and j if C has a gate between qubits i and j. Then such a partition of C is equivalent to colouring the edges of G C with t colours such that no pair of edges incident to the same vertex share the same colour. Vizing's theorem [16] states that any graph G has a proper edge-colouring of this form with at most ∆(G) + 1 colours, where ∆(G) is the maximal degree of a vertex of G. So all that remains is to bound ∆(G C ) for a random sparse IQP circuit C. This is equivalent to bounding ∆(G) for a random graph G such that each edge is present with probability p = γ(ln n)/n. The maximum degree of random graphs has been studied in detail (see e.g. [8]); here we give an elementary upper bound.
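To make the edge-colouring step concrete, here is a minimal greedy sketch. It does not achieve Vizing's ∆(G) + 1 bound (which requires a subtler algorithm), but a simple greedy pass uses at most 2∆ − 1 colours, which still gives depth O(∆) = O(log n) for sparse circuits. Each colour class is one layer of gates that can be applied in parallel.

```python
from collections import defaultdict

def parallel_layers(edges):
    """Greedy proper edge colouring: give each gate (edge) the smallest
    colour not already used at either endpoint. Uses at most 2*Delta - 1
    colours; each colour class is one layer of non-colliding gates."""
    used = defaultdict(set)      # vertex -> colours on incident edges
    layers = defaultdict(list)
    for (u, v) in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        used[u].add(c)
        used[v].add(c)
        layers[c].append((u, v))
    return [layers[c] for c in sorted(layers)]

# A small example: a 4-cycle plus one chord (max degree 3).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
layers = parallel_layers(edges)
```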

Lemma 9. Let G be a random graph on n vertices where each edge is present independently with probability p = γ(ln n)/n. Then Pr[∆(G) ≥ 2γ ln n] ≤ n^{1−γ/4}.

Proof. By a union bound, Pr[∆(G) ≥ 2γ ln n] ≤ n Pr[deg(v_1) ≥ 2γ ln n], where deg(v_1) is the degree of a fixed vertex v_1. The degree of v_1 is the number of edges incident to v_1; each edge is present with probability p; so by a Chernoff bound argument [17], Pr[deg(v_1) ≥ 2γ ln n] ≤ e^{−γ(ln n)/4} = n^{−γ/4}.

The claim follows.
So, for a large enough constant γ, the probability that ∆(G) ≥ 2γ ln n is negligible. Note that, in this regime, with high probability G is connected and has large treewidth, implying that it is not obvious how to simulate C classically using tensor-contraction techniques [29].
We can therefore parallelise a random IQP circuit containing O(n log n) gates to depth O(log n), which is optimal. It is worth comparing this to the bounds obtained in [13] for parallelising general quantum circuits. There it was shown that a random circuit of depth t can be parallelised to depth O(t(log n)/n) with high probability. Here we have removed a log factor by taking advantage of our ability to commute gates through each other.

Sorting networks
We finally show how to implement sparse IQP circuits depth-efficiently on a 2d square lattice. Consider an arbitrary quantum circuit C on n qubits of depth t. We would like to implement C on a 2d square lattice of √ n × √ n qubits. It is known [5] that, for any geometric arrangement of n qubits, sorting networks on that geometry correspond to efficient implementations of quantum circuits in that geometry. A sorting network on n elements is a kind of circuit on n lines, where each line is thought to carry an integer, and each gate across two lines is a comparator which swaps the two integers if they are out of order. Sorting networks are designed such that, at the end of the sorting network, any input sequence will have been sorted into ascending order. The aim is to minimise the depth of the network, while possibly obeying geometric constraints (such as comparisons needing to occur across nearest neighbours in some lattice geometry).
We briefly sketch the argument that sorting networks give efficient implementations of circuits on particular geometries [5]. Imagine we have a sequence of non-nearest-neighbour 2-qubit gates to apply in parallel, each (necessarily) acting on distinct qubits, but that we are only allowed to perform nearest-neighbour gates (in some geometry). To perform this sequence, it is sufficient to rearrange the qubits such that each pair across which we want to apply a gate is adjacent, then perform the gates (in parallel), then rearrange the qubits to their original order. To do this, we would like to perform a certain permutation of the qubits using only SWAP gates, where each SWAP gate acts across nearest neighbours. This is almost exactly what sorting networks achieve. Each gate in a sorting network can be thought of as a controlled-SWAP, where the values in the two lines are swapped if they are in the incorrect order. To produce a circuit of SWAPs from a sorting network to achieve a desired permutation σ, we can feed in the sequence σ^{−1}(1), . . . , σ^{−1}(n) to the network. Whenever a gate is applied to two integers which are currently out of order, we represent it in the circuit by a SWAP gate; otherwise, we do not include it. Assuming that the sorting network works correctly, it will map σ^{−1}(1), . . . , σ^{−1}(n) to 1, . . . , n, or in other words will perform the permutation σ. Any geometric constraints obeyed by the comparators in the sorting network will also be obeyed by the network of SWAPs.
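The construction above can be sketched in one dimension using odd-even transposition sort, a depth-n sorting network on a line with nearest-neighbour comparators only (the 2d network of [37] plays this role on the lattice). We record a SWAP for every comparator that fires; applying the recorded SWAPs then realises the desired permutation. The indexing convention here (an illustrative choice) is that the qubit at position i should move to position sigma[i], so we feed the keys sigma[i] into the network.

```python
def swap_circuit_for(sigma):
    """Run odd-even transposition sort on the keys sigma[0..n-1],
    recording a SWAP on the adjacent pair (i, i+1) whenever a comparator
    fires. The recorded SWAPs move the item at position i to sigma[i]."""
    keys = list(sigma)
    n = len(keys)
    swaps = []
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            if keys[i] > keys[i + 1]:      # comparator fires
                keys[i], keys[i + 1] = keys[i + 1], keys[i]
                swaps.append(i)
    return swaps

sigma = [2, 0, 3, 1, 4]
swaps = swap_circuit_for(sigma)
qubits = list(range(len(sigma)))           # qubit labels at each position
for i in swaps:
    qubits[i], qubits[i + 1] = qubits[i + 1], qubits[i]
# Now position sigma[q] holds qubit q, for every q.
```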
It was shown in [37] that there exists a sorting network on a 2d √n × √n lattice which has depth 3√n + o(√n); this is close to optimal by diameter arguments. Therefore, any quantum circuit of depth t on n qubits can be implemented on a 2d square lattice of √n × √n qubits in depth O(t√n). Putting all the above pieces together, we have completed the proof of Theorem 3: there is a family of quantum circuits on n qubits where with high probability a circuit picked from the family contains O(n log n) 2-qubit commuting gates and can be implemented on a 2d square lattice in depth O(√n log n); and a constant fraction of circuits picked from the family are hard to simulate classically, assuming similar conjectures to [11]. Restating Theorem 3 more formally:

Theorem 3 (restated). Assume Conjecture 8. Then there is a distribution D on the set of commuting quantum circuits on n qubits and universal constants q, ε > 0 such that: with high probability, a circuit picked from D contains O(n log n) 2-qubit commuting gates and can be implemented on a 2d square lattice in depth O(√n log n); and a q fraction of circuits picked from D cannot be simulated classically unless the polynomial hierarchy collapses to the third level. Here "simulate" means to approximately sample from the output distribution of the circuit, up to ℓ_1 distance ε.

Approximating the output probability distribution of noisy IQP circuits
We now turn to giving a classical algorithm for approximately simulating noisy IQP circuits. We prove that, in many cases, noisy probability distributions p̃ produced by IQP circuits are approximately classically simulable (Theorem 4) by showing the following, for any fixed δ > 0:

1. We can calculate a description of a function q which approximates p̃ up to ℓ_1 error δ, and which has only poly(n) Fourier coefficients over Z_2^n.
2. We can calculate all marginals of the function q exactly and efficiently.
3. This enables us to sample from a probability distribution which approximates p̃ up to ℓ_1 error O(δ).
In order to show all these things, we will use some basic ideas from Fourier analysis of boolean functions [32]. Any function f : {0,1}^n → C can be expanded in terms of the functions χ_s(x) = (−1)^{s·x} as f(x) = Σ_{s∈{0,1}^n} f̂(s)(−1)^{s·x}, where f̂(s) = 2^{-n} Σ_{x∈{0,1}^n} f(x)(−1)^{s·x}; the values f̂(s) are called the Fourier coefficients of f. Fourier analysis is important in the study of IQP because the model can be understood as sampling from the Fourier spectrum of the function f(x) = ⟨x|D|x⟩; indeed, the probability of receiving outcome s when measuring at the end of the circuit is precisely |f̂(s)|^2 when noise is absent.
Fourier analysis is also useful to understand the effect of noise. Recall from the introduction that the depolarising noise applied at the end of the circuit is equivalent to applying classical noise to the output probability distribution p to give a new distribution p̃. The noise operation applied is precisely the binary symmetric channel (each output bit is flipped independently with probability ε/2), also known simply as the "noise operator" for functions on the boolean cube. We denote this classical noise operation N. The Fourier coefficients of the resulting distribution behave nicely under this noise [32]: for all s ∈ {0,1}^n, the Fourier coefficient of p̃ at s equals (1 − ε)^{|s|} p̂(s), where |s| is the Hamming weight of s.
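The damping of Fourier coefficients under the binary symmetric channel is easy to check by brute force on a few bits. The following sketch (with an arbitrary example distribution and noise rate) verifies that every Fourier coefficient of the noisy distribution equals (1 − ε)^{|s|} p̂(s).

```python
from itertools import product

def fourier(p, n):
    """hat p(s) = 2^{-n} * sum_x p(x) * (-1)^{s.x}."""
    xs = list(product((0, 1), repeat=n))
    return {s: sum(p[x] * (-1) ** sum(a * b for a, b in zip(s, x))
                   for x in xs) / 2 ** n
            for s in xs}

def apply_bsc(p, n, eps):
    """Flip each output bit independently with probability eps/2."""
    q = {x: 0.0 for x in product((0, 1), repeat=n)}
    for x, px in p.items():
        for e in product((0, 1), repeat=n):   # flip pattern
            w = 1.0
            for bit in e:
                w *= eps / 2 if bit else 1 - eps / 2
            y = tuple(xi ^ b for xi, b in zip(x, e))
            q[y] += px * w
    return q

n, eps = 3, 0.2
raw = {x: i + 1.0 for i, x in enumerate(product((0, 1), repeat=n))}
total = sum(raw.values())
p = {x: v / total for x, v in raw.items()}
ph = fourier(p, n)
qh = fourier(apply_bsc(p, n, eps), n)
max_err = max(abs(qh[s] - (1 - eps) ** sum(s) * ph[s]) for s in ph)
```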

The IQP simulation algorithm
We first show how to determine a function q approximating the noisy output distribution p̃ up to ℓ_1 error δ, for arbitrary δ > 0. Imagine we know approximations p̂′(s) to the Fourier coefficients of p for |s| ≤ ℓ, for some integer ℓ, such that |p̂′(s) − p̂(s)| ≤ γ2^{-n} for some γ. Then our approximation is defined by q̂(s) = (1 − ε)^{|s|} p̂′(s) for |s| ≤ ℓ, and q̂(s) = 0 for |s| > ℓ. So, bounding the ℓ_1 norm by the ℓ_2 norm and using Parseval's equality, we have

‖q − p̃‖_1^2 ≤ 2^n ‖q − p̃‖_2^2 = 2^{2n} Σ_s (q̂(s) − (1 − ε)^{|s|} p̂(s))^2 ≤ (n^ℓ + 1)γ^2 + (1 − ε)^{2ℓ} · 2^{2n} Σ_s p̂(s)^2,

where we use |{s : |s| ≤ ℓ}| = Σ_{k=0}^{ℓ} (n choose k) ≤ n^ℓ + 1. Now assume that Σ_{x∈{0,1}^n} p_x^2 ≤ α2^{-n} for some α. For random IQP circuits, for example, we have α = O(1) with high probability [11]. Then we have 2^{2n} Σ_s p̂(s)^2 = 2^n Σ_x p_x^2 ≤ α, so taking ℓ = O(log(α/δ)/ε) and γ sufficiently small compared with δ/n^{ℓ/2} ensures that ‖q − p̃‖_1 ≤ δ. To see that we can approximate these coefficients efficiently, observe that there is a nice expression for them when p is the output probability distribution of an IQP circuit defined by a diagonal matrix D, where ⟨x|D|x⟩ = f(x) for some f : {0,1}^n → C:

p̂(s) = 2^{-2n} Σ_{x∈{0,1}^n} f(x) f(x ⊕ s)* = 2^{-n} E_x[f(x) f(x ⊕ s)*],

so p̂(s) can be estimated up to additive error γ2^{-n} by averaging f(x) f(x ⊕ s)* over uniformly random x. Next, we show that, for any q, knowledge of the Fourier coefficients of q implies that we can compute its marginals efficiently (see [40] for a related discussion). Note that q is not necessarily a probability distribution: i.e. it may take negative values and not sum to 1. Let x_{1...k} denote the string consisting of the first k bits of x. Assume that q has N nonzero Fourier coefficients and consider the sum S_y := Σ_{x : x_{1...k} = y} q(x) for each k ∈ {0, . . . , n} and each y ∈ {0,1}^k, where for k = 0 we consider the empty string y = ∅ and let S_∅ = Σ_x q(x). Then

S_y = 2^{n−k} Σ_{s : s_{k+1...n} = 0^{n−k}} q̂(s) (−1)^{s_{1...k} · y}.

Although in general the sum could contain up to 2^n terms, we only need to include those terms where q̂(s) ≠ 0. For each y, S_y can therefore be computed exactly in N poly(n) = n^{O(log(α/δ)/ε)} time. It remains to show part 3 of the plan sketched in the introduction to this section: that knowledge of the marginals of q allows us to sample from a distribution approximating p̃.
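The marginal formula above is easy to check by brute force: only coefficients q̂(s) whose suffix s_{k+1...n} vanishes contribute to S_y, so the cost scales with the number of nonzero coefficients rather than with 2^n. A small sketch with an arbitrary toy set of Fourier coefficients:

```python
from itertools import product

def marginal_from_fourier(qhat, n, y):
    """S_y = 2^{n-k} * sum over s with zero suffix of
    qhat(s) * (-1)^{s_{1..k} . y}, where k = len(y)."""
    k = len(y)
    total = 0.0
    for s, c in qhat.items():
        if any(s[k:]):
            continue                  # only s with s_{k+1..n} = 0 contribute
        total += 2 ** (n - k) * c * (-1) ** sum(a * b for a, b in zip(s, y))
    return total

def brute_marginal(q, y):
    return sum(v for x, v in q.items() if x[:len(y)] == y)

n = 3
qhat = {s: 0.0 for s in product((0, 1), repeat=n)}
qhat[(0, 0, 0)], qhat[(1, 0, 0)], qhat[(0, 1, 0)] = 1 / 8, 1 / 16, -1 / 32
# Reconstruct q pointwise from its Fourier expansion, for comparison.
q = {x: sum(c * (-1) ** sum(a * b for a, b in zip(s, x))
            for s, c in qhat.items())
     for x in product((0, 1), repeat=n)}
```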

Sampling from an approximate probability distribution
We now show that, in a quite general setting, if we can compute the marginals of an approximation p′ to a probability distribution p, we can approximately sample from p. Note that this task is apparently rather similar to one considered by Schwarz and Van den Nest [38], who showed that certain quantum circuit families (such as IQP circuits) with sparse output distributions can be simulated classically, by using the Kushilevitz-Mansour algorithm [27] to approximately learn the corresponding Fourier coefficients, then showing that a probability distribution close to the corresponding approximate probability distribution can be sampled from exactly. However, in [38] it was sufficient to show that, given a probability distribution p with at most poly(n) nonzero probabilities, each determined up to additive error O(1/poly(n)), we can approximately sample from p. Here we have something harder to work with: the distribution we have approximates p only up to constant overall ℓ_1 error.
Fix an integer n ≥ 0. Imagine we have access to marginals of some p′ ∈ R^{2^n} such that ‖p − p′‖_1 ≤ δ (for n = 0, p′ is just a real number), and that Σ_x p′_x > 0. Here "access" means that we can exactly compute sums of the form S_y := Σ_{x : x_{1...k} = y} p′_x for each k ∈ {0, . . . , n} and each y ∈ {0,1}^k, where for k = 0 we consider the empty string y = ∅ and define S_∅ = Σ_x p′_x. We would like to sample from a probability distribution approximating p. Note that p′ may not be a probability distribution itself.
We use the following procedure, which is a "truncated" version of a standard procedure for sampling from a probability distribution, given access to its marginals:

1. Set y to the empty string.

2. For k = 1, . . . , n: compute S_{y0} and S_{y1}. If S_{yz} ≤ 0 for some z ∈ {0,1}, append the bit 1 − z to y; otherwise, append the bit z to y with probability S_{yz}/S_y.

3. Return y.
We observe that, at each step of the procedure, there can be at most one z ∈ {0,1} such that S_{yz} < 0. Otherwise, we would have S_y < 0, and hence y would not have been picked at the previous step. Therefore this procedure defines a probability distribution Alg(p′) on n-bit strings, for any p′ such that S_∅ > 0. Crucially, we can show that Alg(p′) ≈ p:

Lemma 10. Let p be a probability distribution on {0,1}^n, and let p′ ∈ R^{2^n} satisfy ‖p − p′‖_1 ≤ δ for some δ < 1. Then ‖Alg(p′) − p‖_1 ≤ 4δ.

We defer the proof of Lemma 10 to Appendix B.
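A minimal sketch of the truncated procedure, computing the full distribution Alg(p′) by walking the prefix tree (a real implementation would instead draw one random path in poly(n) time using the marginal oracle). When p′ is itself a probability distribution with positive marginals, Alg(p′) reproduces it exactly; when p′ has small negative entries, the output is still a valid distribution.

```python
from itertools import product

def marginal(p, y):
    """S_y: sum of p over all strings with prefix y."""
    return sum(v for x, v in p.items() if x[:len(y)] == y)

def alg_dist(p, n):
    """Distribution of the truncated sampling procedure: extend the
    prefix bit by bit; if one branch has nonpositive marginal, take the
    other branch deterministically, else branch proportionally."""
    out = {}
    def walk(y, prob):
        if len(y) == n:
            out[y] = prob
            return
        s0, s1 = marginal(p, y + (0,)), marginal(p, y + (1,))
        if s0 <= 0:
            walk(y + (1,), prob)
        elif s1 <= 0:
            walk(y + (0,), prob)
        else:
            walk(y + (0,), prob * s0 / (s0 + s1))
            walk(y + (1,), prob * s1 / (s0 + s1))
    walk((), 1.0)
    return out

# A true distribution on 3 bits: Alg reproduces it exactly.
p = {x: (i + 1) / 36 for i, x in enumerate(product((0, 1), repeat=3))}
exact = alg_dist(p, 3)
# A perturbed version with one negative entry: output is still a distribution.
pp = dict(p)
pp[(0, 0, 0)] -= 0.05
truncated = alg_dist(pp, 3)
```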
All that remains to prove Theorem 4 is to put all the pieces together. The overall algorithm starts by approximating and storing enough Fourier coefficients of q to ensure that ‖Alg(q) − p̃‖_1 ≤ δ. From Lemma 10 and the discussion in previous sections, this can be achieved in time n^{O(log(α/δ)/ε)}. Then each sample from Alg(q) can be produced in time poly(n). This completes the proof.

Other algorithms
There is not that much about the classical simulation approach proposed here which is specific to IQP circuits. Indeed, it will work for any class of circuits for which the output distribution is sufficiently anticoncentrated, and for which we can classically compute the Fourier coefficients of the output distribution.
Simon's algorithm. Simon's quantum algorithm solves a certain oracular problem using exponentially fewer queries to the oracle than any possible classical algorithm [42]. In Simon's problem we are given access to a function f : {0,1}^n → Y for some set Y, and are promised that there exists t ∈ {0,1}^n such that f(x) = f(y) if and only if x + y = t, where addition is bitwise modulo 2. Our task is to determine t. Simon's algorithm solves this problem using O(n) evaluations of f, whereas any classical algorithm requires Ω(2^{n/2}) evaluations. The output probability distribution of the algorithm is uniform over bit-strings x ∈ {0,1}^n such that x · t = 0. This distribution is sufficiently anticoncentrated for the above algorithm to work (α = 2), and the Fourier coefficients of the output probability distribution p can easily be calculated: p̂(s) = 2^{-n} for s ∈ {0^n, t}, and p̂(s) = 0 otherwise. So we can evaluate p̂(s) by determining whether s = t, which can be done efficiently (for a given s).
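The Fourier spectrum of Simon's output distribution is easy to confirm by brute force: with the convention p̂(s) = 2^{-n} Σ_x p_x (−1)^{s·x}, the only nonzero coefficients sit at 0^n and t, both equal to 2^{-n}. A small sketch with an arbitrary hidden string t:

```python
from itertools import product

def simon_distribution(t):
    """Uniform distribution over {x : x.t = 0 mod 2}."""
    n = len(t)
    support = [x for x in product((0, 1), repeat=n)
               if sum(a * b for a, b in zip(x, t)) % 2 == 0]
    return {x: (1 / len(support) if x in support else 0.0)
            for x in product((0, 1), repeat=n)}

def fourier_coeff(p, s):
    n = len(s)
    return sum(px * (-1) ** sum(a * b for a, b in zip(s, x))
               for x, px in p.items()) / 2 ** n

t = (1, 0, 1, 1)
p = simon_distribution(t)
coeffs = {s: fourier_coeff(p, s) for s in p}
```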
Other algorithms? Assume that we have the ability to exactly compute arbitrary probabilities p_x in poly(n) time (note that this does not necessarily give us the ability to sample from p). For the above approach to work, we would like to approximate 2^n p̂(s) = Σ_x p_x (−1)^{s·x} up to additive accuracy δ. In general, we will not be able to do this efficiently; for example, imagine p_x = 1 for some unique x, and all other probabilities are 0. Then p̂(s) depends only on one x, which we do not know in advance. A similar argument still holds for relatively anticoncentrated distributions. On the other hand, by a similar argument to that used to approximate p̂(s) for IQP circuits, we can achieve a suitable level of approximation whenever we are able to exactly compute the Fourier coefficients of the output state |ψ⟩. Indeed, it is even sufficient to approximate ⟨s|H^{⊗n}|ψ⟩ up to very high accuracy.
One particular case which is tempting to address is the "quantum approximate optimization algorithm" (QAOA) invented by Farhi, Goldstone and Gutmann [18,19]. This algorithm has been proposed to offer a route towards proving quantum supremacy [20]. In the simplest version of the algorithm, the first step is to produce the state |ψ⟩ = e^{−iB} e^{−iC} |+⟩^{⊗n}, where B = β Σ_i X_i and C = γ Σ_i C_i for some coefficients β, γ, where X_i is Pauli-X on the i'th qubit, and each matrix C_i is diagonal and acts nontrivially on only O(1) qubits. The second step is to measure |ψ⟩ in the computational basis to sample from a hopefully interesting distribution. The structure of the QAOA is very similar to an IQP circuit, and hardness of simulating the algorithm classically, up to small relative error, can be proven under similar assumptions to those for IQP circuits [20]. We can think of

e^{−iβX} = ( cos β    −i sin β
             −i sin β    cos β )

as a kind of variant H gate. In this case we can approximate ⟨s|H^{⊗n}|ψ⟩, but not to a sufficiently high level of accuracy for the above approach to work.

Reducing the anticoncentration requirement
One apparently non-ideal aspect of our results on simulating IQP circuits is the dependence on α, meaning that we only obtain a polynomial-time classical simulation when the output probability distribution of the circuit is rather spread out. Interestingly, it was shown by Schwarz and Van den Nest [38] that IQP circuits can be simulated classically (with a similar notion of simulation to that considered here) if the output probability distribution p is close to sparse: that is, if there exists a distribution p′ such that ‖p − p′‖_1 ≤ δ for some small fixed δ, and such that p′ only contains t = poly(n) nonzero probabilities ("p is δ-close to t-sparse"). This seems close to being a converse to the condition considered here, that Σ_x p_x^2 ≤ α2^{-n} for α = O(1). If it were a precise converse, we would have shown that noisy IQP circuits can always be simulated (if we can simulate a noiseless IQP circuit, we can simulate a noisy one, by sampling from the output distribution and then applying noise to the sample). From our results on fault-tolerance below, we would not expect this to be possible.
However, the constraint used here is not precisely the converse of that in [38]. Consider an IQP circuit C whose diagonal part consists of CZ gates on qubits (1, 2), (3, 4), . . . , (k − 1, k). Then p is uniformly distributed over the set of bit-strings x such that x_i = 0 for i ∈ {k + 1, . . . , n}. So p is 2^k-sparse, but far from L-sparse for any L ≤ 2^{k−1}, for example. Further, Σ_x p_x^2 = 2^{-k}. If we take k = n/2, neither the present simulation method nor the method of [38] gives an efficient algorithm.

Fault-tolerance
We now show that the type of depolarising noise considered in this work can be dealt with using purely classical ideas from the theory of error-correcting codes. That is, we show that for any IQP circuit C with output probability distribution p, we can write down a corresponding IQP circuit C′ such that, if depolarising noise is applied to every qubit of the output of C′, we can still sample from a distribution which is close to p, up to arbitrarily small ℓ_1 distance.
Let M be an n × m matrix over F_2, m ≥ n, such that the rows of M are linearly independent. For any function f : {0,1}^n → C, define f_M : {0,1}^m → C by f_M(x) = f(Mx), where Mx denotes matrix multiplication over F_2. Then it is easy to see that, for any s ∈ {0,1}^n,

f̂_M(M^T s) = 2^{-m} Σ_{x∈{0,1}^m} f(Mx) (−1)^{(M^T s)·x} = 2^{-n} Σ_{u∈{0,1}^n} f(u) (−1)^{s·u} = f̂(s),

where we use linear independence of the rows of M in the second equality (so that Mx takes each value in {0,1}^n exactly 2^{m−n} times); similarly, f̂_M(t) = 0 for any t outside the image of M^T.
This implies that a linear transformation s → M^T s of the output probability distribution of a unitary operation C = H^{⊗n} D H^{⊗n} can be achieved by applying a corresponding linear transformation to the diagonal part of the circuit. If C is an IQP circuit where D is made up of poly(n) diagonal gates, this transformation can be performed efficiently, i.e. in time poly(n). Indeed, the diagonal part of any IQP circuit can be written as

D = Π_{j=1}^{ℓ} exp(iθ_j Π_{k : C_{jk} = 1} Z_k)

for some real coefficients θ_j, where C is an ℓ × n matrix over F_2, and Z_k denotes a Pauli-Z operation acting on the k'th qubit. This formalism was introduced in [41] and is known as an "X-program" 2. If we replace C with CM, we obtain a new circuit on m qubits whose diagonal part is

D′ = Π_{j=1}^{ℓ} exp(iθ_j Π_{k : (CM)_{jk} = 1} Z_k).

Then, using the fact that ⟨x|D′|x⟩ = f(Mx) = f_M(x), the output probability distribution of the new circuit is the distribution of M^T s, where s is distributed according to the output distribution of the original circuit. Applying linear transformations over F_2 to the output distribution of C allows us to use techniques from the classical theory of error-correcting codes to combat noise. Let M be the generator matrix of an error-correcting code of length m and dimension n. The noise operation N we consider corresponds to sampling a bit-string t = M^T s, then flipping each bit of t with independent probability ε/2 to produce a new bit-string t̃; we write t̃ ∼_ε t for the distribution on noisy bit-strings t̃. So if M has an efficient decoding algorithm, given a noisy sample t̃ ∈ {0,1}^m, we can apply the decoding algorithm to produce a new sample s′. If M has good error-correction properties, s′ = s with high probability.
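The claim that replacing C by CM pushes the output distribution forward along s → M^T s can be verified by brute-force simulation on a toy instance (the small matrices and angles below are arbitrary illustrative choices):

```python
from itertools import product
import cmath
import math

def xprog_dist(C, thetas, n):
    """Output distribution of the IQP circuit H^n D H^n |0^n> for the
    X-program with binary matrix C and angles thetas:
    f(x) = prod_j exp(i*theta_j*(-1)^{(Cx)_j}),
    p(s) = |2^{-n} sum_x f(x) (-1)^{s.x}|^2."""
    def f(x):
        val = 1.0 + 0.0j
        for row, th in zip(C, thetas):
            parity = sum(r * xi for r, xi in zip(row, x)) % 2
            val *= cmath.exp(1j * th * (-1) ** parity)
        return val
    xs = list(product((0, 1), repeat=n))
    return {s: abs(sum(f(x) * (-1) ** sum(a * b for a, b in zip(s, x))
                       for x in xs) / 2 ** n) ** 2
            for s in xs}

def matmul_f2(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

n, m = 2, 3
M = [[1, 0, 1], [0, 1, 1]]            # rows linearly independent over F2
C = [[1, 0], [0, 1], [1, 1]]          # an arbitrary X-program matrix
thetas = [math.pi / 4, math.pi / 8, math.pi / 4]
p_small = xprog_dist(C, thetas, n)
p_big = xprog_dist(matmul_f2(C, M), thetas, m)

def MT(s):
    """t = M^T s over F2."""
    return tuple(sum(M[i][j] * s[i] for i in range(n)) % 2 for j in range(m))
```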
More formally, we consider the distribution p′ on bit-strings s′ ∈ {0,1}^n produced by the following procedure:

1. Produce a sample s ∈ {0,1}^n from the distribution p, where p_s = |f̂(s)|^2.

2. Set t = M^T s.

3. Flip each bit of t with independent probability ε/2 to produce t̃.

4. Apply the decoding algorithm D for M to produce s′ = D(t̃).
Then we would like to show that ‖p′ − p‖_1 ≤ δ for arbitrarily small δ. If it holds that, for all encoded bit-strings x, Pr_{y ∼_ε x}[D(y) ≠ D(x)] ≤ δ/2, then ‖(p′)^{(s)} − p^{(s)}‖_1 ≤ δ for all original bit-strings s, where p^{(s)} is the point distribution concentrated on s ∈ {0,1}^n (i.e. p^{(s)}_s = 1), and (p′)^{(s)} is the corresponding distribution on decoded noisy bit-strings. Taking the average over bit-strings s according to p, we have ‖p′ − p‖_1 ≤ δ by convexity.
There are classical error-correcting codes which achieve Pr_{y ∼_ε x}[D(y) ≠ D(x)] → 0 for any ε < 1 with only a modest overhead. Indeed, Shannon's noisy channel coding theorem for the binary symmetric channel states that an exponentially small (in n) failure probability can be achieved (nonconstructively) for any ε < 1 by taking m = c_ε n for some c_ε that depends only on ε. Explicit codes which almost achieve Shannon's nonconstructive bound, and have efficient decoding algorithms, are known; for example, low-density parity check codes and certain concatenated codes [36]. So the overhead need only be a constant factor.
For our purposes, it would even be sufficient to use a simple repetition code, where each bit is encoded in r bits. Here M is just r copies of the identity matrix (for r odd) and the decoding algorithm takes a majority vote. The probability that a bit is decoded incorrectly is the same as the probability that more than r/2 of its copies are flipped, which is at most (ε(2 − ε))^{r/2} by a Chernoff bound; so for any ε < 1 the probability that an individual bit is decoded incorrectly is exponentially small in r. So, using a repetition code, it is sufficient to take r = O(log n) for all n bits to be decoded successfully except with low probability.
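A minimal sketch of the repetition-code construction: each bit is copied r times (the action of M^T for M equal to r stacked identity matrices) and decoded by majority vote. For the check we corrupt a deterministic set of 3 < r/2 copies of every bit, so decoding must succeed exactly.

```python
def encode_repetition(s, r):
    """M^T s for M = r stacked identity matrices: r copies of each bit."""
    return [b for b in s for _ in range(r)]

def decode_repetition(t, r):
    """Majority vote over each block of r copies (r odd)."""
    return [1 if 2 * sum(t[i * r:(i + 1) * r]) > r else 0
            for i in range(len(t) // r)]

n, r = 8, 15
s = [1, 0, 1, 1, 0, 0, 1, 0]
t = encode_repetition(s, r)
noisy = list(t)
for i in range(n):            # corrupt 3 < r/2 copies of every bit
    for j in range(3):
        noisy[i * r + j] ^= 1
decoded = decode_repetition(noisy, r)
```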
The IQP circuits produced using error-correcting codes could be more complex than the original circuits, as they may involve gates acting on up to n qubits. However, in some cases these gates can then be replaced with gates acting on only O(1) qubits each. For example, consider the family of circuits where θ_j is restricted to be a multiple of π/8 in (3), which is one of the cases shown hard to simulate in [11] (assuming some complexity-theoretic conjectures). It is shown in [40] that any gate in such a circuit, even acting on all n qubits, can be replaced with poly(n) gates from the same family acting on at most 3 qubits each, without changing the output of the circuit.

A Anticoncentration bound

Lemma 6 (restated). Let C be a random sparse IQP circuit. Then E_C[|⟨0|C|0⟩|^2] = 2^{-n} and, for a large enough constant γ, E_C[|⟨0|C|0⟩|^4] ≤ 5 · 2^{-2n}.

Proof. It is easy to see from symmetry arguments [11] that E_C[|⟨0|C|0⟩|^2] = 2^{-n}. So all that remains is to bound E_C[|⟨0|C|0⟩|^4].
Let α_{ij} ∈ {0, . . . , 3} be the number of times the gate diag(1, 1, 1, ω) is applied across qubits i and j. It is shown in the last appendix of [11] how to express E_C[|⟨0|C|0⟩|^4], for any distribution on the α_{ij} coefficients, as a sum over strings w, x, y ∈ {0,1}^n. For any coefficients β_{ij}, the relevant expectation can be computed in terms of the uniform distribution U on {0, . . . , 3}, recalling that p is the probability that a gate is applied across qubits i and j. As w, x, y ∈ {0,1}^n, the expression F_{ij}(w, x, y) := w_i(y_j − x_j) + x_i(y_j − w_j) + y_i(w_j + x_j) − 2y_i y_j is zero mod 4 if and only if it equals zero. It can be checked that, for any k ∈ {1, . . . , n}, F_{ij}(w, x, y) = 0 if and only if F_{ij}(w^k, x^k, y^k) = 0, where w^k is the bit-string produced from w by flipping the k'th bit. By flipping bits of w, we can therefore assume that w = 0^n. For a given pair (i < j), F_{ij}(0^n, x, y) = x_i y_j + y_i x_j − 2y_i y_j ≠ 0 if and only if the pair of strings (x_i x_j, y_i y_j) is in the following set: {(00, 11), (01, 10), (01, 11), (10, 01), (10, 11), (11, 01), (11, 10)}.
Let N_{bcd} be the number of pairs (x, y) with the correct numbers of combinations of bits (so |{i : x_i = 0, y_i = 1}| = b, and so on); we have simplified by removing a term that can only make the inequality looser, and we use N_{bcd} ≤ (n choose b)(n choose c)(n choose d). We now split into cases. Let α be a constant such that (n choose αn) ≤ 2^{n/3}/(n + 1) (for example, α = 1/20 works for large enough n). Then all terms in the sum such that max{b, c, d} ≤ αn are bounded by 2^n/(n + 1)^3. Now consider a term in the sum such that at least one of b, c, d is larger than αn (assume b wlog). Taking γ = 4/α, this term is upper-bounded by 2^n n^{-3} whenever c ≥ 1 or d ≥ 1. It remains to consider the cases where c = 0, d = 0: the sum resulting from these is bounded by 2^{-3n} Σ_{b=0}^{n} (n choose b) = 2^{-2n}. Multiplying by 3, to allow for the choice of each of b, c, d as the large value, the entire sum, and hence E_C[|⟨0|C|0⟩|^4], is bounded by 5 · 2^{-2n}. This completes the proof.

B Sampling from an approximate distribution
In this appendix we prove Lemma 10 regarding the behaviour of the algorithm Alg applied to approximate probability distributions p′. To do so, we define a closely related, but somewhat easier to analyse, procedure Fix(p) as follows, for vectors p ∈ R^{2^n}, integer n ≥ 0, such that Σ_x p_x > 0. First, for n = 0, Fix(p) = p. For n ≥ 1, writing p = (a, b) for some a, b ∈ R^{2^{n−1}},

Fix(p) = (Fix(a), Fix(b))                         if Σ_x a_x > 0 and Σ_x b_x > 0,
Fix(p) = (0, ((Σ_x p_x)/(Σ_x b_x)) Fix(b))        if Σ_x a_x ≤ 0,
Fix(p) = (((Σ_x p_x)/(Σ_x a_x)) Fix(a), 0)        if Σ_x b_x ≤ 0.

Note that we cannot have Σ_x a_x ≤ 0 and Σ_x b_x ≤ 0 simultaneously, because Σ_x p_x > 0, and further that in the recursive definition, Fix is never applied to an "illegal" vector whose entry sum is nonpositive. Up to scaling, Fix is equivalent to Alg:

Lemma 11. For any integer n ≥ 0, and any p ∈ R^{2^n} such that Σ_x p_x > 0, Fix(p) = (Σ_x p_x) Alg(p).
Proof. The proof is by induction on n. The claim clearly holds for n = 0, where Fix(p) = p = p · 1 = p Alg(p). For n ≥ 1, write p = (a, b) for some a, b ∈ R^{2^{n−1}}. If Σ_x a_x and Σ_x b_x are both positive, the definition of Alg gives Alg(p) = (((Σ_x a_x)/(Σ_x p_x)) Alg(a), ((Σ_x b_x)/(Σ_x p_x)) Alg(b)); so, using the inductive hypothesis, (Σ_x p_x) Alg(p) = ((Σ_x a_x) Alg(a), (Σ_x b_x) Alg(b)) = (Fix(a), Fix(b)), which is equal to Fix(p) as required. If Σ_x a_x ≤ 0, Alg deterministically picks the second half, so (Σ_x p_x) Alg(p) = (0, (Σ_x p_x) Alg(b)) = (0, ((Σ_x p_x)/(Σ_x b_x)) Fix(b)) = Fix(p); the case Σ_x b_x ≤ 0 is symmetric.
Therefore, if we apply Alg to some approximate distribution p′, the final probability distribution sampled from is precisely Fix(p′)/S, where S = Σ_x p′_x. This shows, in particular, that for all p such that Σ_x p_x > 0, Fix(p)_x ≥ 0 for all x.
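Lemma 11 (and the nonnegativity of Fix) can be sanity-checked numerically. The following sketch implements Fix and the distribution of Alg recursively, on an arbitrary vector with a few negative entries, and confirms Fix(p) = (Σ_x p_x) Alg(p).

```python
def fix(p):
    """Recursive Fix: requires sum(p) > 0. A half whose entry sum is
    nonpositive is zeroed; the other half is rescaled so that the total
    entry sum is preserved."""
    if len(p) == 1:
        return list(p)
    h = len(p) // 2
    a, b = p[:h], p[h:]
    if sum(a) <= 0:
        return [0.0] * h + [sum(p) / sum(b) * v for v in fix(b)]
    if sum(b) <= 0:
        return [sum(p) / sum(a) * v for v in fix(a)] + [0.0] * h
    return fix(a) + fix(b)

def alg_dist(p):
    """Probability vector of the truncated sampling procedure Alg."""
    if len(p) == 1:
        return [1.0]
    h = len(p) // 2
    a, b = p[:h], p[h:]
    sa, sb = sum(a), sum(b)
    if sa <= 0:
        return [0.0] * h + alg_dist(b)
    if sb <= 0:
        return alg_dist(a) + [0.0] * h
    return ([sa / (sa + sb) * v for v in alg_dist(a)]
            + [sb / (sa + sb) * v for v in alg_dist(b)])

p = [0.3, -0.05, 0.25, 0.2, 0.15, 0.1, -0.02, 0.12]   # entry sum 1.05 > 0
fx, S = fix(p), sum(p)
```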
Lemma 12. For any p ∈ R^{2^n} such that Σ_x p_x > 0, ‖Fix(p) − p‖_1 ≤ 2 Σ_{x : p_x < 0} |p_x|.

Proof. We first show that the lemma follows from the following claims, which hold for all p ∈ R^{2^n} such that Σ_x p_x > 0:
1. For all x such that p_x ≥ 0, 0 ≤ Fix(p)_x ≤ p_x;
2. For all x such that p_x < 0, Fix(p)_x = 0;
3. Σ_x Fix(p)_x = Σ_x p_x.
Indeed, assuming these claims, we have

‖Fix(p) − p‖_1 = Σ_x |Fix(p)_x − p_x| = Σ_{x : p_x ≥ 0} (p_x − Fix(p)_x) + Σ_{x : p_x < 0} |p_x| = 2 Σ_{x : p_x < 0} |p_x|,

as desired, where the second equality uses claims 1 and 2, and the third uses claims 2 and 3. It remains to prove the claims. Claim 1 follows from observing that Fix never changes the sign of an element of p, and at each step modifies elements by either zeroing them, or rescaling them by a scaling factor upper-bounded by 1. Claim 2 follows from considering the last-but-one step of Fix, where it is applied to vectors of the form (α, β) with α + β > 0; if either of α or β is negative, it will be zeroed by Fix. Claim 3 is shown by induction: it clearly holds for n = 0, as then p ≥ 0 and Fix(p) = p, and for n ≥ 1, assuming the inductive hypothesis for vectors in R^{2^{n−1}}, inspecting the definition of Fix shows that in all three cases Σ_x Fix(p)_x = Σ_x p_x. This completes the proof.
Proof of Lemma 10. As p is a probability distribution and ‖p − p′‖_1 ≤ δ, we have Σ_x p′_x ≥ Σ_x p_x − δ = 1 − δ, so S := Σ_x p′_x ≥ 1 − δ > 0, and hence p′ satisfies the preconditions of Lemmas 11 and 12. So

‖Alg(p′) − p‖_1 = ‖Fix(p′)/S − p‖_1 ≤ (1/S − 1)‖Fix(p′)‖_1 + ‖Fix(p′) − p′‖_1 + ‖p′ − p‖_1 ≤ (1 − S) + 2 Σ_{x : p′_x < 0} |p′_x| + δ ≤ 4δ,

where the first equality is Lemma 11, the first inequality is the triangle inequality, the second uses ‖Fix(p′)‖_1 = S (by the claims in the proof of Lemma 12) together with Lemma 12 itself, and the third uses Σ_{x : p′_x < 0} |p′_x| ≤ Σ_{x : p′_x < 0} |p′_x − p_x| ≤ δ and 1 − S ≤ δ.