Additive-error fine-grained quantum supremacy

It is known that several sub-universal quantum computing models, such as the IQP model, the Boson sampling model, the one-clean qubit model, and the random circuit model, cannot be classically simulated in polynomial time under certain conjectures in classical complexity theory. Recently, these results have been improved to "fine-grained" versions, where even exponential-time classical simulations are excluded under certain classical fine-grained complexity conjectures. All of these fine-grained results, however, concern the hardness of strong simulation or multiplicative-error sampling. It was open whether any fine-grained quantum supremacy result can be shown for additive-error sampling. In this paper, we show additive-error fine-grained quantum supremacy. As examples, we consider the IQP model, a mixture of the IQP model and log-depth Boolean circuits, and Clifford+$T$ circuits. Similar results should hold for other sub-universal models.


IQP
In this section, we show additive-error fine-grained quantum supremacy of the IQP model. The IQP model is defined as follows.
1. $N$ qubits are initialized to $|0\rangle^{\otimes N}$.
2. $H^{\otimes N}$ is applied.
3. $Z$-diagonal gates, such as $Z$, $CZ$, and $CCZ$, are applied.
4. $H^{\otimes N}$ is applied.
5. All qubits are measured in the computational basis.
Let us consider an $n$-variable degree-3 polynomial, $f:\{0,1\}^n\to\{0,1\}$, over $\mathbb{F}_2$ defined by
$$f(x)=\sum_{i}\alpha_i x_i+\sum_{i<j}\beta_{i,j}x_ix_j+\sum_{i<j<k}\gamma_{i,j,k}x_ix_jx_k \pmod 2$$
for any $x\equiv(x_1,x_2,...,x_n)\in\{0,1\}^n$, where $\alpha_i,\beta_{i,j},\gamma_{i,j,k}\in\{0,1\}$. If we say that we randomly choose $f$, it means that we choose each $\alpha_i$, $\beta_{i,j}$, and $\gamma_{i,j,k}$ uniformly and independently at random. The conjecture on which additive-error fine-grained quantum supremacy of the IQP model is based is stated as follows.
Conjecture 1 Let $f$ be an $n$-variable degree-3 polynomial over $\mathbb{F}_2$. Let us define
$$\mathrm{gap}(f)\equiv\sum_{x\in\{0,1\}^n}(-1)^{f(x)}.$$
There exist positive constants $a$ and $n_0$ such that for every $n>n_0$ the following holds. Computing $[\mathrm{gap}(f)]^2$ within a multiplicative error $u$ for at least a $v$ fraction of $f$ cannot be done with a classical probabilistic $O^*(2^{an})$-time algorithm that makes queries of length $O(2^{an})$ to an NTIME$[n^2]$ oracle with a success probability at least $w$. Here, $u$, $v$, $w$ are certain constants. ($O^*$ means that the polynomial factor is ignored.) Here, the oracle query is the standard one: there is a separate oracle tape, and answers are returned from the oracle instantaneously. Note that the parameters $u$, $v$, $w$ can be adjusted to some extent. (See the proof.) We do not know whether this conjecture is true or false, but at least at this moment we do not know how to refute it. (For more discussion, see Sec. 5.1.)
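As a concrete illustration of the quantity in Conjecture 1 (the paper itself contains no code; all function names below are ours), $\mathrm{gap}(f)$ can be evaluated by brute force in $O^*(2^n)$ time. This naive baseline is what the conjecture asserts cannot be substantially beaten, even within a multiplicative error, on average, and with an NTIME oracle:

```python
# Brute-force evaluation of gap(f) for a random n-variable degree-3
# polynomial f over F_2, as defined above. Runs in O*(2^n) time.
import itertools
import random

def random_degree3_poly(n, rng):
    """Pick alpha_i, beta_{i,j}, gamma_{i,j,k} uniformly at random."""
    alpha = {i: rng.randint(0, 1) for i in range(n)}
    beta = {p: rng.randint(0, 1) for p in itertools.combinations(range(n), 2)}
    gamma = {t: rng.randint(0, 1) for t in itertools.combinations(range(n), 3)}
    def f(x):
        v = sum(alpha[i] * x[i] for i in range(n))
        v += sum(b * x[i] * x[j] for (i, j), b in beta.items())
        v += sum(g * x[i] * x[j] * x[k] for (i, j, k), g in gamma.items())
        return v % 2
    return f

def gap(f, n):
    """gap(f) = #{x : f(x)=0} - #{x : f(x)=1} = sum_x (-1)^f(x)."""
    return sum((-1) ** f(x) for x in itertools.product((0, 1), repeat=n))

rng = random.Random(0)
n = 4
f = random_degree3_poly(n, rng)
g = gap(f, n)
# gap(f) = 2^n - 2 * #{x : f(x)=1}, so it is even and |gap(f)| <= 2^n.
assert g % 2 == 0 and abs(g) <= 2 ** n
```

Since $\mathrm{gap}(f)=2^n-2\,\#\{x:f(x)=1\}$, it is always even and bounded by $2^n$ in absolute value, which the final assertion checks.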
Based on Conjecture 1, we show the following result.
Theorem 1 If Conjecture 1 is true, then there exists an $N$-qubit IQP circuit whose output probability distribution cannot be classically sampled in $O(2^{aN})$ time within a certain constant additive error $\epsilon$.
For simplicity, we consider degree-3 polynomials, but it is clear from the following proof that a similar result holds for degree-$k$ polynomials for any constant $k\ge3$. (The anti-concentration lemma holds for any degree-$k$ polynomial with $k\ge2$, but the degree-2 case is classically simulatable because the corresponding circuit is a Clifford circuit, so $k\ge3$ is necessary.) If we consider Conjecture 1 for degree-$k$ polynomials, it becomes more stable for larger $k$ [22].
Proof of Theorem 1. Given an $n$-variable degree-3 polynomial $f$, we can construct an $n$-qubit IQP circuit such that the probability $p_z(f)$ of outputting $z\in\{0,1\}^n$ satisfies
$$p_z(f)=\frac{1}{4^n}\Big[\sum_{x\in\{0,1\}^n}(-1)^{f(x)+z\cdot x}\Big]^2.$$
Assume that there exists a $T$-time classical probabilistic algorithm that outputs $z\in\{0,1\}^n$ with probability $q_z(f)$ such that
$$\sum_{z\in\{0,1\}^n}|p_z(f)-q_z(f)|\le\epsilon$$
for a certain constant $\epsilon$ and any $f$. From Markov's inequality,
$$\Pr_z\Big[|p_z(f)-q_z(f)|\ge\frac{\epsilon}{\delta 2^n}\Big]\le\delta$$
for any $f$ and $\delta>0$, where $z$ is taken uniformly at random. According to the fine-grained Stockmeyer's theorem (see Appendix), a classical $O^*(T)$-time probabilistic algorithm that makes queries of length $O(T)$ to the NTIME$[n^2]$ oracle can compute $\tilde q_z(f)$ such that
$$|\tilde q_z(f)-q_z(f)|\le\frac{q_z(f)}{\alpha}$$
for any $f$, integer $\alpha\ge1$, and $z\in\{0,1\}^n$, with a success probability at least $w$. Due to the anti-concentration lemma [4],
$$\Pr_{z,f}\Big[p_z(f)\ge\frac{\tau}{2^n}\Big]\ge\frac{(1-\tau)^2}{3}$$
for any $0<\tau<1$.
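The output-probability formula of the IQP circuit can be checked numerically for small $n$. The following sketch (ours, with an arbitrary degree-3 example polynomial) simulates the circuit as a plain statevector, applying $H^{\otimes n}$, the $(-1)^{f(x)}$ phase layer, and $H^{\otimes n}$ again, and compares the resulting distribution with the squared-gap expression:

```python
# Statevector check of p_z(f) = (1/4^n) [ sum_x (-1)^{f(x)+z.x} ]^2
# for a small n-qubit IQP circuit.
import itertools
from math import sqrt

n = 3
def f(x):  # an arbitrary degree-3 example: x0 + x0*x1 + x0*x1*x2 (mod 2)
    return (x[0] + x[0] * x[1] + x[0] * x[1] * x[2]) % 2

xs = list(itertools.product((0, 1), repeat=n))  # index i <-> bit string xs[i]

def hadamard_all(psi):
    """Apply H on every qubit (normalized fast Walsh-Hadamard butterfly)."""
    psi, N, h = psi[:], len(psi), 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                a, b = psi[j], psi[j + h]
                psi[j], psi[j + h] = (a + b) / sqrt(2), (a - b) / sqrt(2)
        h *= 2
    return psi

# Simulate the IQP circuit: H^n, then the (-1)^f phase layer, then H^n.
psi = [0.0] * 2 ** n
psi[0] = 1.0
psi = hadamard_all(psi)
psi = [amp * (-1) ** f(x) for amp, x in zip(psi, xs)]
psi = hadamard_all(psi)

# Compare the measured distribution with the gap formula.
def dot(z, x):
    return sum(zi * xi for zi, xi in zip(z, x)) % 2

for i, z in enumerate(xs):
    gap_z = sum((-1) ** ((f(x) + dot(z, x)) % 2) for x in xs)
    assert abs(psi[i] ** 2 - gap_z ** 2 / 4 ** n) < 1e-12
assert abs(sum(a ** 2 for a in psi) - 1.0) < 1e-12  # valid distribution
```

Each final amplitude is $\frac{1}{2^n}\sum_x(-1)^{f(x)+z\cdot x}$, so squaring it reproduces the formula in the proof.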
Therefore, we have
$$|\tilde q_z(f)-p_z(f)|\le|\tilde q_z(f)-q_z(f)|+|q_z(f)-p_z(f)|\le\frac{p_z(f)}{\alpha}+\Big(1+\frac{1}{\alpha}\Big)\frac{\epsilon}{\delta 2^n}$$
with a success probability at least $w$ for each $f$ and $z$. If we take $\tau=\sigma\delta$ for a constant $\sigma>0$ and choose $\delta$ such that $-\delta+\frac{1}{3}(1-\sigma\delta)^2=v$, the above inequality, together with $p_z(f)\ge\sigma\delta/2^n$, is correct for at least a $v$ fraction of $(z,f)$. Hence, we obtain
$$|\tilde q_z(f)-p_z(f)|\le u\,p_z(f)$$
for at least a $v$ fraction of $(z,f)$. It means that $[\mathrm{gap}(f)]^2$ is computable within the multiplicative error $u$ for at least a $v$ fraction of $f$, which contradicts Conjecture 1.
Note that $w$ can be arbitrarily close to 1, but $u$ cannot be taken arbitrarily small: it is lower-bounded by a constant determined by $\epsilon$, $\sigma$, and $\delta$.

IQP plus log-depth Boolean circuit
In this section, we show additive-error fine-grained quantum supremacy for the IQP plus log-depth Boolean circuit model.
Let us consider the following conjecture.
Conjecture 2 Let $f$ be an $n$-variable degree-2 polynomial over $\mathbb{F}_2$, and let $g:\{0,1\}^n\to\{0,1\}$ be a log-depth Boolean circuit. Define $\mathrm{gap}(f+g)\equiv\sum_{x\in\{0,1\}^n}(-1)^{f(x)+g(x)}$. There exist $g$ and positive constants $a$ and $n_0$ such that for every $n>n_0$ the following holds. Computing $[\mathrm{gap}(f+g)]^2$ within a multiplicative error $u$ for at least a $v$ fraction of $f$ cannot be done with a classical probabilistic $O^*(2^{an})$-time algorithm that makes queries of length $O(2^{an})$ to an NTIME$[n^2]$ oracle with a success probability at least $w$. Here, $u$, $v$, $w$ are certain constants.
Conjecture 2 is "more stable" than Conjecture 1, because log-depth Boolean circuits are more general than constant-degree polynomials. For constant-degree polynomials, there is a non-trivial exponential-time algorithm for counting the number of solutions [23], but we do not know how to apply it to log-depth Boolean circuits. Furthermore, note that in Conjecture 2, the average case is considered only for $f$; $g$ can be taken to be the worst-case one.
Based on Conjecture 2, we show the following result.
Theorem 2 If Conjecture 2 is true, then there exists an $N$-qubit poly$(N)$-size quantum circuit (consisting of an IQP circuit and a log-depth Boolean circuit) whose output probability distribution cannot be classically sampled in $O(2^{aN})$ time within a certain constant additive error $\epsilon$.
Proof of Theorem 2. Given a log-depth Boolean circuit $g:\{0,1\}^n\to\{0,1\}$, we can construct an $(n+1)$-qubit poly$(n)$-size quantum circuit $U$ such that
$$U(|x\rangle\otimes|0\rangle)=(-1)^{h(x)}|x\rangle\otimes|g(x)\rangle$$
for any $x\in\{0,1\}^n$, where $h$ is a certain function whose detail is irrelevant here [24]. Let us consider the following circuit.

1. The $(n+1)$ qubits are initialized to $|0\rangle^{\otimes(n+1)}$.
2. Apply $H^{\otimes n}\otimes I$ to obtain $\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}|x\rangle\otimes|0\rangle$.
3. Apply $U$ to obtain $\frac{1}{\sqrt{2^n}}\sum_{x}(-1)^{h(x)}|x\rangle\otimes|g(x)\rangle$.
4. Apply $Z$ on the last qubit to obtain $\frac{1}{\sqrt{2^n}}\sum_{x}(-1)^{h(x)+g(x)}|x\rangle\otimes|g(x)\rangle$.
5. Apply $U^\dagger$ to obtain $\frac{1}{\sqrt{2^n}}\sum_{x}(-1)^{g(x)}|x\rangle\otimes|0\rangle$.
6. Apply $Z$ and $CZ$ that correspond to $f$ to obtain $\frac{1}{\sqrt{2^n}}\sum_{x}(-1)^{f(x)+g(x)}|x\rangle\otimes|0\rangle$.
7. Apply $H^{\otimes n}\otimes I$ and measure the first $n$ qubits in the computational basis.
The probability of obtaining $z\in\{0,1\}^n$ is
$$p_z(f+g)=\frac{1}{4^n}\Big[\sum_{x\in\{0,1\}^n}(-1)^{f(x)+g(x)+z\cdot x}\Big]^2.$$
Assume that there exists a $T$-time classical probabilistic algorithm that outputs $z\in\{0,1\}^n$ with probability $q_z(f+g)$ such that
$$\sum_{z\in\{0,1\}^n}|p_z(f+g)-q_z(f+g)|\le\epsilon.$$
From Markov's inequality,
$$\Pr_z\Big[|p_z(f+g)-q_z(f+g)|\ge\frac{\epsilon}{\delta 2^n}\Big]\le\delta$$
for any $f$, $g$, and $\delta>0$. According to the fine-grained Stockmeyer's theorem, a classical $O^*(T)$-time probabilistic algorithm that makes queries of length $O(T)$ to the NTIME$[n^2]$ oracle can compute $\tilde q_z(f+g)$ such that
$$|\tilde q_z(f+g)-q_z(f+g)|\le\frac{q_z(f+g)}{\alpha}$$
for any $f$, $g$, integer $\alpha\ge1$, and $z\in\{0,1\}^n$, with a success probability at least $w$. Due to the anti-concentration lemma [4],
$$\Pr_{z,f}\Big[p_z(f+g)\ge\frac{\tau}{2^n}\Big]\ge\frac{(1-\tau)^2}{3}$$
for any $0<\tau<1$.
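The compute-phase-uncompute gadget (apply $U$, apply $Z$, then undo $U$) used in the circuit above can be illustrated with a small functional model. In the sketch below (ours), we model $U$ only by its action $|x\rangle|a\rangle\mapsto|x\rangle|a\oplus g(x)\rangle$ and omit the phase coming from $h$, which cancels between $U$ and $U^\dagger$:

```python
# Toy check of the compute--phase--uncompute gadget: U writes g(x) into
# the ancilla, Z kicks back the phase (-1)^{g(x)}, and U^dagger returns
# the ancilla to |0> unentangled.
import itertools
from math import sqrt

n = 3
def g(x):  # a small (log-depth) Boolean circuit: (x0 AND x1) OR x2
    return (x[0] & x[1]) | x[2]

xs = list(itertools.product((0, 1), repeat=n))
# State after H^n (x) I: amplitudes over basis states (x, ancilla).
state = {(x, 0): 1 / sqrt(2 ** n) for x in xs}
# Apply U: |x>|a> -> |x>|a XOR g(x)>.
state = {(x, a ^ g(x)): amp for (x, a), amp in state.items()}
# Apply Z on the ancilla: |x>|a> -> (-1)^a |x>|a>.
state = {(x, a): amp * (-1) ** a for (x, a), amp in state.items()}
# Apply U^dagger (in this model, U is its own inverse).
state = {(x, a ^ g(x)): amp for (x, a), amp in state.items()}
# The ancilla is back to |0>, and |x> carries the phase (-1)^{g(x)}.
for x in xs:
    assert abs(state[(x, 0)] - (-1) ** g(x) / sqrt(2 ** n)) < 1e-12
    assert (x, 1) not in state
```

After the gadget, applying the $Z$ and $CZ$ gates of $f$ and a final Hadamard layer yields exactly the $\mathrm{gap}(f+g)$-type distribution analyzed above.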
Then we have
$$|\tilde q_z(f+g)-p_z(f+g)|\le\frac{p_z(f+g)}{\alpha}+\Big(1+\frac{1}{\alpha}\Big)\frac{\epsilon}{\delta 2^n}$$
with a success probability at least $w$ for each $f$ and $z$. If we take $\tau=\sigma\delta$ for a constant $\sigma>0$ and choose $\delta$ such that $-\delta+\frac{1}{3}(1-\sigma\delta)^2=v$, the above inequality is correct for at least a $v$ fraction of $(z,f)$, which contradicts Conjecture 2.

Clifford plus T
In this section, we finally show additive-error fine-grained quantum supremacy for Clifford+$T$ circuits. Let us consider the following conjecture.
Conjecture 3 Let $f$ be an $n$-variable degree-2 polynomial over $\mathbb{F}_2$, and let $g:\{0,1\}^n\to\{0,1\}$ be a 3-CNF with $n$ variables and $m$ clauses. Define $\mathrm{gap}(f+g)\equiv\sum_{x\in\{0,1\}^n}(-1)^{f(x)+g(x)}$. There exist $g$ and positive constants $a$ and $n_0$ such that for every $n>n_0$ the following holds. Computing $[\mathrm{gap}(f+g)]^2$ within a multiplicative error $u$ for at least a $v$ fraction of $f$ cannot be done with a classical probabilistic $O^*(2^{am})$-time algorithm that makes queries of length $O(2^{am})$ to an NTIME$[n^2]$ oracle with a success probability at least $w$.
Based on Conjecture 3, we show the following result.
Theorem 3 If Conjecture 3 is true, then there exists a Clifford+$T$ circuit whose output probability distribution cannot be classically sampled in $O(2^{am})$ time within a certain constant additive error $\epsilon$.

Proof of Theorem 3. Given the 3-CNF $g$, we can construct an $(n+\xi)$-qubit circuit $U$ such that
$$U(|x\rangle\otimes|0\rangle^{\otimes\xi})=|g(x)\rangle\otimes|\mathrm{junk}(x)\rangle$$
for any $x\in\{0,1\}^n$, where $\xi\equiv3m-1$, and $\mathrm{junk}(x)\in\{0,1\}^{n+\xi-1}$ is a certain bit string whose detail is irrelevant here. Note that $U$ consists of Clifford gates and $7(3m-1)$ $T$ gates. (The 3-CNF $g$ contains $2m$ OR gates and $m-1$ AND gates. Each AND and OR gate can be simulated with a single Toffoli gate by using a single ancilla qubit. A single Toffoli gate can be simulated with Clifford gates and seven $T$ gates.) Let us consider the following circuit.

1. The $(n+\xi)$ qubits are initialized to $|0\rangle^{\otimes(n+\xi)}$.
2. Apply $H^{\otimes n}\otimes I^{\otimes\xi}$ to obtain $\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}|x\rangle\otimes|0\rangle^{\otimes\xi}$.
3. Apply $U$ to obtain $\frac{1}{\sqrt{2^n}}\sum_{x}|g(x)\rangle\otimes|\mathrm{junk}(x)\rangle$.
4. Apply $Z$ on the first qubit to obtain $\frac{1}{\sqrt{2^n}}\sum_{x}(-1)^{g(x)}|g(x)\rangle\otimes|\mathrm{junk}(x)\rangle$.
5. Apply $U^\dagger$ to obtain $\frac{1}{\sqrt{2^n}}\sum_{x}(-1)^{g(x)}|x\rangle\otimes|0\rangle^{\otimes\xi}$.
6. Apply $Z$ and $CZ$ that correspond to $f$ to obtain $\frac{1}{\sqrt{2^n}}\sum_{x}(-1)^{f(x)+g(x)}|x\rangle\otimes|0\rangle^{\otimes\xi}$.
7. Apply $H^{\otimes n}\otimes I^{\otimes\xi}$ and measure all qubits in the first register in the computational basis.
This quantum computation uses $t\equiv14(3m-1)$ $T$ gates ($U$ and $U^\dagger$ each contain $7(3m-1)$). The probability of obtaining $z\in\{0,1\}^n$ is
$$p_z(f+g)=\frac{1}{4^n}\Big[\sum_{x\in\{0,1\}^n}(-1)^{f(x)+g(x)+z\cdot x}\Big]^2.$$
Assume that there exists a $T$-time classical probabilistic algorithm that outputs $z\in\{0,1\}^n$ with probability $q_z(f+g)$ such that
$$\sum_{z\in\{0,1\}^n}|p_z(f+g)-q_z(f+g)|\le\epsilon.$$
From Markov's inequality,
$$\Pr_z\Big[|p_z(f+g)-q_z(f+g)|\ge\frac{\epsilon}{\delta 2^n}\Big]\le\delta$$
for any $f$, $g$, and $\delta>0$. According to the fine-grained Stockmeyer's theorem, a classical $O^*(T)$-time probabilistic algorithm that makes queries of length $O(T)$ to the NTIME$[n^2]$ oracle can compute $\tilde q_z(f+g)$ such that
$$|\tilde q_z(f+g)-q_z(f+g)|\le\frac{q_z(f+g)}{\alpha}$$
for any $f$, $g$, integer $\alpha\ge1$, and $z\in\{0,1\}^n$, with a success probability at least $w$. Due to the anti-concentration lemma [4],
$$\Pr_{z,f}\Big[p_z(f+g)\ge\frac{\tau}{2^n}\Big]\ge\frac{(1-\tau)^2}{3}$$
for any $0<\tau<1$.
Then we have
$$|\tilde q_z(f+g)-p_z(f+g)|\le\frac{p_z(f+g)}{\alpha}+\Big(1+\frac{1}{\alpha}\Big)\frac{\epsilon}{\delta 2^n}$$
with a success probability at least $w$ for each $f$ and $z$. If we take $\tau=\sigma\delta$ for a constant $\sigma>0$ and choose $\delta$ such that $-\delta+\frac{1}{3}(1-\sigma\delta)^2=v$, the above inequality is correct for at least a $v$ fraction of $(z,f)$, which contradicts Conjecture 3.
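The $T$-count bookkeeping above relies on the standard fact that a Toffoli gate can be implemented with Clifford gates ($H$, CNOT) plus seven $T$/$T^\dagger$ gates. The following self-contained sketch (ours; pure Python, with qubit 0 as the most significant index bit) simulates the standard 15-gate decomposition on all eight basis states and compares it with the ideal Toffoli:

```python
# Verify that the standard Clifford+T decomposition of the Toffoli gate
# (controls on qubits 0 and 1, target on qubit 2) uses exactly seven
# T/T^dagger gates and reproduces the ideal gate.
import cmath
from math import sqrt, pi

N, NQ = 8, 3  # 3 qubits; index bit (NQ-1-q) encodes qubit q

def bit(i, q):
    return (i >> (NQ - 1 - q)) & 1

def H(psi, q):
    """Hadamard on qubit q."""
    out = [0j] * N
    m = 1 << (NQ - 1 - q)
    for i, a in enumerate(psi):
        if bit(i, q) == 0:
            out[i] += a / sqrt(2)
            out[i | m] += a / sqrt(2)
        else:
            out[i & ~m] += a / sqrt(2)
            out[i] -= a / sqrt(2)
    return out

def T(psi, q, dag=False):
    """T (or T^dagger) on qubit q: phase exp(+-i*pi/4) on |1>."""
    ph = cmath.exp((-1j if dag else 1j) * pi / 4)
    return [a * (ph if bit(i, q) else 1) for i, a in enumerate(psi)]

def CX(psi, c, t):
    """CNOT with control c and target t."""
    out = [0j] * N
    for i, a in enumerate(psi):
        out[i ^ (1 << (NQ - 1 - t)) if bit(i, c) else i] += a
    return out

def toffoli_decomp(psi):
    """15-gate Clifford+T circuit: H, 6 CNOTs, and 7 T/T^dagger gates."""
    a, b, c = 0, 1, 2
    psi = H(psi, c)
    psi = CX(psi, b, c); psi = T(psi, c, dag=True)
    psi = CX(psi, a, c); psi = T(psi, c)
    psi = CX(psi, b, c); psi = T(psi, c, dag=True)
    psi = CX(psi, a, c); psi = T(psi, b); psi = T(psi, c)
    psi = H(psi, c)
    psi = CX(psi, a, b); psi = T(psi, a); psi = T(psi, b, dag=True)
    psi = CX(psi, a, b)
    return psi

# Column-by-column comparison with the ideal Toffoli (|110> <-> |111>).
for i in range(N):
    e = [0j] * N
    e[i] = 1
    out = toffoli_decomp(e)
    j = i ^ 1 if i in (6, 7) else i  # flip target iff both controls are 1
    assert max(abs(out[k] - (1 if k == j else 0)) for k in range(N)) < 1e-9
```

Between the two Hadamards on the target, the diagonal phases sum to $e^{i\pi abc}$, i.e. a $CCZ$, so the whole circuit equals $H_c\,CCZ\,H_c=$ Toffoli with no residual global phase.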

Conjectures
In this paper, we have shown additive-error fine-grained quantum supremacy based on several conjectures. In this subsection, we provide some "evidence" that supports these conjectures.
Our conjectures are related to the exponential-time hypothesis (ETH) and the strong exponential-time hypothesis (SETH), which are standard conjectures in fine-grained complexity theory [25,26]. ETH and SETH are stronger (or more pessimistic) versions of the famous NP $\neq$ P conjecture, which says that NP-complete problems cannot be solved in polynomial time. More precisely, ETH is stated as follows:
Conjecture 4 (ETH) Any (classical) deterministic algorithm that decides whether $\#f>0$ or $\#f=0$ given (a description of) a 3-CNF with $n$ variables, $f:\{0,1\}^n\to\{0,1\}$, needs $2^{\Omega(n)}$ time.

SETH is a stronger version of ETH, which says the following:
Conjecture 5 (SETH) Let $A$ be any (classical) deterministic $T(n)$-time algorithm such that the following holds: given (a description of) a CNF $f:\{0,1\}^n\to\{0,1\}$ with at most $cn$ clauses, $A$ accepts if $\#f>0$ and rejects if $\#f=0$, where $\#f\equiv\sum_{x\in\{0,1\}^n}f(x)$. Then, for any constant $a>0$, there exists a constant $c>0$ such that $T(n)>2^{(1-a)n}$ holds for infinitely many $n$.
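For concreteness (this example is ours, not from the paper): deciding whether $\#f>0$ by exhaustive enumeration takes $2^n\cdot\mathrm{poly}(n)$ time, which is the baseline that ETH and SETH assert is essentially optimal. Note also that $\mathrm{gap}(f)=2^n-2\#f$, which links the #P and GapP viewpoints discussed below:

```python
# A minimal brute-force #CNF counter: deciding #f > 0 this way takes
# 2^n * poly(n) time, the baseline that ETH/SETH address.
import itertools

# A CNF is a list of clauses; a clause is a list of literals (v, neg),
# meaning x_v if neg is False and NOT x_v if neg is True.
def count_sat(cnf, n):
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        if all(any((x[v] == 0) if neg else (x[v] == 1) for v, neg in cl)
               for cl in cnf):
            total += 1
    return total

# (x0 OR x1) AND (NOT x0 OR x2): satisfied by 4 of the 8 assignments.
cnf = [[(0, False), (1, False)], [(0, True), (2, False)]]
assert count_sat(cnf, 3) == 4
# gap(f) = 2^n - 2 * #f, linking the #P and GapP viewpoints.
assert 2 ** 3 - 2 * count_sat(cnf, 3) == 0
```
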
All conjectures used in this paper concern the average-case hardness of computing GapP functions within a multiplicative error with a classical probabilistic algorithm that has access to an NTIME oracle, and are therefore different from ETH and SETH. There are, however, three reasons that support our conjectures.
First, our conjectures consider GapP functions, while ETH and SETH consider #P functions. A GapP function is the difference of two #P functions. Furthermore, to our knowledge, the only known way of computing a GapP function is to compute the numbers of accepting and rejecting paths (i.e., #P functions). Therefore, computing GapP functions should not be easier than computing #P functions.
Second, our conjectures consider average cases. One might think that solving an average case could be easier than solving the worst case, but, at least, SETH has not been refuted even in average cases. (The best known upper bound is that of Ref. [27].) Third, our conjectures allow the algorithm to use an NTIME oracle. We point out that at least #ETH, which is the counting version of ETH, has not been refuted for MA (which is in ZPP$^{\rm NP}$) and AM (which is in coNP$^{\rm NP}$).
It is an important open problem for the research of (not only fine-grained but also non-fine-grained) quantum supremacy to show additive-error quantum supremacy based on standard conjectures.

Other models
For simplicity, we have considered the three models, but similar results should hold for other sub-universal models such as the one-clean qubit model and the random circuit model. For all sub-universal models, Markov's inequality and the anti-concentration lemma hold. (For the Boson sampling model, the anti-concentration is a conjecture.) Therefore, if we assume average-case hardness conjectures similar to those introduced in this paper, we should be able to show additive-error fine-grained quantum supremacy for other sub-universal models.