The Pursuit of Uniqueness: Extending Valiant-Vazirani Theorem to the Probabilistic and Quantum Settings

Valiant and Vazirani showed in 1985 [VV85] that solving NP with the promise that "yes" instances have only one witness is powerful enough to solve the entire NP class (under randomized reductions). We are interested in extending this result to the quantum setting. We prove extensions to the classes Merlin-Arthur (MA) and Quantum-Classical-Merlin-Arthur (QCMA) [AN02]. Our results have implications for the complexity of approximating the ground state energy of a quantum local Hamiltonian with a unique ground state and an inverse polynomial spectral gap. We show that the estimation (to within polynomial accuracy) of the ground state energy of poly-gapped 1-D local Hamiltonians is QCMA-hard under randomized reductions. This is in stark contrast to the case of constant-gapped 1-D Hamiltonians, which is in NP [Has07]. Moreover, it shows that, unless QCMA can be reduced to NP by randomized reductions, there is no classical description of the ground state of every poly-gapped local Hamiltonian that allows efficient calculation of expectation values. Finally, we discuss a few of the obstacles to establishing an analogous result for the class Quantum-Merlin-Arthur (QMA). In particular, we show that random projections fail to provide a polynomial gap between two witnesses.


Introduction

Extending Valiant-Vazirani
One of the properties of the class NP is that the number of witnesses may vary from zero to exponentially many. How difficult is it to distinguish between "no" instances and "yes" instances that have a unique witness? Though one might think that such a problem is easier than solving NP, the celebrated result of Valiant and Vazirani [VV85] proved that it is not much easier. Their main result can be stated as follows:

Theorem 1 ([VV85]). There exists a UP promise problem that is NP-Hard under randomized reductions.
In the above, UP is the class containing all promise problems for which a "yes" instance has a unique accepting witness (see Definition 13); the promise problem that is shown to exist is given in Definition 21; and randomized reductions are introduced formally in Definition 11.
Theorem 2. There exists a promise problem (specified in Definition 31) in UMA that is MA-hard under randomized reductions.

Theorem 3. There exists a promise problem (specified in Definition 41) in UQCMA that is QCMA-hard under randomized reductions.
The full proofs of Theorems 2 and 3 are given in Sections 4 and 5, respectively. Both proofs rely heavily on the Valiant-Vazirani construction [VV85] (see also [AB09, Section 17.4.1] for another simple proof). We present a proof of the (original) Valiant-Vazirani theorem in three attempts, each of which improves on a shortcoming of its predecessor:

1. The reduction guesses the size of the accepting witness set and uses a random "filter" whose degree of screening is determined by that size. If the size of the accepting set is w, we add each potential witness to a random set with probability 1/w, and everything outside that set is filtered out. If we correctly guess the size of the accepting witness set, then with constant probability exactly one valid witness passes the filter.
2. We observe that it is not essential to guess the exact size of the witness set; a multiplicative approximation is adequate. Using this approach reduces the number of guesses from exponentially many in the previous attempt to linear (in the witness length).
3. We replace the random "filter" with a pseudorandom "filter" - a pairwise-independent hash function - without losing any of the required properties. Moreover, a pairwise-independent hash function has a polynomial-size description and can be computed efficiently (unlike a truly random subset of {0, 1}^n). This is crucial so that the reduction runs in (randomized) polynomial time and the verifier is efficient.
These arguments are made precise in Section 3. The MA and QCMA settings elicit a new challenge: on "yes" instances, there may be exponentially more (classical) witnesses in the gap interval (e.g., (1/3, 2/3)) than in the "yes" interval (2/3, 1). Thus, randomly filtering the witnesses - in the spirit of the first attempt in the Valiant-Vazirani construction - would, with overwhelmingly large probability, fail to keep exactly one witness from the "yes" interval and no witnesses from the gap interval. The main idea behind eliminating this obstacle is to divide the "gap" interval into polynomially many smaller intervals and to argue that in at least one interval, the number of witnesses inside the interval is not much larger than the number of witnesses in the intervals above it (see Observation 37 for detail). Therefore, by guessing the approximate sizes of this interval and of all the intervals above it, we have a constant probability that exactly one element from these intervals passes the filter, and that this element is not from the "gap" interval.
We defer the discussion of an impossibility result related to UQMA to Section 1.3. Here we use the definition of UQMA together with Theorem 3 to derive interesting implications.

Implications for Hamiltonian Complexity
We say that a Hamiltonian acting on n d-dimensional particles is k-local if it can be written as a sum of poly(n) terms that act non-trivially on at most k sites.

Definition 4 (k-local hamiltonian). We are given a k-local Hamiltonian on n qudits, H = Σ_{j=1}^{r} H_j with r = poly(n). Each H_j is a Hermitian operator with a bounded operator norm ‖H_j‖ ≤ poly(n), acting non-trivially on at most k qudits. We are also given two constants a and b with b − a ≥ 1/poly(n). In "yes" instances, the smallest eigenvalue of H is at most a; in "no" instances, it is larger than b. The problem is to decide which is the case.
In a seminal work, Kitaev showed that the 5-local hamiltonian problem is complete for QMA [Kit99, KSV02]. Improvements in parameters (dimensionality and locality) were given in [KR03, KKR06, OT08], leading to the QMA-completeness of the 1-d hamiltonian problem [AGIK07, HNN13], the restriction of the original problem to one-dimensional nearest-neighbor Hamiltonians (where the local dimension of every qudit, according to [HNN13], can be as low as d = 8). The importance of these results stems not only from the fact that local hamiltonian is probably the most representative QMA-complete problem, but also from the key role played by local Hamiltonians and their ground-state energy in physics.
An important parameter when dealing with the complexity of ground states and local Hamiltonians is the spectral gap, given by the difference between the ground and first excited energy levels, ∆ := λ_1(H) − λ_0(H). From now on in the discussion, we assume that the operator norm of each term in the Hamiltonian is bounded by some constant and that no two terms act on the same set of qudits. When the spectral gap is constant, the Hamiltonian is said to be gapped. When it is inverse polynomial, we say the Hamiltonian is poly-gapped.
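To make these quantities concrete, the following small numerical sketch (our illustration, not part of the paper's constructions) builds a 2-local 1-D Hamiltonian - a transverse-field Ising chain, an arbitrary choice of model, chain length, and field strength - by exact diagonalization, and reads off the ground energy λ_0 and the spectral gap ∆ = λ_1 − λ_0:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op_at(op, site, n):
    """Embed a single-qubit operator at position `site` of an n-qubit chain."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

n, g = 8, 1.5
# Transverse-field Ising chain: H = -sum_k Z_k Z_{k+1} - g sum_k X_k.
# Every term acts on at most 2 neighboring sites, so H is 2-local and 1-D.
H = -sum(op_at(Z, k, n) @ op_at(Z, k + 1, n) for k in range(n - 1))
H = H - g * sum(op_at(X, k, n) for k in range(n))

evals = np.linalg.eigvalsh(H)
lam0, gap = evals[0], evals[1] - evals[0]
print(lam0, gap)   # ground energy lambda_0 and spectral gap Delta
```

For this field strength the chain sits in its gapped phase, so the gap printed here stays of order one as n grows; tuning g toward the critical point shrinks it.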
What are the implications of a gap for the local hamiltonian problem? A groundbreaking result by Hastings shows that ground states of 1-D gapped Hamiltonians have an efficient classical approximation by a Matrix-Product-State (MPS) of polynomial bond dimension [Has07].^1 Since the expectation values of local observables of an MPS can be calculated in time polynomial in the number of sites and in its bond dimension (see, e.g., [PVWC07]), Hastings' result implies that 1-d constant-gap local hamiltonian (the restriction of the original problem to 1-D gapped Hamiltonians) belongs to NP.^2 The question of whether such efficient descriptions might also exist for the ground states of 1-D poly-gapped Hamiltonians has also been asked. Under a reasonable complexity-theoretic assumption, we answer this question in the negative. The reasoning for our negative answer is as follows.
We define the unique local hamiltonian problem similarly to the local hamiltonian problem: the conditions for a "no" instance are the same, but for a "yes" instance we demand that there exist a state |ψ⟩ with energy below the lower threshold while all other eigenvalues lie above the upper threshold. We similarly define the unique 1-d hamiltonian problem.
^1 A state |ψ⟩ ∈ (C^d)^{⊗n} has an MPS representation with bond dimension D if it can be written as |ψ⟩ = Σ_{i_1,...,i_n} Tr(A^{[1]}_{i_1} · · · A^{[n]}_{i_n}) |i_1, . . ., i_n⟩, with the A^{[k]}_i being D × D matrices. Note that only ndD² complex numbers are needed to specify the state.

^2 Hastings' result was extended in various directions. In particular, a classical algorithm that efficiently finds the MPS approximation of the ground state was shown to exist; see [LVV15, ALVV17].
The 1-d hamiltonian problem is QMA-Complete [AGIK07, HNN13]. We show that a similar result holds for the "unique" variant of this problem.

Theorem 5. The unique 1-d hamiltonian problem is UQMA-Complete.
The main observation here is that the construction of Aharonov et al. preserves uniqueness. The precise definition of the problem as well as the proof are given in Section 6.1, p. 19. Combining Theorems 5 and 3, we have:

Corollary 6. The unique 1-d hamiltonian problem is QCMA-hard under randomized reductions.
From Corollary 6, we can deduce the following no-go result for the ground states of poly-gapped Hamiltonians. Consider any set of states that are (i) described by poly(n) parameters and (ii) from which one can efficiently compute expectation values of local observables. Matrix product states constitute an example of such a set, and several others have recently been proposed [APD+06, Vid07, HKH+09]. We can show:

Corollary 7. If all ground states of 1-D poly-gapped local Hamiltonians can be approximated to inverse polynomial accuracy by states satisfying properties (i) and (ii) above, then QCMA ⊆ RP^NP.^3

Since it seems unlikely that QCMA ⊆ RP^NP, we view this as a no-go corollary.

Proof sketch: We will show that under the assumptions of the corollary, unique 1-d hamiltonian ∈ NP. Combining that with Corollary 6, by Observation 12.4 we get the desired result, i.e., QCMA ⊆ RP^NP.
Consider the following NP verification for the unique 1-d hamiltonian problem. The prover sends the verifier a classical witness that approximates the ground state and has properties (i) and (ii) above. The length of the witness is polynomial in n by property (i). The verifier uses property (ii) to efficiently calculate the expectation value of the local Hamiltonian H that was received as the input. The verifier accepts if the energy is at most (a + b)/2, and rejects otherwise. Completeness and soundness hold by construction, and therefore, unique 1-d hamiltonian ∈ NP.
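The efficient energy estimate that property (ii) gives the verifier can be sketched, for MPS witnesses, with the standard transfer-matrix contraction. The following is our illustration of a single-site expectation value, not the paper's construction; the tensor convention (shape d × D_left × D_right) and helper names are ours:

```python
import numpy as np

def transfer(A, O=None):
    """Transfer matrix sum_ij O_ij conj(A^i) (x) A^j of one MPS tensor A
    (shape d x Dl x Dr); O defaults to the identity."""
    d, Dl, Dr = A.shape
    if O is None:
        O = np.eye(d)
    T = np.einsum('ij,iab,jcd->acbd', O, A.conj(), A)
    return T.reshape(Dl * Dl, Dr * Dr)

def mps_expectation(tensors, O, site):
    """<psi|O_site|psi> / <psi|psi> for an open-boundary MPS (boundary bond
    dimension 1), in time polynomial in n, d, and the bond dimension D."""
    num = np.array([[1.0]])
    den = np.array([[1.0]])
    for k, A in enumerate(tensors):
        num = num @ transfer(A, O if k == site else None)
        den = den @ transfer(A)
    return (num.item() / den.item()).real

# Product state |0000>: <Z> at any site is exactly 1.
A = np.zeros((2, 1, 1)); A[0, 0, 0] = 1.0
Z = np.diag([1.0, -1.0])
print(mps_expectation([A] * 4, Z, 2))   # -> 1.0
```

Summing such contractions over the poly(n) local terms of H yields the total energy in time polynomial in n and D, which is what the verifier in the proof sketch needs.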
To further analyze the complexity of the local Hamiltonian problem for poly-gapped Hamiltonians, we introduce a variant of the UQMA class, which we call poly-gapped QMA (PGQMA), as follows: in both "yes" and "no" instances, we require that there be a gap (given by a pre-determined quantity larger than an inverse polynomial in the input size) between the acceptance probability of the best witness and that of all the others. We show that the problem 1-d poly-gap local hamiltonian, in which the Hamiltonians are promised to be poly-gapped, is PGQMA-Complete (see Theorem 48). We also present a simple randomized reduction from any UQMA problem to a PGQMA problem (see Lemma 49), which is used to show:

Theorem 8. The 1-d poly-gap local hamiltonian problem is QCMA-hard under randomized reductions.
The proof is given on p. 20. We thus see that, unless BQP = QCMA,^4 the determination of the ground energy of poly-gapped 1-D local Hamiltonians is an intractable problem for quantum computation. Note that this conclusion cannot be drawn from the previous lower bounds on the complexity of the problem [AGIK07, SCV08]. Indeed, the results of [AGIK07] concerning adiabatic quantum computation with a 1-D poly-gapped Hamiltonian indirectly imply that the 1-d poly-gap local hamiltonian problem is BQP-hard,^5 while in [SCV08], the problem was shown to be hard for the class UP ∩ co-UP (the intersection of unique NP with its complement), whose relation to BQP is unknown.
^3 Recall that RP is defined as BPP, except that every x not in the language must be rejected with certainty. In other words, languages in RP have perfect soundness.
^4 BQP is the class of problems that can be efficiently solved, with high probability, by a quantum computer.

^5 The construction of [AGIK07] for adiabatic quantum computation with one-dimensional Hamiltonians provides a way to encode the outcome of any polynomial quantum computation into the expectation value of a measurement, in the computational basis, of the first site of the ground state of a 1-D poly-gapped local Hamiltonian with zero ground state energy. By adding a small perturbation to the Hamiltonian, penalizing the first site when it is not in the zero state, with a strength that is much smaller than the spectral gap but still inverse polynomial in the number of sites, we can readily conclude that this construction shows that the 1-d poly-gap local hamiltonian problem is BQP-hard.

Impossibility Results for UQMA
Finally, we examine the UQMA case. We show that, when attempting to apply the brute-force analogue of the previous proofs to UQMA, we already fail in the first (inefficient) component. A new idea seems to be required if an extension of the Valiant-Vazirani approach is possible at all for QMA. This challenge is demonstrated by a simple family of QMA "yes" instances for which the first component fails to work:

Example 1. Let C be a quantum circuit on l qubits with the property that there exists a subspace V of dimension 2, s.t. ∀|ψ⟩ ∈ V, Pr(C accepts |ψ⟩) = 1, and ∀|ψ⟩ ∈ V^⊥, Pr(C accepts |ψ⟩) = 0.
In the classical case, the analogous example of two solutions is easy to deal with by choosing a "filter" (hash function) that screens out about half of the witnesses. A natural quantum analogue is to use a random projection that rejects half of the space. In Proposition 1, we prove that such a transformation does not create an inverse polynomial gap between the two states in the subspace V: with probability exponentially close to 1, regardless of the dimension of the random projection, all states in V are accepted with probabilities exponentially close to each other.
The reason for this is that the squared norm of the projection of any N-dimensional unit vector onto a d-dimensional random subspace is concentrated around d/N, with a standard deviation of order √d/N for sufficiently large N. Therefore, regardless of how we choose d, the gap is always less than 1/√N (which is exponentially small in the number of qubits). Hence, the behavior of random sets - the filters in the classical setting - is very different from the behavior of random subspaces, their natural quantum analogue.
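This concentration is easy to check numerically. The sketch below (our illustration, with arbitrary parameters N and d) samples Haar-random d-dimensional subspaces via QR orthonormalization and measures the squared projection of a fixed unit vector:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, trials = 512, 64, 200

v = np.zeros(N); v[0] = 1.0        # any fixed unit vector
vals = []
for _ in range(trials):
    # Haar-random d-dimensional subspace: orthonormalize a Gaussian N x d matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((N, d)))
    vals.append(np.linalg.norm(Q.T @ v) ** 2)   # ||P v||^2 for P = Q Q^T
vals = np.array(vals)

print(vals.mean())   # concentrates around d/N = 0.125
print(vals.std())    # of order sqrt(d)/N ~ 0.016
```

The empirical mean tracks d/N and the fluctuations are of order √d/N, matching the statement above: two orthogonal states in V see squared projections that differ by far less than any inverse polynomial gap would require.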
One might hope that a more refined measurement would help. In fact, [Sen06] has shown that the two distributions that result from applying a random von Neumann measurement to two arbitrary orthogonal states have constant total variation distance with all but exponentially small probability. This sounds promising; moreover, a similar effect can be achieved efficiently by quantum t-designs, as shown by [AE07]. Unfortunately, a constant total variation distance between two distributions does not imply an efficient method to distinguish between them; this problem is tightly related to complete problems for the complexity class SZK, which are not known to have quantum polynomial time algorithms. Thus, the question of whether there exists a randomized reduction from QMA to UQMA remains open.

Subsequent Works
Since the first version of this work appeared in 2008 [ABOBS08], several other papers have extended the study of notions related to those that constituted the focus of this work.
In Section 1.3, we presented Example 1, for which the direct approach to showing a reduction from QMA to UQMA fails. We argued that an alternative idea is needed. Indeed, in [JKK+12], Jain et al. presented a different approach that successfully, among other things, tackles that example. Let Few-QMA be the analogue of UQMA with polynomially many witnesses. They show that Few-QMA ⊆ P^UQMA via the following alternative technique. Another way to resolve the classical analog of Example 1 (i.e., an NP instance with exactly two accepting witnesses) is to ask for 2 witnesses in lexicographic order that both pass the original verification. Indeed, if the valid witnesses are w_1 and w_2 (where we assume w_1 < w_2), the unique witness that will be accepted is (w_1, w_2). The approach can be generalized to the setting in which there are (at most) polynomially many witnesses. But what can play the role of the lexicographic order in the quantum setting? It turns out that the properties of the anti-symmetric subspace bear some resemblance to those of the lexicographic order. For example, suppose the subspace V in Example 1 is spanned by the orthogonal states |ψ_1⟩ and |ψ_2⟩. In this case, the unique anti-symmetric state with respect to two registers is (1/√2)(|ψ_1⟩ ⊗ |ψ_2⟩ − |ψ_2⟩ ⊗ |ψ_1⟩). Therefore, the verification that takes two registers, tests that they are both in V, and tests that the two registers are anti-symmetric has a unique eigenvector that is accepted with probability 1, while all states orthogonal to it are rejected with certainty. As in the classical setting, Jain et al. generalize this approach (only) to polynomially many witnesses and prove that Few-QMA ⊆ P^UQMA.
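The rank-1 phenomenon behind this verification can be checked directly in a toy dimension. The sketch below (our illustration; the ambient dimension is an arbitrary choice) builds the projectors and verifies that the antisymmetric part of V ⊗ V is one-dimensional:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4                                    # toy ambient dimension (arbitrary)
# A 2-dimensional "accepting" subspace V spanned by orthonormal psi1, psi2.
Q, _ = np.linalg.qr(rng.standard_normal((dim, 2)))
psi1, psi2 = Q[:, 0], Q[:, 1]

P_V = np.outer(psi1, psi1) + np.outer(psi2, psi2)          # projector onto V
SWAP = (np.eye(dim * dim).reshape(dim, dim, dim, dim)
        .transpose(1, 0, 2, 3).reshape(dim ** 2, dim ** 2))
P_anti = (np.eye(dim ** 2) - SWAP) / 2     # projector onto antisymmetric subspace

# Verifier: both registers in V AND the joint state antisymmetric.
P = np.kron(P_V, P_V) @ P_anti @ np.kron(P_V, P_V)

# The unique accepted state (|psi1,psi2> - |psi2,psi1>)/sqrt(2).
phi = (np.kron(psi1, psi2) - np.kron(psi2, psi1)) / np.sqrt(2)

assert np.allclose(P @ phi, phi)                 # accepted with certainty
assert np.linalg.matrix_rank(P, tol=1e-10) == 1  # and it is the only one
```

Since SWAP commutes with P_V ⊗ P_V, the operator P is a projector onto V ⊗ V intersected with the antisymmetric subspace, which for a 2-dimensional V is exactly the span of the single state above.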
It is known that the class MA can have perfect completeness, i.e., MA = MA_1 [ZF87]. Jordan et al. [JKNN12] proved the analogous statement for QCMA, i.e., QCMA = QCMA_1. It remains to be seen whether Theorem 3 could be strengthened to show that there exists a UQCMA_1 promise problem that is QCMA-Hard (or QCMA_1-Hard, since these classes are equal).
Several other works have studied the role of the spectral gap of local Hamiltonians. Ambainis [Amb14] defined the spectral gap problem: the input is a local Hamiltonian and a parameter ε, where ε is inverse polynomial in the number of qubits; the problem is to determine whether the spectral gap of the Hamiltonian is at most ε or above 2ε. Ambainis proved that this problem is in P^QMA[log], that is, solvable by a polynomial-time TM with a logarithmic number of oracle queries to QMA. Gharibian and Yirka [GY19] proved that it is P^UQMA[log]-hard under a Cook reduction; they thereby improved Ambainis's result, and proved hardness^6 for 4-local Hamiltonians.

González-Guillén and Cubitt [GGC18] proved that it is impossible to show the QMA-hardness of constant-gapped Hamiltonians via certain generalizations of Kitaev's circuit-to-Hamiltonian construction (see [KSV02, Section 14.4.1]). It was shown that the problem of deciding whether a translationally invariant Hamiltonian has a constant gap or is gapless, when taking the size of the system n to infinity, is undecidable on a 2-D [CPGW15] or a 1-D [BCLPG20] system. Fefferman and Lin proved that PreciseQMA = PSPACE, where the completeness and soundness gap in PreciseQMA can be exponentially small [FL16]. Recently, Deshpande, Gorshkov and Fefferman [DGF22] defined the class PrecisePGQMA, which restricts PreciseQMA to instances that have an inverse-polynomial gap, and proved that PrecisePGQMA = PP.

The spectral gap also plays a role in quantum algorithms. For example, Ref. [GS17] gives an algorithm that constructs the ground state of a special class of Hamiltonians. The running time of the algorithm scales inverse polynomially with the uniform spectral gap of the Hamiltonian, meaning the minimal spectral gap over all subsystems; see [GS17] for more detail.

Organization
The structure of the remainder of the paper is as follows. In Section 2, we present the definitions. Section 3 comprises a review of the proof of the Valiant-Vazirani Theorem, while Sections 4 and 5 contain the extensions of the theorem to the classes MA and QCMA, respectively. In Section 6, we discuss some alternative definitions of the class UQMA and complete problems for this class; we also show that the two classes are equivalent under randomized reductions. Finally, in Section 7, we prove the impossibility results for the extension of our results to QMA using similar ideas.

Definitions
This work lies at the intersection of three fields with which we assume the reader has some familiarity: computational complexity (see [AB09]), quantum computing (see [NC10]), and Hamiltonian complexity (see [GHLS15]). We begin by defining a few standard complexity classes that we will consider throughout the paper. Then we turn to the definition of unique versions of MA, QCMA, and QMA that, to the best of our knowledge, have not been formalized before.

Definition 11 (Randomized reduction (adapted from [VV85])). A promise problem A = (A_yes, A_no) is reducible to a promise problem B = (B_yes, B_no) by a randomized reduction if there exists a randomized polynomial-time Turing Machine (TM) M and a polynomial p s.t.:

• x ∈ A_yes ⇒ Pr_y[M(x, y) ∈ B_yes] ≥ 1/p(|x|),^7
• x ∈ A_no ⇒ Pr_y[M(x, y) ∈ B_no] = 1,

where y denotes the random bits of the TM M. We denote this by A ≤_r B. We say that a promise problem B is C-Hard under randomized reductions for some complexity class C if A ≤_r B for every A ∈ C. The motivation behind randomized reductions stems from, among other reasons, the immediate properties given in Observation 12.
Definition 14 (Merlin-Arthur (MA) [Bab85]). A promise problem L = (L_yes, L_no) ∈ MA if there exists a probabilistic TM M, running in time polynomial in its first argument and with random bits denoted by the string r, s.t. for every x ∈ {0, 1}*:

• x ∈ L_yes ⇒ ∃y ∈ {0, 1}^{poly(|x|)} s.t. Pr_r[M(x, y, r) accepts] ≥ 2/3,
• x ∈ L_no ⇒ ∀y ∈ {0, 1}^{poly(|x|)}, Pr_r[M(x, y, r) accepts] ≤ 1/3.

Definition 15 (Quantum-Classical-Merlin-Arthur (QCMA) [AN02]). A promise problem L = (L_yes, L_no) ∈ QCMA if there exists a uniformly generated^11 quantum circuit U_x, having l(x) qubits as input and requiring m(x) ancilla qubits initialized to |0^m⟩, such that for every x ∈ {0, 1}*:

• x ∈ L_yes ⇒ ∃y ∈ {0, 1}^l s.t. ‖Π_1 U_x (|y⟩ ⊗ |0^m⟩)‖² ≥ 2/3,
• x ∈ L_no ⇒ ∀y ∈ {0, 1}^l, ‖Π_1 U_x (|y⟩ ⊗ |0^m⟩)‖² ≤ 1/3,

where Π_1 is the projection onto |1⟩ in the first qubit.

For brevity, we write l = l(x) and m = m(x) when x can be understood from the context.

^7 The probability is over the coin tosses of the randomized TM.
Definition 16 (Quantum Merlin-Arthur (QMA) [Kit99, KSV02]). A promise problem L = (L_yes, L_no) ∈ QMA if there exists a uniformly generated quantum circuit U_x, having l(x) qubits as input and requiring m(x) ancilla qubits initialized to |0^m⟩, such that for every x ∈ {0, 1}*:

• x ∈ L_yes ⇒ ∃|ψ⟩ ∈ (C²)^{⊗l} s.t. ‖Π_1 U_x (|ψ⟩ ⊗ |0^m⟩)‖² ≥ 2/3,
• x ∈ L_no ⇒ ∀|ψ⟩ ∈ (C²)^{⊗l}, ‖Π_1 U_x (|ψ⟩ ⊗ |0^m⟩)‖² ≤ 1/3,

where Π_1 is the projection onto |1⟩ in the first qubit.

New Definitions
We now provide the analogous, unique versions for the classes MA, QCMA and QMA.
Definition 19 (Unique Quantum Merlin-Arthur (UQMA)). A promise problem L = (L_yes, L_no) ∈ UQMA if there exists a uniformly generated polynomial quantum circuit U_x that can be computed in poly(|x|) time, having l(x) qubits as input and requiring m(x) ancilla qubits initialized to |0^m⟩, s.t.:

• x ∈ L_yes ⇒ there exists a state |ψ⟩ ∈ (C²)^{⊗l} s.t. ‖Π_1 U_x (|ψ⟩ ⊗ |0^m⟩)‖² ≥ 2/3, and every |φ⟩ orthogonal to |ψ⟩ satisfies ‖Π_1 U_x (|φ⟩ ⊗ |0^m⟩)‖² ≤ 1/3,
• x ∈ L_no ⇒ ∀|ψ⟩ ∈ (C²)^{⊗l}, ‖Π_1 U_x (|ψ⟩ ⊗ |0^m⟩)‖² ≤ 1/3,

where Π_1 is the projection onto |1⟩ in the first qubit.
Later we show an alternative way to define this class, see Definition 43 and Lemma 44.

The Valiant-Vazirani Theorem Revisited
In the introduction, the Valiant-Vazirani theorem was mentioned as something that should be interpreted as "it is not much easier to solve UP than it is to solve NP". We now discuss this interpretation in greater detail. By combining the Valiant-Vazirani theorem (see Theorem 1) with Observation 12.4 (where we use D = UP and C = NP), we obtain the following conclusion: an efficient randomized algorithm for a UP-Complete problem implies an efficient randomized algorithm for all of NP. Of course, it is conjectured^12 that no such efficient randomized algorithm exists for NP, and therefore, no such efficient algorithm can exist for a UP-Complete problem. Our results can likewise be used to show impossibility results based on natural complexity conjectures; see, for example, Corollary 7.
We are now ready to review the proof of the Valiant-Vazirani theorem (see Theorem 1). We divide the proof into three components, so that we can better understand which components of the original construction fail in the probabilistic and quantum settings.
The original proof of the theorem works with the well-known NP-complete problem sat. We will not use it, however, because sat lacks a simple variant that is complete for the classes MA or QCMA. Instead, we will use the following problem:

Definition 21 (Trivial NP Problem (tnpp)). The words in tnpp are tuples ⟨V, x, l, t⟩, where V is a description of a deterministic Turing machine, x is a string, and l, t ∈ N are given in unary. ⟨V, x, l, t⟩ ∈ tnpp if there exists a string y ∈ {0, 1}^l s.t. V(x, y) accepts in at most t steps.
It can easily be seen that tnpp is NP-Complete. Similarly, the following promise problem is a "unique" version of tnpp that is UP-Complete.
Definition 22 (Unique-NP Promise Problem (unppp)). The promise problem unppp = (unppp_yes, unppp_no). The words in unppp are tuples ⟨V, x, l, t⟩, where V is a description of a deterministic Turing machine, x is a string of length n, and l, t ∈ N are given in unary. ⟨V, x, l, t⟩ ∈ unppp_yes if there exists exactly one string y ∈ {0, 1}^l s.t. V(x, y) accepts in at most t steps. ⟨V, x, l, t⟩ ∈ unppp_no if for all strings y ∈ {0, 1}^l, V(x, y) does not accept in t steps.

Proof Sketch
Our goal is to prove Theorem 1 by showing that unppp is NP-Hard under randomized reductions. This completes the proof since, as mentioned already, unppp ∈ UP. Since tnpp (recall Definition 21) is NP-Hard, by Observation 12.2, it is enough to prove that tnpp ≤_r unppp.
We present the proof in a series of attempts, each of which improves on its predecessor by introducing a new component. The third attempt is the one that completes the proof.
Component 1: The right random "filter" for the right size

For a tnpp instance I = ⟨V, x, l, t⟩, let W be its set of accepting witnesses: W ≡ {y ∈ {0, 1}^l : V(x, y) accepts in at most t steps}, and let w ≡ |W|. Notice that I ∈ tnpp ⇐⇒ w > 0.
Definition 23 (R-restriction). Let R be a set of strings of length l, with the property that there is an algorithm that answers whether y ∈ R in exactly T time steps. Given a Turing machine V, we call the following Turing machine the R-restriction of V, and denote it by V_R:
1. On input (x, y), reject if y ∉ R.
2. Run V on (x, y).
We view the R-restriction as a filter added to the original problem, because the new machine, V_R, accepts only the accepting witnesses of the original machine V that belong to the set R.
Let us denote by I′ the instance ⟨V_R, x, l, t + T⟩. Component 1 takes the filter R to be a random set, where each string in {0, 1}^l is chosen independently with probability 1/w. Notice that the Turing machine V_R may not have a short description, because to decide whether y ∈ R, all the elements of R must somehow be "hard-wired" into the machine. Indeed, using Kolmogorov complexity arguments [CT06], one can easily show that for large values of |R|, the length of the description of the Turing machine that decides whether y ∈ R cannot be polynomial. Therefore, the mapping from I to I′ is not efficient. Similarly, T, which was defined as the time it takes for the TM to decide membership in R, might not be polynomial. These drawbacks will be circumvented in component 3.
We claim that I′ will be in unppp_yes with probability Ω(1). Let W′ = {y ∈ {0, 1}^l : V_R(x, y) accepts in t + T steps}. Defining W = {y_1, . . ., y_w},

Pr[I′ ∈ unppp_yes] = Pr[|W′| = 1] = Pr[|W ∩ R| = 1] = Pr[∃i s.t. y_i ∈ R and y_j ∉ R for all j ≠ i] = Σ_{i=1}^{w} Pr[y_i ∈ R and y_j ∉ R for all j ≠ i] = w · (1/w) · (1 − 1/w)^{w−1} ≥ 1/e.    (2)

The first equality follows from I′ ∈ unppp_yes ⇐⇒ |W′| = 1 and the second from W′ = W ∩ R. The third is a direct consequence of the definition of the y_i. The fourth stems from the fact that the events in the line above are all disjoint, and the fifth is based on the construction of R (where each element was added to the set independently with probability 1/w). Therefore, ⟨V_R, x, l, t + T⟩ is a "yes" instance with probability at least 1/e. Using this idea, we define 2^l (random) instances, I_1, . . ., I_{2^l}, one for every possible value of w: I_j = ⟨V_{R_j}, x, l, t + T_j⟩. Here, R_j is sampled so that each string belongs to R_j with probability 1/j, and T_j is the running time of the algorithm that decides membership in the set R_j (note that T_j may be exponential). We claim:

Lemma 24. (Completeness) If I ∈ tnpp, then there exists a j ∈ [2^l] for which, with probability Ω(1) over the choice of R_j, I_j ∈ unppp_yes. (Soundness) If I ∉ tnpp, then all the I_j are in unppp_no.
Proof. The completeness follows from the previous argument: one of the I_j's is I_w. By Eq. (2), I_w ∈ unppp_yes with probability at least 1/e. Soundness: if I ∉ tnpp, then W = ∅, so for every j, W′_j = W ∩ R_j = ∅, and therefore, I_j ∈ unppp_no.
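The 1/e bound of Eq. (2) is easy to reproduce empirically; a quick simulation (our illustration, not part of the proof) of the exactly-one-survivor event under the probability-1/w filter:

```python
import random

def unique_survivor_prob(w, trials=50_000, seed=0):
    """Empirical probability that exactly one of w witnesses survives a random
    filter that keeps each string independently with probability 1/w."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        survivors = sum(rng.random() < 1 / w for _ in range(w))
        hits += (survivors == 1)
    return hits / trials

# The exact value is w * (1/w) * (1 - 1/w)**(w - 1), which is at least 1/e.
print(unique_survivor_prob(100))
```

For w = 100 the estimate lands near e^{−0.99} ≈ 0.37, matching the closed form in Eq. (2).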
Suppose we try to prove that tnpp ≤_r unppp by using Lemma 24. Consider the following reduction: we sample j from 1 to 2^l uniformly at random and output the unppp instance I_j as defined above. Completeness asserts that for a "yes" instance, we accept with probability Ω(1) if we guessed j correctly. Soundness asserts that we always reject a "no" instance. We have two sources of failure for this reduction: (i) the probability of guessing j correctly is 1/2^l, and therefore, completeness holds only with exponentially small probability; and (ii) since the sets R_j are chosen uniformly at random, the description of I_j is inefficient and the running time T_j might be exponential (recall that the time bound is given in unary). Hence, the reduction would take exponential time. In component 2, we resolve the first issue, and in component 3, we resolve the second issue.
Component 2: An approximated "filter" also works

The second component addresses the fact that we do not know the value w, and therefore, the reduction described in component 1 has an exponentially small completeness probability. The key idea is that approximating w to within a constant multiplicative factor only changes the probability of having a unique solution by another constant factor.
More explicitly, we transform our instance I into a polynomial number of random instances: I_0, I_1, . . ., I_l. These instances are formed by choosing random sets R_k again, but now, each element is taken with probability 1/2^k. A statement similar to Lemma 24 also holds here, despite the fact that now we have only l + 1 instances (whereas before we had 2^l):

Lemma 25. (Completeness) If I ∈ tnpp, then there exists a j ∈ {0, . . ., l} for which, with probability Ω(1) over the choice of R_j, I_j ∈ unppp_yes. (Soundness) If I ∉ tnpp, then all the I_j are in unppp_no.
Proof. The soundness analysis follows from exactly the same argument as in component 1 (see Lemma 24). To analyze completeness, we notice that for some k ∈ {0, . . ., l}, 2^k ≤ w < 2^{k+1}. Hence, for such k,

Pr[I_k ∈ unppp_yes] = w · 2^{−k} · (1 − 2^{−k})^{w−1} ≥ (1 − 2^{−k})^{2^{k+1}} = Ω(1),

where the last expression equals 1 for k = 0 (since then w = 1) and is at least 1/16 for every k ≥ 1.

Component 3: An approximated pseudorandom filter is just as good

The third component attends to the inefficiency of the previous reduction: a random and exponentially large set R cannot have a polynomial-size description. The solution is to replace the randomness with a suitable notion of pseudorandomness. In this case, the pseudorandom objects of interest are pairwise independent hash functions.

Definition 26 (Pairwise independent hash functions, see e.g. [AB09, Section 8.2.2]). A family of functions H_{n,m}, where each h ∈ H_{n,m} is a map h : {0, 1}^n → {0, 1}^m, is called a pairwise independent^13 family of hash functions if for every x ≠ x′ ∈ {0, 1}^n and every u, v ∈ {0, 1}^m,

Pr_{h∈H_{n,m}}[h(x) = u ∧ h(x′) = v] = 2^{−2m}.

Note that this probability is the same as if the map h were chosen uniformly at random from the set of all functions that map n bits to m bits. An interesting property is the existence of families that have concise descriptions and that can be efficiently computed:

Fact 27 ([CW79], see also [AB09, Section 8.2.2]). Efficient pairwise independent hash functions exist. More precisely, there exist a polynomial T(n, m) and a family of pairwise independent hash functions H_{n,m} such that:

• There exists a randomized algorithm to sample a TM that computes h for a uniformly random h ∈ H_{n,m}. By abuse of notation, we also denote the TM that computes h by h. The running time of this algorithm is T(n, m).
• For every h ∈ H n,m , the running time of h(x) is T (n, m).
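A standard concrete family realizing Fact 27 is h(x) = Ax + b over GF(2). The sketch below (our illustration; the helper name is ours) samples such an h and builds the corresponding filter h^{−1}(0):

```python
import random

def sample_hash(n, m, rng=None):
    """Sample h: {0,1}^n -> {0,1}^m from the standard pairwise independent
    family h(x) = Ax + b over GF(2), with A a uniformly random m x n bit
    matrix and b a uniformly random m-bit vector. The description (A, b) has
    size O(nm) and h is evaluated in O(nm) bit operations."""
    rng = rng or random.Random(0)
    A = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
    b = [rng.randrange(2) for _ in range(m)]
    def h(x):      # x is a tuple/list of n bits
        return tuple((sum(a * xi for a, xi in zip(row, x)) + bi) % 2
                     for row, bi in zip(A, b))
    return h

# The filter R = h^{-1}(0): a string y passes iff h(y) == (0, ..., 0).
h = sample_hash(8, 3)
R = [y for y in range(256)
     if h(tuple((y >> i) & 1 for i in range(8))) == (0, 0, 0)]
# On average |R| = 2^8 / 2^3 = 32, and any two distinct strings land in R
# independently in the pairwise sense, which is all that the proof needs.
print(len(R))
```

Pairwise independence holds because, for x ≠ x′, h(x) is uniform (b is uniform) and h(x) − h(x′) = A(x − x′) is uniform and independent of b.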
In this final attempt, we define l + 1 instances I_0, I_1, . . ., I_l. Again, each I_j is based on some R-restriction of V, but now we use a restriction that is based on an (efficient) pairwise independent hash function: for j ∈ {0, . . ., l}, we sample h from the family H_{l,j+2}, and define R_j = h^{−1}(0). We define I_j = ⟨V_{R_j}, x, l, t + T(l, j + 2)⟩. Note that testing membership in R_j is efficient (deciding membership in R_j is done by testing whether h(y) = 0, which takes T(l, j + 2) steps, where T is the polynomial in Fact 27).
Furthermore, there exists a polynomial time randomized TM that receives as input I = ⟨V, x, l, t⟩ and j ∈ {0, . . ., l} and outputs I_j.
Proof of Lemma 28: Also here, the soundness analysis follows from exactly the same argument as in component 1 -see Lemma 24.
For completeness, we make use of the following lemma:

Lemma 29. Let W ⊆ {0,1}^l be of size w, let j ∈ N be such that 2^j ≤ w < 2^{j+1}, and let h be a random function sampled from a pairwise independent hash function family H_{l,j+2}. Then,

Pr_h(|h^{−1}(0) ∩ W| = 1) ≥ 1/8.

This lemma is proved in Appendix A. Note that I_j = ⟨V_{R_j}, x, l, t + T(l, j + 2)⟩ ∈ unppp_yes is equivalent to |W_j| = 1. By construction, W_j = W ∩ R_j = W ∩ h_j^{−1}(0), and using Lemma 29, |h_j^{−1}(0) ∩ W| = 1 with probability at least 1/8 over the choice of h. The "furthermore" part of the lemma follows immediately from the fact that we use an efficient pairwise independent hash function in the construction of R_j.
We are now ready to prove the Valiant-Vazirani Theorem.

Proof of Theorem 1: The randomized reduction is given in Algorithm 1.

Input: ⟨V, x, l, t⟩
1 Sample j uniformly at random from {0, . . ., l}
2 Sample h uniformly at random from a family of efficient pairwise independent hash functions H_{l,j+2}, and let R_j = h^{−1}(0)
Output: ⟨V_{R_j}, x, l, t + T(l, j + 2)⟩
Algorithm 1: The randomized reduction from tnpp to unppp

The efficiency of the reduction follows from the efficiency of the pairwise independent hash functions that we use (see the first item in Fact 27). Completeness: with probability 1/(l+1), we sample in Line 1 the j which satisfies 2^j ≤ w < 2^{j+1}; conditioned on guessing j correctly, by Lemma 28, I_j ∈ unppp_yes with probability at least 1/8. Overall, if I ∈ tnpp_yes, then the reduction maps it to a unppp_yes instance with probability at least 1/(8(l+1)). Soundness follows directly from Lemma 28.
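The completeness argument can be checked numerically. The sketch below uses the affine GF(2) family as a hypothetical stand-in for H_{l,j+2} of Fact 27, fixes a witness set W with 2^j ≤ |W| < 2^{j+1}, and estimates the probability that exactly one witness survives the filter R_j = h^{−1}(0), which Lemma 29 lower-bounds by 1/8.

```python
import random

def sample_hash(n, m, rng):
    # Affine GF(2) family: a pairwise independent stand-in for H_{n,m}.
    A = [rng.getrandbits(n) for _ in range(m)]
    b = [rng.getrandbits(1) for _ in range(m)]
    def h(x):  # x is an integer encoding an n-bit string
        # bin(A[i] & x).count("1") is the inner product <A_i, x> before mod 2
        return tuple((bin(A[i] & x).count("1") + b[i]) % 2 for i in range(m))
    return h

l = 10
rng = random.Random(1)
W = rng.sample(range(2 ** l), 37)   # witness set with 2^5 <= |W| < 2^6, so j = 5
j = 5
trials, unique = 4000, 0
for _ in range(trials):
    h = sample_hash(l, j + 2, rng)
    survivors = [y for y in W if h(y) == (0,) * (j + 2)]
    unique += (len(survivors) == 1)
# Lemma 29: exactly one witness survives with probability at least 1/8.
assert unique / trials > 1 / 8
```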

4 Valiant-Vazirani Extended to the Class MA
In this section, we prove Theorem 2. By exactly the same argument as in Corollary 20, Theorem 2 implies a corresponding corollary for MA. We first define the promise problems that we will work with throughout this section.

Definition 31 (tmapp). tmapp = (tmapp_yes, tmapp_no). The words in tmapp are tuples ⟨V, x, p_1, p_2, l, t⟩, where V is a description of a randomized Turing machine, x is a string, 0 ≤ p_1 < p_2 ≤ 1, and l, t ∈ N.
⟨V, x, p_1, p_2, l, t⟩ ∈ tmapp_yes if there exists a string y of length l s.t. Pr(V(x, y) accepts in t steps) ≥ p_2.
⟨V, x, p_1, p_2, l, t⟩ ∈ tmapp_no if for all strings y of length l, Pr(V(x, y) accepts in t steps) ≤ p_1.
It can be easily verified that tmapp is MA-Complete. The containment tmapp ∈ MA uses error reduction for MA (more precisely, that MA_{p_1(n),p_2(n)} ⊆ MA whenever p_2(n) − p_1(n) ≥ 1/q(n) for some polynomial q, which is proved in the same way as error reduction for BPP).
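As a sanity check of the error-reduction step, the following sketch computes the exact acceptance probability of a majority vote over independent runs. For simplicity it uses a constant gap (0.4, 0.6); for a 1/poly(n) gap the same Chernoff-type argument applies with polynomially many repetitions.

```python
from math import comb

def majority_accept_prob(p, r):
    # Probability that a strict majority of r independent runs accept,
    # when a single run accepts with probability p (r odd).
    return sum(comb(r, k) * p ** k * (1 - p) ** (r - k)
               for k in range(r // 2 + 1, r + 1))

# Repetition amplifies the promise gap (p1, p2) = (0.4, 0.6) to (1/3, 2/3):
r = 101
assert majority_accept_prob(0.6, r) > 2 / 3   # "yes" side is boosted toward 1
assert majority_accept_prob(0.4, r) < 1 / 3   # "no" side is suppressed toward 0
```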
Next, we define the unique variant of tmapp, which is UMA-Complete:

Definition 32 (Unique MA Promise Problem (umapp)). umapp = (umapp_yes, umapp_no). The words in umapp are tuples ⟨V, x, p_1, p_2, l, t⟩, where V is a description of a randomized Turing machine, x is a string, 0 ≤ p_1 < p_2 ≤ 1, and l, t ∈ N. The parameters l, t, p_1, p_2 are given in unary^15.
⟨V, x, p_1, p_2, l, t⟩ ∈ umapp_yes if there exists a string y of length l s.t. Pr(V(x, y) accepts in t steps) ≥ p_2, and for every y′ ≠ y of length l, Pr(V(x, y′) accepts in t steps) ≤ p_1.
⟨V, x, p_1, p_2, l, t⟩ ∈ umapp_no if for all strings y of length l, Pr(V(x, y) accepts in t steps) ≤ p_1.
We will prove Theorem 2 by showing that tmapp ≤_r umapp. Hence, our goal is to create a transformation that takes a tmapp_yes instance (right panel in Fig. 1) to a umapp_yes instance (Fig. 2) with inverse polynomial probability, and a tmapp_no instance to a umapp_no instance (left panel in Fig. 1) with probability 1. We divide the potential witnesses into three groups based on their probability of acceptance; see Equation (3).

Figure 1: Typical "no" and "yes" tmapp instances. The y-axis is probability. The ellipses are all the 2^l different witnesses of a specific instance. The red lines outline the boundaries (p_1, p_2); the maximal acceptance probability of a tmapp instance is promised not to be in that interval. The left panel is an example of a "no" instance, and the right panel is an example of a "yes" instance.
Let us look at the R-restriction of V, denoted V_R, where R is a random set in which each element of [2^l] is taken with some probability p. We denote the resulting instance by I′ = ⟨V_R, x, p_1, p_2, l, t + t′⟩, where t′ is the time taken for the machine V_R to decide membership in R. Define Y_yes, Y_gap, Y_no for I′, as was done in Equation (3). For every y of length l, denote f(y) = Pr(V(x, y) accepts in t steps) and f′(y) = Pr(V_R(x, y) accepts in t + t′ steps).
As we show next, using the same method as in the NP case fails.
15 See Footnote 14.

Problems with the First Component
We present an instance showing that implementing component 1 in the probabilistic case fails. The example is an instance I_problematic = ⟨V_problematic, x, p_1, p_2, l, t⟩ ∈ tmapp_yes, which can be seen in Fig. 3. Because the size of the set Y_gap is exponentially larger than that of the set Y_yes, we cannot "filter" one element from Y_yes and none from Y_gap with non-negligible probability. For example, suppose we pick the size of R based on the set Y_yes, so each element is chosen with probability 1/2. With probability Ω(1), exactly one element will be chosen from Y_yes, but about half of the elements of Y_gap will also be chosen; therefore, the second property of a umapp_yes instance fails to hold. If we instead pick the elements of R based on the size of Y_gap, so that each element is picked with probability 1/(2^l − 2), then with probability (1 − 1/(2^l − 2))^2 (which is exponentially close to one), no element will be picked from Y_yes; therefore, the first property of a umapp_yes instance fails to hold.

The Fourth Component
We first define the notion of a lightweight-gap instance:

Definition 34 ("lightweight-gap" instance). An instance I = ⟨V, x, p_1, p_2, l, t⟩ is a "lightweight-gap" tmapp_yes instance if it is a tmapp_yes instance and |Y_gap| < 3|Y_yes|.
Lemma 38 explains why lightweight-gap instances are not prone to the problem that was shown in Section 4.1. But first we will see how to create a very simple transformation that takes a general tmapp_yes instance to a "lightweight-gap" tmapp_yes instance:

Lemma 35. Let Î be a tmapp instance. There exists an efficient randomized transformation that maps Î to several instances I_1, . . ., I_{l−2} with the following properties: if Î ∈ tmapp_yes, then at least one I_k is a "lightweight-gap" tmapp_yes instance, and if Î ∈ tmapp_no, then every I_k ∈ tmapp_no.

Proof. The transformation is as follows. We first map Î = ⟨V̂, x, p̂_1, p̂_2, l, t̂⟩ to I = ⟨V, x, 1/l, 1 − 1/l, l, t⟩. This is done by using standard error reduction techniques. Here we crucially use the fact that p̂_1 and p̂_2 are represented in unary, and therefore p̂_2 − p̂_1 ≥ 1/poly(n), where n is the input size.

Observation 36. Let I_1 = ⟨V, x, p_1, p_2, l, t⟩ and let I_2 = ⟨V, x, q_1, q_2, l, t⟩, where p_1 ≤ q_1 < q_2 ≤ p_2. Then I_1 ∈ tmapp_yes implies I_2 ∈ tmapp_yes, and I_1 ∈ tmapp_no implies I_2 ∈ tmapp_no.

The observation follows immediately from the definitions of tmapp. The second step of the transformation is as follows: we take the instance I = ⟨V, x, 1/l, 1 − 1/l, l, t⟩ and define l − 2 instances I_1, . . ., I_{l−2}, where I_j = ⟨V, x, j/l, (j+1)/l, l, t⟩. By Observation 36, we know that if I ∈ tmapp_yes then I_k ∈ tmapp_yes for all k, and that if I ∈ tmapp_no then I_k ∈ tmapp_no for all k.
We are left to prove that when I ∈ tmapp_yes, one of the I_k is a "lightweight-gap" tmapp_yes instance. This will follow from Observation 37:

Observation 37 (Existence of a lightweight range (see also Fig. 4)). We define l − 1 sets: for j ∈ {1, . . ., l − 1}, let Y_j = {y : |y| = l and Pr(V(x, y) accepts in t steps) ≥ j/l}. If I ∈ tmapp_yes, then |Y_j| < 3|Y_{j+1}| for some j ∈ {1, . . ., l − 2}.

Proof. First, notice that I ∈ tmapp_yes implies |Y_{l−1}| ≥ 1. Now, assume by contradiction that the inequality does not hold for any j, i.e., |Y_j| ≥ 3|Y_{j+1}| for every j ∈ {1, . . ., l − 2}. Then |Y_1| ≥ 3^{l−2}|Y_{l−1}| ≥ 3^{l−2} > 2^l, where the strict inequality holds for l ≥ 7, which can be assumed w.l.o.g. The total number of witnesses is 2^l, and therefore |Y_1| ≤ 2^l. Contradiction.
To prove Lemma 35, we observe that if |Y_j| < 3|Y_{j+1}| for some j ∈ {1, . . ., l − 2}, then I_j is a "lightweight-gap" tmapp_yes instance. Observation 37 asserts that such a j indeed exists, which completes the proof of Lemma 35.
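Observation 37 is a pure pigeonhole statement and is easy to verify in code. The sketch below, using hypothetical acceptance probabilities, searches for a j with |Y_j| < 3|Y_{j+1}|, with the cumulative sets Y_j = {y : f(y) ≥ j/l}.

```python
def find_lightweight_j(accept_probs, l):
    # Y_j = {y : f(y) >= j/l}; return some j in {1, ..., l-2} with
    # |Y_j| < 3 |Y_{j+1}|.  Observation 37 guarantees such a j exists whenever
    # there are at most 2^l witnesses, one witness accepts with probability
    # at least (l-1)/l, and l >= 7.
    Y = [sum(1 for p in accept_probs if p >= j / l) for j in range(l)]
    for j in range(1, l - 1):
        if Y[j] < 3 * Y[j + 1]:
            return j
    return None

# Hypothetical instance: one strong witness, many mid-range ones (2^10 in total).
l = 10
probs = [0.95] + [0.45] * 500 + [0.01] * 523
j = find_lightweight_j(probs, l)
assert j is not None and 1 <= j <= l - 2
```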
The following lemma proves that component 1 works for lightweight-gap tmapp_yes instances:

Lemma 38. Suppose I = ⟨V, x, p_1, p_2, l, t⟩ is a lightweight-gap tmapp_yes instance (see Definition 34). Let I′ = ⟨V_R, x, p_1, p_2, l, t + t′⟩, where V_R is the R-restriction of V, and each element in R is taken with probability p = 1/(|Y_gap| + |Y_yes|) (see Eq. (3) for Y_gap and Y_yes). With probability Ω(1) (over the choice of R), I′ is a umapp_yes instance.

Proof. As was shown for component 1, exactly one witness will be picked from the set Y_yes ∪ Y_gap with probability Ω(1). The probability that this witness is from the set Y_yes is proportional to the set's size, and by the lightweight-gap property, |Y_yes|/(|Y_yes| + |Y_gap|) > 1/4. Therefore, Pr(I′ ∈ umapp_yes) = Ω(1).

Component 2 works without any change in the probabilistic setting: a constant approximation of the size |Y_yes| is sufficient. To adapt component 3 to the current setting, we need a simple variant of Lemma 29:

Lemma 39. Let S ⊆ {0,1}^l be of size b, let j ∈ N be such that 2^j ≤ b < 2^{j+1}, let S_1 ⊂ S be of size a, and let S_2 = S \ S_1. Let h be a random function sampled from a pairwise independent hash function family H_{l,j+2}. Then,

Pr_h(|h^{−1}(0) ∩ S_1| = 1 and |h^{−1}(0) ∩ S_2| = 0) ≥ a/(8b).

The proof is given in Appendix A. We apply Lemma 39 to our construction by setting S_1 = Y_yes and S_2 = Y_gap. We now have all the tools needed to prove Theorem 2.

Proof of Theorem 2: As mentioned in the beginning of this section, we show that tmapp ≤_r umapp. The randomized reduction is given in Algorithm 2.
2 Sample k uniformly at random from {1, . . ., l − 2} (k represents the guess of the lightweight range)
3 Sample j uniformly at random from {0, . . ., l}
4 Sample h uniformly at random from a family of efficient pairwise independent hash functions H_{l,j+2}, and let R_j = h^{−1}(0)
Algorithm 2: Randomized reduction from tmapp to umapp.

It is obvious that the algorithm runs in (randomized) polynomial time. For soundness, we have that for all k and j, I ∈ tmapp_no ⇒ I_{k,j} ∈ tmapp_no, by using Observation 36 and Observation 33. Finally, let us analyze the completeness of the reduction. By Eq. (4), we have I ∈ tmapp_yes. According to Lemma 35, for some k, I_k is a "lightweight-gap" tmapp_yes instance, and this k is chosen with probability 1/(l − 2). Define Y^k_yes, Y^k_gap for I_k in a manner similar to Equation (3). According to Lemma 39, for such k, with S_1 = Y^k_yes, S_2 = Y^k_gap, and S = S_1 ∪ S_2, and with the corresponding j satisfying 2^j ≤ |S| < 2^{j+1} (chosen with probability 1/(l + 1)), we have:

Pr_h(|h^{−1}(0) ∩ S_1| = 1 and |h^{−1}(0) ∩ S_2| = 0) ≥ |S_1|/(8|S|) ≥ 1/32, (5)

where in the last inequality we used the fact that I_k is a lightweight-gap yes instance, and therefore |S| = |Y^k_yes| + |Y^k_gap| < 4|Y^k_yes| = 4|S_1|. If the event in Eq. (5) holds, we have exactly one witness that is accepted with probability at least (k+1)/l and no witnesses that are accepted in the gap region [k/l, (k+1)/l), which makes the output a umapp_yes instance. Overall, if Î ∈ tmapp_yes, it is mapped to a umapp_yes instance with probability at least 1/(32l²).
5 Valiant-Vazirani Extended to the Class QCMA

The proof of Theorem 3 is almost identical to that for the MA case (Theorem 2), but with some minor adaptations discussed below. By exactly the same argument as in Corollary 20, Theorem 3 implies a corresponding corollary for QCMA. We define the QCMA analogues of tmapp and umapp, denoted tqcmapp and uqcmapp, by replacing the randomized verifier V with a quantum verification circuit U. tqcmapp is (trivially) QCMA-Complete, and similarly, uqcmapp is UQCMA-Complete.
Equipped with the definitions above and all of the steps done in the proof of Theorem 2, we are ready to show that tqcmapp ≤_r uqcmapp. The reduction is shown in Algorithm 3.
2 Sample k uniformly at random from {1, . . ., l − 2} (k represents the guess of the lightweight range)
3 Sample j uniformly at random from {0, . . ., l}
4 Sample h uniformly at random from a family of efficient pairwise independent hash functions H_{l,j+2}, and let R_j = h^{−1}(0)
Algorithm 3: Randomized reduction from tqcmapp to uqcmapp.

In the output of the algorithm, we define U_{R_j} in a manner analogous to that used for MA. Specifically, the circuit begins by testing whether y ∈ R_j. If it is not, it rejects; otherwise, it applies U. Of course, this can be done efficiently with a quantum circuit, since we use efficient hash functions.
Soundness and completeness follow from the same arguments used in the MA case. This completes the proof of Theorem 3.
6 Robustness of UQMA

6.1 Discussion about QMA and the Marriott-Watrous Formalism

In this section, we discuss the robustness of our definition of unique QMA and prove Theorem 5.
From Definition 16, we see that for a given QMA verification scheme and a state |ψ⟩, its probability of acceptance is:

Pr(verifier accepts |ψ⟩) = ‖Π_acc U(|ψ⟩ ⊗ |0^m⟩)‖².

A useful operator in this context, as defined in [MW05], is the following:

Q = (I ⊗ ⟨0^m|) U† Π_acc U (I ⊗ |0^m⟩).

Note that Pr(verifier accepts |ψ⟩) = ⟨ψ|Q|ψ⟩. (7)

As Q is Hermitian, there is a basis of orthonormal eigenvectors {|ψ_i⟩}_{i=1}^{2^l} with eigenvalues λ_1 ≥ λ_2 ≥ . . . ≥ λ_{2^l}, which are the eigenvalues of Q. Note that by knowing the eigenvectors and eigenvalues of Q, we can find the acceptance probability of every witness in a simple way: writing |ψ⟩ = Σ_i α_i |ψ_i⟩, we have

⟨ψ|Q|ψ⟩ = Σ_i |α_i|² λ_i, where Σ_i |α_i|² = 1. (8)

Let us consider another possible definition of the class UQMA.

Definition 43 (Alternative definition for UQMA). A promise problem L = (L_yes, L_no) ∈ UQMA if there exists a polynomial quantum circuit U_x that can be computed in poly(|x|) time, having l(x) qubits as input and requiring m(x) ancilla qubits initialized to |0^m⟩, s.t. for "yes" instances λ_1(Q) ≥ 2/3 and λ_2(Q) ≤ 1/3, and for "no" instances λ_1(Q) ≤ 1/3.

Lemma 44. Definition 43 is equivalent to Definition 19.
Proof. We start by proving that, given an instance in L_yes according to Definition 19, it is also in L_yes according to Definition 43. We know from Definition 19 that there is a state |ψ⟩ that is accepted with probability at least 2/3. According to Eq. (7), the acceptance probability of |ψ⟩ is ⟨ψ|Q|ψ⟩ = p ≥ 2/3. From Eq. (8), in turn, we see that p can be written as a convex combination of the λ's. Therefore, λ_1 ≥ 2/3.
We now prove that λ_2 ≤ 1/3. Denote by V the subspace spanned by the eigenvectors with eigenvalue greater than 1/3. Note that ⟨ϕ|Q|ϕ⟩ > 1/3 for every |ϕ⟩ ∈ V. If dim(V) ≥ 2, there must exist a |ϕ⟩ ∈ V orthogonal to |ψ⟩, and therefore the acceptance probability of |ϕ⟩ is greater than 1/3, in contradiction to the properties of an L_yes instance according to Definition 19.
The other direction is straightforward.
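The Marriott-Watrous operator Q and the equivalence just proved can be illustrated numerically. The toy sketch below builds Q = (I ⊗ ⟨0^m|) U† Π_acc U (I ⊗ |0^m⟩) for a Haar-random "verifier" U and checks that Q is Hermitian and that the acceptance probability of an eigenvector equals its eigenvalue; the qubit counts and the choice of output qubit are arbitrary assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
l, m = 2, 2                        # toy sizes: 2 proof qubits, 2 ancillas
dim = 2 ** (l + m)

# Haar-random "verification" unitary via QR of a complex Gaussian matrix.
Z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Qf, Rf = np.linalg.qr(Z)
U = Qf @ np.diag(Rf.diagonal() / np.abs(Rf.diagonal()))

# Accept iff the designated output qubit (first in our tensor order) is |1>.
Pi_acc = np.kron(np.diag([0.0, 1.0]), np.eye(dim // 2))

# Isometry |psi> -> |psi>|0^m>, and Q = A^dag U^dag Pi_acc U A.
A = np.kron(np.eye(2 ** l), np.eye(2 ** m)[:, [0]])
Q = A.conj().T @ U.conj().T @ Pi_acc @ U @ A

assert np.allclose(Q, Q.conj().T)              # Q is Hermitian
lam, vecs = np.linalg.eigh(Q)                  # eigenvalues lie in [0, 1]
psi = vecs[:, -1]                              # top eigenvector
state = U @ (A @ psi)
p_acc = np.real(state.conj() @ Pi_acc @ state)
assert abs(p_acc - lam[-1]) < 1e-10            # acceptance prob = lambda_1
```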
We now turn to the proof of Theorem 5. Let us start with the precise definition of the problem unique 1-d hamiltonian:

Definition 45. unique 1-d hamiltonian: We are given a 2-local Hamiltonian H = Σ_{j=1}^r H_j, with r = poly(n), on n d-dimensional sites arranged in a line. Each H_j has a bounded operator norm ‖H_j‖ ≤ poly(n). We are also given two numbers, a and b, with b − a ≥ 1/poly(n). In "yes" instances, the smallest eigenvalue of H is at most a, and all the other eigenvalues are above b. In "no" instances, the smallest eigenvalue is larger than b. We need to decide whether a given instance is a "yes" or a "no" instance.

Proof of Theorem 5:
From the following verification procedure, it is evident that the problem is in UQMA. As a proof, we expect the unique ground state of H. Given a witness |ψ⟩, we use the phase estimation algorithm (see, e.g., Ref. [WZ06]) to determine its energy, i.e., ⟨ψ|H|ψ⟩, within inverse polynomial accuracy δ. The algorithm has the property that if |ψ⟩ is an eigenstate of H, then the output will be the corresponding eigenvalue (up to accuracy δ) with exponentially high probability.
If the output of the phase estimation is smaller than a + δ, we accept; otherwise we reject.It is clear that in "yes" instances, there is exactly one witness that is accepted with probability exponentially close to one (the ground state of H), while any state orthogonal to it is accepted only with an exponentially small probability (which is the probability that the phase estimation does not give the correct answer).
The hardness of the problem for UQMA is a simple application of the construction of [AGIK07], which presents a reduction from any problem in QMA to 1-d hamiltonian with d = 13^16. The details of the construction are not important here. We only note that the low-lying eigenvectors of the Hamiltonian considered are well approximated, within an inverse polynomial, by a class of states parametrized by all possible proofs, called history states, with the property that two orthogonal proofs give rise to two orthogonal history states. Moreover, the probability of acceptance of a given proof is imprinted in the energy of the associated history state, which again holds up to inverse polynomial accuracy. It is then clear that a problem in UQMA gives rise to a valid instance of unique 1-d hamiltonian, since in "yes" instances of the problem (which is the only case we must analyze), the second eigenvalue of the Hamiltonian, which is well approximated by the energy of the history state associated with the witness that has the second highest probability of acceptance, is separated from the ground state energy by an inverse polynomial.

Yet Another New Class and Its Equivalence To UQMA
One might define a class that is similar to QMA, with the added promise of the gap in its acceptance probability.
The above definition is motivated by the local hamiltonian problem, with the additional promise that the spectral gap of the Hamiltonian is inverse polynomial.Its one-dimensional version is defined as follows.

Definition 47. 1-d poly-gap local hamiltonian:
We are given a 2-local Hamiltonian H = Σ_{j=1}^r H_j, with r = poly(n), on n d-dimensional sites arranged in a line. Each H_j has a bounded operator norm ‖H_j‖ ≤ poly(n). We are also given three numbers, a, b, and ∆, with b − a, ∆ ≥ 1/poly(n). We have the promise that the spectral gap of H is at least ∆. In "yes" instances, the smallest eigenvalue of H is at most a. In "no" instances, the smallest eigenvalue is at least b. We need to decide whether a given instance is a "yes" or a "no" instance.
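For small n, an instance of this problem can be examined by brute force. The sketch below assembles a 2-local chain Hamiltonian and computes its ground state energy and spectral gap by exact diagonalization; the random terms are only illustrative stand-ins, since real instances come from the circuit-to-Hamiltonian construction.

```python
import numpy as np

def chain_hamiltonian(terms, n, d):
    # Sum of 2-local terms on a line of n d-dimensional sites;
    # terms[j] is a (d^2 x d^2) Hermitian matrix acting on sites j, j+1.
    H = np.zeros((d ** n, d ** n), dtype=complex)
    for j, h in enumerate(terms):
        H += np.kron(np.eye(d ** j), np.kron(h, np.eye(d ** (n - j - 2))))
    return H

rng = np.random.default_rng(1)
n, d = 5, 2
terms = []
for _ in range(n - 1):
    A = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
    terms.append((A + A.conj().T) / 2)     # random Hermitian 2-local term
H = chain_hamiltonian(terms, n, d)
ev = np.linalg.eigvalsh(H)
ground, gap = ev[0], ev[1] - ev[0]
# A "yes" instance would promise ground <= a and gap >= Delta >= 1/poly(n).
assert np.allclose(H, H.conj().T) and gap >= 0
```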
As in the unique case, we can show:

Theorem 48. 1-d poly-gap local hamiltonian is PGQMA-Complete.
The proof is completely analogous to the reasoning we provided for Theorem 5.
Proof. We first show that UQMA ⊆ PGQMA. This inclusion is not immediate for the following reason: if x ∈ L_no for L ∈ UQMA, then we know that λ_1(Q) ≤ 1/3, but it is not guaranteed that λ_1(Q) − λ_2(Q) is at least inverse polynomial. To resolve this issue, we use the amplification property of QMA and change the "no"-probability to 1/3 − δ instead of 1/3, so that we have λ_1(Q) ≤ 1/3 − δ. Then, by a simple construction that we explain below, we add a single state that is accepted with probability exactly 1/3, giving λ_1(Q) = 1/3 and λ_2(Q) ≤ 1/3 − δ, which provides the necessary gap.
Adding the 1/3-eigenvalue is done by changing the circuit: we append another qubit to the input qubits and measure it at the beginning of the circuit. If the outcome is 0, we proceed as before; if it is 1, we measure all other l input qubits in the computational basis. If all of them are 1, we accept with probability 1/3, and reject otherwise. A simple calculation shows that the action of such a procedure is exactly what we want: it adds a single 1/3-eigenvalue (for the state |1⟩ ⊗ |1^l⟩), and 2^l − 1 zero eigenvalues (for states of the form |1⟩ ⊗ |z⟩ with z ≠ 1^l), which do not concern us. The other eigenvalues remain the same: if the state |ψ⟩ had an eigenvalue λ in the original circuit, then the state |0⟩ ⊗ |ψ⟩ has the same eigenvalue in the modified circuit.
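The spectral effect of this modification is easy to verify numerically. In the sketch below, Q plays the role of the Marriott-Watrous operator of the amplified circuit (here just a random PSD matrix rescaled so that λ_1 = 1/3 − δ), and the modified operator is the block-diagonal Q′ = |0⟩⟨0| ⊗ Q + (1/3)|1⟩⟨1| ⊗ |1^l⟩⟨1^l|.

```python
import numpy as np

rng = np.random.default_rng(2)
l, delta = 3, 0.1
dim = 2 ** l

# Toy stand-in for the amplified circuit's operator: PSD with lambda_1 = 1/3 - delta.
B = rng.normal(size=(dim, dim))
B = B @ B.T
Q = (1 / 3 - delta) * B / np.linalg.eigvalsh(B)[-1]

# Modified circuit: Q' = |0><0| (x) Q + (1/3) |1><1| (x) |1^l><1^l|.
e = np.zeros(dim); e[-1] = 1.0                      # |1^l>
Z = np.zeros((dim, dim))
Q_new = np.block([[Q, Z], [Z, (1 / 3) * np.outer(e, e)]])

lam = np.linalg.eigvalsh(Q_new)
assert abs(lam[-1] - 1 / 3) < 1e-12                 # a single 1/3-eigenvalue on top
assert lam[-2] <= 1 / 3 - delta + 1e-9              # the rest stay <= 1/3 - delta
```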
We are now ready to prove Theorem 8.

Proof of Theorem 8:
We use the same proof as in the second part of the proof in Lemma 49, to get the reduction 1-d poly-gap local hamiltonian ≤ r unique 1-d hamiltonian.We already know that the unique 1-d hamiltonian problem is QCMA-hard under randomized reductions by Theorem 3. Using the transitivity of randomized reductions (see Observation 12.1), we can combine these two reductions and conclude that 1-d poly-gap local hamiltonian is QCMA-hard, as required.
7 The QMA Case

Random Projections Fail to Create Inverse Polynomial Gap
As mentioned earlier, we divided the proof of the Valiant-Vazirani Theorem into three components.Component 1 solves the problem in the simple case where the number of the accepting witnesses is known; Component 2 improves it by observing that the size of the set can only be approximated, without considerably affecting the probability of acceptance; Finally, Component 3 shows that we may achieve the same results by using a pairwise independent hash function instead of a random function, thus rendering the reduction efficient.
In this section, we show that even in the case where the number of solutions is known, as in component 1, we cannot (at least by the most direct approach) create a transformation that maps the problem to a "unique instance". The main difficulty in the QMA case is that we do not know in which basis to operate. Notice that if there exists a description (which Merlin can supply) of how to efficiently transform a standard basis state into one of the states that is accepted with probability greater than 2/3, then the problem is in QCMA. Let us define a possible quantum analogue of an R-restriction. A natural generalization (rather than restricting to witnesses that belong to some set R) is to project onto some subspace R; we call this procedure a quantum R-restriction. As in the discussion of component 1, we will not consider the efficiency of implementing the restriction. A diagram of a general circuit and its R-restriction is given in Figure 5.
While the relevant operator for the original verification is Q, the relevant operator for the R-restricted verification is Π_R Q Π_R.

Figure 5: A quantum R-restriction.On the left: a general description of a QMA verification scheme.On the right: its R-restriction, where ΠR is the projection on the subspace R. The state is accepted only if in both measurements the outcome was 1.
The quantum analogue of component 1 consists of taking the subspace R to be a random subspace of dimension d, chosen according to the Haar measure, for some convenient d. The next proposition shows that this approach, unfortunately, fails.
where |ψ⊥⟩ is the (up to a phase) unique vector in V orthogonal to |ψ⟩. We consider the following quantity, which gives the expectation value of the gap created by applying the random R-projection defined by U P_d U†:

∫ m_V(U, d) dU,

where the integral is taken over the Haar measure of the unitary group U(2^l). Let {|0⟩, |1⟩} be a basis for V. Note that m_V(U, d) is given by the difference of the maximum eigenvalue λ_max and the minimum eigenvalue λ_min of the following 2 × 2 matrix:

M = [ ⟨0|U P_d U†|0⟩  ⟨0|U P_d U†|1⟩ ; ⟨1|U P_d U†|0⟩  ⟨1|U P_d U†|1⟩ ].

By the Gershgorin disc theorem ([Bha97, p. 244]), we find

m_V(U, d) = λ_max − λ_min ≤ |⟨0|U P_d U†|0⟩ − ⟨1|U P_d U†|1⟩| + 2|⟨0|U P_d U†|1⟩|.

Applying Lemma 50 to each of the two terms on the R.H.S. of the equation above bounds the expectation of m_V(U, d), for any 1 ≤ k ≤ 2^l. To complete the proof, note that by Markov's inequality, the gap exceeds ε^{−1} times its expectation with probability at most ε. The required Haar-measure averages on the R.H.S. can be evaluated using Schur's Lemma [FH04] (see also [Wat18]).
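The failure of random projections can also be observed in simulation. The sketch below samples Haar-random rank-d projectors (via QR of a complex Gaussian matrix), restricts them to the two-dimensional subspace V = span{|0⟩, |1⟩}, and confirms that the resulting gap m_V(U, d) concentrates at the 2^{−l/2} scale, far below any constant.

```python
import numpy as np

rng = np.random.default_rng(3)
l = 9
N, d = 2 ** l, 2 ** l // 4            # Hilbert space dimension and projector rank

def haar_projector(N, d, rng):
    # Columns of a Haar-random isometry span a uniformly random d-dim subspace.
    Z = rng.normal(size=(N, d)) + 1j * rng.normal(size=(N, d))
    B, _ = np.linalg.qr(Z)
    return B @ B.conj().T

gaps = []
for _ in range(50):
    P = haar_projector(N, d, rng)
    M = P[:2, :2]                     # restriction of U P_d U^dag to V = span{|0>, |1>}
    w = np.linalg.eigvalsh(M)
    gaps.append(w[-1] - w[0])
# The average gap sits at the 2^{-l/2} scale, consistent with Proposition 1.
assert np.mean(gaps) < 2 ** (-l / 2 + 2)
```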

Using a Many-Outcome Measurement
In the previous section, we tried to solve Example 1 by applying the most natural idea that comes to mind: perform a random two-outcome measurement, and check whether one state can "pass" the projection with a non-negligible advantage over the other state in the subspace. We found that such a procedure fails. In this section, we analyze the use of a many-outcome measurement. We begin by applying a measurement in a random basis (or, to put it differently, applying a random unitary drawn from the Haar measure and then measuring in the standard basis). This, of course, cannot be done efficiently, but we will deal with that later. Radhakrishnan et al. showed that the expected total variation distance between the outcome distributions M(|ψ_1⟩) and M(|ψ_2⟩) of two orthogonal states is Ω(1), where M is an orthogonal basis chosen uniformly from the Haar measure.
A stronger result was presented in [Sen06, Theorem 1], which implies the same kind of result, but instead of the expectation, it asserts that the same conditions hold with all but an exponentially small probability.
Furthermore, Ambainis and Emerson [AE07] have shown that a similar Ω(1) bound on the total variation distance holds where M is a POVM with respect to an ε-approximate (4, 4)-design.
For our purposes, one does not need to understand what an ε-approximate (4, 4)-design is; rather, only that an efficient construction exists that enables us to realize the POVM M for any constant ε. Notice that this is a fixed POVM, and for every two orthogonal states, the total variation distance between the outcome distributions is constant. For more details on how one can implement a 4-design, see Theorem 1 of [AE07]. Although the POVM is fixed, it achieves the same result as a random object (a many-outcome measurement in a random basis) in an efficient way, and therefore we view it as a "pseudorandom" object.
So how can we exploit this? Suppose we had a description of the distributions M(|ψ_1⟩) and M(|ψ_2⟩). Then we could select a unique witness by accepting only on the outcomes j for which M(|ψ_1⟩)(j) > M(|ψ_2⟩)(j). As such, we would get by Theorem 52 that |ψ_1⟩ is accepted with probability Ω(1) larger than |ψ_2⟩. Of course, this approach does not solve the problem, as the promise of having a description of the distributions is too strong.
Indeed, although there is a classical description that would enable us to distinguish, with high probability, between the two cases, there is no known general way to achieve this in BQP. We note that there is a resemblance between this problem and the SZK-Complete problem given in Ref. [Vad99]: in both problems, one is required to distinguish between two probability distributions with some total variation distance.
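The Ω(1) total variation distance between the outcome distributions of two orthogonal states measured in a random basis is simple to observe numerically; the sketch below uses Haar-random bases rather than a (4, 4)-design.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 2 ** 8

def random_unitary(N, rng):
    # Haar-random unitary via QR of a complex Gaussian matrix.
    Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    Qf, Rf = np.linalg.qr(Z)
    return Qf @ np.diag(Rf.diagonal() / np.abs(Rf.diagonal()))

psi1 = np.zeros(N, dtype=complex); psi1[0] = 1.0   # two orthogonal witnesses
psi2 = np.zeros(N, dtype=complex); psi2[1] = 1.0
tvds = []
for _ in range(30):
    U = random_unitary(N, rng)
    p1 = np.abs(U @ psi1) ** 2        # outcome distribution in the random basis
    p2 = np.abs(U @ psi2) ** 2
    tvds.append(0.5 * np.sum(np.abs(p1 - p2)))
# For orthogonal states, the expected total variation distance is a constant.
assert np.mean(tvds) > 0.3
```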

Proof of Lemma 39:
The proof is very similar to the proof of Lemma 29 above: let y_1, . . ., y_a be the elements of S_1, and y_{a+1}, . . ., y_b the elements of S_2. The next steps are exactly the same as the ones between Eq. (12) and Eq. (14), so we get the claimed bound, where in the last two inequalities we used 2^j ≤ b < 2^{j+1}.
2. If A is C-Hard (or even C-Hard under randomized reductions) and A ≤_r B, then B is C-Hard under randomized reductions.
3. If A ≤_r B and B ∈ RP, then A ∈ RP.
4. If A is C-Hard under randomized reductions and A ∈ D, then C ⊆ RP^D.
5. If A is C-Hard under randomized reductions, A ∈ D, and B is D-Hard (or even D-Hard under randomized reductions^9), then B is also C-Hard under randomized reductions.
6. If A is C-Hard under randomized reductions and A ∈ RP, then C ⊆ RP.

Definition 13 (Unique Nondeterministic Polynomial Time (UP^10) [Val76]). A promise problem L = (L_yes, L_no) ∈ UP if there exists a Turing Machine (TM) M that is polynomial in its first argument s.t. for every x ∈ {0,1}*:
1. x ∈ L_yes ⇒ ∃y s.t. M(x, y) accepts and ∀y′ ≠ y, M(x, y′) rejects.

Figure 2: A umapp_yes instance. There is exactly one witness that is accepted with probability greater than p_2, and all others are accepted with probability smaller than p_1.

Figure 3: A tmapp instance for which the first component fails to work: it has numerous witnesses with probability inside the "gap interval" and very few in the "yes interval". See the discussion in Section 4.1.

Figure 4: A yes-instance with its lightweight range.
where Π_R is a projection onto the subspace R.
Y_no = {y : |y| = l and Pr(V(x, y) accepts in t steps) ≤ p_1},
Y_gap = {y : |y| = l and Pr(V(x, y) accepts in t steps) ∈ (p_1, p_2)},
Y_yes = {y : |y| = l and Pr(V(x, y) accepts in t steps) ≥ p_2}. (3)

Proposition 1. For every ε > 0 and d ∈ N, with probability larger than 1 − ε, applying the quantum random R-restriction with dimension d to Example 1 creates an instance with a gap smaller than ε^{−1} 2^{−l/2+2}.

Proof. As the verification circuit already rejects any state in the orthogonal complement of the two-dimensional subspace V, it is clear that we only have to analyze the gap created on states in V. A rank-d random projector can be written as U P_d U†, where U is a unitary drawn from the Haar measure and P_d is a fixed projector of rank d,