Quantum query complexity of symmetric oracle problems

We study the query complexity of quantum learning problems in which the oracles form a group $G$ of unitary matrices. In the simplest case, one wishes to identify the oracle, and we find a description of the optimal success probability of a $t$-query quantum algorithm in terms of group characters. As an application, we show that $\Omega(n)$ queries are required to identify a random permutation in $S_n$. More generally, suppose $H$ is a fixed subgroup of the group $G$ of oracles, and given access to an oracle sampled uniformly from $G$, we want to learn which coset of $H$ the oracle belongs to. We call this problem coset identification and it generalizes a number of well-known quantum algorithms including the Bernstein-Vazirani problem, the van Dam problem and finite field polynomial interpolation. We provide character-theoretic formulas for the optimal success probability achieved by a $t$-query algorithm for this problem. One application involves the Heisenberg group and provides a family of problems depending on $n$ which require $n+1$ queries classically and only $1$ query quantumly.

In this paper, we analyze the query complexity of the general coset identification problem. We prove that nonadaptive algorithms are optimal for any coset identification problem. We provide tools to reduce the analysis of query complexity to purely character-theoretic questions (which are themselves often combinatorial). In particular, we derive a formula for the exact quantum query complexity of coset identification in terms of characters. In the case of symmetric oracle discrimination (which itself includes polynomial interpolation as a special case) we find matching lower and upper bounds on the bounded error query complexity.
Another motivation for our work is the study of nonabelian oracles. Much is known about quantum speedups when the oracle is a standard Boolean oracle. Less is known about whether oracle problems with nonabelian symmetries can offer notable speedups. To that end we study the following scenario: suppose a group G acts by permutations on a finite set Ω (we call Ω a G-set). A learner is given access to a machine which takes an element ω ∈ Ω and returns a · ω for some hidden group element a ∈ G. With as few queries as possible the learner should guess the hidden element a ∈ G. The classical query complexity for this problem is a long-known invariant of G-sets called the base size. For instance, if G is the full permutation group of Ω = {1, . . . , n} then n − 1 queries are required classically to determine the hidden permutation. This problem is a special case of symmetric oracle discrimination and we can express its bounded error quantum query complexity purely in terms of the character of the G-set Ω. For instance, we find that when G is the full permutation group of Ω = {1, . . . , n} then n − 2√n + Θ(n^{1/6}) queries are necessary (and sufficient) to determine the hidden element.
This result bears some similarity to other work on learning problems related to the symmetric group. Aaronson and Ambainis [AA14] prove that at most a polynomial speedup can be achieved in computing functions on n inputs which are invariant under the action of the full symmetric group S_n (using a standard evaluation oracle). Ben-David [BD16] proves that at most a polynomial speedup is possible for Boolean functions defined on the full symmetric group. More recently, Dafni, Filmus, Lifshitz, Lindzey and Vinyals [DFL+21] have studied the query complexity of Boolean functions defined on the symmetric group, again proving a polynomial relationship between the quantum and classical query complexities (as well as numerous other complexity measures). These results may be compared to the well-known fact that only polynomial speedups are possible in computing total Boolean functions [BBC+01], the idea being that learning problems on the full symmetric group correspond to total functions, while learning problems on a subgroup correspond to partial functions. None of the results mentioned above are directly comparable to ours, since they use a standard evaluation oracle, while we examine a more symmetric "in-place" oracle model.
The task of oracle identification can be further refined: fix a group G, a G-set Ω, and a function f : G → X which is constant on left cosets of some subgroup H, and distinct on distinct cosets. The (left) coset identification problem is to determine f(a) given access to a permutational black-box hiding a through the action on Ω. For instance, when G = S_n (the symmetric group), Ω = {1, . . . , n} its defining G-set and f the sign homomorphism, it requires n − 1 classical queries to determine f(a). As a counterpoint to the harsh lower bound above we provide a family of examples for this task parametrized by n in which the quantum query complexity is 1 while the classical complexity is n + 1. The groups we use are Heisenberg groups acting as small subgroups of the full permutation group. This example is a nonabelian analogue of the fact that good quantum speedups can be found in computing partial Boolean functions [BV97].
The paper is organized as follows. In section 2 we formalize coset identification in the context of quantum learning algorithms and review the notions of adaptive and nonadaptive learning. In section 3 we prove that parallel queries suffice to produce an optimal algorithm for this task. Section 4 applies this theorem to symmetric oracle discrimination and addresses numerous example problems. In section 5 we return to the general coset identification task and we prove the main theorem of this paper, Theorem 5.1, which is a formula for the success probability of an optimal t-query algorithm in terms of characters. We use this in section 6 to compute the exact and bounded error query complexity of some special examples (including the Heisenberg group example). We conclude in section 7 by explaining how our work reproduces several previously known results involving abelian oracles.
Our paper uses the language of representation theory of finite groups. A suitable reference is the first third of Serre's textbook [Ser96]. We review some important notations later in Section 5 (in particular, the idea of induced representation is critical for the statement of our results). Here we mention that a representation of a finite group G always refers to a finite dimensional and unitary representation of G over the complex numbers. In other words, a representation is a group homomorphism π : G → U (V ) (the unitary group of a finite dimensional vector space V ). We often think of V as a left module for the group algebra CG, and use the notation gv for π(g)v when the map π is clear from the context.

Quantum learning from oracles
A quantum or classical oracle problem is described by a set of hidden information Y , a function f : Y → X (the function to learn or compute), and a representation of Y as operations on inputs of some kind (which determines the oracles). Classically such a representation consists of a set of inputs Ω and an assignment taking each y ∈ Y to a permutation of Ω, i.e. a map π : Y → Sym(Ω).
A classical oracle problem is specified by a tuple (Y, Ω, π, f ). A classical computer has access to π(y) for some unknown y ∈ Y by spending one query to input ω ∈ Ω to learn π(y)·ω. The goal is to determine f (y) with a high degree of certainty with as few queries as possible. More concretely, we measure the efficacy of an algorithm by its average case success probability, namely the probability of correctly outputting f (y) supposing the hidden information y is sampled uniformly from Y . For the highly symmetric problems considered in this paper, this is the same as the worst-case success probability, as explained below.
The quantum representation of oracles is described by a Hilbert space V and an assignment taking each y ∈ Y to a unitary operator of V , in other words a map π : Y → U (V ). Thus a quantum oracle problem is specified by a tuple (Y, V, π, f ). The quantum computer spends one query to input a state |ψ⟩ ∈ V to π(y) to acquire the state π(y)|ψ⟩; the goal is to produce a state and measurement scheme which outputs the value f (y).
We note that there are other oracle models used to encode permutations. One possibility is to require an oracle to act on a bipartite system, with one subsystem specified to be the "input register" and the other a "response register". While we do not specifically consider this model here, we note that many oracle problems, such as polynomial interpolation and group summation, that are normally formulated in this two-register setup have an easy reformulation in our setup; thus our results and analyses apply to these problems in their original two-register formulation (see Section 7). However, in some cases the two-register setup results in a set of oracles that do not form a group, for instance in the work of Ambainis on permutation inversion [Amb02]. In general, it is an interesting (and to our knowledge open) question whether these oracle models are the same, or if they lead to different query complexities.

A symmetric oracle problem is an oracle problem in which the hidden information is a group G (so we are replacing Y with G) and the map π is a homomorphism G → Sym(Ω) in the classical case or G → U (V ) in the quantum case. If π : G → U (V ) is a homomorphism, then it is common practice to regard V as a (left) CG-module, where CG is the group algebra of G (spanned by an orthonormal basis sometimes written without kets as {g | g ∈ G}). In module notation we sometimes write g · v := π(g)(v) (for g ∈ G, v ∈ V ) if the representation π is understood from context. The quantum oracle arising from a symmetric classical problem is also symmetric.
Of special interest to us is the case when the function f to be learned is compatible with the group structure G. An instance of the coset identification problem is a symmetric oracle problem (G, V, π, f ) where the function f : G → X is constant on left cosets of a subgroup H ≤ G and distinct on distinct cosets. We also assume f is onto. The typical example is when X = {gH | g ∈ G} is the set of left cosets of H and f (g) = gH. An equivalent formulation is to say that X is a transitive G-set and the map f : G → X is a map of (left) G-sets (i.e., f (gh) = gf (h) for all g, h ∈ G). Then the subgroup H can be recovered as the preimage of f (e).
For our analysis of the coset identification problem, we focus on average case success probability. The symmetry of the problem implies that worst case and average case success probabilities are equal, as the following argument shows: provided an unknown oracle π(a) we can select g ∈ G uniformly at random and preprocess our input by applying π(g). Then an optimal average-case algorithm will return the coset containing ga with optimal average-case success probability. The coset which contains a can then be retrieved by multiplying by g^{-1}. Hence it suffices to consider the average case success probability of any algorithm for this task (with the unknown oracle π(a) sampled uniformly from G).
We examine bounded error and exact measures of query complexity. The exact (or zero error) query complexity of a learning problem is the minimum number of queries needed by an algorithm to compute f (y) with zero probability of error. The bounded error query complexity is the minimum number of queries needed by an algorithm to compute f (y) with probability at least 2/3. The bounded error query complexity is usually studied for a family of problems growing with a parameter n; changing the constant 2/3 above to any number strictly greater than 1/2 changes the query complexity by at most a constant factor, which is typically ignored in asymptotic analysis.
Broadly speaking, there are two qualitatively different approaches to solving an oracle problem. The first approach is to ask questions one at a time, carefully changing your questions as you receive more information. This is called using adaptive queries. The other approach is to prepare all your questions and ask them at once in one go (imagining the learner has access to multiple copies of the teacher). This is known as using non-adaptive, or parallel, queries. Classically the adaptive model is at least as strong as the nonadaptive model, since you can convert any nonadaptive algorithm into an adaptive one (by picking your questions in advance but asking them one at a time). This is well-known to be true also in the quantum setting. In the next section we will prove the converse for coset identification:

Theorem 2.1. Suppose (G, V, π, f ) describes an instance of coset identification. Then there exists a t-query quantum algorithm to determine f (a) with probability P if and only if there exists a t-query nonadaptive query algorithm which does the same.
This theorem is certainly not true for arbitrary learning problems: Grover's algorithm provides an example in which any optimal algorithm must use adaptive queries [Zal99]. To prove the theorem we must precisely state what adaptive and nonadaptive algorithms are.

Adaptive vs. nonadaptive: definitions
Recall that a quantum learning problem is described by a tuple (Y, V, π, f : Y → X) where Y indexes the set of hidden information, V is a finite dimensional Hilbert space, π : Y → U (V ) a representation of the unknown information by unitary operators, and f is the function to learn.
The standard model for an adaptive algorithm is as follows (see e.g. [BBC+01, Section 3.2]). A t-query adaptive quantum algorithm for the quantum oracle problem (Y, V, π, f ) consists of:
• N, the dimension of the auxiliary workspace used in the computation.
• unitary operators U_0, U_1, . . . , U_t acting on V ⊗ C^N.
• an input state |ψ⟩, a unit vector of V ⊗ C^N.
• a POVM {E_x} indexed by X.
The algorithm uses t queries to the oracle π(a) (with a sampled uniformly from Y ) to produce the output state
|ψ_a^A⟩ = U_t (π(a) ⊗ I) U_{t-1} (π(a) ⊗ I) · · · U_1 (π(a) ⊗ I) U_0 |ψ⟩,
upon which the algorithm executes the measurement described by {E_x}_{x∈X}. Here and elsewhere I denotes the identity operator (in this case acting on the space C^N).
In quantum circuit notation the preparation of the state |ψ_a^A⟩ is drawn as a circuit which alternates the fixed unitaries U_0, . . . , U_t with t uses of the oracle π(a) ⊗ I. By contrast, an algorithm is nonadaptive if at no point during the algorithm does the input for a query depend on the results of any previous queries. Essentially this means that all the inputs are completely determined before the algorithm begins. Classically, t nonadaptive queries are identical to t simultaneous queries to t copies of an oracle. This motivates the following definition (cf. [Mon10, Section 2]). A t-query nonadaptive quantum algorithm for the oracle problem (Y, V, π, f ) consists of:
• N is the dimension of the auxiliary register.
• |ψ⟩ is the input state, a unit vector of V^{⊗t} ⊗ C^N.
• {E_x} is a POVM indexed by X.
The algorithm operates on the input state to produce |ψ_a^A⟩ = (π(a)^{⊗t} ⊗ I)|ψ⟩, which is then measured using the POVM {E_x}. The next fact is very useful and follows immediately from the definitions.

Lemma 2.2.
A t-query nonadaptive algorithm for the problem (Y, V, π, f ) is the same as a single-query nonadaptive algorithm for the oracle problem (Y, V^{⊗t}, π^{⊗t}, f ).
In quantum circuit notation the nonadaptive preparation of the state |ψ_a^A⟩ is drawn with the t oracle uses applied in parallel.
In either model, the algorithm A uses t copies of the unitary π(a) to produce a state |ψ_a^A⟩. Using the POVM {E_x} results in a measurement value x ∈ X with probability
P(x | a) = ⟨ψ_a^A| E_x |ψ_a^A⟩.
Since we assume the oracle is sampled uniformly from Y , the probability that A executes successfully is
P = (1/|Y |) Σ_{y∈Y} ⟨ψ_y^A| E_{f(y)} |ψ_y^A⟩.
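The displayed success probability is easy to evaluate numerically for toy instances. The following sketch (the helper names are ours, not from the paper) computes P for a single-query algorithm identifying a hidden element of Y = {0, 1} acting on C^2 by the identity and the bit flip X; a single query to |0⟩ distinguishes the two oracles perfectly.

```python
import numpy as np

def success_probability(oracles, f, psi, povm):
    # P = (1/|Y|) * sum_y <psi_y| E_{f(y)} |psi_y>, with |psi_y> = oracles[y]|psi>
    total = 0.0
    for y, U in oracles.items():
        psi_y = U @ psi
        total += np.real(np.vdot(psi_y, povm[f(y)] @ psi_y))
    return total / len(oracles)

# Toy instance: Y = {0, 1}, oracles I and X on C^2, f = identity.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
oracles = {0: I2, 1: X}
psi = np.array([1.0, 0.0])                               # query the state |0>
povm = {0: np.diag([1.0, 0.0]), 1: np.diag([0.0, 1.0])}  # computational-basis measurement

print(success_probability(oracles, lambda y: y, psi, povm))  # prints 1.0
```

The same helper accepts any family of unitaries, any POVM, and any function f, so it can be reused for the symmetric oracle problems discussed below.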

Symmetric oracle problems
Suppose we have a symmetric oracle problem (G, V, π, f ). As mentioned in the introduction, since we are focusing on query complexity and not on issues of implementation, the analysis of this problem depends only on the character χ_V of π : G → U (V ), as we prove in the lemma below. In fact, a little more is true. Let Irr(G) denote the set of irreducible characters of G. Given a representation π : G → U (V ), define the set
I(V ) = {χ ∈ Irr(G) | (χ, χ_V ) > 0}.
Here we are using (·, ·) to denote the usual inner product of characters. If χ ∈ Irr(G) and (χ, χ_V ) > 0 we say that χ appears in the representation V.
Lemma 2.3. The optimal success probability of a t-query algorithm to solve a symmetric oracle problem (G, V, π, f ) depends only on I(V ) and f .
Proof. First, note that if U : V → W is a Hilbert space isomorphism then we can define a new oracle problem (G, W, U πU −1 , f ) where the oracles now act on W . Any t-query algorithm to solve the original problem can be "conjugated" by U (e.g. the input state |ψ becomes U |ψ and the non-oracle unitaries and POVM are conjugated by U ) to produce a t-query algorithm for the new problem which succeeds with the same probability. Conversely any algorithm to solve the new problem can be conjugated by U −1 to solve the old problem with the same probability. Therefore oracle problems with isomorphic unitary representations of G will have the same t-query optimal success probability. In other words, only the character χ V is relevant.
Second, we claim that the multiplicities of irreducible characters in V are not important; only whether they appear in V or not. Indeed, adding a d-dimensional workspace to a computer's original system V produces a new representation V ⊗ C d of G with character dχ V . Since we allow our algorithm to introduce any such workspace, we are in effect allowing it to increase the multiplicity of each character by a factor of d. Note that this process will never produce irreps which did not appear in V to begin with. Hence the optimal success probability depends only on which irreps appear in V , i.e. the set I(V ).
It makes sense that if an algorithm is granted access to more representations to work with, its success probability cannot decrease. To be more precise, fix t, and let P opt (G, V, π, f ) denote the optimal success probability of a t-query algorithm for the symmetric oracle problem (G, V, π, f ).
Lemma 2.4. Suppose π_V, π_W are representations of G on the spaces V and W, with I(W ) ⊆ I(V ). Then P_opt(G, W, π_W, f ) ≤ P_opt(G, V, π_V, f ).

Proof. The basic idea is that any t-query algorithm to solve (G, W, π_W, f ) can be extended to produce a t-query algorithm for (G, V, π_V, f ). Suppose an algorithm A for W uses an N-dimensional ancilla space, i.e. operates on W ⊗ C^N. Since I(W ) ⊆ I(V ), for M sufficiently large every irreducible of W ⊗ C^N appears in V ⊗ C^M with at least the same multiplicity, so W ⊗ C^N embeds in V ⊗ C^M as a subrepresentation. Running A on the image of this embedding yields an algorithm for (G, V, π_V, f ) with the same success probability.

Parallel queries suffice
Here we prove Theorem 2.1, namely that the optimal success probability for coset identification can be attained by a parallel (nonadaptive) algorithm. We prove this by showing that any t-query adaptive algorithm can be converted to a t-query nonadaptive algorithm without affecting the success probability. Another way to say this is that every t-query adaptive algorithm can be simulated by a t-query nonadaptive one. This technique is greatly inspired by the work of Zhandry [Zha15] who proves this result when G is abelian, and also bears resemblance to the lower bound technique of Childs, van Dam, Hung and Shparlinski [CvDHS16], where the special case of polynomial interpolation is addressed.
Let π : G → U (V ) be a unitary representation of G. Let CG denote the group algebra of G. Each h ∈ G acts on CG by left multiplication, an operator we denote L_h. We will use the controlled multiplication operator ([DBCW+02]) defined on V ⊗ CG by
CM (|v⟩ ⊗ |g⟩) = (π(g)^{-1}|v⟩) ⊗ |g⟩.
This defines a unitary operator and is a generalization of the standard CNOT gate (take G = Z_2 and V = CZ_2). As such we draw it using circuit diagrams as in section 3. There are two G-actions on V ⊗ CG we use, one given by π(h) ⊗ L_h and the other I ⊗ L_h. Our first observation is that CM intertwines these actions.

Lemma 3.1. The controlled multiplication operator satisfies
CM (π(h) ⊗ L_h) = (I ⊗ L_h) CM for all h ∈ G.
The proof follows by applying both sides to a vector |v, g⟩ and using the definition of CM. The representation obtained by letting each h ∈ G act by the identity on V is a direct sum of dim V many copies of the trivial representation, so we denote it 1^{⊕ dim V}. The lemma allows us to interpret CM as a CG-module isomorphism V ⊗ CG → 1^{⊕ dim V} ⊗ CG. The next property is crucial for our parallelization argument. Recall that if W is a CG-module then I(W ) denotes the set of irreducible characters of G which appear in W.
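The intertwining relation of Lemma 3.1 can be checked numerically for a small nonabelian example. The sketch below (helper names ours) takes G = S_3 with V = C^3 the defining permutation representation and verifies CM(π(h) ⊗ L_h) = (I ⊗ L_h)CM for every h ∈ G.

```python
import itertools
import numpy as np

G = list(itertools.permutations(range(3)))               # elements of S_3
idx = {g: i for i, g in enumerate(G)}
compose = lambda p, q: tuple(p[q[i]] for i in range(3))  # (p q)(i) = p(q(i))

def pi(p):                                               # defining rep: e_i -> e_{p(i)}
    M = np.zeros((3, 3))
    for i in range(3):
        M[p[i], i] = 1.0
    return M

def L(h):                                                # left multiplication on CG: |g> -> |hg>
    M = np.zeros((6, 6))
    for g in G:
        M[idx[compose(h, g)], idx[g]] = 1.0
    return M

# Controlled multiplication CM(|v>|g>) = pi(g)^{-1}|v> |g>, block diagonal over g.
E = np.eye(6)
CM = sum(np.kron(pi(g).T, np.outer(E[idx[g]], E[idx[g]])) for g in G)

ok = all(np.allclose(CM @ np.kron(pi(h), L(h)), np.kron(np.eye(3), L(h)) @ CM) for h in G)
print("Lemma 3.1 holds for every h in S_3:", ok)
```

Since permutation matrices are orthogonal, π(g)^{-1} is implemented here as the transpose.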
Lemma 3.2. Let W be a submodule of CG, and let Z = CM (V ⊗ W ), a submodule of 1^{⊕ dim V} ⊗ CG. Then CM restricts to a CG-module isomorphism V ⊗ W → Z. Next let Y be the submodule of CG which contains each irreducible of I(Z) with maximal multiplicity (so if χ appears in Y then χ appears with multiplicity χ(e)); then Z is contained in 1^{⊕ dim V} ⊗ Y. Now Z ≅ V ⊗ W as CG-modules, so in particular I(Z) = I(V ⊗ W ).
Suppose A is a t-query adaptive algorithm to evaluate the function f. First, by replacing π with π ⊗ I if necessary, we may assume that the algorithm does not use a workspace, that is N = 1. We will describe a new adaptive algorithm A′ which is a modification of A as follows. We introduce a new workspace which is a copy of CG, initialized in the uniform superposition |η⟩ = (1/√|G|) Σ_{g∈G} |g⟩. The new intermediate unitaries are those of A acting on the first register, except that each oracle call is now followed by an application of CM. When the oracle is hiding the unitary π(a) this produces the following state:
|ψ_a^{A′}⟩ = (1/√|G|) Σ_{g∈G} |ψ_{g^{-1}a}^A⟩ ⊗ |g⟩.
Next measurement is performed: first the second register is measured in the standard basis of CG. Then the original POVM is applied to the first register. The result of these two measurements will be a pair (g, x); the final output of the algorithm is gx.

Lemma 3.3. The algorithm A′ succeeds with the same success probability as A.
Lemma 3.4. The algorithm A can be simulated by a t-query parallel query algorithm.
Proof of Theorem 2.1 from Lemmas 3.3 and 3.4. By the two lemmas, given any t-query adaptive algorithm A which solves coset identification with probability P , there exists a t-query parallel query algorithm which succeeds with the same probability.
Proof of Lemma 3.3. Consider the pre-measurement state for A′ given that the hidden group element is a ∈ G. It can be written
|ψ_a^{A′}⟩ = (1/√|G|) Σ_{g∈G} |ψ_{g^{-1}a}^A⟩ ⊗ |g⟩.
If the first measurement reads g then the state collapses to |ψ_{g^{-1}a}^A⟩ ⊗ |g⟩. If the second measurement is now performed, the result will read f (g^{-1}a) with the same probability that the algorithm A would read this result given that the oracle was hiding g^{-1}a. The algorithm then classically converts the result to g f (g^{-1}a), which is equal to f (a) since f is a left G-set map. So the following conditional probabilities are equal:
P_{A′}(f (a) | a, g) = P_A(f (g^{-1}a) | g^{-1}a).
Since the probability that the first measurement of A′ reads g is 1/|G| for all g, and g is sampled independently of a, we compute the average case success probability by
(1/|G|) Σ_{a∈G} (1/|G|) Σ_{g∈G} P_{A′}(f (a) | a, g) = (1/|G|) Σ_{g∈G} (1/|G|) Σ_{a∈G} P_A(f (g^{-1}a) | g^{-1}a) = P_A.

Proof of Lemma 3.4. We rewrite the pre-measurement state of A′ expressed by Figure 3 using Lemma 3.1. Denote the state that results when the hidden element is a ∈ G by |ψ_a^{A′}⟩. We apply Lemma 3.1 diagrammatically from left to right.

Accepted in Quantum 2021-03-03, click title to verify. Published under CC-BY 4.0.
In the last step, in addition to applying Lemma 3.1 at the right of the diagram, we used the fact that L_{a^{-1}}|η⟩ = |η⟩. In formulas we have
|ψ_a^{A′}⟩ = (U_t ⊗ I)(I ⊗ L_a) U |ψ, η⟩, where U = CM (U_{t-1} ⊗ I) CM · · · CM (U_0 ⊗ I).
Therefore we have converted this algorithm to a single-query algorithm using the oracle I ⊗ L_a. We claim that U |ψ, η⟩ belongs to V ⊗ Y, where Y is the submodule of CG containing each irreducible of I(V^{⊗t}) with maximal multiplicity. This is readily proved by induction and Lemma 3.2. For instance, by Lemma 3.2 the image of V ⊗ C|η⟩ under CM (U_0 ⊗ I) is contained in V ⊗ Y_1, where Y_1 contains each irreducible of I(V ) with maximal multiplicity. Therefore the initial state U |ψ, η⟩ belongs to the subspace V ⊗ Y, which means that the algorithm A′ may be simulated by a single-query algorithm to the oracle I ⊗ L_a acting on the subspace V ⊗ Y. Note that the irreducibles appearing in this subspace (under the action I ⊗ L) are exactly those of I(V^{⊗t}). Hence Lemma 2.3 implies there exists a single-query algorithm using the representation V^{⊗t} which achieves the same success probability as A′. As noted in Lemma 2.2 this is the same as a t-query parallel algorithm using the representation V. This concludes the proof of Lemma 3.4.
Corollary 3.5. The optimal t-query success probability for an algorithm solving an instance of coset identification (G, V, π, f ) is equal to the optimal single-query success probability achievable solving the instance (G, V ⊗t , π ⊗t , f ).

Application to symmetric oracle discrimination
Symmetric oracle discrimination is the following task: given oracle access to a symmetric oracle hiding a group element a ∈ G, determine a exactly. This is the special case of coset identification in which H = {e}. Thus an instance of this problem is determined by a finite group G and a (finite-dimensional) unitary representation π : G → U (V ). The following theorem computes the success probability of a single-query algorithm and is proved by Bucicovschi, Copeland, Meyer and Pommersheim:

Theorem 4.1. ([BCMP16], Theorem 1) Suppose G is a finite group and π : G → U (V ) a unitary representation of G. Then an optimal single-query algorithm to solve symmetric oracle discrimination succeeds with probability
(1/|G|) Σ_{χ∈I(V )} χ(e)^2.

The result of the previous section tells us that parallel algorithms are optimal for symmetric oracle discrimination.
Theorem 4.2. Suppose G is a finite group and π : G → U (V ) a unitary representation of G. Then an optimal t-query algorithm to solve symmetric oracle discrimination succeeds with probability
(1/|G|) Σ_{χ∈I(V^{⊗t})} χ(e)^2.

Proof. Theorem 2.1 tells us that a t-query parallel algorithm achieves the optimal success probability. As noted, this is equivalent to a single-query algorithm using the representation π^{⊗t} : G → U (V^{⊗t}). Now apply Theorem 4.1.
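For a concrete instance of this formula, the sketch below (helper names ours) evaluates the t-query success probability for G = S_3 with V the defining representation on C^3, using the three irreducible characters of S_3 and detecting which appear in V^{⊗t} via character inner products.

```python
from itertools import permutations

G = list(permutations(range(3)))
e = (0, 1, 2)                                         # identity of S_3

def sign(p):                                          # parity via inversion count
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return -1 if inv % 2 else 1

chi_V = {g: sum(1 for i in range(3) if g[i] == i) for g in G}  # fixed points
irreps = {
    "trivial":  {g: 1 for g in G},
    "sign":     {g: sign(g) for g in G},
    "standard": {g: chi_V[g] - 1 for g in G},
}

def P_opt(t):
    # chi appears in the t-th tensor power iff the inner product (chi, chi_V^t) is positive
    appearing = [chi for chi in irreps.values()
                 if sum(chi[g] * chi_V[g] ** t for g in G) > 0]
    return sum(chi[e] ** 2 for chi in appearing) / len(G)

print(P_opt(1), P_opt(2))   # 5/6 after one query, certainty after two (= n - 1)
```

The sign character is missing from V itself, so a single query cannot succeed with certainty; it appears in V ⊗ V, matching γ(V) = n − 1 for n = 3.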
To express the exact and bounded error query complexity of symmetric oracle discrimination we're compelled to make the following definitions.
Let V denote a CG-module. The quantum base size, denoted γ(V ), is the minimum t for which every irrep of G appears in V^{⊗t}. If no such t exists then γ(V ) = ∞. The bounded error quantum base size, denoted γ_bdd(V ), is the minimum t for which
(1/|G|) Σ_{χ∈I(V^{⊗t})} χ(e)^2 ≥ 2/3.
If (G, V, π) is a case of symmetric oracle discrimination then by Theorem 4.2 the number of queries needed to produce a probability 1 algorithm is γ(V ). That is, the exact quantum query complexity of the problem is equal to the quantum base size of V. Similarly the bounded error query complexity is γ_bdd(V ).
It may happen that one of these quantities is infinite. However, when V is a faithful representation, a classical result attributed to Brauer and Burnside ([Isa76], Theorem 4.3) guarantees that every irrep of G appears in one of the tensor powers V^{⊗0}, V, V^{⊗2}, . . . , V^{⊗(m-1)}, where m is the number of distinct values of the character of V. If V additionally contains a copy of the trivial representation, then I(V^{⊗t}) ⊆ I(V^{⊗(t+1)}) for every t, so every irrep of G is contained in V^{⊗t} for all sufficiently large t. Hence in this case (with V faithful and containing a copy of the trivial irrep) both γ(V ) and γ_bdd(V ) are finite.
In particular, this occurs whenever we "quantize" a classical symmetric oracle discrimination problem. This is the learning problem specified by a finite set Ω and a homomorphism G → Sym(Ω). A query to an oracle hiding a ∈ G consists of inputting ω ∈ Ω and receiving a · ω. The learner must determine the hidden group element (or permutation) a. The quantized learning problem uses the homomorphism G → U (CΩ) sending elements of G to permutation matrices. (Such a representation is called a permutation representation.) The quantized learning problem is faithful if the original problem is faithful, and the CG-module CΩ always contains a copy of the trivial representation, namely the span of Σ_{ω∈Ω} |ω⟩. This is precisely the situation we would like to study because we can compare the classical and quantum query complexity. Classically the exact and bounded error query complexities are equal: if a classical algorithm does not use enough queries to identify the hidden permutation with certainty, then it must guess between at least 2 equally likely permutations which behave the same on all the queries that were used, resulting in a success rate of at most 1/2. Some examples:
• Suppose Ω = {1, . . . , n} hosts the defining permutation representation of G = S n . Then n − 1 queries are required to determine a hidden permutation σ.
• If we take the same action but restrict the group to A n ≤ S n then we need n − 2 queries to determine a hidden element σ ∈ A n .
• Consider the action of the dihedral group D n on the set of vertices of an n-gon. Then 2 queries are required to determine a hidden group element.
In general the classical query complexity is a well-known invariant of a permutation group G denoted b(G), called the minimal base size or just base size of G [LS02]. It may be defined to be the length of the smallest tuple (ω_1, . . . , ω_t) ∈ Ω^t with the property that (g · ω_1, . . . , g · ω_t) = (ω_1, . . . , ω_t) if and only if g = 1. From the definition it is clear that the base size agrees with the non-adaptive classical query complexity of the problem. In fact, it is also equal to the adaptive query complexity, since if a sequence of adaptive guesses (ω_1, . . . , ω_t) suffices to identify a particular hidden g ∈ G, then the same sequence of guesses works for every element of the group. This means any optimal algorithm may be implemented non-adaptively. Thus the classical query complexity of symmetric oracle discrimination of G ≤ Sym(Ω) is the base size of G, and the quantum exact (bounded error) query complexity is the (bounded error) quantum base size. We are naturally led to a broad group theoretic problem:

Question. What are the relationships between b(G), γ(CΩ) and γ_bdd(CΩ)?
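The base size itself is easy to compute by brute force for small permutation groups. The sketch below (function names ours) recovers b = n − 1 for S_4, n − 2 for A_4, and 2 for the dihedral group D_4 acting on the vertices of a square, matching the examples above.

```python
from itertools import permutations, product

def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def base_size(group, n):
    # smallest t such that some t-tuple of points has trivial pointwise stabilizer
    ident = tuple(range(n))
    for t in range(1, n + 1):
        for pts in product(range(n), repeat=t):
            if all(g == ident or any(g[w] != w for w in pts) for g in group):
                return t
    return None

S4 = list(permutations(range(4)))
A4 = [g for g in S4 if sign(g) == 1]
rotations = [tuple((i + k) % 4 for i in range(4)) for k in range(4)]
reflections = [tuple((k - i) % 4 for i in range(4)) for k in range(4)]
D4 = rotations + reflections

print(base_size(S4, 4), base_size(A4, 4), base_size(D4, 4))  # prints 3 2 2
```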
We are not aware of any direct comparison of these quantities in the group theory literature. Here we only compute the various quantities for some special cases. We saw earlier that b(S_n) = n − 1. We will prove:

Theorem 4.3. Let V = C{1, . . . , n} denote the defining permutation representation of S_n. Then:
1. γ(V ) = n − 1, i.e. n − 1 queries are necessary and sufficient for exact learning.
2. γ_bdd(V ) = n − 2√n + Θ(n^{1/6}), i.e. this many queries are necessary and sufficient to succeed with probability 2/3.
Proof. Recall that the irreducible characters of S_n are parametrized by partitions of n, which can be written either as a sequence [λ_1, . . . , λ_n] or as a Young diagram with n total boxes and λ_i boxes in the ith row. Let V = C{1, . . . , n} denote the CG-module corresponding to the defining permutation representation of S_n. Then V decomposes as a sum of two irreducibles:
V = V_{[n]} ⊕ V_{[n-1,1]}.
We note that V_{[n]} is the trivial representation. A well-known rule says that if V_λ is a simple representation corresponding to the Young diagram λ then the irreps appearing in V ⊗ V_λ are exactly the V_μ with μ ∈ λ^±, where λ^± is the set of Young diagrams obtained from λ by adding then removing a box from λ. In particular, this shows by induction that
I(V^{⊗t}) = {λ : λ_1 ≥ n − t}.
We see that n − 1 queries are required until every irreducible is contained in V^{⊗t} (in particular, the sign representation corresponding to the partition [1^n] = [1, 1, . . . , 1] is not included in V^{⊗t} unless t ≥ n − 1). This proves part (1) of the theorem.
To prove part (2) we must examine more closely the set I_t = I(V^{⊗t}) consisting of all partitions λ with λ_1 ≥ n − t. We are interested in the sum
Σ_{λ : λ_1 ≥ n−t} χ_λ(e)^2.
It is well known that if χ_λ is the irreducible character corresponding to the Young diagram λ then χ_λ(e) is equal to the number of standard tableaux of shape λ ([Sag01], Theorem 2.5.2). Hence χ_λ(e)^2 is equal to the number of pairs of standard tableaux of shape λ. Now by the Robinson–Schensted correspondence, the sum above is equal to the number of permutations of {1, . . . , n} whose longest increasing subsequence has length at least n − t (see e.g. [Sag01], Theorem 3.3.2). Next, a deep result of Baik, Deift and Johansson [BDJ99] identifies the limiting distribution of l_n, the length of the longest increasing subsequence of a random permutation of n elements, as the Tracy–Widom distribution (which also governs the largest eigenvalue of a random Hermitian matrix), centered at 2√n with fluctuations of order n^{1/6}. In particular, Theorem 1.1 of [BDJ99] asserts that if F (x) is the cumulative distribution function for the Tracy–Widom distribution, then
lim_{n→∞} P((l_n − 2√n)/n^{1/6} ≤ x) = F (x).
Let c be any real number. If we use t = n − 2√n + cn^{1/6} queries, then our success probability will be
P(l_n ≥ n − t) = P(l_n ≥ 2√n − cn^{1/6}) → 1 − F (−c) as n → ∞.
Thus for any ε ∈ (0, 1), if we wish to succeed with probability 1 − ε, it will be necessary and sufficient to use t = n − 2√n + cn^{1/6} queries, where c = −F^{-1}(ε) (for n sufficiently large).
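The RSK step can be sanity-checked by exhaustive computation for small n: the t-query success probability equals the fraction of permutations whose longest increasing subsequence has length at least n − t. The sketch below (helper names ours) does this for n = 6, confirming in passing that t = n − 1 is needed for exact learning, since only the decreasing permutation has a longest increasing subsequence of length 1.

```python
from itertools import permutations

def lis_length(seq):
    # O(n^2) dynamic program for the longest increasing subsequence
    best = []
    for i, x in enumerate(seq):
        best.append(1 + max([best[j] for j in range(i) if seq[j] < x], default=0))
    return max(best, default=0)

n = 6
perms = list(permutations(range(n)))

def p_success(t):
    return sum(lis_length(p) >= n - t for p in perms) / len(perms)

print(p_success(n - 2), p_success(n - 1))   # 1 - 1/720 for t = n - 2, then exactly 1
```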
Here is the analogous result for identifying an element of the alternating group.

Theorem 4.4. Let V denote the restriction to A_n of the defining permutation representation. Then:
1. γ(V ) = n − ⌈√n⌉, i.e. n − ⌈√n⌉ queries are necessary and sufficient for exact learning.
2. γ_bdd(V ) = n − 2√n + Θ(n^{1/6}).
Proof. Recall the following facts about the representation theory of A_n. The conjugate of a partition λ is the partition λ* obtained by swapping the rows and columns of λ; in other words λ* = (λ*_1, λ*_2, . . . ) where λ*_i is the number of boxes in the ith column of λ. For each partition λ of n that is not self-conjugate, i.e. λ ≠ λ*, the restriction of V_λ to A_n is an irreducible representation W_λ of A_n. Also, W_λ = W_{λ*}. For self-conjugate λ, the representation V_λ breaks up into two distinct irreducible representations W_λ^+ and W_λ^- of equal dimension. Recall from the previous proof that after t queries, we get copies of all the V_λ such that λ_1 ≥ n − t. Observe that for any partition λ, we must have either λ_1 ≥ ⌈√n⌉ or λ*_1 ≥ ⌈√n⌉. (If both fail, the partition fits into a square of side length ⌈√n⌉ − 1, which contains fewer than n boxes.) It follows that after t = n − ⌈√n⌉ queries, for any λ, we have picked up a copy of V_λ or V_{λ*}. Hence we have every irreducible representation of A_n. Therefore, n − ⌈√n⌉ queries suffice for exact learning. Showing that fewer queries cannot suffice is similar. Here we make the observation that there exists a partition λ such that λ_1 ≤ ⌈√n⌉ and λ*_1 ≤ ⌈√n⌉, since n boxes can be packed into a square of side length ⌈√n⌉. It follows that t = n − ⌈√n⌉ − 1 queries do not pick up V_λ or V_{λ*} for such λ. Thus, we do not get every irrep of A_n.
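The combinatorial observation at the heart of this proof — every partition λ of n has λ_1 ≥ ⌈√n⌉ or λ*_1 ≥ ⌈√n⌉, while some partition has both at most ⌈√n⌉ — can be verified exhaustively for small n (helper names ours):

```python
from math import isqrt

def partitions(n, maxpart=None):
    # partitions of n as weakly decreasing tuples
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def ceil_sqrt(n):
    r = isqrt(n)
    return r if r * r == n else r + 1

for n in range(1, 26):
    c = ceil_sqrt(n)
    ps = list(partitions(n))
    # lam[0] is lambda_1 (first row) and len(lam) is lambda*_1 (first column)
    assert all(max(lam[0], len(lam)) >= c for lam in ps)
    assert any(lam[0] <= c and len(lam) <= c for lam in ps)
print("verified for all n <= 25")
```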
We now examine the bounded error case. For a positive integer t, let p t be the success probability of the optimal t-query algorithm for identifying a permutation of S n and let q t be the corresponding probability for A n .
Let $V$ denote the $t$-fold tensor power of the defining representation of $S_n$. We can decompose $V$ as a direct sum of irreps of $S_n$, and if we know which $V_\lambda$ appear we can determine which irreps of $A_n$ appear in $V$. In particular, each time we have a non-self-conjugate $\lambda$ such that $V_\lambda$ appears in $V$, we will have $W_\lambda$ appearing in $V$. Let's consider the contribution of this appearance to the success probabilities $p_t$ and $q_t$; each irrep contributes the square of its dimension divided by the order of the group. Since the dimension of $V_\lambda$ equals the dimension of $W_\lambda$, while the order of $S_n$ is twice the order of $A_n$, the contribution to $q_t$ is twice the contribution to $p_t$. Now if $\lambda$ is self-conjugate then $V_\lambda$ decomposes into two irreps of $A_n$ of equal dimension. The sum of the squares of the dimensions of these two irreps is thus one-half the square of the dimension of $V_\lambda$. Once we've divided by the sizes of the groups, we see that the contribution to $q_t$ is equal to the contribution to $p_t$.
We have thus seen that for any $\lambda$ the contribution to $q_t$ is either 2 or 1 times the contribution to $p_t$. It follows that $p_t \le q_t \le 2p_t$. Thus for $q_t \ge 2/3$ we must have $p_t \ge 1/3$, which as we showed in Theorem 4.3 requires $n - 2\sqrt{n} + \Theta(n^{1/6})$ queries. On the other hand, if we are given $n - 2\sqrt{n} + \Theta(n^{1/6})$ queries, we achieve $p_t \ge 2/3$, which forces $q_t \ge 2/3$. The two theorems above show that there is very little speedup possible when trying to identify a permutation from the symmetric group or the alternating group. For the alternating group, one can at least get by with $\lceil\sqrt{n}\rceil$ fewer queries for exact quantum learning. Here there is an analogy to van Dam's problem of exactly learning the value of an $n$-long bitstring using queries to its bits [van98]. Exact learning requires $n$ queries. However, if we are guaranteed in advance that the parity of the string is even, then only $\lfloor n/2 \rfloor$ queries are required for exact learning. To see this using the techniques of the current paper, we argue as follows. Let $G$ be the subgroup of $\mathbb{Z}_2^n$ consisting of all strings of even parity. If we are allowed $t$ queries, then we can access those representations $\rho_x$ of $\mathbb{Z}_2^n$ corresponding to strings $x$ of Hamming weight at most $t$ (see also the remarks in Section 7.3). If $\bar{x}$ is the bitwise complement of $x$, then $\rho_x$ and $\rho_{\bar{x}}$ take the same values on $G$. Now, for any string $x$, one of $x$ and $\bar{x}$ will have Hamming weight at most $\lfloor n/2 \rfloor$. Hence every representation of $G$ can be accessed by $\lfloor n/2 \rfloor$ queries to the oracle, and we will succeed with probability 1.
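The complement trick can be sanity-checked exhaustively: for every bitstring $x$, one of $x$ and its complement has Hamming weight at most $\lfloor n/2 \rfloor$. A minimal sketch (variable names ours):

```python
from itertools import product

def worst_pair_weight(n):
    # max over all x in {0,1}^n of min(wt(x), wt(complement of x))
    worst = 0
    for x in product((0, 1), repeat=n):
        w = sum(x)
        worst = max(worst, min(w, n - w))
    return worst

checks = {n: worst_pair_weight(n) for n in range(1, 11)}
```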

Query complexity of coset identification
In this section we derive a formula for the optimal success probability of a t-query algorithm to solve coset identification. In light of our previous result on parallelizability (Corollary 3.5), this boils down to finding a formula in the single-query case. This will directly generalize the single-query results of [BCMP16] used in Section 4.
To state the result we fix some notation. Suppose $(G, V, \pi, f)$ is an instance of coset identification with $H$ the preimage of $f(e)$. Given an $H$-representation $W$ let $W\uparrow$ denote the induced representation of $W$ (which is a representation of $G$; see Section 5.1.1 below for more details). Likewise if $W$ is a $\mathbb{C}G$-module then we denote by $W\downarrow$ the $\mathbb{C}H$-module obtained by restriction to $H$. Recall that if $V$ is a $\mathbb{C}G$-module then $I(V)$ denotes the set of all irreducible characters of $G$ appearing in $V$. We sometimes use the notation $I_G(V)$, $I_H(V)$ to emphasize which group we are considering. Finally, given two representations $A$ and $B$ we let $A_B$ denote the sum of all the isotypical components of $A$ which correspond to an irreducible isotype appearing in $B$. We will be interested in the quantities
$$\frac{\dim A_B}{\dim A},$$
which can be understood as the fraction of $A$ which is shared with $B$.
Theorem 5.1. An optimal single-query algorithm to solve the instance $(G, V, \pi, f)$ of coset identification succeeds with probability
$$\max_{Y \in I_H(V\downarrow)} \frac{\dim\,(Y\uparrow)_V}{\dim Y\uparrow}.$$
In words: to find the optimal success probability, you look at an irrep $Y$ of $H$ which appears in $V\downarrow$. Then you examine the fraction of $Y\uparrow$ which is shared with $V$. Finally take the maximum over all irreps $Y$ appearing in $V\downarrow$.
From this theorem we can quickly deduce Theorem 4.2, the single-query result for symmetric oracle identification. This is the special case when $H$ is the trivial group. Then $H$ has only one irrep, namely the trivial representation $\mathbf{1}$, and $\mathbf{1}\uparrow$ is isomorphic to $\mathbb{C}G$. Hence the formula we get from Theorem 5.1 is
$$\frac{\dim\,(\mathbb{C}G)_V}{|G|} = \frac{1}{|G|}\sum_{\chi \in I(V)}\chi(e)^2,$$
which is the formula of Theorem 4.2.
The next two sections are devoted to the proof of Theorem 5.1. First we prove the lower bound (i.e. existence of a state and measurement achieving the desired success probability) and then we prove the upper bound (optimality of that success probability).

The lower bound
First we collect some facts concerning induced representations and averaging operators needed for the proof of Theorem 5.1. A fine treatment of the subject is contained in Serre's book [Ser96].

Induced Representations
Suppose $H$ is a subgroup of a finite group $G$ and let $Y$ denote a representation of $H$. Note that $\mathbb{C}G$ admits a right $H$-action. The representation of $G$ induced from $Y$ is
$$Y\uparrow^G_H \;=\; \mathbb{C}G \otimes_{\mathbb{C}H} Y.$$
When $H$ and $G$ are understood we simply write $Y\uparrow$. Similarly if $W$ is a representation of $G$ then it is also a representation of $H$, called the restriction of $W$ to $H$. We denote it by $W\downarrow^G_H$ or simply $W\downarrow$.
From the definition of induced representations, we can write
$$Y\uparrow \;=\; \bigoplus_t\, t \otimes Y,$$
where $t$ ranges over a set of coset representatives for $H$. Conversely, if a representation $W$ of $G$ contains an $H$-invariant subspace $W_0$ such that
$$W = \bigoplus_t\, t\cdot W_0,$$
where $t$ again ranges over a set of coset representatives for $H$, then $W$ is isomorphic to $W_0\uparrow$ as $G$-representations.
In our situation all representations are unitary. In particular if $Y$ is a unitary representation of $H$ then $Y\uparrow$ is equipped with the inner product determined by requiring the subspaces $t \otimes Y$ to be pairwise orthogonal, and translating the inner product of $Y$ to each subspace $t \otimes Y$. With this inner product $Y\uparrow$ is a unitary representation of $G$. We will often denote the orthogonal projection onto $e \otimes Y$ by $E$. Then the orthogonal projection onto $t \otimes Y$ is $tEt^{-1}$, and we have $\sum_t tEt^{-1} = I$.
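As a concrete sanity check, an induced character can be computed with the standard Frobenius formula $\chi_{Y\uparrow}(g) = \frac{1}{|H|}\sum_{x \in G,\ x^{-1}gx \in H}\chi_Y(x^{-1}gx)$. The sketch below (our own small example, not from the paper) induces a nontrivial linear character of $A_3 \le S_3$ and recovers the 2-dimensional irrep of $S_3$:

```python
from itertools import permutations

G = list(permutations(range(3)))        # S_3 as tuples of images

def compose(a, b):                      # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0, 0, 0]
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

w = complex(-0.5, 3 ** 0.5 / 2)         # primitive cube root of unity
# nontrivial linear character of H = A_3, sending the 3-cycle (1,2,0) to w
chi_Y = {(0, 1, 2): 1, (1, 2, 0): w, (2, 0, 1): w ** 2}

def induced_char(g):
    # Frobenius formula for the character of Y induced from H up to G
    total = 0
    for x in G:
        c = compose(compose(inverse(x), g), x)
        if c in chi_Y:
            total += chi_Y[c]
    return total / 3                    # |H| = 3

vals = {g: induced_char(g) for g in G}
```

The induced character takes value 2 at the identity, $-1$ on 3-cycles and $0$ on transpositions, which is exactly the 2-dimensional irrep.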

Averaging operators
Given a $\mathbb{C}G$-module $V$ we can define the averaging operator, which turns an arbitrary linear map $A : V \to V$ into a $G$-invariant one:
$$R_G(A) = \frac{1}{|G|}\sum_{g\in G} gAg^{-1}.$$
The map $R_G$ is trace-preserving, so in particular if $p$ is a projection then $R_G(p)$ is non-zero, since it has non-zero trace.
If $V$ contains only a single isotype of irrep, i.e. $V \cong Y \otimes \mathbb{C}^m$ for some irrep $Y$ (with $G$ acting trivially on the multiplicity space $\mathbb{C}^m$), then $R_G$ is closely related to the partial trace with respect to the subspace $Y$:
$$R_G(A) = \frac{1}{\dim Y}\, I_Y \otimes \operatorname{Tr}_Y(A). \qquad (1)$$
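Equation (1) is easy to test numerically. In the sketch below (our own toy example) we take $Y$ to be the 2-dimensional irrep of $S_3$, realized by rotation and reflection matrices, act on $Y \otimes \mathbb{C}^2$, and compare the averaged operator against the partial-trace formula:

```python
import numpy as np

# 2-dim irrep of S_3: r = rotation by 120 degrees, s = a reflection
c, s3 = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s3], [s3, c]])
s = np.array([[1.0, 0.0], [0.0, -1.0]])
irrep = [np.eye(2), r, r @ r, s, s @ r, s @ r @ r]

m = 2                                         # multiplicity space C^m
act = [np.kron(g, np.eye(m)) for g in irrep]  # G acting on Y ⊗ C^m

rng = np.random.default_rng(0)
A = rng.standard_normal((2 * m, 2 * m))

# averaging operator R_G(A) = (1/|G|) sum_g g A g^{-1}  (g orthogonal here)
RG = sum(g @ A @ g.T for g in act) / len(act)

# partial trace over Y: Tr_Y(A)[u, u'] = sum_y A[(y,u), (y,u')]
TrY = A.reshape(2, m, 2, m).trace(axis1=0, axis2=2)
formula = np.kron(np.eye(2), TrY) / 2         # (1/dim Y) I_Y ⊗ Tr_Y(A)
```

The averaged operator matches the formula, preserves the trace, and commutes with the group action.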

Proof of the lower bound
Before giving the proof of the lower bound in Theorem 5.1 we prove a preliminary proposition.
If $R$ is an algebra over $\mathbb{C}$, $V$ an $R$-module and $W \le V$ a linear subspace, we let $R \cdot W$ denote the submodule of $V$ generated by $W$ (i.e. the smallest submodule containing the subspace $W$). Similarly for $r \in R$ we let $r \cdot W$ denote the subspace $\{rw : w \in W\}$.

Proposition 5.2. Suppose $Y$ is an irreducible unitary representation of $H$ (a subgroup of $G$). Also suppose $V$ is a nonzero $G$-subrepresentation of $Y\uparrow$, and let $E$ denote the orthogonal projection onto the subspace $e \otimes Y \subset Y\uparrow$. Then there exists a unit vector $\psi \in V$ such that
$$\langle\psi|E|\psi\rangle = \frac{\dim V}{\dim Y\uparrow}.$$
Remark. In Proposition 5.7 we will prove this is an upper bound for $\langle\psi|E|\psi\rangle$ over all unit vectors $\psi \in V$.
Proof. Let $\Pi_V$ denote the $G$-invariant orthogonal projection onto $V$. Since $Y$ is irreducible, $E$ is a minimal idempotent in $\operatorname{End}_H(Y\uparrow)$. Therefore, since $\Pi_V$ also belongs to $\operatorname{End}_H(Y\uparrow)$, we know $E\Pi_V E$ is a scalar times $E$:
$$E\,\Pi_V\,E = \lambda E \qquad (2)$$
for some non-zero scalar $\lambda \in \mathbb{C}$. In turn this implies $\Pi_V E \Pi_V$ is a scalar multiple of an orthogonal projection, since it is self-adjoint and, by Eq. (2),
$$(\Pi_V E \Pi_V)^2 = \Pi_V (E\,\Pi_V\,E)\,\Pi_V = \lambda\,\Pi_V E \Pi_V.$$
Write $\tilde{Y} = \Pi_V(e \otimes Y)$, so that $\Pi_V E \Pi_V = \lambda\,\Pi_{\tilde Y}$, where $\Pi_{\tilde Y}$ is the orthogonal projection onto $\tilde{Y}$. Next, we claim that $\tilde{Y}$ is not zero (so it is in fact isomorphic to $Y$ as an $H$-module). Indeed, we have
$$V = \Pi_V(Y\uparrow) = \sum_t \Pi_V(t \otimes Y) = \sum_t t\,\Pi_V(e \otimes Y) = \sum_t t\,\tilde{Y},$$
where the sum is over a set of coset representatives of $H$. Since $V \ne 0$, this shows that $\tilde{Y}$ is non-zero. In particular we have $\dim \tilde{Y} = \dim Y$. We can now compute the scalar $\lambda$ via
$$\lambda \dim Y = \operatorname{Tr}(\lambda\,\Pi_{\tilde Y}) = \operatorname{Tr}(\Pi_V E \Pi_V) = \operatorname{Tr}(E\,\Pi_V).$$
Finally, let $|\psi\rangle$ be any unit vector in $\tilde{Y}$. Consider the rank-1 projection $|\psi\rangle\langle\psi| : \tilde{Y} \to \tilde{Y}$. We apply the averaging operator $R_H$ (see Section 5.1.2) to get $R_H(|\psi\rangle\langle\psi|) = \frac{1}{|H|}\sum_{h\in H} h|\psi\rangle\langle\psi|h^{-1}$. The space of $H$-invariant maps from $\tilde{Y}$ to $\tilde{Y}$ is 1-dimensional (by Schur's Lemma) and spanned by $\Pi_{\tilde Y}$. Hence $R_H(|\psi\rangle\langle\psi|)$ is a scalar multiple of $\Pi_{\tilde Y}$, and by taking traces we find $R_H(|\psi\rangle\langle\psi|) = \frac{1}{\dim Y}\,\Pi_{\tilde Y}$.
Using this we compute
$$\langle\psi|E|\psi\rangle = \operatorname{Tr}\big(E\,|\psi\rangle\langle\psi|\big) = \operatorname{Tr}\big(E\,R_H(|\psi\rangle\langle\psi|)\big) = \frac{1}{\dim Y}\operatorname{Tr}(E\,\Pi_{\tilde Y}) = \frac{\operatorname{Tr}(E\,\Pi_V)}{\dim Y} = \frac{|H|\dim V}{|G|\dim Y} = \frac{\dim V}{\dim Y\uparrow},$$
as needed. (The second equality uses that $E$ is $H$-invariant; the fourth uses Eq. (2); the fifth follows by averaging $\operatorname{Tr}(E\Pi_V) = \operatorname{Tr}(tEt^{-1}\Pi_V)$ over coset representatives $t$ and using $\sum_t tEt^{-1} = I$.)
Proof of Theorem 5.1, lower bound. Let $Y$ be an irreducible constituent of $V\downarrow$ which maximizes the quantity $\dim\,(Y\uparrow)_V / \dim Y\uparrow$. Let $V'$ denote the $G$-subrepresentation $(Y\uparrow)_V$ of $Y\uparrow$ and again let $E$ denote the orthogonal projection onto the subspace $e \otimes Y \subset Y\uparrow$. Then by Proposition 5.2 there exists a unit vector $|\psi\rangle \in V'$ such that
$$\langle\psi|E|\psi\rangle = \frac{\dim V'}{\dim Y\uparrow} = \frac{\dim\,(Y\uparrow)_V}{\dim Y\uparrow}.$$
Now consider the oracle problem given by $(G, V', \pi', f)$ (i.e. the coset identification problem where the oracle is represented on $V'$ rather than $V$). Let $\Pi_{V'}$ denote the $G$-invariant orthogonal projection onto $V'$. We define a single-query algorithm for $(G, V', \pi', f)$ using no ancilla, the input state $|\psi\rangle$, and projective measurement $\{t\Pi_{V'}E\Pi_{V'}t^{-1}\}_t$ where $t$ ranges over a set of coset representatives for $H$ (so measuring outcome $t$ uniquely determines a coset of $H$). The measurement is used to distinguish the density operators $\{\rho_t = t\rho t^{-1}\}$ where $\rho = \frac{1}{|H|}\sum_{h\in H} h|\psi\rangle\langle\psi|h^{-1}$. Note that the support of $\rho$ is contained in $V'$, since $|\psi\rangle \in V'$ and $V'$ is $G$-invariant. Therefore $\rho\Pi_{V'} = \Pi_{V'}\rho = \rho$. Using this, we compute the success probability as
$$\operatorname{Tr}(\rho\,\Pi_{V'}E\Pi_{V'}) \quad \text{(since the trace is cyclic, and } t^{-1}\rho_t t = \rho\text{)}$$
$$= \operatorname{Tr}(\rho E) \quad \text{(since the trace is cyclic, and } \rho\Pi_{V'} = \Pi_{V'}\rho = \rho\text{)}$$
$$= \langle\psi|E|\psi\rangle \quad \text{(since } E \text{ is } H\text{-invariant)}.$$
This shows that there is an algorithm for $(G, V', \pi', f)$ which succeeds with probability $\dim\,(Y\uparrow)_V/\dim Y\uparrow$. Since $V'$ only contains irreps which are also contained in $V$, Lemma 2.4 implies there is also an algorithm for $(G, V, \pi, f)$ which succeeds with the same probability.
Remark. In applying Lemma 2.4 to produce an algorithm for (G, V, π, f ), one may have to introduce an ancilla register, to ensure that irreps appear with sufficiently large multiplicity to allow an embedding of V into the workspace.

The upper bound
In this section we prove the upper bound of Theorem 5.1 using a minimum-error quantum state discrimination approach [EMV06]. Before explaining the strategy to obtain the bound, we review the set-up. We fix an instance of coset identification $(G, V, \pi, f : G \to X)$. The subgroup $H$ is the preimage of $f(e)$, and the elements of $X$ may be identified with the left cosets of $H$. A single-query algorithm uses an initial state $|\psi\rangle \in V$ and feeds it to the oracle, which is a hidden element $a$ sampled uniformly from $G$. Afterwards, a measurement $\{E_x\}_{x\in X}$ is applied with the goal of recovering $f(a)$. With a choice of initial state fixed, the task of finding an optimal measurement $\{E_x\}$ amounts to finding an optimal measurement to discriminate the mixed states $\{\rho_x\}_{x\in X}$, where
$$\rho_x = \frac{1}{|H|}\sum_{g \in G:\ f(g) = x} \pi(g)\,|\psi\rangle\langle\psi|\,\pi(g)^{-1}.$$
Indeed, the success probability of the algorithm is equal to the probability that the measurement $\{E_x\}$ successfully discriminates the mixed states $\{\rho_x\}$, namely
$$\frac{1}{|X|}\sum_{x\in X}\operatorname{Tr}(E_x\rho_x).$$
We will prove that this success probability is bounded above by the quantity given in Theorem 5.1, which involves induced representations. We now provide an outline of the proof to give an indication of how induced representations enter the picture.
To take advantage of symmetry in the problem, note that the density matrices $\{\rho_x\}$ always satisfy
$$g\rho_x g^{-1} = \rho_{g\cdot x} \quad \text{for all } g \in G,\ x \in X.$$
We say a set of operators with this symmetry is orbital (a precise definition is given below). We first argue that any optimal measurement to distinguish an orbital set of density matrices can be modified to produce another optimal measurement which is itself orbital (Lemma 5.3). Next we aim to simplify the problem further by showing that any orbital POVM can be replaced by a measurement which is both orbital and projective. To do so requires embedding the original $\mathbb{C}G$-module $V$ into a larger one $W$ by adding an ancilla register. This is the content of Lemma 5.5, which is a "symmetric" version of the usual result that any POVM can be simulated using projective measurements and ancilla registers. As a result of this lemma we may make the following assumptions about an optimal single-query algorithm, which uses the larger Hilbert space $W$: 1. The measurement operators $\{E_x\}_{x\in X}$ are projective and orbital.
2. The initial state |ψ belongs to a G-invariant subspace of W which is isomorphic to V .
The existence of a projective orbital measurement implied by (1) is a strong condition on the structure of W : using the completeness relation, W can be written as the direct sum of the images of the measurement operators {E x }. The subspace Y which is the image of E x0 is left invariant by H, and the other subspaces are obtained through translation by a coset representative. This realizes W as the induced representation Y ↑ . Finally, in this restricted setting (incorporating the assumption (2) that |ψ ∈ V ) we are able to bound the success probability by decomposing Y into irreducible H-subrepresentations, and then applying a critical inequality (Proposition 5.7) that covers the situation when Y is irreducible. We now give the details.
With a given unitary representation $V$ of $G$ and a fixed $G$-set $X$ understood, we say a set of operators $\{A_x\}_{x\in X}$ (on $V$) is orbital if $gA_xg^{-1} = A_{g\cdot x}$ for all $x$ and $g$. The density matrices for a single-query algorithm for coset identification form an orbital set.

Lemma 5.3. Suppose $\{\rho_x\}_{x\in X}$ is an orbital set of density matrices. Then there exists an optimal measurement to distinguish the states $\{\rho_x\}$ which is orbital.
Proof. Eldar, Megretski and Verghese give the proof when $X = G$ with the action of left multiplication ([EMV04], Section 4.3) and it works in this setting as well. We give the proof for the reader's convenience. Suppose $\{E_x\}_{x\in X}$ is an optimal measurement. Then we define a new measurement
$$\tilde{E}_x = \frac{1}{|G|}\sum_{g\in G} g\, E_{g^{-1}\cdot x}\, g^{-1}.$$
We claim that $\{\tilde{E}_x\}$ is an orbital POVM which discriminates the states $\{\rho_x\}$ with the same success probability as $\{E_x\}$. Each operator $\tilde{E}_x$ is a nonnegative combination of positive semi-definite operators, hence is positive semi-definite. They satisfy the completeness relation:
$$\sum_{x\in X}\tilde{E}_x = \frac{1}{|G|}\sum_{g\in G} g\Big(\sum_{x\in X} E_{g^{-1}\cdot x}\Big)g^{-1} = \frac{1}{|G|}\sum_{g\in G} g\,I\,g^{-1} = I.$$
The completeness relation for $\{E_x\}$ is used in the second equality. We check that the POVM $\{\tilde{E}_x\}$ is orbital:
$$h\tilde{E}_x h^{-1} = \frac{1}{|G|}\sum_{g\in G} (hg)\, E_{g^{-1}\cdot x}\,(hg)^{-1} = \frac{1}{|G|}\sum_{g'\in G} g'\, E_{g'^{-1}h\cdot x}\, g'^{-1} = \tilde{E}_{h\cdot x}.$$
To complete the proof it suffices to show that the new measurement discriminates the states $\{\rho_x\}$ with the same probability as the original measurement. Indeed, we have
$$\frac{1}{|X|}\sum_{x\in X}\operatorname{Tr}(\tilde{E}_x\rho_x) = \frac{1}{|X||G|}\sum_{x,g}\operatorname{Tr}\big(E_{g^{-1}\cdot x}\, g^{-1}\rho_x g\big) = \frac{1}{|X||G|}\sum_{x,g}\operatorname{Tr}\big(E_{g^{-1}\cdot x}\,\rho_{g^{-1}\cdot x}\big) = \frac{1}{|X|}\sum_{y\in X}\operatorname{Tr}(E_y\rho_y).$$
The second equality follows from the orbital assumption ρ g −1 ·x = g −1 ρ x g, and the other steps are index substitutions.
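This symmetrization can be tested numerically for a small orbital family. In the sketch below (our own toy example) $G = \mathbb{Z}_3$ acts on $\mathbb{C}^3$ by cyclic shifts, $X = G$, and an arbitrary POVM is symmetrized as in the proof; the result is orbital, complete, and discriminates with the same success probability:

```python
import numpy as np

# G = Z_3 acting on C^3 by cyclic shifts; X = G with left multiplication
n = 3
P = np.roll(np.eye(n), 1, axis=0)                  # shift: P|j> = |j+1 mod 3>
U = [np.linalg.matrix_power(P, k) for k in range(n)]

psi = np.array([0.8, 0.6, 0.0])                    # arbitrary unit vector
rho = [U[k] @ np.outer(psi, psi) @ U[k].T for k in range(n)]   # orbital states

E = [np.diag(np.eye(n)[k]) for k in range(n)]      # an arbitrary starting POVM

# symmetrization: E~_x = (1/|G|) sum_g g E_{g^{-1}.x} g^{-1}
Et = [sum(U[g] @ E[(x - g) % n] @ U[g].T for g in range(n)) / n
      for x in range(n)]

success = sum(np.trace(E[x] @ rho[x]) for x in range(n)) / n
success_sym = sum(np.trace(Et[x] @ rho[x]) for x in range(n)) / n
```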
For the next result we use the following fact (cf. [NC11], Exercise 2.67): Lemma 5.4. Suppose V is a unitary representation of G, W a subrepresentation, and C : W → V a CG-module map which preserves inner products. Then C can be extended to V , meaning there is a unitary CG-module isomorphism U : V → V such that U coincides with C on W .
Proof. Let $Y$ denote the orthogonal complement of $W$ and $Y'$ the orthogonal complement of $C(W)$. Since $C$ preserves inner products, it is injective, so $C(W) \cong W$ as $\mathbb{C}G$-modules. Hence $Y \cong Y'$ as $\mathbb{C}G$-modules, and there exists an inner product preserving isomorphism $D : Y \to Y'$. Now the desired unitary operator is $U = C \oplus D$.

The following result is an equivariant version of the argument given by Nielsen and Chuang to show that arbitrary measurement operators can be simulated using projective measurements and ancilla spaces (see [NC11], Section 2.2.8).

Lemma 5.5. Suppose $\{E_x\}_{x\in X}$ is an orbital POVM on a unitary representation $V$ of $G$. Then there exist a unitary representation $W$ of $G$, an inner product preserving $\mathbb{C}G$-module embedding $\iota : V \to W$, and an orbital projective measurement $\{E'_x\}_{x\in X}$ on $W$ such that for every $|\psi\rangle \in V$ and every $x \in X$,
$$\langle\psi|E_x|\psi\rangle = \langle\iota(\psi)|E'_x|\iota(\psi)\rangle.$$

Proof. Let $W$ be the space $V \otimes \mathbb{C}G$ and fix a basepoint $x_0$ of the $G$-set $X$. Let $M_x = (E_x)^{1/2}$ be the non-negative square root of $E_x$. The uniqueness of square roots implies that the set $M = \{M_x\}$ is orbital. In addition, these constitute a set of measurement operators for the POVM, meaning $\sum_{x\in X} M_x^\dagger M_x = I$.

Now let $C_M$ be the controlled-$M$ operator acting on $W$ via
$$C_M\big(|v\rangle \otimes |g\rangle\big) = \sqrt{|X|}\,\big(M_{g\cdot x_0}|v\rangle\big) \otimes |g\rangle.$$
Then $C_M$ is a $\mathbb{C}G$-module map: for $h \in G$,
$$C_M\big(h(|v\rangle \otimes |g\rangle)\big) = C_M\big(|hv\rangle \otimes |hg\rangle\big) = \sqrt{|X|}\,\big(M_{hg\cdot x_0}|hv\rangle\big)\otimes|hg\rangle = \sqrt{|X|}\,\big(hM_{g\cdot x_0}|v\rangle\big)\otimes|hg\rangle = h\,C_M\big(|v\rangle\otimes|g\rangle\big).$$
For the third equality we used the fact that $M$ is orbital, i.e. $M_{g\cdot x_0} = gM_{x_0}g^{-1}$. Now $C_M$ is not necessarily invertible, but we claim that $C_M$ preserves inner products on the subspace $V \otimes |\eta\rangle$, where $|\eta\rangle = \frac{1}{\sqrt{|G|}}\sum_{g\in G}|g\rangle$ is the equal superposition vector in $\mathbb{C}G$:
$$\big\langle C_M(v\otimes\eta),\, C_M(w\otimes\eta)\big\rangle = \frac{|X|}{|G|}\sum_{g\in G}\langle v|M_{g\cdot x_0}^\dagger M_{g\cdot x_0}|w\rangle = \frac{|X|}{|G|}\cdot\frac{|G|}{|X|}\sum_{x\in X}\langle v|E_x|w\rangle = \langle v|w\rangle.$$
Therefore, by the previous lemma, there exists a unitary $\mathbb{C}G$-module endomorphism $U$ of $W$ which restricts to $C_M$ on $V \otimes |\eta\rangle$. We are ready to define the embedding $\iota$ and measurement $\{E'_x\}$ that satisfy the claim of the lemma.

We take $\iota$ to be the inclusion of $V$ as $V \otimes |\eta\rangle$: $\iota(|\psi\rangle) = |\psi\rangle \otimes |\eta\rangle$. Clearly $\iota$ is an inner product preserving $\mathbb{C}G$-module embedding. We define the projective measurement
$$E'_x = U^{-1}\Big(I \otimes \sum_{g:\ g\cdot x_0 = x}|g\rangle\langle g|\Big)U.$$
Here $I$ denotes the identity on $V$. The operators $\{E'_x\}$ constitute a projective measurement, and we check that they form an orbital set. Let $h \in G$. Then
$$hE'_xh^{-1} = U^{-1}\Big(I \otimes \sum_{g:\ g\cdot x_0 = x}|hg\rangle\langle hg|\Big)U = U^{-1}\Big(I \otimes \sum_{g':\ g'\cdot x_0 = h\cdot x}|g'\rangle\langle g'|\Big)U = E'_{h\cdot x},$$
using that $U$ commutes with the action of $G$. Then the probability of reading outcome $x$ is
$$\langle\psi,\eta|E'_x|\psi,\eta\rangle = \Big\langle C_M(\psi\otimes\eta)\Big|\Big(I\otimes\sum_{g:\ g\cdot x_0=x}|g\rangle\langle g|\Big)\Big|C_M(\psi\otimes\eta)\Big\rangle = \frac{|X|}{|G|}\sum_{g:\ g\cdot x_0 = x}\langle\psi|E_{g\cdot x_0}|\psi\rangle = \langle\psi|E_x|\psi\rangle.$$
The last equality follows since the number of $g$ for which $g\cdot x_0 = x$ is equal to $|G|/|X|$ for all $x \in X$ (since $X$ is a transitive $G$-set). This proves the lemma.
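The non-equivariant core of this lemma is the standard dilation of a POVM to a projective measurement on a larger space. A minimal sketch, using our own example (the qubit "trine" POVM, not from the paper):

```python
import numpy as np

# a 3-outcome qubit POVM (the trine): E_k = (2/3)|phi_k><phi_k|, sum_k E_k = I
phis = [np.array([np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3)])
        for k in range(3)]
E = [(2 / 3) * np.outer(p, p) for p in phis]

# measurement operators M_k = E_k^{1/2} (each E_k is a scaled rank-1 projection)
M = [np.sqrt(2 / 3) * np.outer(p, p) for p in phis]

# dilation isometry A|v> = sum_k (M_k|v>) ⊗ |k>, as a 6x2 matrix of stacked blocks
A = np.vstack(M)

psi = np.array([0.6, 0.8])
dilated = A @ psi                                  # state in C^2 ⊗ C^3

# projecting onto ancilla value k reproduces the POVM statistics
probs_projective = [float(np.sum(dilated[2 * k: 2 * k + 2] ** 2)) for k in range(3)]
probs_povm = [float(psi @ E[k] @ psi) for k in range(3)]
```

The map $A$ is an isometry precisely because $\sum_k M_k^\dagger M_k = I$, which is the same completeness relation used in the proof above.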
As a result of the lemma, any orbital measurement to distinguish orbital states in a $\mathbb{C}G$-module $V$ can be simulated by a projective orbital measurement in a larger $\mathbb{C}G$-module $W$. The next lemma explains that the existence of a projective orbital measurement implies a decomposition of the Hilbert space $W$ that realizes $W$ as a representation induced from $H$.
Lemma 5.6. Suppose $W$ is a unitary representation of $G$ equipped with an orbital projective measurement $\{E_x\}_{x\in X}$, and let $W_{x_0}$ denote the image of $E_{x_0}$. Then $W_{x_0}$ is invariant under the stabilizer $H$ of $x_0$, and
$$W = \bigoplus_t\, t\cdot W_{x_0},$$
where the sum is over a set of left coset representatives for $H$; hence $W \cong W_{x_0}\uparrow$.

Proof. Since the measurement is orbital, $hE_{x_0}h^{-1} = E_{x_0}$ for every $h \in H$, so $W_{x_0}$ is $H$-invariant, and $tE_{x_0}t^{-1} = E_{t\cdot x_0}$ has image $t\cdot W_{x_0}$. By the completeness relation, the images of the projections $\{E_x\}$ are pairwise orthogonal and span $W$, which gives the displayed decomposition. By the characterization of induced representations discussed in Section 5.1.1, this shows $W \cong W_{x_0}\uparrow$.

The lemmas above show that as long as we are willing to embed our original representation $V$ into a larger representation $W$, we may assume that $W$ is induced from some representation $Y$ of $H$ and that the measurement operators are projections corresponding to the direct sum decomposition of $W$ as an induced representation. In other words, the measurement operator corresponding to outcome $x \in X$ is projection onto $t \otimes Y$ where $t$ is any element such that $t \cdot f(e) = x$. The next result is the final key to unlocking the upper bound.
Proposition 5.7. Suppose $Y$ is an irreducible unitary representation of $H$ (a subgroup of $G$). Let $V$ be a $G$-subrepresentation of $Y\uparrow$. Let $E$ denote orthogonal projection onto the subspace $e \otimes Y \subset Y\uparrow$. Then for any unit vector $\psi \in V$ we have
$$\langle\psi|E|\psi\rangle \le \frac{\dim V}{\dim Y\uparrow}.$$

Proof. Consider the action of $H$ on $Y\uparrow$ and let $W$ denote the $Y$-isotypic component of $Y\uparrow$. Then $W \cong Y \otimes \mathbb{C}^m$ as $H$-representations, where $m$ is the multiplicity of the irrep $Y$ in $Y\uparrow\downarrow$. Since $E$ is an $H$-invariant projection with image isomorphic to $Y$, we may assume that $\psi$ belongs to $W$ in addition to $V$. (Indeed, the support of $E$ is contained in $W$, so $E = \Pi_W E = E\Pi_W$, which implies $\langle\psi|E|\psi\rangle = \langle\Pi_W\psi|E|\Pi_W\psi\rangle$.) Fix an orthonormal basis $\{y_1, \dots, y_d\}$ of $Y$ so that we may write
$$|\psi\rangle = \sum_{i=1}^d \lambda_i\, |y_i\rangle \otimes |u_i\rangle,$$
where the $u_i$'s are unit vectors in $\mathbb{C}^m$ and $\lambda_i \ge 0$ with $\sum_i\lambda_i^2 = 1$. We apply the averaging operator $R_H$ of Section 5.1.2 to the projection $|\psi\rangle\langle\psi|$. By Equation (1) of Section 5.1.2 we have
$$R_H(|\psi\rangle\langle\psi|) = \frac{1}{\dim Y}\, I_Y \otimes \sum_i \lambda_i^2\, |u_i\rangle\langle u_i|.$$
In particular $R_H(|\psi\rangle\langle\psi|) \le \frac{1}{\dim Y}\Pi_W$. Note that since $|\psi\rangle \in V \cap W$, the support of $R_H(|\psi\rangle\langle\psi|)$ is also contained in the $H$-submodule $V \cap W$. Hence we deduce the stronger inequality
$$R_H(|\psi\rangle\langle\psi|) \le \frac{1}{\dim Y}\,\Pi_{V\cap W}.$$
Now we may estimate $\langle\psi|E|\psi\rangle$:
$$\langle\psi|E|\psi\rangle = \operatorname{Tr}\big(E\,R_H(|\psi\rangle\langle\psi|)\big) \le \frac{1}{\dim Y}\operatorname{Tr}(E\,\Pi_{V\cap W}) = \frac{1}{\dim Y}\operatorname{Tr}(E\,\Pi_V).$$
Here $\operatorname{Tr}(E\,\Pi_V)$ can be computed by averaging over a set of coset representatives for $H$: using that $\Pi_V$ commutes with the action of $G$ and that $\sum_t tEt^{-1} = I$ we have
$$\operatorname{Tr}(E\,\Pi_V) = \frac{|H|}{|G|}\sum_t \operatorname{Tr}(tEt^{-1}\,\Pi_V) = \frac{|H|}{|G|}\operatorname{Tr}(\Pi_V) = \frac{|H|\dim V}{|G|}.$$
Combining the two displays gives $\langle\psi|E|\psi\rangle \le \frac{|H|\dim V}{|G|\dim Y} = \frac{\dim V}{\dim Y\uparrow}$. We are ready to prove the upper bound in Theorem 5.1.
Proof of Theorem 5.1, upper bound. Let $(G, V, \pi, f)$ specify an instance of coset identification and let $H$ denote the stabilizer of a chosen point $x_0 \in X$ (recall that the codomain of $f$ is a transitive $G$-set $X$). Suppose an optimal single-query algorithm is given by an input state $|\psi\rangle \in V$ (again we may assume there is no workspace by absorbing it into $V$) and POVM $\{\tilde{E}_x\}$. By Lemmas 5.5 and 5.6, there is a (not necessarily irreducible) representation $Y$ of $H$ and a $\mathbb{C}G$-submodule of $Y\uparrow$ isomorphic to $V$ (which we identify with $V$) such that the success probability of our algorithm is equal to the success probability of an algorithm using input state $|\psi\rangle \in V \subset Y\uparrow$ and the projective measurement $\{tEt^{-1}\}_t$, where $E$ denotes orthogonal projection onto $e \otimes Y$ and $t$ ranges over a set of coset representatives for $H$. Now decompose $Y$ into irreducible $H$-invariant orthogonal subspaces:
$$Y = Y_1 \oplus \cdots \oplus Y_r.$$
Then
$$Y\uparrow = Y_1\uparrow \oplus \cdots \oplus Y_r\uparrow.$$
Then $|\psi\rangle$ can be decomposed as a combination of orthogonal unit vectors $|\psi\rangle = \lambda_1|\psi_1\rangle + \cdots + \lambda_r|\psi_r\rangle$ such that each $|\psi_i\rangle$ belongs to $Y_i\uparrow$. Even more is true: since $\lambda_i|\psi_i\rangle = \Pi_i|\psi\rangle$ and $\Pi_i$ (the orthogonal projection onto $Y_i\uparrow$) is a $\mathbb{C}G$-module map, we know $|\psi_i\rangle$ belongs to the $G$-subrepresentation $(Y_i\uparrow)_V$. We are ready to bound the success probability of the algorithm. Recall that we are using the measurement $\{tEt^{-1}\}_t$ to distinguish the density operators $\{t\rho t^{-1}\}_t$ where $\rho = \frac{1}{|H|}\sum_{h\in H}h|\psi\rangle\langle\psi|h^{-1}$. Then the success probability is
$$\operatorname{Tr}(\rho E) = \langle\psi|E|\psi\rangle,$$
since $E$ is $H$-invariant. Now, writing $E_i$ for the orthogonal projection onto $e \otimes Y_i$, we have $E = E_1 + \cdots + E_r$, and since $E$ maps each $Y_i\uparrow$ into itself the cross terms vanish. Using the decomposition of $|\psi\rangle$ we have
$$\langle\psi|E|\psi\rangle = \sum_{i=1}^r \lambda_i^2\,\langle\psi_i|E_i|\psi_i\rangle.$$
Now by Proposition 5.7 (applied with the subrepresentation $(Y_i\uparrow)_V$ of $Y_i\uparrow$) we have, for all $i$,
$$\langle\psi_i|E_i|\psi_i\rangle \le \frac{\dim\,(Y_i\uparrow)_V}{\dim Y_i\uparrow} \le \max_{Y'\in I_H(V\downarrow)} \frac{\dim\,(Y'\uparrow)_V}{\dim Y'\uparrow},$$
and hence the success probability is at most the maximum in Theorem 5.1 (note that if $\lambda_i \ne 0$ then $Y_i$ appears in $V\downarrow$ by Frobenius reciprocity).

Query complexity
We now know the success probability of an optimal single-query algorithm solving coset identification. As in Section 4, we combine this with the fact that an optimal $t$-query algorithm with access to the representation $V$ is the same as an optimal single-query algorithm with access to $V^{\otimes t}$ to determine the optimal success probability for $t$-query algorithms:

Corollary 5.8. Let $(G, V, \pi, f)$ describe a case of coset identification. Then an optimal $t$-query algorithm succeeds with probability
$$\max_{Y\in I_H(V^{\otimes t}\downarrow)} \frac{\dim\,(Y\uparrow)_{V^{\otimes t}}}{\dim Y\uparrow}.$$
A straightforward consequence is the following:

Theorem 5.9. Let $(G, V, \pi, f)$ describe a case of coset identification. Then the zero-error quantum query complexity of the problem is the minimum $t$ for which there exists some $Y \in \operatorname{Irr}(H)$ such that every irrep of $G$ appearing in $Y\uparrow$ also appears in $V^{\otimes t}$.
The bounded error quantum query complexity is the minimum $t$ for which the quantity in Corollary 5.8 is at least $2/3$.

As an example, consider coset identification for $G = S_4$ with respect to the Klein four-subgroup $H = \{e, (12)(34), (13)(24), (14)(23)\} \cong \mathbb{Z}_2 \times \mathbb{Z}_2$, whose irreducible characters we denote $\psi_{i,j}$ with $i, j \in \{0,1\}$ ($\psi_{0,0}$ being trivial). One computes, for instance, that $\chi_{[2,1^2]}\downarrow = \psi_{0,1} + \psi_{1,0} + \psi_{1,1}$ and $\psi_{0,0}\uparrow = \chi_{[4]} + 2\chi_{[2^2]} + \chi_{[1^4]}$. Finally, we are given access to the defining permutation representation of $S_4$, which decomposes as $V = \chi_{[4]} + \chi_{[3,1]}$. To find the optimal success probability of a single-query algorithm to determine which coset of $H$ a permutation belongs to, we examine the irreps of $H$ appearing in $V$. Every irrep of $H$ appears in $V$, so we look at each one. First consider the trivial representation $\psi_{0,0}$. The only irrep of $S_4$ that appears in both $V$ and $\psi_{0,0}\uparrow$ is $\chi_{[4]}$, which contributes a one-dimensional subspace to the 6-dimensional $\psi_{0,0}\uparrow$. Therefore using the irrep $\psi_{0,0}$ gives a success probability of $1/6$. Now consider $\psi_{0,1}$. In this case only $\chi_{[3,1]}$ appears in both $V$ and $\psi_{0,1}\uparrow$, and it contributes 3 dimensions to the 6-dimensional $\psi_{0,1}\uparrow$. Therefore the success probability using this irrep is $3/6 = 1/2$. The other characters $\psi_{1,0}$ and $\psi_{1,1}$ give the same ratio, so the optimal success probability of a single-query quantum algorithm is $1/2$ (note a single-query classical algorithm can do no better than probability $1/6$).
That the optimal 2-query success probability is 1 can be verified using the fact that V ⊗2 contains a copy of every irrep of S 4 except the sign representation, and so using any of the irreps ψ 0,1 , ψ 1,0 , ψ 1,1 we can achieve probability 1.
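The ratios in this example can be recomputed from character data alone, using Frobenius reciprocity $\langle\chi_\lambda\downarrow, \psi\rangle_H = \langle\chi_\lambda, \psi\uparrow\rangle_G$. A sketch (our own encoding of the relevant character values; names ours):

```python
from fractions import Fraction

# values of the S_4 irreducible characters on the four elements of
# H = {e, (12)(34), (13)(24), (14)(23)}, in that order
chi_on_H = {
    '[4]':     [1, 1, 1, 1],
    '[3,1]':   [3, -1, -1, -1],
    '[2,2]':   [2, 2, 2, 2],
    '[2,1,1]': [3, -1, -1, -1],
    '[1^4]':   [1, 1, 1, 1],
}
dims = {'[4]': 1, '[3,1]': 3, '[2,2]': 2, '[2,1,1]': 3, '[1^4]': 1}

# irreducible characters psi_{ij} of H ≅ Z_2 x Z_2 on (e, a, b, ab)
psi = {
    (0, 0): [1, 1, 1, 1],
    (0, 1): [1, 1, -1, -1],
    (1, 0): [1, -1, 1, -1],
    (1, 1): [1, -1, -1, 1],
}

I_V = ['[4]', '[3,1]']        # defining representation V = chi_[4] + chi_[3,1]

def mult(lam, ij):
    # <chi_lam restricted to H, psi_ij> = multiplicity of chi_lam in psi_ij↑
    return Fraction(sum(a * b for a, b in zip(chi_on_H[lam], psi[ij])), 4)

def success(ij):
    # dim (psi_ij↑)_V / dim psi_ij↑, with dim psi_ij↑ = |S_4 : H| = 6
    return sum(mult(lam, ij) * dims[lam] for lam in I_V) / 6

best = max(success(ij) for ij in psi)
```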

An action of the Heisenberg Group
We now consider a natural action of the Heisenberg group over a finite field for which the oracle identification problem achieves a significant quantum speedup over the best classical algorithm. For this action, we also show that a single query suffices to solve the coset identification problem, where the chosen subgroup H is the center of the group.
Specifically, let p be prime and let n be a positive integer. Let G = G(p, n) denote the Heisenberg group of all (n + 2)-by-(n + 2) matrices with entries in Z p , 1's on the main diagonal and whose only other nonzero entries are in the first row and last column. Such matrices are in correspondence with triples (x, y, z), with x, y ∈ Z n p and z ∈ Z p , where (1, x, z) is the first row of the matrix and (z, y, 1) is the last column of the matrix. Then G(p, n) is a p-group of order p 2n+1 .
We consider the usual action of G(p, n) on the set X = Z n+2 p , considered as column vectors, by matrix-vector multiplication. The corresponding classical oracle identification problem turns out to have complexity b(G) = n + 1. To see this note that y and z can be determined by the single query (0, . . . , 0, 1). Further queries give affine conditions on x, and it requires at least n of these to determine the value of x.
In contrast to the n + 1 queries needed to solve this question classically, we now show that a single quantum query suffices to solve the problem with high probability, and that two queries suffice to solve the problem with certainty.
Theorem 6.1. Let $G(p,n)$ denote the Heisenberg group defined above acting by multiplication on the set of column vectors $X = \mathbb{Z}_p^{n+2}$. Then an optimal single-query quantum algorithm solves the oracle identification problem with probability
$$\frac{(2p^n - 1) + (p-1)p^{2n}}{p^{2n+1}}.$$
Furthermore, two queries suffice to solve the oracle identification problem with probability 1.
We will prove this theorem shortly. Before doing so, let us consider a related coset identification problem. Let $H < G(p,n)$ be the subgroup in which $x = y = 0$. Then $H$ is a subgroup of order $p$, and in fact $H$ is the center of $G(p,n)$. The coset identification problem with respect to this subgroup $H$ asks us to determine the values of $x$ and $y$. In the classical case, $n+1$ queries are again required. However this time, a single quantum query solves the coset identification problem with certainty. In order to prove these theorems, we must understand the representation theory of $G = G(p,n)$, which we now describe briefly (for a concise and elegant review, see the letter by M. Isaacs to P. Diaconis published in the appendix to [Dia]). The group $G$ has $p^{2n}$ one-dimensional irreducible representations and $p-1$ irreducible representations of dimension $p^n$. The one-dimensional representations will be denoted $\chi_{\alpha,\beta}$, indexed by tuples $\alpha, \beta \in \mathbb{Z}_p^n$. We identify these representations with their characters, which are given by the formula
$$\chi_{\alpha,\beta}(x, y, z) = \omega^{\alpha\cdot x + \beta\cdot y},$$
with $\omega$ denoting a primitive $p$-th root of unity.
The $p^n$-dimensional representations, denoted $\rho_c$ with $c \in \mathbb{Z}_p$, $c \ne 0$, are described as follows. Let $U$ be the vector space of all complex-valued functions on $(\mathbb{Z}_p)^n$. Fix $c \in \mathbb{Z}_p$ with $c \ne 0$. Then there is an irreducible representation $\rho_c$ of $G$ on $U$ given by
$$\big(\rho_c(x,y,z)\,u\big)(v) = \omega^{c(z + y\cdot v)}\,u(v + x).$$
The character $\theta_c$ of this representation is given by
$$\theta_c(x,y,z) = \begin{cases} p^n\,\omega^{cz} & \text{if } x = y = 0,\\ 0 & \text{otherwise.}\end{cases}$$
In order to understand the query complexity of the oracle identification problem we must decompose the representation $V = \mathbb{C}^X$ into irreducible representations. Since this representation comes from a permutation representation of $G$, each character value $\chi_V(x,y,z)$ is simply the number of fixed points of the matrix $A = (x,y,z)$. This number of fixed points is determined by the rank of the matrix $A' = A - I$. If $(x,y,z) = 0$, then $A'$ has rank 0, and if $x$ and $y$ are both nonzero, then $A'$ has rank 2. In all other cases $A'$ has rank 1. We thus obtain the following character values of our given permutation representation $V$:
$$\chi_V(x,y,z) = \begin{cases} p^{n+2} & \text{if } (x,y,z) = (0,0,0),\\ p^{n+1} & \text{if } A' \text{ has rank } 1,\\ p^{n} & \text{if } x \ne 0 \text{ and } y \ne 0.\end{cases}$$
To find the number of copies of the trivial representation $\chi_{0,0}$ appearing in $\chi_V$, we simply average these values and obtain $\langle\chi_V, \chi_{0,0}\rangle = p^n + 2(p-1)$. Now let $\varphi$ be any nontrivial irreducible character of $G$. We compute the number $\langle\chi_V, \varphi\rangle$ of copies of $\varphi$ appearing in $\chi_V$ as follows:
$$\langle\chi_V, \varphi\rangle = \frac{1}{p^{2n+1}}\Big(p^{n+2}\,\overline{\varphi(0,0,0)} \;+\; p^{n+1}{\sum_{(x,y,z)}}'\,\overline{\varphi(x,y,z)} \;+\; p^n\!\!\sum_{x\ne 0,\ y\ne 0,\ z}\!\!\overline{\varphi(x,y,z)}\Big),$$
where $\sum'_{(x,y,z)}$ indicates a sum over those $(x,y,z)$ such that $x = 0$ or $y = 0$, but $(x,y,z) \ne (0,0,0)$.
Taking $\varphi = \theta_c$ in this formula, we conclude that $V$ contains $p-1$ copies of $\rho_c$. Taking $\varphi = \chi_{\alpha,\beta}$ with $(\alpha,\beta) \ne (0,0)$, we get
$$\langle\chi_V, \chi_{\alpha,\beta}\rangle = \begin{cases} p-1 & \text{if exactly one of } \alpha, \beta \text{ is zero},\\ 0 & \text{if } \alpha \ne 0 \text{ and } \beta \ne 0.\end{cases}$$
We conclude that our $V$ contains copies of all irreducible representations of $G$ except the $\chi_{\alpha,\beta}$ for which both $\alpha$ and $\beta$ are nonzero. The optimal single-query quantum success probability is thus given by
$$\frac{1}{|G|}\sum_{\chi\in I(V)}\chi(e)^2 = \frac{(2p^n - 1) + (p-1)p^{2n}}{p^{2n+1}}.$$
If two queries are allowed, we have access to the representation $V \otimes V$. Noting that $\chi_{\alpha,\beta} = \chi_{\alpha,0} \otimes \chi_{0,\beta}$, it follows that $V \otimes V$ contains every irreducible representation of $G$. Hence, there is a probability 1 algorithm with two quantum queries.
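For small parameters these multiplicities can be verified by brute force, computing $\chi_V$ as a fixed-point count and taking inner products with the characters above. A sketch with $p = 3$, $n = 1$ (names ours):

```python
import numpy as np
from itertools import product

p, n = 3, 1
w = np.exp(2j * np.pi / p)
G = list(product(range(p), repeat=3))              # triples (x, y, z), n = 1

def matrix(g):
    # Heisenberg matrix with first row (1, x, z) and last column (z, y, 1)
    x, y, z = g
    return np.array([[1, x, z], [0, 1, y], [0, 0, 1]])

def fixed_points(g):
    A = matrix(g)
    return sum(1 for v in product(range(p), repeat=3)
               if np.array_equal(A.dot(v) % p, np.array(v)))

chi_V = {g: fixed_points(g) for g in G}

triv_mult = sum(chi_V[g] for g in G) / len(G)      # <chi_V, chi_{0,0}>

def theta_mult(c):
    # <chi_V, theta_c>: theta_c is p^n w^{cz} on the center, 0 elsewhere
    s = sum(chi_V[g] * np.conj(p**n * w**(c * g[2]))
            for g in G if g[0] == 0 and g[1] == 0)
    return s / len(G)

def linear_mult(a, b):
    # <chi_V, chi_{a,b}> with chi_{a,b}(x, y, z) = w^{a x + b y}
    s = sum(chi_V[g] * np.conj(w**(a * g[0] + b * g[1])) for g in G)
    return s / len(G)
```

The checks recover the trivial multiplicity $p^n + 2(p-1) = 7$, multiplicity $p-1 = 2$ for $\rho_c$ and for $\chi_{\alpha,0}$, and multiplicity $0$ when both indices are nonzero.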
Finally, we turn our attention to the coset identification problem for the subgroup $H = \{(0,0,z) : z \in \mathbb{Z}_p\}$. To see that there is a probability one algorithm, note that any nontrivial character of $H$ induces up to $G$ to give $p^n$ copies of one of the $\rho_c$. Since $\rho_c$ is contained in $V$, it follows that the coset identification problem can be solved with one query.

Guessing the sign of a permutation
Suppose there is an unknown permutation $g \in G = S_n$ for some $n \ge 2$. We wish to learn the sign of $g$ using queries to the standard action of $S_n$ on $\{1,\dots,n\}$. This is an instance of the hidden coset problem where $H = A_n$. Classically, $n-1$ queries are necessary to determine the sign of $g$; in fact, with any fewer queries we do not learn anything about the sign. Quantumly, we have

Theorem 6.3. Let $n \ge 2$ and consider the standard action of $S_n$ on $\{1,\dots,n\}$. Consider the hidden coset problem for the subgroup $H = A_n$; that is, we wish to determine the sign of a hidden permutation. For exact learning, $t = \lfloor n/2 \rfloor$ quantum queries suffice. With any smaller number of quantum queries, one cannot do any better than random guessing ($p = 1/2$).

Proof. For facts and notation about representations of $S_n$ and $A_n$, we refer the reader to the proofs of Theorems 4.3 and 4.4.
Let $V$ be the defining representation of $S_n$, and suppose we use $t$ queries so that we have access to $V' = V^{\otimes t}$. Suppose $\lambda$ is a non-self-conjugate partition such that $V'$ contains $V_\lambda$. Letting $Y = W_\lambda$, we see that $Y\uparrow$ consists of one copy of $V_\lambda$ and one copy of $V_{\lambda^*}$. Hence the quotient of dimensions $\frac{\dim\,(Y\uparrow)_{V'}}{\dim Y\uparrow}$ equals $1$ if $V'$ contains both $V_\lambda$ and $V_{\lambda^*}$, and $\frac{1}{2}$ if $V'$ contains $V_\lambda$ but not $V_{\lambda^*}$. Now consider a self-conjugate partition $\lambda$ contained in $V'$. In this case, if we take $Y = W^+_\lambda$, then $Y\uparrow$ is $V_\lambda$. Hence in this case the quotient of dimensions is 1.
We thus wish to find the smallest $t$ such that $V^{\otimes t}$ contains both $V_\lambda$ and $V_{\lambda^*}$ for some partition $\lambda$ (including the possibility that $\lambda$ is self-conjugate). Since $V^{\otimes t}$ contains $V_\lambda$ exactly when $\lambda_1 \ge n-t$, such a $\lambda$ exists if and only if some partition of $n$ satisfies both $\lambda_1 \ge n-t$ and $\lambda^*_1 \ge n-t$; since $\lambda_1 + \lambda^*_1 \le n+1$, with equality achievable by a hook, this happens precisely when $t \ge \lfloor n/2 \rfloor$. For such $t$, we will have a $t$-query probability 1 algorithm, and for fewer queries we cannot do better than probability $1/2$, which is random guessing.
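The smallest such $t$ can be confirmed by brute force over partitions (a sketch; helper names ours):

```python
def partitions_of(n, maxpart=None):
    # all partitions of n as weakly decreasing tuples
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions_of(n - k, k):
            yield (k,) + rest

def min_sign_queries(n):
    # least t such that some partition lam of n has
    # lam_1 >= n - t and lam*_1 = len(lam) >= n - t
    for t in range(n + 1):
        if any(lam[0] >= n - t and len(lam) >= n - t
               for lam in partitions_of(n)):
            return t

results = {n: min_sign_queries(n) for n in range(2, 12)}
```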

Previously studied examples of coset identification
Here we discuss the relation of this work to preceding work. To the authors' knowledge, every previously studied special case of the general coset identification problem uses oracles sampled from an abelian group. Zhandry [Zha15] addresses this problem (calling it the oracle classification problem) and provides an expression for the optimal success probability essentially identical to Theorem 5.1. Thus our results are a non-abelian generalization of Zhandry's work, which was a key inspiration for the present paper. We briefly explain why Zhandry's result is equivalent to ours and then examine some other more specialized and well-known problems.
Coset identification for an abelian group is described by a tuple $(A, V, \pi, f)$ with $A$ abelian and $f : A \to X$ distinct and constant on the cosets of a subgroup $B$. We remark that since $B$ is a normal subgroup it is possible to identify $X$ with the quotient group $A/B$ and $f$ with the standard homomorphism $A \to A/B$. Hence coset identification in this instance may also be called homomorphism evaluation. By Cor. 5.8 the optimal success probability for a $t$-query algorithm to determine $f(a)$ is
$$\max_{Y\in I_B(V^{\otimes t}\downarrow)} \frac{\dim\,(Y\uparrow)_{V^{\otimes t}}}{\dim Y\uparrow}.$$
Since $B$ is abelian, $Y$ is 1-dimensional and $Y\uparrow$ decomposes as $|A : B|$ many distinct $A$-characters (corresponding to the characters of $A/B$). Hence $\dim\,(Y\uparrow)_{V^{\otimes t}}$ (which by definition is the dimension of the maximal subspace of $Y\uparrow$ containing only characters in $V^{\otimes t}$) is exactly equal to the number of shared irreps, i.e. the cardinality of the set $I(Y\uparrow) \cap I(V^{\otimes t})$. As $Y$ varies, these sets partition $I(V^{\otimes t})$ into equivalence classes $[\chi]$, and by Frobenius reciprocity two characters are equivalent if and only if their restrictions to $B$ are identical. Hence the equation above can be restated:

Theorem 7.1. (Zhandry, [Zha15], Theorem 4.1) The optimal success probability of a $t$-query algorithm for abelian coset identification is
$$\frac{1}{|A:B|}\;\max_{[\chi]}\;\big|[\chi] \cap I(V^{\otimes t})\big|,$$
where the maximum is over equivalence classes of characters of $A$ with identical restriction to $B$.

Under this interpretation we're aiming to find the largest collection of characters appearing in $V^{\otimes t}$ which have the same restriction to $B$. Zhandry includes several nice applications of the previous theorem, explained in a linear algebraic framework. Below we readdress a couple of these problems (polynomial interpolation and group summation) using character theoretic language, and we revisit the van Dam algorithm [van98].

Polynomial interpolation
The polynomial interpolation problem as outlined by Zhandry [Zha15] and Childs, van Dam, Hung and Shparlinski [CvDHS16] is as follows. Let $F = \mathbb{F}_q$ where $q = p^r$ for some prime $p$. Suppose we have an unknown polynomial $f(X)$ over $F$ of degree at most $d$ and we wish to determine $f$ using queries that provide the value $f(x)$ for $x \in F$. That is, access to $f$ is provided via the oracle $U_f$ acting on $V = \mathbb{C}^F \otimes \mathbb{C}^F$ by
$$U_f |x, y\rangle = |x, y + f(x)\rangle.$$
This equation defines a representation on $V$ of the group $G$ of all polynomials of degree at most $d$ under addition.
We would like to see which of the characters of $G$ appear in this representation. Let $\omega$ be a primitive $p$-th root of unity and let $\mathrm{Tr}$ denote the trace map from $\mathbb{F}_q$ to $\mathbb{F}_p$. The characters of the additive group $F$ are given by $\chi_y$ with $y \in F$, defined by $\chi_y(x) = \omega^{\mathrm{Tr}(yx)}$.
For $y \in F$, define the character state
$$|\omega_y\rangle = \frac{1}{\sqrt{q}} \sum_{s \in F} \chi_y(-s) |s\rangle.$$
It is easy to see that $U_f |x, \omega_y\rangle = \chi_y(f(x)) \, |x, \omega_y\rangle$.
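This eigenvector claim is easy to check numerically. A small sketch, restricted for simplicity to a prime field ($q = p$, so the trace map is the identity) and using an arbitrarily chosen test polynomial; normalization is omitted since it does not affect the eigenvalue:

```python
import cmath

p = 5                                       # prime field F_5, so Tr is the identity
omega = cmath.exp(2j * cmath.pi / p)
f = lambda x: (2 * x**2 + 3 * x + 1) % p    # an arbitrary degree-2 test polynomial

def char_state(y):
    # |omega_y> = sum_s chi_y(-s)|s>, stored as amplitudes indexed by s
    return [omega ** (y * (-s) % p) for s in range(p)]

def U_f(state_xy):
    # U_f|x, s> = |x, s + f(x)>: permute the second register
    x, amps = state_xy
    new = [0.0] * p
    for s, a in enumerate(amps):
        new[(s + f(x)) % p] = a
    return x, new

for x in range(p):
    for y in range(p):
        _, out = U_f((x, char_state(y)))
        expect = [omega ** (y * f(x) % p) * a for a in char_state(y)]
        assert all(abs(o - e) < 1e-9 for o, e in zip(out, expect))
print("U_f |x, omega_y> = chi_y(f(x)) |x, omega_y> verified")
```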
Thus if we let $V_{x,y}$ denote the $1$-dimensional space spanned by $|x, \omega_y\rangle$, we have the decomposition into irreducible representations
$$V = \bigoplus_{x, y \in F} V_{x,y}.$$
The characters of $F^{d+1}$, which is isomorphic to $G$, are given, for $a \in F^{d+1}$, by $\varphi_a$, where
$$\varphi_a(c) = \omega^{\mathrm{Tr}(a \cdot c)}.$$
The character of $V_{x,y}$ is $\varphi_a$, with
$$a = (y, yx, yx^2, \ldots, yx^d). \qquad (4)$$
To see this, note that if $f(X) = \sum c_i X^i$, then
$$\chi_y(f(x)) = \omega^{\mathrm{Tr}(y f(x))} = \omega^{\mathrm{Tr}(c \cdot (y, yx, yx^2, \ldots, yx^d))}.$$
Thus the irreps that appear in $V$ are exactly the $\varphi_a$ where $a$ has the form in Equation 4. Since $\varphi_a \otimes \varphi_{a'} = \varphi_{a + a'}$, it follows that the $t$-fold tensor power contains those $\varphi_b$ where $b$ can be expressed as a $t$-fold sum of vectors of the form in Equation 4. This is exactly the image of the map $Z$ as described by Childs, van Dam, Hung and Shparlinski [CvDHS16], so we have reproved their Theorem 1. The computation of the optimal success probability is now reduced to an algebraic/combinatorial problem which is nontrivial to solve (and is solved in [CvDHS16]). Hence this example serves to show the limitations of our main results: they can be used to translate questions about query complexity into purely algebraic problems which may or may not be easily solvable. The character-theoretic technique shown above could also be used to reduce the query complexity of multivariate polynomial interpolation to a counting problem, as was achieved by Chen, Childs and Hung [CCH18] without referring to characters. So far, though, the character-based language has not led to further progress on this problem.
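The counting problem can be explored by brute force for small parameters. A sketch under simplifying assumptions: a prime field (so $\mathrm{Tr}$ is the identity) with hypothetical parameters $p = 3$, $d = 2$. It enumerates the $t$-fold sums of moment vectors of the form in Equation 4; since identifying $f$ exactly is symmetric oracle discrimination (trivial subgroup), Theorem 7.1 gives the success probability as the number of such sums divided by $|G| = p^{d+1}$:

```python
from itertools import product

p, d = 3, 2          # hypothetical: polynomials of degree <= 2 over F_3 (27 oracles)

# moment vectors (y, yx, ..., yx^d) from Equation 4, including y = 0
moments = [tuple(y * x**i % p for i in range(d + 1))
           for x in range(p) for y in range(p)]

def success(t):
    """|image of the t-fold sum map Z| / p^(d+1): the optimal probability of
    learning f exactly with t queries (abelian symmetric discrimination)."""
    sums = {tuple(sum(v[i] for v in vs) % p for i in range(d + 1))
            for vs in product(moments, repeat=t)}
    return len(sums) / p ** (d + 1)

for t in range(1, 4):
    print(t, success(t))
```

For $t = 1$ the image consists of the zero vector plus the $(p-1) \cdot p$ distinct nonzero moment vectors, giving $7/27$ here; the probability is nondecreasing in $t$ since the zero vector is itself a moment vector.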

Group summation problem
Fix an abelian group $G$. The $k$-element group summation problem is the task of computing the sum $f(1) + \cdots + f(k)$ given access to an evaluation oracle hiding a function $f : \{1, 2, \ldots, k\} \to G$.
This is an instance of coset identification. The oracles form a representation of the group of functions $\mathrm{Fun}([k], G) = \{f : \{1, \ldots, k\} \to G\}$, which we identify with $G^k$. In the quantum version they act on the Hilbert space $V = \mathbb{C}^k \otimes \mathbb{C}G$ via
$$U_f |i, g\rangle = |i, g + f(i)\rangle.$$
The irreducible characters of $\mathrm{Fun}([k], G)$ are all of the form $\chi_1 \times \cdots \times \chi_k : G^k \to \mathbb{C}$, where each $\chi_i \in \mathrm{Irr}(G)$. The Hamming weight of such a character, denoted $\mathrm{wt}(\chi_1 \times \cdots \times \chi_k)$, is the number of components which are nontrivial. The characters appearing in the evaluation representation on $V = \mathbb{C}^k \otimes \mathbb{C}G$ are exactly those with Hamming weight $\leq 1$. This implies that the characters appearing in $V^{\otimes t}$ are those with Hamming weight $\leq t$.
By Zhandry's theorem (Thm. 7.1) the optimal success probability for a $t$-query algorithm is obtained by finding the largest collection of characters in $V^{\otimes t}$ which restrict to the same irrep of $H$. We can describe an element of such a maximal equivalence class: the character $\chi_1 \times \cdots \times \chi_k$ should have at least $k - t$ trivial components (to guarantee its Hamming weight is $\leq t$), at least $k - t$ components equal to some nontrivial character $\psi_1$ (so that $\chi_1 \psi_1^{-1} \times \cdots \times \chi_k \psi_1^{-1}$ also has Hamming weight $\leq t$), another $k - t$ components equal to $\psi_2$, and so on. For instance, we may pick
$$\chi = 1 \times \cdots \times 1 \times \psi_1 \times \cdots \times \psi_1 \times \cdots \times \psi_N \times \cdots \times \psi_N,$$
where the characters $1, \psi_1, \ldots, \psi_N$ are distinct (but otherwise arbitrary), and each one appears at least $k - t$ many times. Then the equivalence class of $\chi$ has size $N + 1$, consisting of the characters $\chi \cdot (\psi^{-1} \times \cdots \times \psi^{-1})$ for $\psi \in \{1, \psi_1, \ldots, \psi_N\}$. The size $N + 1$ of this equivalence class is either $|G|$ (if we can fit every irrep of $G$, which happens iff $\lfloor k/(k-t) \rfloor \geq |G|$) or $\lfloor k/(k-t) \rfloor$. Hence for a $t$-query algorithm
$$P_{\mathrm{opt}} = \min\left(1, \frac{1}{|G|} \left\lfloor \frac{k}{k-t} \right\rfloor\right).$$
This is exactly Thm. 5.1 of Zhandry [Zha15]. An efficient algorithm achieving this success probability had previously been described (for $G$ cyclic) by Meyer and Pommersheim [MP14].
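This count can be cross-checked by directly enumerating equivalence classes for small parameters. A sketch assuming $G = \mathbb{Z}_n$ cyclic, with characters of $G^k$ encoded as tuples of exponents, so that two characters are equivalent exactly when they differ by a constant shift $(\psi, \ldots, \psi)$:

```python
from itertools import product

def summation_formula(k, t, n):
    """min(1, floor(k/(k-t)) / |G|) for G = Z_n."""
    return min(1.0, (k // (k - t)) / n) if t < k else 1.0

def brute_force(k, t, n):
    # Characters of G^k are tuples in Z_n^k; those appearing in V^{(x)t}
    # have Hamming weight <= t.  chi ~ chi' iff chi' = chi + (s, ..., s).
    chars = {c for c in product(range(n), repeat=k)
             if sum(x != 0 for x in c) <= t}
    best = 0
    for c in chars:
        cls = sum(tuple((x + s) % n for x in c) in chars for s in range(n))
        best = max(best, cls)
    return best / n

k, t, n = 4, 2, 3
print(summation_formula(k, t, n), brute_force(k, t, n))
```

For $k = 4$, $t = 2$, $n = 3$ both give $2/3$: a class such as $\{(1,1,0,0),\ (0,0,2,2)\}$ has two weight-$\leq 2$ members, and no class has three.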

The van Dam algorithm
The van Dam learning problem [van98] is concerned with identifying a (total) Boolean function $f : \{1, \ldots, n\} \to \mathbb{Z}_2$ given access to evaluation queries. This is a special case of symmetric oracle discrimination (see Section 4). The group of oracles is isomorphic to $\mathbb{Z}_2^n$ and its irreps can again be written as products $\chi_1 \times \cdots \times \chi_n$ of characters of $\mathbb{Z}_2$. The characters appearing in the $t$-th tensor power of the evaluation oracle representation are exactly those with Hamming weight $\leq t$. Hence the optimal success probability of a $t$-query algorithm is
$$P_{\mathrm{opt}} = \frac{1}{2^n} \left|\{\text{characters of } \mathbb{Z}_2^n \text{ with } \mathrm{wt} \leq t\}\right| = \frac{1}{2^n} \sum_{i=0}^{t} \binom{n}{i},$$
which reproves the optimality of van Dam's algorithm.
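A few sample values of this probability are easy to compute directly (the choice $n = 20$ below is arbitrary):

```python
from math import comb

def van_dam_success(n, t):
    """P_opt = 2^{-n} * sum_{i=0}^{t} C(n, i): the optimal probability of
    learning all n bits of f with t quantum queries."""
    return sum(comb(n, i) for i in range(t + 1)) / 2 ** n

n = 20
for t in (n // 2, n // 2 + 2, n):
    print(t, van_dam_success(n, t))
```

Since the binomial coefficients concentrate around $i = n/2$, the success probability climbs from near $0$ to near $1$ in a window of width $O(\sqrt{n})$ around $t = n/2$, matching van Dam's observation that roughly $n/2 + O(\sqrt{n})$ queries suffice.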