Product Decomposition of Periodic Functions in Quantum Signal Processing

We consider an algorithm to approximate complex-valued periodic functions f(e^{iθ}) as a matrix element of a product of SU(2)-valued functions, which underlies so-called quantum signal processing. We prove that the algorithm runs in time poly(n log(1/ε)) on a Turing machine, where n is the degree of the polynomial that approximates f with accuracy ε; previous efficiency claims assumed a strong arithmetic model of computation and lacked numerical stability analysis.


Introduction
Quantum signal processing [1][2][3] refers to a scheme to construct an operator V from a more elementary unitary W, where V = Σ_θ f(e^{iθ}) |θ⟩⟨θ| and W = Σ_θ e^{iθ} |θ⟩⟨θ| share the eigenvectors but the eigenvalues of V are transformed by a function f from those of W. The transformation requires only one ancilla qubit, and is achieved by implementing control-W and control-W†, interspersed by single-qubit rotations on the control, and final post-selection on the control.¹ This technique produced gate-efficient quantum algorithms for, e.g., Hamiltonian simulation, which are asymptotically optimal when the Hamiltonian is sparse and given as a black box, or as a linear combination of oracular unitaries [5,6]. Furthermore, this technique with rigorous error bounds appears to be useful and competitive even for explicitly described, rather than oracular, local Hamiltonian simulation problems [7,8]. It also promises to be useful in solving linear equations [3,4,9].
However, in quantum signal processing the classical preprocessing to find the interspersing single-qubit rotations for a given transformation function f has been so numerically unstable that it has been unclear whether it can be performed efficiently. In fact, Ref. [7, App. H.3] reports that the computation time is "prohibitive" to obtain sequences of length 30 of interspersing unitaries for the Jacobi-Anger expansions that we explain in Section 5. The true usefulness of quantum signal processing hinges upon the ability to compute long sequences of interspersing single-qubit rotations.
It has been asserted in Refs. [2,4] that there exists a polynomial-time classical algorithm, but these works do not consider numerical instability. If a computational model assumes that any real arithmetic with arbitrarily high precision can be done in unit time, then an unavoidable conclusion is that not only can one factor integers in time that is linear in the number of digits [10], but one can also solve NP-hard problems in polynomial time [11].² In a seemingly mundane problem involving real numbers, the number of required bits during the computation can be a priori very large. For example, it is still an open problem whether one can decide which is larger between Σ_{j=1}^{k} √a_j and Σ_{j=1}^{k} √b_j, which are sums of square roots of positive integers a_j, b_j smaller than n, in time poly(k log n) on a Turing machine; see e.g. [12,13] for recent results.
The numerical instability of previous methods may be attributed to expansions of large-degree polynomials whose factors are determined by the roots of another polynomial. Crudely speaking, there are two problems in this approach. First, the polynomial expansions can be regarded as the computation of convolutions,³ which, when naively iterated, may suffer from numerical instability. Second, although root finding is a well-studied problem, to use the roots to construct another polynomial one has to understand the distribution of the roots to keep track of the loss of precision. These problems were not addressed previously.
Here, we refine a classical algorithm to find interspersing single-qubit rotations and bound the number of bits required in the computation for a desired accuracy of the final answer. We conclude that the classical preprocessing can indeed be done in polynomial time on a Turing machine. We generally adopt the methods of Ref. [2], but make manipulations easier by avoiding trigonometric polynomials. Some generalizations are obtained with simpler calculations. For numerical stability and its analysis, our algorithm avoids too small numbers by sacrificing approximation quality in the initial stage, and replaces polynomial expansions by a Fourier transform. These modifications enable us to handle the problems that are mentioned above. However, it should be made clear that our refinement also requires high precision arithmetic. Specifically, we show that O(N log(N/ε)) bits of precision during the execution of our algorithm suffice to produce a reliable final decomposition, where N is the degree of the polynomial that approximates a given transformation function f of eigenvalues up to additive error ε. Previously no such bound was known. On a sequential (rather than parallel) random access machine, our algorithm runs in time O(N³ polylog(N/ε)). In our rudimentary numerical experiment we were able to produce a sequence of length over 2000 for the Jacobi-Anger expansion on a laptop in a few hours of computation.⁴

² The basic idea in Ref. [11] is to relate the expansion of a Boolean formula in a conjunctive form into a disjunctive form with the expansion of a polynomial from its factors. Then, the satisfiability translates to the positiveness of certain coefficients in the expansion. The variable in the polynomial is set to a sufficiently large number, effectively encoding the entire polynomial in a single big number. An important elementary fact that underlies the power of the arithmetic model is that we only need O(n) compositions of the squaring operation x ↦ x² to reach a 2ⁿ-bit number.

³ A polynomial Σ_j a_j t^j can be identified with a "coefficient function" j ↦ a_j. The coefficient function of the product of two polynomials is the convolution of the two coefficient functions.

⁴ Note added 5 May 2020: This was based on an implementation on Wolfram Mathematica. An implementation in a lower level programming language can be found at https://github.com/microsoft/Quantum-NC/tree/master/src/simulation/qsp with a sample application (https://github.com/microsoft/Quantum-NC/tree/master/samples/simulation/qsp). We have observed that the numerical instability is not harmful for the Jacobi-Anger expansion; the number of bits of precision that we used in a Mathematica implementation was much more than necessary. The reduction in the number of bits of precision and the lower level language implementation have expedited the calculation by orders of magnitude. Empirically the time complexity appeared to be quadratic in N for the Jacobi-Anger expansion.
We will start by reviewing quantum signal processing in the next section, and then develop an algorithm and analyze it. In two short later sections we provide a self-contained treatment of polynomial approximations for Hamiltonian simulation and matrix inversion problems. The section on Hamiltonian simulation contains some running time data of our algorithm. Throughout the paper, we use U(1) to denote the group of all complex numbers of unit modulus. Sometimes we will refer to U(1) as the unit circle. As usual, i = √−1.

Quantum Signal Processing
To understand how the eigenvalue transformation (signal processing) works, it is convenient to consider the action of control-W restricted to an arbitrary but fixed eigenstate |θ⟩ of W. The eigenvalue e^{iθ} associated with |θ⟩ is kicked back to the control qubit to induce a unitary |0⟩⟨0| + e^{iθ}|1⟩⟨1| on the control qubit. Conjugating the control-W by a single-qubit unitary on the control qubit, we see that |0⟩ and |1⟩ can be replaced by any orthonormal basis vectors |0′⟩, |1′⟩ of the control qubit. If we allow ourselves to implement the inverse of the control-W, which is reasonable, we can also implement the analogous unitary where {|0′⟩, |1′⟩} and {|0″⟩, |1″⟩} are arbitrary orthonormal bases.⁵ When we alternate an even number of control-W and control-W†, this trick allows us to assume that an implementable unitary on the control qubit is a product of primitive matrices E_P(t) = tP + t^{−1}Q, where t = e^{iθ/2} and P = I − Q is a projector of rank 1. Thus, an even number n of control-W and control-W†, together with an extra unitary E_0 independent of t, induces

F(t) = E_0 E_{P_1}(t) ⋯ E_{P_n}(t)    (3)

on the control qubit. The product F(t) can be thought of as an SU(2)-valued function over the unit circle in the complex plane. By the same formulas, we define E_P(t) and F(t) on the entire complex plane except the origin. If the matrix element f(e^{iθ}) = ⟨+| F(e^{iθ/2}) |+⟩ equals the desired transformation function, then post-selection on |+⟩ of the control qubit enacts V. Here, the choice of |+⟩ is a convention, as it can be any other state due to E_0. The success probability of the post-selection depends on the magnitude of f(e^{iθ}). A natural question is then what F(t) is achievable in the form of Eq. (3). Note that it makes no difference to insert many unitaries that are independent of t in between the E_{P_j}(t)'s, rather than a single E_0 at the front, because U E_P(t) U† = E_{UPU†}(t) for any unitary U. The answer to the achievability question turns out to be quite simple, as we show in the next section.
3 Polynomial functions U(1) → SU(2)

Definition 1. For any integer n ≥ 0, let P_n be the set of all Laurent polynomials F(t) = Σ_{j=−n}^{n} C_j t^j in t with coefficients C_j in 2-by-2 complex matrices, such that F(η) ∈ SU(2) for all complex numbers η of unit modulus. We say that F(t) ∈ P_n has degree n if C_n ≠ 0 or C_{−n} ≠ 0. We define E_n to be the subset of P_n consisting of all F(t) where the exponents of t in F(t) belong to {−n, −n + 2, −n + 4, …, n − 2, n}. Note that P_0 = E_0 = SU(2), and for any orthogonal projector P we have E_P(t) ∈ E_1. We define F†(t) to be Σ_j C_j† t^j.⁶
Note that for any F (t) ∈ P n , we have F (t)F † (1/t) = F † (1/t)F (t) = I; this is true for every t on the unit circle, an infinite set, and any (Laurent) polynomial is determined by its values on an infinite set.
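As a concrete check, the following sketch verifies numerically that a product of primitive matrices is SU(2)-valued on the unit circle. It assumes the primitive-matrix form E_P(t) = tP + t^{−1}(I − P) with P a rank-one projector; numpy and the specific angles are illustrative only.

```python
import numpy as np

def primitive(P, t):
    """Primitive matrix E_P(t) = t P + t^{-1} (I - P), with P a rank-one projector."""
    return t * P + (1.0 / t) * (np.eye(2) - P)

def equator_projector(phi):
    """Rank-one projector onto (|0> + e^{i phi} |1>)/sqrt(2)."""
    v = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2.0)
    return np.outer(v, v.conj())

# For t on the unit circle, each factor, and hence any product, lies in SU(2).
t = np.exp(0.3j)
F = np.eye(2)
for phi in (0.1, 1.2, -0.7):
    F = F @ primitive(equator_projector(phi), t)

assert np.allclose(F @ F.conj().T, np.eye(2))  # unitary on |t| = 1
assert abs(np.linalg.det(F) - 1.0) < 1e-9      # determinant one
# The exponents of t in this 3-fold product range over {-3, -1, 1, 3}: F is in E_3.
```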

Theorem 2.
Any n-fold product E P 1 (t) · · · E Pn (t) belongs to E n . Conversely, every F (t) ∈ E n of degree n has a unique decomposition into primitive matrices and a unitary, as in Eq. (3).
This completely characterizes polynomial functions U(1) → SU(2) and covers all previous results on "achievable functions" in quantum signal processing [2,4]. Indeed, for any Laurent polynomial function U(1) ∋ z ↦ F(z) ∈ SU(2), the Laurent polynomial function t ↦ F(t²) belongs to E_n for some n and has a unique product decomposition of the theorem. Our version is slightly more general since previous results implicitly assume that Tr(P_j Z) = 0.
Proof. The first statement is trivial by definition. The proof of the converse is by induction in n, where the base case n = 0 is trivial. The induction step is proved as follows. We are going to prove that for any F(t) ∈ E_n of degree n > 0 there exists a unique E_K(t) such that F(t)E_K(t) ∈ E_{n−1}.⁷ Consider F(t) = Σ_{j=−n}^{n} C_j t^j as a 2-by-2 matrix of four Laurent polynomials. The defining property det F(t) = 1 holds for infinitely many values of t, and therefore it should hold as a polynomial equation. Taking the leading term, we have t^{2n} det C_n + (lower order terms) = 1. Similarly, taking the leading term in t^{−1}, we have t^{−2n} det C_{−n} + (higher order terms) = 1. Hence, det C_n = det C_{−n} = 0 whenever n > 0. Similarly, from the equation F(t)F†(1/t) = I, taking the coefficient of t^{2n}, we obtain C_n C_{−n}† = 0. Since the degree of F(t) is n, at least one of C_n and C_{−n} is nonzero. Suppose C_n ≠ 0. Let K be a rank-1 projector such that C_n K = 0, and let L = I − K. Such K is unique since C_n is a two-by-two matrix of rank one; the singular value decomposition of C_n is |a⟩⟨b| for some unnormalized vectors |a⟩, |b⟩, and K has to annihilate |b⟩. Then, we claim that F(t)(tK + t^{−1}L) ∈ E_{n−1}. Indeed, expanding the left-hand side we have

F(t)(tK + t^{−1}L) = C_n K t^{n+1} + C_{−n} L t^{−n−1} + Σ_{j=−n+1}^{n−1} (C_{j−1}K + C_{j+1}L) t^j,

where the exponents in the sum have the parity of n − 1. (This is the only place we use E instead of P.) The top term vanishes by the choice of K. If C_{−n} = 0, this already implies the claim. If C_{−n} ≠ 0, then, considering the singular value decomposition of C_{−n}, we have K ∝ C_{−n}† C_{−n} and therefore C_{−n} L = 0, implying the claim. The case C_{−n} ≠ 0 is completely parallel.
Actually, for any F(t) ∈ E_n of degree n, C_n ≠ 0 if and only if C_{−n} ≠ 0: C_n is a product of n rank-one operators, E_0 P_1 ⋯ P_n = E_0 |p_1⟩⟨p_1|p_2⟩ ⋯ ⟨p_{n−1}|p_n⟩⟨p_n|, which is nonzero exactly when ⟨p_j|p_{j+1}⟩ ≠ 0 for all j; this implies Q_j Q_{j+1} = (I − P_j)(I − P_{j+1}) ≠ 0 for all j, which is to say that C_{−n} = E_0 Q_1 ⋯ Q_n ≠ 0.
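The induction in the proof is effectively an algorithm: find the unique projector K annihilating the leading coefficient from the right, multiply by E_K(t) = tK + t^{−1}(I − K), and repeat. A minimal numerical sketch under that reading follows; the random projectors and helper names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projector():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def multiply_primitive(C, P):
    """Coefficient list of F(t) (t P + t^{-1} Q), Q = I - P; C[j] <-> t^{j - deg}."""
    Q = np.eye(2) - P
    out = [np.zeros((2, 2), dtype=complex) for _ in range(len(C) + 2)]
    for j, Cj in enumerate(C):
        out[j + 2] = out[j + 2] + Cj @ P   # exponent shifted up by 1
        out[j] = out[j] + Cj @ Q           # exponent shifted down by 1
    return out

# Build F(t) = E_0 E_{P_1}(t) ... E_{P_n}(t) as a coefficient list.
n = 4
projs = [random_projector() for _ in range(n)]
E0 = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]
E0 = E0 / np.sqrt(np.linalg.det(E0))       # normalize to determinant 1
C = [E0]
for P in projs:
    C = multiply_primitive(C, P)

# Peel the factors back off, as in the proof of Theorem 2.
recovered = []
for _ in range(n):
    top = C[-1]                            # leading coefficient, rank one
    b = np.linalg.svd(top)[2][0].conj()    # top ~ |a><b|
    bperp = np.array([-b[1].conj(), b[0].conj()])
    K = np.outer(bperp, bperp.conj())      # unique projector with (top) K = 0
    recovered.append(np.eye(2) - K)        # the peeled projector is P_m = I - K
    C = multiply_primitive(C, K)[2:-2]     # multiply by E_K(t); extremes vanish

for P, P_rec in zip(projs, reversed(recovered)):
    assert np.allclose(P, P_rec, atol=1e-8)
assert np.allclose(C[0], E0, atol=1e-8)    # the residual unitary is E_0
```

The uniqueness asserted by the theorem is what makes the recovered projectors match the ones used in the forward construction.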

Parity constraints
Any member of SU(2) can be written as aI + biX + ciY + diZ where the real numbers a, b, c, d satisfy a² + b² + c² + d² = 1, and this decomposition is unique. (The group SU(2) is identified with the group of all unit quaternions.) Thus, a member F(z) ∈ P_n can be written uniquely as

F(z) = a(z)I + b(z)iX + c(z)iY + d(z)iZ,

where a(z), b(z), c(z), d(z) are Laurent polynomials with complex coefficients, and each takes real values on U(1).
Recall that under the standard representation of Pauli matrices, Z is diagonal, and X, Y are off-diagonal. Suppose an SU(2)-valued function θ ↦ F(e^{iθ}) = a(e^{iθ})I + b(e^{iθ})iX + c(e^{iθ})iY + d(e^{iθ})iZ has even functions (reciprocal in t = e^{iθ}) on the diagonal and odd (anti-reciprocal) functions off the diagonal; that is, a(t) = a(1/t), d(t) = d(1/t), b(t) = −b(1/t), and c(t) = −c(1/t). We claim that if F(t) is such an element of E_n, then the primitive matrix E_{P_n}(t) factored from F(t) by Theorem 2 has the property that Tr(ZP_n) = 0. Since any rank-one projector can be written P = (1/2)(I + p_x X + p_y Y + p_z Z) with p_x² + p_y² + p_z² = 1, the condition Tr(ZP) = 0 is to say p_z = 0. This implies that E_{P_n}(t) has reciprocal diagonal and anti-reciprocal off-diagonal. To prove the claim we observe that the assumed symmetry means F(1/t) = ZF(t)Z, i.e., C_{−j} = ZC_jZ for all j, and hence F†(t) = ZF(t)^{−1}Z. It follows that 0 = Tr(Z) = Tr(F(t)ZF†(t)) = t^{2n} Tr(C_n Z C_n†) + ⋯ + t^{−2n} Tr(C_{−n} Z C_{−n}†) as a polynomial in t, and hence Tr(C_n† C_n Z) = 0 = Tr(C_{−n}† C_{−n} Z) when n > 0. The matrix C_{±n}† C_{±n} is proportional to the projector P_n or I − P_n in the factor E_{P_n}(t) as shown in Theorem 2, and therefore we have Tr(ZP_n) = 0.
Moreover, it then follows that F(t)E_{P_n}†(1/t) also has reciprocal diagonal and anti-reciprocal off-diagonals. Therefore, the unique projectors P_1, …, P_n in the decomposition, which define the primitive matrices E_{P_j}(t), have zero Z-component. This means that all projectors P_j are of the form P_j = (1/2)(I + cos(φ_j)X + sin(φ_j)Y), where φ_j ∈ ℝ is some angle. In fact, Ref. [2] exclusively considered E_P(t) of this form. This is contrasted with the general case where P_j is identified with a point on the Bloch sphere. The constraint Tr(ZP_j) = 0 forces P_j to lie on the equator of the Bloch sphere.
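A quick numerical confirmation of the equator constraint, assuming the parametrization P = (1/2)(I + cos φ X + sin φ Y) and the primitive-matrix form E_P(t) = tP + t^{−1}(I − P):

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)

def equator_projector(phi):
    """P = (I + cos(phi) X + sin(phi) Y) / 2, a point on the Bloch equator."""
    return 0.5 * np.array([[1.0, np.exp(-1j * phi)],
                           [np.exp(1j * phi), 1.0]])

P = equator_projector(0.8)
assert np.allclose(P @ P, P) and np.allclose(P, P.conj().T)  # projector
assert abs(np.trace(Z @ P)) < 1e-12                          # zero Z-component

# E_P(t) then has reciprocal diagonal and anti-reciprocal off-diagonal.
def E(P, t):
    return t * P + (1.0 / t) * (np.eye(2) - P)

t = np.exp(0.4j)
A, B = E(P, t), E(P, 1.0 / t)
assert np.allclose(np.diag(A), np.diag(B))                                # even in t
assert np.allclose(A - np.diag(np.diag(A)), -(B - np.diag(np.diag(B))))   # odd in t
```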
Note that E_0, the residual SU(2) factor in the decomposition, has generally nonzero Z-component. Since E_P(1) = I for any projector P, we know E_0 = F(1). Hence, there would be a quadratic loss in accuracy in omitting E_0 (i.e., setting φ_0 = 0), even though a(1) ≈ 1 suggests that one would not need E_0.

Complementing polynomials
Quantum signal processing does not use F (t) itself, but rather a certain matrix element of it. Hence, it is important to know what a matrix element can be. Let us first introduce classes of Laurent polynomials.

Definition 3.
A (Laurent) polynomial with real coefficients is called a real (Laurent) polynomial. The degree of a Laurent polynomial is the maximum absolute value of an exponent of the variable whose coefficient is nonzero. The term "pure" is because a Laurent polynomial f(z) with complex coefficients is real-on-circle if and only if both its reciprocal part and its anti-reciprocal part divided by i are real Laurent polynomials. (Proof: Write f(e^{iθ}) = Σ_j a_j e^{ijθ}, with the complex conjugate being Σ_j ā_j e^{−ijθ} = Σ_j ā_{−j} e^{ijθ}. Thus, a_j = ā_{−j}, and the claim follows.) This simply rephrases the fact that a real-valued function θ ↦ f(e^{iθ}) has a trigonometric series with real coefficients. Hence, a real-on-circle Laurent polynomial is real if and only if it is reciprocal. In addition, a real and reciprocal Laurent polynomial is real-on-circle. That is, among the three properties, real, real-on-circle, and reciprocal, any two imply the third. Note that a real-on-circle Laurent polynomial is not necessarily real, and a real Laurent polynomial is not necessarily real-on-circle. Also, note that real-on-circle Laurent polynomials form an algebra over the real numbers.
We are now ready to state a sufficient condition under which a complex polynomial qualifies to be a matrix element of some F (t) ∈ P n . We think of a(z) and b(z) below as the real and imaginary parts of a complex function, respectively. A reader might want to compare the following lemma with the first paragraph of Section 3.1.

Lemma 4.
Let a(z) and b(z) be real-on-circle Laurent polynomials of degree at most n, each of which is either reciprocal or anti-reciprocal, such that a(η)² + b(η)² < 1 for all η ∈ U(1). Then there exist real-on-circle Laurent polynomials c(z) and d(z) of degree at most n such that a(η)² + b(η)² + c(η)² + d(η)² = 1 for all η ∈ U(1).

The conditions in the lemma on reciprocity are due to a technical reason in the proof. If there were some other reason under which one can guarantee the existence of the complementing polynomials, it would be possible to use them in the algorithm below, and the scope of the input functions in our algorithm would be enlarged; the existence of the complementing polynomials is more important than the reciprocity constraints. However, we note that the reciprocity conditions are not severe restrictions since any periodic function is the sum of an even and an odd function, and one can "combine" two functions by "flexible" quantum signal processing [14].

Proof. The Laurent polynomial 1 − a(z)² − b(z)² is reciprocal real of degree n′ that is at most 2n; the leading terms of a(z)² and b(z)² might cancel each other so that n′ < 2n. Due to the reciprocity, there are 2n′ roots in total with multiplicity taken into account, and any root must come in a pair (r, 1/r) where one is inside the unit disk and the other outside the unit disk, but neither is on the unit circle. We collect all the roots inside the unit disk:

D = [r ∈ ℂ : 1 − a(r)² − b(r)² = 0, |r| < 1].    (12)

This is a list rather than a set as we take the multiplicities into account; D has exactly n′ elements. Consider a factor of 1 − a(z)² − b(z)²:

e(z) = z^{−⌊n′/2⌋} Π_{r∈D} (z − r).    (13)

The monomial in front of the product is to balance the greatest exponent of z with the least exponent; the degree of e(z) is ⌈n′/2⌉. The list D is closed under complex conjugation due to the reality of 1 − a(z)² − b(z)², and hence e(z) is a real Laurent polynomial. Then, the product e(z)e(1/z) is real reciprocal and has degree n′.⁸ Now the two Laurent polynomials e(z)e(1/z) and 1 − a(z)² − b(z)² have the same roots. Therefore they differ by a factor of cz^k for some nonzero number c and an integer k, but the reciprocity fixes k = 0 and the reality puts c into ℝ. That is,

1 − a(z)² − b(z)² = α e(z) e(1/z)

for some real α. Evaluating this expression at z = 1, we see that α is positive. Thus, we finish the proof by observing that

c(z) = √α (e(z) + e(1/z))/2,   d(z) = √α (e(z) − e(1/z))/(2i)    (14)

are real-on-circle, have degree at most n, and satisfy c(z)² + d(z)² = α e(z) e(1/z) = 1 − a(z)² − b(z)².

⁸ The degree of a Laurent polynomial in our definition is only subadditive under multiplication of two Laurent polynomials. For example, the product (z − 1)(z^{−1} − 2) has degree one.
Note that given a(z) and b(z), the complementing Laurent polynomials c(z), d(z) are not unique in general. As long as the concatenation of a conjugate-closed list D and its reciprocal [1/r ∈ ℂ : r ∈ D] is the list of all the roots of 1 − a(z)² − b(z)², we can construct c(z) and d(z) satisfying the conditions in the lemma.
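The construction in the proof can be exercised numerically: find the roots of 1 − a(z)² − b(z)² inside the unit disk, form e(z) in factored form, fix α at z = 1, and read off c and d as real and imaginary parts on the circle. A toy sketch with illustrative low-degree inputs a = 0.3 cos θ (reciprocal) and b = 0.4 sin θ (anti-reciprocal); numpy's root finder stands in for the algorithm of Ref. [15].

```python
import numpy as np

# Coefficients over exponents -1..1 (index k corresponds to exponent k - 1).
a = np.array([0.15, 0.0, 0.15])     # a(e^{i theta}) = 0.3 cos(theta)
b = np.array([0.2j, 0.0, -0.2j])    # b(e^{i theta}) = 0.4 sin(theta)

# p(z) = 1 - a(z)^2 - b(z)^2, coefficients over exponents -2..2.
p = -np.convolve(a, a) - np.convolve(b, b)
p[2] += 1.0

# Multiply by z^2 and find the roots; np.roots expects descending powers.
roots = np.roots(p[::-1])
D = [r for r in roots if abs(r) < 1.0]
assert len(D) == len(roots) // 2    # roots pair up as (r, 1/r)

def e_val(z):
    """e(z) = z^{-floor(n'/2)} prod_{r in D} (z - r), kept in factored form."""
    out = z ** (-(len(D) // 2))
    for r in D:
        out *= (z - r)
    return out

def p_val(z):
    return sum(c * z ** (k - 2) for k, c in enumerate(p))

alpha = (p_val(1.0) / (e_val(1.0) ** 2)).real
assert alpha > 0

# On the unit circle, c + i d = sqrt(alpha) e(z) completes the square sum to 1.
for theta in np.linspace(0.0, 2 * np.pi, 7):
    z = np.exp(1j * theta)
    aa, bb = 0.3 * np.cos(theta), 0.4 * np.sin(theta)
    cd = np.sqrt(alpha) * e_val(z)
    c, d = cd.real, cd.imag
    assert abs(aa**2 + bb**2 + c**2 + d**2 - 1.0) < 1e-9
```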

Efficient implementation with bounded precision
In this section we consider an algorithm to find interspersing single-qubit unitaries given a complex function A(e iϕ )+iB(e iϕ ). The algorithm consists of two main parts: first, we have to find an SU (2)-valued function of ϕ such that a particular matrix element is the input function. It suffices to find a good approximation. Second, we have to decompose the SU (2)-valued function into a product of primitive matrices. We have already given constructive proofs for both the steps, but we tailor the construction so that numerical error is reduced and traceable. We will outline our algorithm first, deferring certain details to the next subsection. The computational complexity will be analyzed subsequently.
Input: A real parameter ε ∈ (0, 1/100), a list of 2N + 1 complex numbers ζ_k (k = −N, …, N) specified using at most log₂(100N/ε) bits in the floating point representation, and two bits p_re and p_im.
Here, the list is the Fourier coefficients ζ_k for frequencies between −N and N of a complex-valued 2π-periodic function A(e^{iϕ}) + iB(e^{iϕ}) = Σ_{k=−N}^{N} ζ_k e^{ikϕ}, subject to the conditions that (i) each of the real-valued functions A(e^{iϕ}) and B(e^{iϕ}) has definite parity (even or odd as a function of ϕ), recorded in the two bits p_re, p_im, and (ii) A(e^{iϕ})² + B(e^{iϕ})² ≤ 1 for any real ϕ.
The function ϕ → A(e iϕ ) + iB(e iϕ ) must be sufficiently close to an ultimate target function, where the latter is, strictly speaking, not a part of the input for us. This approximation has nothing to do with the algorithm below, but should be analyzed independently for each quantum signal processing problem.
Output: A unitary E_0 ∈ SU(2) and rank-one projectors P_1, …, P_{2n} for some n ≤ N such that ⟨+| E_0 E_{P_1}(t) ⋯ E_{P_{2n}}(t) |+⟩ with t = e^{iϕ/2} is ε-close to A(e^{iϕ}) + iB(e^{iϕ}). Here, each P_m is an (approximate) rank-one projector represented by Ω(log(N/ε)) bits of precision and defines the primitive matrix E_{P_m}(t) = tP_m + t^{−1}(I − P_m).

Time: The computational time complexity is O(N³ polylog(N/ε)) on a random-access memory machine.
Alternatively, an input may be a list of function values from which Fourier coefficients can be computed. Having Fourier coefficients for frequencies between −N and N is equivalent to having a Laurent polynomial A(z) + iB(z) of degree at most N .
Our Lemma 4 would allow more general input functions where the real and imaginary parts are not necessarily of definite parity, but we restrict our algorithm to inputs of definite parity, since we require A² + B² to be of definite parity, which may not hold after the rational approximation in Step 1 below if the individual components are not of definite parity. As mentioned before, this parity constraint is not too restrictive since quantum signal processing can be easily adapted to handle functions of indefinite parity [14].

Algorithm
The algorithm proceeds in six steps, which are detailed and analyzed individually in the next subsection: (1) rationally approximate the input by a Laurent polynomial a(z) + ib(z) with exact dyadic coefficients; (2) find all the roots of 1 − a(z)² − b(z)²; (3) evaluate e(z) of Eq. (13) on a grid of D-th roots of unity, where D is a power of 2 that is larger than 2n + 1, and read off the function values of c(z) and d(z); (4) recover the coefficients of the matrix-valued Laurent polynomial by the fast Fourier transform; (5) peel off the primitive matrices one by one as in Theorem 2; (6) output E_0 and the projectors P_m. One should not expand c(z), d(z) before evaluation, but should substitute numerical values for z with accuracy 2^{−R} into the factorized form of e(z) in Eq. (13), and then read off the real (c(z)) and imaginary (d(z)) parts. In Step 5, the coefficient list is updated as C^{(m−1)}_k = C^{(m)}_{k−1}(I − P_m) + C^{(m)}_{k+1} P_m, where k = −m + 1, −m + 3, …, m − 3, m − 1.

Further details and analysis
Step 1. Let the rational approximation be performed by truncating the binary expressions of real numbers. To inherit parities, the rational approximation should be done only for terms with nonnegative powers of z, from which we should infer the negative power terms. Then, a(z) and b(z) satisfy the conditions required in the subsequent steps. The reason we speak of rational Laurent polynomials is mainly for the convenience of analysis, as their evaluation can be made arbitrarily accurate since the coefficients of a(z) and b(z) are exact; each coefficient is stored as a rational number, a pair of integers, rather than a floating point number. In a concrete implementation of our algorithm, carefully handled floating point numbers may substitute for rational numbers. If μ = f × 2^d is a real number where 1/2 ≤ f < 1 and d ∈ ℤ, then the rational approximation of μ may be μ̃ = 0.b_1b_2⋯b_p × 2^d for some p, where the b_j are bits in the binary representation of f. Some care may be needed in order not to discard any bits of μ̃ in arithmetic. For example, when μ̃ is added to another floating point number of p′ > p bits of precision, then μ̃ should be mapped to a floating point number of whatever needed bits of precision by padding zeros. In practice, this should cause hardly any complication since most high precision arithmetic libraries (e.g., the GNU Multiple Precision Arithmetic Library) treat inputs as exact numbers, but control the precision of the result according to other rules set by a user. The number of bits to represent all the rational coefficients is O(N log(N/ε)).
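A minimal sketch of the dyadic truncation with parity inheritance described above; the coefficient values are illustrative, and Python's Fraction stands in for exact rational arithmetic.

```python
from fractions import Fraction

def truncate_to_dyadic(x, bits):
    """Truncate the binary expansion of x to a dyadic rational p / 2^bits."""
    scaled = int(x * (1 << bits))       # truncation toward zero
    return Fraction(scaled, 1 << bits)

# Truncate only the nonnegative-power coefficients, then mirror to inherit parity.
coeffs = {0: 0.123456789, 1: -0.05, 2: 0.007}   # exponents 0..2 of a reciprocal a(z)
bits = 20
approx = {k: truncate_to_dyadic(v, bits) for k, v in coeffs.items()}
for k in (1, 2):
    approx[-k] = approx[k]   # reciprocal: coefficient of z^{-k} equals that of z^k

assert approx[-2] == approx[2]                            # parity is exact
assert abs(float(approx[0]) - coeffs[0]) <= 2 ** -bits    # truncation error bound
```

Because the denominators are powers of two, all subsequent arithmetic on these coefficients can be carried out exactly, as the analysis of Step 2 exploits.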
Step 2. There exists a root-finding algorithm with computational complexity Õ(n³ + n²R) under the assumption that all the roots have modulus at most 1 [15]. In our case, the rational Laurent polynomial p(z) = 1 − a(z)² − b(z)² does not satisfy the modulus condition; however, this is a minor problem. Every coefficient of p(z) is a Fourier coefficient of the periodic function θ ↦ p(e^{iθ}) ∈ [0, 1], and hence is bounded by 1. By the condition (iv) of Step 1, the leading coefficient of p(z) is Ω((ε/N)²) in magnitude. (The reason is as follows. Since our rational approximation is by truncating binary expressions of real numbers, the denominator of any coefficient is a power of 2 and is at most 2^{⌈log₂(N/ε)⌉}. Hence, 4^{⌈log₂(N/ε)⌉} p(z) has integer coefficients, so any nonzero coefficient of p(z) has magnitude at least 4^{−⌈log₂(N/ε)⌉}.) Say the polynomial p(z) = 1 − a(z)² − b(z)² has degree n′, which may be less than 2n even if a(z) and b(z) have degree n. Converting p(z) into a monic polynomial q(z) (after multiplying by z^{n′} and a normalization factor), we have q(z) = z^{2n′} + Σ_{j=0}^{2n′−1} q_j z^j with |q_j| ≤ O((N/ε)²). Note that q(z) has (exactly represented) rational coefficients. If q(z_0) = 0 with |z_0| > 1, then

|z_0|^{2n′} ≤ Σ_{j=0}^{2n′−1} |q_j| |z_0|^j ≤ 2n′ (max_j |q_j|) |z_0|^{2n′−1},

implying |z_0| ≤ 2n′ max_j |q_j| ≤ O(N³/ε²). (If |z_0| ≤ 1, this is trivial.) We can use the algorithm of Ref. [15] after rescaling z by a known factor. The overhead due to the potential loss of precision from the rescaling is negligible since R = Ω(N log(N/ε)).
Step 3. We need to evaluate e(z) of Eq. (13) that is defined by the roots found in Step 2. For z ∈ U(1), the following Lemma 5 guarantees that any root of 1 − a(z)² − b(z)² is bounded away from the unit circle. In particular, with the prescribed accuracy 2^{−R} there is no numerical ambiguity in determining whether a root is inside the unit disk. That is, the list D of Eq. (12) can be obviously computed under our bounded precision arithmetic. Let us analyze the evaluation accuracy of e(z) for z ∈ U(1) more closely. When evaluating a linear factor z − r of e(z), where both z and r are accurate up to additive error 2^{−R}, the number of lost significant bits is O(log(N/ε)), which is negligible compared to R. The function value of e(z) is thus evaluated accurately up to relative error 2^{−R+O(log(N/ε))}. Hence, the function values of c(z) (the real part of e(z)√α) and d(z) (the imaginary part of e(z)√α) by Eq. (14) are determined up to additive error 2^{−R+O(log(N/ε))}.
More concretely but still loosely, let us assume 24N² 2^{−R} ε^{−1} ≤ 1/(16N). The relative error of a linear factor z − r is at most 2 · 2^{−R} (4N²/ε). The factor of e(z) for the complex roots can be evaluated to relative error at most 3 · 2 · 2^{−R} (4N²/ε). There are at most N factors in e(z), so the value of e(z) is determined up to relative error δ = (1 + 24N² 2^{−R} ε^{−1})^N − 1 ≤ 48N³ 2^{−R} ε^{−1}.

Lemma 5. Let f(z) = Σ_{j=−d}^{d} a_j z^j be a Laurent polynomial of degree d ≥ 1 such that |f(u)| ≤ M for every u ∈ U(1). Then for every u ∈ U(1), any root z_0 of f satisfies |z_0 − u| ≥ min(1/(2d), |f(u)|/(4d²M)).

Proof. Pick any root z_0 and choose the closest point u ∈ U(1), so that f(z_0) = 0 with z_0 = u + η. We will lower bound the magnitude of η. If |η| ≥ 1/(2d), then there is nothing to prove, so we assume |η| < 1/(2d). Since f is analytic except at z = 0, the Taylor series of f at u converges at z_0 = u + η.
Let us estimate the magnitude of the derivatives at u ∈ U(1). Since the coefficients a_j of the polynomial f(u) = Σ_{j=−d}^{d} a_j u^j are the Fourier coefficients, meaning that a_j is a "weighted" average of the function values on the unit circle, we know |a_j| ≤ M. Thus,

|f^{(k)}(u)| ≤ 2dM · d(d+1)⋯(d+k−1),    (21)

as there are 2d terms (or one more if k = 0, in which case the inequality is true anyway) and the maximum absolute value of the exponent increases by at most 1 every time we differentiate, due to the negative degree terms. Therefore,

|f(u)| = |f(u) − f(z_0)| ≤ Σ_{k≥1} |f^{(k)}(u)| |η|^k / k! ≤ 2dM ((1 − |η|)^{−d} − 1) ≤ 4d²M |η|,

where the first inequality uses the triangle inequality after rearranging the Taylor series with Eq. (21), and in the last inequality we use the assumption |η| < 1/(2d) together with the fact that (1 − x)^{−d} − 1 is a convex increasing function of positive x and valued at most 1 at x = 1/(2d) for d ≥ 1. This gives the claimed lower bound on |η|.
Step 4. This is essentially expanding the polynomials c(z) and d(z) found in the root finding step, but we use the fast Fourier transform (FFT) for its better accuracy. It has been shown [16] that the FFT on a k-component input F, where each component (assumed to be a complex number in [16]) is accurate to relative error δ, gives Fourier coefficients F̂_ω with error

O(δ log k) · (k^{−1} Σ_ℓ |F(e^{2πiℓ/k})|²)^{1/2},    (24)

where F̂_ω = k^{−1} Σ_{ℓ=1}^{k} e^{−2πiℓω/k} F(e^{2πiℓ/k}) is the true Fourier spectrum. (Note the normalization factor k^{−1} here.) In our case, F consists of 2-by-2 matrices, and Eq. (24) also holds with Frobenius or operator norms in place of absolute values. Since the input "vector" F in this Step is a list of unitary matrices, the root-mean-square factor is O(1), and the distinction between relative and absolute error is immaterial. By the analysis of Step 3 above, δ is 2^{−R+O(log(N/ε))}. Thus, the (additive) error δ_{2n} in any Fourier coefficient is at most 2^{−R+O(log(N/ε))}. The error here is the operator norm of the difference between the computed and the true C^{(2n)}_{2j}.

Crude and concrete bounds can be obtained more directly (without using Eq. (24)): The coefficients of a(z) and b(z) are exactly known. Those of c(z) and d(z) are computed from the value table of e(z) from the previous Step, which has entry-wise additive error δ ≤ 48N³ 2^{−R} ε^{−1}. The (slow) discrete Fourier transform on a k-component vector v is the matrix multiplication by V = k^{−1/2} U for a k × k unitary U. (Hence, ‖V‖ ≤ k^{−1/2}.) If we compute the input-independent trigonometric factors in the FFT, often called the twiddle factors, accurately up to additive error 2^{−R}, then k^{−1/2} U is accurate up to additive error δ_FFT ≤ k^{1/2} 2^{−R} in operator norm. (This bound is loose compared to that in the previous paragraph, since in this paragraph we do not consider the fast Fourier transform, which is numerically more stable.) Since k ≤ 2N + 1, the conversion from ℓ∞-norm to ℓ₂-norm incurs a factor of at most √(2N+1).
Hence, the additive error δ_{2n} in C^{(2n)}_{2j} for any j is at most

δ_FFT + √(2N+1) · δ ≤ 2^{−R+O(log(N/ε))}.    (25)

Step 5. This is an implementation of Theorem 2. Thanks to the condition (iv) of Step 1, we know a lower bound, polynomial in ε/N, on the norms of the leading coefficients C^{(2n)}_{±2n}. Put F(t²) = E_0 E_1(t) ⋯ E_{2n}(t). Then, for any m = 1, 2, …, 2n, we observe that C^{(m)}_{±m} is the product E_0 P_1 ⋯ P_m or E_0 Q_1 ⋯ Q_m, respectively. In particular, by the submultiplicative rule of the operator norm we have ‖C^{(m)}_{±m}‖ ≥ ‖C^{(m+1)}_{±(m+1)}‖; that is, the leading coefficients never become smaller in norm through the loop over m. Let δ_m be the maximum additive error of C^{(m)}_k for any k, and assume δ_m ≤ ε/(2N). For brevity, let C = C^{(m)}_{±m}, and let C̃ be the approximate C due to numerical error. Then, the projector computed from C̃ deviates from the true one in proportion to δ_m divided by the least nonzero singular value of C. Hence, we see that the additive error β_m in P^{(m)} (or Q^{(m)}) is

β_m ≤ δ_m · poly(N/ε).    (34)

In turn, assuming β_m ≤ 1, the coefficient list after one iteration of the loop satisfies δ_{m−1} ≤ δ_m · poly(N/ε). Combining Eqs. (25) and (34) we conclude that β_m ≤ 2^{−R+O(N log(N/ε))} for every m.

Step 6. Now we have an approximate E_0 up to additive error β_0 and P_m up to β_m (m = 1, …, 2n). The evaluation of E_m(t) is then accurate up to 2β_m for t ∈ U(1), and that of a(t²) + ib(t²) = ⟨+| E_0 E_1(t) ⋯ E_{2n}(t) |+⟩ is accurate up to (suppressing t for brevity)

β_0 + Σ_{m=1}^{2n} 2β_m.

We want this to be smaller than ε. A sufficient condition is thus

R = Θ(N log(N/ε))    (37)

with a sufficiently large implied constant. All the (ad hoc) assumptions in the course of the error estimation above are satisfied by this choice of R.
The correctness of the algorithm is clear from the construction and error analysis above. The final error guarantee is from the condition (ii) of Step 1 and Eq. (37).
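Step 4 above recovers Laurent coefficients by an FFT of a value table on a power-of-2 grid. This can be sketched as follows, with a scalar toy in place of the 2-by-2 matrix-valued case; the grid size and degree are illustrative.

```python
import numpy as np

# A Laurent polynomial f(z) of degree n, coefficients over exponents -n..n.
n = 3
rng = np.random.default_rng(1)
coeffs = rng.normal(size=2 * n + 1) + 1j * rng.normal(size=2 * n + 1)

D = 16                       # power of 2, larger than 2n + 1
w = np.exp(2j * np.pi * np.arange(D) / D)
values = np.array([sum(c * z ** (k - n) for k, c in enumerate(coeffs)) for z in w])

# The normalized FFT of the value table recovers the coefficients,
# with negative frequencies aliased to indices k mod D.
hat = np.fft.fft(values) / D
recovered = np.array([hat[k % D] for k in range(-n, n + 1)])
assert np.allclose(recovered, coeffs)
```

Since n < D/2, no two exponents collide modulo D, so the aliasing is harmless; this is why D must exceed 2n + 1.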

Computational complexity
All arithmetic in the algorithm above operates with at most R-bit numbers, where R is chosen to be Θ(N log(N/ε)). The number of elementary bit operations (AND, OR, NOT) to perform one basic arithmetic operation (+, −, ×, /) on u-bit numbers is upper bounded by O(u polylog(u)) [17].⁹ Let us count the number of arithmetic operations.
Step 1: Assuming that the coefficients of the true functions A(z) and B(z) are given to accuracy O(ε/N), it takes O(N log(N/ε)) arithmetic operations to find rational approximations. There could be O(polylog(N/ε)) additional cost in full complexity in simplifying rational numbers by Euclid's algorithm.
Step 2: The root finding takes time O(N³ + N²R) ≤ O(N³ log(N/ε)) [15]; this count includes all the cost of bit operations for high precision arithmetic.
Steps 3 and 4: Evaluating e(z) in its factorized form at the D = O(N) grid points takes O(N²) arithmetic operations, and the FFT adds O(N log N) more. Step 5: Updating the Fourier coefficient list C^{(m)} takes O(m) arithmetic operations per iteration of the loop over m, hence O(N²) in total. Overall, the computational complexity is O(N³ polylog(N/ε)), under the random-access memory model of computation.
Practically, it may be useful to run the algorithm with arithmetic precision of, say, R_0 = 64 bits initially, test the final decomposition on O(N)-th roots of unity (which takes only O(N²) operations with O(log(N/ε))-bit arithmetic), and repeat the whole process under an exponential schedule R_{r+1} = 2R_r on the number of bits of precision in the arithmetic, until the test reveals that the answer is acceptable. In this way the arithmetic uses no more than twice the number of bits of precision that is actually needed to handle the numerical instability of our algorithm for a given input, without knowing a tailored number beforehand. For the upper bound R = O(N log(N/ε)) in the worst case, there will be at most r_max = O(log N + log log(N/ε)) rounds, and the overall time complexity is a constant multiple of the last round's due to the exponential schedule.
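The exponential precision schedule can be sketched with a toy stand-in for the decompose-then-test loop; here a Decimal square root substitutes for the actual decomposition, and the acceptance test and constants are ours.

```python
from decimal import Decimal, getcontext

def attempt(bits):
    """Stand-in for running the whole decomposition with `bits` of precision."""
    getcontext().prec = max(1, bits * 3 // 10)   # ~ bits of binary precision, in digits
    return Decimal(2).sqrt()

def acceptable(x, eps):
    """Stand-in for the final test: re-evaluate and compare against the target."""
    return abs(float(x) * float(x) - 2.0) < eps

bits, eps = 8, 1e-12
while not acceptable(attempt(bits), eps):
    bits *= 2            # exponential schedule R_{r+1} = 2 R_r

assert acceptable(attempt(bits), eps)
assert bits <= 1024      # terminates after a few doublings for this toy target
```

The total cost is dominated by the last round, exactly as the paper argues for the worst-case bound R = O(N log(N/ε)).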

Application to Hamiltonian simulation
In this section, we review and redo existing analysis [1,19], complemented with a minor modification to our algorithm exploiting a certain structure of the eigenvalue transformation function.
Suppose we are given a unitary W whose eigenvalues ∓e^{±iθ_λ} are associated with the eigenvalues λ of a Hermitian matrix H = Σ_λ λ |λ⟩⟨λ| with ‖H‖ ≤ 1, via the relation sin θ_λ = λ of Eq. (39). The correspondence between W and H might seem contrived at this stage, but when H is represented as a linear combination of unitaries [5,6], it is possible to construct such W as a quantum circuit [3]. The relation of Eq. (39) is in fact common whenever a quantum walk is used [20,21]. (See Appendix A for some detail.) So, the desired transformation is the eigenvalue map ∓e^{±iθ_λ} → e^{−iτ sin θ_λ}, where F(t) should be constructed by quantum signal processing [1]. This will implement e^{−iτH}. Since the product of n primitive matrices yields a Fourier component of frequency at most n/2, F(t) must consist of at least 2τ factors. (The factor of 2 is due to the half-angle in the argument of F.) Note that the success probability of the post-selection is close to 1 since |f(e^{iθ_λ})| = 1.
With z = e^{iϕ}, we write exp(iτ sin ϕ) = Σ_{k∈Z} J_k(τ) z^k, where J_k are the Bessel functions of the first kind; one can take Eq. (41) as a definition of the Bessel functions. This is the Fourier series of ϕ → exp(iτ sin ϕ). The substitution τ → −τ and z → −z, together with the uniqueness of the Fourier series, implies J_k(−τ) = (−1)^k J_k(τ). Similarly, the substitution z → −1/z implies J_{−k}(τ) = (−1)^k J_k(τ). We separate the reciprocal and anti-reciprocal parts of the expansion as in Eq. (42). This expansion, called the Jacobi-Anger expansion, converges absolutely at a superexponential rate. We can use the steepest descent method [22], which is generally applicable. Expressing the Fourier coefficient as a contour integral, J_k(τ) = (1/2πi) ∮_C dz z^{−k−1} exp(τ(z − z^{−1})/2), where C is the unit circle, we see that the integrand is analytic except at z = 0, so we may deform C. For 2k > τ > 0, z ≈ 2k/τ > 1 is a saddle point for the absolute value of the integrand. We take C to be a circle through this point, to obtain the bound of Eq. (43). It is important that the convergence of the series depends on the size of the region on which the function is analytic; this is a general fact [22]. For the Jacobi-Anger expansion the function is almost entire, and the convergence is superexponential. Note that [23, 9.1.62] asserts |J_k(τ)| ≤ |τ/2|^{|k|}/|k|! for any τ ∈ R and k ∈ Z, which is tighter than Eq. (43). Now, a partial sum of the Jacobi-Anger expansion can be written as in Eq. (44). This is ε-close to the full expansion if N = Ω(|τ| + log(1/ε)) by Eq. (43) for any z ∈ U(1). Numerical experiments suggest that the bound is quite tight and it suffices to choose N ≈ (e/2)|τ| + ln(1/ε) ≈ 1.36|τ| + 2.30 log_10(1/ε).
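As a quick numerical sanity check of the truncation heuristic (an illustration, not part of the algorithm), the sketch below computes J_k(τ) from its power series, truncates the Jacobi-Anger expansion at N ≈ (e/2)|τ| + ln(1/ε), and compares against exp(iτ sin ϕ); it also verifies the elementary bound of [23, 9.1.62].

```python
import cmath
import math

def bessel_j(k, tau, terms=60):
    """J_k(tau) via its power series sum_m (-1)^m (tau/2)^(2m+|k|) / (m!(m+|k|)!),
    using the symmetry J_{-k}(tau) = (-1)^k J_k(tau) for negative orders."""
    sign = (-1) ** k if k < 0 else 1
    k = abs(k)
    return sign * sum(
        (-1) ** m * (tau / 2) ** (2 * m + k)
        / (math.factorial(m) * math.factorial(m + k))
        for m in range(terms)
    )

tau, eps = 5.0, 1e-10
# Truncation order from the heuristic quoted in the text.
N = math.ceil(math.e / 2 * tau + math.log(1 / eps))

phi = 0.7
z = cmath.exp(1j * phi)
partial = sum(bessel_j(k, tau) * z ** k for k in range(-N, N + 1))
exact = cmath.exp(1j * tau * math.sin(phi))
assert abs(partial - exact) < eps

# Elementary bound [23, 9.1.62]: |J_k(tau)| <= (tau/2)^|k| / |k|!
for k in range(15):
    assert abs(bessel_j(k, tau)) <= (tau / 2) ** k / math.factorial(k) + 1e-15
```

For τ = 5 and ε = 10^{-10} the heuristic gives N = 30, and the truncation error is already far below ε, consistent with the superexponential decay.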
The Laurent polynomials Ã(z) and B̃(z) are pure real-on-circle. Applying our algorithm, we obtain real-on-circle Laurent polynomials a(z) and b(z). The pure Laurent polynomials c(z), d(z) are calculated by Lemma 4, where we choose c(z) to be anti-reciprocal and d(z) reciprocal. (This choice is to have our results in the same convention as those of Ref. [2].) Note that every exponent of z of the polynomial 1 − a(z)^2 − b(z)^2 here, whose roots must be computed, is even, since a(z) has only even exponents and b(z) has only odd exponents; so it is always better to feed into the root finding routine a Laurent polynomial g of degree n, instead of 2n, where g(z^2) = 1 − a(z)^2 − b(z)^2. Given the expanded form of 1 − a(z)^2 − b(z)^2, it takes no effort to find g. In this case, the intermediate polynomial e(z) in Eq. (13) of Lemma 4 takes a correspondingly simple form. We have implemented our algorithm with constants chosen as above using Wolfram Mathematica 11, and measured the running time as a function of τ for two fixed values of ε. The result is shown in Fig. 1. The running time scales asymptotically as the cube of τ, as expected. We used internal routines of Mathematica for the rational approximation, high precision arithmetic, root finding, and Fourier transform. The computation was done on a Microsoft Surface Book with Intel Core i7-6600U at 2.6 GHz and 16 GB of RAM.
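The degree-halving step can be made concrete. The sketch below uses hypothetical toy coefficients (not the paper's polynomials): Laurent polynomials are {exponent: coefficient} dicts, a(z) has only even exponents and b(z) only odd ones, so 1 − a(z)^2 − b(z)^2 has only even exponents and g is read off by halving them.

```python
def lmul(p, q):
    """Multiply Laurent polynomials represented as {exponent: coefficient} dicts."""
    r = {}
    for i, x in p.items():
        for j, y in q.items():
            r[i + j] = r.get(i + j, 0.0) + x * y
    return r

def lsub(p, q):
    """Subtract Laurent polynomials: p - q."""
    r = dict(p)
    for j, y in q.items():
        r[j] = r.get(j, 0.0) - y
    return r

# Hypothetical toy inputs: even exponents only in a(z), odd only in b(z).
a = {-2: 0.1, 0: 0.5, 2: 0.1}
b = {-1: 0.2, 1: 0.3}

h = lsub(lsub({0: 1.0}, lmul(a, a)), lmul(b, b))  # h(z) = 1 - a(z)^2 - b(z)^2
assert all(k % 2 == 0 for k in h)                 # only even exponents survive
g = {k // 2: c for k, c in h.items()}             # g(z^2) = h(z), half the degree
```

The root finding routine then sees a Laurent polynomial of degree 2 instead of 4; each root of g gives a pair of roots ±√w of h.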

Application to matrix inversion
While there are slightly more efficient implementations of matrix inversion problems [9] using quantum signal processing [4], here we content ourselves with an eigenvalue transformation perspective. The techniques of Ref. [4] reduce the number of ancilla qubits by one or two, and hence relieve some burden of implementing controlled unitaries, but the underlying mathematics, regarding polynomial approximations and finding interspersing single-qubit unitaries, is unchanged.
Suppose a hermitian matrix H of norm 1 that we wish to invert is block-encoded in a unitary W so that W has eigenvalues ∓e^{±iθ_λ} associated with an eigenvalue λ of H. This encoding is the same as in the Hamiltonian simulation above. The condition that H be hermitian is not too restrictive since, for any matrix M, the enlarged matrix |0⟩⟨1| ⊗ M + |1⟩⟨0| ⊗ M† is always hermitian. Then, we want the eigenvalue transformation ∓e^{±iθ_λ} → 1/sin θ_λ. As we should not invert a singular matrix, we assume that the eigenvalues of H are bounded away from zero by 1/κ, where κ ≥ 1 is the condition number of H. (Strictly zero eigenvalues are fine if we are interested in a pseudo-inverse.) Thanks to the condition number assumption, we need to find a polynomial approximation to the function ∓e^{±iθ_λ} → 1/sin θ_λ that is good for values of sin θ_λ away from zero by 1/κ. For this purpose, there is a useful polynomial f(z) [24], whose properties are stated in Lemma 6. Choosing b = κ^2 ln(8/ε), and feeding the real-on-circle anti-reciprocal Laurent polynomial f(z)/ln(8/ε), which is at most 1 in magnitude by the last claim of Lemma 6, into our algorithm, we obtain a desired eigenvalue inversion quantum algorithm. The success probability can be as small as Ω(1/(κ log(1/ε))^2), and hence we had better amplify the amplitude for post-selection, enlarging the quantum gate complexity by a factor of O(κ log(1/ε)). Overall, the quantum gate complexity is proportional to the product of the degree of the Laurent polynomial above and the number of iterations of the amplitude amplification.
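In the real-variable picture, the polynomial of Ref. [24] is commonly written as (1 − (1 − x^2)^b)/x. The sketch below checks its accuracy on |x| ≥ 1/κ; it illustrates the approximation idea only, not the Laurent polynomial f(z) actually fed to the algorithm, and the cutoff b used here is a sufficient choice derived from the elementary bound in the comment.

```python
import math

def inv_approx(x, b):
    """(1 - (1 - x^2)^b) / x: an odd polynomial of degree 2b - 1 that
    approximates 1/x away from the origin (cf. Ref. [24])."""
    return (1.0 - (1.0 - x * x) ** b) / x

kappa, eps = 10.0, 1e-3
# For |x| >= 1/kappa the error is |(1 - x^2)^b / x| <= kappa * exp(-b/kappa^2),
# so b = ceil(kappa^2 * ln(kappa/eps)) gives absolute error at most eps.
b = math.ceil(kappa ** 2 * math.log(kappa / eps))

for i in range(10, 101):
    x = i / 100.0  # samples in [1/kappa, 1]
    assert abs(inv_approx(x, b) - 1.0 / x) <= eps
```

The error is largest at x = 1/κ and decays rapidly for larger x, which is why the condition number assumption makes the approximation tractable; the degree grows roughly like κ^2 log(κ/ε) in this elementary bound, while sharper constructions [24] achieve κ polylog dependence.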

Discussion
We have determined the scope of SU(2)-valued periodic polynomial functions and their decomposition (Theorem 2), and analyzed the algorithmic aspects. Our algorithm for the decomposition is not numerically stable in the usual sense: a numerically stable algorithm should only require polylog(N/ε) bits of precision, rather than poly(N log(1/ε)). The instability appears to be unavoidable in any method that reduces the polynomial degree iteratively one step at a time (such as our Step 5), at least in the early stage of the degree reduction. The numerical error arises from the small norm of leading coefficients in our algorithm, and small leading coefficients of an input polynomial are necessary whenever a nonpolynomial function admits converging polynomial approximations. However, it might be the case that the leading coefficient matrix C^{(m)}_m becomes large in norm rather quickly in Step 5 of our algorithm, in which case our analysis could be loose. For a schematic example, consider the identity t^{2n} P + t^{−2n}(I − P) = (tP + t^{−1}(I − P))^{2n} for any projector P. The numerical instability of decomposing the left-hand side into the right-hand side is far less severe than that in the worst case of Step 5, and similar situations might occur during the execution of Step 5 for some class of input functions. This deserves further investigation.

Acknowledgments
I thank Guang Hao Low for valuable discussions, and Robin Kothari for useful comments on the manuscript.

A Jordan's Lemma and block encoding of Hamiltonians
The following is a well-known fact, but we include it here for completeness.

Lemma 7.
Let P and Q be arbitrary self-adjoint projectors on a finite dimensional complex vector space V. Then, V decomposes into orthogonal subspaces V_j invariant under both P and Q, where each V_j has dimension 1 or 2.

Proof. We will find a subspace W of dimension at most 2 that is invariant under both P and Q. This is sufficient since the orthogonal complement of W is also invariant, and the proof is then completed by induction on the dimension of V.
Put P′ = I − P and Q′ = I − Q, and consider the identities PQP + PQ′P + P′QP′ + P′Q′P′ = I, supp(PQP) + supp(PQ′P) + supp(P′QP′) + supp(P′Q′P′) = V, where the second equality holds because the intersection of the orthogonal complements of the four supports is zero. Therefore, at least one of the four supports is nonzero, and without loss of generality assume S = supp(PQP) ≠ 0. Let |ψ⟩ ∈ S be an eigenvector of PQP: PQP|ψ⟩ = a|ψ⟩. The associated eigenvalue a is nonzero by the definition of S. Now consider W = span{|ψ⟩, Q|ψ⟩}. Observe that a|ψ⟩ = PQP|ψ⟩ = P(PQP)|ψ⟩ = aP|ψ⟩, and hence P|ψ⟩ = |ψ⟩. Moreover, PQ|ψ⟩ = PQP|ψ⟩ = a|ψ⟩. Therefore, W is a nonzero invariant subspace under both P and Q.
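As a concrete illustration (a hypothetical small example, not from the text), take P projecting onto e_1 and Q projecting onto (e_1 + e_2)/√2 in a three-dimensional space; the construction in the proof then yields an invariant two-dimensional subspace with a = 1/2.

```python
import math

def proj_apply(v, x):
    """Apply the rank-1 projector |v><v| (v a real unit vector) to x."""
    inner = sum(a * b for a, b in zip(v, x))
    return [inner * a for a in v]

e1 = [1.0, 0.0, 0.0]
q_dir = [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]

P = lambda x: proj_apply(e1, x)
Q = lambda x: proj_apply(q_dir, x)

# |psi> = e1 is an eigenvector of PQP with eigenvalue a = |<e1|q>|^2 = 1/2.
psi = e1
pqp_psi = P(Q(P(psi)))
assert abs(pqp_psi[0] - 0.5) < 1e-12 and abs(pqp_psi[1]) < 1e-12

# W = span{|psi>, Q|psi>} is invariant under both projectors:
qpsi = Q(psi)
assert all(abs(x - 0.5 * y) < 1e-12 for x, y in zip(P(qpsi), psi))  # P Q|psi> = a|psi>
assert all(abs(x - y) < 1e-12 for x, y in zip(Q(qpsi), qpsi))       # Q Q|psi> = Q|psi>
```

The orthogonal complement of W (here the span of e_3) is likewise invariant, matching the induction step in the proof.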
This Jordan's lemma can be applied to two hermitian unitaries (reflections) U_1, U_2, as any hermitian unitary is 2P − I for some projector P. It immediately follows that there is a basis where U_1 is diagonal and U_1 U_2 is block-diagonal with at most two-dimensional blocks, each block belonging to U(1) or U(2). In any irreducible two-dimensional block, neither unitary can be a scalar multiple of the identity because of the irreducibility, and hence we have an explicit 2×2 form for some real number λ ∈ [−1, 1] and an angle φ ∈ R, up to a permutation of rows and columns. Therefore, the product W = −iU_1 U_2 is a rotation in a two-dimensional subspace of the kind that appears in Grover's search algorithm [25], and has eigenvalues ±e^{∓iθ} where sin θ = λ. This is relevant in a Hamiltonian simulation problem where the Hamiltonian is "block-encoded" as H = (⟨G| ⊗ I) U_2 (|G⟩ ⊗ I), which is the case if H is represented as, e.g., a linear combination of Pauli operators. Here |G⟩ is a state on a proper tensor factor of the Hilbert space on which U_2 acts, and U_1 = (2|G⟩⟨G| − I) ⊗ I.