Quantum algorithm for time-dependent differential equations using Dyson series

Time-dependent linear differential equations are a common type of problem that needs to be solved in classical physics. Here we provide a quantum algorithm for solving time-dependent linear differential equations with logarithmic dependence of the complexity on the error and derivative. As usual, there is an exponential improvement over classical approaches in the scaling of the complexity with the dimension, with the caveat that the solution is encoded in the amplitudes of a quantum state. Our method is to encode the Dyson series in a system of linear equations, then solve via the optimal quantum linear equation solver. Our method also provides a simplified approach in the case of time-independent differential equations.

Quantum computers offer tremendous potential advantages in computing speed, but it is a long-standing challenge to find speedups for important computational tasks. A major advance for quantum algorithms was the development of a way of solving systems of linear equations by Harrow, Hassidim, and Lloyd [11]. This algorithm provides an exponential improvement in complexity in terms of the dimension as compared to classical solution methods, with some caveats. In particular, the matrix needs to be given in an oracular way (rather than as classical data), and the solution is encoded in the amplitudes of a quantum state.
There has therefore been a great deal of follow-up work on applications of solving linear equations where the result is potentially useful despite these limitations. For example, in discretised partial differential equations (PDEs), the discretisation yields a large set of simultaneous equations. Then one may aim to obtain some global feature of the solution, which may be obtained by sampling from the prepared quantum state. This was the principle used in [8], and there have been considerable further advances since then [7,16].
There is also the question of how to solve an ordinary differential equation (ODE). That is, spatial dimensions are not given explicitly (though the set of equations may have been obtained by a spatial discretisation of a PDE), and the task is to solve the time evolution. The original algorithm for this task used linear multistep methods [1], and a further improvement was to use a Taylor series encoded in a larger system of linear equations to obtain complexity logarithmic in the allowable precision [3]. For this and other quantum algorithms for solving differential equations, it is required that the equations are dissipative in order to provide an efficient solution. Krovi [14] showed how to solve the time-independent case with an alternative dissipative condition based on the logarithmic norm, rather than requiring the eigenvalues of the matrix to have non-positive real part as in [3].
That then leaves open the question of how to solve time-dependent differential equations. Algorithms for time-dependent Hamiltonian evolution have been given based on a Dyson series in [12,15]. A method for time-dependent differential equations was given in Ref. [5]. That used a spectral method to provide near-linear dependence on T, with polylogarithmic factors in T and the allowable error. The polylogarithmic factors were not explicitly given in [5], but they are quite large. In particular, n is chosen logarithmically in Eq. (8.6) of that work, and there is then a factor of n^4 in the complexity in Eq. (8.15) of Ref. [5]. Another limitation of the approach of Ref. [5] is that it requires smoothness of the differential equation, with a complexity depending on arbitrary-order derivatives of the parameters. Reference [10] used a time-marching strategy for time-dependent differential equations, but gave quadratic dependence of the complexity on the overall evolution time T, making it considerably more costly. Moreover, it only analyses the homogeneous case.
Here we provide an algorithm for inhomogeneous time-dependent differential equations with logarithmic dependence on the error and linear dependence on T up to logarithmic factors. Our logarithmic factors are significantly improved over those in [5]. Moreover, we need no smoothness condition as in Ref. [5]. Our oracle complexity is entirely independent of derivatives of the parameters, and the complexity in terms of elementary gates depends only on the logarithm of the first derivatives of the parameters. We combine methods from both [3] and [12,15]. We use a block matrix similar to [3], but we do not use extra lines of the block matrix to implement terms in the series as in that work. Instead we construct this matrix via a block encoding using a Dyson series, in an analogous way to the block encodings in [12,15].
In Section 1 we describe the form of the solution in terms of the sum of a solution to the homogeneous equation and a particular solution. Then in Section 2 we express the method in terms of a set of linear equations and determine the condition number. In Section 3 we describe the method to block encode the linear equations. Lastly, we use these results to give the overall complexity in Section 4, and conclude in Section 5.

Form of solution
In this section we summarise standard methods of solving a linear differential equation via a Dyson series. A general ordinary linear differential equation has the form

ẋ(t) = A(t)x(t) + b(t), (1)

where b(t) ∈ C^N is a vector function of t, A(t) ∈ C^{N×N} is a coefficient matrix, and x(t) ∈ C^N is the solution vector as a function of the variable t. We are also given an initial condition x(t_0) = x_0. Note that we can always express a higher-order ODE as a system of first-order differential equations by defining new variables. We will write the general solution to (1) as

x(t) = x_H(t) + x_P(t),

where x_H(t) is the solution to the homogeneous equation ẋ_H = A(t)x_H and x_P(t) is a particular solution. In the next two subsections we show how to compute them.

Solution to the homogeneous equation
First we solve a differential equation of the form

ẋ_H(t) = A(t)x_H(t).

A general solution for x_H can be expressed in the form of a Dyson series x_H(t) = W(t, t_0)x_H(t_0) with

W(t, t_0) = T exp(∫_{t_0}^t A(s) ds) = Σ_{k=0}^∞ (1/k!) ∫_{t_0}^t dt_k ⋯ ∫_{t_0}^t dt_1 T[A(t_k) ⋯ A(t_1)],

where T indicates time ordering. We will briefly review the algorithm for the first segment. We can explicitly order the times and truncate the infinite sum at a cutoff K to give

W_K(t, t_0) = Σ_{k=0}^K ∫_{t_0}^t dt_k ∫_{t_0}^{t_k} dt_{k−1} ⋯ ∫_{t_0}^{t_2} dt_1 A(t_k) ⋯ A(t_1).

Note that the k! disappears because the times without ordering have k! permutations, so sorting them gives a k! multiplicity. This expression is equivalent to the Dyson series solution for Hamiltonian evolution used in Refs. [12,15], and the fact that A is a more general matrix does not affect the form of the equations. Then, using A_max = max_t ∥A(t)∥, the truncation error over a segment of length ∆t = t − t_0 can be bounded as

∥W(t, t_0) − W_K(t, t_0)∥ ≤ Σ_{k=K+1}^∞ (A_max ∆t)^k / k! = O((A_max ∆t)^{K+1}/(K + 1)!).

Here and throughout the paper we are taking ∥·∥ to be the spectral norm for operators, or the 2-norm for vectors.
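As a classical sanity check (our own illustration, not part of the quantum algorithm), the truncated Dyson series for the homogeneous equation can be summed by Picard iteration on a time grid and compared against a fine-grained propagator; the matrix A(t) below is an arbitrary example with a decaying diagonal.

```python
import numpy as np
from scipy.linalg import expm

# Picard iteration reproduces the Dyson series for x_H' = A(t) x_H
# truncated at order K; this is a classical numerical sketch.
def dyson_homogeneous(A_of_t, x0, t0, t1, K, M=512):
    ts = np.linspace(t0, t1, M + 1)
    dt = (t1 - t0) / M
    As = np.array([A_of_t(t) for t in ts])        # A(t) on the grid
    xs = np.tile(x0.astype(complex), (M + 1, 1))  # order-0 term: x0
    for _ in range(K):
        integrand = np.einsum('mij,mj->mi', As, xs)
        steps = 0.5 * (integrand[1:] + integrand[:-1]) * dt
        cum = np.vstack([np.zeros((1, len(x0)), dtype=complex),
                         np.cumsum(steps, axis=0)])
        xs = x0 + cum                             # next Picard iterate
    return xs[-1]

A = lambda t: np.array([[-1.0, 0.5 * np.sin(t)], [0.0, -0.5]])
x0 = np.array([1.0, 1.0])
xK = dyson_homogeneous(A, x0, 0.0, 0.5, K=8)

# Reference: product of midpoint matrix exponentials over fine steps.
ref = x0.astype(complex)
for m in range(512):
    tm = 0.5 * (m + 0.5) / 512
    ref = expm(A(tm) * (0.5 / 512)) @ ref
err = np.linalg.norm(xK - ref)
```

With A_max ∆t well below 1, the order-8 truncation already agrees with the reference to high precision, consistent with the factorial decay of the remainder.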
If the goal is to give error no larger than ϵ due to the truncation, then we can choose K ∝ log(1/ϵ)/log log(1/ϵ). To solve for a longer time we break the time into r segments and use the truncated Dyson series on each segment. The error in each segment should therefore be no larger than ϵ/r. The choice of K should then be

K = O( log(r/ϵ) / log log(r/ϵ) ). (7)
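The choice of K can be made concrete with a short helper (a sketch; `choose_K` and its arguments are our own naming) that finds the smallest truncation order for which the remainder bound (A_max ∆t)^{K+1}/(K + 1)! falls below ϵ/r:

```python
import math

# Smallest K with (a_dt)**(K+1) / (K+1)! <= eps / r, where a_dt stands
# for A_max * dt. The term is updated multiplicatively to avoid overflow.
def choose_K(a_dt, eps, r):
    K, term = 0, a_dt          # term = (a_dt)**(K+1) / (K+1)! at K = 0
    while term > eps / r:
        K += 1
        term *= a_dt / (K + 1)
    return K

K_tight = choose_K(1.0, 1e-10, 1000)
K_loose = choose_K(0.5, 1e-6, 10)
```

The returned order grows like log(r/ϵ)/log log(r/ϵ), matching the scaling above.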

Particular solution
Generalising to the inhomogeneous equation (1), the particular solution (i.e. the solution with initial condition of zero) is of the form

x_P(t) = Σ_{k=1}^∞ ∫_{t_0}^t dt_k ∫_{t_0}^{t_k} dt_{k−1} ⋯ ∫_{t_0}^{t_2} dt_1 A(t_k) ⋯ A(t_2) b(t_1).

Note that this is very similar to W(t, t_0) for the homogeneous solution, except that the earliest-time factor of A has been replaced with b(t_1). Note also that, in W(t, t_0), we have k = 0 giving a product of none of the A, so that term gives the identity. Here the sum starts from k = 1, and k = 1 gives the integral of b(t_1), so we can rewrite the solution as

x_P(t) = v(t, t_0) = ∫_{t_0}^t dt_1 b(t_1) + Σ_{k=2}^∞ ∫_{t_0}^t dt_k ⋯ ∫_{t_0}^{t_2} dt_1 A(t_k) ⋯ A(t_2) b(t_1).

To see that this is the correct form of the solution, one can use

ẋ_P(t) = A(t)x_P(t) + b(t),

which is the form of the differential equation. Truncating the solution, we have

v_K(t, t_0) = ∫_{t_0}^t dt_1 b(t_1) + Σ_{k=2}^K ∫_{t_0}^t dt_k ⋯ ∫_{t_0}^{t_2} dt_1 A(t_k) ⋯ A(t_2) b(t_1).

The complete approximate solution, as the sum of the homogeneous and particular solutions with truncations, is then

x(t) ≈ W_K(t, t_0)x(t_0) + v_K(t, t_0). (13)

Defining b_max = max_t ∥b(t)∥, the error due to the truncation of the Dyson series is then bounded by

∥W(t, t_0) − W_K(t, t_0)∥ ∥x(t_0)∥ + ∥v(t, t_0) − v_K(t, t_0)∥ = O((A_max ∆t)^{K+1}/(K + 1)!) ∥x(t_0)∥ + O(b_max ∆t (A_max ∆t)^K/(K + 1)!), (14)

where the second norm in the first line results from ∥v(t, t_0) − v_K(t, t_0)∥. Again we require that A_max ∆t is O(1). In our solution we will discretise the integrals, and to bound the overall error we also bound the error due to the discretisation; that is discussed in Section 4. This complete expression in Eq. (14) has an extra factor of ∥x(t_0)∥ because we are considering the error in the state. The error in W being no greater than ϵ implies that the error in the state is no larger than ϵ times the norm of x, which is the form in which we require the bound on the error for our theorem. Appropriately bounding the error in the second term in Eq. (14) can require taking K somewhat larger than in Eq. (7). The choice of K is discussed further in Section 4.

The time-independent solution
We are also going to look at the case where A ∈ C^{N×N} is time-independent, which gives the usual ODE to solve,

ẋ(t) = Ax(t) + b.

In [3] this was solved by using a Taylor series. Here we apply a similar principle, except that we will encode into the block matrix in a simpler way. Here we summarise the Taylor series solution, then give the encoding into the matrix in the next section.
The solution can easily be seen by substituting a constant A into the solution for the time-varying A. We obtain

W_K(∆t) = Σ_{k=0}^K (A ∆t)^k / k!

and

v_K(∆t) = Σ_{k=1}^K A^{k−1} ∆t^k / k! b.

The error in the solution can be bounded in the same way as for the time-dependent case, with A_max = ∥A∥.

Encoding in linear equations

Next we describe how the Dyson series (or Taylor series) for the solution may be encoded into block matrices, which can then be solved via the quantum algorithm for solving linear systems. The preceding section described how the series may be applied to accurately approximate the solution over a short time. In the usual way, for solving over a longer time we divide the total time T into r steps of length ∆t = T/r. The solution for the shorter times is obtained by a series, and all time steps are encoded in a block matrix.

Encoding
First we describe the more complicated case of the block encoding of the Dyson series solution. This step shares many similarities with techniques for time-independent equations [3], but elements of the block matrix A will need to be computed through the integration described in the preceding section. The initial time for the multiple steps will be taken to be zero without loss of generality, since it is always possible to apply a time shift in the definitions. Then we can use the notation t_0 for the starting time for individual steps of the Dyson series.
To illustrate the method, we first give the example of three time steps, where the encoding is A X = B with

A =
[  1                               ]
[ −V_1   1                         ]
[       −V_2   1                   ]
[             −V_3   1             ]
[                   −1   1         ]
[                       −1   1     ]
[                           −1   1 ]

(blank entries are zero), X = (x̃_0, x̃_1, …, x̃_6)^T, and B = (x(0), v_1, v_2, v_3, 0, 0, 0)^T, where

V_m = W_K(m∆t, (m − 1)∆t), v_m = v_K(m∆t, (m − 1)∆t),

and x̃ indicates the approximate solution for x. The top row (numbered as 0 here) sets the initial condition, and rows 1 to 3 give steps of the solution for time ∆t as described by Eq. (13). These rows all describe forward evolution as

x̃_m = V_m x̃_{m−1} + v_m

for m ∈ {1, 2, 3}. In rows 4 to 6, the system is constant in time, x̃_m = x̃_{m−1}.
Including rows after the final evolution steps might appear unnecessary, but these additional rows will boost the success probability of the resulting quantum algorithm [2]. Note that v_m and V_m involve multiple integrals over times, which would need to be addressed via the block encoding described in Section 3. This is a major departure from the method for time-independent differential equations from [3], where it was possible to encode the terms of the sum via extra lines in the block matrix.
In general, we can define the system of linear equations as

A X = B,

where A is a block matrix, A ∈ C^{NR×NR}, and X and B are vectors, X, B ∈ C^{NR}. Here R is the number of time steps, including those steps at the end where the solution is held constant. Taking R − r ∝ r will give a success probability roughly corresponding to the square of the amplitude of the solution. If the solution has not decayed too much, then this success probability will give an acceptable overhead to the complexity. The individual blocks in A, X, B can be given explicitly as

A_{m,m} = 1, A_{m,m−1} = −V_m for 1 ≤ m ≤ r, A_{m,m−1} = −1 for r < m ≤ R − 1,
X_m = x̃(min(m, r)∆t), B_0 = x(0), B_m = v_m for 1 ≤ m ≤ r, B_m = 0 for m > r. (26)

It is easily checked that this is a general form of the example given above. By construction, this is a lower bidiagonal matrix, which will be useful for computing its properties.
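The general form above can be checked classically on a toy example (our own sketch; for simplicity we use the time-independent V = exp(A∆t) and the exact integral for v in place of the truncated series):

```python
import numpy as np
from scipy.linalg import expm

# Build the block bidiagonal system A X = B and solve it classically.
N, r, pad = 2, 5, 3
R = 1 + r + pad                      # row 0 sets the initial condition
dt = 0.2
A = np.array([[-1.0, 0.4], [0.0, -0.5]])
b = np.array([0.3, -0.2])
x0 = np.array([1.0, 0.0])
V = expm(A * dt)
v = np.linalg.solve(A, (V - np.eye(N)) @ b)   # integral of exp(A s) b ds

big = np.eye(N * R)
B = np.zeros(N * R)
B[:N] = x0
for m in range(1, R):
    # -V on the subdiagonal for evolution rows, -1 for padding rows
    big[m*N:(m+1)*N, (m-1)*N:m*N] = -(V if m <= r else np.eye(N))
    if m <= r:
        B[m*N:(m+1)*N] = v
X = np.linalg.solve(big, B)
x_final = X[-N:]                      # padding holds the solution constant

T = r * dt
exact = expm(A * T) @ x0 + np.linalg.solve(A, (expm(A * T) - np.eye(N)) @ b)
```

Row 0 fixes the initial condition, rows 1 to r step the solution forward, and the padding rows hold it constant, so the last block of X matches the ODE solution at time T = r∆t.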

Condition number
The complexity of solving the system of linear equations is proportional to the condition number of A. To bound the condition number of A, we just need to bound the norm of A and the norm of its inverse. The norm of A is easily bounded as O(1), but the norm of the inverse will depend on how much the approximate solution x̃ can grow as compared to the initial condition x(0) and the driving v_m. This means that the condition number can be well bounded provided the differential equations are stable. That can be obtained provided the real parts of the eigenvalues of A(t) are non-positive. This was the approach applied in [3], where a diagonalisation was used, so the complexity was also dependent on the condition number of the matrix that diagonalises A. It has been pointed out in [14] that the complexity of this approach can be bounded in a number of other ways which do not depend on the diagonalisability of A.
As discussed in [14], a useful way to describe the stability of the differential equation is in terms of the logarithmic norm of A(t), which can be given as the maximum eigenvalue of [A(t) + A†(t)]/2. A standard property of the logarithmic norm µ(A(t)) is that

∥W(t, t_0)∥ ≤ exp( ∫_{t_0}^t µ(A(s)) ds ).

This means that, if the logarithmic norm is non-positive, then ∥W(t, t_0)∥ ≤ 1. That is, this approach can be used to bound the norm of the time-ordered exponential, in a similar way as the exponential is bounded in the time-independent case in [14]. In the following we will require that the logarithmic norm is non-positive for stability, though other conditions bounding the norm of the time-ordered exponential can be used.
Starting with A from Eq. (26), it can be separated into a sum of the identity (the main diagonal) and a matrix which contains just the −V_m and −1 blocks on the off-diagonal. Using the triangle inequality, the norm can therefore be upper bounded as

∥A∥ ≤ 1 + max(1, max_m ∥V_m∥).

Here max_m ∥V_m∥ is obtained because the spectral norm of the block-diagonal matrix can be given as the maximum of the spectral norms of the blocks. The maximum is over the values of m from 1 to r. Now, V_m is defined in terms of W_K, which is an approximation of the exact evolution over time ∆t. We will require that the overall solution is given to accuracy ϵ < 1, so the norm of W_K cannot deviate by more than ϵ from the norm of W, which is upper bounded by 1 according to our stability requirement on the logarithmic norm. That means max_m ∥V_m∥ = O(1), and so ∥A∥ = O(1). An alternative bound that does not depend on the stability is

max_m ∥V_m∥ ≤ e^{A_max ∆t}.

We choose the number of steps such that A_max ∆t = O(1), which again gives the upper bound ∥A∥ = O(1).
Next, we bound the norm of the inverse. Since A is a lower bidiagonal matrix, its inverse has an explicit form [13]: the (m, n) block of A^{-1} for m ≥ n is the product V_m ⋯ V_{n+1}, with the convention that V_ℓ = 1 for ℓ > r. In our example for r = 3, A^{-1} takes the form

A^{-1} =
[ 1                                                ]
[ V_1         1                                    ]
[ V_2V_1      V_2      1                           ]
[ V_3V_2V_1   V_3V_2   V_3   1                     ]
[ V_3V_2V_1   V_3V_2   V_3   1   1                 ]
[ V_3V_2V_1   V_3V_2   V_3   1   1   1             ]
[ V_3V_2V_1   V_3V_2   V_3   1   1   1   1         ]

Again we can use the triangle inequality. We express A^{-1} as a sum of matrices, each of which contains one of the diagonals. For each of the block-diagonal matrices, the spectral norm is given by the maximum spectral norm of the blocks on the diagonal. More explicitly, we expand A^{-1} as the sum of block-diagonal matrices

A^{-1} = Σ_{j=0}^{R−1} D_j,

where D_j contains the j-th diagonal of A^{-1}, with blocks of the form V_m ⋯ V_{m−j+1}, and we take the convention that a product with no factors gives the identity. We have by the triangle inequality

∥A^{-1}∥ ≤ Σ_{j=0}^{R−1} ∥D_j∥.

Then

∥A^{-1}∥ ≤ R max_{m≥n} ∥V_m ⋯ V_{n+1}∥. (36)

The bound in (36) can be interpreted as R times the norm of the largest block of A^{-1}. This will be an identity or some product of the V_ℓ corresponding to an approximation of the time-evolution operator over a period of time within the interval [t_0, T]. According to our condition on the stability in terms of the logarithmic norm, the time-ordered exponential has its spectral norm upper bounded by 1. The product of the V_ℓ gives the solution of the homogeneous equation approximated using a Dyson series, and we will choose the parameters such that the error in this solution is no more than ϵ. This means that the error in the spectral norm of the product is also no more than ϵ, so the norm is upper bounded by 1 + ϵ. Hence the spectral norm of the inverse is upper bounded as

∥A^{-1}∥ ≤ R(1 + ϵ) = O(R),

where we have used the fact that we are considering complexity for small ϵ. Therefore the condition number of A is

κ_A = O(R).
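The κ_A = O(R) behaviour can be observed numerically (our own sketch, with an arbitrary stable A whose logarithmic norm is negative):

```python
import numpy as np
from scipy.linalg import expm

# Build the block bidiagonal matrix for R block rows, with r evolution
# rows and identity padding, and watch its condition number grow with R.
def block_matrix(V, R, r):
    N = V.shape[0]
    big = np.eye(N * R)
    for m in range(1, R):
        big[m*N:(m+1)*N, (m-1)*N:m*N] = -(V if m <= r else np.eye(N))
    return big

A = np.array([[-1.0, 0.4], [0.0, -0.5]])   # logarithmic norm is negative
V = expm(A * 0.2)                          # so ||V|| <= 1
Rs = [8, 16, 32]
conds = [np.linalg.cond(block_matrix(V, R, R // 2)) for R in Rs]
```

The condition numbers increase with R but stay within a small constant multiple of R, consistent with ∥A∥ = O(1) and ∥A^{-1}∥ = O(R).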

Time-independent equations
The encoding for the time-independent case is exactly the same as for the time-dependent case, except that now the V_m are independent of m, so we can write for example

A =
[  1                         ]
[ −V   1                     ]
[     −V   1                 ]
[         −V   1             ]
[             −1   1         ]
[                 −1   1     ]
[                     −1   1 ]

(blank entries are zero), where now we have

V = Σ_{k=0}^K (A ∆t)^k / k!, (40)

which is independent of m. Similarly, the v_m are replaced with

v = Σ_{k=1}^K A^{k−1} ∆t^k / k! b. (41)

We can write the general form in a similar way as for the time-dependent case, except that the V_m, v_m are replaced with the time-independent V, v.
In exactly the same way as for the time-dependent case, the spectral norm can be upper bounded by expressing the matrix as a sum of block-diagonal matrices and using the triangle inequality, giving

∥A∥ ≤ 1 + max(1, ∥V∥).

Again ∥V∥ will be an accurate approximation of the exponential, which is upper bounded by 1 according to the stability criterion, and so ∥A∥ = O(1). Similarly to the time-dependent case, the spectral norm of A^{-1} can be upper bounded by using the triangle inequality on the explicit expression for the inverse. Again the stability condition guarantees that the norm of the exponential is no larger than 1, and the condition that the approximation of the solution has error at most ϵ implies that the norm of the powers of V is within ϵ of 1. Since there is a sum over R of these norms, we again have ∥A^{-1}∥ = O(R) and κ_A = O(R).

Block encoding

Time-dependent block encoding
Here we address how to give a block encoding of the matrix in order to apply the quantum linear equation solver. The block encoding is not trivial, because the blocks we have given above are composed of multiple integrals. For our result we just assume that the matrix A(t) is given by a block encoding, rather than considering how it would specifically be done for a particular encoding (such as sparse matrix oracles). Similarly, we will assume that there is a block encoding for a preparation of the state corresponding to b(t), as well as for the initial state x_0. We assume that the block encoding can be given the time register as a quantum input. The overall complexity is then in terms of the number of calls to these block encodings. More specifically, given the target system of dimension N, a time register of dimension N_t, and ancillas of dimension N_A, N_b, there are unitaries U_A, U_b, and U_x such that

(⟨0|_A ⊗ 1) U_A (|0⟩_A ⊗ 1) |n_t⟩ = |n_t⟩ ⊗ A(t)/λ_A,
(⟨0|_b ⊗ 1_s) U_b |n_t⟩ |0⟩_b |0⟩_s = |n_t⟩ ⊗ |b(t)⟩_s /λ_b,
(⟨0| ⊗ 1_s) U_x |0⟩ |0⟩_s = |x_0⟩_s /λ_x,

where n_t is a binary encoding of the time t, the subscripts A, b on the states indicate the ancillas for the block encodings of A(t) and b(t), and the subscript s indicates the target system. The operator A(t) acts upon the target system, which is omitted in the first line for simplicity. The quantities λ_A, λ_b, λ_x are the factors for the block encodings and are assumed to be known. The value of λ_x is just that needed to ensure normalisation. The value of λ_b also ensures normalisation, but since the norm of b(t) can vary over time we define the oracle in terms of a block encoding, so that λ_b can be taken to be independent of time. It would also be possible to define an oracle with a unitary preparation and time-dependent λ_b, but that would make the later algorithms much more complicated. We apply the standard encoding of the vector in amplitudes of the state, so, for example,

|x_0⟩ = Σ_n (x_0)_n |n⟩

for computational basis states |n⟩. Note that this state is not defined in a normalised way, which is why we have, for example, division by λ_x above. We also use the standard assumption that the block-encoding unitaries can be applied in a controlled way, and that we can also apply their inverses.
There are standard methods for combining block encodings of matrices, which we will use. When we are multiplying operations in a block encoding, the general procedure is that if matrices A and B are block encoded as

(⟨0| ⊗ 1) U (|0⟩ ⊗ 1) = A/λ_A, (⟨0| ⊗ 1) V (|0⟩ ⊗ 1) = B/λ_B,

then U V block encodes AB/(λ_A λ_B). The understanding is that U and V are acting on different ancillas. That is, we need the ancilla space for both when multiplying. This means that if A and B are block encoded with respective λ-values of λ_A and λ_B, then the λ-value for the product is λ_A λ_B.
In adding operations, we would build a controlled operation that performs

C = |0⟩⟨0| ⊗ U + |1⟩⟨1| ⊗ V,

controlled by a qubit ancilla. If we aim to block encode aA + bB (with positive a, b), then we can use a state preparation operation P such that

P|0⟩ = (√(aλ_A)|0⟩ + √(bλ_B)|1⟩)/√(aλ_A + bλ_B).

Then we obtain

(⟨0|P† ⊗ ⟨0| ⊗ 1) C (P|0⟩ ⊗ |0⟩ ⊗ 1) = (aA + bB)/(aλ_A + bλ_B).

The first P puts the qubit ancilla in a superposition, then C applies either U or V controlled on that ancilla, and P† inverts the preparation. The resulting λ-value is then aλ_A + bλ_B, so we are just applying the same arithmetic to determine the λ-values as for the operators. If we want negative weights for a or b, then the sign can be applied using C, and the resulting λ-value is |a|λ_A + |b|λ_B. These primitives mean that whenever we have a polynomial in something that is block encoded, we can construct a block encoding for the polynomial. The value of λ is determined by using exactly the same arithmetic as in the polynomial for the operators, except that we take the absolute values of any weights. This reasoning also holds for discretised integrals. The integral can be discretised into a sum, and the value of λ for the discretised integral can be determined using the same arithmetic for the λ-values. In particular, we use ancilla registers with times to give an approximation of the time integral in the Dyson series, in a similar way as was done for Hamiltonian simulation.
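These primitives can be illustrated classically (a toy sketch; `dilate` is our own helper constructing a standard unitary dilation, valid when ∥M∥ ≤ λ): the PREPARE, SELECT, PREPARE† pattern below reproduces (aA + bB)/(aλ_A + bλ_B) in the top-left block.

```python
import numpy as np
from scipy.linalg import sqrtm

# Standard unitary dilation with M/lam in the top-left block.
def dilate(M, lam):
    Mn = M / lam                      # requires ||M|| <= lam
    N = M.shape[0]
    top = sqrtm(np.eye(N) - Mn @ Mn.conj().T)
    bot = sqrtm(np.eye(N) - Mn.conj().T @ Mn)
    return np.block([[Mn, top], [bot, -Mn.conj().T]])

A = np.array([[0.3, 0.1], [0.0, -0.4]])   # toy contractions
B = np.array([[0.2, 0.0], [0.1, 0.5]])
lamA = lamB = 1.0
a, b = 0.7, 1.3
UA, UB = dilate(A, lamA), dilate(B, lamB)

# SELECT: apply UA or UB depending on the select qubit.
C = np.block([[UA, np.zeros_like(UA)], [np.zeros_like(UB), UB]])
# PREPARE on the select qubit with amplitudes sqrt(a*lamA), sqrt(b*lamB).
p = np.array([np.sqrt(a * lamA), np.sqrt(b * lamB)])
p = p / np.linalg.norm(p)
P = np.array([[p[0], -p[1]], [p[1], p[0]]])
full = np.kron(P.conj().T, np.eye(4)) @ C @ np.kron(P, np.eye(4))
encoded = full[:2, :2]                # top-left (select = 0, ancilla = 0)
target = (a * A + b * B) / (a * lamA + b * lamB)
```

The top-left block of P† C P matches the weighted sum exactly, with λ-value aλ_A + bλ_B, as described above.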
Here we want to block encode an operation of the following form, given for the example of three time steps:

A =
[  1                               ]
[ −V_1   1                         ]
[       −V_2   1                   ]
[             −V_3   1             ]
[                   −1   1         ]
[                       −1   1     ]
[                           −1   1 ]

(blank entries are zero). This matrix can be written in the form

A = 1 − diag(1, V_1, V_2, V_3, 1, 1, 1) S, (54)

where diag(1, V_1, V_2, V_3, 1, 1, 1) is block diagonal and S is the increment on the time register, with identities on its subdiagonal and zeros elsewhere. Here there are two operations that are trivial to implement. One is the identity, and the other is the increment on the time register. The increment is not unitary, because its top row is zero.
It can easily be block encoded by incrementing the register in a non-modular way and using the carry qubit. That is, recall that the block encoding involves a projection onto the |0⟩ state, so if the carry qubit is flipped to |1⟩ then that part is eliminated in the block encoding.
The difficult operation to block encode is the one with the V_m on the diagonal. To block encode that matrix, we use the intermediate matrix

A(δt) = diag(0, A(δt), A(∆t + δt), A(2∆t + δt), 0, 0, 0),

where δt is used to index an offset between 0 and ∆t. This matrix can be block encoded using the block encoding of A with a time register as input. This may be achieved simply by using a number of time intervals for each ∆t that is a power of 2. Then we may use the qubits encoding δt, and the qubits encoding the line in the block matrix, together to give the time input for the block encoding of A(t). For the zeros in the lower right of A(δt) we can flip an ancilla qubit to eliminate that part in the block encoding. That is most easily performed when M is a power of 2 as well, so a single qubit will flag the lower-right part of the matrix. Then we use this matrix as input to a truncated Dyson series of the form

Σ_{k=0}^K ∫_0^{∆t} dδt_k ∫_0^{δt_k} dδt_{k−1} ⋯ ∫_0^{δt_2} dδt_1 A(δt_k) ⋯ A(δt_1). (56)

In order to block encode the Dyson series, the same procedure as in [12] or [15] can be used.
To be more specific about the complexity, there are K time registers which are generated in equal superposition states and then sorted via a sorting network (as is needed for quantum algorithms). The sort has an optimal complexity of O(K log K) steps, each of which has a complexity corresponding to the number of bits used to store δt. Using M for the number of these time steps, the number of bits is log M, and so the gate complexity is O(K log K log M). The block encoding of each of the K applications of A(δt) is used, which has a complexity of K calls to the block encoding of A(t).
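The reason sorting the K equal-superposition time registers reproduces the ordered integrals is the k! multiplicity noted earlier: of k independent uniform times, only a 1/k! fraction is already ordered, so sorting covers the ordered simplex k! times over. A classical illustration of that fact:

```python
import numpy as np

# Estimate the fraction of unsorted uniform time tuples that happen to
# be in increasing order; it should be close to 1/k!.
rng = np.random.default_rng(7)
k, n = 3, 200_000
ts = rng.random((n, k))
frac_ordered = np.mean(np.all(np.diff(ts, axis=1) > 0, axis=1))
```

For k = 3 the fraction is close to 1/3! ≈ 0.1667, and sorting maps every tuple onto the ordered region.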
If the inequality-testing approach of [15] is used, then the complexity of the preparation of the time registers is O(K log M), because there are K − 1 inequality tests on registers of size log M. Despite this, in practice it is preferable to use the sorting approach, because it provides an improved constant factor in the complexity (which is ignored here in the order scaling).
The preparation of the register with values of k can be performed simply using a unary representation and controlled rotations. The precision of the truncated Dyson series needs to be O(ϵ/r), and since there are K rotations, each needs precision O(ϵ/(Kr)) and therefore has complexity O(log(Kr/ϵ)). That complexity is no larger than O(log(K) log(r/ϵ)), so multiplying by K for the K rotations gives O(K log(K) log(r/ϵ)).
It is also possible to perform the preparation of k using the improved approach of [18] based on inequality testing. That yields similar complexity for the order scaling, because the number of bits needed is O(K log K). In the method of [15] for preparing the time registers, the preparation over k only needs an equal superposition, which may be prepared with trivial O(log K) complexity [17].
Choosing λ_A ∆t ≤ 1 (and the sorting approach for the time registers), the Dyson series in Eq. (56) is block encoded with a λ-value no more than e. In particular, in Ref. [12] the normalisation factor for the Dyson series sum with times prepared by a sort is given in Eq. (55) of that work. That is for a Hermitian Hamiltonian, but the analysis is unchanged for a general matrix. In the notation of [12], λ is equivalent to λ_A here, and T/r is equivalent to ∆t here. That means the normalisation factor (λ-value) is no larger than e^{λ_A ∆t} ≤ e for λ_A ∆t ≤ 1.
This result can also be seen using the principles described above for determining λ-values for block encodings of polynomials. Equation (56) can be described by using weights of 1/k! and then a time ordering of the values of t_j. Using the same arithmetic for the λ-values as for the operators, we obtain the value of λ as

λ = Σ_{k=0}^K (λ_A ∆t)^k / k! ≤ e^{λ_A ∆t} ≤ e.

There is a further constant factor in the block encoding of the sum for A in Eq. (54), which may be ignored in our analysis because we are providing the order scaling. That is, in Eq. (54) the matrix with the V_j is the Dyson series, which is block encoded with λ-value no larger than e. The identity has λ-value of 1, as does the increment operator (the matrix with identities on the off-diagonal). In the usual way for describing the block encoding of sums and products of operators, the overall λ-value is no larger than e + 1.
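Both λ-values quoted in this section can be checked numerically (our own sketch; K = 10 is an arbitrary truncation):

```python
import math

# With lam_A * dt <= 1 the truncated Dyson series has lambda-value at
# most e, and the inhomogeneous series gives a factor of at most (e - 1)
# per unit of lam_b * dt; both sums converge rapidly in K.
lam_dt = 1.0
K = 10
lam_dyson = sum(lam_dt**k / math.factorial(k) for k in range(K + 1))
lam_inhom_over_lb_dt = sum(lam_dt**(k - 1) / math.factorial(k)
                           for k in range(1, K + 1))
```

At λ_A ∆t = 1 the first sum is within 1/(K + 1)! of e, and the second is within the same distance of e − 1.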
The linear equations to solve are of the form AX = B, as given explicitly above. We therefore need to consider the complexity of preparing the vector B on the right, which is composed of x_0, for which we are given an oracle, as well as the v_j, which need to be constructed by block encoding integrals of b(t). The integral to encode is of the form

v_m = Σ_{k=1}^K ∫_{t_0}^{t} dt_k ∫_{t_0}^{t_k} dt_{k−1} ⋯ ∫_{t_0}^{t_2} dt_1 A(t_k) ⋯ A(t_2) b(t_1), (59)

with t_0 = (m − 1)∆t and t = m∆t. The block encoding of this vector can be applied in almost an identical way as for the block encoding of the truncated Dyson series, except that the initial block encoding of A is replaced with the block encoding of the preparation of |b(t_1)⟩. Therefore, the gate complexity of preparing and sorting the time registers is again O(K log K log M), or O(K log M) if one were simply to perform inequality testing as in [15]. There are K − 1 calls to block encodings of A(t), as well as a single call to the block encoding of b(t).
For the block encoding of Eq. (59), the choice λ_A ∆t ≤ 1 means that the λ-value of the block encoding is no larger than (e − 1)λ_b ∆t. As before, the λ-value can be determined using the same arithmetic, and we can use a factor of 1/k! followed by a time ordering of the t_j. That gives the λ-value as

λ_v = (λ_b/λ_A) Σ_{k=1}^K (λ_A ∆t)^k / k! ≤ (λ_b/λ_A)(e^{λ_A ∆t} − 1) ≤ (e − 1)λ_b ∆t, (60)

where the last inequality uses e^z − 1 ≤ (e − 1)z for 0 ≤ z ≤ 1. For the complete preparation, it is necessary to prepare x(0) as well. We would initially prepare a state on the time registers of the form

(λ_x |0⟩ + λ_v Σ_{m=1}^r |m⟩)/√(λ_x² + r λ_v²).

Then a preparation of |x(t_0)⟩ controlled on |0⟩ and a preparation of |v_m⟩ controlled on |m⟩ gives a state of the form

(|0⟩|x(t_0)⟩ + Σ_{m=1}^r |m⟩|v_m⟩)/√(λ_x² + r λ_v²) + |⊥⟩,

where |⊥⟩ is orthogonal to the flagged part. The amplitude for success is then

√(∥x(t_0)∥² + Σ_{m=1}^r ∥v_m∥²) / √(λ_x² + r λ_v²). (63)

The expression here is in terms of the v_m, which will be constructed with the truncated series and discretised integrals, but the parameters will be chosen so that the error in each is no more than ϵ x_max/r. As a result,

∥v_m∥ ≥ ∥v(m∆t, (m − 1)∆t)∥ − ϵ x_max/r. (64)

It would be expected that the correction here would typically be small enough to ignore. If the norm of b does not vary too much and there are no cancellations in the integral, then the expression is proportional to b_max ∆t. Since it would be expected that λ_b is comparable to b_max, in such a case the amplitude for success would be at least a constant. For full generality we use the full expression rather than considering typical cases.
The preparation of B needs to be performed twice in the walk step used in the solution of the linear equations using the approach in Ref. [9]. That would lead to the square of this amplitude for success in the overall amplitude for this walk step. To avoid this squared factor, we can perform amplitude amplification on the state preparation to boost its amplitude to Θ(1). The number of steps of amplitude amplification will be the inverse of the amplitude in Eq. (63). Usually the amplitude will be unknown, but it can be determined at the beginning of the procedure with a logarithmic overhead. That amplitude estimation is considerably less costly than the cost of solving the linear equations, which has an extra factor of the condition number κ_A.
There is also the initial preparation needed for the time register of the state. This can be performed as follows. First perform a rotation on an ancilla qubit to obtain the appropriate weighting between m = 0 and m = 1, …, r. If r is a power of 2, then controlled on this qubit being |0⟩ we perform Hadamards on the log r qubits for the time, giving values from 0 to r − 1. In the more general case where r is not a power of 2, one can perform a controlled preparation over a number of basis states that is not a power of 2, which has complexity O(log r) [17].
We can then perform CNOTs from the ancilla qubit to the remaining qubits for the time, so that if this control qubit was |1⟩ then we have all ones. Then, adding 1 in a modular way with the ancilla as the most significant bit, if the ancilla was |1⟩ then we get all zeros, and if it was |0⟩ we get values from 1 to r. The complexity of this procedure is O(log(1/ϵ)) for the initial rotation and O(log r) for the remaining steps. This complexity can be disregarded because it is smaller than many other parts of the procedure.

Time-independent block encoding
The time-independent case is substantially simplified compared to the time-dependent case. First, the block encodings are simplified to

(⟨0|_A ⊗ 1) U_A (|0⟩_A ⊗ 1) = A/λ_A, U_b |0⟩ = |b⟩/λ_b, U_x |0⟩ = |x_0⟩/λ_x,

where there is now no time register for the input and the preparation of |b⟩ is just via a unitary operation. We now omit the s subscript because there is just the one register.
This means that now λ_b = ∥b∥. Again the block encoding can use Eq. (54), except with V_m replaced with the Taylor series V as given in Eq. (40), and v_m replaced with v as given in Eq. (41). The block encoding proceeds in exactly the same way as before, except that there is no need for the time integrals. This means that we need to prepare a register with k for both the Taylor series and the series for v. Again, if λ_A ∆t ≤ 1, then the Taylor series is block encoded with a factor of at least 1/e.
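The truncated series V and v from Eqs. (40) and (41) can be checked classically against the exact expressions exp(A∆t) and A^{-1}(exp(A∆t) − 1)b (our own sketch with arbitrary example inputs):

```python
import numpy as np
from scipy.linalg import expm

# V_K = sum_{k=0}^K (A dt)^k / k!  and  v_K = sum_{k=1}^K A^{k-1} dt^k / k! b,
# built with multiplicative term updates.
def taylor_step(A, b, dt, K):
    N = A.shape[0]
    V = np.eye(N, dtype=complex)        # running sum for V_K
    T = np.eye(N, dtype=complex)        # current term (A dt)^k / k!
    S = np.zeros((N, N), dtype=complex)
    P = np.eye(N, dtype=complex) * dt   # current term A^{k-1} dt^k / k!
    for k in range(1, K + 1):
        T = T @ A * dt / k
        V = V + T
        S = S + P
        P = P @ A * dt / (k + 1)
    return V, S @ b

A = np.array([[-1.0, 0.3], [0.0, -0.5]])
b = np.array([1.0, 2.0])
dt = 0.4
V, v = taylor_step(A, b, dt, K=12)
errV = np.linalg.norm(V - expm(A * dt))
errv = np.linalg.norm(v - np.linalg.solve(A, (expm(A * dt) - np.eye(2)) @ b))
```

With ∥A∥∆t below 1 and K = 12 the truncation errors are already far below typical target precisions.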
We can also circumvent the problem we had in the time-dependent case that there are possible cancellations in v_m. That is, we have

∥v∥ ≥ ∆t ∥b∥ − Σ_{k=2}^K (∥A∥∆t)^{k−1} ∆t ∥b∥ / k! ≥ (3 − e) ∆t ∥b∥,

where we have used ∥A∥∆t ≤ λ_A ∆t ≤ 1. We would then find that the amplitude for success of the state preparation, using Eq. (63), is lower bounded by a constant. Thus the amplitude for success of the state preparation is at least a constant here, so it cannot affect the scaling of the complexity and can be omitted in the O expressions.

Complexity of solution

Time-dependent complexity
Before giving the overall complexity, we give further discussion of the choice of K. As explained above, the error due to the truncation is given in Eq. (14), and the error in the first term (due to the homogeneous component of the solution) can be appropriately bounded by choosing K as in Eq. (7).
The complete expression for the error in Eq. (14) accounts for the inhomogeneity in the second term. The expression for the inhomogeneous error can also be bounded with λ_A replacing A_max, because λ_A ≥ A_max, which gives this term as

O(b_max ∆t (λ_A ∆t)^K/(K + 1)!).

For our algorithm we will be choosing ∆t such that λ_A ∆t ≤ 1, which means that this error is bounded as

O(b_max ∆t/(K + 1)!).

Then we can choose K to make this error no larger than ϵ x_max/r. With ∆t ≈ 1/λ_A, we have r = O(λ_A T), which gives

K = O( log(b_max T/(ϵ x_max)) / log log(b_max T/(ϵ x_max)) ).

Using r = O(λ_A T) in Eq. (7) for the choice of K for the homogeneous error gives

K = O( log(λ_A T/ϵ) / log log(λ_A T/ϵ) ).

Since we need to ensure that both error terms in Eq. (14) are appropriately bounded, we should take the maximum of these two scalings of K, which can be expressed as

K = O( log(λ_Ax T/ϵ) / log log(λ_Ax T/ϵ) ),

where

λ_Ax = max(λ_A, b_max/x_max).

We will be measuring the error between pure states via the 2-norm distance, but we also need to account for cases where the approximate state generated is not pure. We therefore use the Bures-Wasserstein metric [4]

d_BW(ρ, σ) = [ Tr(ρ) + Tr(σ) − 2 Tr√(√ρ σ √ρ) ]^{1/2}.

This measure is for states that are not normalised, which is why it includes the traces. In the case that they are normalised, this measure is the Bures distance. For the case where the two states are pure, so ρ = |ψ⟩⟨ψ| and σ = |ϕ⟩⟨ϕ|, then provided ⟨ψ|ϕ⟩ is real (and positive) we obtain

d_BW(ρ, σ) = ∥ |ψ⟩ − |ϕ⟩ ∥.

Therefore this measure can be regarded as a generalisation to mixed states of the 2-norm distance. Another feature of this measure is that d²_BW is convex in ρ. This property follows from the concavity of the fidelity and of the square root.
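The reduction of d_BW to the 2-norm distance for pure states with real positive overlap can be checked numerically (our own sketch; `psd_sqrt` is our own helper):

```python
import numpy as np

# Hermitian square root of a positive semidefinite matrix via eigh,
# which is robust for the rank-deficient matrices of pure states.
def psd_sqrt(M):
    w, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(w, 0, None))) @ U.conj().T

# Bures-Wasserstein distance for (possibly unnormalised) states.
def d_bw(rho, sigma):
    s = psd_sqrt(rho)
    cross = np.trace(psd_sqrt(s @ sigma @ s)).real
    return np.sqrt(abs(np.trace(rho).real + np.trace(sigma).real - 2 * cross))

psi = np.array([1.0, 0.5])   # unnormalised pure states, real overlap > 0
phi = np.array([0.9, 0.7])
dist = d_bw(np.outer(psi, psi), np.outer(phi, phi))
```

For these vectors the overlap is real and positive, so the distance coincides with the ordinary 2-norm distance between the state vectors.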
The overall complexity of the quantum algorithm for time-dependent differential equations is given by the following theorem.
Theorem 4.1 We are given an ordinary linear differential equation of the form dx/dt = A(t)x(t) + b(t), where b(t) ∈ C N is a vector function of t, A(t) ∈ C N ×N is a coefficient matrix with non-positive logarithmic norm, and x(t) ∈ C N is the solution vector as a function of t.
The parameters of the differential equation are provided via unitaries U A , U b , and U x with known λ A , λ b , λ x . A quantum algorithm can provide an approximation x of the solution |x(T )⟩ satisfying d BW ( x, |x(T )⟩ ⟨x(T )|) ≤ ϵx max using an average number O (Rλ A T log (λ Ax T /ϵ) log(1/ϵ)) of calls to U A and O (Rλ A T log(1/ϵ)) of calls to U b , U x , where λ Ax = max (λ A , b max /x max ), given constants R, D, and so on satisfying the bounds used in the proof. The condition that the logarithmic norm is non-positive ensures that the differential equation is dissipative. We use the convention that state vectors such as |x 0 ⟩ s are the unnormalised form with amplitudes exactly equal to the coefficients of the corresponding complex vector. For the unnormalised solution state | x⟩, the accuracy constraint is understood in the sense that there exists some constant by which the normalised quantum state can be multiplied to give a | x⟩ satisfying the constraint.
We require that we are given upper bounds R, D, and so on, because it may be too difficult to determine the maxima and minima required exactly. For completeness we have given a complicated expression for R to account for the steps of amplitude amplification needed, but in practice, if the solution does not decay significantly and b(t) does not vary such that it cancels and gives small v, then it can be expected that R = O (1) and can be ignored in the scaling of the complexity.
For the count of additional gates, we are not allowing arbitrary-precision rotations. Instead, the count is for a fixed set of gates, such as Toffolis (or T gates) plus Clifford gates. The complexity is then equivalent (up to a constant factor) to the non-Clifford count that is often used in quantifying complexities of algorithms intended for error-corrected quantum computers with magic-state factories.

Proof.
The principle we use for the solution is to express the solution in terms of a Dyson series over many time steps, encoded in a system of linear equations as shown in Eq. (58). We then use the linear equation solver of Ref. [9], which has complexity O (κ A log(1/ϵ)) in terms of calls to the block-encoded matrix A and vector B. We then need to perform amplitude amplification to obtain the solution at the final time from the vector with the solution over all times. In order to determine the complexity, we therefore need to determine κ A , the complexity of block encoding A, B, and the amplitude of the component of the solution at the final time. We will show that the factor of R comes from the complexity of that amplitude amplification, as well as from amplitude amplification as part of preparing B. Moreover, we will show that λ A T comes from the factor of κ A , and log (λ Ax T /ϵ) comes from the order K used in the Dyson series for block encoding A, B.
The simplest part is the condition number, which is κ A = O (R) as in Eq. (38). Here, R is the total number of time steps, including those at the end where the solution is held fixed at the final time. This is the standard approach to ensure that there is a reasonable amplitude for the solution at the final time. We use an equal number of time steps for the time evolution r as for those where the solution is held fixed, so that R = 2r and κ A = O (r). Because r = T /∆t, to minimise the complexity we should choose ∆t as large as possible. The restriction limiting how large ∆t can be is that λ A ∆t ≤ 1, which we use to ensure that the λ-value for the block encoding of A is O (1). That means we should take ∆t ≈ 1/λ A , so r = λ A T . The complexity of solving the system of linear equations is then O (λ A T log(1/ϵ QLSP )) in terms of calls to block encodings of A, B, where ϵ QLSP is the allowable error in the solution of the system of linear equations. For the complexity we need to relate that error to the allowable error in the solution.
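The structure of the linear system can be illustrated classically in a toy setting. The sketch below (our own construction, for the time-independent homogeneous case, with a truncated Taylor series standing in for the Dyson series per step) builds the block-bidiagonal system with r evolution steps followed by r padding steps that copy the state, and solves it to recover the history of the evolution.

```python
import math
import numpy as np
from scipy.linalg import expm

def history_system(A, x0, T, r, K):
    """Build and solve the linear system encoding time evolution over R = 2r steps.

    Toy, time-independent, homogeneous version: a truncated Taylor series for
    exp(A*dt) stands in for the Dyson series in each step; the last r steps copy
    the state so the final-time solution has a large amplitude in the history.
    """
    N = len(x0)
    dt = T / r
    # Truncated-series one-step propagator (order K).
    W = sum(np.linalg.matrix_power(A * dt, k) / math.factorial(k)
            for k in range(K + 1))
    R = 2 * r
    L = np.eye((R + 1) * N)
    for j in range(R):
        blk = W if j < r else np.eye(N)   # hold the solution fixed after step r
        L[(j + 1) * N:(j + 2) * N, j * N:(j + 1) * N] = -blk
    B = np.zeros((R + 1) * N)
    B[:N] = x0                            # initial condition in the first block
    return np.linalg.solve(L, B)
```

Solving this system yields blocks x_1 ≈ W x_0, x_2 ≈ W² x_0, and so on, with the last r + 1 blocks all equal to the approximation of x(T); half of the history is then "good" for the final-time amplitude amplification described below.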
To achieve that, let us denote the approximate solution obtained at time t by | x(t)⟩, to be compared with the correct solution of the linear equations |x(t)⟩. Because the solution of the linear equations is approximate, the components of the approximate solution vector X need not be equal in the second half as they are for the exact solution. We are using | x(t)⟩ for the components of X , so to account for the unequal components in the notation, we use | x(t)⟩ with values of t > T , and similarly for x.
When the solution of the linear equations is accurate to within error ϵ QLSP , the approximate solution x satisfies a corresponding bound. Because we take R = 2r, we can take ϵ QLSP ∝ ϵ and the desired precision will be obtained.
Note that there will be error in other steps of the algorithm, so the error in solving the linear equations will be taken to be some fraction of the total allowable error ϵ. As usual in analysis of this type, the constant factor can be omitted when giving complexity scalings. Therefore the complexity of solving the linear equations can be given as O (λ A T log(1/ϵ)) in terms of calls to block encodings of A, B.
The complexity of block encoding A can be determined from the complexity of block encoding the Dyson series in Eq. (56), with other costs being trivial. That Dyson series uses calls to the oracle U A , but not U b , U x . Because the Dyson series is truncated at order K, it can be block encoded with K calls to U A , with K as given in Eq. (76). That gives the factor of log (λ Ax T /ϵ) in the complexity of calls to U A .
The complexity of block encoding B is similar, except we also need to account for calls to U b , U x , as well as for the complexity of amplitude amplification. The form of the Dyson series used to prepare B is given in Eq. (59). The block encoding of this Dyson series uses K − 1 calls to U A and a single call to U b . Moreover, a single call to U x is used to prepare the component of B with the initial state. Therefore the factor of K applies to calls to U A , but not to U b , U x .
Next, the number of steps of amplitude amplification for preparing the state for B is given in Eq. (65). Using ∆t ∝ 1/λ A gives the number of steps, which is the second factor in the expression for R.
Next we consider the number of steps of amplitude amplification needed to obtain the solution at the final time. This amplitude amplification is performed on the state obtained from the linear equation solver. The amplitude of the component of the state at the final time determines the average number of steps of amplitude amplification, as in Eq. (97). We bound the numerator and denominator of Eq. (97) separately, using ϵ QLSP as a suitable fraction of ϵ for the latter. Now, provided ∥x(T )∥ ≥ ϵx max , we obtain the required bound. If ∥x(T )∥ < ϵx max then this is a pathological case where the size of the solution is smaller than the precision required of the algorithm; for any output state we can give x = 0 and the solution will have the desired precision. Excluding that pathological case, the overall factor required in the complexity is this number of steps of amplitude amplification for obtaining the final time, multiplied by the number of steps of amplitude amplification for preparing B given in Eq. (95). That product is the lower bound for R given in the statement of the theorem. A minor subtlety in applying amplitude amplification to the result of the quantum linear equations algorithm is that the algorithm as given in Ref. [9] uses measurements to ensure the correct result of filtering, which would not be compatible with amplitude amplification. However, it is trivial to omit that measurement and perform the amplitude amplification jointly on the successful result of the filtering and on obtaining the final time in the solution state. That introduces no further factors in the complexity.
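The dependence of the amplification cost on the norms of the history-state components can be made concrete with a small classical estimate. In this sketch (the function and variable names are ours, purely illustrative) the success amplitude is the norm of the final-time components relative to the whole history, and the number of rounds scales inversely with it.

```python
import math
import numpy as np

def aa_rounds(history, r):
    """Estimate amplitude-amplification rounds for projecting a history state
    onto its final-time components.

    history: per-step solution vectors x_0, ..., x_R, where the last r entries
    all repeat x(T) (the padding steps that hold the solution fixed).
    """
    total = sum(np.linalg.norm(x) ** 2 for x in history)
    success = sum(np.linalg.norm(x) ** 2 for x in history[-r:])
    amp = math.sqrt(success / total)          # amplitude of the good component
    return max(1, math.ceil(math.pi / (4 * amp)))
```

When the solution does not decay, the padding makes the success amplitude a constant and only O(1) rounds are needed; a strongly decaying solution drives the amplitude down and the round count up, which is the behaviour captured by the factor R.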
Hence, we have proven the factor of R in the complexities for U A and U b , U x , with the factor λ A T arising from the factor of κ A in the complexity of solving linear equations, log(1/ϵ) coming from the other factor in that complexity, and the factor of log (λ Ax T /ϵ) for the U A complexity coming from the order K used in the Dyson series. It just remains to account for the number of additional gates.
The majority of the complexity for the additional gates is in constructing the registers for the time. We need to construct K registers for the time, and sort them in order to obtain the correctly ordered time registers. Similarly to the case of Hamiltonian simulation in Ref. [12], the complexity of these steps is then O (K log K log M ), where the factor of K log K is for the number of steps in the sort, and the factor of log M is for the number of qubits to store each time. Taking K as in Eq. (76), we obtain K log K = O (log(λ Ax T /ϵ)).
That accounts for the factor of log(λ Ax T /ϵ) before the square brackets given for the number of additional gates. Next we consider the contribution to the complexity from the factor of log M . The value of M needs to be chosen large enough that the error from discretisation of the integrals in the Dyson series is no larger than ϵ/r for each of the r time segments. Exactly the same derivation as for the time-dependent Hamiltonian in Ref. [12] holds, where the result is given in Eq. (58) and the following text of that work. For completeness we give the derivation of the error in Appendix A; the resulting bound involves W K (t α , t β ) as given by Eq. (5), with t α − t β = T /r. This expression corresponds to discretising the integral over each time variable to approximate it by a sum with M terms. In Eq. (102) we also have A ′ (t), which is the time derivative of A(t).
A similar bound holds for the error in the integral for the driving term due to time discretisation of Eq. (9). The first term in Eq. (104) comes from k = 1, and the second comes from k = 2; see Appendix A for details of the derivation. Now, because the error in each evolution operator W K should be no larger than ϵ/r, the expression for the error given in Eq. (102) determines the choice of M in Eq. (105). Recall that the error in W K needs to be no larger than ϵ/r because we multiply this operator with the vector x to give a final error in the vector no larger than ϵx max . The error in v K should be no larger than ϵx max /r, because it translates directly to error in x.
The requirement to bound the first term in Eq. (104) gives condition (106), and bounding the second term in Eq. (104) gives condition (107). The smallest M satisfying all three conditions (105), (106), and (107) then follows with our bound on D. Taking the log of this expression gives the first term in square brackets for the number of additional gates in the theorem. Lastly, we consider the complexity of the rotations used for preparing the k register. As explained above, these give complexity O (K log K log(r/ϵ)). Using Eq. (76) for K gives K log K = O (log(λ Ax T /ϵ)), and O (log(r/ϵ)) gives another factor of O (log(λ A T /ϵ)), which is given as the second term in the square brackets for the gate complexity. In practice it would be expected that this term is smaller.
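The first-order dependence of the discretisation error on 1/M, which drives the conditions on M above, can be checked numerically for a single integral. A minimal sketch (the function name is ours) approximates an integral by rounding the time down to the nearest multiple of dt, exactly as in the discretised Dyson integrals:

```python
import numpy as np

def rounded_down_sum(f, a, b, M):
    """Approximate the integral of f over [a, b] by rounding the time down to
    the nearest multiple of dt = (b - a)/M, as in the discretised Dyson
    integrals (scalar sketch)."""
    dt = (b - a) / M
    return dt * np.sum(f(a + dt * np.arange(M)))
```

Doubling M should roughly halve the error, consistent with the O(1/M) error bound that depends on the first derivative of the integrand.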
There are also operations needed on the approximately log r qubits for preparing the time registers, but this complexity is no more than O (K) for each time step. That is smaller than the other complexities, so can be omitted in the order scaling. □

A similar form of scaling of the complexity was given for the spectral method in [5]. The statement of the complexity in that work gave the log factors in Theorem 1 just as polylog, but the true scaling can be seen in their proof in Eq. (8.15). There the complexity is given as proportional to n 4 times a polylog factor coming from solving systems of linear equations. Their quantity n scales as the log of 1/ϵ, T , and the derivatives of the solution.
Therefore the approach of [5] gives scaling as the fourth power of these logs, times whatever factor is obtained from the solution of linear systems. In that work the linear equation solver of Childs, Kothari, and Somma [6] was used, which gives a complexity superlinear in log(1/ϵ). The approach of [5] can also be used with the optimal solver of [9], with just linear complexity in log(1/ϵ), as we do here.
Assuming that optimal solver for a fair comparison, the scaling of the number of calls to a block encoding of A(t) in terms of T and ϵ in [5] is as log 4 (T /ϵ) log(1/ϵ) (with the second log from the solver). In contrast, our complexity is log(T /ϵ) log(1/ϵ). Moreover, [5] has the same log 4 (T /ϵ) log(1/ϵ) complexity in calls to block encodings of b(t), whereas our complexity is greatly improved to just the log(1/ϵ) factor from the linear equation solver. That is because b(t) is only needed once in the Dyson series solution.
A further improvement in our approach is that the number of oracle calls is independent of derivatives of A(t) and b(t). In [5] the complexity scales as the fourth power of the log of arbitrary-order derivatives through their variable g ′ . In our approach there is dependence on derivatives only in the number of additional gates (needed for addressing time registers), and that complexity depends only on the first-order derivatives.

4.2 Time-independent complexity
In the time-independent case the complexity becomes as follows.
Theorem 4.2 We are given an ordinary linear differential equation of the form dx/dt = Ax(t) + b, where b ∈ C N is a constant vector, A ∈ C N ×N is a coefficient matrix with non-positive logarithmic norm, and x(t) ∈ C N is the solution vector as a function of t. The parameters of the differential equation are provided via unitaries U A , U b , and U x . A quantum algorithm can provide an approximation x of the solution |x(T )⟩ satisfying d BW ( x, |x(T )⟩ ⟨x(T )|) ≤ ϵx max using an average number of calls of the same form as in the time-dependent case, where λ Ax = max (λ A , ∥b∥/x max ), given constants satisfying the corresponding bounds.

Proof. The proof proceeds in exactly the same way as for the time-dependent case, except for a few minor amendments. First, the state preparation succeeds with probability at least a constant, so the factor from this amplitude does not appear in R. The other difference in the proof is that the number of additional gates is greatly reduced. We no longer need to perform any arithmetic on time registers, so the log (T D/λϵ) factor from the time-dependent cost is removed. However, we do still need to perform rotations and controlled rotations to prepare the register with k. As a result, we still have the extra factor of log (λT /ϵ). □
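The origin of the log (λT /ϵ) factor from the series order can be illustrated with a small calculation. The sketch below (our own naming) finds the smallest truncation order K such that the standard remainder bound z^(K+1)/(K+1)! on the exponential series is below ϵ, for z an upper bound on λ∆t:

```python
import math

def series_order(eps, z=1.0):
    """Smallest truncation order K with z**(K+1)/(K+1)! <= eps, a bound on the
    error of truncating the exponential series when lambda * dt <= z.
    (Illustrative sketch; the function name is ours.)"""
    K = 0
    while z ** (K + 1) / math.factorial(K + 1) > eps:
        K += 1
    return K
```

The resulting K grows very slowly with 1/ϵ, consistent with the K = O (log(1/ϵ)/ log log(1/ϵ)) type scaling behind the logarithmic error dependence: halving the exponent of ϵ from 1e-10 to 1e-20 increases K only from 13 to 21.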

Conclusion
We have given a quantum algorithm to solve a time-dependent differential equation, in the sense that it outputs a quantum state with amplitudes encoding the solution vector. This is a distinct method from that used for the time-independent case in [3], where different orders of the sum were encoded in successive lines of the block matrix. In our approach the block matrix has each successive line encoding a new time step, and it employs a block encoding of the Dyson series in a similar way as for Hamiltonian simulation in [12]. This approach makes the analysis of the complexity simpler than in [3], because it is not necessary to account for the extra lines in the encoding. As in [14], the expression for the complexity is simplified by using a condition on the logarithmic norm of A(t). This approach avoids needing A(t) to be diagonalisable. These techniques also give a significantly simplified result for the complexity in the time-independent case.
The complexity is, up to logarithmic factors, linear in the total evolution time T and the constant λ in the block encoding of A(t). By applying the optimal quantum linear equation solver of [9], we obtain excellent scaling of the complexity in log(1/ϵ). The number of calls to encodings of the driving and initial state is linear in log(1/ϵ), whereas the number of calls to the block encoding of A(t) is quadratic in log(1/ϵ).
The complexity depends only logarithmically on the first derivatives of A(t) and the driving, and only the additional gates depend on these derivatives, not the calls to the block encoding. We have expressed the complexity in terms of maxima of these quantities. It is also possible to express the complexity in terms of averages via the approach in [15], though the derivation is considerably more complicated. A bound could also be given in terms of the total variation distance as in [10], which is useful when there are discontinuities. Those bounds come from a more careful analysis of the error in the discretisation of the integrals, rather than from a different algorithm.
The way the result for the complexity is expressed is complicated by the need to account for the number of steps of amplitude amplification, which in turn depends on the norms of the v vectors. This full expression accounts for pathological cases, which could potentially be ruled out by a more careful analysis to give a simplified expression.
Our result gives a significant improvement in the log factors over the spectral method of [5]. This raises the question of whether our complexity is optimal, or whether further improvements are possible. We have log factors appearing from the optimal linear equation solver and from the order of the Dyson series, so any further improvement would require a radically different approach.

A Error estimates for time discretisation
First we consider the error in the sum approximation of the integrals in the definition of W K . In the integral form of W K , T reorders the A matrices in descending time order, and we take t α ∈ {0, ∆t, 2∆t, …, T − ∆t} and t β = t α + ∆t. The sum approximation of W K uses δt = ∆t/M . The sum can be rewritten as an integral in which each time t ℓ is replaced by t j ℓ , its value rounded down to the nearest multiple of δt. This integral form of the sum holds because each t j ℓ is constant over each interval of length δt.
Then the error due to the time discretisation can be bounded by integrating the difference between the exact and rounded-down integrands, which gives the bound used in Eq. (102).
Next we bound the error of the particular solution Eq. (9) when the truncated Dyson series is approximated by the discretised integrals. Here the time-ordering operator T is used to mean that the times are sorted in ascending order, not that the order of A and b is changed, because b must always go on the right. Again we can express the summation in the form of an integral with the times t j ℓ rounded down to the nearest multiple of δt, and then upper bound the error in the discretised integrals for each k.
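The rounded-down discretisation of the time-ordered integrals above can be sanity-checked numerically in the commuting (scalar) case, where the order-2 time-ordered term equals (∫a)²/2 in the M → ∞ limit. A toy sketch (our naming; the residual diagonal term is the O(1/M) discretisation error being bounded):

```python
import numpy as np

def dyson2_discrete(a, t0, t1, M):
    """Discretised second-order time-ordered term: sum over j1 <= j2 of
    a(t_{j2}) * a(t_{j1}) * dt^2, with each time rounded down to a multiple
    of dt = (t1 - t0)/M. Scalar toy version; converges to (integral of a)^2/2
    with O(1/M) error from the rounding and the j1 = j2 diagonal."""
    dt = (t1 - t0) / M
    vals = a(t0 + dt * np.arange(M))
    cum = np.cumsum(vals)                 # inner sums over j1 <= j2
    return float(np.sum(vals * cum)) * dt * dt
```

Doubling M roughly halves the deviation from the exact time-ordered value, matching the first-derivative-dependent error bounds derived in this appendix.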