Real-Time Krylov Theory for Quantum Computing Algorithms

Correspondence: Yizhi Shen, yizhis@lbl.gov; Norm M. Tubman, norman.m.tubman@nasa.gov

Quantum computers provide new avenues to access ground and excited state properties of systems otherwise difficult to simulate on classical hardware. New approaches using subspaces generated by real-time evolution have shown efficiency in extracting eigenstate information, but the full capabilities of such approaches are still not understood. In recent work, we developed the variational quantum phase estimation (VQPE) method, a compact and efficient real-time algorithm to extract eigenvalues on quantum hardware. Here we build on that work by theoretically and numerically exploring a generalized Krylov scheme where the Krylov subspace is constructed through a parametrized real-time evolution, which applies to the VQPE algorithm as well as others. We establish an error bound that justifies the fast convergence of our spectral approximation. We also derive how real-time subspace diagonalization suppresses the overlap with high-energy eigenstates, and we visualize the signature phase cancellations at specific eigenenergies. We investigate various algorithm implementations and consider performance when stochasticity is added to the target Hamiltonian in the form of spectral statistics. To demonstrate the practicality of such real-time evolution, we discuss its application to fundamental problems in quantum computation, such as electronic structure predictions for strongly correlated systems.


Introduction
Quantum computers offer the promise of improvements over their classical counterparts for tackling a class of problems central to the mathematical and physical sciences by encoding information as quantum many-body states. However, given current limitations on the assembly and control of scalable quantum computers, efficient usage of quantum resources for specific tasks [1][2][3][4][5][6][7][8][9] is considered essential in the noisy intermediate-scale quantum (NISQ) era [10][11][12]. As one of the most prominent algorithms, quantum phase estimation (QPE) [13] resolves the core task of Hamiltonian diagonalization but necessitates a relatively high simulation cost. Consequently, approaches relying on variational algorithms [14][15][16][17][18][19] have been pursued, focused on balancing resource allocation. They generally do so by preparing and measuring parametrized states on a quantum computer while steering parameter updates through optimization routines on a classical computer. This hybridization allows for a speedup of high-dimensional problems on near-term hardware, yet comes with complexities depending on the choice of variational ansatz. Fortunately, these additional complexities may be alleviated by clever and flexible ansatz design that fully accommodates the architecture of a given quantum device [20,21]. Among such hybrid quantum-classical approaches, subspace expansion techniques employing real-time quantum dynamics [22][23][24][25] have shown evidence of advantages on near-term hardware. One representative approach is the so-called variational quantum phase estimation (VQPE) studied and developed recently [26]. VQPE shares the merits of variational approaches and bypasses conventional optimization procedures by solving generalized eigenvalue equations with information gathered from real-time evolution [27][28][29][30], which is unitary and thereby native to quantum hardware.
Moreover, real-time evolution with VQPE enables access to the excited state manifold and requires quantum measurements merely linear in the dimension of expansion subspace. Because of its compactness, VQPE stands out as a promising algorithm for the NISQ era.
Recent theoretical development [26] of real-time evolution highlights the phase cancellation intuition for perfect spectral recovery, where eigenspaces of a target Hamiltonian operator are extracted exactly provided (i) the number of evolution timesteps matches the size of the Hilbert space and (ii) the time-evolved phases satisfy a set of geometrically meaningful sum rules. However, fulfillment of such phase conditions is only a serious consideration when the full spectrum of the Hamiltonian is needed. In reality, the low-energy part of the spectrum often suffices under many circumstances of interest. In this regard we present a complementary perspective on VQPE for its main use case, where the number of timesteps is kept significantly smaller than the size of the Hilbert space, demonstrating that real-time evolution remains powerful for ground and low-lying excited state recovery. For generality, we formalize the real-time approach as a parametrizable variant of the Krylov method [31][32][33][34][35] with the evolution timestep acting as the hyperparameter. We suggest weaker phase cancellation conditions for accurate spectral approximation, and examine the effects of stochasticity on the observed convergence. To illustrate its appealing practicalities, we also discuss how the real-time Krylov theory can be integrated into quantum computing algorithms.

Contributions
In the following sections, we share four main results for understanding the properties of the real-time Krylov method based on the generation of states from Hamiltonian evolution. We first demonstrate and visualize the convergence for single-step simulation, and then turn to multi-step simulation (Sections 3-5). Next we provide a proof of the convergence with an increasing number of timesteps (Section 5). Finally, we consider and assess an iterative implementation of the method for generating real-time states and show how this can further improve the convergence behavior (Section 6).

Review of the Krylov method
The Krylov subspace method [32] is a common numerical tool to extract useful spectral information from some operator Ĥ over a Hilbert space H. The method proves particularly powerful for approximating the extreme ends of the operator spectrum. Here we briefly review how the method works and set up the notational convention for the remaining sections. Throughout this work we assume the operators to be self-adjoint, Ĥ = Ĥ†.
The Krylov method computes the eigenspaces by compressing the target operator, Ĥ, onto a lower-dimensional subspace known as the Krylov subspace,

K(Φ0; N_T) = span{|Φ0⟩, Ĥ|Φ0⟩, · · · , Ĥ^{N_T}|Φ0⟩},   (1)

where a number of N_T repeated Ĥ-multiplications is applied to an initial vector |Φ0⟩. In the language of matrix algebra, diagonalization within the Krylov subspace amounts to solving the generalized eigenvalue problem,

H ⃗c_n = Ẽ_n S ⃗c_n,   (2)

where H and S represent the target and overlap matrices in the Krylov basis, H_jk = ⟨Φ_j|Ĥ|Φ_k⟩ and S_jk = ⟨Φ_j|Φ_k⟩, while ⃗c_n gives the expansion coefficients of an approximate eigenvector having the eigenvalue Ẽ_n. In practice, an initial vector |Φ0⟩ can be chosen to make the Krylov vectors all linearly independent. The Krylov method thereby extracts a subset of the target spectrum by factorizing the reduced matrix H. Its efficiency is manifested especially when N_T ≪ dim H.
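The projected diagonalization of Eq. (2) can be sketched in a few lines of numpy. This is an illustrative implementation, not the paper's code: the Löwdin-style orthogonalization of the overlap matrix (with a small eigenvalue cutoff for numerical stability) is one of several standard ways to solve the generalized eigenvalue problem.

```python
import numpy as np

def krylov_eigs(H, phi0, n_steps):
    """Diagonalize H compressed onto the Krylov subspace
    span{phi0, H phi0, ..., H^n_steps phi0} (Eq. (1))."""
    # Build the (non-orthogonal) Krylov basis by repeated H-multiplication,
    # normalizing each vector to keep the overlap matrix well scaled.
    basis = [phi0 / np.linalg.norm(phi0)]
    for _ in range(n_steps):
        v = H @ basis[-1]
        basis.append(v / np.linalg.norm(v))
    V = np.column_stack(basis)
    Hk = V.conj().T @ H @ V          # target matrix H_jk = <Phi_j|H|Phi_k>
    S = V.conj().T @ V               # overlap matrix S_jk = <Phi_j|Phi_k>
    # Solve H c = E S c by whitening S (Lowdin orthogonalization),
    # discarding numerically dependent directions.
    s, U = np.linalg.eigh(S)
    keep = s > 1e-10
    T = U[:, keep] * s[keep] ** -0.5
    return np.sort(np.linalg.eigvalsh(T.conj().T @ Hk @ T))

# Toy check: for a 50x50 random symmetric H, a handful of Krylov vectors
# already bracket the extreme ends of the spectrum.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2
E_approx = krylov_eigs(H, rng.standard_normal(50), 6)
E_exact = np.linalg.eigvalsh(H)
```

By the min-max principle, the projected extreme eigenvalues always lie inside the true spectral range, converging outward as the subspace grows.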

Generalized Krylov method from unitary action
We consider a parametrizable variant of the standard Krylov method, where a series of discrete time values are used to exponentiate Ĥ and thus constitute the free (hyper)parameters. Specifically, we allow the following generalized notion of Krylov subspace: for the initial vector |Φ0⟩, we apply unitary evolution of the form,

Û(t_j) = exp(−iĤt_j),

where 0 = t_0 < t_1 < · · · < t_{N_T} < ∞ records the timestamps of the evolution and i² = −1 denotes the imaginary unit. The evolved vectors generate a subspace,

K_Û(Φ0; N_T) = span{|Φ0⟩, Û(t_1)|Φ0⟩, · · · , Û(t_{N_T})|Φ0⟩},

over which we can solve the eigenvalue problem of Eq. (2). The free hyperparameter in this algorithm, the time grid ⃗t = (t_1, · · · , t_{N_T}), effectively accommodates the linear independence of {|Φ_j⟩}, 0 ≤ j ≤ N_T. For a linear time grid t_j = j∆t, VQPE reduces to the standard Krylov subspace method applied to the operator,

Û(∆t) = exp(−iĤ∆t) = Σ_n exp(−iE_n∆t)|n⟩⟨n|,

where {|n⟩}_n and {E_n}_n label the true eigenstates and eigenvalues respectively. Note that Û clearly shares the same eigenstates with our target operator Ĥ.
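The real-time basis can be emulated classically for small systems by exponentiating Ĥ through its spectral decomposition (on hardware this evolution is the native unitary). The sketch below is illustrative; it also checks the useful consequence of a linear time grid, namely that the overlap matrix depends only on time differences (a Toeplitz structure).

```python
import numpy as np

def real_time_basis(H, phi0, tgrid):
    """Columns |Phi_j> = exp(-i H t_j)|Phi_0> for t_j in (0,) + tgrid.
    Classical emulation via the eigendecomposition of H."""
    E, V = np.linalg.eigh(H)
    z = V.conj().T @ (phi0 / np.linalg.norm(phi0))   # z_n = <n|Phi_0>
    return np.column_stack([V @ (np.exp(-1j * E * t) * z)
                            for t in (0.0, *tgrid)])

# Linear time grid t_j = j*dt: S_jk = <Phi_j|Phi_k> = <Phi_0|U((k-j)dt)|Phi_0>
# depends only on k - j.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
H = (A + A.T) / 2
dt = 0.4
B = real_time_basis(H, rng.standard_normal(8), [j * dt for j in range(1, 5)])
S = B.conj().T @ B
```

The Toeplitz property is what later reduces the measurement count to linear in the number of timesteps.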

Real-time Krylov method
Here we provide a summary of the real-time method by outlining its key algorithmic steps. We will refer to the spectrum of Ĥ interchangeably as a set of energies from now on (the lowest eigenvalue being the ground state energy, etc.). For concreteness, we focus on the task of ground state estimation as described in Alg. 1 below.

Algorithm 1: ground state estimate (VQPE)
  Input: initial state |Φ0⟩, time grid ⃗t, tolerance ε_tol;
  for j = 1, 2, · · · do
    Prepare the evolved state |Φ_j⟩ = Û(t_j)|Φ0⟩;
    Measure the new matrix elements of H and S;
    Solve j × j projected eigenvalue problem H⃗c_n = Ẽ_n S⃗c_n;
    if ε < ε_tol then stop;
  end

The algorithm terminates when the convergence variable ε computed within an iteration, such as set by the difference in consecutive energy estimates, reaches a reasonable tolerance ε_tol. The tolerance value is user-defined and depends on the specific problem or target operator of interest. For example, a molecular problem typically concerns a chemical accuracy of ε_tol ≈ 10⁻³ in energy units of Hartree.
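The loop of Alg. 1 can be emulated classically as follows. This is a sketch: function names are illustrative, the convergence variable is taken to be the difference of consecutive energy estimates, and a small overlap-eigenvalue cutoff stands in for the singular-value filtering discussed later.

```python
import numpy as np

def vqpe_ground_energy(H, phi0, dt, eps_tol=1e-6, max_steps=50, s_min=1e-10):
    """Grow the real-time subspace one timestep at a time; stop when
    consecutive ground-energy estimates agree to eps_tol."""
    E, V = np.linalg.eigh(H)                     # classical stand-in for exp(-iHt)
    z = V.conj().T @ (phi0 / np.linalg.norm(phi0))
    E_prev = np.inf
    for j in range(1, max_steps + 1):
        # real-time basis |Phi_k> = exp(-i H k dt)|Phi_0>, k = 0..j
        B = np.column_stack([V @ (np.exp(-1j * E * k * dt) * z)
                             for k in range(j + 1)])
        S, Hk = B.conj().T @ B, B.conj().T @ (H @ B)
        # solve H c = E S c on the numerically stable part of the subspace
        s, U = np.linalg.eigh(S)
        T = U[:, s > s_min] * s[s > s_min] ** -0.5
        E_new = np.linalg.eigvalsh(T.conj().T @ Hk @ T).min()
        if abs(E_new - E_prev) < eps_tol:        # convergence variable epsilon
            return E_new, j
        E_prev = E_new
    return E_prev, max_steps

rng = np.random.default_rng(2)
A = rng.standard_normal((12, 12))
H = (A + A.T) / 2
E0, n_steps = vqpe_ground_energy(H, np.ones(12), dt=0.3)
```

On hardware, the dense linear algebra above is replaced by measured matrix elements; only the generalized eigensolve stays classical.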
3 Why does phase cancellation converge so quickly?

Understanding phase cancellation
A main goal of this work is to motivate a simple understanding of the use of real-time states. We first address why superposing these equi-energy states helps generate the ground state, as previous work [26] has demonstrated that ground-state convergence can be reached with a surprisingly small number of real-time states. Here we exploit eigenfunction expansions to demonstrate the suppression of amplitudes on highly energetic eigenstates. Note that such analysis appears natural for purely projective approaches such as the power method [36] and imaginary time evolution [37,38], where the convergence is shown to be exponentially fast. In this section, we present a visual representation of the single step solution. Later we provide a proof of the convergence as well as a visualization of the multi-step solution.

Figure 1: Generic single step suppression of a uniformly dense spectrum. Eigenstate population p_n = |⟨n|Ψg⟩|² of the approximate ground state |Ψg⟩ is plotted over the eigenstate index 1 ≤ n ≤ 1000. The black dashed curve and solid purple curve show the initial and time-evolved population profiles respectively. The spectral spacing between low-lying eigenstates is also displayed inset for visualization.
In Fig. 1 we demonstrate the single step suppression, a generic behavior that occurs for a dense spectrum across our test simulations. In this demonstration, we start from an equal superposition over the entire Hilbert space and generate our ground state approximation, |Ψg⟩, by taking a single timestep. The eigenstate populations, |⟨n|Ψg⟩|², show that a single eigenstate is exactly suppressed, and the nearby spectral region is also suppressed within some width of the minimum.
Recall that we collect two real-time states, namely |Φ0⟩ and |Φ1⟩ = exp(−iĤt_1)|Φ0⟩, in a single step. Intuitively, we know from subspace diagonalization that there exists some choice c_1 ∈ C satisfying,

|Ψg⟩ = (|Φ0⟩ + c_1|Φ1⟩)/(1 + |c_1|² + 2Re[c_1 S_01])^{1/2}
     ∝ Σ_n z_n (1 + c_1 exp(−iE_n t_1)) |n⟩,

where S_01 = ⟨Φ0|Φ1⟩ gives the overlap between states |Φ0⟩ and |Φ1⟩. In the second line, we make use of an eigenbasis expansion of the real-time states, in terms of the coefficients z_n = ⟨n|Φ0⟩, to show the eigenphase evolution. The amplitude decay at the observed eigenstate thus arises because the phase associated with that eigenstate is rotated to a value of −1, canceling out the +1 initial phase along the real axis. Meanwhile the eigenstates nearby are also rotated to fulfill nearly the same phase cancellation, so there is a finite width to the decay. As eigenstates close in energy pick up similar phases under real-time evolution, amplitudes on many excited states can be simultaneously suppressed. Accordingly, we expect reduced convergence for much larger timesteps as phases acquired by adjacent eigenstates in the spectrum become more separated. For concreteness, let us work out the simplest case of time evolution under a single Pauli-Z operator, corresponding physically to Rabi oscillations of a spin-1/2 particle in a quantum mechanical description. We denote the Z-computational basis by |↑⟩ and |↓⟩ where Z|↑⟩ = |↑⟩ and Z|↓⟩ = −|↓⟩. Starting with the spin polarization |Φ0⟩ = (|↑⟩ + |↓⟩)/√2, we can evolve our initial state by the discrete timestep t_1 = π/2 so that,

|Φ1⟩ = exp(−iZt_1)|Φ0⟩ = (e^{−iπ/2}|↑⟩ + e^{iπ/2}|↓⟩)/√2,

where the |↑⟩ eigenphase undergoes a sign flip relative to the |↓⟩ eigenphase. Phase cancellation in this case is thus explicit: (|Φ0⟩ − i|Φ1⟩)/√2 = |↓⟩. We can immediately proceed to the ensuing case of time evolution under a sum of two Pauli-Z operators. For the initial bipartite spin state |Φ0⟩ = (|↑↑⟩ + |↑↓⟩ + |↓↑⟩ + |↓↓⟩)/2, a linear time grid of (t_1, t_2) = (π/4, π/2) ensures that |Φ1⟩ and |Φ2⟩ pick up an extra minus sign, respectively, on the |↑↑⟩ and {|↑↓⟩, |↓↑⟩} eigenphases relative to the |↓↓⟩ eigenphase.
Phase cancellation again facilitates the ground state recovery: one can check that |Φ0⟩ − (1 + i)|Φ1⟩ + i|Φ2⟩ ∝ |↓↓⟩. Likewise, taking additional timesteps in a real-time evolution leads to further amplitude suppressions, each peaked around a different eigenstate in the spectrum. This general picture persists beyond the simple cases enumerated here and points to the basic phase cancellation heuristics.
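The one- and two-spin cancellations above can be verified numerically in a few lines (working directly with the diagonal energies; the two-spin coefficients are obtained by solving the 2 × 2 linear system that zeroes the excited amplitudes):

```python
import numpy as np

# Single spin: H = Z with basis (|up>, |dn>) and energies (+1, -1).
E = np.array([1.0, -1.0])
phi0 = np.array([1.0, 1.0]) / np.sqrt(2)        # (|up> + |dn>)/sqrt(2)
phi1 = np.exp(-1j * E * np.pi / 2) * phi0       # evolve by t1 = pi/2

c1 = -phi0[0] / phi1[0]                         # cancel the |up> amplitude
psi = phi0 + c1 * phi1
psi /= np.linalg.norm(psi)                      # ground state |dn> recovered

# Two spins: H = Z1 + Z2 with energies (2, 0, 0, -2); grid (pi/4, pi/2).
E2q = np.array([2.0, 0.0, 0.0, -2.0])
chi0 = np.ones(4) / 2.0
chi1 = np.exp(-1j * E2q * np.pi / 4) * chi0
chi2 = np.exp(-1j * E2q * np.pi / 2) * chi0

# Zero the |upup> and |updn>/|dnup> amplitudes simultaneously.
M = np.array([[chi1[0], chi2[0]], [chi1[1], chi2[1]]])
c = np.linalg.solve(M, -np.array([chi0[0], chi0[1]]))
eta = chi0 + c[0] * chi1 + c[1] * chi2
eta /= np.linalg.norm(eta)                      # ground state |dndn> recovered
```

The solved coefficients come out to c = (−(1 + i), i), matching the explicit combination quoted in the text.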

Single step examples and convergence properties
Now we consider various types of convergence tests tailored for different applications. In general, the associated quantum circuit enacts (i) time evolution of the initial state, |Φ0⟩ → exp(−iĤt)|Φ0⟩, which is difficult to simulate classically, and (ii) subsequent measurements of the matrix elements, H_ij and S_ij, for example through the Hadamard test or shadow tomography [39]. The unitary formulation of VQPE exploits the Toeplitz structure of the Hamiltonian and overlap matrices, reducing the number of required measurements to being linear in the number of timesteps. Although we regard the real-time algorithm as broadly suited to many quantum computing applications, we will also discuss scenarios where the algorithm proves inefficient. For example, unstructured search, as discussed at the beginning of the section, turns out to be rather impractical due to implementation barriers which we will describe. Let us first focus on the single timestep limit, i.e., N_T = 1, such that dim K_Û = 2. For convenience, we define Q = dim H throughout the remaining sections.

Unstructured search
Given some Boolean function f : B → Z_2 over a set of candidate database elements B = {n}, 1 ≤ n ≤ Q, the task of an unstructured search is to locate, without any a priori knowledge of the database structure, the unique flagged element n_1 ∈ B for which f(n_1) = 1. Such a search can be formulated as an eigenspace search through the identification,

Ĥ = E_1|n_1⟩⟨n_1| + E_2 Σ_{n≠n_1} |n⟩⟨n|,  E_1 < E_2,

where we assume n_1 = 1 and Ĥ acts on the Hilbert space H = span{|n⟩ : 1 ≤ n ≤ Q}. For any initial state, a single VQPE step with evolution time t_1 will send it to,

|Φ1⟩ = exp(−iĤt_1)|Φ0⟩ = exp(−iE_1t_1) [ z_1|1⟩ + exp(−i∆Et_1) Σ_{n=2}^{Q} z_n|n⟩ ],

where ∆E = E_2 − E_1 is the spectral gap. Observe that the linear combination |Φ0⟩ + c_1|Φ1⟩ ∈ K_Û(Φ0; N_T = 1) with a choice of c_1 = −exp(iE_2t_1) (upon absorbing the global phase exp(iE_1t_1)) simply returns a scalar multiple of our target state |1⟩. Therefore VQPE converges exactly after one step for unstructured search, as long as we avoid specific timesteps that cause phase rotation by integer multiples of 2π. We note that this result does not change the analysis of the unstructured search problem in regards to previous bounds established in the literature [40]. The creation of the state from which the flagged state can be sampled comes with a low probability of success, at least when employing a linear combination of unitaries (LCU) type of preparation, due to the low target amplitude typically limited to the initial state [41]. Moreover, if the number of flagged states is unknown, the relevant matrix elements have to be calculated on a quantum computer.
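A toy numerical illustration of the one-step recovery (flagged element placed at index 0 for convenience; the database size and energies are arbitrary choices):

```python
import numpy as np

Q = 16                                  # database size (illustrative)
E1, E2 = 0.0, 1.0                       # flagged / unflagged energies
Espec = np.full(Q, E2)
Espec[0] = E1                           # diagonal search Hamiltonian

phi0 = np.ones(Q) / np.sqrt(Q)          # no a-priori knowledge of the database
t1 = 1.3                                # any t1 with (E2 - E1)t1 not in 2*pi*Z
phi1 = np.exp(-1j * Espec * t1) * phi0  # single VQPE step

c1 = -np.exp(1j * E2 * t1)              # cancels every unflagged amplitude
psi = phi0 + c1 * phi1
psi /= np.linalg.norm(psi)              # a scalar multiple of the flagged state
```

All Q − 1 unflagged amplitudes vanish identically, while the flagged amplitude survives because its phase rotated by ∆E t_1 rather than by a multiple of 2π.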

4.2 Exponentially fast convergence of the ground state of a harmonic spectrum

In fact, we could regard the solution to the search problem as the ground states of many-body Hamiltonian operators found in condensed matter physics and quantum chemistry. In previous work we specifically examined the ground state wavefunctions of molecular Hamiltonians [26]. Instead, here we aim to understand the single step performance for some basic yet important model Hamiltonians. We first consider a Hamiltonian with a linear spectrum,

Ĥ = ∆E Σ_{n=1}^{Q} n |n⟩⟨n|,   (12)

characteristic of a harmonic oscillator as a ubiquitous model in quantum mechanics. For a normalized initial state |Φ0⟩, a single-step VQPE solves the linear equations introduced in Eq. (2), where H_jk = ⟨Φ_j|Ĥ|Φ_k⟩ and S_jk = ⟨Φ_j|Φ_k⟩ (0 ≤ j, k ≤ 1) give the 2 × 2 Hamiltonian and overlap matrices. For the purpose of implementation, we choose |Φ0⟩ to be the uniform superposition over the eigenbasis (z_n ≡ 1/√Q) so it gets mapped to,

|Φ1⟩ = Q^{−1/2} Σ_{n=1}^{Q} exp(−iE_n t_1) |n⟩,

under time evolution. Let |Ψg(t_1)⟩ denote the ground state estimate, including a parametric dependence on the evolution time t_1. To solve the subspace-projected eigenvalue problem of Eq.
(2), we identify a change-of-basis matrix B that orthogonalizes the overlap matrix (B†SB = I). This allows us to transform our original problem into the conjugated form B†HB ⃗d = Ẽ ⃗d. Once we obtain the solution in the new basis, we undo the basis change, ⃗c = B⃗d, amounting to a weighted sum in the real-time basis |Ψg⟩ = Σ_j c_j|Φ_j⟩. For a single-step evolution, it is straightforward to show that ⃗d = (d_1, d_2) with d_1* = −d_1 and d_2* = d_2, hence implying |c_0| = |c_1|. Assuming that t_1 satisfies a mild condition (discussed in Appendix C), we can analytically derive the eigenstate population after the single timestep,

p_n(t_1) = |⟨n|Ψg(t_1)⟩|² = Z⁻¹ sin²((E_n t_1 + χ)/2),   (17)

where χ and Z represent, respectively, some phase offset and normalization constant determined by the matrix elements H_ij and S_ij (specified in Appendix C). The sinusoidal dependence in Eq. (17) is explicitly displayed in Fig. 2, with an optimal timestep ∆Et_1 ∈ (0, π/Q) observed. For the special case ∆Et_1 = π and Q ∈ 2Z₊, Eq. (17) simplifies such that the extracted ground state becomes,

|Ψg⟩ = (2/Q)^{1/2} Σ_{n odd} |n⟩,   (18)

with half of the population amplitudes eliminated and the other half doubled due to constructive and destructive interference. In fact, such interference readily underlies our earliest example of a single spin subject to the Pauli-Z Hamiltonian in Sec. 3.1, where we trivially recognize that |↓⟩ = |Ψg(π/2)⟩ for ∆E = 2 and Q = 2. The simple result of Eq. (18) implies that given a linear spectrum, exact recovery of the ground state in a Hilbert space of dimension 2^N only takes a sequence of N single steps (once we recalibrate the excitation energy ∆E → 2∆E after each step). This recovery procedure is schematically illustrated in Fig. 3, exhibiting a particular yet clear manifestation of the exponential convergence from real-time evolution. As a final note on the convergence properties from a single step evolution, we also examine in Appendix B a minimally modified Hamiltonian based on Eq.
(12), featuring a variable spectral gap, denoted as ϵ12, between the ground and first excited state. Predictably, we find that a change in the spectral gap induces a population transfer among the eigenstates, where the lower energy population can be enhanced by a larger gap. In the extreme case of an infinitely large gap, we effectively recover unstructured search, for which the VQPE solution becomes exact after a single timestep.
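The N-step halving argument for the harmonic spectrum can be checked numerically. This sketch applies the explicit cancellation coefficient at each step rather than a subspace solve, and assumes Q is a power of two, per the text:

```python
import numpy as np

Q = 16                                # Hilbert-space dimension, a power of 2
dE = 1.0
E = dE * np.arange(1, Q + 1)          # linear spectrum E_n = n * dE
psi = np.ones(Q) / np.sqrt(Q)         # uniform initial superposition

gap = dE
for _ in range(int(np.log2(Q))):      # N = log2(Q) single steps
    t = np.pi / gap                   # timestep with gap * t = pi
    evolved = np.exp(-1j * E * t) * psi
    c = -np.exp(1j * (E[0] + gap) * t)  # cancel the first surviving excited level
    psi = psi + c * evolved
    psi /= np.linalg.norm(psi)
    gap *= 2                          # recalibrate dE -> 2*dE after each step
```

Each pass halves the surviving manifold (populations double on the kept half), leaving exactly the ground state after log₂ Q steps.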

Continuum modeling of spectrum
In the large Q limit, we may treat the spectrum as a continuum with a prescribed density of states (DOS) that reflects the probability of observing a certain energy level. We remark that the single step expression of Eq. (17) remains valid for an arbitrary spectrum {E_n}, 1 ≤ n ≤ Q, and a spectrum dilation E_n → cE_n preserves the eigenstate population up to a stretch of time t_1 → t_1/c. Consequently, we assume that the spectral range, E_Q − E_1, is bounded from above by some fixed finite constant C > 0. For sufficiently large Q, we simply approximate the Hamiltonian and overlap matrix elements via a properly normalized spectral density of states ω(E), i.e.,

Q⁻¹ Σ_{n=1}^{Q} f(E_n) ≈ ∫ ω(E) f(E) dE,

for relevant functions f of energy. Specifically,

S_01 ≈ ω̂(t_1),  H_01 ≈ i ∂_{t_1} ω̂(t_1),  ω̂(t_1) = ∫ ω(E) e^{−iEt_1} dE,   (20)

where the f-integrals evaluate to the characteristic function ω̂(t_1) of the DOS and its first derivative. Eq. (20) establishes the real-time subspace on the mean level via the approximation (H, S) → (EH, ES), where the mean E is taken over the joint spectral distribution. The sum-integral relation from Eq. (20) holds precisely if ω(E) is the empirical DOS of a discrete spectrum. As an illustrative example, we compare a linear spectrum and a spectrum with uniform DOS in Fig. 4. The population profiles show reasonable agreement as expected, and the difference that emerges at short evolution time will vanish in the large Q limit.
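The sum-integral relation is easy to check for a uniform DOS on [0, C], whose characteristic function is known in closed form; the discrete overlap converges to it as Q grows (parameters below are illustrative):

```python
import numpy as np

Q, C = 1000, 1.0
E = np.linspace(0.0, C, Q)     # linear spectrum, uniform empirical DOS
t = 2.5

# discrete overlap S01 = <Phi0|Phi1> for a uniform superposition
S01 = np.mean(np.exp(-1j * E * t))

# continuum limit: characteristic function of the uniform DOS on [0, C],
# int_0^C (1/C) exp(-i E t) dE = (1 - exp(-i C t)) / (i C t)
S01_cont = (1 - np.exp(-1j * C * t)) / (1j * C * t)
```

The residual discrepancy is O(1/Q), consistent with the claim that differences at short evolution time vanish in the large Q limit.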

Locating spectral suppression
The spectral location of the characteristic suppression seen in Fig. 1 can be calculated by extremizing Eq. (17), where the population profile p(E; t_1) and its derivatives are understood from our continuum modeling (Sec. 4.3) in the large Q limit. Let us scale our spectrum to a range of [0, 1] so that the resulting suppression occurs around E_x = (1 − x)E_1 + xE_Q for some x specifying the center of the suppressed region. For a linear spectrum E_n = n∆E and suitably short evolution, the location x is independent of the spectral spacing ∆E and evolution time t_1, which is consistent with our observation.

Beyond Single Step

5.1 Multi-step convergence

VQPE leads to an exponential suppression of the excited state population as we take more timesteps. In particular, a multi-step evolution facilitates delocalized spectral decays, where the number of decay centers over the spectrum grows in proportion to the number of timesteps. Such structured suppression of the eigenstate population p_{n,j} = |⟨n|Ψg(t_1, · · · , t_j)⟩|² can be visualized in Fig. 5.
For multi-step VQPE, fast convergence relies on a suitable time grid ⃗t. We want the real-time states {|Φ_j⟩}, 0 ≤ j ≤ N_T, to be sufficiently independent in the sense that the singular values s_j ∈ [0, N_T + 1] of the overlap S stay bounded, for example, below by a threshold value s_SV. In practice, we simply solve Eq. (2) on a truncated subspace for which s_j ≥ s_SV to avoid numerical instabilities [42] and filter out noise. The insets within Fig. 5 show the convergence measured by the ground state energy error. Observe that a small timestep introduces linear dependency in the real-time states and slows down the convergence, which is also manifested in the population profile. On the other hand, a large timestep deteriorates the evolution by introducing degeneracies in the phase interference pattern and hence undesirably suppressing the low energy population. This happens when the phases accumulated by eigenstates that are nonadjacent in the spectrum coincide, (E_m − E_n)t_j ∈ 2πZ for some m ≠ n, where large t values result in phase wrappings around the origin in the complex plane. Clearly the condition above imposes periodicity in the eigenstate population. Therefore, leveraging the two notions of independence, we may cleverly choose the timesteps so that VQPE converges the fastest. Such an optimal time choice for recovering the ground state differs from that investigated in previous work for recovering the full spectrum.
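The truncated solve of Eq. (2) can be sketched as below. The helper name and threshold are illustrative; the demo deliberately over-spans a small Hilbert space so that the overlap matrix is rank-deficient and the filter is essential.

```python
import numpy as np

def projected_ground_energy(H, basis, s_sv=1e-8):
    """Solve H c = E S c after discarding overlap-matrix eigendirections
    with eigenvalue below s_sv (near-linear dependence / noise filter)."""
    S = basis.conj().T @ basis
    Hk = basis.conj().T @ (H @ basis)
    s, U = np.linalg.eigh(S)
    keep = s > s_sv
    T = U[:, keep] * s[keep] ** -0.5      # whitened, truncated basis change
    return np.linalg.eigvalsh(T.conj().T @ Hk @ T).min(), int(keep.sum())

# Linear spectrum, uniform initial state, 10 timesteps of size dt = 0.7:
# the 11 real-time states span at most a 6-dimensional space, so the
# overlap matrix is singular without truncation.
E = np.arange(6.0)
H = np.diag(E)
phi0 = np.ones(6) / np.sqrt(6)
B = np.column_stack([np.exp(-1j * E * j * 0.7) * phi0 for j in range(11)])
E_g, kept = projected_ground_energy(H, B)
```

Because the retained directions still span the full six-dimensional space here, the projected ground energy is exact despite the truncation.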
The ground state convergence also admits a native dependence on the spectral gap ϵ12. In the single step limit, the presence of a spectral gap changes the location of the population suppression. For a gapped linear spectrum and suitably short evolution, E_x = −lim_{t_1→0} χ′(t_1|ϵ12) determines the suppressed energy in the spectrum from Eq. (69). Notice that E_x is naturally associated with an eigenstate |n_1⟩ whose index monotonically decreases with the gap value. Therefore one expects a red shift of the decay center n_1(ϵ12) relative to n_1(0) ≈ xQ if ϵ12 > 0 and a blue shift otherwise. In either situation, the shift originates from the additional phase separation exp(−iϵ12t_1) between the ground and first excited state. Borrowing our intuition from the single step limit, we expect a larger spectral gap to red shift and broaden the decay regions in a multi-step simulation, as is shown in Fig. 6. Accompanying the red shift is a faster convergence, since a larger gap better separates the excited state phases from the ground state phase. Effectively, the higher energy phases are squeezed together so that they undergo more thorough phase cancellations. As ϵ12 → ∞, a single step suffices to recover the ground state as already discussed in Secs. 4.1 and 4.2.

Figure 5: Profile color in all panels indicates the number of timesteps taken and interpolates between purple (j = 1) and red (j = 10). The insets display the log-scale energy error δE_{1,j} = ⟨Ψg(j)|Ĥ|Ψg(j)⟩ − E_1 as the number of timesteps increases.

Figure 6: Top: Ground state energy error δE_{1,j} = ⟨Ψg(j)|Ĥ|Ψg(j)⟩ − E_1 with j = 5 is plotted as a function of the timestep size t_1∆E = κ2π/Q. The relative energy error is normalized by the initial error ⟨Φ0|Ĥ|Φ0⟩ − E_1 and determines the optimal timestep size. Bottom: The resulting eigenstate population from the optimal timestep is plotted over the eigenstate index 1 ≤ n ≤ Q = 1000. Curve color distinguishes the gap value and interpolates between yellow (ϵ12 = 0) and dark green (ϵ12 = 400∆E). The inset zooms over the population decays in the observed profile.

Proof of multi-step convergence
Even for real-time evolution employing a simple linear time grid, the error of our spectral approximation can be bounded based on an extension of the Kaniel-Paige-Saad formalism [32,[43][44][45]. In particular, we establish an error bound through the following theorems.

Theorem 1.1. Let Ẽ_1(j) label the approximate lowest eigenvalue within the subspace K_Û(Φ0; j), and δE_1(j) = Ẽ_1(j) − E_1 the energy error. Then for j ≥ 1, there exists a time grid spacing ∆t such that,

δE_1(j) ≤ (E_Q − E_1) tan²Ξ · ε̃_{1,2}^{−2j},

where cos²Ξ = |⟨Φ0|1⟩|² measures the squared overlap between the initial state and the true ground state, and ε̃_{1,2} = 1 + 3(E_2 − E_1)∆t/2π ≥ 1 characterizes the normalized spectral gap.
Proof. Let us define the Rayleigh quotient,

E(v) = ⟨v|Ĥ|v⟩/⟨v|v⟩,

which returns the expected energy of a state |v⟩. We focus on the rightmost inequality since the left simply restates the variational principle Ẽ_1(j) ≥ E_1. Notice that up to a spectral flip Ĥ → −Ĥ, it suffices to prove the equivalent statement on δE_Q, for which a more natural argument is entailed. By definition,

Ẽ_Q(j) = max_{p ∈ P_j} E(p(Û)|Φ0⟩),

for which P_j denotes the set of degree j polynomials over C and Û ≡ Û(∆t). Although yet to be identified, we know that there exists a unique set of coefficients {z_n}, 1 ≤ n ≤ Q, of |Φ0⟩ with respect to the true eigenbasis {|n⟩} such that the expression above can be rewritten as,

E(p(Û)|Φ0⟩) = Σ_n E_n |z_n|² |p(λ_n)|² / Σ_n |z_n|² |p(λ_n)|²,   (32)

where λ_n = exp(−iE_n∆t) and we have exploited the unitarity Û† = Û⁻¹ so that p(Û)†|n⟩ = p(λ_n)*|n⟩, with * denoting complex conjugation. Relaxing the numerator in Eq. (32), we have

δE_Q(j) ≤ (E_Q − E_1) · Σ_{n≤Q−1} |z_n|² |p(λ_n)|² / (|z_Q|² |p(λ_Q)|²).   (34)

In the original Kaniel-Paige-Saad formalism, an advantageous choice of p ∈ P_j that realizes a tight bound is the real-valued Chebyshev polynomial T_j, whose minimal supremum norm over the interval [−1, 1] ⊂ R helps establish the suitable bound on the fraction in Eq. (34). Note that here λ_n = exp(−iϑ_n) for ϑ_n = E_n∆t ∈ [0, 2π), so we seek a family of polynomials defined over the unit circle S¹ = {z : |z| = 1} ⊂ C to bound the fraction. Let q ∈ (0, 1] and consider the handy choice of complex-valued Rogers-Szegő polynomials W_j [46,47] over S¹, where ϑ denotes an angular phase. A prefactor of −1 is included to periodically translate the polynomials so that W_j(ϑ|q) adapts the symmetry W_j(−ϑ) = W_j(ϑ)* (we also omit the conditional dependence of W_j on q for notational clarity). Such a family of polynomials shares two key properties: (i) a bounded supremum within a truncated angular window W of half-width Ω, and (ii) rapid growth outside the window. Note that the constant q controls the width of the truncated angular window W. In the limit q → 1, one can verify that these polynomials converge to a binomially weighted sum of evenly spaced points on S¹. As a consequence, sup_ϑ |W_j(ϑ)| ≈ 2^j for q ≈ 1. To bound the fraction from Eq.
(34) tightly, we want a suitable linear transformation L acting on the eigenphases {ϑ_n} such that L nudges ϑ_1, · · · , ϑ_{Q−1} all inside the truncated window W while keeping ϑ_Q outside. Without loss of generality, we may assume ϑ_{Q−1} − ϑ_1 ≤ 2Ω and 2Ω ≤ ϑ_Q − ϑ_1 ≤ π + Ω by choosing a suitable time grid ∆t, with Ω_c = π − Ω. Hence a natural L is the phase multiplicative transformation that circularly shifts {λ_n} so that |L(ϑ_1)| ≤ L(ϑ_{Q−1}) = Ω ≤ L(ϑ_Q) as desired. With our pick of L, we can establish a variational upper bound by substituting the trial polynomials p = W_j ∘ L into Eq. (34),

δE_Q(j) ≤ (E_Q − E_1) tan²Ξ · sup_{ϑ∈W} |W_j(ϑ)|² / |W_j(L(ϑ_Q))|²,   (42)

where in arriving at Eq. (42) we have utilized property (i) of W_j and defined an overlap angle Ξ by cos²Ξ = |z_Q|² = |⟨Φ0|Q⟩|², which specifies the projection of our initial state onto the top eigenstate. For the limiting case q = 1, it is rather straightforward to show that Ω = π/3 and that the supremum ratio decays as ε̃_{Q−1,Q}^{−2j} for ε̃_{Q−1,Q} = 1 + Γϵ_{Q−1,Q} ≥ 1. After flipping Ĥ → −Ĥ, we have proved the statement in the theorem as claimed. Notice that our result is analogous to the classical Krylov result, except that ε̃_{Q−1,Q} replaces the Chebyshev gap factor used in the original convergence theory [32,43,44]. □

Corollary 1.2. Let Ẽ_n(j) label the approximate nth lowest eigenvalue and δE_n(j) = Ẽ_n(j) − E_n the energy error. Then for j ≥ n ≥ 1, there exists a time grid ∆t such that,

δE_n(j) ≤ (E_Q − E_n) Y_{n,j} tan²Ξ_n · ε̃_{n,n+1}^{−2(j−n+1)},

where Y_{n,j} is a prefactor containing the (n − 1) lowest approximations, while cos²Ξ_n = |⟨Φ0|n⟩|² and ε̃_{n,n+1} = 1 + 3(E_{n+1} − E_n)∆t/2π denote the relevant squared overlap and interior spectral gap respectively. Recall that we have defined the phase factors λ_ℓ = exp(−iE_ℓ∆t) associated with the true eigenvalues in Thm. 1.1.
Proof. Again we present the argument for δE_{Q−n+1} due to the identification E_n ↔ E_{Q−n+1} through a spectral flip. For simplicity, let ⋆n = Q − n + 1. First observe that, by the min-max characterization of operator eigenvalues [48] as embodied in Eq. (28), where Î denotes the identity operator and R labels an n-dimensional subspace, it suffices to establish the right-hand side of Ineq. (45). By construction,

δE_{⋆n}(j) ≤ min_p [E_{⋆n} − E(p(Û)|Φ0⟩)],

where the minimum is taken over the subset of polynomials p ∈ P_j satisfying ⟨m|p(Û)|Φ0⟩ = 0 for ⋆n + 1 ≤ m ≤ Q (we have reserved the same notations as in Thm. 1.1). Here we extend Saad's main idea and consider reducible complex polynomials of the form,

p = q · p↓,

with p factorizable into two polynomials q ∈ P_{j−n+1} and p↓ ∈ P_{n−1}. By design, the complex exponentials {exp(−iE_m∆t)}, ⋆n + 1 ≤ m ≤ Q, are zeros of p, so p(Û)|Φ0⟩ ∈ M is guaranteed with the (n − 1) orthogonality conditions above fulfilled. On the other hand, p↓(λ_{⋆n}) = 1 implies that we may simply relax the numerator by recognizing the supremum of |p↓| over the arc A(⋆n; ∆t) ⊂ S¹, a circular arc with arc angle [−ϑ_{⋆n−1}, −ϑ_1]. For example, we expect the supremum to occur at λ_1 = exp(−iϑ_1) when the time grid ∆t satisfies ϑ_Q − ϑ_1 ≤ π. The rest of our proof follows from direct application of Thm. 1.1 to the spectral sector {E_ℓ}, 1 ≤ ℓ ≤ ⋆n. □

6 Alternative implementation analysis

6.1 Vanilla and iterative time evolution
VQPE evolving a fixed reference |Φ0⟩ for N_T timesteps solves a linear system in one shot, which requires O(N_T^β) complexity with an exponent β ∈ [2, 3]. Now consider an evolution for which we dynamically update the reference after each timestep. Specifically, we update based on our current best guess |Ψg(j)⟩ on an (N_I + 1)-dimensional subspace defined iteratively by,

K_j = span{|Ψg(j − 1)⟩, Û(t_{j,1} − t_{j−1})|Ψg(j − 1)⟩, · · · , Û(t_j − t_{j−1})|Ψg(j − 1)⟩},

where t_{j−1} = t_{j,0} < t_{j,1} < · · · < t_{j,N_I} = t_j denotes a partition of [t_{j−1}, t_j] with ∆t_{j,k} = t_{j,k} − t_{j,k−1} (∆t_{j,0} ≡ 0 as our convention). The trivial case N_T = 1 corresponds to vanilla VQPE. For the simplest nontrivial case N_I = 1, the reference state after j steps takes the form,

|Ψg(j)⟩ = d_0|Ψg(j − 1)⟩ + d_1 Û(∆t_{j,1})|Ψg(j − 1)⟩,

starting with |Ψg(0)⟩ = |Φ0⟩, our input initial state. Such an iteration requires O(N_T N_I^β) complexity and thus offers a speedup when N_I ≪ N_T. In this section, we focus on the case N_I = 1, again with our initial state being a uniform superposition.
First observe that a vanilla time evolution always outperforms an iterative one if we employ a linear time grid, ⃗t = (t_1, 2t_1, · · · , N_T t_1), based on the update rule from Eq. (55). As a minimal example, |Ψg(j = 2)⟩ takes the free form c_0|Φ0⟩ + c_1|Φ1⟩ + c_2|Φ2⟩ under a vanilla evolution and the constrained form d_0(c_0|Φ0⟩ + c_1|Φ1⟩) + d_1(c_0|Φ1⟩ + c_1|Φ2⟩) under an iterative evolution. Instead, we consider the adaptive time grid,

∆t_j = γ_t ∆t_{j−1},

where γ_t defines an additional free parameter discounting any time interval [t_{j−1}, t_j] with respect to its precursor [t_{j−2}, t_{j−1}]. Now |Ψg(j = 2)⟩ from the example above takes the form d_0(c_0|Φ0⟩ + c_1|Φ1⟩) + d_1(c_0|Φ2⟩ + c_1|Φ3⟩) after an iterative evolution with γ_t = 2, thus including a new state |Φ3⟩ that can potentially facilitate the convergence. For subsequent comparisons, let t_1^⋆ denote the size of a timestep from the optimal linear grid. We then compare the convergence of the iterative VQPE (IVQPE) with an adaptive time grid against the vanilla VQPE with an optimal linear time grid t_j = jt_1^⋆. When γ_t ≪ 1, both vanilla and iterative evolution degrade with more timesteps due to linear dependency issues. When γ_t > 1, we expect the iterative evolution to gain reasonable convergence at (t_1, γ_t) = (t_1^⋆, 2), which we term near optimal parameters, since we iteratively evolve onto a larger unexplored subspace of size 2^j that stays maximally independent from the explored one. The near optimality over our restricted two-dimensional parameter space is explicitly displayed in Fig. 8. The population profiles of ground states extracted from IVQPE with linear and adaptive time grids are displayed in Fig. 9, where we observe very different population suppression depending on the time grid that guides the phase rotations.
For 0 < γ_t < 1, we expect the performance of both vanilla and iterative evolution to progressively degrade as γ_t decreases, with the vanilla evolution experiencing a sharper slowdown in its convergence rate due to difficulties in simultaneously resolving all the time-evolved states for desirable phase cancellation. However, the use of γ_t < 1 turns out to be particularly beneficial in special scenarios. For example, recall that in Sec. 4.2 we deduced an exact and exponentially fast ground state recovery from an iterative evolution with adaptive timesteps Δt_j = 2^{1−j}π/ΔE, and thus γ_t = 1/2. In that case, specific restrictions apply to the spectral density (linear spectrum) and the Hilbert space size (log_2 Q ∈ Z_+). The implementation of IVQPE as a quantum circuit via a sequence of intermediate state preparations is discussed in Appendix E. We remark that an accurate sampling of the Hamiltonian and overlap matrix elements relies on the faithful preparation of the intermediate states. Regardless of the time parametrization, we only need to measure the off-diagonal matrix elements (the diagonal elements are determined before each IVQPE step). For the simplest case N_I = 1, the total number of measurements is 2N_T, i.e., still linear in the number of timesteps taken.
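As a concrete check of the γ_t = 1/2 recovery described above, the following minimal sketch (our own illustration, assuming NumPy; the helper name `ivqpe_step` is ours) runs IVQPE with N_I = 1 on a linear spectrum with Q = 8 levels, working in the Hamiltonian eigenbasis so that Û(Δt) is diagonal:

```python
import numpy as np

def ivqpe_step(psi, E, dt, tol=1e-12):
    """One IVQPE step with N_I = 1: diagonalize H in span{|psi>, U(dt)|psi>}.

    Works in the eigenbasis of H, where U(dt) = diag(exp(-i E dt)).
    Returns the lowest-energy normalized state in the two-dimensional subspace.
    """
    B = np.stack([psi, np.exp(-1j * E * dt) * psi])   # 2 x Q basis
    S = B.conj() @ B.T                                # 2 x 2 overlap matrix
    H = B.conj() @ (B * E).T                          # 2 x 2 Hamiltonian matrix
    # Solve the generalized eigenproblem H c = lam S c by whitening with S.
    w, V = np.linalg.eigh(S)
    keep = w > tol * w.max()                          # drop (near-)dependent directions
    X = V[:, keep] / np.sqrt(w[keep])
    lam, C = np.linalg.eigh(X.conj().T @ H @ X)
    c = X @ C[:, 0]                                   # coefficients of the lowest state
    psi_new = c @ B
    return psi_new / np.linalg.norm(psi_new)

# Linear spectrum E_n = n*dE with Q = 2^3 levels, uniform initial state.
dE, Q = 1.0, 8
E = dE * np.arange(1, Q + 1)
psi = np.ones(Q) / np.sqrt(Q)
for j in range(1, 4):                                 # adaptive grid: dt_j = 2^(1-j) pi/dE
    psi = ivqpe_step(psi, E, 2.0 ** (1 - j) * np.pi / dE)
print(abs(psi[0]) ** 2)                               # ground state population
```

After log_2 Q = 3 steps the ground state population reaches unity to numerical precision: each step halves the support of the reference, matching the exact recovery deduced in Sec. 4.2.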

Effect of stochasticity
Within the context of ground state computation from real-time evolution, sources of stochasticity may include dynamical noise due to dissipative system-bath interactions and statistical uncertainties due to measurements on hardware, both of which evolve with the number of timesteps taken. By simulating perturbations on our target spectrum, we can examine the susceptibility of the multi-step convergence to induced spectral disorder.
Let us absorb such disorder into the spectral DOS ω(E) introduced in Sec. 4.3. Without stochasticity, ω(E) = Σ_{n=1}^{Q} δ(E − E_n) is a collection of sharp peaks in the energy domain. These peaks broaden in the presence of probabilistic perturbations, so that ω(E) = Σ_{n=1}^{Q} g_n(E), where the broadening is dictated by the distributions g_n from which the energy levels are drawn. For concreteness, a phenomenological instance of the spectral broadening is derived in Appendix F using perturbation theory. Here we consider a random spectrum with fixed ground state energy E_1 and i.i.d. level spacings, where p_Δ(Δ_n) gives the spacing statistics. In this specific case, the distributions g_n follow from convolutions of the statistical weights of the spacing variables Δ_m, which are all integrated out. From here on, we will use the single random variable ΔE ∼ p_Δ(ΔE) to denote the level spacing in the presence of stochasticity. Upon averaging over the spectral disorder associated with g_n, we get an effectively linear spectrum, E_n^eff = E_1 + (n − 1)𝔼ΔE. The generalized Jensen's inequality bounds the deviation from this effective spectrum, where 𝔼_n denotes an expectation against the level distribution g_n and η > 0 defines a nondimensional level spacing variance such that Var ΔE = η(𝔼ΔE)². Thereby, for a suitably short time evolution t⃗ (as remarked in Sec. 5.1) and spacing disorder p_Δ with controlled variance, a standard continuity argument suggests that the approximate eigenvalues of a target operator subject to the stochastic perturbations above will, on average, closely resemble those of a deterministic operator with a harmonic spectrum. As a result, these perturbations can be significantly tempered through an ensemble average over the spectral disorder, especially for unperturbed target operators with a relatively flat DOS. To illustrate this robustness of VQPE with respect to the spectral disorder from Eq. (58), we display in Fig. 10 the ground state convergence for random spectra and their linear equivalent, given simple forms of p_Δ with η = O(1) ≪ Q. Agreement between the mean of the convergence envelope over spectral realizations and the convergence on the mean spectrum is observed in Fig. 10. We also consider spectral disorder drawn according to Eq. (61), which generates the well-known Gaussian unitary ensemble (GUE) in random matrix theory. It is worth mentioning that the spectral disorder from Eq. (61) can be reproduced alternatively from the spacing statistics, where D = 𝔼ΔE and p_Δ in this case vanishes quadratically for small ΔE, exhibiting the phenomenon of level repulsion. From either prescription, one may prove that the DOS follows a semicircle law, ω(E) = Q√(4 − E²)/2π, with mass concentrated around E = 0. Such concentration can be contrasted with the dispersion ω(E) ∝ 1/D when p_Δ(ΔE) ∝ exp(−ΔE/D). Fig. 11 demonstrates the convergence behavior of random spectra sampled within the GUE upon proper scaling and shifting, where we note a drift of the convergence envelope away from our convergence benchmark on the averaged spectrum E_n^eff.
To further investigate the dependence of the convergence envelope on the DOS concentration, we also include in Fig. 11 the behavior of random spectra sampled according to the Gaussian density ω(E) ∝ exp(−E²/2σ²), where σ tunes the mass concentration. For a chosen mean spacing 𝔼ΔE, stochasticity with a Gaussian DOS shows a faster ground state convergence than stochasticity with a semicircular DOS, as displayed in Fig. 11. Thus the shape of ω(E) plays a decisive part in regulating the convergence of VQPE. In general, ω(E) is uniquely determined by its characteristic function ω̃(t). But a suitably short time evolution allows us to extrapolate ω̃(t) only in terms of the derivatives ω̃^(k)(0), which are nothing but the cumulants of our DOS. Consequently, we comment that the different convergence behaviors due to different disorder may be exploited as a spectral fingerprint from which useful local information about the eigenvalue density can be revealed.
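The contrast between the concentrated semicircular DOS and the flat DOS of exponential spacings can be sampled directly. The sketch below (our own illustration, assuming NumPy; the seed and sizes are arbitrary choices) draws one GUE realization and one exponential-spacing realization:

```python
import numpy as np

rng = np.random.default_rng(7)
Q = 400

# GUE sample: eigenvalues of (A + A^dagger)/2, rescaled by 1/sqrt(Q),
# concentrate on the semicircle support [-2, 2] with mass around E = 0.
A = rng.standard_normal((Q, Q)) + 1j * rng.standard_normal((Q, Q))
E_gue = np.linalg.eigvalsh((A + A.conj().T) / 2) / np.sqrt(Q)

# i.i.d. exponential spacings instead give a roughly flat DOS ~ 1/D.
D = 1.0
E_exp = np.cumsum(rng.exponential(D, size=Q))

print(E_gue.min(), E_gue.max())        # close to the semicircle edges -2, 2
print(np.mean(np.abs(E_gue) < 1.0))    # most of the GUE mass sits near E = 0
```

Roughly 60% of the GUE eigenvalues fall in the inner half of the support, whereas the exponential-spacing spectrum spreads its levels evenly, which is the DOS contrast probed in Fig. 11.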

Choice of initial vector
Throughout the previous sections, we have made the simplifying assumption that we initialize with a uniform superposition state. In practice, this assumption is tailored for certain tasks, such as combinatorial searches (creation of equally weighted bitstrings from Hadamard gates), but becomes less effective for other tasks, such as electronic structure predictions in quantum chemistry. Specifically, in contrast to massively degenerate spectra, whose eigenstates can be resolved within a few timesteps as seen in Sec. 4.1, molecular Hamiltonians generally exhibit a full-rank structure accompanied by, at most, moderate spectral degeneracies. Consequently, achieving convergence for full-rank chemical problems requires exploring a larger subspace, as is typical for the canonical Krylov methods. Given that phase cancellation is only relevant within the support of a real-time state, it is favorable to strategically prepare the initial state in order to taper the high Hamiltonian rank and minimize the runtime of the real-time evolution.

Figure 11: Ground state convergence after multiple timesteps (random spectrum and linear time grid t_j = jt_1 with κ = t_1 Q𝔼ΔE/2π = 0.4). Population profiles are plotted for two differently concentrated DOS ω(E). Top: semicircular spectral density. Bottom: Gaussian spectral density. The profile color in both panels indicates the number of timesteps taken and interpolates between purple (j = 1) and red (j = 10). The inset displays the log-scale energy error, where the solid curve benchmarks the convergence for a linear spectrum with a flat DOS while the hollow circles mark the convergence for a randomly realized spectrum. The pink shade shows the convergence envelope for 10³ random realizations and the dashed curve traces out the average convergence.
Here we consider the transition metal dimer Cr_2 (def2-SVP basis set [49], 30 orbitals and 24 electrons), restricting the simulation to the widely studied 30-orbital active space, as a prototypical molecular system that exhibits strong electronic correlations. We then examine the role of initial state preparation in the VQPE ground state computation. For implementation feasibility, we truncate the Hilbert space and employ only a subset of all the Slater determinants in the active space. The determinants are chosen using the adaptive sampling configuration interaction (ASCI) algorithm [50][51][52], an iterative selected configuration interaction approach that explores the Hilbert space and identifies the most important determinants for a ground state approximation, thus providing highly accurate and moderately sized truncations. The data shown below for Cr_2 includes 4000 determinants, selected by first truncating the Hilbert space to one million determinants with ASCI and then keeping the 4000 determinants with the largest coefficients. Although the full Hilbert space for Cr_2 is much larger than what we study here, we remark that in previous work [26] we performed a finite-size study of VQPE by comparing the dynamics of progressively larger truncations of Cr_2, from one thousand to one million determinants, demonstrating that vastly different truncation sizes result in the same convergence.
We test two candidate initializations, the uniform superposition |Φ_U⟩ and the Hartree-Fock state |Φ_HF⟩, whose ground state convergence is shown in Fig. 12. As the lower energy reference, the Hartree-Fock state accelerates the rate of convergence and reaches chemical accuracy in on the order of 10 timesteps. The observed fast convergence can be attributed to two primary features of |Φ_HF⟩: (i) its significant overlap with the low energy eigenstates, which persists across different system sizes and ensures an exponential decrease of the energy error, and (ii) its relatively tight support in the Hilbert space, which effectively reduces the rank of the Hamiltonian and enlarges the spectral gap. Although this speedup necessitates a Hartree-Fock preparation beforehand, a handful of known techniques can be invoked to minimize the cost of such preprocessing, so that |Φ_HF⟩ remains an advantageous choice for eigenstate recovery in molecular systems. Moreover, we show in Fig. 12 the ground state convergence when the initial state is randomized, as encountered in standard Krylov subspace methods, e.g. the Lanczos algorithm. The randomization involves a draw of i.i.d. variables {ϕ_n}_{n=1}^{Q} followed by a normalization |Φ_0⟩ = (ϕ_1, ···, ϕ_Q) → |Φ_0⟩/√⟨Φ_0|Φ_0⟩. We take our drawing distribution to be uniform U[0, 1] and plot the convergence envelope. By the central limit theorem, we expect the envelope to narrow and match the convergence behavior for |Φ_U⟩ in the large-Q limit.
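The large-Q concentration of the random initialization can be illustrated with a short sketch (our own illustration, assuming NumPy): for ϕ_n ∼ U[0, 1], 𝔼ϕ = 1/2 and 𝔼ϕ² = 1/3, so the overlap with |Φ_U⟩ concentrates at (1/2)/√(1/3) = √3/2 with O(1/√Q) fluctuations:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 10_000

# Random initialization a la Lanczos: draw i.i.d. U[0,1] amplitudes, normalize.
phi = rng.uniform(0.0, 1.0, size=Q)
phi /= np.linalg.norm(phi)

# Overlap with the uniform superposition |Phi_U> = (1, ..., 1)/sqrt(Q).
overlap = phi.sum() / np.sqrt(Q)

# CLT: overlap -> E[phi]/sqrt(E[phi^2]) = (1/2)/sqrt(1/3) = sqrt(3)/2 for large Q.
print(overlap, np.sqrt(3) / 2)
```

The large fixed overlap explains why the convergence envelope of the randomized start narrows onto the |Φ_U⟩ behavior as Q grows.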

Conclusion
In this work, we study the class of subspace expansion algorithms utilizing a real-time evolved basis, providing a detailed analysis of the ideas underlying the original VQPE formalism from our previous work [26]. The main new results presented here are the following: we have demonstrated a visualization of the convergence of the single-step and multi-step real-time algorithms; we have supplemented our visualization with a proof of the observed convergence, analogous to that constructed to justify Lanczos-type algorithms; and we have introduced different algorithmic implementations for generating real-time states, including an iterative implementation that holds interesting convergence properties of its own. Given the significant recent interest in real-time algorithms on quantum devices [24-26, 53, 54], we believe that our work provides a timely and important step forward in understanding the properties of these algorithms as they are further developed for quantum computation. Our analysis, additionally, remains quite general and can also be used to advance these types of algorithms on classical architectures.

Acknowledgments
We are grateful for support from NASA Ames Research Center. We acknowledge funding from the NASA ARMD Transformational Tools and Technology (TTT) Project.
Calculations were performed as part of the XSEDE program.

B Effect of the spectral gap

Consider a linear spectrum with an adjustable gap, E_n = nΔE + (1 − δ_{1,n})ϵ_12, where ϵ_12 ∈ (−ΔE, ∞) denotes the signed excess excitation between the ground and first excited state. Unlike a spectrum shift, which physically translates to a reset of the zero-point energy and thus preserves the population (one may check that Eq. (17) remains invariant under the energy shift E_n → E_n + E_0), a change in the spectral gap ϵ_12 can induce a population transfer, where the lower energy population gets enhanced by a larger gap value. Such nontrivial influence of the gap is manifested in the eigenstate population, whose functional form is immediately accessible once we identify how the matrix elements transform under the gap change E_n → E_n + (1 − δ_{1,n})ϵ_12.

Figure 13: Dependence of eigenstate population on the single timestep (linear spectrum with spectral gap E_n = nΔE + (1 − δ_{1,n})ϵ_12). The ground state energy error δE_1 = ⟨Ψ_g|Ĥ|Ψ_g⟩ − E_1 is plotted over the timestep size t_1ΔE = κ2π/Q. Here δE_1 is plotted relative to the initial error ⟨Φ_0|Ĥ|Φ_0⟩ − E_1 and takes a value between 0 (exact recovery of the ground state) and 1 (no improvement over the initial estimate). Curve color indicates the value of the spectral gap and interpolates between yellow (ϵ_12 = 0) and dark green (ϵ_12 = 400ΔE).
Note that increasing ϵ_12 in this case can enhance the population p_1, but at a likely cost of compromising energy accuracy in the single-step limit. Such a trade-off is shown in Fig. 13. Even with the optimal timestep, the relative error in the approximate ground state energy (with respect to the equal superposition starting state) exhibits a nonmonotonic dependence on the gap value. In the extreme case ΔE → 0 and ϵ_12 → ∞, we recover the unstructured search, for which single-step VQPE gives the exact result.
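The relative error of Fig. 13 can be reproduced numerically. The sketch below (our own illustration, assuming NumPy; the single-step subspace is span{|Φ_0⟩, Û(t_1)|Φ_0⟩}, and the representative timestep choice κ = 1 and gap values are ours) computes the relative ground-energy error for several gaps:

```python
import numpy as np

def single_step_error(eps12, Q=64, dE=1.0):
    """Relative ground-energy error of single-step VQPE on the gapped linear
    spectrum E_n = n*dE + eps12 for n >= 2, starting from the uniform state."""
    E = dE * np.arange(1, Q + 1) + eps12 * (np.arange(Q) > 0)
    t1 = 2 * np.pi / (Q * dE)                   # timestep with kappa = 1 (our choice)
    psi = np.ones(Q) / np.sqrt(Q)
    # Two-dimensional subspace {|psi>, U(t1)|psi>} in the eigenbasis of H.
    B = np.stack([psi, np.exp(-1j * E * t1) * psi])
    S, H = B.conj() @ B.T, B.conj() @ (B * E).T
    # Generalized eigenproblem H c = lam S c, solved by whitening with S.
    w, V = np.linalg.eigh(S)
    X = V[:, w > 1e-12] / np.sqrt(w[w > 1e-12])
    lam = np.linalg.eigvalsh(X.conj().T @ H @ X)
    E0_var = lam[0]                             # variational ground energy
    E0_init = psi @ (E * psi)                   # uniform-state (initial) energy
    return (E0_var - E[0]) / (E0_init - E[0])   # 0 = exact, 1 = no improvement

errs = [single_step_error(g) for g in (0.0, 10.0, 100.0)]
print(errs)
```

By the variational principle the relative error always lies in [0, 1], and already at ϵ_12 = 0 a single step improves noticeably on the uniform-state estimate; scanning finer gap grids exposes the nonmonotonic dependence discussed above.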

C Exponentially fast convergence of the ground state of a harmonic spectrum
Recall that the eigenstate population takes a sinusoidal form after a single step. In particular, the phase offset χ is determined by the Hamiltonian and overlap matrix elements, while the factor Z provides a scaling that normalizes the eigenstate population. In the expressions above, arg(·) denotes the argument of a complex number and μ̃, μ, ρ̃, ρ, g, λ_± are auxiliary variables defined in Eqs. (71)-(72). In terms of the matrix elements, these auxiliary variables are written with ℜ and ℑ labeling the real and imaginary parts of the matrix elements, respectively. Notice that the dependence on t_1 is implicit in the definitions of the auxiliary variables. Fig. 14 shows the phase and normalization of the eigenstate population for the gapped Hamiltonian as a function of the evolution time t_1. Note that the phase offset χ stays linear for short times and then undergoes damped oscillations, where the spectral gap sets the slope and the envelope in the linear and oscillatory regimes, respectively. The normalization factor Z grows quadratically for short times and then plateaus to an asymptotic value around Q with intertwined oscillations, whose envelope is likewise set by the gap value. The condition on the phase factor ΔEt_1 for the derivation of Eq. (70) is as follows: at fixed ΔE, invariance of H and S under periodic shifts of ΔEt_1 by 2πZ suggests that we can always assume ΔEt_1 ∈ (0, 2π). To ensure that the constituent expressions given by Eqs. (73)-(79) are well-defined, we also exclude the measure-zero subset of t_1 values for which g(t_1) = 0 and ρ(t_1) ≤ 0. When QΔEt_1 ∈ 2πZ, it is straightforward to check that S = I and our previous formula seems to fail with ℜS_01 = ℑS_01 = |S_01| = 0. However, lim_{ΔEt_1→2πk/Q} p_n(t_1) exists for integer 1 ≤ k ≤ Q − 1 and matches the analytical expression from the diagonalization of H. Hence Eq. (70) remains valid almost everywhere on the phase interval (0, 2π).
For the case ΔEt_1 = π and Q ∈ 2Z_+ − 1, we can show that the extracted ground state takes a form almost identical to the case Q ∈ 2Z_+ specified in the main text, except that the solution above is degenerate.
On the other hand, a generic choice of ΔEt_1 ∈ (0, 2π) influences the extracted population profile in a nonmonotonic way. For each eigenstate |n⟩, the population p_n(t_1) oscillates between its local extrema at a characteristic rate of 2π/[χ(t_1) + nt_1ΔE] as we vary ΔEt_1. Consequently, we expect some region in the phase parameter space where the population of the low energy eigenstates fully dominates that of the higher energy eigenstates, and hence the extracted ground state |Ψ_g⟩ is optimal. Such nonmonotonicity is explicitly shown in Fig. 2 using an optimal timestep ΔEt_1 ∈ (0, π/Q).

D Phase cancellation from optimized Möbius transformations
For multi-step time evolution, we can rewrite the extracted ground state in nested form, where R̂_j : v ↦ |Φ_{j−1}⟩ + c_j v defines the nested linear combinations. In the second equality, Δt_j = t_j − t_{j−1}, while T̂_j is the |Φ_0⟩-translation of the image subspace Im Û(Δt_j). Eq. (81) recapitulates that a pairwise combinator of the form |Φ_{j−1}⟩ + c_j|Φ_j⟩ rotates the accumulated phases exp(−iE_nΔt_j) commonly via c_j (up to a stretch) and tilts the rotated phases via the addition of 1. If we project this operator identity onto the eigenstate |n⟩, we may view the emergent algebraic recursion, with z_0(n) = 1, as a sequence of Möbius transformations that direct simple geometric moves in the complex plane. In particular, each geometric move G_j consists of a phase calibration z → exp(−iE_nΔt_{N_T−j+1})z followed by a stretching rotation z → c_j z and an additive tilt z → z + 1 as described above, where the last move yields the eigenstate population |z_{N_T}(n)|² = |⟨n|Ψ_g(t_1, ···, t_{N_T})⟩|². Now recall that the weights {c_j}_{j=1}^{N_T} are optimized to maximally restrict the excited state population. Geometrically, the weights hence encode an optimal sequence of phase moves that best lower the energy cost, where the roots {n_k}_{k=1}^{N_ℓ} designate N_ℓ spectral landmarks of high energy cost. Heuristically, the N_T complex degrees of freedom available in a move sequence can handle at most N_ℓ = N_T independent nodal constraints z_{N_T}(n_k) = 0, so we expect a one-to-one correspondence between moves {G_j}_{j=1}^{N_T} and distinct landmarks {n_k}_{k=1}^{N_T} given a suitably short evolution t⃗. For N_T = 1, the move G_1 involves a supplementary rotation that locks the calibrated phase exp(−iE_{n_1}t_1) → −1 and a subsequent counteractive tilt.
For N_T ≥ 2, a sequential move G_j involves a stretching rotation that clusters the calibrated phases around the negative real axis and a subsequent tilt that sends the phase cluster across the imaginary axis, successively flipping the relative phases before the final counteraction of G_1. These different moves are schematically shown in Fig. 15. For the specific scenario N_ℓ = N_T = Q − 1, the set of nodal constraints may be regarded as a restriction over the energy domain dual to the phase cancellation conditions (PCCs) over the time domain explored in our previous work [26].

Figure 15: Linear combination of time-evolved states interpreted as phase rotation and tilting. The transformed variables z_j(n_k) are plotted as filled markers, where the defining recursion in Eq. (82) is regarded as a sequence of geometric moves acting on the complex plane. Marker color highlights the action of the N_T moves G_j and interpolates between purple (j = N_T) and red (j = 1). For any given color, marker transparency distinguishes the N_T eigenstates n_k whose population is to be suppressed after the sequence of moves. For reference, the calibrated phases exp(−iE_{n_k}t_1) are displayed alongside as half-filled markers on the unit circle |z| = 1.
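The nodal constraints z_{N_T}(n_k) = 0 can be imposed directly as a linear (Vandermonde-type) system in the weights. The sketch below (our own illustration, assuming NumPy; the spectrum, timestep, and landmark choices are ours, and the weights realize only the nodal constraints, not the full energy-optimized VQPE solution) fixes c_0 = 1 and suppresses N_T = 3 high-energy landmarks:

```python
import numpy as np

# Spectrum, linear time grid, and N_T = 3 high-energy "landmarks" to suppress.
E = np.arange(1.0, 7.0)                  # 6 eigenvalues
t1, NT = 0.3, 3
t = t1 * np.arange(NT + 1)               # t_0 = 0, ..., t_NT
landmarks = [3, 4, 5]                    # indices of the eigenstates to zero out

# Amplitude in eigenstate n after the combination: sum_j c_j exp(-i E_n t_j),
# with c_0 = 1. Impose the N_T nodal constraints z(n_k) = 0, solve for c_1..c_NT.
A = np.exp(-1j * np.outer(E[landmarks], t[1:]))
c = np.concatenate([[1.0], np.linalg.solve(A, -np.ones(NT))])

amp = np.exp(-1j * np.outer(E, t)) @ c   # resulting amplitude for every eigenstate
print(np.abs(amp[landmarks]))            # ~ 0: landmarks suppressed
print(np.abs(amp[:3]))                   # low-energy amplitudes survive
```

The three complex weights exactly cancel the three landmark phases while leaving the low-energy amplitudes finite, the one-to-one correspondence between moves and landmarks described above.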

E Preparation of Ground State from IVQPE
We note that the preparation of ground states in a quantum circuit by IVQPE is no different from that by VQPE, even though IVQPE runs its own classical processing and generates a sequence of intermediate states as the time-evolved states. For convenience, we focus our discussion on the simple case N_I = 1.
In particular, our phase cancellation intuition suggests that each IVQPE step acts as a sum of two unitary gates, Î + exp(iϑ)Û(Δt_j), where the interfering phase ϑ changes after each iterative step. Childs and Wiebe [41] provide the success probability of applying linear combinations of unitary operations (LCU), which yields the success probability of the first iterative step, where E_x specifies the center of spectral decay. Upon an LCU measurement that indicates success, we obtain the first intermediate state |Φ_0⟩ → |Ψ_g(j = 1)⟩ with |z_n|² → |z_n|² sin²[(E_x − E_n)t_1/2] (up to an overall normalization). We then proceed to implement the next sum of unitaries on this intermediate state.
Hence an approximate ground state from an iterative evolution of N_T timesteps can be prepared via LCU with a compounded success probability. For this probability to take any reasonable value, the low energy amplitudes |z_n| must be significant. This can be realized in quantum chemistry applications, for example, if we start with a Hartree-Fock state. Given such an initial state, preparation of the intermediate states using a quantum circuit can be simulated, and the associated probabilities of implementing the first few IVQPE steps are displayed in Fig. 16 for a model molecular Hamiltonian. As we tune the size of the timestep, we observe individual probabilities that are relevant for practical implementation. A general time grid dependence is further investigated in Fig. 17, where, with reasonable P_IVQPE, the approximate ground state can be prepared from a range of time parametrizations.
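The single-step success probability can be checked against the sin² suppression quoted above. In the sketch below (our own illustration, assuming NumPy; the toy spectrum, initial amplitudes, and choice of E_x are ours, and the 1/2 prefactor reflects the standard one-ancilla LCU convention) we apply (Î + e^{iϑ}Û(t_1))/2 in the eigenbasis:

```python
import numpy as np

# Toy spectrum and an initial state with amplitudes z_n in the eigenbasis.
Q = 16
E = np.arange(1.0, Q + 1)
z = np.exp(-0.3 * E)                       # concentrate weight at low energy
z /= np.linalg.norm(z)

# One IVQPE step as an LCU: apply (I + e^{i theta} U(t1))/2 and post-select.
t1, Ex = 0.5, E[-1]                        # Ex: center of spectral decay (our choice)
theta = Ex * t1 + np.pi                    # phase that converts cos^2 into sin^2
out = (z + np.exp(1j * theta) * np.exp(-1j * E * t1) * z) / 2
p_success = np.linalg.norm(out) ** 2

# Closed form quoted in the text: sum_n |z_n|^2 sin^2((Ex - E_n) t1 / 2).
p_formula = np.sum(np.abs(z) ** 2 * np.sin((Ex - E) * t1 / 2) ** 2)
print(p_success, p_formula)
```

The post-selection probability of the explicit circuit action matches the closed form term by term, confirming that significant low-energy amplitudes |z_n| keep the probability practically useful.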

F Noise Modeling from Spectral Statistics
To see how the target spectral DOS ω(E) can broaden in the presence of noise, let us consider a phenomenological model [59,60] in which the Hamiltonian Ĥ undergoes a Hermitian stochastic perturbation Ĥ → Ĥ + V̂(t) during the time evolution. V̂(t) in the computational basis is taken to be a Gaussian random matrix, and for now we assume a memoryless perturbation, i.e., V̂(t) is uncorrelated with V̂(t′) unless t = t′. Without loss of generality, we are free to make a change of basis, since Eq. (86) is invariant under any similarity transformation. Thus, working in the eigenbasis of Ĥ and using standard results from perturbation theory in quantum mechanics, we find that, up to first order (in the operator norm of the perturbation), the DOS becomes ω(E) = Σ_{n=1}^{Q} g_n(E), where the broadening g_n is given by a Gaussian centered at E_n. Similarly, higher order corrections lead to distinct functional forms of g_n as long as we remain in the perturbative regime. Such spectral broadening is illustrated in Fig. 18.
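The first-order Gaussian broadening admits a quick numerical check: for well-separated levels, the eigenvalues of Ĥ + εV̂ shift by εV_nn to first order, so each level acquires a Gaussian profile whose width is set by the diagonal statistics of V̂. A minimal sketch (our own illustration, assuming NumPy and a GUE-type perturbation with unit-variance diagonal):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, eps, trials = 5, 0.05, 2000

# Unperturbed spectrum with well-separated levels.
E = 10.0 * np.arange(Q)

# Hermitian stochastic perturbation: the lowest eigenvalue of H + eps*V shifts,
# to first order, by eps*V_00, broadening the level into a Gaussian of width eps.
shifts = np.empty(trials)
for k in range(trials):
    A = rng.standard_normal((Q, Q)) + 1j * rng.standard_normal((Q, Q))
    V = (A + A.conj().T) / 2                      # V_nn ~ N(0, 1) on the diagonal
    shifts[k] = np.linalg.eigvalsh(np.diag(E) + eps * V)[0] - E[0]

print(shifts.mean(), shifts.std())                # ~ 0 and ~ eps, respectively
```

The sampled shifts are centered near zero with standard deviation close to ε, i.e., the Gaussian g_n of the perturbative argument; shrinking the level spacing or enlarging ε drives the model out of the perturbative regime and distorts this profile.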