Measurement optimization of variational quantum simulation by classical shadow and derandomization

Simulating large quantum systems is the ultimate goal of quantum computing. Variational quantum simulation (VQS) gives us a tool to achieve this goal on near-term devices by distributing the computational load between classical and quantum computers. However, as the size of the quantum system becomes large, the execution of VQS becomes more and more challenging. One of the most severe challenges is the drastic increase in the number of measurements; for example, the number of measurements tends to increase by the fourth power of the number of qubits in a quantum simulation with a chemical Hamiltonian. This work aims to dramatically decrease the number of measurements in VQS by recently proposed shadow-based strategies such as classical shadow and derandomization. Even though previous literature shows that shadow-based strategies successfully optimize measurements in variational quantum optimization (VQO), how to apply them to VQS was unclear due to the gap between VQO and VQS in measuring observables. In this paper, we bridge the gap by changing the way of measuring observables in VQS and propose an algorithm to optimize measurements in VQS by shadow-based strategies. Our theoretical analysis not only reveals the advantage of using our algorithm in VQS but also theoretically supports using shadow-based strategies in VQO, whose advantage has previously been shown only numerically. Additionally, our numerical experiment shows the validity of using our algorithm with a quantum chemical system.


Introduction
With the advent of noisy intermediate scale quantum (NISQ) computers [1], the next milestone is to realize their practical applications. A class of quantum algorithms called hybrid quantum-classical algorithms (see [2,3]) came to attention because the computational load can be distributed between quantum computers and classical computers; accordingly, the depth of the quantum circuit may be reduced. Among hybrid algorithms, variational quantum algorithms (VQAs) [4][5][6][7][8][9] have been extensively studied for diverse applications such as quantum computational chemistry [5,6], quantum machine learning [10], quantum metrology [11,12], etc. In VQAs, parametrized quantum states are generated from shallow quantum circuits, and the parameters are iteratively updated with the optimization using classical computers. VQAs can be broadly classified into two types: variational quantum optimization (VQO) [4][5][6][7][8] and variational quantum simulation (VQS) [9]. VQO computes and optimizes a problem-specific cost function; for example, in the famous variational quantum eigensolver (VQE) [5,6], the energy of parametrized quantum states is evaluated by measuring the Pauli operators constituting the problem Hamiltonian on quantum computers, and the minimization of the energy yields the ground state and its energy. Meanwhile, VQS simulates the dynamics of the quantum system, e.g., quantum real and imaginary time evolution [9,13]. Our contributions are as follows:
• We propose an algorithm to optimize measurements in VQS by shadow-based strategies.
• We theoretically analyze the advantage of our algorithm in terms of the variance of the observable; the analysis is also applicable for assessing the merit of using the shadow-based strategy in VQO.
• We show the validity of our algorithm and the theoretical discussion by numerical demonstration.
We show that the derandomization [19], one of the shadow-based strategies, greatly reduces the number of measurements compared to the original approach.
We note that even though we focus on the shadow-based methods in this paper, we can also utilize grouping strategies for measurement optimization in VQS thanks to our circuit synthesis. In particular, the recent result [31] shows that grouping can outperform derandomization in VQE by optimizing the grouping parameters so that the variance of the observable is minimized. Also, in [32,33], the authors provide methods that may outperform the method in [31]. Thus, those grouping methods may yield better performance in VQS than shadow-based methods. It is still worth mentioning that those grouping methods need a reference quantum state for calculating the variance; in their work, the reference state is chosen to be a quantum state close to the ground state of the Hamiltonian, since they mainly target VQE. However, in most cases of VQS, our goal is not to find the ground state of the Hamiltonian, and the choice of a reference state needs to be investigated. We leave combining our methods with those grouping strategies for future work.

The rest of the paper is organized as follows. In Section 2, we give a brief review of VQS. We also state the challenge of VQS in the same section. We review the shadow-based strategy in Section 3; particularly, we discuss the classical shadow [17] and the derandomization [19] in detail. Section 4 is dedicated to proposing our algorithm that applies the shadow-based strategy to VQS. The advantage of utilizing the shadow-based strategy is discussed in the same section. In Section 5, we numerically demonstrate the performance of our algorithm. Finally, we conclude our paper with some discussions in Section 6.

Variational quantum simulation without measurement optimization
In this section, we review the variational quantum simulation (VQS) proposed and studied in the previous literature [9,13,14] and state a challenge of the algorithm. In Section 2.1, we give a general description of the VQS based on [37]. In Section 2.2, we show the variational real and imaginary time evolution as examples. Section 2.3 is dedicated to stating a problem of the previous approach in the VQS.

General processes
We define the quantum time evolution for a state |v(t)⟩ in this paper as the following differential equation:

B(t) (d/dt)|v(t)⟩ = A(t)|v(t)⟩, (2.1)

where A(t) and B(t) denote matrices. In the VQS, we parameterize |v(t)⟩ as |v(θ(t))⟩ by using a set of parameters θ(t) = (θ_1(t), ..., θ_{N_P}(t)). From McLachlan's principle [15],

δ ‖ B(t) (d/dt)|v(θ(t))⟩ − A(t)|v(θ(t))⟩ ‖ = 0, (2.2)

where ‖|ψ⟩‖ ≡ √⟨ψ|ψ⟩ and N_P is the number of parameters. We note that there are two other conventional variational principles besides McLachlan's principle [15]: the Dirac and Frenkel variational principle [38,39] and the time-dependent variational principle [40,41]. A detailed comparison of the different variational principles is made in [14]. The Dirac and Frenkel variational principle is not suitable for VQS since the equation for the parameters may involve complex solutions, which contradicts the requirement that the parameters are real. Even though the time-dependent variational principle gives a real solution, it is shown to be more unstable and not applicable to the evolution of density matrices or to imaginary time evolution. In contrast, McLachlan's principle generally produces stable solutions, and it is applicable to all VQS problems. Equation (2.2) is equivalent to

Σ_ℓ Re(⟨∂v(θ(t))/∂θ_k | B†(t)B(t) | ∂v(θ(t))/∂θ_ℓ⟩) θ̇_ℓ = Re(⟨∂v(θ(t))/∂θ_k | B†(t)A(t) | v(θ(t))⟩) (2.4)

for all k. We can further simplify (2.4) as

Σ_ℓ M_{k,ℓ} θ̇_ℓ = V_k, (2.5)

where

M_{k,ℓ} ≡ Re(⟨∂v(θ(t))/∂θ_k | B†(t)B(t) | ∂v(θ(t))/∂θ_ℓ⟩), (2.6)
V_k ≡ Re(⟨∂v(θ(t))/∂θ_k | B†(t)A(t) | v(θ(t))⟩), (2.7)

with Re[w] as the real part of the complex number w.
Once we compute M_{k,ℓ} and V_k, we can update the parameters by the Euler method with a small time step δt as

θ(t + δt) = θ(t) + M^{−1} V · δt, (2.8)

where M (V) is the matrix (vector) representation of M_{k,ℓ} (V_k). Next, we show how to compute M_{k,ℓ} and V_k.
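As an illustration, the Euler update (2.8) amounts to solving the linear system M θ̇ = V at each time step. A minimal sketch in Python (the function name is ours; a least-squares pseudo-inverse is used because M is often ill-conditioned in practice):

```python
import numpy as np

def euler_update(theta, M, V, dt, rcond=1e-8):
    """One Euler step of the VQS parameter update: theta <- theta + M^{-1} V * dt.

    M may be ill-conditioned, so we use a least-squares solve
    (pseudo-inverse) rather than an explicit matrix inverse.
    """
    theta_dot, *_ = np.linalg.lstsq(M, V, rcond=rcond)
    return theta + theta_dot * dt

# Toy example: M = I, V = (1, 2), so theta_dot = V.
theta = np.zeros(2)
theta = euler_update(theta, np.eye(2), np.array([1.0, 2.0]), dt=0.1)
```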

Estimation of M k and V k using quantum circuits
Let R be the unitary operator corresponding to the parameterized quantum circuit (PQC); namely,

|v(θ(t))⟩ = R|v_ref⟩, (2.9)

with |v_ref⟩ as a reference state. We often call R the ansatz. We expand R as

R ≡ R_{N_P}(θ_{N_P}) R_{N_P−1}(θ_{N_P−1}) ··· R_1(θ_1), (2.10)

where R_k is the k-th unitary operator applied to the reference state and θ_k is the parameter of R_k. Let us describe the derivative of R with respect to θ_k as

∂R/∂θ_k = Σ_{i=1}^{N_g} g_{k,i} R_{N_P}(θ_{N_P}) ··· R_{k+1}(θ_{k+1}) P_{k,i} R_k(θ_k) ··· R_1(θ_1), (2.11)

where N_g is the number of Pauli observables when expanding the derivative, g_{k,i} is a coefficient, and P_{k,i} is a tensor product of Pauli operators and identity: P_{k,i} ∈ {X, Y, Z, I}^{⊗n}. Applying (2.11) to (2.9) gives the corresponding expansion (2.12) of ∂|v(θ(t))⟩/∂θ_k. In typical settings, N_g = O(1) is satisfied. For example, if we embed each parameter as exp(−iθ_k P/2), where P denotes a tensor product of Pauli operators, then N_g = 1 and g_{k,1} = −i/2. Also, let us describe

B†(t)B(t) = Σ_{q=1}^{N_BB} β_q P_q, (2.14)
B†(t)A(t) = Σ_{r=1}^{N_BA} α_r P_r, (2.15)

where P_q and P_r are tensor products of Pauli operators and identity, β_q and α_r are coefficients, and N_BB and N_BA are the numbers of terms when decomposing B†(t)B(t) and B†(t)A(t) as sums of Pauli operators. By substituting (2.12) and (2.14) into (2.6), we obtain the expression (2.16) for M_{k,ℓ}; similarly, by substituting (2.12) and (2.15) into (2.7), we obtain the expression (2.17) for V_k. For computing (2.16), we need to evaluate each summand, which takes the form of the real part of a phase factor times an amplitude; this is possible by using the Hadamard test. In Fig. 1, we show the quantum circuit to compute the value for k ≤ ℓ. Note that since M_{k,ℓ} = M_{ℓ,k}, we do not need to consider the case when k > ℓ. Each summand of (2.17) can be evaluated analogously by using the quantum circuit in Fig. 2.
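The Hadamard test mentioned above can be sketched with a small statevector simulation. The following toy code (our own helper, exact expectation values rather than sampled shots) applies a Hadamard on the ancilla, a relative phase e^{iφ}, a controlled-U, and a final Hadamard; measuring Z on the ancilla then yields Re(e^{iφ}⟨ψ|U|ψ⟩):

```python
import numpy as np

def hadamard_test_expectation(psi, U, phi=0.0):
    """Simulate the Hadamard-test circuit and return <Z> on the ancilla.

    Circuit: ancilla in |0>, Hadamard, phase e^{i phi} on |1>, controlled-U,
    Hadamard, measure Z.  The result equals Re(e^{i phi} <psi|U|psi>).
    """
    d = len(psi)
    # Ancilla is the first (most significant) tensor factor: |0>|psi>.
    state = np.kron(np.array([1.0, 0.0], dtype=complex), psi)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    S_phi = np.diag([1.0, np.exp(1j * phi)])
    I = np.eye(d, dtype=complex)
    CU = np.block([[I, np.zeros((d, d))],
                   [np.zeros((d, d)), U]])      # controlled-U
    state = np.kron(H, I) @ state
    state = np.kron(S_phi, I) @ state
    state = CU @ state
    state = np.kron(H, I) @ state
    Z = np.diag([1.0, -1.0])
    return np.real(state.conj() @ (np.kron(Z, I) @ state))

# Compare against the analytic value on a random single-qubit state.
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
U = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli X
val = hadamard_test_expectation(psi, U, phi=np.pi / 2)
ref = np.real(np.exp(1j * np.pi / 2) * (psi.conj() @ U @ psi))
```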
Figures 1 and 2: We use the circuits enclosed by the dotted rectangles in Section 4. We note that the output quantum state after applying the gates in each dotted box is |ψ_φ⟩.

Variational real and imaginary time evolution
Here we show M_{k,ℓ} and V_k in real-time evolution (RTE) and imaginary time evolution (ITE) as examples of the VQS. The real-time evolution is an algorithm to simulate the Schrödinger equation under a time-evolution Hamiltonian H:

∂|v(t)⟩/∂t = −iH|v(t)⟩.

Thus, A(t) = −iH and B(t) = 1, and we obtain M_{k,ℓ} and V_k from (2.16) and (2.17), where we expand the Hamiltonian as H = Σ_{r=1}^{N_BA} α_r P_r with α_r as a real coefficient. The imaginary time evolution is the operation obtained by replacing t with −iτ and H with H − ⟨H⟩ in the Schrödinger equation:

∂|v(τ)⟩/∂τ = −(H − ⟨H⟩)|v(τ)⟩. (2.23)

The second term, which corresponds to ⟨H⟩, vanishes since the norm of |v(θ(t))⟩ is preserved under the transformation of θ(t); therefore, we obtain M_{k,ℓ} and V_k by setting B(t) = 1 and A(t) = −H when computing (2.16) and (2.17). The circuit used for evaluating each term of V_k in real and imaginary time evolution is Fig. 2, which we introduced in Section 2.1. On the other hand, a simpler circuit is used for evaluating M_{k,ℓ} in real and imaginary time evolution; we show the circuit in Fig. 3, and we can readily check that its output returns the required quantity.

Problems of the current approach
The approach mentioned above for performing the VQS is challenging to implement because the number of measurements increases linearly with the numbers of terms N_BB and N_BA. That is because we need to evaluate the circuit corresponding to each term inside the summations of (2.16) and (2.17) one by one (the number of measurements also increases linearly with N_g, but this is not problematic since N_g is typically small). For example, in RTE and ITE, the number of measurements for evaluating V_k scales linearly with the number of terms in H, which could be impractical when H has many terms. Suppose that we use a molecular Hamiltonian as the time-evolution Hamiltonian. In that case, the number of terms in the Hamiltonian is O(n^4) with n as the number of qubits, and therefore, the number of measurements also increases in proportion to O(n^4).
The challenge of a large number of Pauli observables in the Hamiltonian is also addressed in the variational quantum eigensolver (VQE), and many solutions have been proposed [6, 16-22, 26, 27, 30, 42-47]. However, to the best of our knowledge, no previous work has explored this issue in the context of the VQS. Importantly, there is a crucial difference between VQS and VQE: in VQS, we need the controlled operations associated with each Pauli observable to evaluate each term in the summations of (2.14) and (2.15), while the VQE only requires estimating the expectation value of each Pauli observable. Thus, it is not straightforward to apply the methods discussed in [6, 16-22, 26, 27, 30, 42-47] to VQS. In the rest of the paper, we show a way to overcome this difficulty and discuss how to apply the methods proposed for the VQE to the VQS. Before discussing the main point, we review those methods in the next section.
It should be noted that in the case of ITE, the parameter-shift rule [48] is applicable to calculate V_k under a suitable condition (e.g., N_g = 1 with ig_{k,1} real, as for gates of the form exp(−iθ_k P/2)), meaning that the evaluation of V_k boils down to the evaluation of the expectation value of H = Σ_r α_r P_r with respect to a quantum state generated by a PQC. In that case, we can reduce the number of measurements by utilizing the techniques proposed for VQE. Still, in most of the applications of VQS, the parameter-shift rule is not applicable; therefore, we need to construct a measurement optimization procedure tailored for VQS.
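A one-qubit toy example illustrates the parameter-shift evaluation mentioned above. Here the circuit is exp(−iθX/2)|0⟩ and the observable is Z, so ⟨Z⟩ = cos θ and the shift rule recovers the derivative −sin θ; the function names are ours, not from the paper:

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)

def expval_Z(theta):
    """<Z> of the state exp(-i theta X / 2)|0>; analytically cos(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    psi = np.array([c, -1j * s])           # exp(-i theta X/2)|0>
    return np.real(psi.conj() @ Z @ psi)

def param_shift_grad(f, theta):
    """Parameter-shift rule for gates of the form exp(-i theta P / 2):
    df/dtheta = (f(theta + pi/2) - f(theta - pi/2)) / 2."""
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

theta = 0.3
grad = param_shift_grad(expval_Z, theta)   # exact derivative is -sin(0.3)
```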

Measurement optimization for sums of Pauli observables
In this section, we review strategies to reduce the number of measurements in the VQE when the Hamiltonian has many terms, such as H = Σ_{j=1}^{K} a_j P_j for K ≫ 1. In the VQE, we need to estimate the value ⟨H⟩ = Σ_{j=1}^{K} a_j Tr(P_j ρ), where {a_j} are real coefficients, {P_j} are Pauli observables (P_j ∈ {X, Y, Z, I}^{⊗n}), and ρ is an n-qubit quantum state generated by a quantum circuit. When K increases, the number of measurements to achieve a given precision increases linearly as long as we naively evaluate each term.
To improve the scaling of the number of measurements, several solutions have been proposed [6, 16-22, 26, 27, 30, 42-47]. The most promising solutions are arguably shadow-based strategies [17][18][19][20][21], which we review in this section. In Section 3.1, we first give a general discussion of shadow-based strategies [17][18][19][20][21]. Next, in Section 3.2, we show the details of the shadow-based strategies that we examine in our numerical experiment in Section 5.

General discussion for shadow-based strategies
Shadow-based strategies for estimating ⟨H⟩ [17][18][19][20][21] are mostly composed of three steps. The first step of the estimation process takes the number of measurements N_shot, {a_j}_{j=1}^{K}, and {P_j}_{j=1}^{K} as its inputs. Let us denote a measurement basis by one of the Pauli observables {X, Y, Z}^{⊗n}, where each measurement is performed in the basis where the Pauli observable is diagonalized. For example, when n = 3, the measurement basis can be expressed as XZX, meaning that the quantum state ρ is measured in the computational basis after we apply the Hadamard gates to the first and the third qubits. The first step returns the array of measurement bases as {M_r}_{r=1}^{N_shot}; how to select those bases depends on the algorithm. We define the function corresponding to the first step as buildMeasurements({a_j}, {P_j}, N_shot). The second step takes {M_r}_{r=1}^{N_shot} and ρ as its inputs and returns the measurement results of ρ as {b_r}_{r=1}^{N_shot}, where b_r ∈ {1, −1}^{⊗n}. We denote the function corresponding to the second step as getMeasurementResults({M_r}_{r=1}^{N_shot}, ρ). The third step takes {M_r}_{r=1}^{N_shot}, {b_r}_{r=1}^{N_shot}, {P_j}_{j=1}^{K}, and {a_j}_{j=1}^{K} as its inputs, and returns the following ν:

ν = Σ_{j=1}^{K} a_j (1 / (N_shot q(P_j))) Σ_{r=1}^{N_shot} f(P_j, M_r) μ(P_j, b_r).

Here, f(P, M) denotes a covering function, q(P_j) denotes a covering probability function, and μ(P, b) denotes an estimation function. These are defined so that ν is an unbiased estimator of Tr(Hρ).
We show the detailed definitions as follows.
Covering function f (P, M) The covering function f(P, M) takes a Pauli observable P and a measurement basis M as its inputs, and returns [19]

f(P, M) = 1 if M covers P, i.e., if M_i = P_i on every qubit i where P_i ≠ I, and f(P, M) = 0 otherwise.

Covering probability function
We define the covering probability function q(P) depending on whether we generate the measurement bases probabilistically or not. When they are probabilistically generated according to a probability distribution q_m(M), q(P) is the probability that the Pauli observable P is covered, namely

q(P) = Σ_M q_m(M) f(P, M).

When measurements are chosen deterministically, we define the covering probability function by the ratio at which P is covered by the measurements {M_r}_{r=1}^{N_shot}, namely,

q(P) = (1/N_shot) Σ_{r=1}^{N_shot} f(P, M_r).

Note that in this deterministic case, q(P) depends on the chosen measurement bases {M_r}_{r=1}^{N_shot}, but we omit the corresponding index from q(P) for simplicity.

Estimation function
The estimation function μ(P, b) takes the Pauli observable and the bit array b ∈ {1, −1}^{⊗n} as its inputs and operates as follows:

μ(P, b) = Π_{i: P_i ≠ I} b_i,

i.e., the product of the outcome bits on the qubits where P acts non-trivially. As we show in Appendix A, ν becomes an unbiased estimator of Tr(Hρ), i.e., E[ν] = Tr(Hρ). The variance of ν is given by (3.10) in the probabilistic case and by (3.11) if {M_r}_{r=1}^{N_shot} is deterministically chosen. We write the three steps of obtaining ν as estimateNu(ρ, {a_j}_{j=1}^{K}, {P_j}_{j=1}^{K}, N_shot) and summarize it in Algorithm 1. It should be noted that the variance Var(ν) determines the number of measurements N_shot to achieve a given precision ε; namely, N_shot scales as N_shot = O(Var(ν)/ε^2).
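Putting the three ingredients together, a minimal Python sketch of f, the deterministic q, μ, and the estimator ν (written as ν = Σ_j a_j (N_shot q(P_j))^{−1} Σ_r f(P_j, M_r) μ(P_j, b_r), consistent with the definitions above; Pauli strings are plain Python strings here, and all helper names are ours):

```python
def f_cover(P, M):
    """Covering function: 1 if basis M covers Pauli string P, else 0.
    P is over {'X','Y','Z','I'}; M is over {'X','Y','Z'}."""
    return int(all(p == 'I' or p == m for p, m in zip(P, M)))

def q_cover(P, bases):
    """Deterministic covering probability: fraction of bases covering P."""
    return sum(f_cover(P, M) for M in bases) / len(bases)

def mu(P, b):
    """Estimation function: product of outcome bits on non-identity positions."""
    out = 1
    for p, bit in zip(P, b):
        if p != 'I':
            out *= bit
    return out

def estimate_nu(coeffs, paulis, bases, outcomes):
    """Estimator nu of Tr(H rho) from bases {M_r} and outcomes {b_r}."""
    n_shot = len(bases)
    nu = 0.0
    for a, P in zip(coeffs, paulis):
        q = q_cover(P, bases)
        if q == 0:
            continue  # P never measured; in practice the bases should cover all terms
        s = sum(f_cover(P, M) * mu(P, b) for M, b in zip(bases, outcomes))
        nu += a * s / (n_shot * q)
    return nu

# Toy check: H = 2 * ZI and outcomes deterministically (+1, +1)
# (as for rho = |00><00|), so nu should recover Tr(H rho) = 2.
nu = estimate_nu([2.0], ['ZI'], ['ZZ'] * 4, [(1, 1)] * 4)
```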

Examples of shadow-based strategies
The shadow-based strategy was originally proposed in [17], and various methods motivated by the technique have been proposed [17][18][19][20][21], which mainly aim to reduce the value of Var(ν) in (3.10). Among them, we review the classical shadow and the derandomization in the following discussion.

Classical shadow
In the classical shadow, we randomly choose the measurement bases. Namely, each M_r[i] (the i-th Pauli matrix in the r-th measurement basis) is chosen from {X, Y, Z} with equal probability, i.e., q_m(M) = 1/3^n. We show the pseudo-code for buildMeasurements(N_shot) in the classical shadow in Algorithm 2. We can readily show that the covering probability is given by

q(P) = (1/3)^{locality(P)}, (3.13)

where locality(O) is the number of non-identity Pauli matrices in the Pauli observable O. As we show in Appendix B, the variance of ν then follows by substituting (3.13) into (3.10).
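A quick numerical sanity check of (3.13): sampling uniformly random bases and counting how often a locality-2 Pauli string is covered should reproduce q(P) = (1/3)^2 ≈ 0.111. This is a toy sketch with our own helper names:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_bases(n_qubits, n_shot):
    """Classical-shadow measurement bases: each letter uniform over X, Y, Z."""
    return [''.join(rng.choice(list('XYZ'), size=n_qubits))
            for _ in range(n_shot)]

def covered(P, M):
    """True if basis M covers Pauli string P."""
    return all(p == 'I' or p == m for p, m in zip(P, M))

def empirical_q(P, bases):
    """Empirical covering probability of P over the sampled bases."""
    return sum(covered(P, M) for M in bases) / len(bases)

P = 'XZI'                      # locality 2, so q(P) = (1/3)^2 = 1/9
bases = random_bases(3, 20000)
q_emp = empirical_q(P, bases)  # should be close to 1/9
```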

Derandomization
As we see in (3.10), the value of Var(ν) tends to be small if q(P j ) is large. From this point of view, the random choice of measurements in the classical shadow is not always efficient; particularly, when the locality of a Pauli term P j is large, the value of q(P j ) is exponentially suppressed, as we see in (3.13).
Thus, strategies that choose a measurement basis set covering the Pauli observables more efficiently have been proposed in consecutive works [18][19][20][21]. One such strategy is the derandomization [19]. Unlike the classical shadow, a pre-defined measurement basis set is used in the derandomization (thus, it corresponds to the deterministic case). For obtaining the pre-defined measurement bases {M_r}_{r=1}^{N_shot}, the procedure of the derandomization iteratively determines the Pauli matrix assigned to each qubit of each measurement basis, at each step choosing the assignment for which a score function C is optimized. The score function C is constructed so that improving C leads to the increase of {q(P_j)}_{j=1}^{K} in our notation, and it involves hyperparameters γ and η. In [19], some options for the choice of the score function C are given; among them, the one used in their numerical experiment is (3.20). For example, suppose that the observable is (3.22) and N_shot = 10. By minimizing (3.20) step by step, {M_r}_{r=1}^{10} includes five XXXZ and five YYZX, which leads to q(P_j) = 1/2 for every term. In the classical shadow, by contrast, (3.13) gives q(P_j) ≤ (1/3)^3, since every term has locality at least three. Thus, we see that the covering probability improves in the derandomization compared to the classical shadow.
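The following is a deliberately simplified greedy sketch of this deterministic idea: it picks whole candidate bases so that the currently least-covered term gets covered next, rather than optimizing the confidence-bound score of [19] letter by letter. On the two-term XXXZ/YYZX example above it reproduces the five-and-five basis split and a covering probability of 1/2 per term:

```python
def covered(P, M):
    """True if basis M covers Pauli string P."""
    return all(p == 'I' or p == m for p, m in zip(P, M))

def greedy_bases(paulis, n_shot):
    """Much-simplified derandomization-style heuristic (not the score
    function of [19]): repeatedly measure in the basis that covers the
    currently least-covered Pauli term."""
    # Candidate bases: each term with identities completed by 'Z'.
    candidates = [''.join('Z' if p == 'I' else p for p in P) for P in paulis]
    counts = [0] * len(paulis)
    bases = []
    for _ in range(n_shot):
        j = counts.index(min(counts))   # least-covered term so far
        M = candidates[j]
        bases.append(M)
        for i, P in enumerate(paulis):
            counts[i] += covered(P, M)
    return bases

# Two incompatible 4-qubit terms, as in the example around (3.22):
bases = greedy_bases(['XXXZ', 'YYZX'], 10)
q = [sum(covered(P, M) for M in bases) / len(bases) for P in ('XXXZ', 'YYZX')]
```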

Measurement optimization in the VQS
In this section, we propose the VQS with the shadow-based strategies reviewed in the previous section. In Section 4.1, we synthesize the circuits in Fig. 1 and Fig. 2 and remove unnecessary two-qubit operations. Based on the result of the circuit synthesis, we propose an algorithm applying shadow-based strategies to the VQS in Section 4.2. In Section 4.3, we discuss the advantage of using our proposed algorithm. Note that we discuss our proposed algorithm in general terms in this section and demonstrate its performance using RTE and ITE in the numerical demonstration in Section 5. Section 4.4 is dedicated to applying the implications obtained in Section 4.3 to the general shadow-based strategies discussed in Section 3.

Circuit synthesis
Let us define ρ_{φ_{k,i;ℓ,i'}} as the quantum state generated by applying the partial circuit enclosed by the dotted rectangle in Fig. 1 to the reference state. Let CP_q also be the controlled-P_q operation and I_n be the n-qubit identity operator. Then, for each q, k, and ℓ, the output of the circuit shown in Fig. 1 has the form

Tr(O_q ρ_{φ_{k,i;ℓ,i'}}), with O_q ≡ CP_q† (X ⊗ I_n) CP_q. (4.5)

Similarly, let ρ_{φ_{k,i}} be the quantum state generated by applying the partial circuit enclosed by the dotted rectangle in Fig. 2 to the reference state; the output of the circuit shown in Fig. 2 takes the analogous form with O_r ≡ CP_r† (X ⊗ I_n) CP_r. In general, given an n-qubit unitary operator U, estimating an observable written as Õ ≡ U(X ⊗ I_n)U† requires many two-qubit gates. In fact, previous literature utilizes many two-qubit gates to estimate O_q in (4.5), as we see in Fig. 1 and Fig. 2. However, in our case, we can write O_q in (4.5) as a single Pauli observable. That is because CP_q is a Clifford operator, and X ⊗ I_n is a Pauli observable; a Clifford operator generally maps one Pauli observable to another Pauli observable. Direct calculation gives the explicit form of O_q as

O_q = X ⊗ P_q.

To summarize, we do not need the two-qubit operations in Fig. 1 and Fig. 2, and we can compute each summand of (2.16) and (2.17) as the expectation value of the Pauli observable X ⊗ P_q or X ⊗ P_r with respect to ρ_{φ_{k,i;ℓ,i'}} or ρ_{φ_{k,i}}, respectively.
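The identity O_q = X ⊗ P_q can be checked numerically. Below, a controlled-P is built for a two-qubit Hermitian Pauli P (with the ancilla as the first tensor factor, an ordering convention of this sketch), and conjugating X ⊗ I_n indeed yields X ⊗ P:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

P = np.kron(Z, X)                      # a 2-qubit Pauli observable P_q
d = P.shape[0]
# Controlled-P: identity on the |0> ancilla branch, P on the |1> branch.
CP = np.block([[np.eye(d), np.zeros((d, d))],
               [np.zeros((d, d)), P]])
# O_q = CP^dagger (X ⊗ I_n) CP; for Hermitian P this equals X ⊗ P.
O = CP.conj().T @ np.kron(X, np.eye(d)) @ CP
expected = np.kron(X, P)
```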

VQS with shadow-based strategies
The benefit of writing each O_q and O_r as a Pauli observable is not limited to the reduction of the two-qubit gates. Although we consider a more general case later, we begin by describing a simplified scenario in which each phase of the summand in (2.16) does not depend on the index q and each phase of the summand in (2.17) does not depend on the index r; namely, the summands can be written with real values G_{k,i,ℓ,i',q} and G_{k,i,r} and phases e^{iφ_{k,i,ℓ,i'}} and e^{iφ_{k,i}} that are independent of the indices q and r. Then, both the summation over q in (2.16) and the summation over r in (2.17) take the form Σ_j a_j Tr(P_j ρ), with {a_j} as real coefficients, {P_j} as Pauli observables, and ρ as a quantum state. Therefore, we can use shadow-based strategies to efficiently compute the summation inside the brackets; the resulting quantities M̃_{k,ℓ} and Ṽ_k become unbiased estimators of M_{k,ℓ} and V_k. We summarize our proposed estimation process of M_{k,ℓ} and V_k in Algorithm 3 and Algorithm 4. It should be noted that our circuit synthesis is also useful for grouping strategies [8,16,18,22-33], mainly targeting VQO problems, where we need to calculate M^{i,i'}_{k,ℓ} and V^i_k. The variances of M̃_{k,ℓ} and Ṽ_k are given by using (3.10) with P̃_q = X ⊗ P_q and with P̃_r = X ⊗ P_r, respectively. These variances under the shadow-based strategy differ from the ones obtained when we naively estimate each summand one by one, which is done by the previous literature (we simply call that strategy the naive strategy in the rest of the paper). Let N_naive be the number of measurements for each circuit in the naive strategy, and let the corresponding estimators of M_{k,ℓ} and V_k be M̃^naive_{k,ℓ} and Ṽ^naive_k (4.21). The difference in the variances between the naive strategy and the shadow-based strategy results in a difference in the infidelity due to the shot noise for a given number of measurements. In the following subsection, we quantitatively evaluate how the variance affects the infidelity and clarify the benefit of using the shadow-based strategy in the VQS.
Note that in the above introduction of our proposed algorithm, we require that each phase of the summand in (2.16) does not depend on the index q and that in (2.17) does not depend on the index r. However, we can remove this requirement as follows. Instead of (2.14), we can expand B†(t)B(t) by splitting each coefficient β_q into its real part ℜ(β_q) and imaginary part ℑ(β_q), where ℑ(z) denotes the imaginary part of the complex number z. Then, we can evaluate the ℜ(β_q)P_q and ℑ(β_q)P_q contributions independently by the shadow-based strategy, though the number of measurements increases. The same holds for the computation of B†(t)A(t).

Hybrid strategy in RTE and ITE
In RTE and ITE, N_BB = 1 holds in the calculation of M_{k,ℓ}, and there is no advantage in using a shadow-based strategy for the computation of M_{k,ℓ}. Therefore, in RTE and ITE, we take the hybrid strategy, meaning that we estimate V_k by the shadow-based strategy and M_{k,ℓ} by the naive strategy. At the end of Section 4.3, we discuss the advantage of the hybrid strategy.

Algorithm 3
Estimation of M k with a shadow-based strategy.

Advantage of using shadow-based strategy
[Algorithm 4: Estimation of V_k with a shadow-based strategy.]

To clarify the advantage of using the shadow-based strategy over the naive strategy, we compare the infidelity due to the shot noise after the duration T. The time derivative of the parameter vector can be ideally computed as θ̇_ideal = M^{−1}V. However, we estimate M and V with a finite number of measurements, and therefore, the estimated time derivative includes the shot noise:

θ̇ = (M + δM)^{−1}(V + δV),

where δM and δV are the errors in M and V due to the shot noise. As in the literature [9], we use the infidelity (4.25) between |v(θ_ideal(T))⟩, evolved by the ideal time derivative, and the state evolved with the shot noise as a metric for evaluating the effect of the error. The resulting bounds involve Δ^max_naive ≡ max_{0≤t≤T}(Δ_naive) and Δ^max_shadow ≡ max_{0≤t≤T}(Δ_shadow), where ||·||_F is the Frobenius norm and ||·||_2 is the two-norm.
Note that the total number of measurements for each step in the case of the shadow-based strategy is N^total_shadow. Thus, the advantage of using a shadow-based strategy arises if N^total_shadow / N^total_naive < 1 when (4.26) is satisfied, i.e., when the errors of the two strategies are matched; this yields condition (4.28). In the rest of this section, we give a rough estimate of the left-hand side of the inequality.
Estimation of N^total_shadow / N^total_naive We assume that the values of ||M^{−1}||_F and ||M^{−1}||^2_F ||V||_2 when using the naive strategy are almost the same as those when using the shadow-based strategy. The assumption is valid since the time evolutions with both strategies are close to each other (recall that we consider the regime where the errors of the two strategies are comparable). Under the assumption, we obtain the inequality (4.29), and we evaluate each term on its right-hand side in the following. The numerator in the square root of the first term is governed by V^i_k(ρ). Since V^i_k(ρ) depends on the quantum state ρ, it is difficult to directly estimate its value. Instead of evaluating V^i_k(ρ), let us estimate the typical size of V^i_k(ρ) in the overall Hilbert space by computing ⟨V^i_k⟩_Haar and Var(V^i_k)_Haar, where ⟨·⟩_Haar is the average over the Haar distribution. By calculating those statistical values, we can see how V^i_k(ρ) is distributed when ρ is uniformly sampled. As we show in Appendix D, we can readily evaluate these averages, where n is the number of qubits in the PQC. By using Tr(P̃_r P̃_r' P̃_u P̃_u') ≤ 2^{n+1}, we can show Var(V^i_k) = O(1/2^n), which becomes asymptotically zero as n becomes large. Since the variance is exponentially suppressed, V^i_k(ρ_{φ_{k,i}}) takes a value almost equal to the average with high probability when ρ_{φ_{k,i}} is chosen uniformly. In the rest of this section, given a function ξ(ρ) of a density operator, if ⟨ξ(ρ)⟩_Haar = ξ_0 holds with ξ_0 as a constant and Var(ξ)_Haar is exponentially suppressed, we denote ξ(ρ) ≃ ξ_0. Also, for another function ξ'(ρ) of a density operator, if ξ'(ρ) ≤ ξ(ρ) ≃ ξ_0, we denote ξ'(ρ) ≲ ξ_0. Thus, with a large n, (4.34) holds.
In Eq. (4.29), the denominator of the square root in the first term is governed by G^{i,r}_k(ρ). As in the case of V^i_k(ρ), we can compute the average in the overall Hilbert space (see Appendix D) as (4.38). Thus, for a large enough n, G^{i,r}_k(ρ) ≃ (G_{k,i,r})^2, and we obtain (4.39). By combining (4.34) and (4.39), the first term on the right-hand side of (4.29) is bounded as (4.40), where ⟨·⟩_{G_{k,i,r}} is a weighted average defined in (4.41). Similarly, the second term is bounded as (4.42), where ⟨·⟩_{G_{k,i,ℓ,i',q}} is another weighted average defined by (4.43). By substituting (4.40) and (4.42) into (4.29), we obtain (4.44); combining with (4.28), we obtain (4.45). The factor (1 + N_P N_g)/(N_BA + N_P N_g N_BB) < 1 corresponds to the benefit of simultaneously evaluating the N_BA terms in the calculation of V_k and the N_BB terms in the calculation of M_{k,ℓ} in the shadow-based strategy. The other factor, ⟨1/q(P̃_r)⟩_{G_{k,i,r}} + ⟨1/q(P̃_q)⟩_{G_{k,i,ℓ,i',q}}, corresponds to how well the measurements in the shadow-based strategy cover the observables; if the observables are not covered efficiently, the benefit of the simultaneous evaluation may disappear due to this factor. In particular, when we use the classical shadow as the shadow-based strategy, we can explicitly write the latter factor by using (3.13) as (4.46). Thus, the locality of the observables should be small in order to enjoy the benefit of the classical shadow in the VQS. It should be noted that

locality(P̃_r) = locality(P_r) + 1, (4.47)

because P̃_r = X ⊗ P_r.
Advantages of using the shadow-based strategy in RTE and ITE Finally, we discuss the advantage of using the shadow-based strategy in RTE and ITE. As we noted at the end of Section 4.2, we take the hybrid strategy that utilizes the shadow-based strategy for computing V_k and the naive strategy for computing M_{k,ℓ}. Unlike the general case (where we measure both V_k and M_{k,ℓ} by the shadow-based strategy), we can make the number of measurements for estimating each M_{k,ℓ} smaller than that for estimating V_k (= N_shot); we set this number of measurements as αN_shot with 0 < α < 1. Let Δ_hybrid be the corresponding error. Then, as shown in (C.24) in Appendix C, if the error due to the shot noise in the naive strategy is almost the same as that in the hybrid strategy, a bound analogous to (4.28) holds, where we define Δ^max_hybrid ≡ max_{0≤t≤T}(Δ_hybrid). Given N^total_hybrid as the total number of measurements for each step in the hybrid strategy, it holds that N^total_hybrid = N_shot(N_P N_g + α(N_P N_g)^2). Under the same assumption used to obtain (4.29), we obtain (4.50)–(4.52). Note that the literature [49] states that, given a circuit structure, the number of measurements for estimating {M_{k,ℓ}} asymptotically becomes negligible compared to that for estimating {V_k} as the number of qubits n increases, provided N_BA increases faster than quadratically with n. Though the literature assumes the naive strategy to estimate {V_k}, the asymptotic behavior would be the same in our hybrid strategy. The condition that the number of measurements for {M_{k,ℓ}} is negligible corresponds to the case when αN_P N_g ≪ 1. In that case, (4.53) holds. For example, let us calculate the right-hand side of (4.53) when the time-evolution Hamiltonian is given by (3.22). We can write the Hamiltonian as a sum of Pauli terms,

(4.54)

and therefore, the number of terms is given by N_BA = 6. Recall that each P̃_r in (4.53) corresponds to each term in (4.54). Let us first calculate the covering probabilities when we use the classical shadow. Since the locality of each term in (4.54) is equal to or larger than three, locality(P̃_r) ≥ 4 and q(P̃_r) ≤ (1/3)^4, where we use (3.13) and (4.47). Thus, in the case of the classical shadow, the right-hand side of (4.53) is larger than one, and there is no advantage in using the classical shadow with the Hamiltonian (3.22). On the contrary, when using derandomization, half of the chosen measurements are XXXXZ, and the other half are XYYZX (see the discussion around (3.22)). Therefore, q(P̃_r) = 1/2 for every r, and the right-hand side of (4.53) is smaller than one. Thus, there is merit in using derandomization with the Hamiltonian (3.22). We note that we utilize a Hamiltonian with just six terms in this example, so the advantage of using the shadow-based methods is limited. In Section 5, we demonstrate the result of VQS when using the molecular Hamiltonian and see significant advantages of using the shadow-based methods.

Remark on the general shadow-based strategies
In Section 4.3, we showed the advantage of the shadow-based strategy in the context of VQS. The same argument holds even in the general estimation problem of ⟨H⟩ = Σ_{j=1}^{K} a_j Tr(P_j ρ) discussed in Section 3. Let ν_naive be the estimator of ⟨H⟩ when estimating each term of H one by one with N_naive measurements each. Also, recall that ν is the estimator of ⟨H⟩ when using a shadow-based strategy with N_shot measurements. Then, by the same arguments as in Section 4.3, an analogous variance bound can be shown. The total number of measurements is N^total_shadow = N_shot in the case of using the shadow-based strategy and N^total_naive = K N_naive in the case of naively measuring each term one by one. For achieving Var(ν) = Var(ν_naive), the relation (4.61) is necessary. Notably, as long as the covering probability q(P_j) is O(1), we can reduce the number of measurements by a factor of O(1/K) in the shadow-based strategy. The above observations consolidate the validity of using the shadow-based strategies [17][18][19][20][21] for estimating the sum of Pauli observables. In Eq. (18) of [18], an upper bound for the variance of ν is also given. However, it is as loose as (4.62) in our notation (in [18], the number of terms is written as |H_0| instead of K). We can easily check that if Var(ν) takes the value of the upper bound (4.62), then N^total_shadow > N^total_naive, and there is no advantage in using shadow-based strategies. In other words, previous research does not theoretically show the advantage of using a shadow-based strategy to estimate the sum of Pauli observables. Our upper bound, however, successfully removes the effect of the second term in (3.10) by averaging over the whole space of quantum states, and it shows that for most quantum states ρ, there is an advantage to using the shadow-based strategy.

Numerical Demonstration
Here we demonstrate our shadow-based measurement optimization strategy for VQS. In Section 4, we discussed our strategy in general; in this demonstration, we focus on RTE and ITE, which are the most actively studied VQS algorithms. As noted at the end of Section 4.2, we take the hybrid strategy, meaning that we obtain M_k by the naive strategy and V_k by the shadow-based strategy.
We perform two types of demonstrations. The first is the evaluation of the infidelity due to shot noise, defined in (4.25), for various Hamiltonians and various measurement strategies. The second is the evaluation of the variances of each strategy; in particular, we check the approximations obtained as a result of the Haar average over the Hilbert space.

Setting of numerical experiments
Here we summarize Hamiltonians, the reference state, and the ansatz used in the experiments. Also, we describe strategies to estimate M k and V k .

Hamiltonians
We use three types of problem Hamiltonians H:
• H_2 molecule in the 6-31g basis with the Bravyi-Kitaev transformation (8-
We write the number of qubits of the Hamiltonian as n. Here we define our Heisenberg model Hamiltonian by with σ_a^j as a Pauli operator acting on the j-th qubit, and σ_a^7 = σ_a^1.
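The periodic boundary condition σ_a^7 = σ_a^1 determines which Pauli strings appear in the model. As a small illustration, assuming the standard nearest-neighbor form Σ_j Σ_{a∈{x,y,z}} σ_a^j σ_a^{j+1} (the coefficients of our actual Hamiltonian are not reproduced here), the Pauli strings can be enumerated as follows:

```python
def heisenberg_terms(n):
    """Pauli strings of a 1D Heisenberg model on n qubits with the
    periodic boundary condition sigma^{n+1} = sigma^1, i.e. the terms
    of sum_j sum_{a in {X,Y,Z}} sigma_a^j sigma_a^{j+1}."""
    terms = []
    for j in range(n):           # bond between qubit j and j+1 (mod n)
        for a in "XYZ":
            s = ["I"] * n
            s[j] = a
            s[(j + 1) % n] = a   # wraps around: the last bond couples to qubit 1
            terms.append("".join(s))
    return terms
```

For n = 6 this yields 3n = 18 two-local strings, including the wrap-around terms such as XIIIIX; note that every term has locality two, which is the fact used in the classical-shadow bound in Section 5.2.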

Reference state and ansatz
As the reference state |v_ref⟩, we use |v_ref⟩ = H^⊗n |0⟩^⊗n. As the ansatz R (defined in (2.10)), we use a hardware-efficient ansatz, which contains multiple layers of single-qubit rotation gates and entanglers. Each parameter is embedded in a single-qubit rotation gate as exp(−iσ_a θ_j/2), where σ_a is a Pauli matrix with a ∈ {x, y, z} (therefore, N_g = 1 and g_{k,1} = −i/2). The type 'a' of each single-qubit rotation gate is randomly chosen at initialization. As the two-qubit gates in the entanglers, we use controlled-Z gates. In each layer, we first apply the single-qubit gates and then apply the controlled-Z gates. The ordering of the controlled-Z gates differs between odd and even layers. Let us define a set of integers N_1 = {m_1 | m_1 ∈ ℕ and 1 < 2m_1 ≤ n} and another set N_2 = {m_2 | m_2 ∈ ℕ and 1 < 2m_2 + 1 ≤ n}. For example, when n = 8, N_1 = {1, 2, 3, 4} and N_2 = {1, 2, 3}. Also, we denote by (j, k) the pair of control and target qubits on which we perform a two-qubit operation. Then, in each odd layer, the pairs (2m_1 − 1, 2m_1) for all m_1 ∈ N_1 are first entangled by controlled-Z gates, and next the pairs (2m_2, 2m_2 + 1) for all m_2 ∈ N_2 are entangled. On the contrary, in each even layer, the pairs (2m_2, 2m_2 + 1) for all m_2 ∈ N_2 are entangled first, and next the pairs (2m_1 − 1, 2m_1) for all m_1 ∈ N_1 are entangled. In Fig. 4, we show our ansatz with the reference state |v_ref⟩ for n = 8 as an example. For all experiments, we fix the number of layers to four, and therefore N_P = 4n. All parameters are initialized to zero, so the initial state generated by the ansatz is |v_ref⟩. Figure 4: The reference state and the ansatz used in the numerical experiment. As an example, we show the case where n = 8.
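The odd/even entangler ordering above can be written compactly; the following is a small sketch (with 1-based qubit indices, matching the pairs (2m_1 − 1, 2m_1) and (2m_2, 2m_2 + 1) in the text):

```python
def entangler_pairs(n, layer):
    """Control/target pairs of the controlled-Z entangler for a
    1-indexed layer of the hardware-efficient ansatz described above."""
    # Pairs (2m-1, 2m) for m in N_1 and (2m, 2m+1) for m in N_2.
    odd_pairs = [(2 * m - 1, 2 * m) for m in range(1, n // 2 + 1)]
    even_pairs = [(2 * m, 2 * m + 1) for m in range(1, (n - 1) // 2 + 1)]
    # Odd layers apply the (2m-1, 2m) pairs first; even layers reverse
    # the two sub-rounds.
    if layer % 2 == 1:
        return odd_pairs + even_pairs
    return even_pairs + odd_pairs
```

For n = 8 this reproduces the example in the text: layer 1 applies (1,2), (3,4), (5,6), (7,8) and then (2,3), (4,5), (6,7), while layer 2 applies the same pairs in the opposite sub-round order.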

Strategies to estimate M k and V k
For the estimation of V_k, we compare the following four strategies, including two shadow-based strategies:
• Classical shadow (a shadow-based strategy);
• Derandomization (another shadow-based strategy);
• Naively estimating each term of V_k one by one;
• Largest-degree-first (LDF) grouping [16,18].
The LDF grouping is a method to reduce the number of measurements by grouping simultaneously measurable Pauli observables. In the LDF grouping, the measurement basis is chosen so that all of the observables in one of the groups are covered by each measurement. The group to be covered by each measurement is chosen according to a probability distribution weighted by the importance of each group; i.e., the larger the norm of the coefficients of the observables in the group, the larger the probability. As shown in previous literature [18][19][20], the LDF grouping successfully decreases the number of measurements in VQE problems without increasing the depth of the circuit. Due to the circuit synthesis in Section 4.1, M_k and V_k are written as sums of Pauli observables, and the LDF grouping is therefore also usable for the evaluation of V_k and M_k. Thus, it is a good benchmark for evaluating the shadow-based strategies. For a more detailed explanation of the LDF grouping, please see [18]. For the estimation of M_k, there is no room for measurement optimization since N_BB = 1. To emphasize the difference in the performance of the above four estimation strategies for V_k, we assume that M_k can be exactly computed (which corresponds to the case where the number of measurements for estimating {M_k} is negligible). We obtain the exact value of M_k in Eq. (2.16) by directly calculating ⟨v_ref| R†_{k,i} P̂_q R_{ℓ,i} |v_ref⟩ with the state-vector representation. Finally, throughout our numerical experiments, we use Qulacs [50] to simulate the quantum processes.
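The grouping step of LDF can be sketched as a greedy largest-degree-first coloring of the conflict graph whose edges connect Pauli strings that are not qubit-wise commuting. This is a simplified reading of [16,18] (the subsequent coefficient-weighted sampling over groups is omitted), not the exact implementation used in our experiments:

```python
def qwc(p, q):
    """Qubit-wise commuting: on every qubit the two letters agree
    or at least one of them is the identity."""
    return all(a == "I" or b == "I" or a == b for a, b in zip(p, q))

def ldf_grouping(paulis):
    """Largest-degree-first grouping: visit Pauli strings in decreasing
    conflict-graph degree and greedily place each into the first group
    whose members all qubit-wise commute with it."""
    degree = {p: sum(not qwc(p, q) for q in paulis if q != p) for p in paulis}
    groups = []
    for p in sorted(paulis, key=lambda s: -degree[s]):
        for g in groups:
            if all(qwc(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])  # no compatible group: open a new one
    return groups
```

Every group produced this way can be covered by a single measurement basis (take, on each qubit, the unique non-identity letter appearing in the group, or any letter if all members are identity there).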

Evaluation of the infidelity
Under the setting described in Section 5.1, we execute RTE and ITE for the initial five steps, i.e., T = x δt (x ∈ {0, 1, 2, 3, 4, 5}) with δt = 0.01, and see how D_I(|v(θ_ideal(T))⟩, |v(θ(T))⟩) evolves depending on the strategy used to estimate V_k. For T = 0, we have D_I(|v(θ_ideal(T))⟩, |v(θ(T))⟩) = 0.
For computing |v(θ_ideal(T))⟩, we exactly compute V by using the state-vector representation, while for estimating |v(θ(T))⟩, we evaluate V by using each measurement strategy. The total number of measurements for estimating each V_k, N_total, is set to N_total = 5 × N_BA for each strategy (thus,

where we use the fact that the locality of most of the terms in (5.1) is two, and that in the observable used to evaluate V_k the locality increases by one (see (4.47)). Substituting these into (4.53) shows that there is no advantage in using the classical shadow algorithm for the Heisenberg model Hamiltonian, as far as the upper bound is concerned. Even though the classical shadow algorithm works well in the other problems compared with the naive strategy, the LDF grouping outperforms the classical shadow in every problem. That is because the classical shadow is inefficient in its choice of measurement bases. The most significant inefficiency is probably in the choice of the measurement basis for the ancilla qubit. For evaluating V_k, we need to evaluate the following sum of observables: Since all the observables have X on the first qubit, any measurement basis that does not have X on the first qubit gives no information about V_k. Conversely, only measurement bases with X on the first qubit cover the terms in (5.4). However, such a basis is chosen only with probability 1/3, which is obviously inefficient for the computation of V_k. This inefficiency is the reason why the LDF grouping outperforms the classical shadow; we enjoy the full benefit of the shadow-based strategy only with a more intelligent selection of measurement bases. The derandomization, which optimizes the measurement bases so that 1/q(P̂_r) becomes smaller, outperforms the LDF grouping in all problems other than the Heisenberg model Hamiltonian. In the Heisenberg model Hamiltonian, all observables are covered by only three measurement bases: Finding such measurement bases is easy both for the LDF grouping and for the derandomization. In our numerical experiment, the LDF grouping successfully groups all the observables into three groups, where each group is covered by one of the above measurement bases.
Thus, the measurements in the LDF grouping are nearly optimal and equal to those of the derandomization for the Heisenberg model Hamiltonian. However, in the other problems, the observables are not covered by a few measurement bases. Thus, the derandomization, which is a relatively intelligent way of finding measurement bases, outperforms the LDF grouping.
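The ancilla-qubit inefficiency discussed above can be quantified directly: under uniformly random Pauli bases, the covering probability is q(P) = (1/3)^locality(P), so pinning X on the ancilla qubit (which derandomization effectively learns to do) gains a factor of three per term. A small sketch, where the string "XZZ" is a hypothetical V_k-type observable, not one from the paper:

```python
def covering_prob_uniform(pauli):
    """Probability that a uniformly random single-qubit Pauli basis
    covers P: each non-identity letter is matched with probability 1/3."""
    return (1.0 / 3.0) ** sum(p != "I" for p in pauli)

def covering_prob_fixed_ancilla(pauli):
    """Same, but with the first qubit's basis pinned to X, which every
    V_k-type observable (carrying X on the ancilla) requires."""
    if pauli[0] not in ("I", "X"):
        return 0.0  # an observable without I/X on the ancilla is never covered
    return (1.0 / 3.0) ** sum(p != "I" for p in pauli[1:])

q_rand = covering_prob_uniform("XZZ")       # (1/3)^3 = 1/27
q_fix = covering_prob_fixed_ancilla("XZZ")  # (1/3)^2 = 1/9
```

Since the shot cost scales with 1/q(P̂_r), the pinned-ancilla choice is three times more shot-efficient on every term of V_k, which is one concrete reason the derandomization outperforms the plain classical shadow here.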
We also compute D_I(|v(θ_ideal(T))⟩, |v(θ(T))⟩) at T = 1 for various total numbers of measurements N_total. As in Fig. 5, each data point is the mean and the standard error of the mean over five trials. The results are shown in Fig. 6 (the labels of the subfigures are the same as those of Fig. 5); note that both the horizontal and vertical axes are on a logarithmic scale. We see that the same conclusion holds for the performance of each measurement strategy. Additionally, we see that the infidelity decreases roughly as the inverse square root of N_total, which is consistent with (C.23) in Appendix C.

Evaluation of the variance
Finally, we numerically compare the variances Var(Ṽ_k) and Var(Ṽ_k^naive) for a given total number of measurements, which we set to 5 × N_BA. We compute the variance in two problems, including ITE with the H_2 molecule Hamiltonian. The contents of the tables are as follows. In the 'Variance' column, we show the average of the variance, namely, Var(Ṽ_k), (5.6) for the shadow-based strategy and for the naive strategy. In the 'Approximation' column, we show the approximate value of 'Variance', which is denoted by for the shadow-based strategy, and for the naive strategy. Finally, in the 'Diff' row, we show the average difference between the true value and the approximate value of the variance, namely, for the shadow-based strategy, and for the naive strategy. From Table 1 and Table 2, we see that all 'Variance' values are almost equal to the corresponding 'Approximation' values, which shows the validity of the approximations (4.34) and (4.39) in Section 4.3. Second, the variances are consistent with the results in Section 5.2: a smaller variance results in a smaller infidelity in Fig. 5 and Fig. 6. Finally, the variances in the derandomization are much smaller than those in the other measurement methods. In particular, the large difference in variance between the classical shadow and the derandomization shows the importance of optimizing the covering probability q(P̂_r) for decreasing the variance (and hence the infidelity), and implies that other shadow-based strategies [18,20], which also optimize the covering probability, can achieve good performance. Studying the effect of using [18,20] in VQS is left for future work.

Discussion
Variational quantum simulation (VQS) gives a way of simulating quantum systems on near-term quantum devices and is considered one of their practical applications. However, when applying VQS to a large quantum system of practical interest, the number of runs of the quantum device, i.e., the number of measurements, increases dramatically in order to suppress the effect of shot noise. Even though grouping methods and shadow-based strategies have recently been proposed to decrease the number of measurements in variational quantum optimization (VQO), they have not been utilized in VQS because the way of evaluating observables in VQS differs from that in VQO, and we cannot straightforwardly apply the shadow-based strategies to VQS. In this paper, by changing the circuit and the observables in VQS, we make it possible to apply shadow-based strategies and grouping methods to VQS. We particularly focus on the shadow-based strategies; we theoretically show that with a shadow-based strategy, we can greatly decrease the number of measurements in VQS compared with the naive strategy, which has been utilized since the original proposals of VQS [9,13,14]. Our numerical demonstration of ITE and RTE using various Hamiltonians, including chemical Hamiltonians, also supports the validity of using the shadow-based strategy. In particular, the derandomization [19], one of the shadow-based strategies, achieves the best performance for most of the Hamiltonians. Though we examine only the classical shadow and the derandomization as shadow-based strategies in our numerical experiments, other shadow-based strategies [20,21] can also be utilized to reduce the number of measurements in VQS.
For example, the literature [20] shows that their proposed shadow-based strategy using decision diagrams reduces the estimator's variance below that of the LDF grouping, which achieves the second-best performance in our numerical demonstration. Additionally, as noted in Section 1, recent literature [31] reports that grouping strategies can outperform shadow-based strategies, such as derandomization, by optimizing parameters in the grouping so that the variance of the observable is minimized. Studying the effect of those shadow-based strategies and grouping strategies in VQS is a good direction for future work. We believe that our work takes the first step toward measurement optimization in VQS and paves the way for building practical applications for simulating quantum systems.

A Statistical properties of ν
Proof that ν is an unbiased estimator of Tr(Hρ) In this section, we show that ν is an unbiased estimator of Tr(Hρ). When measurements are chosen probabilistically, E[ν_r] = … = Σ_j a_j Tr(P_j ρ) = Tr(Hρ), where we use (3.9) in the second equality and (3.5) in the third equality. Thus, E[ν] = Σ_{r=1}^{N_shot} E[ν_r]/N_shot = Tr(Hρ).
When measurements are chosen deterministically, where we use (3.6) in the last equality. Therefore,

Variance of ν
The variance of ν is defined by When measurements are probabilistically chosen, As long as M_r covers both P_j and P_ℓ, Σ_{b_r ∈ {−1,1}^{⊗n}} p(ρ, M_r; b_r) µ(P_j, b_r) µ(P_ℓ, b_r) = Tr(P_j P_ℓ ρ).
Substituting (A.6) into (A.5), we obtain and therefore, As a result, (A.10) In the deterministic case, by using (A.6), By a calculation similar to (A.8), we can separate the diagonal term and obtain

B Variance in classical shadow
Here we derive the variance of ν in the classical shadow [17]. From (3.11), we obtain g(P_j, P_ℓ) = 3^{locality(P_j)} 3^{locality(P_ℓ)} Consequently,

C Derivation of the bound for shot noise

In [9], the following infidelity is evaluated when not using the shadow-based strategy. In this section, we first show the value of the previously computed D_I, and next we evaluate D_I when using the shadow-based strategy. Suppose that there are N steps for the update of the parameters, i.e., N δt = T, and let us write the state evolved by (4.24) and the ideal state evolved by (4.23) at the r-th step as |v_r⟩ and |v_r^ideal⟩. Then, |v_N⟩ = |v(θ(T))⟩ and |v_N^ideal⟩ = |v(θ_ideal(T))⟩. We also assume that |v_0⟩ = |v_0^ideal⟩. By using the triangle inequality, we obtain for r ≥ 1, where U_r is the unitary transformation corresponding to the r-th ideal update of the parameters by (4.23), and |v_r^0⟩ ≡ U_r |v_{r−1}⟩. By applying (C.1) repeatedly, we obtain Then, a direct calculation shows [9] D_I(|v_r^0⟩, |v_r⟩) = δθ̇(t_r)^T W_r δθ̇(t_r) (δt)^2 + O((δt)^3), (C.4) where each element of the matrix W_r is given by Here, we use (∂⟨v_{r−1}|/∂t)|v_{r−1}⟩ = −⟨v_{r−1}|(∂|v_{r−1}⟩/∂t), which is derived from (∂/∂t)⟨v_{r−1}|v_{r−1}⟩ = 0. By ignoring the O((δt)^3) term inside the square-root function, we obtain Σ_r δt √(δθ̇(t_r)^T W_r δθ̇(t_r)) ≤ T √(||W||_max) ||δθ̇||_max, where ||W||_max = max_r ||W_r||_F and ||δθ̇||_max = max_{0≤t≤T} ||δθ̇(t)||_2. where we ignore the higher-order terms of δM and δV with the assumption that δM and δV are sufficiently small. Then, where ||·||_F is the Frobenius norm and ||·||_2 is the two-norm. Therefore, let The value ∆_MV depends on how M and V are estimated, as we see in the following.
The infidelity in the case of using the naive strategy In case we naively measure each term of M and V one by one, from the central limit theorem, where we use (4.21). Therefore, we obtain Thus, we obtain where ∆_max^naive ≡ ||∆^naive||_max.
The infidelity in the case of using the shadow-based strategy Similarly, in the case of shadow-based measurements, we obtain where we assume that the value of ||W||_max is almost the same for the naive strategy and the shadow-based strategy.
The infidelity in the case of using the hybrid strategy In the case of the hybrid strategy, since N_BB = 1, the number of measurements for estimating each M_k can be smaller than that for estimating V_k (= N_shot); we set the number of measurements as N_shot^α with 0 < α < 1. Then, in the hybrid strategy, from the central limit theorem, we obtain Then, where we assume that the value of ||W||_max is almost the same for the naive strategy and the hybrid strategy.

D Derivation of the Haar averages
In this section, we derive the results of the Haar integration. We use the following integration formulae [51]: where N is the dimension of the Hilbert space of U.
Derivation of (4.33) By using (D.1), we obtain Σ_{r=1}^{N_BA} (G_{k,i,r})^2 / q(P̂_r) + Σ_{r=1}^{N_BA} Σ_{r' ≠ r} G_{k,i,r} G_{k,i,r'} g(P̂_r, P̂_{r'}) Tr(P̂_r P̂_{r'}) / 2^{n+1}.
Σ G_{k,i,r} G_{k,i,r'} G_{k,i,u} G_{k,i,u'} g(P̂_r, P̂_{r'}) g(P̂_u, P̂_{u'}) [Tr(P̂_r P̂_{r'}) Tr(P̂_u P̂_{u'}) + Tr(P̂_r P̂_{r'} P̂_u P̂_{u'})] / (2^{n+1}(2^{n+1} + 1)) = Σ G_{k,i,r} G_{k,i,r'} G_{k,i,u} G_{k,i,u'} g(P̂_r, P̂_{r'}) g(P̂_u, P̂_{u'}) Tr(P̂_r P̂_{r'} P̂_u P̂_{u'}) / (2^{n+1}(2^{n+1} + 1)), where in the last equality, we use Tr(P̂_r P̂_{r'}) = 0.