Scalable and Flexible Classical Shadow Tomography with Tensor Networks

Classical shadow tomography is a powerful randomized measurement protocol for predicting many properties of a quantum state with few measurements. Two classical shadow protocols have been extensively studied in the literature: the single-qubit (local) Pauli measurement, which is well suited for predicting local operators but inefficient for large operators; and the global Clifford measurement, which is efficient for low-rank operators but infeasible on near-term quantum devices due to the extensive gate overhead. In this work, we demonstrate a scalable classical shadow tomography approach for generic randomized measurements implemented with finite-depth local Clifford random unitary circuits, which interpolates between the limits of Pauli and Clifford measurements. The method combines the recently proposed locally-scrambled classical shadow tomography framework with tensor network techniques to achieve scalability for computing the classical shadow reconstruction map and evaluating various physical properties. The method enables classical shadow tomography to be performed on shallow quantum circuits with superior sample efficiency and minimal gate overhead and is friendly to noisy intermediate-scale quantum (NISQ) devices. We show that the shallow-circuit measurement protocol provides immediate, exponential advantages over the Pauli measurement protocol for predicting quasi-local operators. It also enables a more efficient fidelity estimation compared to the Pauli measurement.


Introduction
Quantum technology is advancing rapidly. A central task is to characterize and exploit the features of many-qubit quantum states created in the lab [9,15,23,34]. To fully determine the density matrix ρ of a quantum system of n qubits, an exponential (∼ 4^n) amount of repeated measurements and classical processing is needed [18,24,42]. Therefore, full quantum state tomography is not scalable to large systems. However, for many purposes (e.g. estimating physical observables on the quantum state), a far less complete description is adequate [1,3], and the amount of measurement and classical processing can be drastically reduced. Much of the recent progress has been made by exploiting the randomized measurement strategy [16,41,43], particularly through classical shadow tomography [10, 13, 25, 26, 30-32, 35, 38, 48, 49, 54].
Classical shadow tomography converts quantum states to classical data with a superior sample efficiency. It can estimate expectation values of M observables using only ∼ (log M) ∥O∥²_shd independent randomized measurements [32,45], saturating the theoretically optimal bound on sample efficiency. Nevertheless, the constant coefficient ∥O∥²_shd, known as the operator shadow norm, does depend on the type of observable O and the randomized measurement scheme. For example, the single-qubit (local) Pauli measurements are efficient for predicting local observables, while the global Clifford measurements are efficient in estimating certain global properties such as quantum fidelity. However, an efficient and scalable approach to interpolate between the local and global limits is still missing in classical shadow tomography.
As illustrated in Fig. 1, the randomized measurement protocol in classical shadow tomography is generally realized by first transforming the quantum state ρ of interest by a random unitary circuit U sampled from a random unitary ensemble U, then performing a computational basis measurement and collecting the measurement outcomes b (as a bit-string). Much of the literature has focused on two measurement protocols: the single-qubit Pauli measurements (where U = Cl(2)^{⊗n}) and the global Clifford measurements (where U = Cl(2^n)). In terms of the depth L of the random unitary circuit U ∈ U, we may think of these two randomized measurement schemes as the L → 0 and L → ∞ limits of randomized measurement protocols defined on more general finite-depth unitary ensembles (see Fig. 1). This work aims to explore the intermediate measurement schemes based on finite-depth circuits between these two limits. For simplicity, we focus on the collection of unitary ensembles defined by the brick wall arrangement of two-local random unitary gates as shown in Fig. 1, but our approach is directly applicable to other circuit structures as well.
The ability to adjust the circuit depth L (or even the circuit structure) provides great flexibility to modify the measurement scheme adaptively so as to optimize the tomography efficiency for the given target observables. This can be particularly useful when the target observables are neither neatly local nor low-rank (such that neither Pauli nor Clifford measurement is optimal). The shallow circuit implementation of randomized measurements is also friendly to near-term quantum devices. Motivated by these objectives, Ref. [29] develops the foundation for shallow-circuit classical shadow tomography based on the theory of locally scrambled quantum dynamics and the entanglement feature formalism [4,17,28,36,52,53]. The key finding is that for a large class of randomized measurements, the classical shadow reconstruction map only depends on the entanglement properties of the classical snapshot states U†|b⟩, which can be computed by solving a quantum entanglement dynamics problem.
However, the reconstruction algorithm demonstrated in Ref. [29] is based on a brute-force computation, which is not scalable to larger systems (the complexity of classical post-processing is exponential in system size). The largest system size achieved in Ref. [29] was nine qubits. Facing this challenge, in this work we further develop efficient numerical methods based on matrix product states (MPS) [14,19,27,44] for calculating entanglement features and solving for reconstruction coefficients. The key idea is to pack the entanglement feature (purities in different entanglement regions) of the classical snapshot state U†|b⟩ into a fictitious quantum many-body state and represent this entanglement feature state by an MPS. This enables efficient computation of the entanglement dynamics and of the classical shadow reconstruction map. Our tomography procedure is outlined in Fig. 2. Using this approach, we can perform the classical shadow post-processing on 22 qubits (n = 22) for the randomized measurement scheme based on three-layer random Clifford circuits (L = 3) with superior sample efficiency.
Figure 2: The snapshot states σ̂ are collected from randomized measurements given U and ρ. The reconstruction map M^{-1} is determined by the entanglement features only, and is encapsulated by the reconstruction coefficients r. These reconstruction coefficients r, alongside the snapshot states σ̂ and the observable O, are used to make predictions. Importantly, the entire algorithm only involves matrix product states and stabilizer states; both are efficient representations of quantum states that can be processed on classical computers.
The primary advantages of this approach are its scalability and flexibility: the classical post-processing can be performed for large system sizes with polynomial complexity (given a fixed amount of measurements), while the method applies to a wide range of random Clifford measurement schemes. The paper is organized as follows. In Sec. 2, we review the basic setup of classical shadow tomography and the key concepts of the entanglement feature formalism. In Sec. 3, we describe the general algorithm for computing the expectation values of generic observables and analyzing the corresponding shadow norm. In Sec. 4, we demonstrate shallow-circuit classical shadow tomography on the Greenberger-Horne-Zeilinger (GHZ) state and the cluster state. The simulation confirms an unbiased tomographic reconstruction with favorable sample efficiency compared to that of Pauli measurements. In Sec. 5, we suggest some future lines of inquiry and summarize the key results.

General Framework of Classical Shadow Tomography
Classical shadow tomography is an efficient protocol for predicting features of a quantum state based on a few measurements [32]. The trick is to use entanglement generated from a random unitary ensemble U to form approximate classical "shadows" ρ̂ of the original state ρ, which are then used for predicting the desired observable. Different unitary ensembles produce different collections of shadows, which are suited for predicting different kinds of observables.
The protocol consists of two steps: randomized measurement and classical post-processing, as illustrated in Fig. 1. Starting with the initial state ρ, apply a randomly sampled unitary U ∈ U so that ρ → U ρU†. Then perform a projective measurement on the transformed state U ρU† in the computational basis (Z-basis on each qubit independently). The resulting bit-string state |b⟩ (labeled by the bit-string b of measurement outcomes), as well as the performed unitary transformation U, will be recorded. These two pieces of information define a snapshot state σ̂ = U†|b⟩⟨b|U. The randomized measurement protocol can be formulated as a quantum channel M, which maps the initial state ρ to the average snapshot state σ by

σ = M(ρ) := E_{σ̂∈E_{σ|ρ}} σ̂ = E_{σ̂∈E_σ} 2^n Tr(ρσ̂) σ̂,    (1)

where E_σ = {σ̂ = U†|b⟩⟨b|U | b ∈ {0, 1}^{×n}, U ∈ U} denotes the prior ensemble of all possible classical snapshots.
In the prior snapshot ensemble, the joint probability P(b, U) = 2^{-n} P(U), where P(U) is the probability to sample U and 2^{-n} is the probability to sample b uniformly (independent of ρ). Given the observation of ρ, the posterior snapshot ensemble E_{σ|ρ} is defined by deforming the joint probability distribution to P_ρ(b, U) = ⟨b|UρU†|b⟩ P(U). The measurement channel M can be formulated based on either of the snapshot ensembles equivalently, as stated in Eq. (1). Furthermore, if this measurement channel M is tomographically complete, i.e. distinct states ρ can be distinguished by different measurement outcomes (b, U), then there is an inverse map M^{-1}, called the reconstruction map, such that ρ = M^{-1}(σ). The image of a given snapshot state σ̂ under the reconstruction map is a classical shadow ρ̂ = M^{-1}(σ̂), which provides a single-shot classical representation of the initial quantum state ρ. The initial state ρ can be restored as the ensemble expectation of classical shadows ρ = E_{σ̂∈E_{σ|ρ}} M^{-1}(σ̂), which enables us to predict physical observables of the initial state based on the classical snapshots collected from randomized measurements:

⟨O⟩ = Tr(Oρ) = E_{σ̂∈E_{σ|ρ}} Tr(O M^{-1}(σ̂)).    (2)

This computation will be carried out on a classical computer. It will be efficient if the snapshot states are stabilizer states. Since the reconstruction map M^{-1} is not a quantum channel (because it is not a positivity-preserving map), the resulting shadow states ρ̂ may not be positive semi-definite. Therefore, one may encounter unphysical single-shot estimation values of O for certain individual samples. Nevertheless, the ensemble average of all single-shot estimations is physical and unbiased.
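For intuition, the single-qubit Pauli measurement provides the simplest closed-form example: there the reconstruction map is known to be M^{-1}(σ̂) = 3σ̂ − 1, and averaging the classical shadows over the exact snapshot distribution recovers ρ. The following minimal numpy sketch (our illustration, not the paper's code) verifies this unbiasedness by exact enumeration of the six snapshot states:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
# Eigenbasis unitaries whose columns are the X-, Y-, and Z-basis states.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # X basis
S = np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2)  # Y basis
bases = [H, S, I2]

def reconstruct(rho):
    """Exact ensemble average of classical shadows M^{-1}(sigma) = 3 sigma - 1."""
    rec = np.zeros((2, 2), dtype=complex)
    for V in bases:                              # each basis sampled with p = 1/3
        for b in range(2):
            ket = V[:, b:b + 1]                  # snapshot state U^dag |b>
            sigma = ket @ ket.conj().T
            p = np.real(np.trace(rho @ sigma))   # Born probability <b|U rho U^dag|b>
            rec += (1 / 3) * p * (3 * sigma - I2)
    return rec

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
assert np.allclose(reconstruct(rho), rho)        # unbiased reconstruction
```

The check works for any valid density matrix, illustrating why the average of (possibly non-positive) single-shot shadows is nevertheless physical.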
In practice, quantum devices can have a significant amount of noise, which can introduce randomness into measurement outcomes. One solution is to incorporate the noise into the measurement channel itself. The idea is that the prior snapshot ensemble is composed of several layers of channels acting on an initial product state, E_σ = E_1 ∘ E_2 ⋯ E_L(|b⟩⟨b|), which are averaged over to form the measurement channel. Incorporating noise amounts to simply introducing new noise layers, resulting in a new measurement channel M′ based on the statistical properties of the noise. If we assume that the noise is local and does not have a preferential basis, then the approach outlined in this paper would still be applicable, and the resulting reconstruction map (M′)^{-1} should have an efficient MPO description in the shallow-circuit limit.
For any finite-sized classical shadow ensemble, the estimation exhibits statistical fluctuations around the true expectation value ⟨O⟩. The variance of the single-shot estimation of any observable O can be quantified through its operator shadow norm ∥O∥²_shd, introduced in Ref. [32]. The estimated expectation value obtained from averaging over M samples will be within an error of order (∥O∥²_shd / M)^{1/2} with high probability. For a given class of observables, it is possible to choose the random unitary ensemble U wisely so as to reduce the shadow norm and optimize the sample efficiency [29].

Locally-Scrambled Classical Shadow Tomography
Restricting the randomized measurement scheme to the scope of locally-scrambled ensembles proves to be advantageous [29]. Locally-scrambled ensembles [36] are random unitary ensembles that are invariant under both left and right local basis transformations, i.e. the probability distribution for all U ∈ U satisfies

P(U) = P(V_L U V_R),

with V_L = ⊗_i V_{L,i} and V_R = ⊗_i V_{R,i} being any local basis transformations. Many typically studied random circuit models are in fact local-basis invariant, such as random Haar circuits [6,12,17,39,40,56] and quantum Brownian circuits [11,20,37,51,55]. Furthermore, since the reconstruction map only depends on the second moment of the unitary ensemble, Clifford circuits, which are typically used for classical shadow tomography because of their classical simulability, are equivalent in terms of their tomographic properties to random Haar circuits. Therefore, the formulation of locally-scrambled classical shadow tomography in Ref. [29] is applicable to generic random Clifford circuits of any depth and any gate arrangement.
In the locally-scrambled case, it was shown that the reconstruction map is determined entirely by the average purity (exponential of the 2nd Rényi entropy) of the (prior) classical snapshot states [29] in all possible sub-regions, which are also known as the entanglement features [4,17,36,52]. One might be daunted by the fact that there are exponentially many entanglement features to keep track of, one for every possible sub-region of the system, such that the classical post-processing would not be scalable. This paper aims to answer the challenge. In particular, two questions will be addressed: 1. Is there any hope of representing the entanglement features efficiently?
2. Furthermore, can we utilize such a representation to determine and apply the reconstruction map efficiently for large systems?
The answer to the first question is yes: average sub-region purities of locally-scrambled systems have efficient representations as matrix product states (MPS), due to the fact that purities are completely positive and many-body states with positive wave functions are typically area-law entangled [22], such that the MPS bond dimension is a constant that does not scale with the system size n. The key idea is that the average purity of an n-qubit quantum system over all sub-regions can be compactly encoded in a fictitious many-body state vector, called the entanglement feature state. Given an ensemble E of random states (random density matrices) of the system, the entanglement feature state |W_E⟩ associated with E is a vector whose components are

(W_E)_A = ⟨A|W_E⟩ = E_{ρ∈E} Tr[ρ_A²],    (3)

where A denotes a sub-region of the system (as a subset of qubits) and Ā is the complement region of A.
For each state ρ in the ensemble E, ρ_A = Tr_Ā ρ is the reduced density matrix of ρ in the region A, obtained by tracing out the qubits in the region Ā. The entanglement feature (W_E)_A in the region A is simply the purity of ρ_A averaged over all random states ρ in the ensemble E. The entanglement feature state |W_E⟩ is a superposition of the basis states |A⟩ labeled by sub-regions, and the superposition coefficients are the average purities in the corresponding sub-regions. Note that the entanglement feature state is not a physical state of the quantum system; it merely encodes the entanglement properties of the underlying physical quantum states. For example, the ensemble of pure product states has the following entanglement feature state

|W_prod⟩ = Σ_A |A⟩,    (4)

because the purity is unity in any sub-region of any pure product state. In this way, we can efficiently encode the purity information over all sub-regions in a single entanglement feature state |W_E⟩.
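The definition above can be checked by brute force for small systems. The sketch below (illustrative, not the paper's code) computes (W_E)_A = Tr[ρ_A²] over all 2^n sub-regions of a single pure state and confirms that a product state yields the all-ones entanglement feature of Eq. (4):

```python
import numpy as np
from itertools import combinations

def purity_of_region(psi, region, n):
    """Tr[rho_A^2] for sub-region A of an n-qubit pure state |psi>."""
    A = sorted(region)
    Abar = [i for i in range(n) if i not in A]
    # Group the amplitude-tensor legs into a (A | Abar) matrix.
    T = psi.reshape([2] * n).transpose(A + Abar).reshape(2 ** len(A), -1)
    rhoA = T @ T.conj().T
    return np.real(np.trace(rhoA @ rhoA))

def entanglement_feature(psi, n):
    """All components (W)_A = Tr[rho_A^2] over the 2^n sub-regions A."""
    return {A: purity_of_region(psi, list(A), n)
            for k in range(n + 1) for A in combinations(range(n), k)}

# A 3-qubit product state: every sub-region purity equals 1.
n = 3
prod = np.array([1, 0], dtype=complex)
psi = np.kron(np.kron(prod, prod), prod)
W = entanglement_feature(psi, n)
assert all(np.isclose(v, 1.0) for v in W.values())
```

For an entangled state (e.g. GHZ) the same routine returns sub-unit purities, which is what the entanglement feature records.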
The advantage of the entanglement feature formalism lies in the fact that there exists an efficient numerical method to calculate the time evolution of the entanglement feature state as the underlying quantum state evolves in time. Let U be a locally-scrambled random unitary ensemble and E be a random state ensemble. One can define a new random state ensemble E′ = {UρU† | ρ ∈ E, U ∈ U}, which can be interpreted as the ensemble of the random states evolved by the random unitaries. Then the entanglement features of E′ and E are related by

|W_{E′}⟩ = Ŵ_U Ŵ_1^{-1} |W_E⟩,    (5)

where Ŵ_U is called the entanglement feature operator associated with the random unitary ensemble U, whose matrix components are given by ⟨A|Ŵ_U|B⟩ = E_{U∈U} Tr_{A,B}(Tr_{Ā,B̄} U)², with A and B being the sub-regions on the future and the past sides of U respectively. Ŵ_1 is the entanglement feature operator for the identity operator, and its inverse is denoted Ŵ_1^{-1}. Therefore, each step of the unitary evolution of the physical state ensemble can be mapped to a step of transfer-matrix evolution of the entanglement feature state.
An efficient numerical approach to compute the entanglement feature dynamics Eq. (5) has been developed [4,17] based on the MPS representation of entanglement feature states.
|W_E⟩ = Σ_A Tr[∏_{i=1}^{n} W^{a_i}_{E,i}] |A⟩, with a_i = 1 if i ∈ A and a_i = 0 otherwise.    (6)

Here W^0_{E,i} and W^1_{E,i} are two sets of matrices parametrizing the MPS, where i labels the qubit/site. If the state ensemble E is translation invariant, we may choose to drop the index i on the MPS matrices W^0_{E,i}, W^1_{E,i}. Furthermore, for the pure-state entanglement feature, the MPS matrices must satisfy various constraints. For example, for pure states, the purity of the whole system is one, i.e. Tr(W^0_E)^n = Tr(W^1_E)^n = 1. In addition, because the purity of complementary regions is the same, i.e. (W_E)_A = (W_E)_Ā, the entanglement feature state has a Z_2: A → Ā symmetry, so the MPS tensor (formed by combining the two matrices into a single tensor) must carry a definite Z_2 representation. These symmetries greatly constrain the form of the MPS matrices (see Ref. [4] for more discussion).
As a fictitious many-body state, the entanglement feature state is typically area-law entangled due to its completely positive sign structure [22]. Thus it generally admits efficient MPS representations if the system is one-dimensional. For example, the entanglement feature state of pure product states, Eq. (4), can be represented as a trivial MPS (a product state) with bond dimension one. Most typical entanglement feature states will have a low bond dimension even when the underlying physical state ensemble is volume-law entangled. For nearly all locally-scrambled dynamics with various entanglement production rates, it has been numerically observed [4] that an MPS of bond dimension D_W = 2 can capture the purity evolution in all sub-regions to high accuracy. The accuracy can be further improved with a larger bond dimension. The evolution of the entanglement feature state, for example by applying alternating layers of brick-wall unitaries, can also be performed at the MPS level using the entanglement feature transfer matrix following Eq. (5). To implement this, we use a time-evolving block decimation (TEBD) approach and truncate at a fixed bond dimension D_W (see Appendix C of Ref. [17] or Appendices A and C of Ref. [4] for further details). With these previously developed tools, one can efficiently calculate the evolution of the average purity given a unitary ensemble. Furthermore, given the same circuit geometry, all unitary two-designs have the same entanglement feature dynamics, so the Clifford-circuit entanglement feature transfer matrix is equivalent to the random Haar one.
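Concretely, reading a component of the entanglement feature off its MPS amounts to a trace of a matrix product. A minimal sketch (with hypothetical tensors W0, W1), assuming the matrix-product form (W_E)_A = Tr ∏_i W^{a_i} where a_i indicates membership of site i in A:

```python
import numpy as np
from itertools import product

n = 4

def ef_component(W0, W1, region, n):
    """(W_E)_A = Tr[prod_i W^{a_i}], with a_i = 1 if site i is in region A."""
    M = np.eye(W0.shape[0])
    for i in range(n):
        M = M @ (W1 if i in region else W0)
    return np.trace(M)

# Bond-dimension-1 entanglement feature of pure product states:
# both matrices are scalars equal to 1, so every component is 1.
W0 = np.array([[1.0]])
W1 = np.array([[1.0]])
for bits in product([0, 1], repeat=n):
    A = [i for i, b in enumerate(bits) if b]
    assert np.isclose(ef_component(W0, W1, A, n), 1.0)
```

The same contraction pattern, with bond-dimension-D_W tensors in place of the scalars, is what the TEBD-evolved entanglement feature MPS stores.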
For the purpose of classical shadow tomography, what we need to know is the entanglement feature of the classical snapshots. Given any randomized measurement scheme (as specified by the random unitary ensemble U), all possible classical snapshots form a random state ensemble, denoted as

E_σ = {σ̂ = U†|b⟩⟨b|U | b ∈ {0, 1}^{×n}, U ∈ U}.    (7)

This is also the prior snapshot ensemble defined below Eq. (1). Following the definition in Eq.
(3), we can define the entanglement feature state |W_{E_σ}⟩ for the classical snapshot ensemble E_σ, whose components are

(W_{E_σ})_A = E_{U∈U} 2^{-n} Σ_b Tr[(Tr_Ā U†|b⟩⟨b|U)²].    (8)

Note that the classical snapshot state σ̂ = U†|b⟩⟨b|U is always constructed from a (reversed) unitary evolution U† on a pure product state |b⟩. So to obtain |W_{E_σ}⟩, we only need to take the MPS representation of |W_prod⟩ in Eq. (4) as the initial state and use the TEBD algorithm to evolve the entanglement feature MPS step by step following the entanglement feature dynamics in Eq. (5), as the underlying snapshot state gets evolved by the (reversed) Clifford circuit U† layer by layer. Computing the entanglement feature |W_{E_σ}⟩ from the random unitary ensemble U is denoted as the "EF Solver" in Fig. 2. The algorithm complexity is linear in the circuit depth L and independent of the number of qubits n (assuming translation symmetry). According to Ref. [29], the entanglement feature of classical snapshots |W_{E_σ}⟩ is all we need to obtain the classical shadow reconstruction map M^{-1} for the corresponding randomized measurement scheme, as long as the random unitary ensemble U is locally scrambled (which applies to random Clifford circuits). The steps to obtain M^{-1} are as follows. First, one solves for the reconstruction coefficients |r⟩ (or r_A = ⟨A|r⟩) from |W_{E_σ}⟩ through the following consistency equation derived in Ref. [29] (which essentially requires M^{-1}M = 1 at the quantum channel level):

Σ_{A,C} f_{A,B,C} r_A (W_{E_σ})_C = δ_B.    (9)

Figure 3: Illustration of the MPS equation Eq. (9). The fusion tensor f_i is defined in Tab. 1. The reconstruction coefficient |r⟩ can be determined from the entanglement feature |W_{E_σ}⟩ by solving this MPS equation.
Here, A and C are summed over all sub-regions of the system (as subsets of qubits). Ω stands for the full system (as the full set of qubits), and δ_B = 1 if B = Ω, otherwise δ_B = 0. Also, r_A and (W_{E_σ})_C are components of the corresponding vectors |r⟩ and |W_{E_σ}⟩, and f_{A,B,C} is a fusion tensor that factorizes over sites as f = ⊗_{i=1}^{n} f_i, where the components of the on-site tensor f_i are given in Tab. 1.
Table 1: Components of the f_i tensor (for qubit systems). The systematic formula for f_i in generic qudit systems can be found in Ref. [29].
Using MPS representations for both |r⟩ and |W_{E_σ}⟩, and the tensor product structure of f, the left-hand side of Eq. (9) may be represented as an MPS equation, as shown in Fig. 3. The left-hand side of Eq. (9) is an MPS of bond dimension D_r × D_W, whereas the right-hand side of Eq. (9) is a trivial MPS (a product state). We may then solve for the MPS tensors describing |r⟩ by minimizing the following loss function

L = Σ_B | Σ_{A,C} f_{A,B,C} r_A (W_{E_σ})_C − δ_B |².    (10)

The optimization is achieved by the gradient descent algorithm, using the PyTorch package [46] for auto-differentiation. The loss function can be suppressed to L ≲ 10^{-3} with a modest bond dimension D_r = 6 for circuit depths L ≤ 5. In summary, for shallow circuits, we numerically observe that the reconstruction coefficients can be well described by a low-bond-dimension MPS. Solving the reconstruction coefficient |r⟩ from the entanglement feature |W_{E_σ}⟩ is denoted as the "M^{-1} Solver" in Fig. 2. The algorithm complexity is linear in the number n of qubits and independent of the circuit depth L. Then, the reconstruction map is a linear functional of the reconstruction coefficients:

M^{-1}(σ) = 2^n Σ_A r_A σ_A.    (11)
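The paper performs this optimization with PyTorch auto-differentiation; as a self-contained toy illustration, the sketch below fits site-dependent MPS matrices to a set of target components by plain gradient descent, with the gradient of each trace-of-matrix-product computed analytically (∂Tr(L R Rt)/∂R = (Rt L)ᵀ). The least-squares loss here is a generic stand-in, not the paper's actual loss (which involves the fusion tensor f and the entanglement feature MPS):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, D = 4, 2                      # number of sites and MPS bond dimension

def chain(mats):
    """Ordered matrix product; identity for the empty product."""
    M = np.eye(D)
    for m in mats:
        M = M @ m
    return M

def components(R):
    """All 2^n components v_a = Tr[prod_i R_i^{a_i}] of an MPS-parametrized vector."""
    return {a: np.trace(chain([R[i][a[i]] for i in range(n)]))
            for a in product([0, 1], repeat=n)}

# Target components generated from a hidden MPS, so an exact fit exists.
target = components([[rng.normal(size=(D, D)) * 0.5 for _ in range(2)]
                     for _ in range(n)])

R = [[rng.normal(size=(D, D)) * 0.5 for _ in range(2)] for _ in range(n)]
lr, losses = 0.005, []
for step in range(500):
    grads = [[np.zeros((D, D)) for _ in range(2)] for _ in range(n)]
    loss = 0.0
    for a in product([0, 1], repeat=n):
        mats = [R[i][a[i]] for i in range(n)]
        err = np.trace(chain(mats)) - target[a]
        loss += err ** 2
        for k in range(n):       # d Tr(L R_k Rt) / d R_k = (Rt L)^T
            grads[k][a[k]] += 2 * err * (chain(mats[k + 1:]) @ chain(mats[:k])).T
    losses.append(loss)
    for k in range(n):
        for j in range(2):
            R[k][j] -= lr * grads[k][j]
```

In practice one would use an auto-differentiation framework and contract the loss as a tensor network rather than enumerating all 2^n components, which is only feasible here because n is tiny.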
Each reconstruction coefficient is weighted by a partial trace of the input σ embedded back into the Hilbert space, so that the sum is over normalized density matrices on the full Hilbert space. More specifically, σ_A = 2^{|A|−n} (Tr_Ā σ) ⊗ 1^{⊗(n−|A|)}, which is a linear function of σ. In practice, one works with individual snapshot states as opposed to their ensemble average. For Clifford-circuit-based randomized measurements, each individual snapshot state is a stabilizer state, which admits efficient post-processing on a classical computer.
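As an illustration (not the paper's code), the embedding σ_A = 2^{|A|−n}(Tr_Ā σ) ⊗ 1 can be built explicitly for prefix regions A = {0, ..., k−1}, where the identity padding is a plain Kronecker product; the checks confirm that each σ_A is a normalized, positive semi-definite density matrix whenever σ is:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
d = 2 ** n

# A random density matrix sigma (Hermitian, PSD, unit trace).
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
sigma = G @ G.conj().T
sigma /= np.trace(sigma)

def sigma_A_prefix(sigma, k, n):
    """sigma_A = 2^{|A|-n} (Tr_Abar sigma) (x) 1 for the prefix region A = {0,...,k-1}."""
    rhoA = sigma.reshape([2] * n + [2] * n)
    m = n
    for _ in range(n - k):                  # trace out the last qubit repeatedly
        rhoA = np.trace(rhoA, axis1=m - 1, axis2=2 * m - 1)
        m -= 1
    rhoA = rhoA.reshape(2 ** k, 2 ** k)
    return 2.0 ** (k - n) * np.kron(rhoA, np.eye(2 ** (n - k)))

for k in range(n + 1):
    sA = sigma_A_prefix(sigma, k, n)
    assert np.isclose(np.trace(sA), 1.0)               # normalized
    assert np.min(np.linalg.eigvalsh(sA)) > -1e-12     # positive semi-definite
```

For a general (non-prefix) region the construction is the same up to a qubit permutation.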
We would like to mention that the analytic expression for the reconstruction coefficient |r⟩ is known from Ref. [8] by solving Eq. (9) in closed form. Nevertheless, it does not seem to help our purpose, as we do not know how to encode this expression as an MPS efficiently. Directly evaluating Eq. (12) for every sub-region A is not scalable. Therefore, we will still follow our approach of solving Eq. (9) by MPS optimization.
In the next section, we will answer the second question: how can we leverage the scalability and efficiency of matrix product states for prediction using classical shadow tomography?
Applications for Scalable Prediction

Matrix Product State Representation of Reconstruction Coefficients
We consider a quantum system made of n qubits arranged on a one-dimensional lattice. We assume that there exists an efficient MPS representation for the reconstruction coefficients r_A. The existence of an efficient representation for |r⟩ is implied by the tensor network structure of the self-consistency equation and the efficient MPS representation of the entanglement feature. Let R^j_i, j = 0, 1, i ∈ {1, ..., n} be the MPS matrices for |r⟩ at site i of type j, i.e. R^j_i is a D_r × D_r real matrix, satisfying

r_A = Tr[B ∏_{i=1}^{n} R^{a_i}_i], with a_i = 1 if i ∈ A and a_i = 0 otherwise,    (13)

where B is a D_r × D_r real matrix serving as a twisted periodic boundary condition for the MPS. Here the site index i = 1, 2, ..., n labels the qubits on the one-dimensional lattice. For L = 0, ∞, we may drop the site index altogether because there is translation invariance. For 0 < L < ∞, there is only a two-site translation invariance, so only the parity of the site matters. For example, consider the deep circuit limit (L = ∞), which corresponds to random Clifford measurements. In this case, r_∅ = −1 and r_Ω = 1 + 2^{-n}, where 2 is the local Hilbert space dimension and n is the number of qubits. One choice of tensors here is

R^0 = [[1, 0], [0, 0]],  R^1 = (1 + 2^{-n})^{1/n} [[0, 0], [0, 1]],  B = [[−1, 0], [0, 1]].    (14)

As the MPS matrix R^j_i is translationally invariant, we will drop the site index i and denote it as R^j. Introducing the boundary tensor can help simplify calculations. For example, without inserting the boundary tensor (or equivalently, setting B = 1), we would need λ_1^n + λ_2^n = −1 for the ∅ component, where λ_1, λ_2 are the eigenvalues of R^0. When n is even, this equation has no real solutions, so the eigenvalues must be complex. In practice, this makes solving for the R matrices more difficult, as one has to optimize over complex tensors; in addition, one has to perform complex tensor network contractions, which can significantly slow down calculations. This can be avoided, as suggested above, through a boundary tensor.
On the other hand, in the shallow circuit limit L = 0, the circuit reduces to random Pauli measurements. In this case, a trivial product-state MPS suffices, with

R^0 = −1,  R^1 = 3/2,    (15)

and B = 1. Note that translation invariance is restored in both limits because R^0, R^1 do not depend on the site index i explicitly. For finite-depth circuits, 0 < L < ∞, the brick wall structure breaks the translation invariance down to the subgroup of two-site translations. Also, as expected, the MPS representations of |r⟩ in both the shallow and deep circuit limits have low bond dimensions, although the underlying snapshot states have very different entanglement scalings (from area-law at L = 0 to volume-law as L → ∞). The MPS representation of the reconstruction coefficients r_A allows us to apply the reconstruction map Eq. (11) in prediction algorithms in a scalable way, as we will see next. Even knowing the reconstruction coefficients exactly does not by itself solve the problem, because there are exponentially many of them. Therefore, the MPS representation of the reconstruction coefficients is key for classical shadow tomography (beyond the Pauli measurement) to be feasible on near-term quantum devices with large numbers of qubits; otherwise, we cannot reconstruct classical shadows efficiently.
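The two limiting tensor choices can be checked by brute-force enumeration of r_A = Tr[B ∏_i R^{a_i}]. The deep-limit tensors below are one consistent real, diagonal realization of the stated boundary conditions (our construction, not necessarily the paper's exact choice):

```python
import numpy as np
from itertools import product

n = 4

def r_coeff(R0, R1, B, bits):
    """r_A = Tr[B prod_i R^{a_i}] for the region A encoded by the bit-string a."""
    M = B.copy()
    for a in bits:
        M = M @ (R1 if a else R0)
    return np.trace(M)

# Deep circuit limit (L = infinity): boundary tensor B keeps everything real,
# giving r_empty = -1, r_full = 1 + 2^-n, and 0 for all other regions.
R0 = np.diag([1.0, 0.0])
R1 = (1 + 2.0 ** -n) ** (1.0 / n) * np.diag([0.0, 1.0])
B = np.diag([-1.0, 1.0])
for bits in product([0, 1], repeat=n):
    r = r_coeff(R0, R1, B, bits)
    if sum(bits) == 0:
        assert np.isclose(r, -1.0)
    elif sum(bits) == n:
        assert np.isclose(r, 1 + 2.0 ** -n)
    else:
        assert np.isclose(r, 0.0)      # mixed regions vanish: R0 @ R1 = 0
```

Because R^0 and R^1 project onto orthogonal subspaces, every mixed product vanishes, so only the ∅ and Ω components survive, exactly as the deep-circuit limit requires.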
Given the MPS representation of the reconstruction coefficient |r⟩, the reconstruction map M^{-1} may be viewed as a matrix product operator (MPO). The local tensors are a generalized dephasing channel D_{R_i}, defined in terms of the MPS tensor for the reconstruction coefficients at a particular site i. It acts on a generic operator O as

D_{R_i}(O) = R^0_i ⊗ 1_i Tr_i(O) + 2 R^1_i ⊗ O.    (16)

Then the reconstruction map on an arbitrary operator can be given as a super-operator defined in terms of the MPS tensors for the reconstruction coefficients, see Fig. 4, where the trace and product are over the virtual indices of the MPS. When acting on a single-qubit Pauli operator P_i, where P_i ∈ {1, X, Y, Z}, the generalized dephasing channel evaluates to

D_{R_i}(1_i) = 2 (R^0_i + R^1_i) ⊗ 1_i,  D_{R_i}(P_i) = 2 R^1_i ⊗ P_i for P_i ≠ 1.    (17)

Hence, single-qubit Pauli operators are eigen-operators of the generalized dephasing channel, and Pauli strings are eigen-operators of the reconstruction map. On an arbitrary n-qubit Pauli string P = ⊗_{i=1}^{n} P_i, where P_i ∈ {1, X, Y, Z}, the reconstruction map evaluates to

M^{-1}(P) = 2^n Tr[B ∏_{i=1}^{n} R̃^{w(P_i)}_i] P,  with R̃^0_i = R^0_i + R^1_i, R̃^1_i = R^1_i, and w(P_i) = 0 if P_i = 1, else 1.    (19)

The above formula is most useful for designing prediction algorithms. As we will see below, we can use the self-adjoint property of the reconstruction map to apply it to the operator whose expectation value we want to predict. This then allows us to take the expectation value of the transformed observable under the snapshot state, which has an efficient representation as a stabilizer state.

Pauli Estimation
Suppose we wish to estimate a Pauli observable P = ⊗_{i=1}^{n} P_i, where P_i ∈ {1, X, Y, Z}. Previously, efficient classical shadow tomography schemes for predicting such operators existed only for random Pauli measurements, which quickly become sample-inefficient as the weight of the Pauli string grows. Here, we show how the scheme can be extended to finite-depth circuits, given the reconstruction coefficients. In practice, the expectation value is given by an average over the snapshot states σ̂_j, j = 1, ..., M.
Since the measurement channel M defined in Eq. (1) is self-adjoint, so is its inverse. We may therefore move M^{-1} onto P and apply Eq. (19):

⟨P⟩ = 2^n Tr[B ∏_{i=1}^{n} R̃^{w(P_i)}_i] ⟨P⟩_σ,

where ⟨P⟩_σ = (1/M) Σ_{m=1}^{M} Tr(σ̂_m P). This is a key result of this work. It enables us to extend classical shadow tomography beyond the Pauli measurement limit without losing the classical post-processing efficiency. Every term above is efficiently computable: the trace is over a product of n low-dimensional matrices and can be thought of as an inner product of MPS states, and the expectation value of the Pauli observable P on a stabilizer state σ̂_m can be computed efficiently according to the Gottesman-Knill theorem [2,21], which is implemented in [30].
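Assuming the Pauli-string eigenvalue of the reconstruction map takes the matrix-product form 2^n Tr[B ∏_i R̃^{w(P_i)}] (with R̃^0 = R^0 + R^1 on identity sites and R̃^1 = R^1 otherwise; this normalization convention is our reconstruction), the estimator is a product of n small matrices times the empirical Pauli average. A sketch, using hypothetical 1×1 Pauli-limit tensors where the eigenvalue reduces to 3^weight:

```python
import numpy as np

def pauli_eigenvalue(pauli, R0, R1, B):
    """Eigenvalue 2^n Tr[B prod_i Rtilde^{w(P_i)}] of the reconstruction map,
    with Rtilde^0 = R0 + R1 (identity sites) and Rtilde^1 = R1 (otherwise)."""
    n = len(pauli)
    M = B.copy()
    for p in pauli:                      # string over 'I', 'X', 'Y', 'Z'
        M = M @ (R1 if p != 'I' else (R0 + R1))
    return 2 ** n * np.trace(M)

def estimate_pauli(pauli, snapshot_expvals, R0, R1, B):
    """<P> estimate: eigenvalue times the empirical average of Tr(sigma_m P)."""
    return pauli_eigenvalue(pauli, R0, R1, B) * np.mean(snapshot_expvals)

# Hypothetical Pauli-measurement-limit tensors (1x1 matrices).
R0 = np.array([[-1.0]]); R1 = np.array([[1.5]]); B = np.array([[1.0]])
assert np.isclose(pauli_eigenvalue('ZIZ', R0, R1, B), 9.0)   # 3^2 for weight 2
```

The cost is O(n) small matrix multiplications per Pauli string, independent of the circuit depth, which is the scalability claimed above.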

Generic Observable and Fidelity Estimation
What about more general operators? To effectively utilize the results of the previous section for scalable prediction, we need local tensor network descriptions of operators. One approach is to view the general operator O as a state in the Pauli basis, i.e.

O = Σ_P o_P P, with o_P = 2^{-n} Tr(P O).
The operator O may be viewed as a state in the 4^n-dimensional Hilbert space with basis states labeled by the different Pauli strings. Given this state description, we can now assign on-site matrices O^j_i, where i labels the site and j labels the Pauli at site i, in the matrix-product expansion of the operator:

o_P = Tr[∏_{i=1}^{n} O^{P_i}_i].    (24)

If O has little operator entanglement, then the corresponding matrices O^j_i will also have a small bond dimension, and the decomposition can be considered efficient.
One context in which more general operator estimation is useful is fidelity estimation. The fidelity F(ρ, ρ′) provides a way to characterize the closeness of one state ρ to another ρ′. It is defined as F(ρ, ρ′) = (Tr √(√ρ ρ′ √ρ))² ∈ [0, 1]. The greater the value of F, the closer the two states are. If either of the states ρ or ρ′ is pure, their fidelity simplifies to

F(ρ, ρ′) = Tr(ρρ′).

By viewing F(ρ, ρ′) as the expectation value of the operator O = ρ′ on the state ρ, fidelity estimation amounts to predicting a low-rank, non-local observable O = ρ′ using the means described in this paper.
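Since the later steps rest on this simplification, here is a quick numerical reminder (our illustration) that for pure states Tr(ρρ′) coincides with the squared overlap |⟨ψ|ψ′⟩|²:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pure_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

d = 8
psi, phi = random_pure_state(d), random_pure_state(d)
rho = np.outer(psi, psi.conj())
rho_p = np.outer(phi, phi.conj())

# For pure states, F(rho, rho') = Tr(rho rho') = |<psi|phi>|^2.
F_trace = np.real(np.trace(rho @ rho_p))
F_overlap = abs(np.vdot(psi, phi)) ** 2
assert np.isclose(F_trace, F_overlap)
```

This is why predicting the observable O = ρ′ on ρ suffices: no square roots of density matrices are needed when one state is pure.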
For fidelity estimation, one might worry that the operator entanglement of the reference state ρ′ is too large to represent efficiently as an MPS. Though this may seem like an obstacle, ground states, low-entangled states, and stabilizer states of shallow local circuits all have an efficient MPS description of the form above. The last one, in particular, is useful in fidelity estimation, since we will use the MPS description of snapshot states to convert the trace into a tensor network. We will lay out the general formula given the matrix-product representation of ρ′ per Eq. (24), and leave the details of how to construct the MPS for σ̂ to Appendix A. A straightforward application of Eq. (19) gives the desired formula:

F(ρ, ρ′) ≈ (1/M) Σ_{m=1}^{M} Tr(M^{-1}(σ̂_m) ρ′).    (25)
Using the MPS representations for the Pauli coefficients of σ̂ and ρ′, the summation above can be viewed as a tensor network, as shown in Fig. 5. The idea is to take the MPS matrices for σ̂, ρ′ and r and contract them along an appropriate vertex tensor u = ⊗_{i=1}^n u_i. If we label the indices of u_i as (u_i)_{P_i, P′_i, j}, where P, P′ are Pauli strings in the basis expansions of σ̂ and ρ′, respectively, and j labels the local reconstruction MPS matrix R_i^j, then the fidelity estimator takes the tensor network form of Eq. (26). Note that this procedure also applies to predicting generic local operators O, by first expanding the operator in its Pauli basis representation. One only needs to replace ρ′ by O in Eq. (26); then ⟨O⟩ = Tr(ρO) can be evaluated by a similar tensor network with the same structure as in Fig. 5. The procedure outlined above relies only on an efficient Pauli basis representation of the state or operator in question.
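A minimal sketch of the underlying zipper contraction: for two Pauli-coefficient MPS, Tr(ρO) = 2^n Σ_s c_ρ(s) c_O(s) is an MPS inner product with cost linear in n. The full estimator of Eq. (26) additionally threads the reconstruction MPS r through the vertex tensor; here we illustrate only the basic contraction (names ours, coefficients of Hermitian operators assumed real):

```python
import numpy as np

def mps_inner(A, B):
    """Left-to-right zipper contraction of two Pauli-basis MPS; each
    A[i], B[i] has shape (Dl, 4, Dr), cost linear in the site number."""
    E = np.ones((1, 1))
    for a, b in zip(A, B):
        E = np.einsum('xy,xjr,yjs->rs', E, a, b)
    return E[0, 0]

# Example on two qubits: O = ZZ and rho = |00><00|, both as
# bond-dimension-1 MPS of Pauli coefficients c(s) = Tr(P_s .)/2^n.
z = np.zeros((1, 4, 1)); z[0, 3, 0] = 1.0          # coefficients of ZZ
v = np.zeros((1, 4, 1)); v[0, 0, 0] = v[0, 3, 0] = 0.5
c_O = [z, z]
c_rho = [v, v]                                     # |00><00| = (I+Z)/2 x (I+Z)/2

expval = 2 ** 2 * mps_inner(c_rho, c_O)            # Tr(rho O) = 2^n <c_rho|c_O>
```

Here expval evaluates to 1, consistent with ⟨00|ZZ|00⟩ = 1.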

Locally-Scrambled Shadow Norm
Apart from estimating the expectation value of an observable O from the classical shadow data, we also want to bound the variance of the estimation. This is provided by the operator shadow norm ∥O∥²_shd introduced in Ref. [32]. In the following, we will focus on the case of a traceless operator, i.e. Tr O = 0.
The conventional shadow norm ∥O∥²_shd is defined as a maximum over all possible states ρ of the quantum system. This definition requires information about the third moment of the snapshot state ensemble E_σ, which could take some effort to compute in general. In the context of locally-scrambled ensembles, Ref. [29] introduces an alternative version of the shadow norm, called the locally-scrambled shadow norm, denoted ∥O∥²_Eσ. It requires only the second moment of E_σ, which can be conveniently read out from the entanglement feature |W_Eσ⟩ computed already. The key modification in the definition is to replace the maximization max_ρ by an average of ρ over its locally-scrambled ensemble E_ρ = {V ρ V† | V = ⊗_i V_i ∈ U(2)^⊗n}, which contains all states related to ρ by local basis transformations V. In the resulting definition of the locally-scrambled shadow norm, the dependence on ρ drops out due to the locally-scrambled property of E_ρ, so ∥O∥²_Eσ is only a function of the randomized measurement scheme specified by the prior snapshot ensemble E_σ. It can be expressed in terms of the reconstruction coefficients r_A obtained from the solution of the consistency equation.
It is also possible to represent |W_O⟩ as an MPS, similar to the state entanglement feature in Eq. (6). Fig. 6 illustrates the tensor network contraction, which can be computed efficiently with complexity linear in the system size. The locally-scrambled shadow norm is upper bounded by the conventional shadow norm, ∥O∥²_Eσ ≤ ∥O∥²_shd. It plays a similar role in variance estimation, var⟨O⟩ ≃ ∥O∥²_Eσ / M.
The shadow norm ∥O∥²_Eσ has two complementary physical meanings: (i) it characterizes the estimation variance var⟨O⟩ at a fixed number of samples M (operators with larger shadow norms have proportionally higher variances); (ii) it determines the sample complexity at a given variance level (i.e. M ∼ ∥O∥²_Eσ / ϵ² samples are needed to control the variance below a threshold var⟨O⟩ ≤ ϵ² set by a small ϵ).
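The second meaning translates directly into a back-of-the-envelope sample budget; a one-line helper (name ours):

```python
import math

def samples_needed(shadow_norm_sq, eps):
    """M ~ ||O||^2_{E_sigma} / eps^2 samples keep var<O> below eps^2."""
    return math.ceil(shadow_norm_sq / eps ** 2)

# A weight-4 Pauli string under L = 0 (Pauli) measurements has shadow
# norm 3^4 = 81, so controlling the variance to eps = 0.1 requires:
M = samples_needed(3 ** 4, 0.1)  # M == 8100
```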
For Pauli estimation O = P, the operator entanglement feature |W_P⟩ can be computed easily from the operator's support. We define the support of a Pauli operator P as the subset of sites on which it acts non-trivially, denoted supp P = {i | P_i ≠ 1}, and the operator weight k = |supp P| as the size of this support. We find that the operator entanglement feature of a Pauli string P takes a simple product form determined by its support. The corresponding MPS matrices W⁰_{P,i}, W¹_{P,i} take different values depending on whether site i lies inside the support of P, which is easy to check given the general MPS form in Eq. (32). The matrices have bond dimension one because of the tensor-product structure of P. Plugging Eq. (34) into Eq. (30), we obtain the shadow norm of a Pauli string as a function of the reconstruction coefficients; using the solution of |r⟩ in Eq. (12), we arrive at Eq. (37), with k = |supp P|. This result shows how the shadow norm of a Pauli operator P depends on both its support and the entanglement feature of the snapshot state ensemble.
In the following, we will directly evaluate the locally-scrambled shadow norm ∥O∥ 2 Eσ for several choices of the circuit depth L following Eq. (30). For notational convenience, we will treat |W O ⟩ as translation-invariant unless evaluating the shadow norm for specific operator types that explicitly break translation symmetry, e.g. Pauli operators.

L = 0
Here, the unitary ensemble is simply the Pauli measurement. The vector |r⟩ is a product state, and the reconstruction coefficients are given in Eq. (15) as R⁰ = −1, R¹ = 3/2. In terms of the matrices for |W_O⟩, the expression for the shadow norm reduces to the trace of a matrix power, Eq. (38). In this limit, the shadow norm is closely related to the locality of the operator: if O is the identity on some site i, the corresponding contribution to the shadow norm is a factor of 1, i.e. it does not affect the shadow norm. Consider the case O = P. In the translation-symmetry-breaking version of Eq. (38), the power of n breaks into individual factors for each site, with each factor depending on the operator entanglement feature matrix at that site. Using Eq. (35), the product contributes a factor of 1 where P_i is the identity and a factor of 3 where P_i is not.
Hence the locally-scrambled shadow norm for P grows exponentially in the weight k = |supp P|, as ∥P∥²_{L=0} = 3^k. This result agrees with the conventionally defined shadow norm. It can also be seen to agree with Eq. (37), since the entanglement feature state of the unitary ensemble |W_Eσ⟩ is a trivial product state in the L = 0 case. Furthermore, although the locally-scrambled shadow norm is less than the conventionally defined shadow norm, for locally-scrambled unitary ensembles it provides accurate bounds on the single-shot variance of Pauli observables [29]. Support for this claim is given in Sec. 4.
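This support-product structure is a one-liner to evaluate; a sketch (function name and string encoding ours):

```python
def shadow_norm_L0(pauli_string):
    """L = 0 (Pauli measurement) shadow norm of a Pauli string: a factor
    of 1 per identity site and 3 per non-identity site, i.e. 3^k with
    k = |supp P|."""
    return 3 ** sum(1 for p in pauli_string if p != 'I')

assert shadow_norm_L0('IZXI') == 9   # weight k = 2 gives 3^2
assert shadow_norm_L0('IIII') == 1   # identity does not affect the norm
```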

L = 1
Another important case we may check is L = 1. In this case, the ensemble is a product of two-local, two-design unitary gates. Each gate effectively scrambles site 2i with its neighbor 2i + 1, and generates no entanglement outside this unit cell. Within each unit cell, the unitary is a random Clifford gate. Thus the MPS for the reconstruction coefficients in each block for the L = 1 case is given by Eq. (14), and the shadow norm takes the form of Eq. (41). Consider next the case O = P, and suppose n is even for simplicity. Then the translation-symmetry-breaking version of Eq. (41) for a Pauli operator is Eq. (42).
There are two types of factors, depending on the support of the operator P in each unit cell. If the Pauli operator P is trivial in the unit cell, the factor is one; if P has any support in the unit cell, the factor is five. Hence, for a contiguous Pauli operator of weight k, there is a (staggered) exponential growth in the shadow norm due to the unit-cell structure of the brick-wall circuit: the norm is five raised to the number of unit cells touched by the support.
Thus, as the operator weight k grows, the shadow norm, and hence the number of samples required for reliable prediction, is already exponentially smaller for L = 1 than for L = 0, since 5^{k/2} = (√5)^k ≪ 3^k. We find that this pattern, that longer operators are better estimated by deeper circuits, continues for L > 1 in Sec. 4. For a given operator weight k, there is an ideal short circuit depth L* ≥ 0 at which the shadow norm, and therefore the single-shot variance, is minimized. This shows that classical shadow tomography beyond Pauli measurements has practical applications, since we can use circuits of varying depths to minimize the number of samples one has to take.
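The staggered unit-cell counting can be made concrete; a small sketch (names ours) computing the factor of five per touched cell and comparing with the L = 0 norm 3^k:

```python
def shadow_norm_L1(start, k):
    """L = 1 shadow norm of a contiguous weight-k Pauli string starting
    at site `start`: a factor of 5 for every two-site unit cell
    (2i, 2i+1) that the support touches, and 1 for untouched cells."""
    cells = {site // 2 for site in range(start, start + k)}
    return 5 ** len(cells)

# Staggered growth: alignment with the unit cells matters for finite k.
assert shadow_norm_L1(0, 4) == 5 ** 2   # aligned support: two cells
assert shadow_norm_L1(1, 4) == 5 ** 3   # misaligned support: three cells

# Asymptotically (sqrt(5))^k beats the L = 0 norm 3^k:
assert shadow_norm_L1(0, 10) < 3 ** 10
```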
When L ≥ 2, the shadow norm is more difficult to evaluate analytically, because both the entanglement features of the state ensemble and the reconstruction coefficients are more complicated. While the cases L = 0, 1, ∞ are instructive, for 1 < L < ∞ we resort to numerical evaluation of the shadow norm; see Sec. 4. We find that the shadow norm is a reliable indicator of the estimation variance in the case where O is a tensor product of local operators.

L = ∞
When L → ∞, there are only two non-zero reconstruction coefficients, r_∅ = −1 and r_Ω = 1 + 2^−n, corresponding to the empty set and the total system, respectively. The shadow norm ∥O∥²_{L=∞} for a traceless Hermitian operator then reduces to a form proportional to Tr(O²). This is reminiscent of the conventional shadow norm introduced in [32], which is also proportional to Tr(O²), indicating that the locally-scrambled shadow norm in the deep-circuit limit likewise depends strongly on the rank of the operator. However, the locally-scrambled shadow norm has a smaller proportionality factor, since the conventional shadow norm is defined by taking a maximum over all states ρ. We also note that closed-form analytic expressions for the reconstruction map in brick-wall circuits, consistent with these results, were subsequently developed in [5].

Numerical Demonstrations
In this section, we demonstrate the prediction algorithms for various system sizes based on numerical simulation. We evolve the system using random two-local Clifford gates arranged in a brick-wall pattern with periodic boundary conditions. The depth L of the circuit refers to the number of layers of Clifford gates; for example, for L = 2, we apply one even layer and then one odd layer of independently random two-qubit local Clifford gates. After the Clifford circuit evolution, we measure the qubits and store the resulting snapshot state σ̂ = U†|b⟩⟨b|U for later use in predicting various quantities. For demonstration purposes, we collect the classical snapshots by numerically simulating the randomized measurement. When the underlying original state ρ is a stabilizer state, the simulation can be carried out efficiently on a classical computer using the stabilizer table algorithm, thanks to the Gottesman-Knill theorem.
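For readers without a stabilizer simulator at hand, the randomized-measurement loop can be sketched on a dense statevector for small n. Here Haar-random two-qubit gates stand in for random Clifford gates (both ensembles are two-designs, so second-moment quantities such as entanglement features agree); a full-scale implementation would instead use the stabilizer formalism. All names are ours:

```python
import numpy as np

def haar_2q(rng):
    """Haar-random 4x4 unitary via QR decomposition (a stand-in for a
    random two-qubit Clifford gate)."""
    z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_2q(psi, U, i, j, n):
    """Apply a two-qubit gate U to qubits (i, j) of an n-qubit statevector."""
    psi = np.moveaxis(psi.reshape((2,) * n), (i, j), (0, 1))
    psi = (U @ psi.reshape(4, -1)).reshape((2, 2) + (2,) * (n - 2))
    return np.moveaxis(psi, (0, 1), (i, j)).reshape(-1)

def snapshot(psi0, L, n, rng):
    """One randomized-measurement snapshot: evolve psi0 by an L-layer
    brick-wall circuit with periodic boundaries, then measure all qubits
    in the computational basis. Returns the sampled bitstring b and the
    gate list; together they define the snapshot sigma = U^dag |b><b| U."""
    psi, gates = psi0.copy(), []
    for layer in range(L):
        for i in range(layer % 2, n, 2):   # even layer, then odd layer
            U, j = haar_2q(rng), (i + 1) % n
            gates.append((i, j, U))
            psi = apply_2q(psi, U, i, j, n)
    p = np.abs(psi) ** 2
    b = rng.choice(2 ** n, p=p / p.sum())
    return b, gates

rng = np.random.default_rng(7)
n = 4
psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0                              # start from |0...0>
b, gates = snapshot(psi0, L=2, n=n, rng=rng)
```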
In our numerical demonstrations, we consider two example initial states: the cluster state ρ_ZXZ and the Greenberger-Horne-Zeilinger (GHZ) state ρ_GHZ. They can be specified by their stabilizer groups, G_ZXZ = ⟨Z_{i−1}X_iZ_{i+1}⟩ and G_GHZ = ⟨Z_iZ_{i+1}, ∏_i X_i⟩. Each stabilizer state ρ = |G|^−1 Σ_{g∈G} g is defined as the invariant state of the corresponding stabilizer group G.
In the following figures, the initial state will be indicated using squares (for ρ_ZXZ) and triangles (for ρ_GHZ), respectively. The cluster state ρ_ZXZ is the ground state of a gapped, local, stabilizer Hamiltonian, and therefore carries strictly area-law entanglement. The GHZ state ρ_GHZ is the symmetric ground state of the Ising model, which contains long-range entanglement and is more challenging to predict from randomized measurements on shallow local circuits. For example, the extensive generator ∏_i X_i in the G_GHZ stabilizer group will be hard to probe by local circuits of finite depth L, because the generators of the snapshot state are at most ∼ 2L in length.
We then use the reconstruction coefficients, together with the algorithms outlined above, to perform prediction using the snapshot states. The reconstruction coefficients are calculated using PyTorch, specifically the AdamW optimizer, by minimizing the loss function L in Eq. (10). The minimization is continued until L ≤ 10^−3.
We would like to briefly comment on the flexibility of this approach with respect to different circuit evolutions. As explained in Fig. 1, the input for determining the reconstruction coefficients is the entanglement features of the underlying unitary ensemble. One might wonder whether the brick-wall Clifford unitary circuit is the only special case that admits efficient entanglement feature dynamics. For example, one may be interested in doing tomography using other locally-scrambled unitary ensembles, such as random Hamiltonian-generated evolution, which was proposed in [29]. It was found in prior work that locally-scrambled dynamics generated by continuously scrambling gates is also well described by a simple D = 2 MPS ansatz [4]; similar efficiency was found in [17]. It is believed that locally-scrambled purity dynamics is generically efficiently representable as an MPS away from finely tuned points like the measurement-induced entanglement transition. Therefore, the shallow-circuit tomography scheme presented in this paper can be adapted to other circuits that are also locally scrambled at the two-design level. We leave the analysis of the tomography scheme in these more general contexts to future work.
Lastly, a similar approach using MPS to do efficient shallow-circuit tomography was developed concurrently with ours in [7]. In that work, the Pauli weight is computed instead of the entanglement feature. Since these quantities are related by a local basis transformation in the entanglement-feature Hilbert space, we expect the efficiency of their approach to be comparable to ours.

Pauli Estimation
To demonstrate the efficiency of our approach in estimating Pauli observables, we take Z^⊗k := ⊗_{i=1}^k Z_i as the target observable without loss of generality; the operator weight (operator size) k can be adjusted. Since the randomized measurement schemes are invariant under local Clifford transformations, any Pauli string P of the same weight k will exhibit the same sample efficiency (the shadow norm ∥P∥²_Eσ depends only on the support of P). Nevertheless, the expectation value ⟨P⟩ still depends on the choice of observable P and the underlying state: on the GHZ state, the expectation value of Z^⊗k is 1 or 0 depending on whether k is even or odd, respectively, while on the cluster state it is 0 for all k ≥ 1. In Fig. 7, we demonstrate that our method can reliably predict the expectation value of the Pauli string observable Z^⊗k on a system of n = 22 qubits via randomized measurements on the random Clifford circuit of L = 3 layers. Our numerical result is obtained by first simulating the randomized measurement on a classical computer and then processing the data using the algorithm described in Sec. 3.2. Our classical post-processing approach provides unbiased estimations of Z^⊗k on both the cluster state (blue squares) and the GHZ state (red triangles) for various operator weights k, although the estimation variance grows with k (analyzed below). Here we only demonstrate the case of circuit depth L = 3, but our method is applicable to other circuit depths, and the quality of estimation remains similar.
The variance of the estimation is associated with the sample complexity. A larger variance (a larger shadow norm) means more samples are needed to reduce the variance to the desired accuracy level. The traditional classical shadow tomography with Pauli measurements (L = 0) quickly requires an extensive number of samples to estimate a weight-k Pauli string observable (as the shadow norm ∥P∥²_{L=0} = 3^k grows exponentially with k). However, with shallow circuits L ≥ 1, we can easily achieve the same accuracy with far fewer samples. We have analytically proved this statement in Sec. 3.4.2 for L = 1 by calculating the locally-scrambled shadow norm. For large L, it is difficult to obtain a closed-form analytic expression for the shadow norm, but we can numerically investigate the shadow norm as a function of L according to Eq. (30) using tensor network techniques. Our results are presented in Fig. 8.
Figure 8: The locally-scrambled shadow norms for circuit depth L = 0, 1, 3, 5 (yellow, red, green, and blue lines, respectively), with n = 22 qubits. We also plot the corresponding variances over all samples for the cluster (squares) and GHZ (triangles) states. These agree with the shadow norms. The staggered behavior of the shadow norm for L > 0 is because these circuits break translation invariance.
We first verify that the locally-scrambled shadow norm ∥Z^⊗k∥²_L computed from Eq. (30) correctly captures the variance var⟨Z^⊗k⟩ of the estimated expectation value observed in our numerical simulation. Based on Eq. (33), we expect the variance and the shadow norm to match each other, as var⟨Z^⊗k⟩ = ∥Z^⊗k∥²_L / M, where M is the number of samples. This is indeed the case, as shown in Fig. 8. The numerically simulated variance and the computed shadow norm are in good agreement as we vary L and k, even as k approaches a quarter of the system size. This confirms the correctness of our approach to computing the shadow norm. The shadow norm is useful as it can give us an idea of the number of samples needed to control the estimation variance to the desired level, even before we start performing the classical shadow tomography experiment.
As shown in Fig. 8, the shadow norm ∥Z^⊗k∥²_L always grows with the operator weight k, meaning that we always need to perform more measurements to estimate longer Pauli strings accurately. However, the growth rates of the shadow norms differ with the circuit depth L. For example, the L = 1 shadow norm increases with k more slowly than that of L = 0, meaning that the L = 1 circuit will be more advantageous for probing Pauli string operators of sufficiently large weight k. To study this trend more quantitatively, we use Eq. (37) to compute the shadow norm of Pauli operators for various weights k under various circuit depths L. The result is presented in Fig. 9(a). Given the Pauli operator size k, it is clear that there is an optimal circuit depth L* that minimizes the shadow norm. We conjecture that L* can be fitted by the empirical formula Eq. (47), which grows logarithmically with k. The formula fits the numerically found L* nicely with parameters a = 0.14, b = 0.35, as shown in Fig. 9(b), where the rounded value of the fitting function is shown, since L* must be an integer. This confirms the sub-polynomial growth of the optimal circuit depth L* proposed in Eq. (47). The result indicates that a relatively shallow circuit can efficiently optimize the sample complexity for classical shadow tomography of large Pauli strings, which demonstrates the advantage of our shallow-circuit measurement approach. The logarithmic scaling of the optimal circuit depth was subsequently confirmed in [33], where the authors derived a bound on the shadow norm by studying the evolution of the weight distribution of contiguous operators under the random brick-wall circuit. We point out that, for estimating a Pauli string of fixed operator weight k, the shadow norm remains constant with the system size n, meaning that classical shadow tomography is scalable for this task.
Our approach of using shallow-circuit measurements enables a further improvement in sample complexity over the Pauli measurement, without losing scalability or feasibility on near-term quantum devices.

Fidelity Prediction
Fidelity estimation is another important task in quantum information. We can also compare the fidelity of ρ_GHZ and ρ_ZXZ against various states using classical shadow tomography on shallow circuits. A high fidelity F = 1 ensures that the state produced in the lab is close to the desired state, whereas a low fidelity means that the state produced in the lab has very little overlap with the desired state. To demonstrate our approach, we simulate the shallow-circuit randomized measurement on a classical computer and then use the algorithm outlined in Sec. 3.3 to process the measurement data collected from the simulation. We estimate the fidelity of the classical shadow reconstructed state with the underlying original state and show that our approach successfully reconstructs the original quantum state from measurement outcomes with high fidelity. Although the shallow-circuit classical shadow tomography still has exponential sample complexity for the fidelity estimation task, slightly increasing the circuit depth can significantly reduce the base of this exponential complexity, which makes our approach useful for quantum state tomography on near-term quantum devices.
First, we compute the fidelity of the cluster state with itself as a function of the qubit number n for n = 6, 10, 14, 18, 22, at circuit depth L = 3 with 50000 samples. The result is shown in Fig. 10(a). We find the reconstructed states have close-to-one fidelities with negligible error bars up to 22 qubits. We also observe a (rather slow) exponential growth in the variance of the fidelity, in agreement with Ref. [29], where the scaling form var F ∝ exp(cn/(L + 1)^α) was proposed. Fixing α = 0.72, per Ref. [29], we fit the parameter c to our data and find c = 0.4, which agrees with the value c = 0.47 ± 0.08 previously observed in Ref. [29]. This empirical relation suggests that although the variance of fidelity estimation (and hence the sample complexity) grows exponentially with the number of qubits n, increasing the circuit depth L in the shallow-circuit regime can significantly reduce the variance (by making the exponential growth very slow). Figure 10: (a) The fidelity and its variance as a function of system size, for n = 6, 10, 14, 18, 22 at a fixed circuit depth L = 3. We use the cluster state as the reference state. Each point is the average of 50000 measurement samples. The mean fidelity is tightly centered around one, as expected. The growth of the sample variance is indicated in the inset. (b) The fidelity and its variance versus circuit depth L for a system of n = 22 qubits. The two states are the cluster state (squares) and the GHZ state (triangles). We use a median-of-means average to estimate the fidelity, using 50000 total samples broken up into 12 groups. The error bar corresponds to two standard deviations. The inset shows that the variance decays rapidly with L.
Next, we study the efficiency of the protocol as a function of the circuit depth L for a system of size n = 22, as shown in Fig. 10(b). We plot the fidelity of both the cluster state and the GHZ state with themselves and find that both are centered around one, as expected. For both states, we observe a rapid decline in the variance of the sample fidelity as we increase the circuit depth. However, we notice that the GHZ state has a much higher sample variance. This is likely because the GHZ state has long-range correlations, unlike the cluster state, which only has finite-range correlations; quasi-local measurements will thus not be very efficient in probing the GHZ state. For example, at L = 0 (Pauli measurements), the reconstructed state has very low fidelity F < 0.5 and very large sample variance var F ∼ 1 for the GHZ state. We see a sharp (more than an order of magnitude) decline in the sample variance for L = 1, and a continued exponential decline as we increase L. By L = 3, the fidelity approaches one for both states within a variance less than 10^−3. This is because a deeper scrambling circuit enables the final computational-basis measurements to probe more complex (higher-weight) operators in the original basis, which benefits the fidelity estimation. This demonstrates the great advantage of entanglement-assisted shallow-circuit measurements. The rapid decline of the variance in fidelity versus circuit depth L makes it promising to use classical shadow tomography for fidelity estimation with shallow circuits achievable on near-term quantum devices.

Conclusion and Open Questions
In this paper, we demonstrate a novel approach to predicting expectation values of observables using shallow-depth circuit classical shadow tomography. Our work extends the well-known Pauli measurement and Clifford measurement protocols to the case of finite-depth brick-wall circuits, which are available on many-qubit near-term devices. This allows us to use shallow-circuit measurement to "approximate" the Clifford measurement limit to do fidelity estimation efficiently. It also allows us to use the scrambling properties of circuits with L > 0 to more efficiently probe large-weight operators, e.g. long Pauli strings. The approach that we take is based on our entanglement features formalism, which allows us to simply represent the purity data as an MPS and consequently determine the reconstruction coefficients. Given that the reconstruction coefficients can be represented as a low-bond-dimension MPS, we propose a scalable approach to predict many properties of quantum states.
There are still several questions that remain and that we wish to explore in future works. Firstly, is there an exact and efficient representation of the reconstruction coefficients as an MPS? In our paper, we determine the reconstruction coefficients by learning their MPS tensors via gradient descent on the consistency equation Eq. (9). This procedure only needs to be done once, but it is bound to produce some error that can propagate through the prediction algorithm. In Ref. [8], the exact, individual reconstruction coefficients were determined. In practice, we want to be able to organize all 2^n coefficients efficiently: is there some way to represent the exact individual coefficients scalably? Such an approach would also give us insight into how the bond dimension or complexity of the reconstruction map evolves with circuit depth. We know that it is small in the shallow-circuit limit and also small at L = ∞. Is it possible that the bond dimension remains small and finite for all values of the circuit depth L? Furthermore, is there some way to generalize the results in this paper to higher dimensions? For example, do the entanglement feature states and reconstruction coefficients then have a low-dimensional PEPS structure?
Moreover, the prediction procedures described in Sec. 3 assume that the operator whose expectation value we want to predict has an efficient Pauli basis expansion. This means that operators with a large amount of operator entanglement will be difficult to predict. Furthermore, the procedure is limited by the growth of the operator entanglement of the snapshot states over time: the snapshot states' bond dimension grows exponentially in L, as can be seen from the fact that the typical size of the generators of the snapshot states grows linearly in L. The procedure will therefore be costly for deep circuits. What approach can we take in this case?
In addition, we only considered measuring quantities that depend on the first moment of ρ i.e. Tr(Oρ). We could also consider quantities that depend on higher moments of ρ, e.g. Tr(Oρ ⊗ ρ), where O is an operator on the doubled Hilbert space. How can we generalize the approach developed here to expectation values based on higher moments of ρ?
In this paper, we only considered a particular class of circuit ensembles formed by varying the circuit depth L. However, the procedure we outlined is independent of the particular circuit geometry. How might other geometries be useful in other contexts? For example, instead of considering depth L circuits, we could also consider circuits composed of adjacent k-qubit unitaries. Such circuits should also be feasible on near-term devices, provided that k is small. More ambitiously, one could also consider circuits with holographic geometries. Such circuit geometries may be optimal for prediction on gapless or scale-invariant states. We leave these cases to future research.
There are still many conceptual questions that have yet to be answered. For example, what is the physical meaning of the locally-scrambled shadow norm? The norm is closely related to the scrambling properties of the unitary ensemble, which begs the question of how it can be related to other information-theoretic and thermodynamic quantities.
Our setup assumes a qubit system, where the local degrees of freedom are finite and commuting. How do we apply our approach in Sec. 3 to the fermionic context or other experimental contexts which may not be easily described in terms of qubits?
One may also consider different types of locallyscrambled or approximately locally-scrambled ensembles than the Clifford ensemble. For example, one may consider instead quantum Brownian motion ensembles, where the evolution is generated by a random Hamiltonian coupling nearby sites. The entanglement features of such ensembles have been extensively studied in prior work [4,36,52]. What are the reconstruction coefficients for such ensembles? What are their prediction properties? Classical shadow tomography is a rapidly developing field with many interesting applications and open questions. We hope that our paper provides some insights that can motivate further interesting developments.
A MPS representation of finite-depth stabilizer states
Here, we outline an algorithm for constructing the MPS representation of a stabilizer state, given the generators g_i of the corresponding stabilizer group G = ⟨g_1, …, g_n⟩. In the classical shadow protocol, we start with a trivial product state |b⟩ = ⊗_{i=1}^n |b_i⟩, which represents a measurement outcome in the computational basis, and evolve it by a two-local Clifford circuit. The initial stabilizers can therefore transform into more complicated Pauli group elements.
For a Clifford circuit of depth L > 0, the size of each g_i is at most 2L. The state ρ is then a sum over stabilizer group elements. Similarly, we can represent the state as an MPS in the Pauli basis, described by coefficients c_⃗s, where ⃗s ∈ {0, …, 3}^n denotes a Pauli string.
For example, the GHZ state on two sites has two generators, g_1 = ZZ, g_2 = XX, and four nonzero components in the Pauli basis, corresponding to II, XX, YY, ZZ. The nonzero coefficients are c_00 = c_11 = c_33 = −c_22 = 1/4; all other components vanish.
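The quoted coefficients are easy to check numerically from c_s = Tr(P_s ρ)/2^n; a minimal verification (helper name ours):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Two-site GHZ (Bell) state (|00> + |11>)/sqrt(2)
ghz = np.zeros(4, dtype=complex)
ghz[0] = ghz[3] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz.conj())

def coef(A, B):
    """Pauli coefficient c_s = Tr(P_s rho) / 2^n for n = 2."""
    return np.trace(np.kron(A, B) @ rho).real / 4

assert np.isclose(coef(I, I), 0.25)    # c_00
assert np.isclose(coef(X, X), 0.25)    # c_11
assert np.isclose(coef(Y, Y), -0.25)   # c_22
assert np.isclose(coef(Z, Z), 0.25)    # c_33
```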
We describe a procedure to determine c_⃗s as an MPS, given a set of generators g_1, …, g_n and corresponding phases (−1)^{s_1}, …, (−1)^{s_n}. The idea is that the local MPS tensor at site x should transmit through its virtual indices which of the stabilizers with support on x are included in the product (i.e. have b_i = 1). First, we determine a contiguous set of sites containing the support of each generator; we call this the extent of the operator. Given the extent of each generator, let P_i = I + (−1)^{s_i} g_i be a projection operator (or state in the Pauli basis) defined on the extent. Schematically, the local MPS tensor [P_i^j] at site j is determined by k, the size of the extent, and g_i^j, the Pauli operator at site j in g_i. Given the MPS P_i, we need only one more ingredient to determine the MPS representation of the stabilizer state: the Pauli algebra fusion tensor V^c_{a,b}. This tensor implements the product of Pauli operators and evaluates to the correct phase associated with the product.
Then, to determine the MPS tensor for the stabilizer state at site j, first collect all the generators whose extent contains j. Then combine their MPS tensors along the physical dimension using the fusion tensor. If more than two generators have support at j, we can use multiple fusion tensors.
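The fusion tensor V^c_{a,b} itself can be tabulated directly from the Pauli multiplication rules; a small sketch (construction ours):

```python
import numpy as np

PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def fusion_tensor():
    """V[c, a, b] is the phase such that P_a P_b = sum_c V[c,a,b] P_c.
    For each pair (a, b) exactly one c is nonzero, with phase in
    {1, -1, i, -i}, extracted via V[c,a,b] = Tr(P_c^dag P_a P_b) / 2."""
    V = np.zeros((4, 4, 4), dtype=complex)
    for a in range(4):
        for b in range(4):
            prod = PAULI[a] @ PAULI[b]
            for c in range(4):
                V[c, a, b] = np.trace(PAULI[c].conj().T @ prod) / 2
    return V

V = fusion_tensor()
assert np.isclose(V[2, 1, 3], -1j)   # X Z = -i Y
assert np.isclose(V[0, 2, 2], 1)     # Y Y = I
```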