Learning quantum many-body systems from a few copies

Estimating physical properties of quantum states from measurements is one of the most fundamental tasks in quantum science. In this work, we identify conditions on states under which it is possible to infer the expectation values of all quasi-local observables of a state from a number of copies that scales polylogarithmically with the system's size and polynomially with the locality of the target observables. We show that this constitutes a provable exponential improvement in the number of copies over state-of-the-art tomography protocols. We achieve our results by combining the maximum entropy method with tools from the emerging fields of classical shadows and quantum optimal transport. The latter allows us to fine-tune the error made in estimating the expectation value of an observable in terms of how local it is and how well we approximate the expectation value of a fixed set of few-body observables. We conjecture that our condition holds for all states exhibiting some form of decay of correlations and establish it for several subsets thereof. These include widely studied classes of states such as one-dimensional thermal states, high-temperature Gibbs states of local commuting Hamiltonians on arbitrary hypergraphs, and outputs of shallow circuits. Moreover, we show improvements of the maximum entropy method beyond the sample complexity that are of independent interest. These include identifying regimes in which it is possible to perform the postprocessing efficiently, as well as novel bounds on the condition number of covariance matrices of many-body states.


Introduction
The subject of quantum tomography has as its goal devising methods for efficiently obtaining a classical description of a quantum system from access to experimental data. However, all tomographic methods for general quantum states inevitably require resources that scale exponentially in the size of the system [33,55], be it in terms of the number of samples required or the post-processing needed to perform the task.

Cambyse Rouzé: rouzecambyse@gmail.com
Daniel Stilck França: daniel.stilck_franca@ens-lyon.fr
Fortunately, most physically relevant quantum systems can be described in terms of a (quasi-)local structure. These range from a local interaction Hamiltonian corresponding to a finite-temperature Gibbs state to a shallow quantum circuit. Hence, locality is a physically motivated requirement that brings the number of parameters describing the system down to a tractable number, and effective tomographic procedures should be able to incorporate this information. And, indeed, starting from physically motivated assumptions, many protocols in the literature achieve a good recovery guarantee in trace distance from a number of copies that scales polynomially with system size [5,10,23,29,64,67].
Furthermore, in many cases one is interested in learning only physical properties of the state on which tomography is being performed. These are mostly encoded in the expectation values of quasi-local observables, which often depend only on reduced density matrices of subregions of the system. By Helstrom's theorem, obtaining a good recovery guarantee in trace distance is equivalent to demanding that the expectation values of all bounded observables be close for the two states, a much larger class of observables than the quasi-local ones.
It is, in turn, desirable to design tomographic procedures that can take advantage of the fact that we only wish to approximate quasi-local observables, instead of demanding a recovery in trace distance. Some methods in the literature take advantage of this. For instance, the overlapping tomography or classical shadows methods of [22,24,40,43] allow for approximately learning all k-local reduced density matrices of an n-qubit state with failure probability δ using O(e^{ck} k log(nδ^{−1}) ϵ^{−2}) copies, without imposing any assumptions on the underlying state. This constitutes an exponential improvement in the system size compared to the previously mentioned many-body setting, at the expense of an undesirable exponential dependency on the locality of the observables.
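To make the exponential locality dependence concrete, the following is a minimal sketch (not the exact protocol of the cited works) of a random single-qubit Pauli-basis classical-shadows estimator, run on the product state |+⟩^⊗n so that measurement outcomes are easy to simulate; every qubit in the support of the target Pauli contributes a factor of 3 when the random basis matches, which is precisely the source of the e^{ck}-type variance in the locality k. All variable and function names here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n, shots = 4, 20000

# State: |+>^n, the +1 eigenstate of X on every qubit. Measuring a qubit in the
# X basis then deterministically gives +1; in the Y or Z basis it gives +-1 uniformly.
def sample_shadow():
    bases = rng.choice(["X", "Y", "Z"], size=n)
    outcomes = np.where(bases == "X", 1, rng.choice([-1, 1], size=n))
    return bases, outcomes

def estimate_pauli(support, paulis, snapshots):
    """Classical-shadow estimate of a k-local Pauli: each qubit in the support
    contributes 3*outcome if the random basis matched, else the snapshot is discarded."""
    total = 0.0
    for bases, outcomes in snapshots:
        v = 1.0
        for i, p in zip(support, paulis):
            v *= 3.0 * outcomes[i] if bases[i] == p else 0.0
        total += v
    return total / len(snapshots)

snaps = [sample_shadow() for _ in range(shots)]
est = estimate_pauli([0, 1], ["X", "X"], snaps)  # true value: <X0 X1> = 1
print(est)
```

Only a fraction 3^{−k} of the snapshots contribute to a k-local Pauli, and each contributing snapshot has magnitude 3^k, so the single-shot variance grows exponentially in k while the dependence on n enters only through the number of observables, matching the O(e^{ck} k log(nδ^{−1})ϵ^{−2}) scaling quoted above.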
In light of the previous discussion, it is natural to ask the guiding question of our work: is it possible to devise a tomography protocol whose sample complexity is logarithmic in the system size and polynomial in the locality of the observables we wish to estimate?
At first, this might sound like a tall order: as we show in Section G by importing results of [27], even if we start from the assumption that the underlying state we wish to learn is a high-temperature product state on n qubits, the number of samples required to obtain an estimate that is ϵ-close in trace distance to the target state scales like Ω(nϵ^{−2}). Thus, to obtain a sample complexity that is logarithmic in system size we cannot quantify closeness in trace distance and need to resort to more physically motivated distinguishability measures. Moreover, we show in Section G that even for product states the classical shadows protocol will fail to produce a good estimate for k-local observables if the number of samples is not exponential in k. We conclude that protocols like shadow tomography on their own cannot achieve our goal of a sample complexity that is polynomial in the locality of the underlying observables and need to be combined with other estimation methods in a nontrivial way.
Despite these challenges, we provide an affirmative answer to the guiding question above for a large class of physically motivated states. We achieve this by combining two insights. First, we observe that recently introduced Wasserstein distances [21,26,31,45,56,59] are better suited than the trace distance to estimate by how much the expectation values of physically motivated observables can differ on two states. We introduce these distances and motivate this claim below. In summary, the Wasserstein distance quantifies how well we can distinguish states through observables whose expectation value does not change much when we apply a unitary acting only on a few qubits. By focusing on the Wasserstein distance instead of the trace distance we can bypass the Ω(nϵ^{−2}) lower bound mentioned previously. Intuitively, this means that exponentially fewer samples are required to estimate all such local expectation values than arbitrary, global ones.
The second insight is to combine techniques from quantum optimal transport with the well-established maximum entropy method [42] and the classical shadows protocols in a novel way. In particular, we will demonstrate that so-called transportation cost inequalities [21,26,31,45,56,59] allow us to control how well we approximate the expectation value of k-local observables by how well we approximate certain observables that act only on a constant number of qubits. Thus, we only use the shadows protocol to estimate the expectation values of many highly local observables, the regime in which classical shadows excel, and bypass the exponential scaling incurred by using shadows alone for the full estimation task. This way we obtain a provable exponential improvement over known methods of many-body tomography [5,10,23,29,64,67] that focus on the trace distance, and over recent shadow tomography or overlapping tomography techniques [22,24,40,43], as summarized in Table 1.
Examples for which we obtain exponential improvements include thermal states of 1D systems, high-temperature thermal states of commuting Hamiltonians on arbitrary hypergraphs, and outputs of shallow circuits. Furthermore, based on results by [34,49], we conjecture that our results should hold for any high-temperature Gibbs state. More ambitiously, we conjecture that our results can be extended to states exhibiting exponential decay of correlations. This would allow us to extend our findings to classes of states that are not known to be tractable classically, such as ground states of gapped Hamiltonians on higher-dimensional lattices [39].
The main ingredient of our improvements is the family of so-called transportation cost (TC) inequalities [61]. They allow us to bound the difference of expectation values of Lipschitz observables, a concept we will review shortly, on two states by their relative entropy. Such inequalities constitute a powerful tool from the theory of optimal transport [65] and are traditionally used to prove sharp concentration inequalities [58, Chapter 3]. Moreover, they have recently been extended to quantum states [26,56,59]. By combining such inequalities with the maximum entropy principle, we are able to easily control the relative entropy between the states and, thus, the difference of expectation values of Lipschitz observables.
Our revisit of the maximum entropy principle is further motivated by recent breakthroughs in Hamiltonian learning [5,34], shadow tomography [40], the understanding of correlations and computational complexity of quantum Gibbs states [35,46,49,51,52] and quantum functional inequalities [18,26], which shed new light on this seasoned technique.
Before we summarize our contributions in more detail, we first define and review the main concepts required for our results, namely Lipschitz observables, transportation cost inequalities and the maximum entropy principle.

Lipschitz observables
In the classical setting, given a metric d on a sample space S, the regularity of a function f : S → R can be quantified by its Lipschitz constant [58, Chapter 3]

∥f∥_Lip = sup_{x ≠ y} |f(x) − f(y)| / d(x, y). (1)

For instance, if we consider functions on the n-dimensional hypercube {−1, 1}^n endowed with the Hamming distance, the Lipschitz constant quantifies by how much a function can change per flipped spin. It should then be clear that physical quantities like the average magnetization have a small Lipschitz constant. Some recent works [56,59] extended this notion to quantum observables, and we follow the approach of [56] in the main text. This is justified by the fact that it is more intuitive and technically simpler. For the approach followed in [56], the Lipschitz constant of an observable O on n qudits is defined as

∥O∥_Lip,□ := √n max_{i∈[n]} sup { tr[O(ρ − σ)] : ρ, σ ∈ D_{d^n}, tr_i(ρ) = tr_i(σ) }, (2)

where D_{d^n} denotes the set of n-qudit states. That is, ∥O∥_Lip,□ quantifies the amount by which the expectation value of O changes for states that are equal when tracing out one site. It is clear that ∥O∥_Lip,□ ≤ 2√n ∥O∥_∞ always holds by Hölder's inequality, but it can be the case that ∥O∥_Lip,□ ≪ √n ∥O∥_∞. For instance, consider for some k > 0 the n-qubit observable

O = (1/n) Σ_{j=1}^n Z_j Z_{j+1} ⋯ Z_{j+k−1}, (3)

where for each site j, Z_j denotes the Pauli observable Z acting on site j and we take addition modulo n. It is not difficult to see that ∥O∥_Lip,□ = 2k n^{−1/2}, while ∥O∥_∞ = 1. We refer to the discussion in Fig. 1 for another example.
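The two claims above can be checked numerically on a small chain. The sketch below assumes our reading of the observable in Eq. (3), namely the average O = (1/n) Σ_j Z_j ⋯ Z_{j+k−1} of products of k consecutive Pauli-Z's with periodic boundary conditions; since O is diagonal in the computational basis, ∥O∥_∞ is a maximum over bitstrings, and flipping a single spin changes at most the k terms containing it:

```python
import numpy as np
from itertools import product

n, k = 8, 2

def O_val(z):
    # Diagonal eigenvalue of O = (1/n) sum_j Z_j ... Z_{j+k-1} on the spin config z
    return sum(np.prod([z[(j + a) % n] for a in range(k)]) for j in range(n)) / n

zs = [np.array(z) for z in product([1, -1], repeat=n)]
assert max(abs(O_val(z)) for z in zs) == 1.0  # ||O||_inf = 1 (all-ones configuration)

# Largest change of <O> under a single-site flip, rescaled by sqrt(n):
max_flip = 0.0
for z in zs:
    for i in range(n):
        z2 = z.copy(); z2[i] = -z2[i]
        max_flip = max(max_flip, abs(O_val(z) - O_val(z2)))
print(np.sqrt(n) * max_flip, 2 * k / np.sqrt(n))  # both equal 2k/sqrt(n)
```

Flipping one spin changes exactly the k terms containing it, each by 2/n, which after the √n rescaling reproduces the value 2k n^{−1/2} quoted in the text.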
Moreover, one can show that shallow local circuits or short-time local dynamics satisfying a Lieb-Robinson bound cannot substantially increase the Lipschitz constant of an observable when evolved in the Heisenberg picture. That is, if Φ*_t is the quantum channel that describes some local dynamics at time/depth t in the Heisenberg picture and it satisfies a Lieb-Robinson bound, then we have

∥Φ*_t(O)∥_Lip,□ = O(e^{vt}) ∥O∥_Lip,□,

where v denotes the Lieb-Robinson velocity. This result is discussed in more detail in Section B.1 of the supplemental material. Thus, averages over local observables, and short-time evolutions thereof, all belong to the class of observables that have a small Lipschitz constant when compared to generic observables. These facts justify our claim that quasi-local observables are Lipschitz.
Once we are given a Lipschitz constant on observables, or a set of quasi-local observables, we can define a Wasserstein-1 distance on states by duality [56,59]. (The definition of [56] has a different normalization and does not contain the √n term; our normalization will be convenient to treat the constants of [59] and [56] on an equal footing.)
The latter quantifies how well we can distinguish two states by their action on regular, local observables and is given by

W_1(ρ, σ) := sup_{∥O∥_Lip,□ ≤ 1} tr[O(ρ − σ)]. (4)

The definition (4) is in direct analogy with the variational definition of the trace distance, given by

∥ρ − σ∥_tr := sup_{∥O∥_∞ ≤ 1} tr[O(ρ − σ)]. (5)

Note, however, that the two quantities have different scalings. To illustrate this point, let us consider the observable in Eq. (3). If we measure in trace distance, then we need ∥ρ − σ∥_tr ≤ ϵ to ensure that σ approximates the expectation value of O on ρ up to ϵ. On the other hand, as ∥O∥_Lip,□ = 2kn^{−1/2}, the much weaker requirement W_1(ρ, σ) ≤ ϵ√n/(2k) is sufficient to guarantee the same approximation. This difference in scaling is at the heart of our results, as we will see now.

Transportation cost inequalities
The paragraphs above motivated the idea that observables with a small Lipschitz constant capture quasi-local observables, and thus that controlling the Wasserstein distance between two states gives rise to a more physically motivated distance measure than the trace distance. However, it is a priori not clear how to effectively control the Wasserstein distance between states, as it does not admit a closed formula in terms of eigenvalues like the trace distance.
In this work, we will achieve this by relating Wasserstein distances to the relative entropy between two states, D(ρ∥σ) := tr[ρ(log(ρ) − log(σ))], for σ of full rank. This can be achieved through the notion of a transportation cost inequality: an n-qudit state σ is said to satisfy a transportation cost inequality with parameter α > 0 if the Wasserstein distance of σ to any other state ρ can be controlled by their relative entropy, i.e. if

W_1(ρ, σ) ≤ √(D(ρ∥σ)/(2α)) (6)

holds for all states ρ ∈ D_{d^n}. Such inequalities are particularly powerful whenever the constant α does not depend on the system size n, or does so at most inverse polylogarithmically, and can be thought of as a strengthening of Pinsker's inequality.
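For orientation, the inequality being strengthened, Pinsker's inequality ∥ρ − σ∥_tr ≤ √(2D(ρ∥σ)), is easy to verify numerically; a small sketch on random single-qubit states (all names below are our own):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_state(d=2):
    # Random full-rank density matrix via a Wishart-like construction
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def logm(M):  # matrix logarithm of a positive definite matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.conj().T

def rel_entropy(rho, sigma):  # D(rho||sigma) = tr[rho(log rho - log sigma)]
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

def trace_dist(rho, sigma):  # ||rho - sigma||_1, the sum of absolute eigenvalues
    return np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

ok = True
for _ in range(200):
    rho, sigma = rand_state(), rand_state()
    ok = ok and trace_dist(rho, sigma) <= np.sqrt(2 * rel_entropy(rho, sigma)) + 1e-9
print(ok)
```

A TC inequality with size-independent α plays the same role for the much larger quantity W_1, which is what makes it strictly stronger than Pinsker's bound in the many-body setting.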
Transportation cost inequalities are closely related to the notion of Gaussian concentration [12,56,59], i.e. the phenomenon that Lipschitz functions concentrate strongly around their mean. Establishing analogs of such concentration inequalities for quantum many-body systems has been a fruitful line of research in recent years, and they are related to fundamental questions in statistical physics, see e.g. [3,15,49,50,62]. Although we are certain that inequalities like Eq. (6) also shed new light on this matter, here we will focus on their application to learning a classical description of a state through maximum entropy methods. We refer to Table 2 for a summary of classes of states known to satisfy such inequalities, as discussed in more detail below. Unfortunately, for some important classes the inequalities are only known for more technically involved variations of the Wasserstein distance, and we refer the reader to the supplemental material, Section B.2, for precise definitions.
Recent works have established transportation cost inequalities with α either constant or logarithmic in system size for several classes of Gibbs states of commuting Hamiltonians [18,26,56]. In summary, they are known to hold for local Hamiltonians on arbitrary hypergraphs at high enough temperatures, or in 1D. In this work we enlarge the class of examples by establishing them for outputs of short-depth circuits in Sec. C.1. Note that Eq. (6) is trivial for pure states, as the relative entropy between such a state and any other is always +∞. Thus, we first find an appropriate full-rank approximation of the pure state for which the inequality holds, as we will discuss below.

Maximum-entropy methods
Let us now show how transportation cost inequalities can be combined with maximum entropy methods. Such methods start from the assumption that we are given a set of linearly independent, self-adjoint observables E_1, …, E_m over an n-qudit system, a maximal inverse temperature β > 0, and the promise that the state we wish to learn can be expressed as

σ(λ) = exp(−β Σ_{i=1}^m λ_i E_i)/Z(λ), (7)

where λ ∈ R^m with sup norm ∥λ∥_ℓ∞ ≤ 1 and Z(λ) = tr[exp(−β Σ_{i=1}^m λ_i E_i)] is the partition function. Denoting by e(λ) the vector with components e_i(λ) = tr[E_i σ(λ)], the crux of the maximum entropy method is that λ is the unique minimizer of

μ ↦ log(Z(μ)) + β⟨μ, e(λ)⟩, (8)

which gives us a convex variational principle to learn the state given e(λ). We refer to Sec. A for a discussion of the maximum entropy principle and its properties. Typical examples of observables E_i are, e.g., all 2-local Pauli observables corresponding to edges of a given graph. This models the situation in which we are guaranteed that the state is a thermal state of a Hamiltonian with a known locality structure. More generally, for most of the examples discussed here the E_i are given as follows: we start from a hypergraph G = (V, E) and assume that there is a maximum radius r_0 ∈ N such that, for any hyperedge A ∈ E, there exists a vertex v ∈ V such that the ball B(v, r_0) centered at v and of radius r_0 includes A. The E_i are then given by a basis of the traceless matrices on each hyperedge A. This definition captures the notion of a local Hamiltonian w.r.t. the hypergraph.
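The variational principle can be made concrete on a toy commuting example. The sketch below assumes that the functional optimized in Eq. (8) is the standard free-energy form f(μ) = log Z(μ) + β⟨μ, e(λ)⟩, whose gradient is β(e(λ) − e(μ)), and checks that λ minimizes it for a hypothetical 3-qubit chain with E_1 = Z_0Z_1 and E_2 = Z_1Z_2 (everything is diagonal, so all quantities reduce to classical sums):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n, beta = 3, 1.0
zs = np.array(list(product([1, -1], repeat=n)))
# The two observables E_1 = Z_0 Z_1 and E_2 = Z_1 Z_2, evaluated on each spin configuration
terms = np.stack([zs[:, 0] * zs[:, 1], zs[:, 1] * zs[:, 2]], axis=1)

def logZ(mu):
    return np.log(np.exp(-beta * (terms @ mu)).sum())

def expectations(mu):  # e_i(mu) = tr[E_i sigma(mu)]
    p = np.exp(-beta * (terms @ mu)); p /= p.sum()
    return p @ terms

lam = np.array([0.7, -0.4])
e_lam = expectations(lam)
f = lambda mu: logZ(mu) + beta * (mu @ e_lam)  # convex, gradient beta*(e(lam) - e(mu))

# lam should be the unique minimizer: random perturbations never decrease f
assert all(f(lam + d) >= f(lam) - 1e-12 for d in rng.uniform(-0.5, 0.5, size=(200, 2)))
print("lam minimizes the free-energy functional")
```

The convexity of log Z makes this a well-posed optimization: the gradient vanishes exactly when e(μ) = e(λ), which is the content of the variational principle.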
Our framework also encompasses pure states after an appropriate approximation. For instance, in this article we will also consider the outputs of shallow quantum circuits, and we believe our framework extends to unique ground states of gapped Hamiltonians, which are pure. Indeed, although it might not be clear a priori, we will show below how the outputs of constant-depth circuits are contained in the class of ground states of gapped, commuting Hamiltonians, and these are well-approximated by Gibbs states at finite temperature. This connection makes intuitive sense: the outputs of a shallow circuit are characterized by the reduced density matrices of the lightcone of each qubit, and one of the defining properties of Gibbs states is that they are uniquely determined by their marginals. This shared property is what links the two classes of states.
Let us specify further what it means to learn the output of a shallow circuit. Suppose that |ψ⟩ = U|0⟩^⊗n is the output of an unknown shallow circuit U of depth L with respect to a known interaction graph (V, E) on n vertices. That is, U = U_L ⋯ U_1, where each layer U_ℓ is a product of two-qubit gates acting on a subset E_ℓ ⊂ E of non-intersecting edges. Thus, in this setting the locality of the circuit is known, but the underlying unitary is not. As we then show in Theorem C.1, the state |ψ⟩ is ϵ²-close in Wasserstein distance to the Gibbs state σ corresponding to the Hamiltonian with local terms U Z_i U† at the inverse temperature β = log(ϵ^{−1}). By a simple lightcone argument we can bound the support of each U Z_i U†, since we know the underlying structure of the circuit. We then show in Thm. C.1 that it is indeed possible to efficiently learn the outputs of such circuits as long as the support of each time-evolved Z_i is at most logarithmic in system size.
We see from Eq. (8) that the expectation values of the E_i completely characterize the state σ(λ). It is possible to obtain a more quantitative version of this statement through the following identity, also observed in [5]:

D(σ(μ)∥σ(λ)) + D(σ(λ)∥σ(μ)) = β⟨μ − λ, e(λ) − e(μ)⟩. (9)

In addition to showing that e(λ) = e(μ) implies σ(μ) = σ(λ), Eq. (9) implies that by controlling how well the local expectation values of one state approximate those of another, we can also easily control their relative entropies. In particular, if m = O(n) and ∥e(μ) − e(λ)∥_ℓ1 = O(ϵn) for some ϵ > 0, we obtain from an application of Hölder's inequality that

D(σ(μ)∥σ(λ)) = O(βϵn). (10)

We refer to Section A for more details. Thus, if we can find a state that approximates the expectation value of each E_i up to ϵ, we are guaranteed to have a relative entropy density of O(βϵ). This observation is vital to ensure that the maximum entropy principle still yields a good estimate of the state even under some statistical noise in the vector of expectation values e(λ). Indeed, the variational principle of Eq. (8) would allow us to recover the state exactly if we had access to the exact values of e(λ). However, it turns out that solving Eq. (8) with some estimate ê(λ) such that ∥ê(λ) − e(λ)∥_ℓ∞ ≤ ϵ still yields a Gibbs state σ(μ) satisfying Eq. (10). Moreover, the maximum entropy problem is a strictly convex optimization problem, and can thus be solved efficiently with access to the gradient of the target function. The gradient turns out to be proportional to e(λ) − e(μ), where μ is the current guess for the optimum. Although we will discuss the details of solving the problem later in Sec. A, in a nutshell, the maximum entropy problem can be solved efficiently if it is possible to efficiently compute expectation values of the observables E_i on the family of Gibbs states under consideration.
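Since the gradient of the objective is proportional to e(λ) − e(μ), the postprocessing is plain gradient descent on expectation values. The following sketch recovers μ from noisy estimates ê on a hypothetical 4-qubit ring of Z_iZ_{i+1} terms (chosen commuting so that σ(μ) reduces to a classical Gibbs distribution); the objective f(μ) = log Z(μ) + β⟨μ, ê⟩ with gradient β(ê − e(μ)) is our reading of the variational principle described above:

```python
import numpy as np
from itertools import product

n, beta = 4, 0.8
zs = np.array(list(product([1, -1], repeat=n)))
# E_i = Z_i Z_{i+1} on a ring: all terms diagonal, so sigma(mu) is a classical Gibbs measure
terms = np.stack([zs[:, i] * zs[:, (i + 1) % n] for i in range(n)], axis=1)

def expectations(mu):  # e_i(mu) = tr[E_i sigma(mu)]
    p = np.exp(-beta * (terms @ mu)); p /= p.sum()
    return p @ terms

rng = np.random.default_rng(3)
lam = rng.uniform(-1, 1, size=n)
e_hat = expectations(lam) + rng.normal(0.0, 1e-3, size=n)  # noisy estimates of e(lam)

# Gradient descent on f(mu) = log Z(mu) + beta <mu, e_hat>; grad f = beta (e_hat - e(mu))
mu, eta = np.zeros(n), 0.5
for _ in range(5000):
    mu -= eta * beta * (e_hat - expectations(mu))

print(np.abs(mu - lam).max())  # small: mu matches lam up to the injected noise
```

At the fixed point e(μ) = ê, so the recovered state reproduces the (noisy) expectation values exactly, and strict convexity guarantees the iteration has a unique limit.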

Combining TC with the maximum entropy principle
Suppose now that each of the E_i acts on at most l_0 qudits. Then, by using e.g. the method of classical shadows, we can estimate the expectation values of all E_i up to ϵ with failure probability at most δ > 0 from O(4^{l0} ϵ^{−2} log(mδ^{−1})) samples. From our discussion above, we see that this is enough to obtain a state σ(μ) satisfying Eq. (10). Further assuming that σ(λ) satisfies a TC inequality with some constant α > 0, we conclude that

W_1(σ(μ), σ(λ)) ≤ √(D(σ(μ)∥σ(λ))/(2α)) = O(√(βϵn/α)). (11)

Finally, recall that for sums of k-local operators on a 2D lattice like in Fig. 1, where we have k = L², the Lipschitz constant satisfies ∥O∥_Lip,□ = O(√n), and we require a precision of O(ϵn/k) to obtain a relative error of ϵ. Putting all these elements together, we conclude that by setting ϵ = αε²/(βk²) for some ε > 0 we arrive at a relative error of ε for the expectation value. In particular, we see that the sample complexity required to obtain this error was

O(4^{l0} β² k⁴ α^{−2} ε^{−4} log(mδ^{−1})). (12)

We then obtain:

Theorem 1.1 (Learning Gibbs states). Let σ(λ) be a Gibbs state as defined in Eq. (7) and such that each E_i acts on at most l_0 qudits. Moreover, suppose that σ(λ) satisfies a TC inequality with α depending at most inverse logarithmically on the system size. Then with probability of success 1 − δ we can obtain a state σ(μ) such that, for all observables O,

|tr[O(σ(μ) − σ(λ))]| ≤ ϵ√n ∥O∥_Lip,□ (13)

from O(4^{l0} β² poly(ϵ^{−1}, log(mδ^{−1}))) samples of σ(λ). Moreover, if it is possible to compute the expectation values of the E_i on σ(τ) for ∥τ∥_ℓ∞ ≤ 1, then the postprocessing can also be done in polynomial time.
We once again stress that the recovery guarantee in Eq. (13) suffices to give good relative approximations for the expectation value of quasi-local observables. Furthermore, if we did not resort to the Wasserstein distance but rather the trace distance, as in known results for the tomography of many-body states [5,10,23,53,67], the sample complexity would be exponentially worse, as we prove in Sec. G. More precisely, any algorithm that estimates Gibbs states on a lattice at inverse temperatures β = Ω(n^{−1/2}) up to trace distance ϵ requires Ω(nϵ^{−2}) samples. Thus, even for states whose inverse temperature goes to 0 as the system size increases, a focus on the trace distance instead of the Wasserstein distance implies an exponentially worse sample complexity.
Theorem 1.1 also provides an exponential improvement over shadow techniques in the locality of the observables, as we argue in Sec. G. However, unlike our methods, shadow techniques do not need to make any assumptions on the underlying states. Thus, we see that Theorem 1.1 opens up the possibility of highly efficient characterization of quantum states and provably exponentially better sample complexities when compared with recovery in trace distance.

Table 2: Performance of the algorithm under various assumptions on the underlying state to obtain a state that is ϵ√n close in Wasserstein distance. All the estimates are up to polylog(n) factors. We refer to Sec. B.3 for proofs of the TC inequalities used and to Sec. C for how to combine them with maximum entropy methods. In Section D we explain how to obtain the sample complexity by combining Thm. A.1 with strong convexity bounds. For the postprocessing we refer to Section E. The case of shallow circuits is discussed in more detail in Sec. C.1. By lightcone l0 we mean the size of the largest lightcone of each qubit in the circuit.
We also remark that it is possible to improve the scaling in accuracy in Eq. (12) from ϵ^{−4} to the expected ϵ^{−2}. To do so, it is important to bound the condition number of the Hessian of the log-partition function, as we explain in the methods. For outputs of shallow circuits of depth L, for instance, estimating the relevant observables with classical shadows alone requires poly(e^{cL}, log(n), ϵ^{−1}) samples, an exponentially worse dependency in L. Even for moderate values of L, say L = 5, this can lead to savings in sample complexity by a factor of 10^7, and gives an exponential speedup for L = poly(log(n)). Other many-body methods have a poly(L, n, ϵ^{−1}) scaling [5,10,23,53,67], which in turn is exponentially worse in the system size.

Numerical results
We will now compare the performance of our method to the classical shadows protocol [40] for estimating the average of a local observable on a Gibbs state. To ensure that we can still generate samples for a high number of qubits, we consider the family of commuting Gibbs states in 1D given in Eq. (14), where S is the shift operator, λ ∈ B_ℓ∞(0, 1), and we assume n is even. We then estimate the expectation value of the 8-local observable in Eq. (15). The results for one particular choice of Gibbs state in this class are shown in Fig. 2. They show that even for observables of moderate locality like the one in Eq. (15), shadows are outperformed by maximum entropy methods by orders of magnitude. Also note that the quality of our estimates decays like ∼ 1/√s, where s is the number of samples, showing that the quality of the recovery is essentially independent of the system's size.
We also remark that, to obtain these results, we estimated the expectation values of the X_iX_{i+1} terms by measuring every qubit in the X basis, and those of the XXYY terms by measuring in a sequence of XXYY bases followed by the same basis shifted by 2.
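The X-basis step is easy to illustrate: a Hamiltonian built only from X_iX_{i+1} terms is diagonal in the X basis, so measuring every qubit in the X basis samples bitstrings from a classical Gibbs distribution, and the pair correlators are plain empirical averages. A simplified sketch (ignoring the XXYY terms, and at a small n where exact sampling is possible; parameters loosely follow the experiment above):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n, beta, shots = 8, 1.1, 5000
lam = rng.uniform(0.5, 0.9, size=n)

# X-basis configurations x in {+-1}^n and the pair terms x_i x_{i+1} (periodic)
xs = np.array(list(product([1, -1], repeat=n)))
pairs = np.stack([xs[:, i] * xs[:, (i + 1) % n] for i in range(n)], axis=1)

# Gibbs weights of H = sum_i lam_i X_i X_{i+1}, which is diagonal in the X basis
p = np.exp(-beta * (pairs @ lam)); p /= p.sum()

samples = rng.choice(len(xs), size=shots, p=p)  # simulated X-basis measurement outcomes
est = pairs[samples].mean(axis=0)               # empirical <X_i X_{i+1}>
true = p @ pairs
print(np.abs(est - true).max())  # statistical error ~ 1/sqrt(shots)
```

A single measurement setting yields all n correlators at once, which is why the few-body estimation step of the protocol is so cheap in practice.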

Conclusion
In this article we have demonstrated that ideas from quantum optimal transport yield provable exponential improvements in the sample complexity required to learn a quantum state when compared to state-of-the-art methods. More precisely, we showcased how the interplay between maximum entropy methods and the Wasserstein distance, mediated by a transportation cost inequality, allows for fine-tuning the complexity of the observables whose expectation value we wish to estimate against the number of samples required. Through our techniques, we essentially settled most questions related to how efficiently it is possible to learn a commuting Gibbs state and significantly advanced our understanding of general Gibbs states.

Figure 2: Results for the Gibbs state in Eq. (14) and the 8-local observable in Eq. (15). We have set the number of qubits to 100, β = 1.1 and the λi uniformly at random between 0.5 and 0.9. The x-axis denotes the logarithm of the number of samples in base 10 and the y-axis the error in absolute value with respect to the true value. We ran each protocol 300 times on the same Gibbs state to see how the estimate varied. We see that even with 10^4 samples the shadows method still has errors of order 10^0 at the 75th percentile, whereas the maximum entropy method already yields good estimates when the number of samples is of order 10^2, showcasing that maximum entropy methods outperform classical shadows by orders of magnitude for observables of moderate locality like those in Eq. (15).

With the impressive growth in
the size of quantum devices available in the lab over the last years, we believe that the polylogarithmic in system size sample complexities obtained here will come in handy to calibrate and characterize systems containing thousands or millions of qubits.
We believe that the framework and philosophy we began to develop here will find applications in other areas of quantum information theory. Indeed, although a bound in trace distance is the gold standard and one of the most widely accepted and used measures of distance between quantum states in quantum information and computation, we argued that in many physically relevant settings demanding a trace distance bound might be too strong. More importantly, replacing the trace distance by a Wasserstein distance bound can lead to exponential complexity gains, as in this article. Thus, we believe that this approach is likely to lead to substantial gains and improvements in other areas as well, such as quantum machine learning, process tomography, or quantum many-body problems. Some of the outstanding open questions raised by this article are establishing that a suitable notion of exponential decay of correlations implies a transportation cost inequality in general, and showing that TC holds for a larger class of systems. We believe that our framework should also extend to ground states of gapped Hamiltonians in 1D; however, such a statement would still require us to refine our bounds. The results presented here also make us conjecture that any high-temperature Gibbs state satisfies a TC inequality, even for long-range interactions, which would make our techniques applicable to essentially all physically relevant models at high temperatures.
That being said, to the best of our knowledge, all states that are known to satisfy a TC inequality can also be simulated efficiently classically. However, there is a priori no reason to believe that a TC inequality necessarily implies that the underlying states can be simulated classically, and we believe that this is more a reflection of the fact that the study of such inequalities is still incipient. Furthermore, although we explained how to perform the tomography of the Gibbs state with the maximum entropy principle, other Hamiltonian learning procedures also apply, as all we need is to upper-bound the relative entropy through Eq. (9). Thus, breakthroughs in Hamiltonian learning would also apply to the setting of this paper. In particular, through efficient Hamiltonian learning methods, it might be possible to use the results derived here to verify whether a quantum device correctly implemented a shallow circuit.
Moreover, it would also be interesting to investigate other applications of Gaussian concentration in many-body physics [3,15,49,50,62] from the angle of transportation cost inequalities.

Summary of the maximum entropy procedure and contributions
Now that we have discussed how our results yield better sample complexities for some classes of states, we discuss the maximum entropy algorithm in more detail and comment on how our results equip it with better performance guarantees.
As the maximum entropy principle in Eq. (20) corresponds to solving a convex optimization problem, it should come as no surprise that promises on the strong convexity of the underlying functions being optimized can be leveraged to give improved performance guarantees [14]. For the specific case of the maximum entropy principle, strong convexity guarantees translate to bounds of the form

L 𝟙 ≼ ∇²(log Z)(μ) ≼ U 𝟙 (16)

for constants L, U > 0 and all μ ∈ B_ℓ∞(0, 1). We refer to Sec. A of the supplemental material for a thorough discussion. We note that in [5] the authors show such results in a more general setting, although with U, L polynomial in n, which is not sufficient for our purposes. For us it will be important to ensure that the condition number U/L of the log-partition function is at most polylogarithmic in system size.
If we define the function f : μ ↦ log(Z(μ)) + β⟨μ, e(λ)⟩, then ∇f = β(e(λ) − e(μ)). It then follows from standard properties of strongly convex functions that

∥λ − μ∥_ℓ2 ≤ L^{−1}∥∇f(μ)∥_ℓ2 = βL^{−1}∥e(λ) − e(μ)∥_ℓ2.

That is, whenever the expectation values are close, the underlying parameters must be close as well. In this case, we have from ∥e(λ) − e(μ)∥_ℓ∞ ≤ ϵ and Eq. (9) that

D(σ(μ)∥σ(λ)) = O(β² L^{−1} ϵ² m). (17)

As we will see in the proposition below, L^{−1} = O(e^β β^{−2}) for commuting Hamiltonians, which gives a quadratic improvement in ϵ in Eq. (17) compared to Eq. (10) and yields the expected ϵ^{−2} scaling of the sample complexity in terms of the accuracy.
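For commuting (here, fully classical) Hamiltonians, the Hessian of the log-partition function is β² times the covariance matrix of the E_i under σ(μ), so the constants L and U of Eq. (16), and hence the condition number, can be inspected directly on small instances. A sketch on a hypothetical 6-qubit ring of Z_iZ_{i+1} terms:

```python
import numpy as np
from itertools import product

n, beta = 6, 1.0
zs = np.array(list(product([1, -1], repeat=n)))
# E_i = Z_i Z_{i+1} on a ring, evaluated on each spin configuration
terms = np.stack([zs[:, i] * zs[:, (i + 1) % n] for i in range(n)], axis=1)
mu = np.linspace(-0.8, 0.8, n)

p = np.exp(-beta * (terms @ mu)); p /= p.sum()
e = p @ terms
# Hessian of log Z(mu) = beta^2 * Cov_{sigma(mu)}(E_i, E_j) for commuting terms
hess = beta**2 * ((terms * p[:, None]).T @ terms - np.outer(e, e))

# Finite-difference sanity check of the gradient, d log Z / d mu_0 = -beta * e_0
logZ = lambda m: np.log(np.exp(-beta * (terms @ m)).sum())
h = 1e-6
fd = (logZ(mu + h * np.eye(n)[0]) - logZ(mu - h * np.eye(n)[0])) / (2 * h)
assert abs(fd + beta * e[0]) < 1e-5

w = np.linalg.eigvalsh(hess)
print("L =", w.min(), " U =", w.max(), " condition number =", w.max() / w.min())
```

The smallest eigenvalue of this covariance-type matrix is exactly the strong convexity constant L controlling the error amplification in Eq. (17), which is why bounding condition numbers of covariance matrices of many-body states matters for the sample complexity.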
Proposition 3.1 (Strengthened strong convexity constant for commuting Hamiltonians). For each μ ∈ B_ℓ∞(0, 1), let σ(μ) be a Gibbs state at inverse temperature β corresponding to the commuting Hamiltonian H(μ) = Σ_{i=1}^m μ_i E_i, where tr[E_i E_j] = 0 for all i ≠ j and each local operator E_i is traceless on its support. Then, for β such that the states σ(μ) satisfy exponential decay of correlations, the Hessian of the log-partition function is bounded from below by

∇²(log Z)(μ) ≽ β² e^{−cβ} 𝟙 (18)

for some constant c > 0.
After the completion of the first version of this work, Tang et al. proved strong convexity bounds for high-temperature, (not necessarily geometrically) local Hamiltonians [34, Corollary 4.4]. More precisely, denoting by k the maximal number of qudits each term acts on, they show that L⁻¹ ≤ 2β⁻² for β below a threshold depending on k. Although the result in Eq. (18) has the advantage of giving an estimate at any temperature, we see that strong convexity also holds for noncommuting Gibbs states at high enough temperatures.
The flowchart in Figure 3 gives the general scheme behind the maximum entropy method. Besides the exponential improvements in sample complexity laid out in Table 2, we also provide structural improvements, which we elaborate on while explaining the general scheme: Input: The input consists of m linearly independent operators E_i of operator norm 1, some upper bound β > 0, a precision parameter ϵ > 0 and a step size η. Moreover, we are given the promise that the state of interest satisfies (7). Although we will be mostly concerned with the case in which the observables are local, we show the convergence of the algorithm in general in Sec. A. The step size should be picked as η = O(U⁻¹) with U satisfying (16), as explained in Sec. A.1.

Require:
We assume that we have access to copies of σ(λ) and that we can perform measurements to estimate the expectation values of the observables E_i up to precision ϵ > 0. For most of the examples considered here, this only requires implementing simple, few-qudit measurements.

Output:
The output is a vector of parameters µ of a Gibbs state σ(µ) as in Eq. (7). Note that unlike [5], our goal is not to estimate the vector of parameters λ, but rather to obtain an approximation of the state satisfying σ(λ) ≃ σ(µ). Here we will focus on quantifying the output's quality in relative entropy. More precisely, the output of the algorithm is guaranteed to satisfy D(σ(µ)∥σ(λ)) = O(ϵn).
Step 1: In this step, we estimate the expectation values of each observable E_i on the state σ(λ) up to an error ϵ. The resources to be optimized here are the number of samples of σ(λ) we require and the complexity of implementing the measurements. Using shadow tomography or Pauli grouping methods [13,22,40] we can do so requiring O(4^{r₀} ϵ⁻² polylog(m)) samples and Pauli or 1-qubit measurements, where r₀ is the maximum number of qubits the E_i act on. This is discussed in more detail in Sec. C.
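To convey the flavor of this step, here is a hedged toy sketch of randomized Pauli-basis ("classical shadow") estimation for single-qubit Z observables on a diagonal product state; `bloch_z` is a made-up test vector of true values ⟨Z_i⟩, and this is a simplification, not the full protocol of [13,22,40]:

```python
import numpy as np

# Toy shadow estimation: each shot measures every qubit in a random Pauli basis.
# For a 1-local Pauli observable the inverse-channel rescaling factor is 3.
rng = np.random.default_rng(1)
n, shots = 5, 20000
bloch_z = rng.uniform(-1, 1, n)   # hypothetical true values of <Z_i>

basis = rng.integers(0, 3, size=(shots, n))          # 0 = X, 1 = Y, 2 = Z
# for a Z-diagonal product state, X- and Y-basis outcomes are fair coin flips
mean = np.where(basis == 2, bloch_z, 0.0)
outcomes = np.where(rng.uniform(size=(shots, n)) < (1 + mean) / 2, 1, -1)

# shadow estimator for <Z_i>: 3 * outcome when qubit i was measured in Z, else 0
est = np.mean(np.where(basis == 2, 3 * outcomes, 0), axis=0)
assert np.max(np.abs(est - bloch_z)) < 0.15
```

The variance per shot is O(1) for each 1-local observable, so all n expectation values are recovered simultaneously from the same measurement record.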
Step 2: The maximum entropy problem in Eq. (8) can be solved with gradient descent, as it corresponds to a strictly convex optimization problem [14]. At this step we simply initialize the algorithm at the maximally mixed state.
Step 3: It turns out that the gradient of the maximum entropy target function at µ_t is proportional to e(µ_t) − e(λ). Thus, to implement an iteration of gradient descent, it is necessary to compute e(µ_t), as we already obtained an approximation of e(λ) in Step 1. Moreover, it is imperative to ensure that the algorithm also converges with access only to approximations of e(µ_t) and e(λ). This is because most algorithms to compute e(µ_t) only provide approximate values [35,37,46,51]. In addition, they usually have a poor scaling in the accuracy [51], making it necessary to show that the process converges with rough approximations of e(µ_t) and e(λ). Here we show that it is indeed possible to perform this step with only approximate computations of expectation values. This allows us to identify classes of states for which the postprocessing can be done efficiently. These results are discussed in more detail in Sec. A.2.
Convergence loop: Now that we have seen how to compute one iteration of gradient descent, the next natural question is how many iterations are required to reach the stopping criterion. As this is a strongly convex problem, the convergence speed depends on the eigenvalues of the Hessian of the function being optimized [14, Section 9.1.2]. For maximum entropy, this corresponds to bounding the eigenvalues of a generalized covariance matrix. In [5] the authors already showed such bounds for local Hamiltonians, implying the convergence of the algorithm in a number of steps polynomial in m and logarithmic in the tolerance ϵ for a fixed β. Here we improve their bound in several directions. First, we show that the algorithm converges after a number of iterations polynomial in m for arbitrary E_i, albeit with a polynomial dependence on the error, as discussed in Sec. A.2. We then specialize to certain classes of states to obtain various improvements. For high-temperature, commuting Hamiltonians we provide a complete picture and show that the condition number of the Hessian is constant in Prop. 3.1. This implies that gradient descent converges in a number of iterations that scales logarithmically in system size and error.
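The scheme above can be condensed into a minimal end-to-end sketch for a commuting (diagonal) toy Hamiltonian; β, the operator basis, step size and iteration count are illustrative choices, and the exact target expectation values stand in for the measured estimates of Step 1:

```python
import numpy as np

# End-to-end sketch: learn a commuting (diagonal) Gibbs state from its
# expectation values by gradient descent on the dual objective.
rng = np.random.default_rng(2)
beta, n = 0.2, 4

def z_diag(i):
    return np.array([1.0 if not (b >> i) & 1 else -1.0 for b in range(2**n)])

# basis: single-site Z's and nearest-neighbour ZZ's (periodic chain)
E = [z_diag(i) for i in range(n)] + [z_diag(i) * z_diag((i + 1) % n) for i in range(n)]
m = len(E)

def e_vals(mu):
    w = np.exp(-beta * sum(c * e for c, e in zip(mu, E)))
    p = w / w.sum()
    return np.array([p @ e for e in E])

lam = rng.uniform(-0.5, 0.5, m)
e_target = e_vals(lam)    # stands in for the measured estimates of Step 1

# gradient descent on f(µ) = log Z(µ) + β Σ_i µ_i e_i(λ), ∇f(µ) = β(e(λ) − e(µ))
mu, eta = np.zeros(m), 0.5
for _ in range(20000):
    mu = mu - eta * beta * (e_target - e_vals(mu))

# the reconstructed Gibbs state reproduces all target expectation values
assert np.max(np.abs(e_vals(mu) - e_target)) < 1e-5
```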

Stopping condition and recovery guarantees:
The stopping condition, a bound on the distance between the estimated expectation values of the current iterate and of the target, can be immediately converted to a relative entropy bound between the target state and the current iterate by the identity (9). This justifies its choice as a stopping criterion. Since we already discussed the sample complexity of the maximum entropy method, let us now discuss some of its computational aspects. There are two quantities that govern the complexity of the algorithm: how many iterations we need to perform until we converge and how expensive each iteration is.
As the maximum entropy problem is strongly convex, one can show that O(U L⁻¹ log(nϵ⁻¹)) iterations suffice to converge. Here again U, L are bounds on the Hessian as in Eq. (16). Nevertheless, we also show how to bypass requiring such bounds in Sec. A.1 and obtain that the maximum entropy algorithm converges after O(mϵ⁻²) iterations without any locality assumptions on the E_i or strong convexity guarantees. That is, the number of iterations is at most polynomial in m.
Let us now discuss the cost of implementing each iteration of the algorithm on a classical computer. This boils down to estimating e(µ_t) for the current iterate, which can be achieved in various ways. In the worst case, it is possible to simply compute the matrix exponential and the expectation values directly, which yields a complexity of O(d^{3n} m). However, for many of the classes considered here it is possible to do this computation in polynomial time. For instance, in [51] the authors show that for high-temperature Gibbs states it is possible to approximate the partition function efficiently. Thus, for high-temperature Gibbs states, not necessarily commuting ones, we can do the postprocessing efficiently. It is also worth mentioning tensor network techniques to estimate e(µ_t). As we only require computing the expectation values of local observables, recent works show that it is possible to find tensor network states of constant bond dimension that approximate all expectation values of a given locality well [2,25,41]. From such a representation it is then possible to compute e(µ_t) efficiently in the 1D case by contracting the underlying tensor network. Unfortunately, in higher dimensions the contraction still takes exponential time. Table 2 provides a summary of the complexity of the postprocessing for various classes.
It is also worth considering the complexity of the postprocessing with access to a quantum computer, especially for commuting Hamiltonians. As all high-temperature Gibbs states satisfy exponential decay of correlations, the results of [17] imply that high-temperature Gibbs states can be prepared with a circuit of depth logarithmic in system size. Thus, by using the same method we used to estimate e(λ) we can also estimate e(µ_t) from the copies provided by the quantum computer. The complexity of the postprocessing for shadows is linear in system size. Thus, with access to a quantum computer we can perform the postprocessing for each iteration in time Õ(mϵ⁻²). As in this case we showed that the number of iterations is Õ(1), we conclude that we can perform the postprocessing in time Õ(mϵ⁻²). That is, for this class of systems our results give an arguably complete picture regarding the postprocessing, as it can be performed in a time comparable to writing down the vector of parameters, up to polylogarithmic factors. Furthermore, given that the underlying Gibbs states are known to satisfy TC and Prop. 3.1 gives essentially optimal bounds on the covariance matrices, we believe that the present work essentially settles the question of how efficiently we can learn such Hamiltonians and the corresponding Gibbs states. We discuss this in more detail in Sec. F.
Finally, an example of our bounds is illustrated in Fig. 4, where we show that the number of samples required to estimate a local observable to relative precision is essentially system-size independent. We then computed the upper bound on the trace distance predicted by Eq. (9) and Pinsker's inequality and compared it to the actual discrepancy for a Lipschitz observable on the reconstructed and actual states. The Lipschitz observable was chosen as Σ_i n⁻¹ U Z_i Z_{i+2} U†, where we picked U as a depth-3 quantum circuit. We observe that the error incurred is essentially independent of system size, and we get good predictions even when the number of samples is smaller than the system size.

Supplemental Material
This is the supplemental material to "Learning many-body states from very few copies". We will start in Sec. A with a review of the basic properties of the maximum entropy principle to learn quantum states. This is followed by a discussion of Lipschitz constants, Wasserstein distances and transportation cost inequalities in Sec. B. After that, in Sec. C we discuss more explicitly the interplay between the maximum entropy method and transportation cost inequalities. We then briefly discuss scenarios in which the postprocessing required for the maximum entropy method can be performed efficiently in Sec. E. In Sec. F we discuss a class of examples where we show that all technical results required to obtain the strongest guarantees of our work hold, that is, Gibbs states of commuting Hamiltonians at high enough temperature and 1D commuting Hamiltonians. Finally, in Sec. G we discuss lower bounds on the sample complexity of both shadow protocols and many-body algorithms that focus on a recovery in trace distance. We start by setting some basic notation. Throughout this article, we denote by M_k the algebra of k × k matrices on C^k, whereas M_k^{sa} denotes the subspace of self-adjoint matrices. The set of quantum states on n qudits is denoted by D_{d^n}. The identity matrix is denoted by I. The adjoint of an operator A is denoted by A† and that of a channel Φ with respect to the trace inner product by Φ*. For a hypergraph G = (V, E) we will denote the distance between subsets of vertices induced by the hypergraph by dist.

A Maximum entropy principle for quantum Gibbs states
One of the main aspects of this work concerns the effectiveness of the maximum entropy method for the tomography of quantum Gibbs states in various settings and regimes. Thus, we start by recalling some basic properties of the maximum entropy method. Our starting assumption is that the target state is well-described by a quantum Gibbs state with respect to a known set of operators E = {E_i}_{i=1}^m and that we are given an upper bound β on the inverse temperature: σ = exp(−β Σ_i λ_i E_i)/Z(λ), where Z(λ) = tr[exp(−β Σ_i λ_i E_i)] denotes the partition function. In what follows, we will denote σ by σ(λ) and Σ_i λ_i E_i = H(λ), where the dependence of σ(λ) on β is implicitly assumed.
We are mostly interested in the regime where m ≪ d^n. Then the above condition can be interpreted as imposing that the matrix log(σ) is sparse with respect to a known basis E. A canonical example of such states are Gibbs states of local Hamiltonians on a lattice, for which m = O(n) and the observables E_i are taken as tensor products of Pauli matrices acting on neighboring sites. But we could also consider a basis consisting of quasi-local operators or some subspace of Pauli strings.
Next, we review some basic facts about quantum Gibbs states. One of their main properties is that they satisfy a well-known maximum entropy principle [42]. This allows us to simultaneously show that the expectation values of the observables E completely characterize the state σ(λ) and further provides us with a variational principle to learn a description from which we can infer an approximation of other expectation values. Let us start with the standard formulation of the maximum entropy principle:

Proposition A.1 (Maximum entropy principle). Let σ(λ) ∈ D_{d^n} be a quantum Gibbs state (19) with respect to the basis E at inverse temperature β and introduce e_i(λ) := tr[σ(λ)E_i] for i = 1, . . ., m. Then σ(λ) is the unique optimizer of the maximum entropy problem:
max S(ρ) subject to ρ ∈ D_{d^n}, tr[ρE_i] = e_i(λ) for i = 1, . . ., m. (20)
Moreover, the optimum of (20) coincides with that of the dual problem
min_{µ∈R^m} log(Z(µ)) + β Σ_i µ_i e_i(λ), (21)
which is attained at µ = λ.

Proof. The proof is quite standard, but we include it for completeness. Note that for any state ρ ≠ σ(λ) that is a feasible point of Eq. (20) we have that:
S(ρ) = −tr[ρ log(ρ)] = −tr[ρ log(σ(λ))] − D(ρ∥σ(λ)) = S(σ(λ)) − D(ρ∥σ(λ)) < S(σ(λ)),
where we have used the fact that tr[E_i(ρ − σ(λ))] = 0 for all feasible points and that the relative entropy between two different states is strictly positive. This shows that σ(λ) is the unique solution of (20). Eq. (21) is nothing but the dual program of Eq. (20).
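The principle can be checked numerically in a small commuting example: minimizing the dual objective log(Z(µ)) + β Σ_i µ_i e_i with an off-the-shelf optimizer yields a Gibbs state that matches the prescribed expectation values and whose entropy dominates that of the original state. A sketch using SciPy's BFGS (β and the diagonal basis are arbitrary illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

# β and the diagonal single-site Z basis are arbitrary illustrative choices.
rng = np.random.default_rng(7)
beta, n = 1.0, 3
Emat = np.array([[1.0 if not (b >> i) & 1 else -1.0 for b in range(2**n)]
                 for i in range(n)])

p = rng.dirichlet(np.ones(2**n))     # an arbitrary (diagonal) state
e_p = Emat @ p                       # its expectation values e_i

def dual(mu):
    # dual objective: log Z(µ) + β Σ_i µ_i e_i
    w = np.exp(-beta * Emat.T @ mu)
    return np.log(w.sum()) + beta * mu @ e_p

res = minimize(dual, np.zeros(n), method="BFGS")
w = np.exp(-beta * Emat.T @ res.x)
g = w / w.sum()                      # the max-entropy Gibbs state

entropy = lambda r: -float(np.sum(r * np.log(r)))
assert np.allclose(Emat @ g, e_p, atol=1e-4)   # constraints are matched
assert entropy(g) >= entropy(p) - 1e-3         # Gibbs state maximizes entropy
```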
Eq. (20) above gives a variational principle to find a quantum Gibbs state corresponding to certain expectation values. As is well known, one can use gradient descent to solve the problem in Eq. (21), as it is a strongly convex problem. Various recent works have discussed learning of Gibbs states [5,48,60], and it is certainly not a new idea to do so through maximum entropy methods. Nevertheless, we will discuss how to perform the postprocessing in more detail, as some recent results allow us to give this algorithm stronger performance guarantees. Finally, it should be said that although we draw inspiration from [5], our main goal will be to learn a set of parameters µ ∈ R^m such that the Gibbs states σ(µ) and σ(λ) are approximately the same on sufficiently regular observables while optimizing the sample complexity. This is in contrast to the goal of [5], which was to learn the vector of parameters λ. Learning λ corresponds to a stronger requirement, in the sense that if the vectors of parameters are close, then the underlying states are also close, as made precise in the following Prop. A.2.
One of the facts that we are going to often exploit is that it is possible to efficiently estimate the relative entropy between two Gibbs states σ(λ) and σ(µ) given the parameters λ, µ and the expectation values of the observables in E. This also yields an efficiently computable bound on the trace distance. Indeed, as observed in [5], we have:

Proposition A.2. Let σ(λ), σ(µ) ∈ D_{d^n} be Gibbs states as in (19). Then:
D(σ(λ)∥σ(µ)) + D(σ(µ)∥σ(λ)) = β Σ_i (µ_i − λ_i)(e_i(λ) − e_i(µ)). (22)

Proof. The equality in Eq. (22) follows from a simple manipulation. Indeed:
D(σ(λ)∥σ(µ)) = tr[σ(λ)(log(σ(λ)) − log(σ(µ)))] = β Σ_i (µ_i − λ_i) e_i(λ) + log(Z(µ)) − log(Z(λ)),
and symmetrically for D(σ(µ)∥σ(λ)); summing the two expressions cancels the partition functions. The bound on the trace distance then follows by applying Pinsker's inequality.
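The identity D(σ(λ)∥σ(µ)) + D(σ(µ)∥σ(λ)) = β Σ_i (µ_i − λ_i)(e_i(λ) − e_i(µ)) can be verified directly in a commuting (diagonal) toy model, where the states are probability vectors; a sketch with arbitrary parameter choices:

```python
import numpy as np

# Commuting (diagonal) toy model; β and the single-site Z basis are arbitrary.
rng = np.random.default_rng(3)
beta, n = 0.8, 3
E = [np.array([1.0 if not (b >> i) & 1 else -1.0 for b in range(2**n)])
     for i in range(n)]

def gibbs(mu):
    w = np.exp(-beta * sum(c * e for c, e in zip(mu, E)))
    return w / w.sum()

def e_vals(mu):
    p = gibbs(mu)
    return np.array([p @ e for e in E])

def rel_ent(p, q):
    # classical relative entropy D(p || q)
    return float(np.sum(p * (np.log(p) - np.log(q))))

lam, mu = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
lhs = rel_ent(gibbs(lam), gibbs(mu)) + rel_ent(gibbs(mu), gibbs(lam))
rhs = beta * np.dot(mu - lam, e_vals(lam) - e_vals(mu))
assert np.isclose(lhs, rhs, atol=1e-10)
```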
The statement of Proposition A.2 allows us to obtain quantitative estimates on how well a given Gibbs state approximates another one in terms of the expectation values of known observables. In particular, a simple application of Hölder's inequality shows that if two Gibbs states are such that ∥e(λ) − e(µ)∥_ℓ∞ ≤ ϵ, then the sum of their relative entropies is at most β∥µ − λ∥_ℓ1 ϵ ≤ 2βmϵ, where the outer bound arises from our assumption that ∥λ∥_ℓ∞, ∥µ∥_ℓ∞ ≤ 1. Moreover, it is straightforward to relate the difference of the target function in Eq. (21) evaluated at two vectors to the difference of relative entropies between the target state and their corresponding Gibbs states:

Lemma A.1. Let σ(λ) ∈ D_{d^n} be a Gibbs state with respect to a set of observables E at inverse temperature β > 0 and define, for any µ ∈ R^m,
f(µ) := log(Z(µ)) + β Σ_i µ_i e_i(λ). (24)
Then for any two vectors µ, ξ ∈ R^m with ∥µ∥_ℓ∞, ∥ξ∥_ℓ∞ ≤ 1:
f(µ) − f(ξ) = D(σ(λ)∥σ(µ)) − D(σ(λ)∥σ(ξ)).

Proof. The proof follows from straightforward manipulations.
Thus, we see that a decrease of the target function f when solving the max entropy problem is directly connected to the decrease of the relative entropy between the target state and the current iterate.We will later use this to show the convergence of gradient descent for solving the max entropy problem with arbitrary E. However, before that we discuss how the convergence of the state σ(µ) to σ(λ) is related to the convergence of the parameters µ to λ.

A.1 Strong convexity and convergence guarantees
The maximum entropy problem (20) being a convex problem, it should come as no surprise that properties of the Hessian of the function being optimized are vital to understanding its complexity and stability. For the maximum entropy problem, the Hessian at a point is given by a generalized covariance matrix corresponding to the underlying Gibbs state. As the results of [5] showcase, the eigenvalues of such covariance matrices govern both the stability of Eq. (21) with respect to µ and the convergence of gradient descent to solve it. To see why, we recall some basic notions from the optimization of convex functions and refer to [14] for an overview.
A twice-differentiable function f : C → R on a convex set C ⊆ R^m is called strongly convex with parameters U, L > 0 if we have for all x ∈ C that:
L I ⪯ ∇²f(x) ⪯ U I.
The optimization of strongly convex functions is well-understood. Indeed, we have:

Proposition A.3. Let C ⊆ R^m be a convex set and f : C → R be strongly convex with parameters L, U as in the definition above. Then, for all ϵ > 0, the optimal value α := min_{x∈C} f(x) is achieved up to error ϵ by the gradient descent algorithm initiated at x₀ ∈ C with step size U⁻¹ after at most S steps for
S = O((U/L) log((f(x₀) − α)/ϵ)).
Moreover, the gradient norm satisfies ∥∇f(x_k)∥²_ℓ2 ≤ δ after at most S_∇ steps with
S_∇ = O((U/L) log(U(f(x₀) − α)/δ)).
Finally, we have for all µ, λ ∈ C that:
L∥µ − λ∥²_ℓ2 ≤ ⟨∇f(µ) − ∇f(λ), µ − λ⟩ ≤ U∥µ − λ∥²_ℓ2.

Proof. These are all standard results that can be found e.g. in [14, Section 9].
To see the relevance of these results for the maximum entropy problem, we recall the following lemma:

Lemma A.2. Let σ(µ) ∈ D_{d^n} be a Gibbs state with respect to a set of operators E at inverse temperature β and define f : C → R as in Eq. (24). Then:
∇f(µ) = β(e(λ) − e(µ)),
and the Hessian is a generalized covariance matrix,
∇²f(µ)_{ij} = β² ( ∫ ν_β(t) tr[σ(µ) E_i e^{iH(µ)t} E_j e^{−iH(µ)t}] dt − e_i(µ) e_j(µ) ),
where ν_β(t) is a probability density function whose Fourier transform is given by the quantum belief propagation kernel.

Proof. The quantum belief propagation theorem [36] expresses the derivative of the Gibbs state along a perturbation of the Hamiltonian as an integral of the Heisenberg-evolved perturbation against the density ν_β. The claim then follows from a simple computation.
Thus, we see that in order to compute the gradient of the target function f for the maximum entropy problem, we simply need to compute the expectation values of the observables E on the current state and on the target state. Moreover, the Hessian is given by a generalized covariance matrix of the quantum Gibbs state. That this should indeed be interpreted as a covariance matrix is most easily seen by considering commuting Hamiltonians, for which the expression reduces to
∇²f(µ)_{ij} = β² ( tr[σ(µ) E_i E_j] − e_i(µ) e_j(µ) ).
For any Gibbs state, the entries of this matrix are uniformly bounded:

Proposition A.4. For all µ ∈ B_ℓ∞(0, 1) ⊂ R^m, inverse temperature β > 0 and set of operators E of cardinality m, we have ∥∇²f(µ)∥ = O(β²m).

Proof. Note that |tr[E_i σ(µ) e^{iH(µ)t} E_j e^{−iH(µ)t}]| ≤ 1 by Hölder's inequality, the submultiplicativity and unitary invariance of the operator norm and the fact that ∥E_i∥ ≤ 1.

The proof above also showcases how exponential decay of correlations can be used to sharpen estimates on the maximal eigenvalue of ∇²f, since in that case ∇²f(µ)_{ij} will have exponentially decaying entries. We discuss this in more detail when we focus on many-body states, for which we also consider the more challenging question of lower bounds.
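In the commuting (diagonal) case the identification of the Hessian with the covariance matrix β²(tr[σ(µ)E_iE_j] − e_i(µ)e_j(µ)) can be checked against finite differences of the log-partition function; the following sketch uses arbitrary parameter choices:

```python
import numpy as np

# Diagonal (commuting) toy model with single-site Z observables.
rng = np.random.default_rng(4)
beta, n = 0.6, 3
E = [np.array([1.0 if not (b >> i) & 1 else -1.0 for b in range(2**n)])
     for i in range(n)]

def logZ(mu):
    return float(np.log(np.exp(-beta * sum(c * e for c, e in zip(mu, E))).sum()))

def gibbs(mu):
    w = np.exp(-beta * sum(c * e for c, e in zip(mu, E)))
    return w / w.sum()

mu = rng.uniform(-1, 1, n)
p = gibbs(mu)
cov = np.array([[p @ (Ei * Ej) - (p @ Ei) * (p @ Ej) for Ej in E] for Ei in E])

# central finite differences of log Z for all second partial derivatives
h = 1e-4
hess = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        di, dj = h * np.eye(n)[i], h * np.eye(n)[j]
        hess[i, j] = (logZ(mu + di + dj) - logZ(mu + di - dj)
                      - logZ(mu - di + dj) + logZ(mu - di - dj)) / (4 * h * h)

assert np.allclose(hess, beta**2 * cov, atol=1e-4)
```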

A.2 Convergence with approximate gradient computation and expectation values
Proposition A.3 already establishes the convergence of gradient descent whenever we can compute the gradient exactly and have a bound on L. Moreover, we see from Lemma A.2 that, in order to compute the gradient of the function f above, it suffices to estimate local expectation values. Finally, it is a standard result that gradient descent is strictly decreasing for strictly convex problems [14].
However, in many settings it is only possible, or desirable, to compute the expectation values of quantum Gibbs states approximately. Moreover, the expectation values of the target state are only known up to statistical fluctuations. It is then not too difficult to see that gradient descent still offers convergence guarantees if we only approximately compute the gradient. We state the exact convergence guarantees and precision requirements for completeness.
Theorem A.1 (Computational complexity and convergence guarantees). Let σ(λ) ∈ D_{d^n} be a quantum Gibbs state at inverse temperature β with respect to a set of operators E and let C_E be the computational cost of computing e′(µ) satisfying ∥e′(µ) − e(µ)∥_ℓ2 ≤ δ_µ for µ ∈ B_ℓ∞(0, 1) and δ_µ > 0. Moreover, assume that we are given an estimate e′(λ) of e(λ) satisfying ∥e′(λ) − e(λ)∥_ℓ∞ ≤ ϵ and that the partition function is strongly convex with parameters U, L. Then gradient descent starting at µ = 0 with step size 1/(cU) and input data e′(λ) converges to a state σ(µ*) satisfying D(σ(λ)∥σ(µ*)) + D(σ(µ*)∥σ(λ)) = O(β√m(δ_µ + ϵ√m)) in O((C_E + m)(U/L) log(nϵ⁻¹)) time.
We will prove this theorem at the end of this section, as we first need some auxiliary statements. The reader familiar with basic concepts from convex optimization should feel comfortable skipping them.

Proposition A.5 (Convergence of gradient descent with constant relative precision). Let σ(λ) ∈ D_{d^n} be a quantum Gibbs state at inverse temperature β with respect to a set of operators E, and for a Gibbs state σ(µ) let z(µ) ∈ R^m be a vector such that
∥z(µ) − β(e(λ) − e(µ))∥_ℓ2 ≤ c⁻¹ β∥e(λ) − e(µ)∥_ℓ2 (30)
for some c > 10. Then we have that:
f(µ − z(µ)/(cU)) − f(µ) ≤ −(9/(10cU)) ∥∇f(µ)∥²_ℓ2, (31)
where U is a uniform bound on the operator norm of the Hessian of the function f defined in Eq. (24).
Proof. A second-order Taylor expansion of f together with the Hessian bound ∇²f ⪯ U I yields
f(µ − z(µ)/(cU)) − f(µ) ≤ −⟨∇f(µ), z(µ)⟩/(cU) + ∥z(µ)∥²_ℓ2/(2c²U) ≤ −(1/(cU))∥∇f(µ)∥_ℓ2 (∥∇f(µ)∥_ℓ2 − ∥∇f(µ) − z(µ)∥_ℓ2) + ∥z(µ)∥²_ℓ2/(2c²U),
where in the last step we used the Cauchy-Schwarz inequality. By our assumption in Eq. (30) for z ≡ z(µ) we have ∥∇f(µ) − z(µ)∥_ℓ2 ≤ c⁻¹∥∇f(µ)∥_ℓ2, and it can be readily checked that the resulting error terms amount to at most a fraction 1/10 of the leading term for c ≥ 10. To conclude the proof, note that by Lemma A.1 the decrease of f equals the decrease of the relative entropy D(σ(λ)∥σ(µ)) between the target state and the iterate.

Thus, we see that we make constant progress in the gradient descent algorithm even if we only compute the derivative up to constant relative precision. We now show how to pick our stopping criterion, based on approximate computations of the gradient, so as to ensure convergence in polynomial time.
Proposition A.6. Let σ(λ) ∈ D_{d^n} be a quantum Gibbs state at inverse temperature β with respect to a set of operators E. Suppose that at each time step t of gradient descent we compute an estimate e′(µ_t) of e(µ_t) up to error δ in ℓ2 norm and set the stopping criterion to be ∥e′(µ_t) − e′(λ)∥_ℓ2 ≤ cδ for some constant c > 10. Then gradient descent starting at µ = 0 with update rule µ_{t+1} := µ_t − β(e(λ) − e′(µ_t))/(cU) will converge to a state σ(µ*) satisfying ∥e(λ) − e(µ*)∥_ℓ2 = O(cδ).

Proof. First, we show that the relative precision bound required for Proposition A.5 holds under these assumptions. By our choice of the stopping criterion, while we did not stop we have at each time step that ∥e′(µ_t) − e′(λ)∥_ℓ2 > cδ, so the approximation error δ is at most a constant relative fraction of the gradient norm. Now, suppose that we did not stop before T iterations. It follows from a telescopic sum argument and Proposition A.5 that T times the guaranteed per-step decrease is at most f(0) − f(λ) = O(n). Since we proved in Proposition A.4 that U = O(β²m), it follows that the number of iterations is O(nm).

Thus, we see that having a lower bound on the Hessian is not necessary to ensure convergence, but it can speed it up exponentially:

Proposition A.7 (Exponential convergence of gradient descent with approximate gradients). In the same setting as Proposition A.5 we have:
f(µ − z(µ)/(cU)) − f(λ) ≤ (1 − 18L/(10cU)) (f(µ) − f(λ)). (32)
In particular, gradient descent with approximate gradient computations starting at µ₀ = 0 converges to precision ϵ after O((cU/L) log(nϵ⁻¹)) iterations.

Proof. For any strongly convex function f and points µ, ξ ∈ C we have that:
f(ξ) ≥ f(µ) + ⟨∇f(µ), ξ − µ⟩ + (L/2)∥ξ − µ∥²_ℓ2.
As explained in [14, Chapter 9], the R.H.S. of the equation above is a convex quadratic function of ξ for µ fixed.
One can then easily show that its minimum is achieved at ξ = µ − (1/L)∇f(µ). From this we obtain:
f(λ) ≥ f(µ) − (1/(2L))∥∇f(µ)∥²_ℓ2,
where the last identity follows from Eq. (27). By subtracting f(λ) from both sides of the inequality (31) in Proposition A.5 and rearranging terms we have that:
f(µ − z(µ)/(cU)) − f(λ) ≤ f(µ) − f(λ) − (9/(10cU))∥∇f(µ)∥²_ℓ2 ≤ (1 − 18L/(10cU))(f(µ) − f(λ)).
This yields the claim in Eq. (32). To obtain the second claim, note that applying Eq. (32) iteratively yields that after k iterations we have, for µ_k the k-th iterate,
f(µ_k) − f(λ) ≤ (1 − 18L/(10cU))^k (f(µ₀) − f(λ)).
By our choice of initial point and Lemma A.1 we have that f(0) − f(λ) = O(n), which yields the claim by solving for k and noting that −log(1 − 18L/(10cU)) = Ω(L/U).

Remark A.1 (Comparison to mirror descent). It is also worth noting that the convergence guarantees of Proposition A.6 and the update rules of gradient descent are very similar to those of mirror descent with the von Neumann entropy as potential, another algorithm used for learning quantum states [1,16,30,68]. In this context, mirror descent would use a similar update rule. However, instead of computing the whole gradient, i.e. all expectation values of the basis, for one iteration, mirror descent just requires us to find one i such that |e_i(λ) − e_i(µ)| ≥ δ and updates the Hamiltonian in the direction i. This implies that the algorithm can be run online while we still estimate some other e_i, but we will not analyze this variation in more detail here.
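The contraction mechanism can be illustrated on a generic strongly convex quadratic: even with gradients corrupted by a relative error of 1/c and step size 1/(cU), the optimality gap decays geometrically all the way, since the noise is proportional to the gradient norm. A minimal sketch (all parameter choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, c = 6, 10.0
A = rng.normal(size=(m, m))
H = A @ A.T / m + np.eye(m)          # Hessian of a toy strongly convex quadratic
L, U = np.linalg.eigvalsh(H)[[0, -1]]
x_star = rng.normal(size=m)

def f(x):
    return 0.5 * (x - x_star) @ H @ (x - x_star)

x = np.zeros(m)
gap0 = f(x)
for _ in range(400):
    g = H @ (x - x_star)             # exact gradient
    noise = rng.normal(size=m)
    noise *= np.linalg.norm(g) / (c * np.linalg.norm(noise))  # relative error 1/c
    x = x - (g + noise) / (c * U)    # noisy gradient step, step size 1/(cU)

# geometric decay of the optimality gap despite inexact gradients
assert f(x) < 1e-4 * gap0
```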
Finally, we assumed so far that we knew the expectation values of the target state, e(λ), exactly. However, it follows straightforwardly from Proposition A.6 that knowing each expectation value up to an error ϵ is sufficient to ensure that the additional error due to statistical fluctuations is at most ϵm. More precisely, if we have that ∥e(λ) − e′(λ)∥_ℓ∞ ≤ ϵ for some ϵ > 0, then any Gibbs state σ(µ*) satisfying ∥e(µ*) − e′(λ)∥_ℓ2 ≤ δ satisfies:
D(σ(λ)∥σ(µ*)) + D(σ(µ*)∥σ(λ)) ≤ 2β√m (δ + ϵ√m)
by Proposition A.2 and a Cauchy-Schwarz inequality. With these statements at hand we are finally ready to prove Thm. A.1.
Proof of Thm. A.1. Without making any assumptions on L, we can bound the symmetrized relative entropy of σ(µ*) and σ(λ) by Hölder's inequality and our assumptions on e′(λ), as in the discussion following Proposition A.2. Let us now discuss how strong convexity improves these estimates. First note that by strong convexity and Cauchy-Schwarz we have ∥µ* − λ∥_ℓ2 ≤ L⁻¹∥∇f(µ*)∥_ℓ2 = L⁻¹β∥e(λ) − e(µ*)∥_ℓ2, which together with Proposition A.2 yields the claim.
In short, we see that we can perform the recovery by simply computing the gradient approximately. In particular, as already hinted at in [5], this implies that recent methods developed to approximately compute the partition function of high-temperature quantum Gibbs states can be used to perform the postprocessing in polynomial time [35,46,49,51]. This and other methods to compute the gradient are discussed in more detail in Sec. E. Furthermore, it should be noted that usually L = Ω(β²) in the high-temperature regime, making the bound independent of β for such states. We refer to Sec. D for a summary of the cases for which bounds on L are known.

B Lipschitz constants and transportation cost inequalities
In this section, we identify conditions under which it is possible to estimate all expectation values of k-local observables of a state up to an error ϵ by measuring O(poly(k, log(n), ϵ⁻¹)) copies of it, where n is the system size, which constitutes an exponential improvement in some regimes. To obtain this result, we combine the maximum entropy method introduced in Section A with techniques from quantum optimal transport. In order to formalize and prove the claimed result, we resort to transportation cost inequalities and the notion of Lipschitz constants of observables, which we now introduce.

B.1 Lipschitz constants and Wasserstein metrics
Transportation cost inequalities, introduced by Talagrand in the seminal paper [61], constitute one of the strongest tools available to show concentration of measure inequalities. In the quantum setting, their study was initiated in [19,20,56,59]. Here we are going to show how they can also be used in the context of quantum tomography. On a high level, a transportation cost inequality for a state σ quantifies how well the relative entropy with respect to another state ρ serves as a proxy for the extent to which the expectation values of sufficiently regular observables differ on the two states. As maximum entropy methods allow for a straightforward control of the convergence of the learning in relative entropy (cf. Section A), the two can be combined to derive strong recovery guarantees. But first we need to define what we mean by a regular observable.
We start with a short discussion of Lipschitz constants and the Wasserstein-1 distance. To obtain an intuitive grasp of these concepts, it helps to first recall the variational formulation of the trace distance of two quantum states σ, ρ:
(1/2)∥ρ − σ∥₁ = sup_{0 ⪯ P ⪯ I} tr[P(ρ − σ)].
Seeing probability distributions as diagonal quantum states, we recover the variational formulation of the total variation distance by noting that we may restrict to diagonal operators P. Thus, the total variation distance quantifies by how much the expectation values of arbitrary bounded functions can differ under the two distributions. However, in many situations we are not interested in expectation values of arbitrary bounded observables, but rather in observables that are sufficiently regular. E.g., most observables of physical interest are (quasi-)local. Thus, it is natural to look for distance measures between quantum states that capture the notion that two states do not differ by much when restricting to expectation values of sufficiently regular observables. These concerns are particularly relevant in the context of tomography protocols, as they should be designed to efficiently obtain a state that reflects the expectation values of extensive observables of the system. As we will see, one of the ways of ensuring that the sample complexity of the tomography algorithm reflects the regularity of the observables we wish to recover is through demanding a good recovery in the Wasserstein distance of order 1 [56,59].
In the classical setting [58, Chapter 3], one way to define a Wasserstein-1 distance between two probability distributions is by replacing the optimization over all bounded diagonal observables by one over those that are sufficiently regular: given a metric d on a sample space Ω, we define the Lipschitz constant of a function f : Ω → R to be
∥f∥_Lip := sup_{x≠y} |f(x) − f(y)| / d(x, y).
Denoting the Wasserstein-1 distance by W₁, it is given for two probability measures p, q on Ω by
W₁(p, q) := sup_{∥f∥_Lip ≤ 1} |E_p[f] − E_q[f]|.
That is, this metric quantifies by how much the expectation values of sufficiently regular functions can vary under p and q, in clear analogy to the variational formulation of the trace distance. We refer to [58,65] for other interpretations and formulations of this metric.
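In the classical setting this duality is easy to test numerically: on the integer line with d(x, y) = |x − y|, every 1-Lipschitz function separates p and q by at most W₁(p, q), and a function whose increments follow the sign of the CDF difference attains the supremum. A sketch using SciPy (the distributions are random test data):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(6)
xs = np.arange(10.0)                 # sample space Ω = {0,...,9}, d(x, y) = |x − y|
p = rng.dirichlet(np.ones(10))
q = rng.dirichlet(np.ones(10))
w1 = wasserstein_distance(xs, xs, p, q)

# Kantorovich-Rubinstein duality: any 1-Lipschitz f has |E_p f − E_q f| <= W1(p, q)
for _ in range(500):
    steps = rng.uniform(-1, 1, 9)    # increments bounded by 1 => 1-Lipschitz
    f = np.concatenate([[0.0], np.cumsum(steps)])
    assert abs(f @ (p - q)) <= w1 + 1e-9

# the supremum is attained by f whose increments follow sign(F_q − F_p)
s_opt = np.sign(np.cumsum(q - p)[:-1])
f_opt = np.concatenate([[0.0], np.cumsum(s_opt)])
assert np.isclose(abs(f_opt @ (p - q)), w1, atol=1e-9)
```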

B.1.1 Quantum Hamming Wasserstein distance
It is not immediately clear how to generalize these concepts to noncommutative spaces. There are by now several definitions of transport metrics for quantum states [26,45,56,59]. As already noted in the main text, De Palma et al. defined the Lipschitz constant of an observable O ∈ M_{d^n} as [56]:
∥O∥_{Lip,□} := 2√n max_{i∈[n]} min_{O_{i^c}} ∥O − I_i ⊗ O_{i^c}∥_∞,
where the minimum runs over observables O_{i^c} acting on all sites but i. That is, the Lipschitz constant quantifies the amount by which the expectation value of an observable can change when evaluated on two states that only differ on one site. This is in analogy with the Lipschitz constants induced by the Hamming distance on the hypercube, so we denote it with □. Note that in our definition we added the √n factor, which will turn out to be convenient later. Armed with this definition, we can immediately obtain an analogous definition of the Wasserstein distance in Eq. (33) for two states ρ, σ:
W_{1,□}(ρ, σ) := sup_{∥O∥_{Lip,□} ≤ 1} tr[O(ρ − σ)].
The authors of [56] also put forth an equivalent, variational expression for this norm. It follows from an application of Hölder's inequality combined with the variational formulation in Eq. (34) that ∥O∥_{Lip,□} ≤ 2√n∥O∥_∞. However, it can be the case that ∥O∥_{Lip,□} ≪ √n∥O∥_∞, and it is exactly such observables that should be thought of as regular. This is because this signals that changing the state locally leads to significantly smaller changes to the expectation value of the observable than global ones. Two examples of this behavior are given by the observables Σ_i Z_i and Σ_i Z_{i^c}, where Z_{i^c} acts as the identity on site i and as Z on every other site, i.e. Z_{i^c} = ⊗_{j≠i} Z_j.
On the other hand, by considering the states ρ = |0⟩⟨0|^{⊗n} and σ = |1⟩⟨1| ⊗ |0⟩⟨0|^{⊗(n−1)}, which only differ on one site, we see that the expectation values of Σ_i Z_{i^c} on the two states differ by 2(n − 1), so its Lipschitz constant is comparable to that of a generic global observable. More generally, if O = Σ_i O_i is a sum of local terms and we denote by supp(O_i) the qudits on which O_i acts nontrivially, then ∥O∥_{Lip,□} is controlled by the maximal number of supports supp(O_i) intersecting any single qubit. From these examples we see that for local observables, the ratio of the operator norm and the Lipschitz constant reflects the locality of the observable.

B.1.2 Quantum differential Wasserstein distance
The Wasserstein distance W_{1,□} generalizes the classical Ornstein distance, that is, the Wasserstein distance corresponding to the Hamming distance on bit strings. Another definition of a Lipschitz constant and its attached Wasserstein distance was put forth in [59], where the construction is based on a differential structure that bears more resemblance to that of the Lipschitz constant of a differentiable function on a continuous sample space, e.g. a smooth manifold [58]. Let us now recall the notion of a noncommutative differential structure (see [21]): a collection of operators L_k and frequencies ω_k satisfying certain compatibility conditions with the state σ. Such a differential structure can be used to provide the set of matrices with a Lipschitz constant that is tailored to σ, see e.g. [21,59] for more on this. In order to distinguish this constant from the one defined in (34), we will refer to it as the differential Lipschitz constant and denote it by ∥X∥_{Lip,∇}. It is given in terms of the commutators [L_k, X]. The quantity [L_k, X] should be interpreted as a partial derivative and is sometimes denoted by ∂_k X for that reason. Then, the gradient of a matrix A, denoted by ∇A with a slight abuse of notation, refers to the vector of operator-valued coordinates (∇A)_i = ∂_i A. For ease of notation, we will denote the differential structure by the couple (∇, σ). The notion of a differential structure is also intimately connected to that of the generator of a quantum dynamical semigroup converging to σ [21], and properties of that semigroup immediately translate to properties of the metric. This is because the differential structure can be used to define an operator that behaves in an analogous way to the Laplacian on a smooth manifold, which in turn induces the heat semigroup. We refer to [21,59] for more details.
To illustrate the differential-structure version of the Lipschitz constant, it is instructive to consider the maximally mixed state. In this case, one possible choice consists of picking the L_k to be all 1-local Pauli strings and ω_k = 0. Then the Lipschitz constant turns out to be given by: where the P_k are all 1-local Pauli matrices. Thus, we see that this measures how much the operator changes if we act on it locally with a Pauli unitary. If we think of an operator as a function and of conjugating with a Pauli as moving in a direction, the formula above indeed looks like a derivative. In fact, it is possible to make this connection rigorous; see [21].
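The locality intuition behind this Pauli-conjugation picture can be checked numerically. The sketch below is our own illustration (not code from the paper): it computes ∥P X P† − X∥_∞ for every 1-local Pauli P on a two-qubit operator, showing that conjugation on a qubit outside the support of X leaves it unchanged, while conjugation inside the support changes it by at most 2∥X∥_∞.

```python
import numpy as np

# single-qubit Pauli matrices
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def one_local_paulis(n):
    """Yield (site, P) for all 1-local Pauli strings P on n qubits."""
    for site in range(n):
        for P in (X, Y, Z):
            ops = [I] * n
            ops[site] = P
            full = ops[0]
            for op in ops[1:]:
                full = np.kron(full, op)
            yield site, full

A = np.kron(Z, I)  # observable supported on qubit 0 only

for site, P in one_local_paulis(2):
    # spectral norm of the change under local Pauli conjugation
    delta = np.linalg.norm(P @ A @ P.conj().T - A, ord=2)
    print(site, round(float(delta), 6))
```

Conjugation on qubit 1 (outside supp(A)) always returns 0, while X- and Y-conjugation on qubit 0 flip the sign of Z and give the maximal value 2.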
As before, the definition in Eq. (38) yields a metric on states by duality: It immediately follows from the definitions that for any observable X: Although this geometric interpretation opens up other interesting mathematical connections, the differential Wasserstein distance has the downside of being state dependent. It however induces a stronger topology than the quantum Hamming Wasserstein distance in some situations (see [26, Proposition 5]). In particular, the results of [26, Proposition 5] imply that for commuting Gibbs states a TC inequality for W_{1,∇} implies the corresponding statement for W_{1,□}.

B.2 Local evolution of Lipschitz observables
As already discussed in Subsections B.1.1 and B.1.2, when we defined ∥.∥_{Lip,∇} and ∥.∥_{Lip,□}, Lipschitz constants can be easily controlled by making assumptions on the locality of the operators. Indeed, if we apply a local circuit to a local observable, it is straightforward to control the growth of the Lipschitz constant in terms of the growth of the support of the observable under the circuit. More precisely, in [56, Proposition 13] the authors show such a statement for discrete-time evolutions with exact lightcones: if we denote by |L| the size of the largest lightcone of one qubit under a channel Φ, then the Lipschitz constant of any observable grows at most with |L| under Φ. Here we will extend such arguments to evolutions under local Hamiltonians or Lindbladians. By resorting to Lieb-Robinson bounds, we show that the Lipschitz constants ∥.∥_{Lip,∇} and ∥.∥_{Lip,□} of initially local observables evolving under a quasi-local dynamics increase at most with the effective lightcone of the evolution. Thus, short-time dynamics and shallow-depth quantum channels can only mildly increase the Lipschitz constant. This further justifies the intuition that observables with small Lipschitz constant reflect physical observables.
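Schematically (our paraphrase of the type of bound shown in [56, Proposition 13]; the exact constants are in the reference), for a depth-L brickwork circuit of nearest-neighbour gates each qubit's lightcone grows by at most one site per layer on each side, so |L| ≤ 2L + 1 and

```latex
\|\Phi(O)\|_{\mathrm{Lip}} \;\lesssim\; |L|\,\|O\|_{\mathrm{Lip}} \;\le\; (2L+1)\,\|O\|_{\mathrm{Lip}}.
```

In particular, constant-depth circuits increase the Lipschitz constant only by a constant factor.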
Lieb-Robinson (LR) bounds in essence assert that the time evolution of local observables under (quasi-)local short-time dynamics has an effective lightcone. There are various formulations of Lieb-Robinson bounds. Reviewing them in detail is beyond the scope of this work and we refer to [9, 38, 47, 57] and references therein for more details. For studying the behaviour of ∥.∥_{Lip,∇} under local evolutions, the most natural formulation is the commutator version: the generator L of a quasi-local dynamics t → Φ_t = e^{tL} on n qudits arranged on a graph G = (V, E), with graph distance dist, is said to satisfy a LR bound with LR velocity v if for any observable O_A supported on a region A and any other observable B supported on a region B, we have: for g : N → R_+ a function such that lim_{x→∞} g(x) = 0. We then have: Proof. The proof follows almost by definition. We have: By a triangle inequality: For any term in the sum we have ∥[Φ_t^*(O_i), L_j]∥_∞ ≤ 2 by the submultiplicativity of the operator norm, a triangle inequality and ∥L_j∥_∞ ≤ 1. In case O_i and L_j do not overlap, the stronger bound in Eq. (LR1) holds.
To illustrate this bound more concretely, let us take g(dist(i, j)) = e^{−µ|i−j|} for some constant µ; i.e., we have a 1D differential structure and a local time evolution on a 1D lattice. Then for any j: Thus, we see that for constant times the Lipschitz constant is still of order k√n. Let us now derive a similar, yet somewhat stronger, version of Prop. B.1 for ∥.∥_{Lip,□}. In some situations, bounds like (LR1) can be further exploited in order to prove the quasi-locality of Markovian quantum dynamics [9]: for any regions C ⊂ D ⊂ V, there exists a local Markovian evolution t → Φ_t^{(D)} that acts non-trivially only on region D, and such that for any observable O_C supported on region C, for some other function h : N → R_+ such that lim_{x→∞} h(x) = 0 and constant c′ > 0. In other words, at small times, the channels Φ_t^* can be well-approximated by local Markovian dynamics when acting on local observables. Let us now estimate the growth of the Lipschitz constant of [56] given a Lieb-Robinson bound: Proposition B.2. Assume that Φ_t satisfies the bound (LR2). Then, for any two quantum states ρ, σ ∈ D_{d^n} and any ordering {1, . . ., n} of the graph: Moreover, for any observable: Proof. From [56], the Wasserstein distance W_{1,□} arises from a norm ∥.∥_{W1}, i.e. W_{1,□}(ρ, σ) = ∥ρ − σ∥_{W1}. Moreover, the norm ∥.∥_{W1} is uniquely determined by its unit ball B_n, which in turn is the convex hull of the set of differences between couples of neighboring quantum states: Now, by convexity, the contraction coefficient for this norm is equal to where M_{d^n}^{sa,0} denotes the set of self-adjoint, traceless observables. Let then X ∈ N_n. By the expression (36), and choosing without loss of generality an ordering of the vertices such that tr_1(X) = 0, we have where µ denotes the Haar measure on one qudit, and where (1) follows from the fact that tr_1(X) = 0, with Φ defined as in Eq. (LR2) with k < i − 1.
Next, by the variational formulation of the trace distance and Eq. (LR2), we have for i ≥ 3 that where (2) follows from [56, Proposition 6]. By picking k = i − 2 and inserting this estimate into Eq. (43) for i ≥ 3, together with the trivial estimate ∥(·)(X)∥_1 ≤ 2∥X∥_1 for i = 1, 2, we obtain Eq. (41). Eq. (42) follows by duality.

B.3 Transportation cost inequalities
Although interesting on their own, the relevance of the Lipschitz constants introduced above becomes clearer in our context when we also have a transportation cost inequality [32, 58]. A quantum state σ satisfies a transportation cost inequality with constant α > 0 if for any other state ρ it holds that: In what follows, we simply write ∥.∥_{Lip} and W_1 to denote either of the Lipschitz constants, and their corresponding Wasserstein metrics, defined above. This inequality should be thought of as a stronger version of Pinsker's inequality that is tailored to the state σ and the underlying Wasserstein distance.
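Up to the convention chosen for the constant α, which varies in the literature (e.g. [26, 58]), the inequality referred to above is of the form

```latex
W_1(\rho,\sigma)\;\le\;\sqrt{\frac{D(\rho\|\sigma)}{\alpha}},
\qquad
D(\rho\|\sigma):=\operatorname{tr}\!\bigl[\rho\,(\log\rho-\log\sigma)\bigr].
```

The comparison with Pinsker's inequality ∥ρ − σ∥_1 ≤ √(2 D(ρ∥σ)) is that W_1 can exceed the trace distance by a factor as large as n, so a TC inequality with good α is a genuinely stronger statement for extensive observables.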
One well-established technique to derive a transportation cost inequality for W_{1,∇} is to exploit the fact that it is implied by a functional inequality called the modified logarithmic Sobolev inequality. It is beyond the scope of this paper to explain this connection and we refer to e.g. [59] and references therein for a discussion of these topics. For our purposes it is important to note that in [18] the authors, together with Capel, show modified logarithmic Sobolev inequalities for several classes of Gibbs states. More recently, one of the authors and De Palma derived such transportation cost inequalities for W_{1,□} in [26]. In Theorem B.1 below we summarize the regimes for which transportation cost inequalities are known to hold: Theorem B.1 (transportation cost for commuting Hamiltonians [8, 18, 26]). Let E_1, . . ., E_m ⊂ M_{d^n} be a set of k-local linearly independent commuting operators with ∥E_i∥_∞ ≤ 1. Then σ(λ) satisfies a transportation cost inequality with constant α > 0 for all λ ∈ B_{ℓ∞}(0, 1) in the following cases: Remark B.1. In [26, Proposition 5], the authors show that W_{1,□} ≤ c(k) W_{1,∇} holds for some constant c(k) depending on the locality of the differential structure. This implies that a transportation cost inequality for W_{1,∇} implies one for W_{1,□} up to c(k). Thus, although the authors of [8, 18] only obtain the result for W_{1,∇}, we can translate it to W_{1,□}. We conclude that for commuting Hamiltonians TC inequalities are available for essentially all classes of local Hamiltonians for which they are expected to hold.

C Combining the maximum entropy method with transportation cost inequalities
With these tools at hand, we are now ready to show that, by resorting to transportation cost inequalities, it is possible to obtain exponential improvements in the sample complexity required to learn a Gibbs state. First, let us briefly review shadow tomography and Pauli regrouping techniques [22, 24, 40, 43]. Although these methods all work under slightly different assumptions and performance guarantees, they have in common that they allow us to learn the expectation values of M k-local observables O_1, . . ., O_M ∈ M_{2^n} with ∥O_i∥_∞ ≤ 1 up to an error ϵ, with failure probability at most δ, by measuring O(e^{O(k)} log(M δ^{−1}) ϵ^{−2}) copies of the state.
For instance, for the shadow methods of [40], we obtain a O(4^k log(M δ^{−1}) ϵ^{−2}) scaling by measuring in random 1-qubit bases. The estimate is then obtained by an appropriate median-of-means procedure over the expectation values computed from each output string. Obtaining the expectation value of E_i through this method then entails evaluating the expectation values of the observables on O(4^k log(M δ^{−1}) ϵ^{−2}) product states. For k-local observables E_i, evaluating the expectation value of E_i on a product state takes time O(e^{ck} log(M δ^{−1}) ϵ^{−2}) for some c > 0. Thus, we see that for k = O(log(n)) the postprocessing can also be done efficiently.
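The median-of-means aggregation underlying the estimators of [40] can be sketched in a few lines. This is our own generic illustration of the statistical primitive, not the full shadow protocol, and the sample sizes are arbitrary: splitting heavy-tailed samples into batches and taking the median of the batch means suppresses the influence of outliers compared to a plain empirical mean.

```python
import numpy as np

def median_of_means(samples, n_batches):
    """Split samples into n_batches batches, average each batch,
    and return the median of the batch means."""
    batches = np.array_split(np.asarray(samples), n_batches)
    return float(np.median([b.mean() for b in batches]))

rng = np.random.default_rng(0)
# heavy-tailed data with true mean 0: Gaussian contaminated by Cauchy outliers
data = np.concatenate([rng.normal(0.0, 1.0, 9_000),
                       rng.standard_cauchy(1_000)])
rng.shuffle(data)

print(median_of_means(data, n_batches=30))  # robust estimate of the true mean 0
```

The plain mean of the same data is dominated by a few huge Cauchy draws, while the median-of-means estimate remains stable.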
The application of such results to maximum entropy methods is then clear: given E_1, . . ., E_m assumed to be at most k-local, with probability at least 1 − δ we can obtain an estimate e′(λ) of e(λ) satisfying: using O(4^k log(m δ^{−1}) ϵ^{−2}) copies of σ(λ). We then finally obtain our main theorem, Theorem 1.1, restated here for the sake of clarity: Theorem C.1 (Fast learning of Gibbs states). Let σ(λ) ∈ D_{2^n} be an n-qubit Gibbs state at inverse temperature β with respect to a set of k-local operators E = {E_i}_{i=1}^m that satisfies a transportation cost inequality with constant α. Moreover, assume that µ → log(Z(µ)) is L, U strongly convex in B_{ℓ∞}(0, 1). Then samples of σ(λ) are sufficient to obtain a state σ(µ) that satisfies: for all Lipschitz observables O, with probability at least 1 − δ.
Proof. Using the aforementioned methods of [40] we can obtain an estimate e′(λ) satisfying the guarantee of Eq. (29) with probability at least 1 − δ. We then resort to the results of Thm. A.1 to obtain guarantees on the output of the maximum entropy algorithm. We solve the maximum entropy problem with e′(λ) and set the stopping criterion for the output µ* as for c > 10. It then follows from Thm. A.1 that: This can be combined with transportation cost inequalities. Indeed, we have: Inserting our bound on the relative entropy in Eq. (47) we obtain: We conclude by suitably rescaling ϵ.
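As a toy illustration of the pipeline just described (entirely our own sketch, with made-up sizes and step size), one can run the maximum entropy method for the two-qubit commuting family {Z_1, Z_2, Z_1 Z_2}: gradient descent on the convex dual log Z(µ) − ⟨µ, e′⟩ matches the measured expectation values, since ∇_µ log Z(µ) = (tr[σ(µ) E_i])_i.

```python
import numpy as np

# diagonal (commuting) basis operators on 2 qubits, as vectors of eigenvalues
Z1 = np.array([1.0, 1.0, -1.0, -1.0])
Z2 = np.array([1.0, -1.0, 1.0, -1.0])
E = [Z1, Z2, Z1 * Z2]

def expectations(mu):
    """Expectation values tr[sigma(mu) E_i] of sigma(mu) ∝ exp(sum_i mu_i E_i)."""
    h = sum(m * e for m, e in zip(mu, E))
    w = np.exp(h - h.max())          # stable softmax; diagonal Gibbs weights
    p = w / w.sum()
    return np.array([p @ e for e in E])

# "measured" data: expectation values of a Gibbs state with hidden parameters
mu_true = np.array([0.3, -0.5, 0.2])
e_target = expectations(mu_true)

# gradient descent on the dual; the gradient is expectations(mu) - e_target
mu = np.zeros(3)
for _ in range(2000):
    mu -= 0.5 * (expectations(mu) - e_target)

print(np.round(expectations(mu) - e_target, 8))  # all entries ≈ 0
```

By strong convexity of the log-partition function on this small family, the iteration converges to the unique µ matching the data, here recovering µ_true itself.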
The theorem above yields exponential improvements in the sample complexity of learning the expectation values of certain observables for classes of states that satisfy a transportation cost inequality with α = Ω(log(n)^{−1}). As discussed in Sec. B.2, extensive observables that can be written as a sum of l-local observables have a Lipschitz constant satisfying ∥O∥_{Lip} = O(l√n). Shadow-like methods would require O(e^{O(l)} log(m δ^{−1}) ϵ^{−2}) samples to learn such observables up to a relative error of nϵ. Our methods, however, require O(e^{O(k)} poly(l, ϵ^{−1}) log(m δ^{−1})), which yields exponential speedups in the regime l = polylog(n). Of course, it should also be mentioned that classical shadows do not require any assumptions on the underlying state.
Furthermore, given the exponential dependence of the sample complexity on the locality for shadow-like methods, we believe that our methods yield practically significant savings already in the regime in which we wish to obtain expectation values of observables with relatively small support. For instance, for high-temperature Gibbs states of nearest-neighbour Hamiltonians and observables supported on 15 qubits, shadows require a factor of ∼10^7 more samples than solving the maximum entropy problem for the same precision.
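One plausible back-of-the-envelope route to the quoted factor (our reconstruction; the precise constants depend on the error parameters suppressed here) compares the 4^k scalings of the two approaches, with k = 15 for the shadows and k = 2 for the nearest-neighbour basis operators:

```latex
\frac{4^{15}}{4^{2}} \;=\; 4^{13} \;\approx\; 6.7\times 10^{7} \;\sim\; 10^{7}.
```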
On the other hand, previous guarantees on learning quantum many-body states [10, 23, 53, 67] required a precision polynomial in system size to obtain a nontrivial approximation, which implies a polynomial-time complexity. Thus, our recovery guarantees are also exponentially better than those of standard many-body tomography results.

C.1 Results for shallow circuits
Let us be more explicit on how to leverage our results to also cover the outputs of short-depth circuits. To this end, let G = (V, E) be a graph that models the interactions of a unitary circuit and suppose we implement L layers of an unknown unitary circuit consisting of 1- and 2-qubit unitaries laid out according to G. That is, we have an unknown shallow circuit U of depth L with respect to G. More precisely, where each E_ℓ ⊂ E is a subset of the edges such that no two e, e′ ∈ E_ℓ share a vertex. Our goal is to show how to approximately learn the state |ψ⟩ = U|0⟩^{⊗n}. The overall idea consists in finding a Gibbs state approximating |ψ⟩ in Wasserstein distance. We will find a differential structure for (approximations of) shallow circuits and then show that the (approximation of the) output satisfies a TC inequality with respect to it. Thus, it suffices to control the relative entropy with this approximation to ensure a good approximation in Wasserstein distance.
Let us find the appropriate approximation. First, note that for β_ϵ = log(ϵ^{−1}) and H_0 = −Σ_{i=1}^n Z_i, where Z_i denotes the Pauli matrix Z = |0⟩⟨0| − |1⟩⟨1| acting on site i, we have that: Thus, if the states U e^{−β_ϵ H_0}/tr[e^{−β_ϵ H_0}] U^† satisfy a transportation cost inequality with some constant α, this would allow us to conclude that Moreover, defining H_U := U H_0 U^†, since we know the geometry of U, we can bound the support of each U Z_i U^†. Thus, we only need to find a suitable transportation cost inequality to see that this approximation fits into our framework.
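The approximation above can be made explicit on a single qubit (our reconstruction of the elementary computation): since H_0 = −Σ_i Z_i is a sum of commuting single-qubit terms, the Gibbs state factorizes, and for β_ϵ = log(ϵ^{−1}):

```latex
\frac{e^{-\beta_\epsilon H_0}}{\operatorname{tr}\!\left[e^{-\beta_\epsilon H_0}\right]}
=\bigotimes_{i=1}^{n}\frac{e^{\beta_\epsilon Z_i}}{2\cosh\beta_\epsilon}
=\tau_{p_\epsilon}^{\otimes n},
\qquad
p_\epsilon=\frac{e^{-\beta_\epsilon}}{e^{\beta_\epsilon}+e^{-\beta_\epsilon}}
=\frac{\epsilon^{2}}{1+\epsilon^{2}},
```

so each qubit is found in |1⟩ only with probability p_ϵ ≈ ϵ², and the product state τ_{p_ϵ}^{⊗n} is ϵ-close to |0⟩⟨0|^{⊗n} in the relevant sense for n p_ϵ small.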
Let us now find a suitable differential structure for this state. Let a_i be the annihilation operator acting on qubit i. Defining L_{i,0} = (p(1 − p))^{1/4} a_i and L_{i,1} = L_{i,0}^†, we get a differential structure for τ_p^{⊗n} with {L_{i,k}, ω_{i,k}} for i = 1, . . ., n, k = 0, 1, where ω_{i,0} = p/(1 − p) and ω_{i,1} = (1 − p)/p. That this is indeed a differential structure follows from a simple computation. One can readily check that the induced Lipschitz constant is given by: Thus, we see that the Lipschitz constant takes a particularly simple form for this differential structure. Moreover, it is not difficult to see that {U L_{i,k} U^†, ω_{i,k}} provides a differential structure for the state U τ_p^{⊗n} U^†. Indeed, it is easily checked that this new differential structure still gives eigenvectors of the modular operator. Importantly, the result of [11, Theorem 19] establishes that the state τ_p^{⊗n} satisfies a transportation cost inequality with constant 1/2. Putting all these elements together we have: Theorem C.1 (transportation cost for shallow circuits). Let U be an unknown depth-L quantum circuit on n qubits defined on a graph G = (V, E) and |ψ⟩ = U|0⟩^{⊗n}. Define for ϵ > 0 the state σ(U) := e^{−β_ϵ H_U}/tr[e^{−β_ϵ H_U}] with β_ϵ = log(ϵ^{−1}). Then for any state ρ and all observables O we have: Proof. We have: The claim then follows from the discussion above, as σ(U) satisfies a transportation cost inequality with constant 1/2 (inherited from τ_p^{⊗n} under conjugation). Of course, the result above has the downside that the Lipschitz constant depends on the unknown circuit U. However, as we can estimate the locality of each term U a_i U^†, it is also possible to estimate the Lipschitz constant by controlling the overlap of the observable O with each U a_i U^†.
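The claim that the L_{i,k} are eigenvectors of the modular operator can be verified directly on one qubit. The sketch below is our own check, with the conventions τ_p = (1 − p)|0⟩⟨0| + p|1⟩⟨1| and a = |0⟩⟨1|; with the opposite ordering of the populations, the two eigenvalues are swapped.

```python
import numpy as np

p = 0.1
tau = np.diag([1 - p, p])       # single-qubit state tau_p (convention assumed)
a = np.array([[0.0, 1.0],       # annihilation operator |0><1|
              [0.0, 0.0]])

def modular(X):
    """Modular operator Delta(X) = tau X tau^{-1}."""
    return tau @ X @ np.linalg.inv(tau)

L0, L1 = a, a.conj().T
# Delta(L0) = ((1-p)/p) L0 and Delta(L1) = (p/(1-p)) L1
print(np.allclose(modular(L0), ((1 - p) / p) * L0),
      np.allclose(modular(L1), (p / (1 - p)) * L1))  # True True
```

Since conjugation by U maps eigenvectors of the modular operator of τ_p^{⊗n} to eigenvectors of the modular operator of U τ_p^{⊗n} U† with the same eigenvalues, the same check covers the conjugated structure.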
The result of Thm. C.1 and the discussion preceding it also give us a method to efficiently learn the outputs of shallow circuits, as illustrated by the following proposition: Proposition C.1 (Learning the outputs of shallow circuits). Let U be an unknown n-qubit quantum circuit with known locality structure as in Eq. (48) and |ψ⟩ = U|0⟩^{⊗n}. Moreover, define l_0 as For some ϵ > 0 we have that samples of |ψ⟩ suffice to learn a Gibbs state σ(µ) that satisfies with probability of success at least 1 − δ. Along similar lines, samples suffice to learn a state σ(λ) such that Proof. Let E_i be a basis of Pauli operators for matrices on the support of U Z_i U^†. By our assumption on l_0, we know that |E_i| ≤ 4^{l_0} for each i. For simplicity we will assume that no Pauli word is contained in two distinct E_i, and we will enumerate all distinct Pauli words as {E_{i,j}} for 1 ≤ i ≤ n and 1 ≤ j ≤ 4^{l_0}, indexing the elements of the different E_i. Thus, there is a λ ∈ R^m with m ≤ n 4^{l_0} and ∥λ∥_{ℓ∞} ≤ 1 such that Let ε > 0 be given. It follows from Eq. (49) and Pinsker's inequality that picking ϵ = ε/(16n) is sufficient to ensure that copies of |ψ⟩ suffice to obtain estimates of tr[|ψ⟩⟨ψ| E_{i,j}] up to an ε/4 error. By Eq. (53) and a triangle inequality, these are also at most ε/2 away from tr[σ(U, ε/(4n)) E_{i,j}]. Thus, running the maximum entropy principle with these estimates and the basis of operators given by ∪_{i=1}^n E_i yields an estimate σ(µ) that satisfies: To obtain the estimate in Wasserstein distance in Eq. (50), we can pick ε = O(ϵ/(4^{l_0} log^2(n 4^{l_0} ϵ^{−1}))), as in this case The claim then follows from the results of Thm. C.1 by substituting ε into Eq. (54).
This shows that shallow circuits can be learned efficiently as long as l_0 = O(log(n)). However, it is not immediately obvious how to also estimate the expectation values of the Gibbs states required to run the maximum entropy algorithm. Thus, at least with the methods presented here, the postprocessing takes time exponential in the number of qubits.
Ground states of gapped systems: in light of our results for shallow circuits, it is natural to ask to what extent our framework can be extended to ground states of gapped Hamiltonians, especially in 1D. Let us briefly comment on the technical barriers in the way of such statements. First, notice that the statement of Thm. C.1 required inverse temperatures scaling logarithmically in system size for the approximation of the ground state. Most known TC inequalities have an exponential scaling with the inverse temperature; thus, at such inverse temperatures the savings of TC compared to Pinsker are no longer quadratic, hindering a straightforward extension to gapped systems. There are some nontrivial examples of ground states satisfying a TC inequality with a constant depending inverse-linearly on the temperature, like graph states [63]. But, as these can also be prepared by a constant-depth quantum circuit almost by definition, they fall within the assumptions of the previous statement.
However, the results of [25] assert that k-local density matrices of ground states of gapped Hamiltonians in 1D can be approximated by constant-depth circuits, giving evidence that our framework should also extend to such states. To get there, a technical obstacle in the proof of Thm. C.1 has to be overcome: essentially, we need to show that local reduced density matrices of the ground state are already well-approximated at inverse temperature log(ϵ^{−1}). With such a statement we could show that we can still approximate the expectation values of the E_i at inverse temperature log(ϵ^{−1}) from samples of the ground state |ψ⟩.

D Summary of known strong convexity constants
As we see in the statements of Thm. A.1 and Thm. C.1, having a bound on the strong convexity constant L^{−1} can give a quadratic improvement in the sample complexity in terms of the error ϵ. Here we briefly summarize the cases in which estimates on L are known in the literature for the classes of states considered here.
General many-body quantum states: first, we should mention the results of [5]. There the authors show bounds on L^{−1} for arbitrary many-body Hamiltonians and temperatures that scale linearly in m. Thus, although certainly nontrivial, these bounds do not improve the sample complexity in the regime we are interested in in this work, namely that of logarithmic sample complexity in system size. To obtain improvements in this regime, L^{−1} needs to be at most polylogarithmic in system size.
In the case of high-temperature Gibbs states, the recent work of [34] shows that this is indeed the case: in their Corollary 4.4 they show that for β = O(k^{−8}), where k is the locality of the Hamiltonian, such a bound indeed holds. It should also be noted that their results hold not only for geometrically local Hamiltonians, but for any Hamiltonian in which each term acts on at most k qudits. This implies that in the high-temperature regime, for which we also have the TC inequality of Thm. B.1, the improved sample complexity yielded by our methods holds. Note, however, that there is a slight mismatch between the inverse temperature ranges for which the two results hold: for strong convexity we need β = O(k^{−8}), whereas for TC β = O(k^{−1}) suffices.
Commuting Hamiltonians: as we will prove later in Prop. F.1, in the case of commuting Hamiltonians we have L^{−1} = O(e^β β^{−2}). Thus, for any constant inverse temperature β > 0 we have an improved sample complexity. However, in order to analyse ground states in 1D, our current proof techniques still require an inverse temperature scaling logarithmically in system size, so for such states we do not obtain improvements through strong convexity. We plan to address this gap in future work.
Besides the cases mentioned above, we also considered the case of local circuits in this work.For those there are no nontrivial estimates on L available to the best of our knowledge.

E Regimes of efficient postprocessing
The only remaining question is how to perform the postprocessing efficiently, namely how the parameter C_E appearing in Theorem C.1 scales and how we obtain the bounds in Table 2.
There have been many recent breakthroughs in approximating quantum Gibbs states efficiently on classical computers [35, 46, 49, 51, 52]. The gradient descent method only requires us to estimate the gradient of the log-partition function of a Gibbs state at each iteration. Thus, any algorithm that can efficiently approximate the log-partition function log Z(µ), or approximate e(λ), suffices for our purposes.
For Gibbs states on a lattice, the methods of [51] ensure that we can perform such approximations on a classical computer in time polynomial in n for temperatures β < β_c = 1/(8e^3 k), where k is the locality of the Hamiltonian, and for ϵ inverse polynomial in system size. Thus, we conclude that in this temperature range, which coincides with the range for which the results of Thm. B.1 hold, C_E is polynomial in system size and we also obtain efficient classical postprocessing.
For Gibbs states of 1D systems, to the best of our knowledge the best results available are those of [52]. They show how to obtain efficient tensor network approximations of 1D Gibbs states for β = o(log(n)). As such tensor networks can also be contracted efficiently, this gives an efficient classical algorithm to compute local expectation values of such states, which suffices for our purposes. Thus, the results of [51, 52] ensure that for the systems considered in Thm. B.1 we can also perform the postprocessing efficiently on a classical computer.
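To make the 1D efficiency concrete, here is a classical analogue (our own sketch, not the algorithm of [52]): the log-partition function of an open 1D Ising chain computed by transfer-matrix contraction in O(n) time, validated against brute-force enumeration on a small chain.

```python
import numpy as np
from itertools import product

def log_Z_transfer(n, beta, J=1.0):
    """log partition function of an open 1D Ising chain via transfer matrices."""
    T = np.array([[np.exp(beta * J), np.exp(-beta * J)],
                  [np.exp(-beta * J), np.exp(beta * J)]])
    v = np.ones(2)
    for _ in range(n - 1):      # contract the chain site by site
        v = T @ v
    return float(np.log(v.sum()))

def log_Z_bruteforce(n, beta, J=1.0):
    """Exponential-time check: sum over all 2^n spin configurations."""
    Z = 0.0
    for s in product([-1, 1], repeat=n):
        energy = -J * sum(s[i] * s[i + 1] for i in range(n - 1))
        Z += np.exp(-beta * energy)
    return float(np.log(Z))

print(abs(log_Z_transfer(10, 0.7) - log_Z_bruteforce(10, 0.7)))  # ≈ 0
```

The quantum algorithms of [51, 52] are of course more involved, but the structural point is the same: 1D partition functions reduce to a linear number of small tensor contractions.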
It is also interesting to consider what happens if we have access to a quantum computer to estimate the gradient.We will discuss the implications of this in the next section for the case of commuting Hamiltonians.
Finally, for local quantum circuits we are not aware at this stage of any method that could yield a better postprocessing complexity than computing the partition function explicitly. This yields a postprocessing time exponential in the system size, as it requires diagonalizing an exponentially large matrix.

F A complete picture: commuting Gibbs states
In this section, we discuss two classes of states for which the strongest version of our theorems holds, namely commuting 1D Gibbs states and high-temperature commuting Gibbs states on arbitrary hypergraphs. We already discussed that they satisfy transportation cost inequalities in Thm. B.1. Thus, the last missing ingredient for optimal performance is to show that the partition function is indeed strongly convex. More precisely, we will now establish that, for these classes of states, both the upper and lower bounds on the Hessian of the log-partition function are of order 1. In addition, with access to a quantum computer, it is possible to perform the postprocessing in time Õ(m). As writing down the vector λ takes Ω(m) time, we conclude that the postprocessing can be performed in a time comparable to that of writing down a solution to the problem. Thus, our procedure is essentially optimal.
Still in the setting of commuting Gibbs states, it is worth noting that after the completion of this work we became aware of [4], which gives another method to learn the Gibbs state and its Hamiltonian that neither involves the maximum entropy method nor requires strong convexity guarantees. Their algorithm works by learning local reduced density matrices and showing that the parameters λ of the Hamiltonian of a commuting Gibbs state can be efficiently estimated from them. In principle, obtaining a bound on λ also suffices for our purposes and we could alternatively use their methods to bypass having to solve the maximum entropy problem for such states. In particular, this means that the postprocessing with their methods could be performed even at temperatures at which the partition function cannot be estimated efficiently but we still have access to samples of the state. However, as we are ultimately interested in the regime in which TC inequalities hold, which corresponds to the high-temperature regime, we do not comment further on their results.
In this section, we consider a hypergraph G = (V, E) and assume that there is a maximum radius r_0 ∈ N such that, for any hyperedge A ∈ E, there exists a vertex v ∈ V such that the ball B(v, r_0) centered at v and of radius r_0 includes A. In what follows, we also denote by S(v, r) the sphere of radius r centered at vertex v ∈ V, and define for all r ∈ N: Next, we consider a Gibbs state σ(λ) on H_V := ⊗_{v∈V} H_v, where dim(H_v) = d for all v ∈ V. More precisely, σ(λ) := e^{−βH(λ)}/Z(λ), where, with a slight abuse of notation, H(λ) := Σ_i λ_i E_i.

F.1 Upper bound on Hessian for commuting Gibbs states with decay of correlations
In this section, we prove tightened strong convexity constants for the log-partition function in the case when the Gibbs state arises from a local commuting Hamiltonian at high temperature. In fact, the upper constant found in Proposition A.4 can be tightened into a system-size-independent one under the condition of exponential decay of correlations in the Gibbs state. Several notions of exponential decay of correlations exist in the literature [18]. Here, we say that a Gibbs state σ has correlation length ξ if for all observables O_A, O_B supported on non-overlapping regions A and B respectively, we have that: for some constant c > 0. In the classical setting, this condition is known to hold at any inverse temperature for 1D systems, and below a critical inverse temperature β_c that depends on the locality of the Hamiltonian when D ≥ 2 [28]. The same bound also holds in the quantum case for 1D systems [6], or above a critical temperature on regular lattices when D ≥ 2 [35, 46]. Using these bounds, we obtain the following improvement of Proposition A.4, which shows that for this class of states U = O(1).
Lemma F.1 (Strengthened upper strong convexity at high temperature and for 1D systems). For each µ ∈ B_{ℓ∞}(0, 1), let σ(µ) be a Gibbs state at inverse temperature β < β_c corresponding to the Hamiltonian defined on the hypergraph G = (V, E) in (55). Then where ξ is the correlation length of the state. Moreover, this result holds for all β > 0 in 1D.
Proof. Let us first use the assumption of commutativity to simplify the expression for the Hessian. We find, for all i, j: The rest follows similarly to Proposition A.4 from Gershgorin's circle theorem together with the decay of correlations for β < β_c: for all α_A, where we also used that the basis operators {E_i} have operator norm 1. The claim then follows by bounding the number of basis operators whose support is at distance r from A for each r ∈ N: the latter is bounded by the product of (i) the number of vertices in A; (ii) the number of vertices at distance r from a given vertex; (iii) the number of hyperedges containing a given vertex; and (iv) the number of basis operators corresponding to a given hyperedge. A simple estimate of each of these quantities gives the bound B(r_0) S(r) B(2r_0) d^{2B(r_0)}. Therefore:

F.2 Lower bound on Hessian for commuting Gibbs states
Whenever the Gibbs state is assumed to be commuting, the lower strong convexity constant L can also be made independent of m, by a direct generalization of the classical argument found in [66, Lemma 7] or [54] (see also [5]).
Before we state and prove our result, let us introduce a few useful concepts. Given a full-rank quantum state σ, we denote the weighted 2-norm of an observable Y by ∥Y∥_{2,σ} := ⟨Y, Y⟩_σ^{1/2}, and refer to the corresponding noncommutative weighted L_2 space with inner product ⟨X, Y⟩_σ := tr[σ^{1/2} X^† σ^{1/2} Y] as L_2(σ). The Petz recovery map R_{A,σ(µ)} corresponding to a subregion A ⊂ V with respect to σ(µ) is defined as: We will also need the notion of a conditional expectation E_A with respect to the state σ(µ) onto the region A ⊂ V (see [7, 44] for more details). For instance, one can choose E_A := lim_{n→∞} R_{A,σ(µ)}^n, where R_{A,σ(µ)} is the Petz recovery map of σ(µ). In other words, the map E_A is a completely positive, unital map that projects onto the algebra N_A of fixed points of R_{A,σ(µ)}. This algebra is known to be expressed as a commutant [7]. Moreover, the maps E_A commute with the modular operator ∆_{σ(µ)}(·) := σ(µ)(·)σ(µ)^{−1}.

The commutativity condition for H(µ) implies frustration freeness of the family of conditional expectations {E_A}_{A⊆V}.
The next technical lemma is essential in the derivation of our strong convexity lower bound. With a slight abuse of notation, we use the simplified notations σ_x(λ) := σ_{{x}}(λ), H_x(λ) := H_{{x}}(λ), and so on. Lemma F.2. Let H = Σ_j µ_j E_j be a local commuting Hamiltonian on the hypergraph G = (V, E) defined in (55), where each local operator E_j is further assumed to be traceless. The following holds for any x ∈ V: Proof. We first prove that X = R_{x,σ_x(µ)}(X) is equivalent to ∥R_{x,σ_x(µ)}(X)∥_{2,σ_x(µ)} = ∥X∥_{2,σ_x(µ)}. One direction trivially holds. For the opposite direction, assume that X ≠ R_{x,σ_x(µ)}(X). This means that X = Y + Z, with Y, Z two operators that are orthogonal in L_2(σ_x(µ)), with R_{x,σ_x(µ)}(Y) = Y and Z ≠ 0. Now, since R_{x,σ_x(µ)} is self-adjoint and unital, it strictly contracts elements in the orthogonal complement of its fixed points, and we have which contradicts the assumed equality of the norms. Now, since the map R_{x,σ_x(µ)} is unital, it suffices to prove that R_{x,σ_x(µ)}(H_x) ≠ H_x, or equivalently E_x(H_x) ≠ H_x, in order to conclude. Let us assume instead that equality holds. This means that, for all observables A_x supported on site x and all t ∈ R: However, this contradicts the fact that H_x is traceless on site x. Therefore R_{x,σ_x(µ)}(H_x) ≠ H_x and the proof follows.
We are ready to prove our strong convexity lower bound.
Proposition F.1 (Strengthened lower strong convexity constant for commuting Hamiltonians). For each µ ∈ B_{ℓ∞}(0, 1), let σ(µ) be a Gibbs state at inverse temperature β corresponding to the commuting Hamiltonian H(µ) = Σ_j µ_j E_j on the hypergraph G = (V, E) defined in (55), where tr[E_i E_j] = 0 for all i ≠ j and each local operator E_j is traceless on its support. Then the Hessian of the log-partition function is lower bounded by where c(β) := max_{v∈V} c(v, β).
Proof. We first use the assumption of commutativity in order to simplify the expression for the Hessian. As in (56), we find

Therefore, for any linear combination H ≡ H(λ) = Σ_j λ_j E_j of the basis vectors, we have

It is thus sufficient to lower-bound the latter. For this, we choose a subregion A ⊆ V such that any basis element E_i has support intersecting a unique vertex in A. We lower-bound the variance by

where the first inequality follows from the L_2(σ(µ)) contractivity of id − E_A. Now, the weighted norm can be further simplified into a sum of local weighted norms as follows: first, for any two E_i, E_j whose supports intersect different vertices of A, we have

where in the third line we used the commutation of the modular operator ∆_{σ(µ)}(X) := σ(µ)Xσ(µ)^{−1} with E_A, together with the commutativity of σ(µ) and E_i. Now, denoting supp(E_i) ∩ A = {x} and supp(E_j) ∩ A = {y}, we show that

In order to prove these two identities, it suffices to prove, for instance, that E_x[E_i] belongs to the image algebra N_A of E_A, since N_A ⊆ N_x by definition. Hence, it is enough to show that E_x[E_i] commutes with operators of the form σ(µ)^{it} X_A σ(µ)^{−it} for any t ∈ R and any X_A ∈ B(H_A). This claim follows from

where the fourth line follows from the fact that the support of E_x[E_i] does not intersect A\{x}, together with the fact that E_x[E_i] is locally proportional to I on site x, by definition of N_x. Therefore, using (59) in (58), we get

where in the second line we used that E_x[E_i] is a fixed point of E_y, and then that E_j is a fixed point of E_x, by the support conditions of E_i and E_j. Therefore, the variance on the right-hand side of (57) can be simplified as

where we recall that H_x := Σ_{j : supp(E_j)∋x} λ_j E_j. Now, for any x ∈ V, we denote x∂ := {A ∈ E | x ∈ A} and decompose the Hamiltonian H(µ) as

Clearly,

Now,

The first and second identities above follow once again from the commutativity of the Hamiltonian, similarly to (58), where for the
second one we also use the disjointness of x∂ and supp(K^1_x(µ)). The first inequality follows from (61). The third identity is a consequence of the fact that E_x is a projection with respect to L_2(σ_x(µ)). The second inequality follows from the contractivity ∥R_{x,σ_x(µ)}(X)∥_{2,σ_x(µ)} ≤ ∥X∥_{2,σ_x(µ)} for all X (see Proposition 10 in [44]). The last inequality is a consequence of Lemma F.2. Finally, we further bound the weighted L_2 norm on the last line of the above inequality in terms of the Schatten 2-norm to get

where λ_min(σ_x(µ)) denotes the minimum eigenvalue of σ_x(µ). The result follows from the simple estimates ∥K^0_x(µ)∥_∞ ≤ B(4r_0) and λ_min(σ_x(µ)) ≥ e^{−βB(2r_0)} d^{−B(2r_0)}.

F.3 Summary for 1D or high-temperature commuting Gibbs states
We now have all the elements in place to give an essentially complete account of estimating Lipschitz observables for Gibbs states of nearest-neighbor 1D or high-temperature commuting Hamiltonians.
Theorem F.1. For each µ ∈ B_{ℓ∞}(0, 1), let σ(µ) be a Gibbs state at inverse temperature β corresponding to the commuting Hamiltonian H(µ) = Σ_{j=1}^m µ_j E_j on the hypergraph G = (V, E) defined in (55), where tr[E_i E_j] = 0 for all i ≠ j, each local operator E_j is traceless on its support and acts on a constant number of qubits, and m = O(n). Moreover, assume that σ(λ) satisfies the conditions of Thm. B.1. Then O(log(n)ϵ^{−2}) samples of σ(λ) suffice to obtain, with probability at least 1 − p, a µ ∈ B_{ℓ∞}(0, 1) satisfying

Moreover, we can find µ in O(poly(n, ϵ^{−1})) time on a classical computer. With access to a quantum computer, the postprocessing can be performed in Õ(nϵ^{−2}) time by only implementing quantum circuits of Õ(1) depth.
Proof. To obtain the claim on the sample complexity, note that for such systems L = Ω(1) by Prop. F.1, and that they satisfy a transportation cost inequality by Thm. B.1. Moreover, we can learn the expectations of all E_i up to an error ϵ with failure probability δ using O(log(nδ^{−1})ϵ^{−2}) samples via shadows, as they all have constant support. The claimed sample complexity then follows from Thm. C.1. The classical postprocessing result follows from [51]. The postprocessing with a quantum computer follows from the results of [44] and [18], combined with the fact that L^{−1}U = O(1), by also invoking Lemma F.1. Indeed, [18, 44] assert that we can approximately prepare any σ(µ) on a quantum computer with a circuit of depth Õ(1). Moreover, once again resorting to shadows, we can estimate e(µ) with Õ(ϵ^{−2}) samples. We conclude that we can run each iteration of gradient descent in Õ(nϵ^{−2}) time. As L^{−1}U = O(1), Thm. A.1 asserts that we converge after Õ(1) iterations, which yields the claim.

The theorem above nicely showcases how transportation cost inequalities and strong convexity bounds come together to improve maximum entropy methods. Moreover, with access to a quantum computer, up to polylogarithmic overheads in the system size, the computational complexity of learning the Gibbs state is comparable to that of reading out the results of measurements on the system. Together with the polylogarithmic sample complexity bounds that we obtain, this justifies our claim that the result above almost gives the last word on learning such states.
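To make the gradient-descent postprocessing loop concrete, the following is a minimal classical toy sketch (our own illustration, not the quantum algorithm of [18, 44] or the solver of [51]): a small 1D classical Ising family, where the gradient of the log-partition function — the vector of expectations ⟨E_j⟩ — can be computed exactly by enumeration, so maximum entropy reduces to matching measured expectations. All variable names are ours.

```python
import numpy as np
from itertools import product

n = 6  # spins; basis terms E_j = Z_j Z_{j+1} are classical, mutually commuting
rng = np.random.default_rng(0)
lam = rng.uniform(-0.5, 0.5, n - 1)        # hidden parameters of the target state

configs = np.array(list(product([1, -1], repeat=n)))   # all 2^n spin configurations
terms = configs[:, :-1] * configs[:, 1:]               # value of each E_j per configuration

def expectations(mu):
    """<E_j> under the Gibbs distribution p(x) ∝ exp(sum_j mu_j E_j(x))."""
    energy = terms @ mu
    p = np.exp(energy - energy.max())
    p /= p.sum()
    return p @ terms

target = expectations(lam)   # stand-in for the shadow estimates e(mu)

# gradient descent on f(mu) = log Z(mu) - <mu, target>; grad f = <E>_mu - target
mu = np.zeros(n - 1)
for _ in range(2000):
    mu -= 0.1 * (expectations(mu) - target)

print(np.max(np.abs(expectations(mu) - target)))  # ≈ 0: expectations are matched
```

By strict convexity of the log-partition function, matching the expectations recovers the hidden parameters: after convergence, `mu` agrees with `lam` to high precision.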

G Comparison of sample complexity of previous methods
Our work arguably introduces two technical innovations to the literature on learning and tomography of quantum states, which underlie our exponential speedups in sample complexity. The first is the observation that most observables of physical interest have a small Lipschitz constant and that it is therefore better motivated to look for good approximations in Wasserstein distance instead of trace distance. The second is that, for states satisfying a TC inequality, it suffices to obtain an estimate of the state with a small relative entropy density with respect to the target state in order to recover Lipschitz observables, and that finding such an estimate can be achieved from a few samples through a combination of maximum entropy and classical shadow methods.
We will now argue that these two innovations are indeed crucial to ensure our exponential speedups. First, we will show that the shadow protocol yields bad estimates of the expectation values of local observables with high probability, even for product states, if the number of samples is not exponential in the locality of the underlying observables. This shows that exploiting the locality of the underlying states is crucial to obtaining a sample complexity polynomial in the locality of the observables. After that, we will show in Subsec. G.2 that Ω(√n ϵ^{−2}) samples are necessary for any algorithm that recovers an arbitrary Gibbs state on a regular lattice at constant temperature up to trace distance ϵ. This follows from the results of [27] and showcases the need to focus on the Wasserstein distance instead of the trace distance.

G.1 Lower bounds for sample complexity of classical shadows
One of the main advantages of our results compared with the classical shadows method of [40] is that, whenever a TC inequality is available, we can learn all k-local observables with a number of samples that grows polynomially in k, whereas classical shadows require a number of samples that grows exponentially in k. However, the classical shadows framework does not assume any structure on the underlying state. Thus, it is natural to ask whether this undesired exponential scaling of the shadows framework is due to this broader applicability.
In this section, we will demonstrate that this is not the case, even for one of the simplest imaginable classes of states, namely tensor products of Pauli eigenstates. We will show that, if the number of samples is not exponential in the locality of the desired observables, there is always a k-local observable whose estimate is wrong with constant probability.
But before we show this, let us briefly recall how the shadow method works. To recover local observables on n qubits, the methods of [40] proceed as follows. First, we sample a random unitary U = ⊗_{i=1}^n U_i, where each U_i is an independent rotation into a Pauli basis. Then we measure the state of interest ρ in the basis defined by U, obtaining an n-bit classical string b_1 b_2 ... b_n. The shadow ρ̂ corresponding to this measurement is then defined to be ρ̂ = ⊗_{i=1}^n (3U_i^† |b_i⟩⟨b_i| U_i − I).
We then repeat this procedure SM times for S, M ∈ N, obtaining shadows {ρ̂_{s,m}}_{1≤m≤M, 1≤s≤S}. We then set our estimate of the expectation value of an observable O to be

ô(O) := median_{1≤m≤M} { (1/S) Σ_{s=1}^S tr[O ρ̂_{s,m}] }.   (64)

That is, we partition the set of shadow samples into M subgroups, take the average in each of them, and then take their median. The authors of [40] show in their Theorem 1 that, if we take SM = O(4^k log(N δ^{−1})ϵ^{−2}), then with probability at least 1 − δ the expectation value of any of N given k-local observables of bounded operator norm deviates by at most ϵ from the estimate in Eq. (64). We will now show that this exponential scaling in k is unavoidable if we want to obtain nontrivial estimates with high probability. More precisely:

Proposition G.1. Let ρ = ⊗_{i=1}^n |ϕ_i⟩⟨ϕ_i| be an unknown n-qubit state where each |ϕ_i⟩ is a Pauli eigenstate. For k ≥ log_3(n) log(n), if SM ≤ n, then there is an observable O of the form O = n^{−1} Σ_{i=1}^n O_i, where each O_i is k-local, such that |ô(O) − tr[Oρ]| ≥ 1 with probability at least 1 − e^{−1}.
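The median-of-means estimator above can be simulated directly. The following sketch (our own illustration, with hypothetical helper names) estimates the single-qubit observable Z on the first qubit of the state |0…0⟩, whose true expectation value is 1; for a Z-basis shot the single-shot shadow value is 3·(outcome), and for an X- or Y-basis shot it is 0.

```python
import numpy as np

rng = np.random.default_rng(1)
n, S, M = 5, 20, 9   # qubits, samples per group, number of groups

def sample_shadow():
    """Simulate one random-Pauli-basis measurement of |0...0>.
    Measuring qubit i in the Z basis gives outcome +1 surely; X or Y give ±1 uniformly."""
    basis = rng.integers(0, 3, size=n)                     # 0: X, 1: Y, 2: Z
    outcome = np.where(basis == 2, 1, rng.choice([1, -1], size=n))
    return basis, outcome

def single_qubit_estimate(basis, outcome, i, pauli):
    # tr[P_i (3 U_i^dag |b_i><b_i| U_i - I)] = 3 * outcome_i if bases match, else 0
    return 3.0 * outcome[i] if basis[i] == pauli else 0.0

# median-of-means estimate of <Z_0> (true value 1)
group_means = []
for _ in range(M):
    vals = [single_qubit_estimate(*sample_shadow(), 0, 2) for _ in range(S)]
    group_means.append(np.mean(vals))
est = np.median(group_means)
print(est)   # close to 1
```

The single-shot values are 3 with probability 1/3 and 0 otherwise, so each group mean concentrates around 1, and the median suppresses outlier groups.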
Proof. We will let O_i = ⊗_{j=i}^{i+k} P_j, where addition is taken modulo n and P_j is the Pauli matrix such that P_j |ϕ_j⟩ = |ϕ_j⟩; here we assume w.l.o.g. that each |ϕ_j⟩ is a +1 Pauli eigenstate. It is then clear that tr[Oρ] = 1.
Let us now analyse the performance of shadows. Note that, as the random unitaries used to obtain each sample are random Pauli bases, we have that tr[O_i ρ̂_{s,m}] = 3^k if the unitary U_{s,m} measures each of the qubits i, i+1, ..., i+k in the eigenbasis of the corresponding P_j, and 0 otherwise: in the latter case, the measured string is rotated to a Pauli basis different from that of P_j on at least one of the qubits in that interval. Thus, as we pick the Pauli bases uniformly at random and there are three possibilities for each of them, the probability that a shadow is nonzero on a given O_i is 3^{−k}. By a union bound, the probability that a shadow is nonzero on any of the n observables O_i is at most n3^{−k}. As the different shadows are independent, the probability that all of the SM shadows return 0 is at least (1 − n3^{−k})^{SM}. As we picked k ≥ log_3(n) log(n) and SM ≤ n, we have

(1 − n3^{−k})^{SM} ≥ (1 − n^{1−log(n)})^{n} ≥ 1 − n^{2−log(n)} ≥ 1 − e^{−1}

for n large enough. Thus, as all shadows then have expectation value 0, the median-of-means procedure also outputs 0, which concludes the proof.
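As a quick numerical sanity check of this estimate (our own addition), one can evaluate (1 − n3^{−k})^{SM} at the threshold k = ⌈log_3(n)·log(n)⌉ with SM = n, taking the second logarithm to be natural:

```python
import math

# evaluate the lower bound (1 - n * 3^{-k})^{SM} at k = ceil(log_3(n) * ln(n)), SM = n
for n in [100, 1000, 10**5]:
    k = math.ceil(math.log(n, 3) * math.log(n))
    p_all_zero = (1.0 - n * 3.0 ** (-k)) ** n
    print(n, k, p_all_zero)   # well above 1 - 1/e in every case
```

Already for n = 100 the probability that every shadow vanishes on all the O_i is overwhelmingly close to 1, far above the 1 − e^{−1} threshold claimed in the proposition.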
Although the proof above used quite rough estimates and simple observables and states, it still gives some intuition as to why classical shadows require a number of samples exponential in the locality. The probability that the shadows "look in the right direction" is exponentially small in the locality of the observables, in the sense that the overlap between the state and a random Pauli eigenbasis is likely to be exponentially small. And whenever a shadow does look in a direction in which the underlying state has significant overlap, the estimator must reweight that direction by an exponentially large factor. Thus, if the number of samples is not exponential in the locality, either it is unlikely that we measure in a direction with significant overlap with our state, or there are significant fluctuations due to the exponential rewarding of the "good" directions.
However, by combining shadows with a locality structure and the maximum entropy principle, as we do in this work, we can avoid the need to measure in random directions for observables of large locality, bypassing this exponential scaling.
We also note that the authors of [40] already proved the optimality of their protocol, also by considering only product states, in Section 8 of their supplemental material. The main difference between their proof and ours is that we focus on a Lipschitz observable, whereas they consider observables that depend on only k qubits.

G.2 Lower bounds for recovery in trace distance
In the main text, we claimed that one of the reasons why we obtain an exponential speedup compared to usual many-body methods is that we aim for a good recovery in the Wasserstein distance instead of the trace distance. Moreover, by combining the maximum entropy method with a TC inequality, we are able to obtain good recovery guarantees from a constant relative entropy density. That is, as long as two Gibbs states σ(λ), σ(µ) satisfy

D(σ(λ)∥σ(µ)) ≤ ϵn,   (66)

for some ϵ > 0, we already obtain some nontrivial guarantees.
In this section, we will argue that focusing on recovery in Wasserstein distance instead of trace distance is essential to obtain nontrivial recovery guarantees from a number of samples scaling logarithmically with the system size. To achieve this, we resort to the results of [27, Theorem 1.3]:

Proposition G.2 (Lower bound on the sample complexity in trace distance). Let G = (V, E) be a graph on n vertices and m edges, and for λ ∈ R^{15m}, ∥λ∥_{ℓ∞} ≤ 1, let H(λ) be defined as

H(λ) = Σ_{(i,j)∈E} Σ_{l=1}^{15} λ^l_{i,j} H^l_{i,j},

where the H^l_{i,j} correspond to some ordering of the nonidentity Pauli strings acting on sites i, j. Then, for any β = Ω(m^{−1/2}), let σ̂(λ) be the estimate of σ(λ) output by an algorithm with access to s samples of σ(λ). Then:

Proof. This statement immediately follows from [27, Theorem 1.3], which shows the analogous statement when restricted to classical Ising models. As our class of Hamiltonians includes those as a subset, any algorithm that provides an estimate for this more general class can also find one for the classical instances. Moreover, in the proof of [27, Theorem 1.3], the inverse temperature is absorbed into the coefficients of the Hamiltonian, which are assumed to have 2-norm bounded by a constant independent of the system's size. This is easily seen to be satisfied under our conditions since β = Ω(m^{−1/2}).
The statement above implies, in particular, that any algorithm that finds an estimate that is ϵ-close in trace distance for all 2-local Gibbs states on a lattice at constant inverse temperature requires Ω(nϵ^{−2}) samples. In contrast, it is possible to obtain an estimate that is O(ϵ√n)-close in Wasserstein distance from O(ϵ^{−2} log(n)) samples of the Gibbs state, which already suffices to give nontrivial recovery guarantees for Lipschitz observables. Thus, we see from Prop. G.2 that resorting to the Wasserstein distance is essential to obtain recovery guarantees in the regime where the number of samples is logarithmic in the system's size.
Furthermore, it is interesting to note that the proof of [27] is based on a set of Gibbs states of Hamiltonians of the form

H(s) = δ Σ_{(i,j)∈E} s_{i,j} Z_i Z_j,   (68)

with δ = Θ(m^{−1/2}) and s_{i,j} ∈ {±1}. Their proof then proceeds by finding a large subset of Gibbs states of the form in Eq. (68) whose pairwise trace distances and relative entropies are of constant order. The lower bound on the sample complexity then follows from standard information-theoretic arguments. We believe that this class of examples further illustrates why the trace distance is not necessarily the adequate distance measure when estimating the error on extensive observables. Indeed, for extensive, local observables, the class of Gibbs states of the Hamiltonians in Eq. (68) behaves like the maximally mixed state, as each local term converges to 0 as the system size increases.

Figure 1 :
Figure 1: Example of an observable O = Σ_i O_i for a 2D lattice system of size n. Each O_i is supported on an L × L square (L = 3 in the figure). We have ∥O∥_Lip = O(√n) and ∥O∥ = n/L². Thus our methods require poly(L, log(n), ϵ^{−1}) samples to estimate the expectation values of all such observables. Shadow-like methods require poly(e^{cL²}, log(n), ϵ^{−1}) samples, an exponentially worse dependency in L. Even for moderate values of L, say L = 5, this can lead to savings in sample complexity by a factor of 10^7, and it gives an exponential speedup for L = poly(log(n)). Other many-body methods have a poly(L, n, ϵ^{−1}) scaling [5, 10, 23, 53, 67], which in turn is exponentially worse in the system size.

Figure 2 :
Figure 2: Performance on a Gibbs state from the family of Eq. (14) and the 8-local observable in Eq. (15). We have set the number of qubits to 100, β = 1.1, and the λ_i uniformly at random between 0.5 and 0.9. The x-axis denotes the logarithm of the number of samples in base 10 and the y-axis the error, in absolute value, with respect to the true value. We ran each protocol 300 times on the same Gibbs state to see how the estimate varied. We see that even with 10^4 samples the shadows method still has errors of order 10^0 at the 75th percentile, whereas maximum entropy already yields good estimates when the number of samples is of order 10^2, showcasing that maximum entropy methods outperform classical shadows by orders of magnitude for observables of moderate locality like those in Eq. (15).

Figure 4 :
Figure 4: Error in estimating a Lipschitz observable after performing the maximum entropy reconstruction method. The underlying state is a classical 1D Gibbs state with randomly chosen nearest-neighbor interactions at inverse temperature β = 1. We estimated all the Z_i Z_{i+1} expectation values from 10^3 samples of the original state. We then computed the upper bound on the trace distance predicted by Eq. (9) and Pinsker's inequality and compared it to the actual discrepancy for a Lipschitz observable between the reconstructed and actual states. The Lipschitz observable was chosen as Σ_i n^{−1} U Z_i Z_{i+2} U^†, where U is a depth-3 quantum circuit. We observe that the error incurred is essentially independent of the system size, and we get good predictions even when the number of samples is smaller than it.
The set of quantum states on C^k is denoted by D_k. Typically, k will be taken to be d^n for n-qudit systems. The trace on M_k is denoted by tr. Given two quantum states ρ, σ, we denote by S(ρ) = −tr[ρ log(ρ)] the von Neumann entropy of ρ, and by D(ρ∥σ) the relative entropy between ρ and σ, i.e. D(ρ∥σ) = tr[ρ(log(ρ) − log(σ))] whenever the support of ρ is contained in that of σ, and +∞ otherwise. The trace distance is denoted by ∥ρ − σ∥_tr := tr[|ρ − σ|] and the operator norm of an observable by ∥O∥_∞. Scalar products are denoted by ⟨·|·⟩. Moreover, we denote the ℓ_p norm of vectors by ∥·∥_{ℓp}, and for x ∈ R^m and r ∈ R, B_{ℓp}(x, r) denotes the ball of radius r in ℓ_p norm around x.

(i) The E_i are classical or nearest-neighbour (i.e. k = 2) on a regular lattice and the inverse temperature satisfies β < β_c, where β_c only depends on k and the dimension of the lattice, for both W_{1,∇} and W_{1,□} and α = Ω(1) [18].

(ii) The operators E_i are local with respect to a hypergraph G = (V, E) and the inverse temperature satisfies β < β_c, where β_c only depends on k and properties of the interaction hypergraph, for W_{1,□} and α = Ω(1) [26, Theorem 3, Proposition 4].

(iii) The E_i are one-dimensional and β > 0, for both W_{1,∇} and W_{1,□} and α = Ω(log(n)^{−1}) [8].

Moreover, the underlying differential structure (∇, σ) consists of L_k acting on at most O(k) qudits. Theorem B.1 establishes that transportation cost inequalities are satisfied for most classes of commuting Hamiltonians known to have exponential decay of correlations.

Table 1 :
Summary of underlying assumptions and sample complexity of other approaches to perform tomography on quantum many-body states.
1 (Growth of differential Lipschitz constant for local evolutions). Let (∇, σ) be a differential structure on M_{d^n} and let O = Σ_i O_i be an observable with ∥O_i∥_∞ ≤ 1. Let A_i denote the support of each O_i and B_j that of each L_j. Moreover, let t → Φ_t be an evolution satisfying Eq. (LR1), and set o(i, j)(t) = 2 if A_i ∩ B_j ≠ ∅ and o(i, j)(t) = c(e^{vt} − 1) g(dist(A_i, B_j)) otherwise. Then: