Quantum advantage from energy measurements of many-body quantum systems

The problem of sampling outputs of quantum circuits has been proposed as a candidate for demonstrating a quantum computational advantage (sometimes referred to as quantum "supremacy"). In this work, we investigate whether quantum advantage demonstrations can be achieved for more physically motivated sampling problems, related to measurements of physical observables. We focus on the problem of sampling the outcomes of an energy measurement, performed on a simple-to-prepare product quantum state, a problem we refer to as energy sampling. For different regimes of measurement resolution and measurement errors, we provide complexity-theoretic arguments showing that the existence of efficient classical algorithms for energy sampling is unlikely. In particular, we describe a family of Hamiltonians with nearest-neighbour interactions on a 2D lattice that can be efficiently measured with high resolution using a quantum circuit of commuting gates (an IQP circuit), whereas an efficient classical simulation of this process should be impossible. In this high-resolution regime, which can only be achieved for Hamiltonians that can be exponentially fast-forwarded, it is possible to use current theoretical tools tying quantum advantage statements to a polynomial-hierarchy collapse, whereas for lower-resolution measurements such arguments fail. Nevertheless, we show that efficient classical algorithms for low-resolution energy sampling can still be ruled out if we assume that quantum computers are strictly more powerful than classical ones. We believe our work brings a new perspective to the problem of demonstrating quantum advantage and leads to interesting new questions in Hamiltonian complexity.


Introduction
Impressive recent developments in experimental quantum physics are enabling the manipulation of many-body quantum systems of larger and larger sizes. The high degree of control and local resolution of measurement reached in experimental platforms such as quantum gas microscopes [1], Rydberg atoms manipulated with optical tweezers [2], ion traps [3,4], or superconducting circuits [5,6], are moving these experiments closer to the quantum advantage frontier: a regime that is challenging to model using our traditional computers. Experiments at this scale should lead to new insights into important problems in many-body physics. For example, recent developments of many-body interferometric techniques to estimate the entanglement entropy [7,8] have opened experimental access to the investigation of quantum thermalization [9]. Similarly, access to complex many-body correlators on large-size many-body systems has enabled the experimental study of quantum critical dynamics and dynamical phase transitions [2,3,10], many-body localization [11,12], scrambling [13,14] and topological order [15,16].
Several experimental demonstrations of large-scale quantum simulators that outperform certain classical simulation methods have already been reported [2,3,11,17,18]. Unfortunately, the evidence for quantum advantage in these experiments is based solely on numerical benchmarks against available classical algorithms such as, e.g., DMRG [17]. Hence, this does not exclude the possibility that a new classical algorithm performs as efficiently as a given quantum simulator or quantum algorithm, for a problem where it was previously thought there was an exponential quantum speed-up. A remarkable example where this happened is the recent work "dequantizing" certain quantum machine learning algorithms [19].
For this reason, it is of utmost importance to put statements about quantum advantage on rigorous mathematical ground. This has been the subject of several recent works which demonstrate, based on strong complexity-theoretic evidence, that there are certain tasks that can be performed efficiently by quantum devices for which an efficient classical algorithm cannot exist. These are based on sampling problems that exhibit a certain robustness against noise and are tailored to near-term hardware. Examples include boson sampling [20], IQP sampling [21], sampling from random quantum circuits [6,22] and quantum simulations of constant-time Hamiltonian evolutions [23,24]. The key strength of these results is that the existence of a quantum advantage is provable assuming plausible complexity-theoretic conjectures, such as the non-collapse of the Polynomial Hierarchy (a commonly made assumption in theoretical computer science, which can be seen as a generalization of the P ≠ NP conjecture) [20,21]. The prospect of demonstrating exponential quantum speed-ups in a reliable way has initiated a new field of theoretical and experimental activity coined quantum computational advantage (or quantum "supremacy") [25,26].
This has motivated several efforts to bring quantum advantage proposals closer to a realistic physical implementation. These efforts have largely focused on finding approximate sampling problems that are robust against certain experimental errors [20,21]; tailoring quantum sampling problems to existing implementations [22,24,27]; verifying such devices with efficient quantum resources [23,24,28] or exponential classical ones [22,29-31]. Some works have also brought a many-body physics perspective to previously existing quantum advantage proposals, through the study of the connections between transitions in sampling complexity and dynamical phase transitions [32-34]. All these works, however, mostly rely on "unphysical" sampling problems that were discovered for the sole purpose of demonstrating a quantum advantage, and further connections with questions of central interest in many-body physics are yet to be explored.
In this work, we take a step further towards identifying physically-motivated quantum advantages in many-body quantum systems. We ask whether complexity-theoretic results can deliver reliable quantum advantages for the measurement of a physically meaningful observable or the calculation of quantities of physical relevance. Specifically, we investigate whether quantum advantages can be related to the measurement of the energy of a many-body quantum system. Such measurements can be implemented, for example, on a quantum computer via quantum phase estimation [35,36], or on analog quantum simulators [37].
In retrospect, one could interpret the work by Huh and collaborators [38] as a first attempt in this direction. This work connects a quantum "supremacy" device, namely a Gaussian boson sampler [39], to the problem of determining the vibrational spectrum of a molecule [40]. However, this work did not prove this problem to be hard for a classical computer and, in practice, there exist classical algorithms that compute this spectrum for molecules with a hundred harmonic vibrational modes on a desktop computer within a few minutes [41].
Hence, demonstrating a conclusive physically motivated quantum advantage remains a difficult milestone. We would like such a quantum advantage proposal to meet two desiderata:
• It describes a physical experiment that efficiently measures or estimates a relevant many-body observable or quantity.
• It provides a rigorous mathematical proof of the impossibility of simulating the outcome of the physical experiment efficiently with classical computers.
In this work we focus on the task of sampling the outcomes of an energy measurement of a many-body quantum system, a problem we refer to as energy sampling. Naturally, the complexity of this task depends on the different parameters characterizing the outcome probability distribution, such as the measurement resolution or other errors affecting the measurement device. Our main contribution is to present complexity-theoretic arguments that likely exclude the existence of efficient classical simulators for the energy sampling problem in different parameter regimes. In particular, we demonstrate that for Hamiltonians that can be measured efficiently by quantum devices with very high resolution, it is possible to demonstrate quantum advantage for the energy sampling problem based on the widely believed conjecture of the non-collapse of the Polynomial Hierarchy, together with other standard assumptions [20,21]. We give an explicit example of a simple family of Hamiltonians (e.g., with nearest-neighbour interactions on the 2D square lattice) for which energy measurements are hard to simulate on classical computers, yet should be relatively feasible to measure on a near-term quantum device that is able to approximately sample from 2D circuits of commuting gates [24]. Interestingly, for this example, the correct functioning of the quantum measurement device can be efficiently verified using existing fidelity-witness methods [24], if reliable single-qubit measurements are available. This leads to a conceivable quantum advantage proposal based on measurements of many-body Hamiltonians. We further discuss the limitations of current theoretical tools in proving quantum advantage for energy sampling in low-resolution regimes, and how this connects to the fundamental problem of proving that quantum computers are strictly more powerful than classical ones. For our proof of quantum advantage, we introduce the concept of quantum diagonalizable Hamiltonians.
This defines the set of Hamiltonians for which one can efficiently obtain, using a quantum computer, a description of the diagonalizing unitary as a poly-size quantum circuit, as well as efficiently compute the eigenvalues. We believe this concept can be of relevance outside the scope of this work and may lead to new investigations of speed-ups with respect to classical algorithms.

Measurement statistics and parameter regimes
Before summarizing our contributions in more detail we introduce some terminology regarding the parameters characterizing an energy measurement as well as the different measurement regimes achievable by quantum devices (a more detailed discussion is presented in Sec. 2).
A model of the measurement outcome statistics needs to take into consideration the imperfect nature of a realistic measurement. A schematic representation of an energy measurement and the parameters we use in this work to characterize it is depicted in Figure 1. Following Refs. [42-44], we characterize the quality of a measurement by the measurement resolution δ, which sets the smallest measurement unit, and the measurement confidence η. For example, an energy measurement of an eigenstate |ψ_E⟩ with energy E is said to have resolution δ and confidence η if it outputs an estimate Ê such that

Pr[ |Ê − E| ≤ δ ] ≥ η.    (1)

It will be useful to also define the parameter ε = 1 − η, denoting the probability of failure of the measurement. A generalization of Eq. (1) for arbitrary input states (see Sec. 2) defines the target probability distribution we would like to sample from. The finite resolution and measurement confidence result from natural limitations, such as a finite measurement time or energy, which are present even if we assume a noiseless measurement device. In addition, to take into account the unavoidable presence of noise in the implementation of a realistic measurement, we introduce the sampling error parameter β. This parameter quantifies the deviation in ℓ1-norm between the actual outcome distribution and the target distribution of an ideal measurement of resolution δ and confidence η.
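To make the parameters concrete, the following self-contained numerical sketch (an illustration of ours, not a construction from the paper) builds the ideal outcome distribution of a resolution-δ, perfect-confidence (η = 1) energy measurement for a small random Hamiltonian, with the spectrum rescaled to [0, 1] as assumed later in the text:

```python
import numpy as np

# Illustrative sketch: ideal outcome statistics of a resolution-delta energy
# measurement on a small random "Hamiltonian" (all choices here are toy assumptions).
rng = np.random.default_rng(0)
n = 3                                   # number of qubits
dim = 2 ** n
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                # random Hermitian matrix
evals, evecs = np.linalg.eigh(H)
evals = (evals - evals.min()) / (evals.max() - evals.min())  # rescale spectrum to [0, 1]

psi = np.ones(dim) / np.sqrt(dim)       # simple product state |+>^n
p = np.abs(evecs.conj().T @ psi) ** 2   # Born-rule weights p(lambda_i)

delta = 1 / 8                           # measurement resolution (delta = 1/K)
grid = np.arange(0, 1 + delta, delta)   # allowed outcomes {0, delta, ..., 1}
# A perfect-confidence device reports the grid point nearest each eigenvalue:
outcome_probs = np.zeros(len(grid))
for lam, w in zip(evals, p):
    outcome_probs[int(round(lam / delta))] += w

assert np.isclose(outcome_probs.sum(), 1.0)
```

A confidence η < 1 would smear a fraction 1 − η of each eigenvalue's weight outside its nearest bin; the sampling error β would further perturb `outcome_probs` by up to β in total variation distance.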
In order to achieve a certain measurement resolution δ, widely used quantum measurement models, such as the von Neumann model or quantum phase estimation (Sec. 2.1), require a scaling of the resources needed to perform the measurement which is polynomial in the inverse resolution, i.e., poly(1/δ). Typically, the resources are quantified by the time required to perform the experiment (assuming a fixed interaction strength between the system and a pointer variable [45]) or by the number of gates of a quantum circuit that implements the desired measurement. We will refer to a measurement with this performance as a standard-resolution measurement, since it can efficiently achieve what we refer to as standard resolution, where δ = 1/poly(n). This can be seen as a coarse-grained energy measurement since, in general, it is not able to distinguish each of the exponentially many eigenvalues. Nevertheless, for unknown Hamiltonians it is the best that can be achieved efficiently. It has been demonstrated that if the Hamiltonian is unknown but its time evolution can be implemented (as in an experimental setting), an energy-time uncertainty relation is obeyed, implying that the measurement time will be inversely proportional to the targeted energy precision [46]. On the other hand, as discussed in [44] and in Sec. 2.3, in some specific situations one can exploit knowledge of the Hamiltonian to achieve what we refer to as a super-resolution measurement, where the scaling of resources is O(poly(log(1/δ))). This allows us to perform a much more accurate measurement which achieves super-resolution efficiently, i.e., an exponentially small measurement resolution δ = 1/exp(n).
For our purpose it is sufficient to divide the sampling error parameter β into two regimes. We define the measurement as "approximate" [20,21] when the desired sampling error β is required only to be some constant independent of the system size n. Our results on hardness of approximate sampling extend to the regime β = 1/poly(n). Moreover, we say we are in the "near-exact" sampling regime if the sampling error β is required to be inverse-exponential in the input size.

Summary of results
Our results on classical hardness of simulating energy measurements concern the previously defined regimes of resolution and sampling errors, as summarized in Table 1 and in more detail below. For the sake of clarity we omit the confidence parameter η, which can be taken to be η = 1 − O(β). We provide complexity-theoretic evidence that an efficient classical simulation of energy measurements should not be possible, and discuss how the latter provides a test of quantum advantage for suitable resolution and sampling error regimes. Due to their relevance in describing physical systems, we focus on measurements of k-local Hamiltonians acting on n qubits (two-level quantum systems), i.e., Hamiltonians of the form H = Σ_j H_j, where each term H_j acts on k qubits, for constant k ∈ O(1). Our main contributions are the following:

(i) We provide quantum advantage protocols for approximate super-resolution energy measurements (Sec. 3). Specifically, we consider Hamiltonians with nearest-neighbor interactions on 2D lattices that can be efficiently diagonalized on a quantum computer. For the latter, we show, first, that approximate super-resolution measurements can be implemented by building an approximate sampler from the diagonalizing quantum circuit (Theorem 1). At the same time, we prove that these measurements are hard to simulate classically assuming plausible complexity-theoretic conjectures (Corollary 1). This leads to a verifiable quantum advantage result based on energy measurements that could be feasibly implemented in available quantum simulators. These results exploit a connection between quantum advantage proposals based on simulating constant-time Hamiltonian dynamics [23,24] and energy measurement problems.

Figure 1: Given a Hamiltonian H, an energy measurement can be modelled as a procedure that takes as input a quantum state ρ_A and outputs a measurement outcome E_i, as well as a post-measurement state σ_A^i. We characterize a noiseless measurement (inside the smaller blue box) by its measurement resolution δ, which determines the accuracy of the output value E_i, and its failure probability ε (see Eqs. (1) and (4)). Moreover, to characterize a noisy measurement (inside the larger brown box), we introduce an extra parameter β that quantifies the sampling error, in total variation distance, between the noiseless probability distribution and the observed one (represented by the blue and brown histogram bars, respectively). These probability distributions are defined explicitly in Definitions 2 and 3.
(ii) Super-resolution measurement procedures for arbitrary Hamiltonians are unlikely to exist, based on complexity-theoretic evidence [44,47]. For this reason, we investigate the hardness of energy measurements with standard resolution. In Sec. 4, we give complexity-theoretic evidence that classical computers cannot efficiently simulate energy measurements with standard resolution, even for simple translation-invariant nearest-neighbor Hamiltonians on the square lattice (Theorem 3). Analogously to results obtained in Refs. [48-50] for other sampling problems, our hardness result is valid in the near-exact sampling regime, where β = 0 or is inverse-exponential. We give two hardness proofs, one based on the quantum advantage proposal of [24], the other based on circuit-to-Hamiltonian constructions [51].
(iii) Ideally, one would like a physically motivated quantum advantage experiment based on approximate sampling problems with standard resolution, which are more resilient to imperfections. However, in Sec. 5, we argue that, with current techniques, it is not possible to link the classical hardness of this problem to a Polynomial Hierarchy (PH) collapse as in Refs. [20,21]. As an intermediate step, we provide alternative hardness results inspired by the BQP-hardness of this problem [42,43]. Using circuit-to-Hamiltonian constructions [51], we show that a hypothetical classical simulator for energy measurements of local Hamiltonians could be used to approximate arbitrary marginals of the output distribution of any poly-sized quantum circuit (Theorem 4). Based on the hardness of simulating universal quantum circuits [22,29,31], these results give evidence that approximately measuring a local Feynman-Kitaev Hamiltonian in the standard-resolution regime is classically intractable. As we will discuss in our manuscript, an open challenge in complexity theory would be to tie these hardness results to a Polynomial Hierarchy (PH) complexity-theoretic collapse. Such a result could have additional implications for the development of quantum protocols exhibiting physically-motivated quantum advantages.

Table 1: For the different regimes of resolution δ and sampling error β, we show the complexity-theoretic implications of the existence of an efficient classical algorithm for sampling outcomes of energy measurements, corresponding to the local Hamiltonians we construct. The cells in grey correspond to problems that admit efficient quantum algorithms. In particular, in Sec. 3, we describe an efficient quantum protocol for approximate super-resolution energy measurements, which could be used to demonstrate a quantum advantage. This result, marked by "*", requires plausible complexity-theoretic conjectures beyond the non-collapse of the Polynomial Hierarchy (PH).

Setting
In this section we set up the framework to discuss quantum advantage for measurements of many-body Hamiltonians. We start in Sec. 2.1 by discussing two ubiquitous quantum measurement protocols: the von Neumann pointer, for analog devices, and quantum phase estimation, for digital quantum computers. We discuss how our ability to measure is limited by experimental noise as well as by physical restrictions on available resources, such as time, energy, or quantum gate counts. These limitations motivate us to define the problem of approximate Energy Sampling in Sec. 2.2, where we introduce precisely the parameters mentioned in Sec. 1.1 characterizing the probability distribution of an imperfect energy measurement. Finally, we discuss in Sec. 2.3 the different parameter regimes and how they can be achieved by quantum devices.

Measurement models and their limitations: from the von Neumann pointer to quantum phase estimation
Let us consider a physical observable Ô_A, with eigenvectors |ψ_i⟩ and eigenvalues λ_i, and a quantum system A in state ρ_A, where both operators act on a finite-dimensional Hilbert space H. Upon an ideal measurement of this observable on system A, the probability of obtaining an outcome λ_i is given by

p(λ_i) = Tr[Π_i ρ_A],    (2)

where Π_i is the spectral projection onto the eigenstates with eigenvalue λ_i. In a realistic physical implementation of a quantum measurement, though, the outcome probability distribution deviates from the ideal one and is characterized by a finite resolution and other error parameters. To understand the fundamental limitations of quantum measurements, let us take as an example the von Neumann pointer model of quantum measurement [45]. In this model, the system under study interacts with a continuous-variable pointer register R for a time t through the unitary coupling

U_meas = exp(−i α t Ô_A ⊗ p̂_R),    (3)

where p̂_R is the momentum operator of the pointer register and α is a parameter that captures the strength of the interaction. If ρ_A corresponds to an eigenvector |ψ_i⟩⟨ψ_i| of Ô_A, the effect of U_meas will be to apply a continuous shift of αλ_i t to the pointer register R. Therefore, by having access to the value of the position of the pointer we can infer the outcome λ_i. For example, the proposal for quantum non-demolition energy measurements of many-body systems from [14] consists in an implementation of a von Neumann pointer. Naturally, experimental constraints, such as the finite width of the initial pointer state, a limited accuracy of the measurement of the pointer position, or a finite interaction strength α and interaction time t, impose intrinsic limitations on the resolution of the measurement process and the probability that it succeeds. In addition, a realistic measurement process is only able to achieve a noisy approximation of the ideal unitary U_meas, which results in further errors in the outcome probability distribution.
These limitations justify the introduction of the parameters δ, η and β presented in Section 1.2 (see also Figure 1) and rigorously defined in the next subsection. Quantum algorithms for simulating measurements of physical observables also suffer from limitations and imperfections. In this case, the measurement resolution, as well as the different measurement errors, are determined by the finite number of quantum gates, as well as by the noise and imperfections in the implementation of these gates. It is well known that an observable Ô_A can be measured using the quantum phase estimation (QPE) algorithm [35,36], represented in Figure 2. The QPE algorithm is a quantum algorithm for estimating the eigenphases of a unitary matrix, which can easily be converted into an algorithm for estimating the eigenvalues of Ô_A by taking the unitary Û = exp(i2πÔ_A/Λ), where Λ ≥ ||Ô_A|| is an upper bound on the norm of Ô_A. This way, the eigenphases of Û can be understood as normalized outcomes of the measurement of Ô_A. In fact, it was discussed in Ref. [52] that the QPE algorithm can be seen as a discretized simulation of a von Neumann measurement. In this scenario, the ancillary register of phase estimation plays the role of a discretized von Neumann pointer, where every ancillary qubit encodes an additional bit of precision. Therefore, a quantum simulation of the measurement process will also be prone to errors and necessitates the introduction of the parameters δ, η and β. The same holds for any potential classical algorithm trying to simulate the quantum measurement process.
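The discretized-pointer picture can be checked numerically. The sketch below (ours, for illustration) computes the textbook QPE outcome probabilities for a single eigenphase φ of Û: with m ancilla qubits the device reports a grid value k/2^m, i.e., resolution δ = 2^{−m}, and the most likely outcome is the grid point closest to φ:

```python
import numpy as np

# Illustrative sketch (not the paper's construction): outcome probabilities of
# textbook quantum phase estimation for a single eigenphase phi.
# With m ancilla qubits, outcomes are k/2^m, i.e., resolution delta = 2**-m.
def qpe_outcome_probs(phi, m):
    M = 2 ** m
    t = np.arange(M)
    amps = np.exp(2j * np.pi * t * phi) / np.sqrt(M)  # phase kickback on the ancillas
    # The (inverse) QFT maps these amplitudes to the outcome amplitudes at each k:
    return np.abs(np.fft.fft(amps)) ** 2 / M

phi, m = 0.3141, 8
probs = qpe_outcome_probs(phi, m)
k_best = int(np.argmax(probs))
# The most likely outcome is an m-bit approximation of phi within one grid step:
assert abs(k_best / 2 ** m - phi) <= 2 ** -m
```

Increasing m by one qubit halves δ, which is the poly(log(1/δ)) ancilla cost; the poly(1/δ) bottleneck discussed below comes instead from implementing the controlled time evolutions.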

The Energy Sampling problem
Due to their simplicity and significance in condensed-matter physics, we will focus on measurement problems for local many-body Hamiltonians on two-level systems (qubits). Our framework can easily be extended to measurements of general many-body observables.
Definition 1 (Measurement resolution). An energy measurement on a state ρ is said to have resolution δ and confidence η if, for an input eigenstate |ψ_E⟩ with energy E, the outcome Ê satisfies

Pr[ |Ê − E| ≤ δ ] ≥ η.    (4)

We will also define the parameter ε = 1 − η, representing the probability of failure of the measurement.
Without loss of generality, we assume all observables throughout the text to have spectra contained in [0, 1]: any observable can be written in this form via a suitable rescaling.¹ Moreover, although in general an energy E can be any real number, any device performing an energy measurement has a discrete set of output values. For this reason, we discretize the real line into steps of size δ > 0 and assume a measurement outcome is given by a value E_m ∈ {0, δ, ..., 1 − δ, 1} (we take δ = 1/K for some positive integer K).² In principle, we could consider energy measurements on any state that can be efficiently prepared by a quantum device. However, we focus on measurements on product states, since we are interested in energy measurement problems whose complexity comes from the Hamiltonian and not from the state to be measured. Also, considering measurements on more general quantum states would only increase the classical complexity of the problem. Hence, we define the problem of Energy Sampling as follows.
Definition 2 (Energy Sampling). Given an n-qubit product quantum state ρ and an n-qubit local Hamiltonian H, sample from the outcome probability distribution of an energy measurement of H on ρ with resolution δ and confidence η, as represented in Figure 1.

Such a sampler can be used to build a histogram containing information about how the state ρ decomposes in the eigenbasis of the measured Hamiltonian, and thus to learn about the Hamiltonian spectrum. Namely, for a given outcome E_m, the probability p(E_m) can be reconstructed up to 1/poly(n) errors in probabilistic polynomial time.

¹ For a k-local Hamiltonian, the rescaling constant κ depends on the number of terms r, which can be at most O(n^k); specifically, κ ≤ (max_i ||h_i||) r.

² Strictly speaking, the constraints on the measurement outcome distribution from Eq. (4) could allow for the outcomes −δ or 1 + δ. We assume that these outcomes would be identified with the outcomes 0 and 1, respectively, via classical post-processing.
It is important to remark that Definition 2 is not robust to experimental imperfections, as the latter can introduce a sampling error in total variation distance. For this reason, our main interest will be the notion of approximate energy sampling, which allows us to consider laboratory errors.
Definition 3 (β-approximate Energy Sampling). Given an n-qubit product quantum state ρ and an n-qubit local Hamiltonian H, sample outcomes E_m with probabilities q_m such that this probability distribution is β-close in total variation distance to the outcome probability distribution of an energy sampler (Definition 2).
The parameter β quantifies how well the probabilities q m approximate the probability distribution of an energy measurement with resolution δ and confidence η. Hence, we will refer to the parameter β as the sampling error.

Regimes of resolution and error achievable by quantum devices
As anticipated in the previous section, theoretical knowledge about the Hamiltonian as well as experimental restrictions lead to different regimes of resolution, confidence and sampling error. In what follows we extend that presentation with some important remarks.
Standard-resolution measurements. For many ubiquitous measurement procedures, the time necessary to achieve resolution δ grows as poly(1/δ). Taking again as an example the von Neumann model, if we assume that the pointer is prepared as a wavepacket of a fixed width σ (fixed energy), it is possible to distinguish two consecutive eigenvalues E_1 and E_2 with high confidence by letting the system interact with the pointer for a time such that αt|E_2 − E_1| ≫ σ. Hence, a scaling of t = O(1/δ) is needed to achieve resolution δ, for a fixed value of the coupling α.
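This trade-off is easy to quantify in a toy model. Assuming (our assumption, for illustration) a Gaussian pointer wavepacket of width σ, the overlap between the two shifted pointer states controls the probability of confusing E_1 and E_2, and it only becomes small once αt|E_2 − E_1| exceeds σ:

```python
import numpy as np

# Toy model (ours): two eigenvalues E1, E2 shift a Gaussian pointer of width
# sigma by alpha*t*E; their overlap decays only once alpha*t*|E2-E1| >> sigma,
# so resolving a gap delta needs an interaction time t = O(1/delta).
def pointer_overlap(E1, E2, alpha, t, sigma):
    """Overlap <g1|g2> of two unit-norm Gaussian pointer states shifted by alpha*t*E."""
    s = alpha * t * abs(E2 - E1)
    return np.exp(-s ** 2 / (8 * sigma ** 2))

sigma, alpha, gap = 1.0, 1.0, 0.01
# Short time: the two pointer states are nearly indistinguishable.
assert pointer_overlap(0.5, 0.5 + gap, alpha, t=1.0, sigma=sigma) > 0.99
# Time t ~ 10/gap: the states are essentially orthogonal, so the gap is resolved.
assert pointer_overlap(0.5, 0.5 + gap, alpha, t=10 / gap, sigma=sigma) < 0.01
```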
For quantum algorithms that simulate energy measurements, such as QPE, the natural way to quantify their running time is based on the number of quantum gates applied. It is known that an energy measurement protocol based on QPE achieves a resolution δ with a number of gates scaling as poly(1/δ) [36]. This follows from the fact that the bottleneck for this algorithm is the implementation of an approximation of the time-evolution operator exp(−iHT) for time T = O(1/δ) in terms of quantum gates, which takes time poly(1/δ, ||H||) using standard quantum simulation methods [53,54]. More generally, it was shown in Ref. [44] that if the Hamiltonian is unknown and we can only access the Hamiltonian evolution as a black box, or even if its eigenstates are known but there is no information about its eigenvalues, a number of gates scaling as poly(1/δ) is the best that can be achieved by any quantum algorithm.
An alternative way to approximate the probability distribution of an energy measurement on a state ρ is by considering its Fourier transform

P(ω) = Σ_i p(λ_i) δ(ω − λ_i) = (1/2π) ∫ dt e^{iωt} χ(t),

where λ_i are the eigenvalues of the Hamiltonian, p(λ_i) is defined in Eq. (2), and χ(t) = Tr[ρ e^{−iHt}] is the expected value of the time-evolution operator. The function P(ω) can be approximated by measuring the correlators χ(t) at different times t and calculating the discrete Fourier transform of these values [55,56]. Such correlators can be measured via a many-body interferometric experiment akin to, for example, that of Ref. [57]. In this case, if we want to improve the resolution of the approximation by a factor of x, it is necessary to measure x times more values of the correlator χ(t), where the longest time will be x times larger than before. It is then expected that the time needed to run the experiment grows quadratically with the inverse resolution δ^{-1}.
Super-resolution measurements. By exploiting certain knowledge about the Hamiltonian, it is sometimes possible to construct quantum algorithms for energy measurements whose running time is poly(log(1/δ)), i.e., the cost of increasing the resolution (decreasing δ) grows polynomially in the number of digits of δ, instead of in δ itself. We say that such a measurement procedure is a super-resolution measurement, as it allows one to resolve even exponentially small energy gaps of a Hamiltonian, with a cost increasing only polynomially in n. It was demonstrated in Ref. [44] that this regime can be achieved by a quantum algorithm iff the corresponding Hamiltonian can be exponentially fast-forwarded, i.e., the time evolution Û = exp(−iHT) can be implemented for exponentially large T using only polynomially many quantum gates. It is important to note, however, that it is not known how to implement super-resolution measurements of all local or sparse Hamiltonians, and indeed there is strong complexity-theoretic evidence that this is impossible [44,47]: if such a quantum procedure existed, quantum computers would be able to solve any problem in PSPACE, which is considered very unlikely.
Near-exact sampling. It is interesting to consider the regime where the sampling error β is 0 or inverse-exponential in n (β = 1/2^poly(n)), in order to understand the hardness of nearly-exact simulations of ideal (noiseless) quantum devices [20,48,58,59]. As we will show, classical hardness results can be demonstrated in this regime using the widely believed computational complexity-theoretic conjecture that the Polynomial Hierarchy (PH) does not collapse (see Sec. 3.3).
As an important side remark, we note that achieving the standard resolution and near-exact sampling regime via QPE is non-trivial, even assuming noiseless quantum gates. To achieve this regime, it is necessary to efficiently approximate the time-evolution operator U = e^{−iHT} up to an inverse-exponential error, which is not possible with methods based on product formulas, such as the original quantum simulation proposal of Lloyd [53]. However, this problem can be solved thanks to recently developed quantum simulation algorithms, which are exponentially more precise than Lloyd's original proposal and are applicable to most Hamiltonians of interest, such as local, sparse, or low-rank ones [54,60-63].
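The limitation of product formulas can be seen in a minimal sketch (toy single-qubit terms of our own choosing): the first-order Trotter error decays only polynomially in the number of steps r, so each extra digit of precision costs a constant multiplicative factor in r, and reaching inverse-exponential error would require exponentially many steps.

```python
import numpy as np
from scipy.linalg import expm

# Toy non-commuting terms (our choice): H = X + Z on one qubit.
X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
H, T = X + Z, 1.0
exact = expm(-1j * H * T)

def trotter_error(r):
    # first-order product formula: (e^{-iXT/r} e^{-iZT/r})^r
    step = expm(-1j * X * T / r) @ expm(-1j * Z * T / r)
    return np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)

e10, e100 = trotter_error(10), trotter_error(100)
print(e10, e100)  # error shrinks roughly as 1/r
```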
Approximate sampling. The sampling error considered in the near-exact sampling regime is extremely demanding, and it is not known whether it can be reached even by fault-tolerant universal quantum devices [64]. For this reason, several quantum advantage protocols that are robust against certain sampling errors have been developed [20,21,25]. These consider approximate sampling problems, where the sampling error β is a small constant. However, such proofs require the introduction of additional computational complexity conjectures, as will be discussed in more detail in Sec. 3.4. Furthermore, we discuss in Sec. 5 that current techniques to demonstrate hardness of approximate sampling fail for energy sampling problems with standard resolution, as they become sampling problems with a small output space.
Confidence regimes. For the purposes of our discussion on classical hardness of the energy sampling problem, we can take the value of the measurement failure probability ε = 1 − η to be of the same order of magnitude as the sampling error β, i.e., in the approximate sampling regime we can tolerate a small constant value of ε, whereas in the near-exact sampling regime we require ε = 1/2^poly(n).
Using a standard energy measurement procedure such as quantum phase estimation, a resolution δ = 2^−l can be achieved with failure probability ε by using l + ⌈log(2 + (2ε)^−1)⌉ ancillary qubits [65]. The number of gates with this approach scales as O(poly(δ^−1, ε^−1)), and so a small constant value of ε can be achieved with a constant overhead. Furthermore, any quantum algorithm for energy measurements achieving a failure probability ε ≤ 1/2 − 1/poly(n) can be efficiently converted into a procedure achieving an exponentially small failure probability ε = 1/2^poly(n) via confidence amplification methods [44].
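The ancilla count above can be computed directly; the helper below is a small convenience function (our own naming) implementing the standard textbook bound, assuming base-2 logarithms.

```python
import math

# Standard QPE ancilla count: to reach resolution delta = 2^-l with failure
# probability eps, use t = l + ceil(log2(2 + 1/(2*eps))) ancillary qubits.
def qpe_ancillas(l: int, eps: float) -> int:
    return l + math.ceil(math.log2(2 + 1 / (2 * eps)))

print(qpe_ancillas(10, 0.1))   # constant eps -> only a constant overhead over l
print(qpe_ancillas(10, 0.01))  # smaller eps costs only ~log(1/eps) extra qubits
```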

Quantum advantage for super-resolution energy measurements
As we previously mentioned, Ref. [44] proves the connection between energy measurements with super-resolution and the capability to exponentially fast-forward the time evolution of the Hamiltonian. The intuition behind this result comes from the combination of three concepts: (i) the connection between energy measurements and QPE; (ii) the capability to exponentially fast-forward the time evolution of a Hamiltonian; (iii) the quantum parallelization achieved in QPE, which allows exponentially many more queries than a classical Fourier transform. The authors of [44] provide some examples of Hamiltonians amenable to an exponential speed-up of the time evolution: commuting local Hamiltonians, a Hamiltonian constructed from the modular exponentiation unitary in Shor's algorithm, and free fermions. The reason the time evolution of free fermions can be exponentially fast-forwarded (which also applies to free bosons) is that the diagonalization of the Hamiltonian is known [66,67], which allows one to construct a quantum circuit that accelerates the simulation of the time evolution. Interestingly, this last set of examples can be generalized to define a potentially larger set that we name Quantum Diagonalizable Hamiltonians.

Quantum Hamiltonian Diagonalization
In full generality, we say that a Hamiltonian H̃ is quantum diagonalizable if

H̃ = U† H_f U,    (6)

where U is a poly-size quantum circuit and H_f is a diagonal matrix in the computational basis, written as

H_f = Σ_z f(z) |z⟩⟨z|,    (7)

where the gate decomposition of the circuit U can be obtained efficiently using a quantum computer and there is a quantum circuit that computes f(z) up to a given number of digits of precision l in time poly(l).
The two conditions on U and f(z) guarantee the existence of a poly-size quantum circuit that can exponentially fast-forward the Hamiltonian time evolution (see Figure 3a). Hybrid quantum-classical algorithms for finding quantum circuits that approximately diagonalize a Hamiltonian have been proposed as a way to develop more efficient Hamiltonian simulation procedures [68]. In this work we restrict to cases where both the gate decomposition of U and the function f(z) can be computed efficiently classically, which allows us to consider simple energy measurement procedures in Sec. 3.2. Below, we explain how to build a circuit for fast-forwarding a quantum diagonalizable Hamiltonian H̃ by analyzing how the circuit acts on an eigenstate, which has the form |ψ_z⟩ = U†|z⟩. The steps of the circuit are the following:

1. Apply U to |ψ_z⟩ to obtain the state |z⟩.
Figure 3: a) Circuit fast-forwarding the time evolution of a quantum diagonalizable Hamiltonian H̃: exploiting the structure of H̃, the time evolution can be implemented efficiently even for exponentially large time T. b) If the Hamiltonian has a quantum diagonalization, the energy measurements can be performed by sampling the outcomes z from the quantum circuit implementing U and computing the eigenvalues via the function f(z). This procedure is simpler than QPE, and exponentially precise energy measurements can be achieved efficiently.
2. For a given desired evolution time T, compute an a-bit approximation φ̃_{z,T} of the phase φ_{z,T} = f(z)T on an ancillary register, creating the state |z⟩|φ̃_{z,T}⟩. This takes time O(poly(log(T), log(n))), since we assume that l bits of f(z) can be computed in poly(l) time.

3. Apply the controlled phase to obtain e^{−iφ̃_{z,T}} |z⟩|φ̃_{z,T}⟩.

4. Undo the computation of φ̃_{z,T} in the ancillary register to obtain the state e^{−iφ̃_{z,T}} |z⟩|0⟩^⊗a.

5. Apply U† to return to the original basis, obtaining the state e^{−iφ̃_{z,T}} |ψ_z⟩|0⟩^⊗a.
This quantum circuit, sketched in Figure 3a, implements an approximation of the time-evolution operator e^{−iH̃T}, which shows that H̃ is exponentially fast-forwardable according to the definition in Ref. [44] (see Appendix A). We leave open the problem of how quantum diagonalizable Hamiltonians relate to the potentially larger class of exponentially fast-forwardable Hamiltonians. Also, it is important to remark that, since the quantum diagonalization assumptions imply the ability to fast-forward, it is unlikely that arbitrary Hamiltonians are quantum diagonalizable; otherwise, this would imply a general procedure for exponentially precise energy measurements of arbitrary Hamiltonians which, combined with the results in [44,47], implies BQP = PSPACE.
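The fast-forwarding identity underlying this construction can be checked numerically on a toy example (the random unitary and random eigenvalue table below are our own stand-ins for the poly-size circuit U and the efficiently computable f(z)):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 3
dim = 2 ** n
# Stand-in for the diagonalizing circuit U: a random unitary from QR.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
f = rng.uniform(0, 1, size=dim)          # stand-in for the computable f(z)
H = Q.conj().T @ np.diag(f) @ Q          # H = U^dag H_f U

T = 1e5                                   # a "long" evolution time
# Fast-forwarded evolution: only the phases f(z)*T need to be computed.
fast_forward = Q.conj().T @ np.diag(np.exp(-1j * f * T)) @ Q
direct = expm(-1j * H * T)               # direct matrix exponential, for comparison
err = np.linalg.norm(fast_forward - direct, 2)
print(err)  # agreement up to floating-point error
```

The point of the construction is that the left-hand path costs only the evaluation of the phases f(z)T, independent of how large T is.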

Super-resolution energy measurements for Hamiltonians with known diagonalization
The motivation to restrict f(z) to functions that can be computed efficiently classically, instead of the more general case where f(z) can be computed by a quantum circuit, is that it allows us to connect super-resolution energy measurements of H̃ to the problem of sampling from U. In fact, given that the quantum diagonalization of the Hamiltonian is known, super-resolution can be achieved by sampling from the quantum circuit U, together with some classical postprocessing to compute f(z), as schematically depicted in Fig. 3b. Furthermore, the generated distribution can be characterized by the parameters δ, η and β defined in Sec. 2.2.
In order to show this, let us assume we have access to an approximate sampler from the outputs of U, i.e., a device that samples approximately from the probabilities P_z = |⟨z|U|ψ⟩|². More precisely, we define the problem of approximate U-sampling as follows.

Definition 4 (β-approximate U-sampling). Given an initial state |ψ⟩ and a unitary U, sample outcomes z with probabilities P̃_z such that Σ_z |P̃_z − P_z| ≤ β.
We will refer to such a sampler as a β-approximate sampler for U. We can now demonstrate the following.

Theorem 1 (Quantum algorithm for super-resolution energy measurements). Consider any quantum diagonalizable Hamiltonian H̃ = U† H_f U as in (6). Then, the following quantum algorithm efficiently solves the β-approximate Energy Sampling problem for Hamiltonian H̃, with the initial state |ψ⟩ and parameters η = 1 and δ = 2^−l:

• Query a β-approximate sampler for U, with initial state |ψ⟩.
• Given an outcome z, output an l-digit approximation of the value f(z).
Theorem 1 provides a simple quantum procedure for super-resolution energy measurements, since l digits of precision can be achieved in poly(l) time. The procedure is represented schematically in Fig. 3b and can be used to bypass the QPE algorithm, which requires further overhead. The details of the proof of Theorem 1 are given in Appendix B, but the result can be understood intuitively. As expected, an l-digit accuracy in the computation of f(z) translates into an l-digit resolution δ of the energy measurement. Moreover, the finite total variation distance β of the approximate sampler for U implies that the output distribution of the algorithm solves a β-approximate energy sampling problem. Finally, since we have assumed f(z) can be computed deterministically, the confidence η is 1.
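The two steps of the algorithm in Theorem 1 can be sketched numerically (the unitary, input state and eigenvalue function below are illustrative choices of ours, not the paper's construction): sample z from |⟨z|U|ψ⟩|², then report an l-digit approximation of f(z).

```python
import numpy as np

rng = np.random.default_rng(2)
n, shots = 3, 5000
dim = 2 ** n
# Illustrative diagonalizing unitary U (random) and eigenvalue function f.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
f = lambda z: z / dim                    # invertible, efficiently computable

psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0                              # simple-to-prepare input state
P = np.abs(Q @ psi) ** 2                  # P_z = |<z|U|psi>|^2

l = 10                                    # resolution delta = 2^-l
samples = rng.choice(dim, size=shots, p=P)
energies = np.round(np.array([f(z) for z in samples]) * 2 ** l) / 2 ** l

emp = np.bincount(samples, minlength=dim) / shots
print(np.max(np.abs(emp - P)))            # small statistical fluctuation only
```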
This provides a reinterpretation of the result in [38] as a proposal for super-resolution energy measurements of the vibronic spectra of a molecule (described by a quadratic bosonic Hamiltonian) via a more economical quantum circuit than the traditional quantum phase estimation algorithm.³ The discussion above also generalizes that result to any Hamiltonian with a quantum diagonalization.

Classical hardness of super-resolution Energy Sampling
In this subsection, we present our quantum advantage result based on super-resolution energy measurements of local Hamiltonians. After introducing the diagonalizable local Hamiltonians of interest for our proof, we show how the existence of an efficient classical (quantum) algorithm for the energy sampling problem implies also an efficient classical (quantum) solution to the problem of sampling from its diagonalization unitary (see Theorem 2). Exploiting known results on the hardness of sampling unitary circuits, we obtain as a corollary (Corollary 1) the existence of a simple class of local Hamiltonians for which super-resolution measurements can be feasibly implemented in a quantum device but not efficiently classically simulated (assuming plausible complexity theoretic statements). Our proof is constructive and applies to a family of Hamiltonians diagonalizable by IQP circuits [21,24] (i.e., quantum circuits that are diagonal in the X Pauli basis). However, the ideas behind the proof are general and can be applied to find other hard energy sampling problems, by considering Hamiltonians that are diagonalized by quantum circuits from other quantum advantage proposals.
The Hamiltonian. Specifically, our Hamiltonian family is defined by conjugating a diagonal Hamiltonian of form (7) with an IQP unitary. For physical reasons, we further focus on a specific family of nearest-neighbour IQP circuits on the 2D square lattice L_2D³

Footnote 3: In a nutshell, the Franck-Condon profile at zero temperature, which, under certain approximations, gives the vibronic spectrum of a molecule, can be seen as the distribution obtained by measuring the energy of the vacuum state of a given bosonic quadratic Hamiltonian H, according to a different bosonic quadratic Hamiltonian H′. If we choose our basis as the Fock basis of H, it is possible to compute classically the parameters describing the Gaussian transformation U that diagonalizes H′ and write H′ = U† H_f U, i.e., find a quantum diagonalization for H′. The proposal of Ref. [38] for sampling from the Franck-Condon profile can thus be seen as an instance of the scheme for energy measurements discussed in Theorem 1 and represented in Fig. 3b.

Figure 4: The Hamiltonian of Eq. (9), and three of its local terms H_a, H_b and H_c. Each of these terms is a 5-local Hamiltonian acting on a given qubit (a, b, c) and its nearest neighbors on the 2D lattice (represented inside each one of the three squares). We remark that each term can have a different weight. When all the weights are the same, as in Eq. (9), the Hamiltonian is translationally invariant.
[24]. The latter implements a unitary corresponding to a constant-time evolution of a translation-invariant nearest-neighbour Hamiltonian on L_2D. The Hamiltonian we use to define an energy measurement problem is of the form

H_2D[w] = U†_2D ( Σ_j w_j n̂_j ) U_2D,    (9)

where n̂_j = |1⟩⟨1|_j acts locally on qubit j and w = (w_1, ..., w_n) is a set of real-valued weights. It is easy to see that the unitary (8) implements a product of controlled-Z [65] gates in the X basis, whose action on Pauli operators can be analyzed in the stabilizer formalism [69]. Using this property, we arrive at the final expression for our Hamiltonian:

H_2D[w] = Σ_j w_j ( I − X_j Π_{k∈N(j)} Z_k ) / 2,    (10)

where N(j) denotes the set of nearest neighbours of qubit j on L_2D. This Hamiltonian is 5-local, with local terms represented in Fig. 4. It is a "weighted" non-translation-invariant variation of the 2D cluster state Hamiltonian used in measurement-based quantum computation [70,71].

Quantum advantage result. Our next result gives complexity-theoretic evidence that energy measurements of Hamiltonians of form (10) on product states cannot be efficiently classically simulated. To demonstrate our result, we exploit recent quantum advantage results [24]. Namely, we reduce the problem of measuring these Hamiltonians with super-resolution to that of sampling from the output distribution of a constant-time Hamiltonian evolution of form (8), given input product states |ψ_θ,x⟩ of form (11), where θ_j and x_j are chosen uniformly at random from the set Θ = {0, π/4} and the set {0, 1}, respectively [24]⁴. Precisely, we prove the following reduction between these two problems.

Theorem 2 (Reduction from U-sampling to energy sampling). A β-approximate energy sampler for the Hamiltonian (10), with confidence η = 1 − ε and resolution δ smaller than the spacing between consecutive eigenvalues, can be efficiently converted into a γ-approximate sampler for the unitary (8), acting on the same inputs, with γ = 2ε + β.
Ref. [24] rules out the existence of efficient classical simulations of short-time Hamiltonian evolutions of form (8), based on three plausible complexity-theoretic conjectures: (C1) the non-collapse of the Polynomial Hierarchy; (C2) an approximate average-case hardness conjecture; (C3) an anticoncentration conjecture. These conjectures are similar to those in [20,21] and are reviewed in Sec. 3.4. Here, we demonstrate a hardness result for super-resolution Hamiltonian measurements, which is a corollary of Theorem 2 and the hardness results of Ref. [24].

Corollary 1 (Quantum advantage for approximate super-resolution energy measurements). There cannot exist an efficient classical algorithm for simulating measurements of the Hamiltonian (10) with super-resolution on input product states as in Theorem 2, for any η = 1 − ε and β such that 2ε + β ≤ 1/22, assuming the complexity-theoretic conjectures C1-C3 in Sec. 3.4.
Corollary 1 leads to a natural quantum advantage proposal, since the problem can be solved via the quantum algorithm in Theorem 1 by realizing the unitary (8), which can conceivably be implemented in several quantum simulation platforms, e.g., cold-atomic ones [24]. Furthermore, an efficient quantum verification protocol for the required quantum sampler exists, which only requires reliable single-qubit measurements [24,30]. Additionally, the result provides a connection between the quantum advantage protocol of [24], based on sampling measurements of a quantum state in a fixed basis, and a high-precision spectroscopy problem. This relates these Hamiltonian quantum advantage proposals to a physical problem of interest, complementing previous work on vibronic spectra [38].
To prove Theorem 2, we choose an appropriate set of weights w that establishes a one-to-one map between the eigenvalues of H_2D[w] and its eigenvectors. The proof technique (detailed next) generally yields quantum advantage results for measurements of Hamiltonians that are diagonalized by "Ising-type" evolutions implementing IQP circuits [21,48,59], provided the associated diagonalizing unitary U is hard to simulate classically. For instance, we could replace U_2D with the long-ranged local IQP circuits of [21], or the nearest-neighbour translation-invariant Hamiltonian evolutions of [23,24,72]. The resulting Hamiltonian for an IQP circuit on a degree-k interaction graph would be (k+1)-local (see Appendix C for details). These alternative constructions yield quantum advantage results analogous to Corollary 1 for Hamiltonians that are n-body long-ranged for the quantum circuits of [21]; 6-local nearest-neighbour on the dangling-bond square lattice for [24]; and 4- or 5-local for the brickwork lattices of [23,72]. Alternatively, the diagonalizing unitaries could also be chosen from families of quantum circuits beyond IQP circuits that lead to quantum advantage, such as random quantum circuits [22]. In the latter case, however, the resulting Hamiltonian family would be n-local and hence of less physical interest.

Proof of Theorem 2: The Hamiltonian H_2D[w] is local and admits a diagonalization of the form (6), since it is diagonalized by the matrix U_2D and its eigenvalues can be efficiently computed classically via the function f(z) = Σ_i w_i z_i, where z_i denotes the ith bit of the integer z ∈ {0, ..., 2^n − 1}. We will consider the case where f(z) is an invertible function, such that the value of z can be efficiently computed from the energy value f(z). In this case the probability of observing a particular energy value (up to error δ) can be directly related to a certain output probability of the quantum circuit U_2D.
For simplicity, we will consider the choice of weights u_j = 2^−j, in which case f(z) ≡ Id(z) = z·2^−n, for z ∈ {0, ..., 2^n − 1}, is a rescaled identity function, which is clearly invertible. Other choices are possible without necessarily requiring exponentially decaying weights⁵.
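This invertibility is easy to verify concretely (the helper names below are our own):

```python
import numpy as np

n = 8
u = np.array([2.0 ** -j for j in range(1, n + 1)])   # weights u_j = 2^-j

def f(bits):                     # f(z) = sum_j u_j z_j = z * 2^-n
    return float(np.dot(u, bits))

def f_inverse(energy):           # invert the rescaled identity function
    z = int(round(energy * 2 ** n))
    return [(z >> (n - 1 - j)) & 1 for j in range(n)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]  # z = 178, so f(z) = 178/256
print(f(bits), f_inverse(f(bits)) == bits)
```

Since distinct bit strings map to distinct energies separated by exactly 2^−n, the measured energy determines the outcome z uniquely.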
Let us consider a β-approximate energy sampler for the Hamiltonian H_2D[u]. To show how an efficient classical sampler from U_2D can be constructed from a classical algorithm for super-resolution energy measurements, we consider the following sampling algorithm.

Algorithm 1:
• Query a β-approximate energy sampler with input Hamiltonian H_2D[u], input state ρ = |ψ_θ,x⟩⟨ψ_θ,x| and parameters δ = 2^−n/3, η = 1 − ε.
• Given the measured energy value E, output the outcome z = f^−1(E) = 2^n E.
The algorithm queries an energy sampler with a resolution equal to one third of the separation between consecutive eigenvalues of H_2D[u], which is 2^−n, in order to guarantee that the spectral projection onto the interval [z2^−n − δ, z2^−n + δ] is simply given by Π_{z2^−n}. As shown below, this allows us to relate the probability that Algorithm 1 outputs z to the quantity P_z = |⟨z|U_2D|ψ_θ,x⟩|², the latter being the probability of observing the computational basis state |z⟩ in a sampler from U_2D with initial state |ψ_θ,x⟩.
To demonstrate this relation, we first analyze the case β = 0. Using the definition of measurement resolution from Eq. (4), we obtain that the probability p_z that Algorithm 1 outputs z satisfies

p_z ≥ (1 − ε) Tr[Π_{[z2^−n − δ, z2^−n + δ]} ρ] = (1 − ε) Tr[Π_{z2^−n} ρ] = (1 − ε) P_z.    (15)

The inequality follows from the constraints on the probability distribution of an energy sampler, defined by Eq. (4). The first equality results from the spectral projection onto the interval [z2^−n − δ, z2^−n + δ] being simply given by Π_{z2^−n}. The second equality follows from the fact that, by construction of H_2D[u], we have Π_{z2^−n} = U†_2D |z⟩⟨z| U_2D. From Eq. (15) we can define dp_z ≥ 0 such that p_z = (1 − ε)P_z + dp_z. In addition, it can be seen from the construction of Algorithm 1 that the probabilities p_z are normalized, which implies that Σ_z dp_z = ε. Hence, it follows that

Σ_z |p_z − P_z| = Σ_z |dp_z − εP_z| ≤ Σ_z dp_z + ε Σ_z P_z = 2ε,

which shows that Algorithm 1 is a 2ε-approximate sampler for U_2D.
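The 2ε bound at the heart of this argument can be sanity-checked numerically in a toy model of ours, in which the ε failure mass of an ideal sampler is redistributed arbitrarily over the outcomes:

```python
import numpy as np

rng = np.random.default_rng(3)
dim, eps = 16, 0.05
P = rng.dirichlet(np.ones(dim))          # target distribution P_z
garbage = rng.dirichlet(np.ones(dim))    # arbitrary distribution of the failures
p = (1 - eps) * P + eps * garbage        # p_z = (1 - eps) P_z + dp_z
tv = float(np.abs(p - P).sum())          # total variation (L1) distance
print(tv)  # bounded by 2*eps regardless of the failure distribution
```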
Proof of Corollary 1. The classical hardness result for this energy measurement problem follows directly from the application of Theorem 2, which maps this problem to that of sampling from U_2D, together with the quantum advantage result of Ref. [24]. Specifically, Theorem 1 of Ref. [24] states that a classical algorithm cannot approximately sample from the outputs of the circuit U_2D, up to a total variation distance of β = 1/22, in polynomial time. This result is based on three complexity-theoretic assumptions, which we summarize in Sec. 3.4.
We remark that a crucial point in the proof of Theorem 2 is to choose the weights w_i so that the spectrum of the Hamiltonian is non-degenerate. It can be seen that this requires the weights w_i to be defined up to O(n) bits of precision. Such precision in controlling the parameters of a Hamiltonian is hard to achieve in an experimental setting. Nevertheless, measuring the energy of quantum states according to such Hamiltonians is a valid theoretical question that, given Corollary 1, can be explored using quantum devices beyond what is likely to be efficiently classically simulable. We leave as an open problem whether a hardness result similar to Corollary 1 can be achieved for measurements of Hamiltonians whose parameters are defined up to constant precision.

Complexity theoretic assumptions needed for quantum advantage via sampling problems
The quantum advantage proposal of Corollary 1 relies on complexity-theoretic conjectures. Specifically, these are the same conjectures 6 underlying the quantum advantage proposal of Ref. [24]; the latter are, in turn, analogous to those involved in hardness proofs for random universal quantum circuits [22,31] and slightly weaker than those in the seminal boson sampling and IQP proposals of Refs. [20,21]. Any progress towards proving the conjectures in Ref. [24] would simultaneously improve our results.
For the sake of completeness, we summarize here the conjectures that enter the quantum advantage results in [24]. The first (C1) is the non-collapse of the Polynomial Hierarchy, a widely believed generalization of the P ≠ NP conjecture [73-75], first linked to hardness of classical simulation of quantum circuits in [58]. This assumption alone rules out the existence of near-exact classical simulators for a large family of quantum devices, including the ones we study: specifically, this is the case for quantum devices with output probabilities that are #P-hard to approximate up to constant relative errors in the worst case (see [20,59] and Appendix D).
Following an approach pioneered in [20,21], this classical hardness result can be made noise-robust up to a constant sampling error in total variation distance β, assuming two additional conjectures about the output probabilities of the quantum device. Specifically, let the set of all output probabilities of our device be P_n = {p_x, x ∈ {0,1}^n}, where n is the number of qubits. Then, we require: (C2) an approximate average-case hardness conjecture, which states that a constant fraction of the output probabilities in P_n are #P-hard to approximate (up to a constant relative error); (C3) an anticoncentration conjecture, stating that the output distribution is "sufficiently flat", in the sense that Pr_{p_x ∼ P_n}[p_x > α/2^n] > γ, for some positive constants α, γ. Assuming (C2)-(C3), it can be shown that the existence of an efficient classical algorithm for β-approximate sampling from the unitary U_2D implies the collapse of the Polynomial Hierarchy to its 3rd level. The central technique in this argument is Stockmeyer's counting algorithm [76] (Sec. 5.1 and Appendix D), which shows that such a classical sampler implies the existence of an algorithm (inside the third level of the Polynomial Hierarchy) for estimating the probabilities in P_n on average with high accuracy, a #P-hard problem. This then implies the aforementioned collapse of complexity classes by Toda's theorem [77] (see Appendix D for more details). We note that the specific total variation distance β tolerated for the classical sampler depends on the choices of the constant parameters in the statement of the conjectures; the values chosen in Ref. [24] lead to the threshold β = 1/22.
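Anticoncentration in the sense of (C3) can be illustrated in a model setting (an analogy of ours, not evidence for (C3) itself): for Haar-random states, output probabilities follow the Porter-Thomas (exponential) distribution, for which the fraction exceeding α/2^n tends to the constant e^−α.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 10, 50
dim = 2 ** n
fracs = []
for _ in range(trials):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    p = np.abs(v) ** 2
    p /= p.sum()                          # output probabilities of a random state
    fracs.append(np.mean(p > 0.5 / dim))  # anticoncentration test with alpha = 1/2
frac = float(np.mean(fracs))
print(frac, np.exp(-0.5))  # fraction above alpha/2^n approaches exp(-alpha)
```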
There has been steady progress towards proving the complexity theoretic assumptions made. In Ref. [24], numerical evidence was presented which supports the anticoncentration conjecture (C3), for the choice of angles of the input state from the set Θ = {0, π/4}. Moreover, this conjecture was recently proven for a uniform random choice of angles of the input state (see Eq. 11) from the set Θ = [0, 2π] [78]. Furthermore, positive evidence for conjecture (C2) is given by approximate worst-case hardness results in [24] (case Θ = {0, π/4}), as well as by recent proofs of exact average-case hardness and anticoncentration theorems given in [78] for the larger set of angles (case Θ = [0, 2π]). The results in [24,78] are analogous to approximate worst-case hardness results in [20][21][22]; proofs of anti-concentration of the output distribution in [21,72]; and exact average-case hardness results in [31,79]. For all known quantum advantage proposals, including ours, proving approximate average-case hardness remains an open question.

Near-exact simulation of energy measurements with standard resolution
The previous result establishes a complexity-theoretic obstruction for classical algorithms to simulate energy measurements of certain local Hamiltonians in the super-resolution regime. In this section, we investigate whether the less restrictive standard resolution regime can be efficiently reached classically. Our main result shows that this problem remains hard, with strong complexity-theoretic evidence (specifically, a collapse of the Polynomial Hierarchy, introduced in Sec. 3.3), if we demand exponentially small error parameters.

Theorem 3 (Hardness of near-exact standard-resolution energy measurements). Let H be a local Hamiltonian acting on n qubits. If there exists an efficient classical algorithm for the energy sampling problem for any such H with product-state inputs, resolution δ ∈ O(1/poly(n)), confidence η = 1 − O(1/2^poly(n)), and sampling error β ∈ O(1/2^poly(n)), then the Polynomial Hierarchy collapses to the 3rd level. Furthermore, the same holds if H is a nearest-neighbour, translation-invariant 5-local Hamiltonian on a 2D lattice.
To demonstrate this theorem, the idea is to show that such a classical algorithm would efficiently sample an outcome whose probability is #P-hard to approximate. This is achieved by constructing local Hamiltonians with two properties: (i) there is a unique ground state with energy 0, and a polynomially small gap to the first excited state.
(ii) the overlap of this ground state with a product state is #P-hard to calculate up to an inverse-exponential additive error.
The first condition ensures that a standard-resolution energy measurement protocol is capable of efficiently discriminating the ground state from the rest of the spectrum, while the second ensures that a near-exact sampling measurement samples from a #P-hard probability.
We provide two examples of families of local Hamiltonians that fulfill these properties in Sec. 4.1. The first example is presented in Sec. 4.1.1 and is a translation-invariant version of the 5-local cluster state Hamiltonian from Eq. (10). By construction, the ground state is related to an output state of the quantum circuit U_2D, and hence property (ii) follows directly from the results of Ref. [24]. The second example, presented in Sec. 4.1.2, is a 4-local Hamiltonian based on Feynman-Kitaev (FK) circuit-to-Hamiltonian constructions [51,80], used in the proof of equivalence between adiabatic and circuit-model quantum computation [80]. This construction, although more complicated than the first example, gives a general technique to relate the output state of an arbitrary poly-size quantum circuit to the ground state of a local Hamiltonian. Consequently, all the results on #P-hardness of output probabilities of quantum circuits can be translated into results on hardness of Energy Sampling with standard resolution.
Using these examples of local Hamiltonians, we present the proof of Theorem 3 in Sec. 4.2, following standard arguments from the quantum advantage literature.

A 5-local translationally invariant Hamiltonian
We consider the Hamiltonian family defined in Eq. (10). If we pick a uniform choice of weights v_j := 1/n, j = 1, ..., n, the resulting Hamiltonian is 5-local nearest-neighbour and translation-invariant. In particular, up to single-qubit rotations, it is the well-known 2D cluster state Hamiltonian [70,71]:

H_2D[v] = (1/n) Σ_{j∈L_2D} ( I − X_j Π_{k∈N(j)} Z_k ) / 2.

This model is mapped to a trivial unentangled one via the unitary (8), which has constant depth. In a condensed matter sense, this implies that H_2D[v] can be seen as the energy density operator (the Hamiltonian divided by the number of particles in the lattice) of a gapped model in the trivial phase [81,82]. In particular, H_2D[v] has an inverse-polynomial gap Ω(1/n), which ensures that a standard-resolution measurement can efficiently discriminate its ground state from the rest of the spectrum. Upon an ideal energy measurement of a state |ψ_θ,x⟩ from Eq. (11), the probability of obtaining outcome E = 0, which is the ground-state energy, is given by

p(0) = |⟨0...0| U_2D |ψ_θ,x⟩|².

Such probabilities are related to partition functions of Ising models and were shown to be #P-hard to approximate to relative error or to inverse-exponential additive error [21,24]. This is a crucial ingredient for the proof of Theorem 3, presented in Sec. 4.2.
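Assuming the standard cluster-stabilizer form of the local terms (our reading of the construction; the stabilizers are K_j = X_j Π_{k∈N(j)} Z_k), the zero ground energy and the 1/n gap can be verified by exact diagonalization on a tiny 2×2 patch of the lattice:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(paulis, n):
    # tensor product placing the given single-qubit operators at their sites
    return reduce(np.kron, [paulis.get(j, I2) for j in range(n)])

coords = [(0, 0), (0, 1), (1, 0), (1, 1)]   # a 2x2 patch of the square lattice
n = len(coords)

def neighbors(j):
    r, c = coords[j]
    return [k for k, (r2, c2) in enumerate(coords) if abs(r - r2) + abs(c - c2) == 1]

H = np.zeros((2 ** n, 2 ** n))
for j in range(n):
    K = op({j: X, **{k: Z for k in neighbors(j)}}, n)  # cluster stabilizer K_j
    H += (np.eye(2 ** n) - K) / (2 * n)                # uniform weight v_j = 1/n

evals = np.sort(np.linalg.eigvalsh(H))
print(evals[0], evals[1])  # ground energy 0 and spectral gap exactly 1/n
```

Since the commuting stabilizer terms each contribute 0 or 1/n, the spectrum is {m/n : m = 0, ..., n}, making the gap 1/n exactly.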

Circuit-to-Hamiltonian constructions
So far, we have only considered Hamiltonians whose diagonalization is known. Here we present a more general strategy to relate probabilities of outputs of arbitrary quantum circuits to probabilities of outcomes of energy measurements, while keeping the Hamiltonian local. To do so, we use the so-called circuit-to-Hamiltonian constructions, based on Feynman clocks, that have been widely used in Hamiltonian complexity and adiabatic quantum computation [51,80]. In fact, the Hamiltonian that has the properties we require is used in the proof that the adiabatic model of quantum computation is equivalent to the circuit model [80]. More precisely, we consider the final Hamiltonian at the end of the adiabatic schedule, since it has a polynomially small gap and its ground state contains information about the final state of a quantum computation, as we explain in what follows. We describe this construction in more detail since we build upon it to demonstrate our main result in Sec. 5.2. Let us consider a quantum circuit of T = poly(n) gates, U = U_T U_{T−1} ... U_1, and the propagation Hamiltonian

H_prop = Σ_{t=1}^{T} (1/2) [ I ⊗ (|t⟩⟨t|_c + |t−1⟩⟨t−1|_c) − U_t ⊗ |t⟩⟨t−1|_c − U†_t ⊗ |t−1⟩⟨t|_c ],    (24)

which is defined on a Hilbert space with T + 1 clock states |t⟩_c and n qubits. We will describe the case where the clock is implemented with O(log(T + 1)) qubits, in which case the Hamiltonian H_prop is O(log(n))-local. Nevertheless, our result extends to 5-local Hamiltonians using the unary clock implementation of Ref. [51], or 4-local Hamiltonians using a clock implementation based on the hopping of an excitation in a one-dimensional spin chain (see [83] for a recent discussion of different clock implementations). We define the states

|η_y(0)⟩ = |y⟩|0⟩_c,   |η_y(t)⟩ = U_t U_{t−1} ... U_1 |y⟩|t⟩_c, t ∈ {1, ..., T},    (25)

and the subspaces Ω^(y) = span{ |η_y(t)⟩ : t ∈ {0, ..., T} } (26). We note that for each y the subspace Ω^(y) is invariant under the action of the Hamiltonian H_prop. Hence, we can diagonalize the Hamiltonian in each of these subspaces, obtaining the 2^n degenerate ground states

|ψ^(y)⟩ = (1/√(T+1)) Σ_{t=0}^{T} |η_y(t)⟩,    (27)

which have energy 0.
These are called history states, as they contain information about the whole history of a quantum computation. In particular, we have that ⟨x|⟨t|_c |ψ^(y)⟩ = (1/√(T+1)) ⟨x| U_t ... U_1 |y⟩, which is proportional to the transition amplitude from state y to state x after t steps of the computation. It can be shown that H_prop is positive semidefinite and has a gap of Ω(1/T²) with respect to the first excited state [51].
To fix the initial state of the computation to be |0⟩, it is necessary to add an energy penalty Hamiltonian of the form

H_in = Σ_{i=1}^{n} |1⟩⟨1|_i ⊗ |0⟩⟨0|_c,   (28)

where the projector |1⟩⟨1|_i acts on the i-th qubit, ensuring that when the clock is in its initial state |0⟩_c, any computational basis state other than |0⟩ is energetically penalized. The total Hamiltonian

H = H_prop + H_in   (29)

has a single ground state |ψ^(0)⟩ with energy 0 and a gap ∆ = 1/poly(n), ensuring the discernibility of the ground state via a standard-resolution measurement. Hence, the probability of observing the ground state of H upon an ideal energy measurement of a state |y⟩ ⊗ |T⟩_c is given by

P_GS = |⟨ψ^(0)| (|y⟩ ⊗ |T⟩_c)|²   (30)
     = |⟨y| U_T ··· U_1 |0⟩|² / (T + 1).   (31)

This quantity is #P-hard to estimate to relative error or inverse-exponential additive error for several families of quantum circuits, such as IQP [21,84] or boson sampling [20] (which can also be implemented in the circuit model [85]), among others. Depending on the family of circuits we choose, this defines a family of local Hamiltonians of the form given by Eq. (29) for which Theorem 3 applies.
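As an illustration of the clock construction above, the following minimal numpy sketch (our own toy example, with arbitrary gate choices) builds H_prop for a single-qubit, T = 2 circuit and checks that the corresponding history state is a zero-energy eigenstate:

```python
import numpy as np

# Toy Feynman-Kitaev propagation Hamiltonian for a T = 2 gate, single-qubit
# circuit; the clock is a (T+1)-level register. Gate choices are illustrative.
T = 2
H_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard
T_gate = np.diag([1, np.exp(1j * np.pi / 4)])          # T gate
gates = [H_gate, T_gate]

def clock_op(t, s, dim):
    """Operator |t><s| on the clock register."""
    M = np.zeros((dim, dim), dtype=complex)
    M[t, s] = 1.0
    return M

d = T + 1
I2 = np.eye(2)
H_prop = np.zeros((2 * d, 2 * d), dtype=complex)
for t in range(1, T + 1):
    Ut = gates[t - 1]
    # Standard propagation term checking step t against step t-1.
    H_prop += 0.5 * (np.kron(I2, clock_op(t, t, d))
                     + np.kron(I2, clock_op(t - 1, t - 1, d))
                     - np.kron(Ut, clock_op(t, t - 1, d))
                     - np.kron(Ut.conj().T, clock_op(t - 1, t, d)))

# History state for input y = 0: equal superposition of partial computations.
psi = np.zeros(2 * d, dtype=complex)
state = np.array([1, 0], dtype=complex)
for t in range(T + 1):
    if t > 0:
        state = gates[t - 1] @ state
    e_t = np.zeros(d)
    e_t[t] = 1.0
    psi += np.kron(state, e_t)
psi /= np.sqrt(T + 1)

print(np.linalg.norm(H_prop @ psi))  # ~0: the history state has energy 0
```

The same check works for any input y and any gate list; only the size of the clock register grows with T.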

Proof of Theorem 3
Before we proceed, let us prove the following technical lemma that will be useful in what follows.

Lemma 1. Let H be a Hamiltonian with eigenvalues in the interval [0, 1], with ground state energy 0 and spectral gap ∆. A β-approximate energy sampler for H on an input state |ψ⟩, with parameters η = 1 − ε and δ = ∆/3, outputs a value in the interval [0, ∆/3] with probability q̃_GS satisfying

|q̃_GS − P_GS| ≤ ε + β,   (32)

where

P_GS = ⟨ψ|Π_GS|ψ⟩   (33)

and Π_GS is the spectral projection onto the ground state space of H.
Proof. An energy sampler with parameters η = 1 − ε and δ = ∆/3 outputs a value in the interval [0, ∆/3] with probability

q_GS ≥ (1 − ε) ⟨ψ|Π_GS|ψ⟩ = (1 − ε) P_GS,   (34)

which follows from the constraints on the outcome probability distribution defined by Eq. (4). On the other hand, the probability q_[2∆/3,1] of obtaining a value in the interval [2∆/3, 1] obeys the bound

q_[2∆/3,1] ≥ (1 − ε) ⟨ψ|Π_[∆,1]|ψ⟩ = (1 − ε)(1 − P_GS),   (36)

where we used Eq. (4) in the first step and the fact that Π_GS + Π_[∆,1] = I in the second step, which follows from the assumption that the Hamiltonian has a gap ∆. In addition, the q_[2∆/3,1] and q_GS probabilities sum up to at most 1, which implies that

q_GS ≤ 1 − q_[2∆/3,1] ≤ P_GS + ε(1 − P_GS) ≤ P_GS + ε,   (39)

where in the second step we used Eq. (36). Combining Eqs. (34) and (39), we obtain the inequality

|q_GS − P_GS| ≤ ε.   (40)

A β-approximate energy sampler with parameters η = 1 − ε and δ = ∆/3 outputs a value in the interval [0, ∆/3] with probability q̃_GS, where |q̃_GS − q_GS| ≤ β. Hence, using (40) and the triangle inequality, we conclude the proof.
With this lemma we are ready to prove Theorem 3.
Proof of Theorem 3. Lemma 1 implies that an approximate energy sampler with δ = ∆/3 = 1/(3n + 3), ε = 1 − η = 1/2^{poly(n)} and β = 1/2^{poly(n)} would output E_m ∈ [0, ∆/3] with a probability q̃_GS = P_GS + ζ, where |ζ| is an exponentially small number. Furthermore, we have seen two constructions of local Hamiltonians for which P_GS is #P-hard to estimate with inverse-exponential additive error (Eqs. (23) and (31)). Let us assume there is an efficient classical energy sampler with the parameters defined in Theorem 3 for these Hamiltonians. Following standard arguments in the literature of quantum advantage [20,21], this would imply that the probability q̃_GS could be estimated up to an inverse-exponential additive error via Stockmeyer's algorithm, an algorithm in the third level of the polynomial hierarchy (PH). This implies that a #P-hard problem could be solved in the third level of the PH and hence the PH would collapse to the third level. Stockmeyer's algorithm and its connection to quantum advantage are reviewed in more detail in Sec. 5.1 and Appendix D.
This gives strong evidence of the impossibility for classical computers to efficiently simulate energy sampling problems with confidence exponentially close to optimal, i.e., η = 1 − 1/2^{poly(n)}, inverse-exponential β = 1/2^{poly(n)}, and standard resolution δ = 1/poly(n). This can be seen as a classical hardness result for the problem of simulating an ideal implementation of the quantum phase estimation algorithm (with confidence amplification [44]) for measuring the energy of a local Hamiltonian with standard resolution.
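For concreteness, the ideal energy measurement being simulated here has a simple brute-force (exponential-time) classical description: diagonalize the Hamiltonian and sample eigenvalues with Born-rule weights. The sketch below, with an arbitrary toy 2-qubit Hamiltonian of our own choosing, illustrates the reference behaviour that an efficient classical sampler would have to reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)

def ideal_energy_sampler(H, psi, n_samples, rng):
    """Brute-force (exponential-time) reference sampler: diagonalize H and
    sample eigenvalues E_k with Born-rule weights |<E_k|psi>|^2."""
    evals, evecs = np.linalg.eigh(H)
    probs = np.abs(evecs.conj().T @ psi) ** 2
    probs /= probs.sum()
    return rng.choice(evals, size=n_samples, p=probs)

# Toy 2-qubit Hamiltonian (an arbitrary example, not from the paper):
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
H = 0.5 * (np.kron(Z, Z) + 0.3 * np.kron(X, I))

psi = np.ones(4) / 2.0  # product state |+>|+>
samples = ideal_energy_sampler(H, psi, 100_000, rng)
print(samples.mean(), psi @ H @ psi)  # sample mean ~ <psi|H|psi>
```

A standard-resolution sampler additionally coarse-grains the reported eigenvalues into bins of width δ; the point of the hardness results is that even this coarse-grained distribution is hard to reproduce in polynomial time.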
5 Computational Complexity of approximate energy sampling with standard resolution

The previous section provided evidence that classical algorithms cannot efficiently perform near-exact simulations of energy measurements of many-body Hamiltonians with standard resolution (δ = 1/poly(n)). From a physical perspective, it is natural to ask whether the previous complexity theoretic results involving collapses of the Polynomial Hierarchy can be extended to approximate simulations. Specifically, we are interested in extending Theorem 3 to the regime where the measurement failure probability ε and the sampling error β are small constants. We present a no-go lemma and one positive result.
Approximate sampling measurements of Hamiltonians in the standard resolution regime can be interpreted as examples of quantum sampling problems with a small number of output qubits. It would thus be tempting to apply the Stockmeyer-based techniques of Refs. [20,21] (cf. section 3.4) to study the complexity theory of classically simulating such measurements. Unfortunately, we first point out in Lemma 2 (section 5.1) that the Stockmeyer argument cannot meaningfully link the hardness of approximate sampling problems with few outputs to a Polynomial Hierarchy collapse. This is due to an error parameter in such proofs that becomes too large precisely for quantum computations where the number of measured output qubits is "small": constant or O(log(n)). This issue is generic and affects, e.g., existing quantum advantage proposals based on variations of the one-clean-qubit (DQC1) model [49,50].
In spite of the above hurdle, our second result (Theorem 4, section 5.2 below) gives complexity theoretic evidence of the classical hardness of approximate standard-resolution energy sampling. The result links the worst-case complexity of this problem to that of classically simulating universal quantum computers. Specifically, we prove that the existence of an efficient classical algorithm for this problem would imply the ability to efficiently classically compute arbitrary marginal output probabilities of universal poly-sized quantum circuits (a BQP-hard task, in the language of complexity theory [65]). This provides evidence against an efficient classical simulation of energy measurements with standard resolution.
The first result in this section highlights the existence of a gap in the complexity theoretic understanding of quantum approximate sampling problems with small output support. The second opens the possibility of developing quantum advantage tests based on such problems. This is, however, complicated by the lack of tools to study the average-case hardness of this problem. We discuss this latter possibility and associated open challenges in section 5.3.

5.1 Sampling problems with small support "do not simply" collapse the Polynomial Hierarchy

Here, we point out a technical obstruction towards extending available approaches in quantum advantage proofs [20,21] to standard-resolution approximate energy sampling problems. First, we remark that any algorithm for energy measurements with standard resolution samples from a probability distribution with poly(n) outcomes. This is in contrast with most quantum advantage proposals, which have an outcome space that is exponentially large. This fact constitutes a roadblock for the application of the proof technique of Refs. [20,21].
To understand the limitation, we recall (section 3.4) that the traditional approach to prove quantum advantage results via sampling problems relies heavily on Stockmeyer's algorithm [20,21]. The goal there is to induce a Polynomial Hierarchy (PH) collapse assuming, among other assumptions, that it is #P-hard to approximate the output probabilities of a quantum device up to very small errors: specifically, a constant relative error if we have anticoncentration. Unfortunately, as shown next, if we tried to adapt the same argument to rule out classical algorithms for sampling problems with poly-sized support, we would have to adopt an analogous average-case conjecture where the error is too large for the assumption to be plausible. Below, we characterize these errors for circuits with an arbitrary number of output bits. Let q_U be the output probability distribution of a quantum circuit U, and 0^m be the string with m zeroes.

Lemma 2 (Stockmeyer error). Let Q_n, n ∈ N, be a family of uniformly-generated poly-size n-qubit quantum circuits with m output bits and the hiding property, i.e., for every U ∈ Q_n and x ∈ {0,1}^m there is an efficiently computable circuit U_x ∈ Q_n such that q_{U_x}(0^m) = q_U(x). Assume there exists a classical algorithm A that samples from q_U with ℓ1 error β in O(poly(n, 1/β)) time given U ∈ Q_n. Then, for any 0 < ν < 1, there is an FBPP^NP algorithm which, given access to A, approximates q_U(x) up to an additive error

|q̃_U(x) − q_U(x)| ≤ q_U(x)/poly(n) + O(β/(2^m ν)),   (41)

with probability 1 − ν over the choice of x ∈ {0, 1}^m.
We provide the proof of this lemma in Appendix D. There, we also argue in detail how the Stockmeyer argument fails to provide a plausible collapse of PH in the m < log(n) regime. The basic intuition is as follows. In Refs. [20,21], where m = n, the algorithm provides a relative-error estimation of the output probabilities in the average case if we assume anticoncentration. This problem is then conjectured to be #P-hard. Evidence for this conjecture is provided by worst-case results and near-exact worst-to-average reductions (section 3.4). By Toda's theorem, this provides a collapse of PH, since PH ⊆ P^#P. In the m < log(n) regime, the right-hand side of Eq. (41) has a term that can only be upper bounded by an inverse polynomial, which limits the accuracy ε of the algorithm that estimates probabilities in FBPP^NP. Unfortunately, to induce the same collapse of PH in the case m < log(n), we would need to show that it is #P-hard to estimate quantum output probabilities with an inverse-polynomial error. This is, however, implausible because, if it were true, then quantum computers could efficiently solve #P-hard problems, which is believed to be impossible [79,86].
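To see concretely why the error term matters, the additive part of the estimation error in Eq. (41) scales like β/(2^m ν). The toy arithmetic below (parameter values of our own choosing, purely illustrative) contrasts the m = n, m ≈ log n and m = O(1) regimes:

```python
# Additive part of the Stockmeyer estimation error from Lemma 2:
# beta / (2^m * nu). It is inverse-exponentially small when all m = n
# output bits are measured, but only inverse-polynomial (or constant)
# when m = O(log n) or m = O(1). Parameter values are illustrative.
def stockmeyer_additive_error(beta, m, nu):
    return beta / (2 ** m * nu)

n, beta, nu = 50, 0.01, 0.1
err_full = stockmeyer_additive_error(beta, n, nu)    # m = n = 50
err_log = stockmeyer_additive_error(beta, 6, nu)     # m ~ log2(n)
err_const = stockmeyer_additive_error(beta, 1, nu)   # m = O(1)
print(err_full, err_log, err_const)
```

For m = n the additive term is negligible and a relative-error #P-hardness conjecture suffices; for m at most logarithmic, the term is only polynomially small, so the conjecture needed would contradict the expected limits of quantum computers, as argued above.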

5.2 Hardness of approximate energy measurements with standard resolution
In the previous section, we discussed obstructions towards proving quantum advantage results based on known complexity theoretic conjectures for approximate standard-resolution energy sampling, as well as quantum sampling problems with small support. This points towards a tension between having practical physically-motivated quantum advantage schemes and strong complexity theoretic proofs of classical hardness.
This motivates us to consider different approaches to prove physically-motivated quantum advantage results, which do not rely on the Stockmeyer argument of Refs. [20,21]. In fact, alternative evidence for the impossibility of developing efficient classical algorithms to simulate approximate energy measurements with standard resolution can be drawn from the work of Refs. [42,43]. Therein, the authors show that a procedure for energy measurements of local Hamiltonians achieving a resolution of 1/poly(n) and confidence η = 1 − ε, where ε is a small constant, can be used to decide any problem in BQP, the class of decision problems that can be efficiently solved by a quantum computer [65]. The Hamiltonians considered therein are 4-local non-nearest-neighbour Hamiltonians in [42] and translation-invariant chains of qudits in [43]. Although these works did not explicitly consider β sampling errors, they can be easily extended to the approximate energy sampling regime where β is a small constant. Consequently, the existence of a classical algorithm for standard-resolution approximate energy sampling problems would imply that classical computers could efficiently solve any problem in BQP.
In this section, we provide additional complexity-theoretic evidence that classical computers cannot efficiently simulate energy measurements with standard resolution. We do so by showing that this problem is at least as hard as estimating marginals of output probability distributions of universal circuits, a problem that is more general than considering only decision problems solvable by quantum circuits. Specifically, our main result (Theorem 4) shows that the ability to simulate the approximate energy sampling problem efficiently would imply the existence of a "poly-box", in the notation of [87]: i.e., an efficient algorithm to estimate any marginal output probability of any poly-size quantum circuit up to a polynomially small error, a BQP-hard task.
Definition 5 (Probability estimator or "poly-box"). Let U be a poly-size quantum circuit acting on n qubits. An algorithm is said to be a probability estimator or poly-box for U if, for any δ_p = 1/poly(n), it can compute an estimate p̂ of any marginal probability p of the distribution |⟨x| U |0⟩|² such that

|p̂ − p| ≤ δ_p,   (42)

in time O(poly(n, δ_p^{-1})).

The connection between standard-resolution energy measurements and probability estimators is stated precisely in Theorem 4, which shows that standard-resolution energy measurements can be used to estimate arbitrary output probabilities of quantum circuits, and not just of single-qubit measurements, generalizing the results of [42,43].
To prove theorem 4, we show that it is possible to encode any marginal probability of a quantum circuit's output distribution as the probability of measuring the ground state energy of a certain 4-local Feynman-Kitaev Hamiltonian, which has a polynomially small gap. Hence, with a polynomial number of energy measurements, the marginal probability can be estimated with a polynomially small error via the Hoeffding bound.
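The Hoeffding-based sample count in this argument can be made concrete. The sketch below (illustrative parameter values of our own choosing) computes the number of samples s needed so that an empirical outcome frequency deviates from its mean by more than a target additive error t, such as the δ_p/(2T + 2) used in the proof, with probability at most ε_p:

```python
import math

def hoeffding_samples(t, eps_p):
    """Number of i.i.d. Bernoulli samples s guaranteeing
    Pr[|empirical mean - true mean| >= t] <= 2*exp(-2*s*t^2) <= eps_p."""
    return math.ceil(math.log(2.0 / eps_p) / (2.0 * t ** 2))

# Illustrative choices: circuit length T, target error delta_p, failure eps_p.
T, delta_p, eps_p = 100, 1e-2, 1e-6
t = delta_p / (2 * T + 2)
s = hoeffding_samples(t, eps_p)
print(s)  # polynomial in T and 1/delta_p, logarithmic in 1/eps_p
```

The key point is the scaling: s grows polynomially in T and 1/δ_p but only logarithmically in 1/ε_p, which is what makes the reduction in the proof run in polynomial time.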
Proof of Theorem 4. Let p be an output probability or a marginal probability of a poly-size quantum circuit U from a family of quantum circuits C acting on n qubits. For a fixed computational basis state input |x⟩ of the circuit, we can write the marginal probability as

p = Σ_{y ∈ S*} |⟨y| U |x⟩|²,   (43)

for a given set of bit strings S*. More precisely, we define S* as a set of 2^{n−l} bit strings of size n where l bits are fixed. We pick the bits at distinct positions k_i, with k_i ∈ {1, ..., n}, i ∈ {1, ..., l}, such that the k_i-th bit is fixed to a chosen value b_i ∈ {0, 1}, i.e.,

S* = {y ∈ {0, 1}^n : y_{k_i} = b_i for all i ∈ {1, ..., l}}.   (44)

To demonstrate the theorem, we will first show that it is possible to construct a local Hamiltonian with two properties: the ground state energy is E_GS = 0 and there is a polynomially small gap to the first excited state. Moreover, the probability of observing the outcome E_GS = 0 after an energy measurement of this Hamiltonian on a product state is given by p/(T + 1). The Hamiltonian we construct is O(log(n))-local; its physicality can be improved to 4-local or 5-local using standard clock implementations [51,83].
We recall that the subspaces

Ω^(y) = span{|η_y(t)⟩ : t = 0, ..., T},   (45)

with the states |η_y(t)⟩ defined in Eq. (25), are invariant under the action of the Hamiltonian H_prop. Furthermore, H_prop has 2^n degenerate ground states |ψ^(y)⟩ (see Eq. (27)) with energy zero. Another important property that will be used in the following proof is that H_prop is positive semidefinite and has a gap of Ω(1/T²) with respect to the first excited state [51].
To relate the probability of observing the ground state to the marginal probability p, we need to lift the energy of the states |ψ^(y)⟩ with y ∉ S*, so that |ψ^(y)⟩ is a ground state only for y ∈ S*. With this aim, we introduce the following penalty Hamiltonian

H_pen = Σ_{i=1}^{l} |b̄_i⟩⟨b̄_i|_{k_i} ⊗ |0⟩⟨0|_c,   (46)

where b̄_i denotes the NOT of the bit b_i and the projector |b̄_i⟩⟨b̄_i|_{k_i} acts non-trivially only on the k_i-th qubit (and as the identity on the other qubits). It can easily be checked that the states |η_y(t)⟩ are eigenstates of H_pen with eigenenergies

H_pen |η_y(t)⟩ = δ_{t,0} |{i : y_{k_i} = b̄_i}| |η_y(t)⟩.   (47)

If y ∉ S*, then at least one bit of y in one of the positions k_i is in state b̄_i, so |η_y(0)⟩ is energetically penalized. From Eq. (47) it can be seen that H_pen has no effect in the subspaces Ω^(y) with y ∈ S*.
Let us now determine the ground states of H = H_prop + H_pen, as well as the gap to the first excited state. First, let us note that the subspaces Ω^(y) from Eq. (45) are also invariant under the action of H, which follows from Eq. (47); we denote by H^(y) the restriction of H to Ω^(y). Hence, for y ∈ S*, the state of H^(y) with the lowest energy is |ψ^(y)⟩, which has energy 0, and the first excited state has energy Ω(1/T²). The final step needed to demonstrate that these are the only ground states of H is to show that the lowest-energy state of H^(y), with y ∉ S*, has an energy of at least Ω(1/T³). This implies that no state belonging to a subspace Ω^(y) with y ∉ S* is a ground state of the whole Hamiltonian H, and that this Hamiltonian indeed has a gap of 1/poly(n). To show this we use the geometrical lemma [51,88].

Lemma 3 (Geometrical Lemma). Let H_1 and H_2 be two Hamiltonians with ground state energies g_1 and g_2, respectively. Also, let ∆_1 and ∆_2 be their respective gaps to the first excited states. Then the ground state energy of H = H_1 + H_2 satisfies g ≥ g_1 + g_2 + ∆(1 − cos(θ)), where ∆ = min(∆_1, ∆_2) and cos(θ) is the maximum possible absolute value of the overlap between a ground state of H_1 and a ground state of H_2.
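The geometrical lemma can be checked numerically on small random instances. In the sketch below (our own toy construction, not from the paper), each H_i = ∆_i(I − P_i) is a projector Hamiltonian with ground energy g_i = 0 and gap ∆_i, so the bound reduces to g ≥ ∆(1 − cos θ):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_projector(dim, rank, rng):
    """Orthogonal projector onto a random rank-dimensional subspace,
    returned together with an orthonormal basis of that subspace."""
    A = rng.normal(size=(dim, rank))
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T, Q

dim, r1, r2 = 12, 3, 3
P1, Q1 = random_projector(dim, r1, rng)
P2, Q2 = random_projector(dim, r2, rng)

# H_i = Delta_i * (I - P_i): ground energy 0 (on range of P_i), gap Delta_i.
D1, D2 = 0.7, 1.3
H1 = D1 * (np.eye(dim) - P1)
H2 = D2 * (np.eye(dim) - P2)

# cos(theta): maximum overlap between the two ground spaces,
# i.e. the largest principal-angle cosine (largest singular value).
cos_theta = np.linalg.svd(Q1.T @ Q2, compute_uv=False).max()

g = np.linalg.eigvalsh(H1 + H2).min()
bound = min(D1, D2) * (1 - cos_theta)
print(g, bound)  # lemma: g >= g1 + g2 + Delta(1 - cos theta) with g1 = g2 = 0
```

For two rank-1 projectors in two dimensions the bound is tight (g = ∆(1 − cos θ)), which is why the 1/T³ estimate in the proof cannot be improved by this argument alone.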
In order to calculate the maximum overlap between the two ground spaces within a subspace Ω^(y) with y ∉ S*, let us define the projector onto the unpenalized states, Π_2 = Σ_{t=1}^{T} |η_y(t)⟩⟨η_y(t)|. Hence, we obtain

cos²(θ) = ⟨ψ^(y)| Π_2 |ψ^(y)⟩ = T/(T + 1),

so that 1 − cos(θ) = Ω(1/T). Hence, the geometrical lemma implies that the lowest-energy state of H^(y) for y ∉ S* has energy Ω(1/T³). This shows that the states |ψ^(y)⟩ for y ∈ S* are the ground states of H = H_prop + H_pen. Consequently, the probability of observing 0 upon an ideal energy measurement of the quantum state |x⟩ ⊗ |T⟩_c is given by

P_GS = Σ_{y ∈ S*} |⟨ψ^(y)| (|x⟩ ⊗ |T⟩_c)|² = p/(T + 1).

We run the energy sampler with parameters δ = ∆/3 and ε = β = δ_p/(4T + 4), where δ_p is the target estimation error of Eq. (42). By assumption, the energy sampling algorithm would output one sample in time poly(n, δ_p^{-1}). Given this choice of the parameters ε, β, we obtain from Lemma 1 that the probability of obtaining an outcome E_m ∈ [−∆/3, ∆/3] is given by q̃_GS such that

|q̃_GS − p/(T + 1)| ≤ ε + β = δ_p/(2T + 2).   (54)

We now demonstrate that we can estimate q̃_GS within an additive error

|q̂_s − q̃_GS| ≤ δ_p/(2T + 2)   (55)

by querying the energy sampler s times and computing the fraction of times an event in the interval [−∆/3, ∆/3] is observed. Let us denote this estimator by q̂_s. By Hoeffding's inequality we have that

Pr[|q̂_s − q̃_GS| ≥ δ_p/(2T + 2)] ≤ 2 exp(−s δ_p²/(2(T + 1)²)).   (56)

Hence, with

s = O(poly(n, δ_p^{-1}, log(ε_p^{-1})))   (57)

queries to the energy sampler, we can obtain the estimator q̂_s within the desired error bound, except with probability at most ε_p.
Finally, we can construct our estimator for p as p̂ = (T + 1) q̂_s. Given the choice of s from Eq. (57) and using Eqs. (54) and (55), we obtain

|p̂ − p| ≤ (T + 1) (|q̂_s − q̃_GS| + |q̃_GS − p/(T + 1)|) ≤ δ_p,   (58)

with probability at least 1 − ε_p, as desired.
The number of samples needed is s = O(poly(n, δ_p^{-1}, log(ε_p^{-1}))), and for each sample we require time poly(n, δ_p^{-1}), which shows that the total running time to compute p̂ is O(poly(n, δ_p^{-1}, log(ε_p^{-1}))), as required by Definition 5.
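The estimator p̂ = (T + 1) q̂_s at the end of the proof can be illustrated end-to-end by replacing the energy sampler with a Bernoulli oracle that reports a near-zero energy outcome with probability p/(T + 1) (all numbers below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy check of the estimator p_hat = (T+1) * q_hat: the energy sampler is
# idealized as a Bernoulli oracle reporting an outcome in [-Delta/3, Delta/3]
# with probability q_GS = p/(T+1). Parameter values are illustrative.
p_true, T = 0.3, 4
q_gs = p_true / (T + 1)

s = 20_000
hits = rng.random(s) < q_gs      # s queries: "was the outcome near zero?"
p_hat = (T + 1) * hits.mean()    # rescale the empirical frequency
print(abs(p_hat - p_true))       # small additive error, shrinking as 1/sqrt(s)
```

The factor (T + 1) amplifies the statistical error of q̂_s, which is why the proof targets an error of δ_p/(2T + 2) on the raw frequency.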

Random Energy Measurement (REM) Test
Given that standard-resolution energy measurements are BQP-hard to simulate, this problem has the potential to be a suitable physically motivated test at which quantum devices can outperform classical simulations. In particular, this suggests the following quantum advantage experiment, where one measures the energy of a random local Hamiltonian on an input product state.
Random Energy Measurement (REM) Test:

1. A classical user picks a random many-body local Hamiltonian H = Σ_i J_i h_i, where the local terms {h_i}_i and couplings {J_i}_i are chosen from a target class at random, according to a distribution that is efficient to sample from classically. The latter ensemble is picked so that complexity theoretic evidence against an efficient classical simulation is available.
2. The experimenter performs an approximate standard-resolution measurement of the energy of the Hamiltonian H picked from the ensemble.
3. The test is to produce samples from the output distribution of the above protocol in O(poly(n, δ^{-1}, β^{-1}, ε^{-1})) time, within a β error in the total variation distance.
As discussed in section 5.1, this type of test is radically different from usual sampling problems [20,21]. Further research is thus deemed necessary to fully understand its classical simulability. Below, we discuss open directions for future investigations.

Complexity of the REM Test. Theorem 4 provides worst-case evidence against the classical simulability of standard-resolution energy measurements. Yet, it provides no insight into the hardness of simulating a typical instance of this problem for different ensembles of random local Hamiltonians. Natural candidates that could lead to hard problems on average are, for example, Feynman-Kitaev Hamiltonians encoding random quantum circuits, frustrated spin systems [89] and universal quantum Hamiltonians [90]. However, in order to develop higher confidence against the classical simulability of the REM Test, it would be necessary to develop new tools to study the average-case complexity of problems in BQP. This is because known polynomial-interpolation techniques used in worst-to-average reductions are rather sensitive to noise [31,78,91,92] and cannot be readily applied in the standard-resolution regime.
Is the REM test easy to verify? Commonly-studied quantum sampling problems, with an exponentially large output space, are difficult to verify [25]. Verifying statistical closeness in the total variation distance to the ideal distribution based on a single round of classical post-processing requires exponentially many experimental samples [93]. Although sample-efficient verification approaches have been proposed [22,29,31], the verification takes exponential time and works under circuit-level assumptions on noise [22,29,31] or new complexity conjectures [29]. If reliable single-qubit measurements are available, a polynomial-time verification is sometimes possible [24,28,30].
On the other hand, measurements with standard resolution could potentially be easier to verify than commonly-studied sampling problems. Indeed, it is easy to see that they bypass the no-go theorem of Ref. [93] because of the polynomial size of the output space: via the Hoeffding bound, collecting statistics and computing the variation distance to the ideal distribution gives a trivial brute-force exponential-time verification method with polynomial sample complexity, which could be applicable in near-term experiments of limited size. In this context, available verification methods for BQP-complete problems [30, 94-96] could potentially be useful.
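The brute-force verification idea can be sketched as follows: with only polynomially many outcomes, the empirical distribution built from a modest number of samples is already close to the target distribution in total variation distance (toy 10-outcome example of our own choosing; computing the ideal distribution itself would still take exponential time for a real instance):

```python
import numpy as np

rng = np.random.default_rng(3)

def tv_distance(p, q):
    """Total variation distance between two distributions on the same support."""
    return 0.5 * np.abs(p - q).sum()

# Toy target distribution over a small (poly-sized) outcome space.
ideal = rng.dirichlet(np.ones(10))

# "Experimental" samples drawn from the ideal device, then binned.
samples = rng.choice(10, size=50_000, p=ideal)
empirical = np.bincount(samples, minlength=10) / samples.size

print(tv_distance(empirical, ideal))  # small, shrinking with more samples
```

By Hoeffding's bound each outcome frequency converges at rate 1/√s, so the sample complexity of this check is polynomial in the number of outcomes, in contrast with the exponential sample complexity of verifying distributions over 2^n strings [93].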

6 Discussion
In this work, we have established energy measurements of many-body Hamiltonians as a problem that can show a reliable quantum advantage based on complexity theoretic arguments. We thus make a key step towards bringing quantum advantage demonstrations closer to physically-motivated questions.
We have analyzed two different regimes regarding the scaling of the cost of performing the measurement, which can be quantified either by the evolution time of the experiment or by the number of quantum gates applied in a quantum algorithm such as quantum phase estimation. We have defined a standard-resolution measurement as a measurement where the cost of increasing the resolution is polynomial in 1/δ, which is the standard performance of quantum devices for general local Hamiltonians; and super-resolution measurements, where the measurement cost scales as poly(log(1/δ)), which can be achieved by a quantum device if we exploit certain knowledge about the Hamiltonian, such as its diagonalization (or, in general, the ability to exponentially fast-forward its time-evolution [44]).
We prove that for super-resolution measurements it is possible to achieve a quantum advantage demonstration even when the measurement is approximate (with a system-size-independent sampling error), based on plausible complexity-theoretic assumptions similar to ones used in the "quantum computational supremacy" literature [20-22, 24, 31, 78]. The quantum advantage originates in the super-resolution measurement of a simple 5-local cluster state Hamiltonian on the 2D square lattice on product state inputs. The protocol can be implemented using the quantum simulation scheme of Ref. [24] and requires only a short time-evolution of a nearest-neighbor Hamiltonian on a 2D square lattice, making it suitable for implementation in, for example, optical lattices. Moreover, this scheme can be efficiently certified using reliable single-qubit measurements. These results open up the possibility of near-term experimental demonstrations of quantum advantage via energy sampling.
In the standard-resolution regime, we find two types of complexity-theoretic evidence against the efficient classical simulation of measuring local Hamiltonians. First, in a fashion reminiscent of early work on IQP circuits [48], we find a classical simulation to be impossible for simple 2D translation-invariant Hamiltonians in the near-exact sampling regime with inverse-exponential sampling errors, unless the Polynomial Hierarchy collapses. Additionally, we point out limitations of available techniques [20,21] to extend this quantum advantage result to an approximate-sampling one, based on Polynomial Hierarchy collapses. Second, using circuit-to-Hamiltonian constructions and connections to random universal quantum circuits [22,29,31], we give alternative complexity-theoretic evidence that approximate standard-resolution measurements of 4-local Hamiltonians can show a quantum advantage: a classical simulation here would lead to an efficient classical estimator of marginal probabilities of universal quantum circuits, a BQP-hard task [42,43].
Three potential improvements related to the technical results in our work are the following. Firstly, a major challenge would be to tie the hardness of simulating approximate standard-resolution energy measurements to well-known complexity-theoretic conjectures beyond BPP ≠ BQP. This program would require techniques beyond the Stockmeyer method and Polynomial-Hierarchy collapses [20,21]. Secondly, in this manuscript we have not investigated the verifiability of the standard-resolution proposals. However, due to the small size of the energy output space, classical verification methods similar to those in Refs. [22,29,31] could be developed. Thirdly, it would be interesting to improve the locality of our Hamiltonians. The locality of the examples based on 5-local Hamiltonians on 2D lattices could, in principle, be improved using the general techniques presented in Ref. [90], which show that there exist simple 2-local universal Hamiltonians that can reproduce the physics of other Hamiltonians, including the energy spectrum and measurement statistics. The examples based on circuit-to-Hamiltonian constructions could be improved using techniques such as, e.g., perturbation gadgets or space-time circuit-to-Hamiltonian constructions [97,98].
We have also introduced the concept of quantum Hamiltonian diagonalization which, to the best of our knowledge, is a new concept that can be of interest beyond the scope of this work. It characterizes a class of Hamiltonians for which there exists a polynomial-size quantum circuit U mapping its eigenbasis to the computational basis and whose eigenvalues can be computed efficiently by a function f(z) on a quantum computer. This guarantees the exponential fast-forwarding of the dynamics of the Hamiltonian.
For the purposes of demonstrating quantum advantage, we restricted ourselves to examples where f(z) can be computed efficiently classically; this simplifies the protocol for super-resolution measurements so that it can potentially be implemented in near-term devices. In this case, the energy measurement problem is hard to simulate classically because the populations of the different eigenstates are #P-hard to approximate. It would be interesting to construct new examples of quantum advantage for super-resolution energy measurements where the classical hardness results from the need to sample from the right eigenvalues (to exponential accuracy) and not only from the right eigenstate populations.
Indeed, Ref. [44] shows that such constructions are, in principle, possible. Therein, the authors present an academic example of a Hamiltonian which can be measured by a quantum algorithm (Shor's algorithm) with super resolution, even though it is not known how to compute its eigenvalues efficiently classically. This Hamiltonian is given by Ĥ = U_ME + U_ME†, where U_ME is the unitary implementing the modular exponentiation used in Shor's algorithm. It is interesting to point out that it is possible to find a quantum diagonalization for the aforementioned Hamiltonian using existing quantum algorithms for decomposing finite commutative groups [99,100]. This academic example shows how quantum algorithms could potentially be helpful for expanding our knowledge of the inner structure of a given Hamiltonian (here its quantum diagonalization), which can later be exploited to answer specific questions about a given physical system more accurately (here obtaining its spectrum). Finding families of Hamiltonians with stronger physical motivation than this example, whose quantum diagonalization could be learned via a quantum algorithm, would offer a new and interesting application for quantum computers, potentially leading to new exponential speed-ups over known classical algorithms.
Overall, we believe this work brings a new perspective into questions related to Hamiltonian complexity [101] by focusing on problems that can be solved efficiently by quantum devices, unlike problems such as the QMA-complete ground state problem [35]. Furthermore, we believe it could inspire new demonstrations of quantum advantage for measuring other quantities of interest in quantum many-body physics, which would strengthen the belief that quantum computers and simulators can answer problems about quantum matter beyond the power of any present or future classical algorithms.
Ref. [44] establishes a connection between the ability to exponentially fast-forward a Hamiltonian and the ability to do exponentially precise energy measurements. In this section, we summarize the definitions of Ref. [44] and explain how our results fit in the context of that work.
A normalized Hamiltonian H is said to be exponentially fast-forwardable if a poly-size quantum circuit U can be constructed such that ||U − exp(−iHT)|| ≤ α for T = 2^{Ω(n)} and α = 1/poly(n). Atia and Aharonov show that the ability to exponentially fast-forward a Hamiltonian implies that one can find a poly-size circuit Ũ_EM such that ||Ũ_EM − U_EM|| ≤ α', where U_EM is a unitary operation that performs an exponentially precise energy measurement and α' = 1/poly(n). More precisely, U_EM acts on an eigenstate |ψ_E⟩ and additional ancillas as

U_EM |ψ_E⟩ |0⟩ = Σ_{E'} α_{E'} |ψ_E⟩ |E'⟩ |g(E')⟩,

where E' and g(E') live in a poly-size register, E' is the measurement outcome and g(E') is some garbage data; furthermore, the probability |α_{E'}|² of observing E' obeys Eq. (1) with δ = 1/2^{Ω(n)} and η = 1 − 1/poly(n).
It can be seen that, since Ũ_EM is close to U_EM in operator norm (||Ũ_EM − U_EM|| ≤ 1/poly(n)), the total variation distance between the probability distributions resulting from measurements of the outputs of U_EM and Ũ_EM is also bounded by β = 1/poly(n). Hence, the ability to exponentially fast-forward a Hamiltonian implies the ability to generate a quantum circuit that solves the β-approximate energy sampling problem with confidence η = 1 − 1/poly(n), sampling error β = 1/poly(n) and resolution δ = 1/2^{Ω(n)}.

B Proof of Theorem 1
In this section, we give a technical proof of Theorem 1 of the main text.
Theorem 1 (Quantum algorithm for super-resolution energy measurements). Consider any quantum diagonalizable Hamiltonian H = U† H_f U as in Eq. (6). Then, the following quantum algorithm (Procedure 1) efficiently solves the β-approximate Energy Sampling problem for the Hamiltonian H, with initial state |ψ⟩ and parameters η = 1 and δ = 2^{-l}:

• Query a β-approximate sampler for U, with initial state |ψ⟩.
• Given an outcome z, output an l-digit approximation of the value f(z).
Let us denote by q_m the probability of outputting E_m via the procedure described in the theorem. In the ideal case (β = 0) we have

q_m = Σ_{z ∈ f̃^{-1}(E_m)} P_z,   (60)

where P_z = |⟨z| U |ψ⟩|² is the probability of observing outcome z, f̃(z) denotes the l-digit approximation of f(z), and f̃^{-1}(E_m) is the pre-image of E_m under the function f̃, i.e., the set of values z that are mapped to E_m via f̃. Let us demonstrate that this probability distribution obeys the constraints, given by Eq. (4), of an energy sampler with ε = 0 and δ = 2^{-l}. Let us define f^{-1}([E_a, E_b]) as the pre-image of the energy interval [E_a, E_b] under f(z), i.e.,

f^{-1}([E_a, E_b]) = {z : f(z) ∈ [E_a, E_b]}.

Given an outcome value z ∈ f^{-1}([E_a, E_b]), we have that f̃(z) ∈ [E_a − δ, E_b + δ]. Hence, the probability that Procedure 1 outputs a value E ∈ [E_a − δ, E_b + δ] is at least

Σ_{z ∈ f^{-1}([E_a, E_b])} P_z.   (63)

On the other hand, we have that the eigenstates of H are given by U†|z⟩ with eigenvalue f(z). It follows that the spectral projection of H onto an interval [E_a, E_b] is given by

Π_{[E_a, E_b]} = Σ_{z ∈ f^{-1}([E_a, E_b])} U† |z⟩⟨z| U.   (64)

Consequently, defining ρ = |ψ⟩⟨ψ| and using Eqs. (63) and (64), we have that the probability that Procedure 1 outputs an energy value E ∈ [E_a − δ, E_b + δ] is at least Tr(Π_{[E_a, E_b]} ρ), which shows that Procedure 1 is an energy sampler with parameters δ = 2^{-l} and ε = 0.
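As a toy numerical check of Procedure 1 in the exact case β = 0, the sketch below uses an arbitrary orthogonal matrix as a stand-in for the diagonalizing circuit U and the illustrative eigenvalue function f(z) = z/2^n (both our own choices); the sampled l-digit outcomes match the ideal spectral-projection statistics:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy instance: H = U^dagger H_f U on n = 3 qubits, with illustrative
# eigenvalues f(z) = z / 2^n and an l-digit binary truncation of f.
n, l = 3, 2
dim = 2 ** n
U = np.linalg.qr(rng.normal(size=(dim, dim)))[0]  # stand-in for a circuit
f = np.arange(dim) / dim                          # eigenvalues f(z) in [0, 1)
psi = np.ones(dim) / np.sqrt(dim)                 # simple product input state

# Step 1: sample z with probability P_z = |<z|U|psi>|^2.
probs = np.abs(U @ psi) ** 2
z = rng.choice(dim, size=100_000, p=probs / probs.sum())

# Step 2: output the l-digit (binary) approximation of f(z).
outcomes = np.floor(f[z] * 2 ** l) / 2 ** l

# Compare empirical outcome frequencies to the ideal statistics Tr(Pi rho),
# i.e. the total weight P_z of each pre-image bin of the truncation map.
for E in np.unique(outcomes):
    bin_z = np.floor(f * 2 ** l) / 2 ** l == E
    print(E, abs((outcomes == E).mean() - probs[bin_z].sum()))  # small
```

Here the per-bin discrepancy is pure sampling noise; an imperfect (β > 0) sampler for U would additionally shift each bin weight by at most β in total, as analyzed next.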
Let us now consider the more general case where Procedure 1 has access to a β-approximate sampler for U, i.e., the outcome z is observed with probability P̃_z such that

Σ_z |P̃_z − P_z| ≤ β.   (66)

In this case, analogously to Eq. (60), we define q̃_m as the probability that the procedure described in the theorem outputs E_m = mδ, which is given by

q̃_m = Σ_{z ∈ f̃^{-1}(E_m)} P̃_z.   (67)

This estimation problem can be efficiently solved by sampling from quantum circuits, and is therefore in BQP. It is thus implausible that one can show that this problem is #P-hard, or even NP-hard, for then quantum computers would be able to solve such problems, which is, in turn, considered to be unlikely [79,86]. Hence, new techniques seem to be required to give complexity theoretic evidence for the classical hardness of approximate sampling problems with small output space.