Topological data analysis (TDA) is an emerging field of data analysis. A critical step in TDA is computing the persistent Betti numbers. Existing classical algorithms for TDA are limited when we want to learn from high-dimensional topological features, because the number of high-dimensional simplices grows exponentially with the size of the data. In the context of quantum computation, it was previously shown that an efficient quantum algorithm exists for estimating the Betti numbers, even in high dimensions. However, the Betti numbers are less general than the persistent Betti numbers, and no quantum algorithm has been able to estimate the persistent Betti numbers of arbitrary dimensions. This paper presents the first quantum algorithm that can estimate the ($normalized$) persistent Betti numbers of arbitrary dimensions. Our algorithm is efficient for simplicial complexes such as the Vietoris-Rips complex and demonstrates exponential speedup over the known classical algorithms.
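
As an illustrative classical sketch (not the paper's quantum algorithm), the Betti numbers of a small Vietoris-Rips complex can be computed from the ranks of boundary matrices over GF(2). The names and the square-of-four-points example are ours, chosen only to show why the simplex count, and hence this computation, blows up in higher dimensions.

```python
from itertools import combinations
from math import dist

def rips_complex(points, scale):
    """All vertices, plus edges and triangles whose pairwise distances are <= scale."""
    n = len(points)
    edges = [e for e in combinations(range(n), 2)
             if dist(points[e[0]], points[e[1]]) <= scale]
    eset = set(edges)
    tris = [t for t in combinations(range(n), 3)
            if all(p in eset for p in combinations(t, 2))]
    return edges, tris

def rank_gf2(rows):
    """Rank over GF(2) of a 0/1 matrix whose rows are given as integer bitmasks."""
    pivots = {}
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in pivots:
                row ^= pivots[lead]     # eliminate against existing pivot
            else:
                pivots[lead] = row      # new pivot row
                break
    return len(pivots)

# Four points on a unit square; the scale keeps the sides but not the diagonals,
# so the complex is a hollow square: one component, one loop.
points = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges, tris = rips_complex(points, 1.1)
eidx = {e: k for k, e in enumerate(edges)}
d1 = [(1 << i) | (1 << j) for i, j in edges]                        # boundary of edges
d2 = [sum(1 << eidx[e] for e in combinations(t, 2)) for t in tris]  # boundary of triangles
beta0 = len(points) - rank_gf2(d1)                  # connected components
beta1 = len(edges) - rank_gf2(d1) - rank_gf2(d2)    # independent 1-cycles
```

For this complex the computation gives `beta0 = 1` and `beta1 = 1`; in dimension $k$ the candidate simplex count is $\binom{n}{k+1}$, which is the exponential growth the abstract refers to.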

The quantum Fourier transform (QFT) is a key primitive for quantum computing that is typically used as a subroutine within a larger computation, for instance for phase estimation. As such, we may have little control over the state that is input to the QFT. Thus, in implementing a good QFT, we may imagine that it needs to perform well on arbitrary input states. $Verifying$ this worst-case correct behaviour of a QFT implementation would be exponentially hard (in the number of qubits) in general, raising the concern that this verification would be impossible in practice on any useful-sized system. In this paper we show that, in fact, we only need to have good $average$-$case$ performance of the QFT to achieve good $worst$-$case$ performance for key tasks – phase estimation, period finding and amplitude estimation. Further, we give a very efficient procedure to verify this required average-case behaviour of the QFT.
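
For concreteness, the primitive under discussion is just the unitary with matrix entries $F_{jk} = \omega^{jk}/\sqrt{N}$, $\omega = e^{2\pi i/N}$. A dense textbook construction (purely illustrative, and exponentially sized, which is exactly why direct worst-case verification does not scale):

```python
import cmath

def qft_matrix(n_qubits):
    """Dense QFT matrix F[j][k] = omega^(j*k) / sqrt(N) with omega = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    omega = cmath.exp(2j * cmath.pi / N)
    s = N ** -0.5
    return [[s * omega ** (j * k) for k in range(N)] for j in range(N)]

def apply(M, v):
    """Matrix-vector product for dense complex matrices."""
    return [sum(M[j][k] * v[k] for k in range(len(v))) for j in range(len(M))]

F = qft_matrix(3)
out = apply(F, [1] + [0] * 7)   # QFT of |000> is the uniform superposition
```

Checking the action on one basis state, as here, is easy; the point of the paper is that checking it on $all$ inputs is not, yet average-case checks suffice for the downstream tasks.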

We develop a framework of "semi-automatic differentiation" that combines existing gradient-based methods of quantum optimal control with automatic differentiation. The approach makes it possible to optimize practically any computable functional and is implemented in two open-source Julia packages, $\tt{GRAPE.jl}$ and $\tt{Krotov.jl}$, part of the $\tt{QuantumControl.jl}$ framework. Our method is based on formally rewriting the optimization functional in terms of propagated states, overlaps with target states, or quantum gates. An analytical application of the chain rule then separates the time propagation from the evaluation of the functional when calculating the gradient. The former can be evaluated with great efficiency via a modified GRAPE scheme. The latter is evaluated with automatic differentiation, but with a profoundly reduced complexity compared to the time propagation. Thus, our approach eliminates the prohibitive memory and runtime overhead normally associated with automatic differentiation and facilitates further advancement in quantum control by enabling the direct optimization of non-analytic functionals for quantum information and quantum metrology, especially in open quantum systems. We illustrate and benchmark the use of semi-automatic differentiation for the optimization of perfectly entangling quantum gates on superconducting qubits coupled via a shared transmission line. This includes the first direct optimization of the non-analytic gate concurrence.
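
The chain-rule separation can be seen in a deliberately tiny toy model (ours, not the packages' API): propagate a qubit with $U(\theta) = R_x(\theta)$, then differentiate the infidelity $J = 1 - |\langle t|U(\theta)|\psi_0\rangle|^2$ by combining an analytic "propagation" derivative with the derivative of the functional through the overlap, and cross-check against a finite difference.

```python
import cmath, math

def rx(theta):
    """Single-qubit rotation about x: exp(-i*theta*sigma_x/2)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def overlap(a, b):
    return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]

psi0, target, theta = [1, 0], [0, 1], 0.7

def J(th):
    """Functional: infidelity with the target state."""
    return 1 - abs(overlap(target, matvec(rx(th), psi0))) ** 2

# "Propagation" part: dU/dtheta = (-i/2) sigma_x U, so dpsi/dtheta is known analytically.
sx = [[0, 1], [1, 0]]
psi = matvec(rx(theta), psi0)
dpsi = [(-0.5j) * z for z in matvec(sx, psi)]
# "Functional" part: chain rule through the overlap c = <target|psi>.
c = overlap(target, psi)
grad = -2 * (c.conjugate() * overlap(target, dpsi)).real

fd = (J(theta + 1e-6) - J(theta - 1e-6)) / 2e-6   # finite-difference check
```

Here $J(\theta) = 1 - \sin^2(\theta/2)$, so the exact gradient is $-\sin(\theta)/2$; the analytic chain-rule value and the finite difference agree with it. In the actual framework the functional part is handled by automatic differentiation rather than by hand.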

Quantum computers may provide good solutions to combinatorial optimization problems by leveraging the Quantum Approximate Optimization Algorithm (QAOA). The QAOA is often presented as an algorithm for noisy hardware. However, hardware constraints limit its applicability to problem instances that closely match the connectivity of the qubits. Furthermore, the QAOA must outpace classical solvers. Here, we investigate swap strategies to map dense problems onto linear, grid, and heavy-hex coupling maps. A line-based swap strategy works best for linear and two-dimensional grid coupling maps. Heavy-hex coupling maps require an adaptation of the line swap strategy. By contrast, three-dimensional grid coupling maps benefit from a different swap strategy. Using known entropic arguments, we find that the required gate fidelity for dense problems lies deep below the fault-tolerant threshold. We also provide a methodology to reason about the execution time of QAOA. Finally, we present a QAOA Qiskit Runtime program and execute the closed-loop optimization on cloud-based quantum computers with transpiler settings optimized for QAOA. This work highlights several obstacles that must be overcome to make QAOA competitive, such as gate fidelity, gate speed, and the large number of shots needed. The Qiskit Runtime program gives us a tool to investigate such issues at scale on noisy superconducting qubit hardware.
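
The idea behind a line-based swap strategy can be sketched without any quantum SDK (this is a generic brick-wall swap network, not necessarily the paper's exact construction): alternating layers of parallel swaps on a linear chain bring every pair of logical qubits adjacent within $n$ layers, so a fully connected cost Hamiltonian can be implemented in depth $O(n)$.

```python
from itertools import combinations

def covered_pairs(n):
    """Logical-qubit pairs made adjacent by n layers of a brick-wall
    (odd-even) swap network on a line of n qubits."""
    pos = list(range(n))        # pos[i] = logical qubit at physical site i
    met = set()

    def record():
        for i in range(n - 1):
            met.add(tuple(sorted((pos[i], pos[i + 1]))))

    record()
    for layer in range(n):
        # one layer of parallel swaps, alternating between even and odd offsets
        for i in range(layer % 2, n - 1, 2):
            pos[i], pos[i + 1] = pos[i + 1], pos[i]
        record()
    return met

met = covered_pairs(6)   # all 15 pairs of 6 qubits become adjacent
```

Because the full network implements the reversal permutation, every pair of logical qubits crosses, and hence is adjacent, at some layer; that is what makes dense problem graphs implementable on sparse coupling maps.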

What is the minimum time required to take a temperature? In this paper, we answer this question for a large class of processes where temperature is inferred by measuring a probe (the thermometer) weakly coupled to the sample of interest, so that the probe's evolution is well described by a quantum Markovian master equation. Considering the most general control strategy on the probe (adaptive measurements, arbitrary control on the probe's state and Hamiltonian), we provide bounds on the achievable measurement precision in a finite amount of time, and show that in many scenarios these fundamental limits can be saturated with a relatively simple experiment. We find that for a general class of sample-probe interactions the scaling of the measurement uncertainty is inversely proportional to the time of the process, a shot-noise-like behaviour that arises due to the dissipative nature of thermometry. As a side result, we show that the Lamb shift induced by the probe-sample interaction can play a relevant role in thermometry, allowing for finite measurement resolution in the low-temperature regime. More precisely, the measurement uncertainty decays polynomially with the temperature as $T\rightarrow 0$, in contrast to the usual exponential decay with $T^{-1}$. We illustrate these general results for (i) a qubit probe interacting with a bosonic sample, where the role of the Lamb shift is highlighted, and (ii) a collective superradiant coupling between an $N$-qubit probe and a sample, which enables a quadratic decay with $N$ of the measurement uncertainty.

The rapid progress in the development of quantum devices is in large part due to the availability of a wide range of characterization techniques that allow them to be probed, tested and adjusted. Nevertheless, these methods often make use of approximations that hold in rather simplistic circumstances. In particular, the assumption that error mechanisms stay constant in time and have no dependence on the past will become untenable as quantum processors continue to scale up in depth and size. We establish a theoretical framework for the Randomized Benchmarking protocol encompassing temporally-correlated, so-called non-Markovian noise, at the gate level, for any gate set belonging to a wide class of finite groups. We obtain a general expression for the Average Sequence Fidelity (ASF) and propose a way to obtain average gate fidelities of full non-Markovian noise processes. Moreover, we obtain conditions that are fulfilled when an ASF displays authentic non-Markovian deviations. Finally, we show that even though gate-dependence does not translate into a perturbative term within the ASF, as in the Markovian case, the non-Markovian sequence fidelity nevertheless remains stable under small gate-dependent perturbations.

We study the classical simulatability of Gottesman-Kitaev-Preskill (GKP) states in combination with arbitrary displacements, a large set of symplectic operations and homodyne measurements. For these types of circuits, neither continuous-variable theorems based on the non-negativity of quasi-probability distributions nor discrete-variable theorems such as the Gottesman-Knill theorem can be employed to assess the simulatability. We first develop a method to evaluate the probability density function corresponding to measuring a single GKP state in the position basis following arbitrary squeezing and a large set of rotations. This method involves evaluating a transformed Jacobi theta function using techniques from analytic number theory. We then use this result to identify two large classes of multimode circuits which are classically efficiently simulatable and are not contained by the GKP encoded Clifford group. Our results extend the set of circuits previously known to be classically efficiently simulatable.
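
Numerically, Jacobi theta functions are friendly objects: the series $\vartheta_3(0,q) = 1 + 2\sum_{n\ge 1} q^{n^2}$ converges extremely fast for $|q|<1$. As a toy stand-in for the theta-function evaluations the paper relies on (not its transformed variant), one can check a truncated series against the classical special value $\vartheta_3(0, e^{-\pi}) = \pi^{1/4}/\Gamma(3/4)$:

```python
import math

def theta3(q, terms=50):
    """theta_3(0, q) = 1 + 2 * sum_{n>=1} q^(n^2), valid for |q| < 1.
    The terms decay like q^(n^2), so very few are needed."""
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms + 1))

q = math.exp(-math.pi)                     # nome at tau = i
val = theta3(q)
ref = math.pi ** 0.25 / math.gamma(0.75)   # classical closed form for theta_3(0, e^{-pi})
```

The quadratic exponent in the series is what makes such evaluations cheap, which is one ingredient in turning the measurement density into something classically computable.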

Quantum fully homomorphic encryption (QFHE) enables the evaluation of quantum circuits on encrypted data. We present a novel QFHE scheme that extends Pauli one-time pad encryption by relying on the quaternion representation of SU(2). With the scheme, evaluating 1-qubit gates is more efficient, and evaluating general quantum circuits is polynomially improved in asymptotic complexity. Technically, a new encrypted multi-bit control technique is proposed, which makes it possible to perform any 1-qubit gate whose parameters are given in encrypted form. With this technique, we establish a conversion between the new encryption and the previous Pauli one-time pad encryption, bridging our QFHE scheme with previous ones. This technique is also useful for private quantum circuit evaluation. The security of the scheme relies on the hardness of the underlying quantum-capable FHE scheme, which in turn bases its security on the learning with errors problem and the circular security assumption.
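
The quaternion representation mentioned here is the standard group isomorphism between unit quaternions and SU(2); the scheme's constructions build on it, not on the toy check below. Under the usual embedding $q = a + b\,i + c\,j + d\,k \mapsto \begin{pmatrix} a+bi & c+di \\ -c+di & a-bi \end{pmatrix}$, quaternion multiplication matches $2\times 2$ matrix multiplication:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def to_su2(q):
    """Standard embedding of a unit quaternion into SU(2)."""
    a, b, c, d = q
    return [[complex(a, b), complex(c, d)],
            [complex(-c, d), complex(a, -b)]]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p = tuple(x / math.sqrt(30) for x in (1, 2, 3, 4))   # unit quaternion
q = tuple(x / math.sqrt(4) for x in (1, -1, 1, 1))   # unit quaternion
lhs = to_su2(qmul(p, q))        # multiply as quaternions, then embed
rhs = mmul(to_su2(p), to_su2(q))  # embed, then multiply as matrices
```

The practical appeal is that composing 1-qubit gates reduces to four-component real arithmetic, which is one reason a quaternion parameterization can make encrypted 1-qubit gate evaluation cheaper.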

One of the key ways in which quantum mechanics differs from relativity is that it requires a fixed background reference frame for spacetime. In fact, this appears to be one of the main conceptual obstacles to uniting the two theories. Additionally, a combination of the two theories is expected to yield non-classical, or "indefinite", causal structures. In this paper, we present a background-independent formulation of the process matrix formalism – a form of quantum mechanics that allows for indefinite causal structure – while retaining operationally well-defined measurement statistics. We do this by postulating an arbitrary probability distribution of measurement outcomes across discrete "chunks" of spacetime, which we think of as physical laboratories, and then requiring that this distribution be invariant under any permutation of laboratories. We find (a) that one still obtains nontrivial, indefinite causal structures with background independence, (b) that we lose the idea of local operations in distinct laboratories, but can recover it by encoding a reference frame into the physical states of our system, and (c) that permutation invariance imposes surprising symmetry constraints that, although formally similar to a superselection rule, cannot be interpreted as such.

Does gravity constrain computation? We study this question using the AdS/CFT correspondence, where computation in the presence of gravity can be related to non-gravitational physics in the boundary theory. In AdS/CFT, computations which happen locally in the bulk are implemented in a particular non-local form in the boundary, which in general requires distributed entanglement. In more detail, we recall that for a large class of bulk subregions the area of a surface called the ridge is equal to the mutual information available in the boundary to perform the computation non-locally. We then argue that the complexity of the local operation controls the amount of entanglement needed to implement it non-locally; in particular, complexity and entanglement cost are related by a polynomial. If this relationship holds, gravity constrains the complexity of operations within these regions to be polynomial in the area of the ridge.

Gaussian boson sampling is a model of photonic quantum computing that has attracted attention as a platform for building quantum devices capable of performing tasks that are out of reach for classical devices. There is therefore significant interest, from the perspective of computational complexity theory, in solidifying the mathematical foundation for the hardness of simulating these devices. We show that, under the standard Anti-Concentration and Permanent-of-Gaussians conjectures, there is no efficient classical algorithm to sample from ideal Gaussian boson sampling distributions (even approximately) unless the polynomial hierarchy collapses. The hardness proof holds in the regime where the number of modes scales quadratically with the number of photons, a setting in which hardness was widely believed to hold but that nevertheless had no definitive proof. Crucial to the proof is a new method for programming a Gaussian boson sampling device so that the output probabilities are proportional to the permanents of submatrices of an arbitrary matrix. This technique is a generalization of Scattershot BosonSampling that we call BipartiteGBS. We also make progress towards the goal of proving hardness in the regime where there are fewer than quadratically more modes than photons (i.e., the high-collision regime) by showing that the ability to approximate permanents of matrices with repeated rows/columns confers the ability to approximate permanents of matrices with no repetitions. The reduction suffices to prove that GBS is hard in the constant-collision regime.
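
Since the output probabilities are tied to matrix permanents, the classical cost of the permanent is the crux. As a reference point (ours, for illustration), Ryser's inclusion-exclusion formula computes the permanent in $O(2^n n^2)$ time rather than the naive $O(n!\,n)$, which is still exponential:

```python
import math
from itertools import permutations

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i sum_{j in S} A[i][j]."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):       # nonempty column subsets as bitmasks
        prod = 1
        for i in range(n):
            s = 0
            for j in range(n):
                if mask >> j & 1:
                    s += A[i][j]
            prod *= s
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total

def permanent_naive(A):
    """Definition: sum over permutations of products of matched entries."""
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2, 0, 1],
     [3, 1, 1, 0],
     [0, 2, 1, 2],
     [1, 0, 3, 1]]
```

Unlike the determinant, no polynomial-time algorithm is known (the permanent is #P-hard), which is what lends sampling-hardness arguments of this kind their force.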

The data processing inequality is the most basic requirement for any meaningful measure of information. It essentially states that distinguishability measures between states decrease when a quantum channel is applied, and it is the centerpiece of many results in information theory. Moreover, it justifies the operational interpretation of most entropic quantities. In this work, we revisit the notion of contraction coefficients of quantum channels, which provide sharper and specialized versions of the data processing inequality. A concept closely related to data processing is partial orders on quantum channels. First, we discuss several quantum extensions of the well-known less noisy ordering and relate them to contraction coefficients. We further define approximate versions of the partial orders and show how they can give strengthened and conceptually simple proofs of several results on approximating capacities. Moreover, we investigate the relation to other partial orders in the literature and their properties, particularly with regard to tensorization. We then examine the relation between contraction coefficients and other properties of quantum channels, such as hypercontractivity. Next, we extend the framework of contraction coefficients to general f-divergences and prove several structural results. Finally, we consider two important classes of quantum channels, namely Weyl-covariant and bosonic Gaussian channels. For those, we determine new contraction coefficients and relations for various partial orders.
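
A minimal concrete instance of a contraction coefficient (our toy example, not one of the paper's new results): for the qubit depolarizing channel $\mathcal{N}_p(\rho) = (1-p)\rho + p\,I/2$, the trace distance between any two states contracts by exactly the factor $1-p$, since the channel acts linearly on the traceless difference $\rho - \sigma$:

```python
def depolarize(rho, p):
    """Qubit depolarizing channel rho -> (1-p) * rho + p * I/2."""
    return [[(1 - p) * rho[0][0] + p / 2, (1 - p) * rho[0][1]],
            [(1 - p) * rho[1][0], (1 - p) * rho[1][1] + p / 2]]

def trace_distance(rho, sigma):
    """(1/2)||rho - sigma||_1 for qubit states. The difference is Hermitian
    and traceless, with eigenvalues +/- sqrt(z^2 + |w|^2)."""
    z = (rho[0][0] - sigma[0][0]).real
    w = rho[0][1] - sigma[0][1]
    return (z * z + abs(w) ** 2) ** 0.5

rho = [[1.0, 0.0], [0.0, 0.0]]        # |0><0|
sigma = [[0.5, 0.4], [0.4, 0.5]]      # a mixed state with coherence
p = 0.3
ratio = trace_distance(depolarize(rho, p), depolarize(sigma, p)) / trace_distance(rho, sigma)
```

Here the ratio equals $1-p$ for every pair of inputs, so the data processing inequality is saturated with contraction coefficient exactly $1-p$; for general channels and general divergences the coefficient is harder to pin down, which is the subject of the paper.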

Approximate combinatorial optimisation has emerged as one of the most promising application areas for quantum computers, particularly those in the near term. In this work, we focus on the quantum approximate optimisation algorithm (QAOA) for solving the MaxCut problem. Specifically, we address two problems in the QAOA: how to initialise the algorithm, and how to subsequently train the parameters to find an optimal solution. For the former, we propose graph neural networks (GNNs) as a warm-starting technique for QAOA. We demonstrate that merging GNNs with QAOA can outperform both approaches individually. Furthermore, we demonstrate how graph neural networks enable warm-start generalisation not only across graph instances, but also to increasing graph sizes, a feature not straightforwardly available to other warm-starting methods. For training the QAOA, we test several optimisers for the MaxCut problem up to 16 qubits and benchmark against vanilla gradient descent. These include quantum-aware/agnostic and machine-learning-based/neural optimisers; examples of the latter include reinforcement learning and meta-learning. With the incorporation of these initialisation and optimisation toolkits, we demonstrate how the optimisation problems can be solved using QAOA in an end-to-end differentiable pipeline.
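
The target problem itself is easy to state exactly on toy instances (this brute-force baseline is ours, included only to fix the objective any warm-started QAOA is approximating): partition the vertices into two sets to maximise the number of edges crossing the partition.

```python
from itertools import product

def maxcut_bruteforce(n, edges):
    """Exact MaxCut by exhaustive search over all 2^n bipartitions.
    Only viable for small n; the problem is NP-hard in general."""
    best_val, best_cut = -1, None
    for bits in product((0, 1), repeat=n):
        val = sum(1 for u, v in edges if bits[u] != bits[v])
        if val > best_val:
            best_val, best_cut = val, bits
    return best_val, best_cut

# 5-cycle: an odd cycle cannot have all its edges cut, so the optimum is 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
val, cut = maxcut_bruteforce(5, edges)
```

A warm start, from a GNN or otherwise, amounts to biasing the initial state or parameters towards a good bipartition of this kind rather than starting from a uniform guess.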

Quantum simulation is a prominent application of quantum computers. While there is extensive previous work on simulating finite-dimensional systems, less is known about quantum algorithms for real-space dynamics. We conduct a systematic study of such algorithms. In particular, we show that the dynamics of a $d$-dimensional Schrödinger equation with $\eta$ particles can be simulated with gate complexity $\tilde{O}\bigl(\eta d F \text{poly}(\log(g'/\epsilon))\bigr)$, where $\epsilon$ is the discretization error, $g'$ controls the higher-order derivatives of the wave function, and $F$ measures the time-integrated strength of the potential. Compared to the best previous results, this exponentially improves the dependence on $\epsilon$ and $g'$ from $\text{poly}(g'/\epsilon)$ to $\text{poly}(\log(g'/\epsilon))$ and polynomially improves the dependence on $T$ and $d$, while maintaining best known performance with respect to $\eta$. For the case of Coulomb interactions, we give an algorithm using $\eta^{3}(d+\eta)T\text{poly}(\log(\eta dTg'/(\Delta\epsilon)))/\Delta$ one- and two-qubit gates, and another using $\eta^{3}(4d)^{d/2}T\text{poly}(\log(\eta dTg'/(\Delta\epsilon)))/\Delta$ one- and two-qubit gates and QRAM operations, where $T$ is the evolution time and the parameter $\Delta$ regulates the unbounded Coulomb interaction. We give applications to several computational problems, including faster real-space simulation of quantum chemistry, rigorous analysis of discretization error for simulation of a uniform electron gas, and a quadratic improvement to a quantum algorithm for escaping saddle points in nonconvex optimization.

Invariance under Lorentz transformations is fundamental to both the standard model and general relativity. Testing Lorentz-symmetry violation (LSV) via atomic systems has attracted extensive interest in both theory and experiment. In several test proposals, the LSV effects are described as a local interaction and the corresponding test precision can asymptotically reach the Heisenberg limit via increasing quantum Fisher information (QFI), but the limited resolution of collective observables prevents the detection of large QFI. Here, we propose a multimode many-body quantum interferometry for testing the LSV parameter $\kappa$ via an ensemble of spinor atoms. By employing an $N$-atom multimode GHZ state, the test precision can attain the Heisenberg limit $\Delta \kappa \propto 1/(F^2N)$ with the spin length $F$ and the atom number $N$. We find a realistic observable (i.e. a practical measurement process) to achieve the ultimate precision and analyze the LSV test via an experimentally accessible three-mode interferometry with Bose condensed spin-$1$ atoms as an example. By selecting suitable input states and a unitary recombination operation, the LSV parameter $\kappa$ can be extracted via realizable population measurements. In particular, the measurement precision of the LSV parameter $\kappa$ can beat the standard quantum limit and even approach the Heisenberg limit via spin mixing dynamics or driving through quantum phase transitions. Moreover, the scheme is robust against nonadiabatic effects and detection noise. Our test scheme may open up a feasible way to drastically improve LSV tests with atomic systems and provides an alternative application of multi-particle entangled states.

I show that if a finite-dimensional density matrix has strictly smaller von Neumann entropy than a second one of the same dimension (and its rank is not bigger), then sufficiently (but finitely) many tensor copies of the first density matrix majorize a density matrix whose single-body marginals are all exactly equal to the second density matrix. This implies an affirmative solution of the exact catalytic entropy conjecture (CEC) introduced by Boes et al. [PRL 122, 210402 (2019)]. Both the lemma and the solution to the CEC transfer to the classical setting of finite-dimensional probability vectors (with permutations of entries instead of unitary transformations for the CEC).
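For the classical setting mentioned at the end, the majorization relation between probability vectors can be checked directly. A minimal sketch (illustrative only, not the paper's construction) compares the sorted partial sums:

```python
def majorizes(p, q, tol=1e-12):
    """Return True if probability vector p majorizes q, i.e. the partial
    sums of p sorted in decreasing order dominate those of q."""
    ps = sorted(p, reverse=True)
    qs = sorted(q, reverse=True)
    cp = cq = 0.0
    for a, b in zip(ps, qs):
        cp += a
        cq += b
        if cp < cq - tol:
            return False
    return True

# A sharper distribution majorizes the uniform one, consistent with
# its strictly smaller Shannon entropy.
print(majorizes([0.7, 0.2, 0.1], [1/3, 1/3, 1/3]))  # True
print(majorizes([1/3, 1/3, 1/3], [0.7, 0.2, 0.1]))  # False
```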

In this letter, we propose how to measure the quantized nonlinear transport using two-dimensional ultracold atomic Fermi gases in a harmonic trap. This scheme requires successively applying two optical pulses in the left and lower half-planes and then measuring the number of extra atoms in the first quadrant. In ideal situations, this nonlinear density response to two successive pulses is quantized, and the quantization value probes the Euler characteristic of the local Fermi sea at the trap center. We investigate the practical effects in experiments, including finite pulse duration, finite edge width of pulses, and finite temperature, which can lead to deviation from quantization. We propose a method to reduce the deviation by averaging measurements performed at the first and third quadrants, inspired by symmetry considerations. With this method, the quantized nonlinear response can be observed reasonably well with experimental conditions readily achieved with ultracold atoms.

We systematically investigate the robustness of symmetry protected topological (SPT) order in open quantum systems by studying the evolution of string order parameters and other probes under noisy channels. We find that one-dimensional SPT order is robust against noisy couplings to the environment that satisfy a strong symmetry condition, while it is destabilized by noise that satisfies only a weak symmetry condition, which generalizes the notion of symmetry for closed systems. We also discuss "transmutation" of SPT phases into other SPT phases of equal or lesser complexity, under noisy channels that satisfy twisted versions of the strong symmetry condition.

Even after decades of quantum computing development, examples of generally useful quantum algorithms with exponential speedups over classical counterparts are scarce. Recent progress in quantum algorithms for linear algebra has positioned quantum machine learning (QML) as a potential source of such useful exponential improvements. Yet, in an unexpected development, a recent series of "dequantization" results has equally rapidly removed the promise of exponential speedups for several QML algorithms. This raises the critical question of whether exponential speedups of other linear-algebraic QML algorithms persist. In this paper, we study the quantum-algorithmic methods behind the algorithm for topological data analysis of Lloyd, Garnerone and Zanardi through this lens. We provide evidence that the problem solved by this algorithm is classically intractable by showing that its natural generalization is as hard as simulating the one clean qubit model – which is widely believed to require superpolynomial time on a classical computer – and is thus very likely immune to dequantization. Based on this result, we provide a number of new quantum algorithms for problems such as rank estimation and complex network analysis, along with complexity-theoretic evidence for their classical intractability. Furthermore, we analyze the suitability of the proposed quantum algorithms for near-term implementations. Our results provide a number of useful applications for full-blown and restricted quantum computers with a guaranteed exponential speedup over classical methods, recovering some of the potential for linear-algebraic QML to become one of quantum computing's killer applications.

We put forward a simple construction of genuinely entangled subspaces – subspaces supporting only genuinely multipartite entangled states – of any permissible dimensionality for any number of parties and local dimensions. The method uses nonorthogonal product bases, which are built from totally nonsingular matrices with a certain structure. We give an explicit basis for the constructed subspaces. An immediate consequence of our result is the possibility of constructing, in the general multiparty scenario, genuinely multiparty entangled mixed states with ranks up to the maximal dimension of a genuinely entangled subspace.

We study the partially ordered set of equivalence classes of quantum measurements endowed with the post-processing partial order. The post-processing order is fundamental as it enables us to compare measurements by their intrinsic noise, and it gives grounds to define the important concept of quantum incompatibility. Our approach is based on mapping this set into a simpler partially ordered set using an order-preserving map and investigating the resulting image. The aim is to ignore unnecessary details while keeping the essential structure, thereby simplifying, e.g., the detection of incompatibility. One possible choice is the map based on Fisher information introduced by Huangjun Zhu, known to be an order morphism taking values in the cone of positive semidefinite matrices. We explore the properties of that construction and improve Zhu's incompatibility criterion by adding a constraint depending on the number of measurement outcomes. We generalize this type of construction to other ordered vector spaces and show that this map is optimal among all quadratic maps.

Quantum chaos cannot develop faster than $\lambda \leq 2 \pi/(\hbar \beta)$ for systems in thermal equilibrium [Maldacena, Shenker & Stanford, JHEP (2016)]. This `MSS bound' on the Lyapunov exponent $\lambda$ is set by the width of the strip on which the regularized out-of-time-order correlator is analytic. We show that similar constraints also bound the decay of the spectral form factor (SFF), which measures spectral correlations and is defined from the Fourier transform of the two-level correlation function. Specifically, the $\textit{inflection exponent}$ $\eta$, which we introduce to characterize the early-time decay of the SFF, is bounded as $\eta\leq \pi/(2\hbar\beta)$. This bound is universal and exists outside of the chaotic regime. The results are illustrated in systems with regular, chaotic, and tunable dynamics, namely the single-particle harmonic oscillator, the many-particle Calogero-Sutherland model, an ensemble from random matrix theory, and the quantum kicked top. The relation of the derived bound to other known bounds, including quantum speed limits, is discussed.
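Given a spectrum, the SFF is straightforward to evaluate numerically. The toy sketch below (not from the paper) assumes the standard definition via the analytically continued partition function, $\mathrm{SFF}(t) = |Z(\beta + it)|^2 / Z(\beta)^2$, on an arbitrary illustrative spectrum; the early-time decay of such curves is what the inflection exponent characterizes:

```python
import numpy as np

def sff(eigvals, times, beta=0.0):
    """Spectral form factor |Z(beta + i t)|^2 / Z(beta)^2 computed
    directly from a list of energy eigenvalues."""
    E = np.asarray(eigvals, dtype=float)
    Z0 = np.exp(-beta * E).sum()
    out = []
    for t in times:
        Z = np.exp(-(beta + 1j * t) * E).sum()
        out.append(abs(Z) ** 2 / Z0 ** 2)
    return np.array(out)

# At t = 0 the SFF equals 1 by construction, then decays at early times.
levels = np.linspace(0.0, 1.0, 50)  # toy equally spaced spectrum
values = sff(levels, times=[0.0, 1.0, 5.0], beta=0.5)
print(round(values[0], 6))  # 1.0
```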

In this work, the concept of mutually unbiased frames is introduced as the most general notion of unbiasedness for sets composed of linearly independent and normalized vectors. It encompasses the existing notions of unbiasedness for orthonormal bases, regular simplices, equiangular tight frames, and positive operator-valued measures, and also includes symmetric informationally complete quantum measurements. After introducing the tool, its power is shown by establishing the following results about the last-mentioned class of constellations: (i) real fiducial states do not exist in any even dimension, and (ii) unknown $d$-dimensional fiducial states are parameterized, a priori, with roughly $3d/2$ real variables only, without loss of generality. Furthermore, multi-parametric families of pure quantum states having minimum uncertainty with regard to several choices of $d+1$ orthonormal bases are shown, in every dimension $d$. These families contain all existing fiducial states in every finite dimension, and the bases include maximal sets of $d+1$ mutually unbiased bases when $d$ is a prime number.
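For orthonormal bases, the unbiasedness condition underlying these notions is the familiar one, $|\langle b_i|c_j\rangle|^2 = 1/d$ for all $i,j$. A minimal numerical check (illustrative only; the function name is an assumption, not the paper's formalism):

```python
import numpy as np

def mutually_unbiased(B1, B2, tol=1e-10):
    """Check |<b_i|c_j>|^2 = 1/d for two orthonormal bases given as
    the columns of d x d unitary matrices."""
    d = B1.shape[0]
    overlaps = np.abs(B1.conj().T @ B2) ** 2
    return np.allclose(overlaps, 1.0 / d, atol=tol)

# In d = 2 the computational (Z) and Hadamard (X) bases are mutually
# unbiased; a basis is never unbiased with respect to itself.
Z = np.eye(2)
X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(mutually_unbiased(Z, X))  # True
print(mutually_unbiased(Z, Z))  # False
```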

Symmetric quantum signal processing provides a parameterized representation of a real polynomial, which can be translated into an efficient quantum circuit for performing a wide range of computational tasks on quantum computers. For a given polynomial $f$, the parameters (called phase factors) can be obtained by solving an optimization problem. However, the cost function is non-convex, and has a very complex energy landscape with numerous global and local minima. It is therefore surprising that the solution can be robustly obtained in practice, starting from a fixed initial guess $\Phi^0$ that contains no information of the input polynomial. To investigate this phenomenon, we first explicitly characterize all the global minima of the cost function. We then prove that one particular global minimum (called the maximal solution) belongs to a neighborhood of $\Phi^0$, on which the cost function is strongly convex under the condition ${\left\lVert f\right\rVert}_{\infty}=\mathcal{O}(d^{-1})$ with $d=\mathrm{deg}(f)$. Our result provides a partial explanation of the aforementioned success of optimization algorithms.

Effect algebras were introduced as an abstract algebraic model for Hilbert space effects representing quantum mechanical measurements. We study additional structures on an effect algebra $E$ that enable us to define spectrality and spectral resolutions for elements of $E$ akin to those of self-adjoint operators. These structures, called compression bases, are special families of maps on $E$, analogous to the set of compressions on operator algebras, order unit spaces or unital abelian groups. Elements of a compression base are in one-to-one correspondence with certain elements of $E$, called projections. An effect algebra is called spectral if it has a distinguished compression base with two special properties: the projection cover property (i.e., for every element $a$ in $E$ there is a smallest projection majorizing $a$), and the so-called b-comparability property, which is an analogue of general comparability in operator algebras or unital abelian groups. It is shown that in a spectral archimedean effect algebra $E$, every $a\in E$ admits a unique rational spectral resolution and its properties are studied. If in addition $E$ possesses a separating set of states, then every element $a\in E$ is determined by its spectral resolution. It is also proved that for some types of interval effect algebras (with RDP, archimedean divisible), spectrality of $E$ is equivalent to spectrality of its universal group and the corresponding rational spectral resolutions are the same. In particular, for convex archimedean effect algebras, spectral resolutions in $E$ are in agreement with spectral resolutions in the corresponding order unit space.

Preparing macroscopic mechanical resonators close to their motional quantum ground state and generating entanglement with light offer great opportunities in studying fundamental physics and in developing a new generation of quantum applications. Here we propose an experimentally attractive scheme, particularly well suited for systems in the sideband-unresolved regime, based on coherent feedback with linear, passive optical components to achieve ground-state cooling and photon-phonon entanglement generation with optomechanical devices. We find that, by introducing an additional passive element – either a narrow-linewidth cavity or a mirror with a delay line – an optomechanical system in the deeply sideband-unresolved regime will exhibit dynamics similar to one that is sideband-resolved. With this new approach, the experimental realization of ground-state cooling and optomechanical entanglement is well within reach of current integrated state-of-the-art high-Q mechanical resonators.

Tracing out the environmental degrees of freedom is a necessary procedure when simulating open quantum systems. While being an essential step in deriving a tractable master equation it represents a loss of information. In situations where there is strong interplay between the system and environmental degrees of freedom this loss makes understanding the dynamics challenging. These dynamics, when viewed in isolation, have no time-local description: they are non-Markovian and memory effects induce complex features that are difficult to interpret. To address this problem, we here show how to use system correlations, calculated by any method, to infer any correlation function of a Gaussian environment, so long as the coupling between system and environment is linear. This not only allows reconstruction of the full dynamics of both system and environment, but also opens avenues into studying the effect of a system on its environment. In order to obtain accurate bath dynamics, we exploit a numerically exact approach to simulating the system dynamics, which is based on the construction and contraction of a tensor network that represents the process tensor of this open quantum system. Using this we are able to find any system correlation function exactly. To demonstrate the applicability of our method we show how heat moves between different modes of a bosonic bath when coupled to a two-level system that is subject to an off-resonant drive.

Stabilizer states and graph states find application in quantum error correction, measurement-based quantum computation, and various other concepts in quantum information theory. In this work, we study party-local Clifford (PLC) transformations among stabilizer states. These transformations arise as a physically motivated extension of local operations in quantum networks with access to bipartite entanglement between some of the nodes of the network. First, we show that PLC transformations among graph states are equivalent to a generalization of the well-known local complementation, which describes local Clifford transformations among graph states. Then, we introduce a mathematical framework to study PLC equivalence of stabilizer states, relating it to the classification of tuples of bilinear forms. This framework allows us to study decompositions of stabilizer states into tensor products of indecomposable ones, that is, decompositions into states from the entanglement generating set (EGS). While the EGS is finite for up to $3$ parties [Bravyi et al., J. Math. Phys. $\bf{47}$, 062106 (2006)], we show that for $4$ or more parties it is an infinite set, even when considering party-local unitary transformations. Moreover, we explicitly compute the EGS for $4$ parties up to $10$ qubits. Finally, we generalize the framework to qudit stabilizer states in prime dimensions other than $2$, which allows us to show that the decomposition of qudit stabilizer states into states from the EGS is unique.
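Local complementation, which the abstract generalizes, is simple to state at the graph level: at a vertex $v$, complement the induced subgraph on the neighborhood of $v$. Graph states whose graphs are related by sequences of such moves are equivalent under local Clifford operations. A minimal sketch (the function name and edge-set representation are illustrative, not from the paper):

```python
from itertools import combinations

def local_complement(edges, v):
    """Return the edge set after local complementation at vertex v."""
    edges = {frozenset(e) for e in edges}
    neighbors = {u for e in edges for u in e if v in e and u != v}
    for pair in combinations(sorted(neighbors), 2):
        edges ^= {frozenset(pair)}  # toggle each edge inside the neighborhood
    return edges

# Path graph 0-1-2: local complementation at the middle vertex
# turns it into a triangle, and applying it again undoes the move.
path = {(0, 1), (1, 2)}
triangle = local_complement(path, 1)
```

The operation is an involution at each vertex, which the second application illustrates; the PLC setting of the paper replaces single-vertex moves by a generalization acting across parties.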

Quantum information science provides powerful technologies beyond the scope of classical physics. In practice, accurate control of quantum operations is a challenging task with current quantum devices. Implementing high-fidelity, multi-qubit quantum operations consumes massive resources and requires complicated hardware design to combat noise. One approach to alleviating this problem is to replace quantum operations with classical processing. Although this approach is common practice, rigorous criteria for determining whether a given quantum operation can be replaced classically are still missing. In this work, we define classically replaceable operations in four general scenarios. In each scenario, we provide necessary and sufficient criteria and specify the corresponding classical processing. For the practically favorable case of unitary classically replaceable operations, we show that the replacing classical processing is deterministic. Beyond that, we regard the irreplaceability of quantum operations by classical processing as a quantum resource and relate it to the performance of a channel in a non-local game, as manifested in a robustness measure.
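The paper's four scenarios and their criteria are not reproduced here, but a textbook instance of a classically replaceable operation is the completely dephasing channel: measuring in the computational basis and re-preparing the observed basis state reproduces the channel's output on average, so purely classical processing of measurement records suffices. A minimal sketch of this standard example (not drawn from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random single-qubit density matrix
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = M @ M.conj().T
rho /= np.trace(rho)

# Quantum description: completely dephasing channel rho -> sum_i |i><i|rho|i><i|
ketbra = [np.outer(e, e) for e in np.eye(2)]
rho_out = sum(P @ rho @ P for P in ketbra)

# Classical replacement: measure in the computational basis, record the
# outcome classically, and re-prepare the corresponding basis state;
# averaged over outcomes, the two descriptions coincide.
probs = np.real(np.diag(rho))
rho_classical = sum(p * P for p, P in zip(probs, ketbra))

agree = np.allclose(rho_out, rho_classical)
```

A generic unitary, by contrast, cannot be reproduced this way, which is the sense in which irreplaceability behaves as a quantum resource in the abstract.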
