The $p$-stage Quantum Approximate Optimization Algorithm (QAOA$_p$) is a promising approach for combinatorial optimization on noisy intermediate-scale quantum (NISQ) devices, but its theoretical behavior is not well understood beyond $p=1$. We analyze QAOA$_2$ for the $\textit{maximum cut problem}$ (MAX-CUT), deriving a graph-size-independent expression for the expected cut fraction on any $D$-regular graph of girth $> 5$ (i.e. without triangles, squares, or pentagons). We show that for all degrees $D \ge 2$ and every $D$-regular graph $G$ of girth $> 5$, QAOA$_2$ has a larger expected cut fraction than QAOA$_1$ on $G$. However, we also show that there exists a $2$-local randomized $\textit{classical}$ algorithm $A$ such that $A$ has a larger expected cut fraction than QAOA$_2$ on all $G$. This supports our conjecture that for every constant $p$, there exists a local classical MAX-CUT algorithm that performs as well as QAOA$_p$ on all graphs.

For further reading, see our presentation, Local Competitors to QAOA.

Suppose we want to implement a unitary $U$, for instance a circuit for some quantum algorithm. Suppose our actual implementation is a unitary $\tilde{U}$, which we can only apply as a black box. In general, it is an exponentially hard task to decide whether $\tilde{U}$ equals the intended $U$ or differs from it significantly in a worst-case norm. In this paper we consider two special cases where relatively efficient and lightweight procedures exist for this task. First, we give an efficient procedure under the assumption that $U$ and $\tilde{U}$ (both of which we can now apply as a black box) are either equal, or differ significantly in only one $k$-qubit gate, where $k=O(1)$ (the $k$ qubits need not be contiguous). Second, we give an even more lightweight procedure under the assumption that $U$ and $\tilde{U}$ are $\textit{Clifford}$ circuits which are either equal, or different in arbitrary ways (the specification of $U$ is now given classically, while $\tilde{U}$ can still only be applied as a black box). Both procedures need to run $\tilde{U}$ only a constant number of times to detect a constant error in a worst-case norm. We note that the Clifford result also follows from earlier work of Flammia and Liu, and da Silva, Landon-Cardinal, and Poulin. In the Clifford case, our error-detection procedure also allows us to efficiently learn (and hence correct) $\tilde{U}$ if we have a small list of possible errors that could have happened to $U$; for example, if we know that only $O(1)$ of the gates of $\tilde{U}$ are wrong, this list is polynomially small and we can test each possible erroneous version of $U$ for equality with $\tilde{U}$.

We investigate the conditions under which uncontrollable background processes may be harnessed by an agent to perform a task that would otherwise be impossible within their operational framework. This situation can be understood from the perspective of resource theory: rather than harnessing 'useful' quantum states to perform tasks, we propose a resource theory of quantum processes across multiple points in time. Uncontrollable background processes fulfil the role of resources, and a new set of objects called $\textit{superprocesses}$, corresponding to operationally implementable control of the system undergoing the process, constitute the transformations between them. After formally introducing a framework for deriving resource theories of multi-time processes, we present a hierarchy of examples induced by restricting quantum or classical communication within the superprocess, corresponding to a client-server scenario. The resulting nine resource theories have different notions of quantum or classical memory as the determinant of their utility. Furthermore, one of these theories has a strict correspondence between non-useful processes and those that are Markovian and, therefore, could be said to be a true 'quantum resource theory of non-Markovianity'.

Hybrid quantum-classical algorithms are actively examined as techniques applicable even to intermediate-scale quantum computers. To execute such an algorithm, the hardware-efficient ansatz is often used, thanks to its implementability and expressibility; however, this ansatz has a critical issue with trainability, in the sense that it generically suffers from the so-called gradient vanishing problem. This issue can be resolved by restricting the circuit to the class of shallow alternating layered ansatzes. However, even though the high trainability of this ansatz has been proved, it is still unclear whether it has rich expressibility for state generation. In this paper, with a proper definition of expressibility found in the literature, we show that the shallow alternating layered ansatz has almost the same level of expressibility as the hardware-efficient ansatz. Hence expressibility and trainability can coexist, giving a new design method for quantum circuits in the intermediate-scale quantum computing era.

We significantly reduce the cost of factoring integers and computing discrete logarithms in finite fields on a quantum computer by combining techniques from Shor 1994, Griffiths-Niu 1996, Zalka 2006, Fowler 2012, Ekerå-Håstad 2017, Ekerå 2017, Ekerå 2018, Gidney-Fowler 2019, Gidney 2019. We estimate the approximate cost of our construction using plausible physical assumptions for large-scale superconducting qubit platforms: a planar grid of qubits with nearest-neighbor connectivity, a characteristic physical gate error rate of $10^{-3}$, a surface code cycle time of 1 microsecond, and a reaction time of 10 microseconds. We account for factors that are normally ignored, such as noise, the need to make repeated attempts, and the spacetime layout of the computation. When factoring 2048-bit RSA integers, our construction's spacetime volume is a hundredfold less than comparable estimates from earlier works (Van Meter et al. 2009, Jones et al. 2010, Fowler et al. 2012, Gheorghiu et al. 2019). In the abstract circuit model (which ignores overheads from distillation, routing, and error correction), our construction uses $3 n + 0.002 n \lg n$ logical qubits, $0.3 n^3 + 0.0005 n^3 \lg n$ Toffolis, and $500 n^2 + n^2 \lg n$ measurement depth to factor $n$-bit RSA integers. We quantify the cryptographic implications of our work, both for RSA and for schemes based on the DLP in finite fields.
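As a quick back-of-the-envelope illustration (a reader's evaluation of the formulas quoted above, not part of the paper's own resource estimates), plugging $n = 2048$ into the abstract-circuit-model costs gives roughly six thousand logical qubits and a few billion Toffolis:

```python
import math

def factoring_costs(n):
    """Evaluate the abstract-circuit-model cost formulas quoted above
    for factoring an n-bit RSA integer."""
    lg = math.log2(n)
    qubits = 3 * n + 0.002 * n * lg
    toffolis = 0.3 * n ** 3 + 0.0005 * n ** 3 * lg
    depth = 500 * n ** 2 + n ** 2 * lg
    return qubits, toffolis, depth

q, t, d = factoring_costs(2048)
print(round(q))    # ~6.2 thousand logical qubits
print(f"{t:.2e}")  # ~2.6e9 Toffoli gates
print(f"{d:.2e}")  # ~2.1e9 measurement depth
```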

Hypergraph product codes are a class of constant-rate quantum low-density parity-check (LDPC) codes equipped with a linear-time decoder called small-set-flip (SSF). This decoder displays sub-optimal performance in practice and requires very large error-correcting codes to be effective. In this work, we present new hybrid decoders that combine the belief propagation (BP) algorithm with the SSF decoder. We present the results of numerical simulations when codes are subject to independent bit-flip and phase-flip errors. We provide evidence that the threshold of these codes is roughly 7.5% assuming ideal syndrome extraction, and remains close to 3% in the presence of syndrome noise. This result subsumes and significantly improves upon an earlier work by Grospellier and Krishna (arXiv:1810.03681). The low complexity and high performance of these heuristic decoders suggest that decoding should not be a substantial difficulty when moving from zero-rate surface codes to constant-rate LDPC codes, and give a further hint that such codes are well worth investigating in the context of building large universal quantum computers.

In this work we study the encoding of smooth, differentiable multivariate functions in quantum registers, using quantum computers or tensor-network representations. We show that a large family of distributions can be encoded as low-entanglement states of the quantum register. These states can be efficiently created in a quantum computer, but they can also be efficiently stored, manipulated, and probed using matrix product state (MPS) techniques. Inspired by this idea, we present eight quantum-inspired numerical analysis algorithms, which include Fourier sampling, interpolation, differentiation, and integration of partial differential equations. These algorithms combine classical ideas (finite differences, spectral methods) with the efficient encoding of quantum registers and well-known algorithms such as the Quantum Fourier Transform. $\textit{When these heuristic methods work}$, they provide an exponential speed-up over other classical algorithms, such as Monte Carlo integration, finite differences, and fast Fourier transforms (FFT). But even when they don't, some of these algorithms can be translated back to a quantum computer to implement a similar task.
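The low-entanglement claim is easy to probe numerically. The sketch below (our own minimal illustration using plain NumPy SVDs, not the paper's code) discretizes a Gaussian on $2^n$ grid points, sweeps through a matrix product state decomposition, and records the bond dimension at each cut after discarding tiny singular values; for a smooth function these bonds stay far below the maximal value $2^{n/2}$:

```python
import numpy as np

# Discretize a Gaussian on 2^n grid points; each of the n binary
# digits of the grid index plays the role of one register qubit.
n = 10
x = np.linspace(-5, 5, 2 ** n)
psi = np.exp(-x ** 2 / 2)
psi /= np.linalg.norm(psi)

# Left-to-right MPS sweep: at each cut, SVD the unfolding and keep
# only singular values above a tolerance. The largest kept rank is
# a proxy for the entanglement needed to store the state.
tol = 1e-10
bond_dims = []
mat = psi.reshape(1, -1)
for _ in range(n - 1):
    mat = mat.reshape(mat.shape[0] * 2, -1)
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    chi = int(np.sum(s > tol))
    bond_dims.append(chi)
    mat = s[:chi, None] * vt[:chi]

print(bond_dims)  # all entries well below the maximal bond 2**(n//2) = 32
```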

In this work, we study a recently proposed operational measure of nonlocality by Fonseca and Parisio [Phys. Rev. A 92, 030101(R) (2015)], which describes the probability of violation of local realism under randomly sampled observables, and the strength of such violation as described by resistance to white-noise admixture. While our knowledge concerning these quantities is well established from a theoretical point of view, the experimental counterpart is a considerably harder task and very little has been done in this field. This is caused by the lack of complete knowledge about the facets of the local polytope required for the analysis. In this paper, we propose a simple procedure towards experimentally determining both quantities for $N$-qubit pure states, based on an incomplete set of tight Bell inequalities. We show that the imprecision arising from this approach is of similar magnitude to the potential measurement errors. We also show that even with both a randomly chosen $N$-qubit pure state and randomly chosen measurement bases, a violation of local realism can be detected experimentally almost $100\%$ of the time. Among other applications, our work provides a feasible alternative for the witnessing of genuine multipartite entanglement without aligned reference frames.

A central tenet of theoretical cryptography is the study of the minimal assumptions required to implement a given cryptographic primitive. One such primitive is the one-time memory (OTM), introduced by Goldwasser, Kalai, and Rothblum [CRYPTO 2008], which is a classical functionality modeled after a non-interactive 1-out-of-2 oblivious transfer, and which is complete for one-time classical and quantum programs. It is known that secure OTMs do not exist in the standard model in both the classical and quantum settings. Here, we propose a scheme for using quantum information, together with the assumption of stateless ($i.e.$, reusable) hardware tokens, to build statistically secure OTMs. Via the semidefinite programming-based quantum games framework of Gutoski and Watrous [STOC 2007], we prove security for a malicious receiver making at most $0.114n$ adaptive queries to the token (for $n$ the key size), in the quantum universal composability framework, but leave open the question of security against a polynomial number of queries. Compared to alternative schemes derived from the literature on quantum money, our scheme is technologically simple since it is of the "prepare-and-measure" type. We also give two impossibility results showing that certain assumptions in our scheme cannot be relaxed.

In this paper we discuss Grover Adaptive Search (GAS) for Constrained Polynomial Binary Optimization (CPBO) problems, and in particular, Quadratic Unconstrained Binary Optimization (QUBO) problems as a special case. GAS can provide a quadratic speed-up for combinatorial optimization problems compared to brute-force search. However, this requires the development of efficient oracles to represent problems and flag states that satisfy certain search criteria. In general, this can be achieved using quantum arithmetic; however, this is expensive in terms of Toffoli gates as well as required ancilla qubits, which can be prohibitive in the near term. Within this work, we develop a way to construct efficient oracles to solve CPBO problems using GAS algorithms. We demonstrate this approach and the potential speed-up for the portfolio optimization problem, i.e., a QUBO, using simulation and experimental results obtained on real quantum hardware. However, our approach applies to higher-degree polynomial objective functions as well as constrained optimization problems.
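To make the adaptive-threshold structure of GAS concrete, here is a classical mock-up of its outer loop (our own illustration, not the paper's construction): the quantum Grover search for a state beating the current threshold is replaced by exhaustive classical sampling, which is precisely the step the quantum oracle would accelerate quadratically.

```python
import random

def gas_outer_loop(f, n_bits, max_iters=100, seed=0):
    """Sketch of the Grover Adaptive Search outer loop.

    The Grover search over {x : f(x) < threshold} is mocked by
    classical enumeration plus uniform sampling; a quantum
    implementation would find such an x with quadratically fewer
    oracle queries."""
    rng = random.Random(seed)
    x_best = rng.randrange(2 ** n_bits)
    y_best = f(x_best)
    for _ in range(max_iters):
        # Mocked quantum step: find any x with f(x) < y_best.
        candidates = [x for x in range(2 ** n_bits) if f(x) < y_best]
        if not candidates:
            break  # y_best is the global minimum
        x_best = rng.choice(candidates)
        y_best = f(x_best)
    return x_best, y_best

# Toy QUBO over two bits: f(x1, x2) = x1 + 2*x2 - 3*x1*x2.
def qubo(x):
    x1, x2 = x & 1, (x >> 1) & 1
    return x1 + 2 * x2 - 3 * x1 * x2

x, y = gas_outer_loop(qubo, 2)
print(y)  # 0, the minimum (attained at both x = 0 and x = 3)
```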

We present a quantum interior-point method (IPM) for second-order cone programming (SOCP) that runs in time $\widetilde{O} \left( n\sqrt{r} \frac{\zeta \kappa}{\delta^2} \log \left(1/\epsilon\right) \right)$, where $r$ is the rank and $n$ the dimension of the SOCP, $\delta$ bounds the distance of intermediate solutions from the cone boundary, $\zeta$ is a parameter upper bounded by $\sqrt{n}$, and $\kappa$ is an upper bound on the condition number of matrices arising in the classical IPM for SOCP. The algorithm takes as its input a suitable quantum description of an arbitrary SOCP and outputs a classical description of a $\delta$-approximate $\epsilon$-optimal solution of the given problem. Furthermore, we perform numerical simulations to determine the values of the aforementioned parameters when solving the SOCP up to a fixed precision $\epsilon$. We present experimental evidence that in this case our quantum algorithm exhibits a polynomial speedup over the best classical algorithms for solving general SOCPs, which run in time $O(n^{\omega+0.5})$ (here, $\omega$ is the matrix multiplication exponent, with a value of roughly $2.37$ in theory, and up to $3$ in practice). For the case of random SVM (support vector machine) instances of size $O(n)$, the quantum algorithm scales as $O(n^k)$, where the exponent $k$ is estimated to be $2.59$ using a least-squares power law. On the same family of random instances, the estimated scaling exponent for an external SOCP solver is $3.31$, while that for a state-of-the-art SVM solver is $3.11$.
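The scaling exponents quoted above come from least-squares power-law fits, a procedure that is simple to reproduce: fit $t \approx c\,n^k$ by ordinary least squares in log-log space. The sketch below (our illustration; the timing data are synthetic with a planted exponent, not the paper's measurements) recovers the planted exponent:

```python
import math

def fit_power_law(ns, ts):
    """Least-squares fit of t ~ c * n^k in log-log space; returns (c, k)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in ts]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    k = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - k * mx)
    return c, k

# Synthetic timings generated with exponent 2.59 (illustrative only).
ns = [100, 200, 400, 800]
ts = [2.0e-7 * n ** 2.59 for n in ns]
c, k = fit_power_law(ns, ts)
print(round(k, 2))  # recovers the planted exponent 2.59
```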

We propose an efficient quantum algorithm for simulating the dynamics of general Hamiltonian systems. Our technique is based on a power series expansion of the time-evolution operator in its off-diagonal terms. The expansion decouples the dynamics due to the diagonal component of the Hamiltonian from the dynamics generated by its off-diagonal part, which we encode using the linear combination of unitaries technique. Our method has an optimal dependence on the desired precision and, as we illustrate, generally requires considerably fewer resources than the current state of the art. We provide an analysis of resource costs for several sample models.

We present an in-depth study of the problem of multiple-shot discrimination of von Neumann measurements in finite-dimensional Hilbert spaces. Specifically, we consider two scenarios: minimum-error and unambiguous discrimination. In the case of minimum-error discrimination, we focus on discrimination of measurements with the assistance of entanglement. We provide an alternative proof of the fact that all pairs of distinct von Neumann measurements can be distinguished perfectly (i.e., with unit success probability) using only a finite number of queries. Moreover, we analytically find the minimal number of queries needed for perfect discrimination. We also show that in this scenario querying the measurements $\textit{in parallel}$ gives the optimal strategy, and hence any possible adaptive methods do not offer any advantage over the parallel scheme. In the unambiguous discrimination scenario, we give general expressions for the optimal discrimination probabilities with and without the assistance of entanglement. Finally, we show that typical pairs of Haar-random von Neumann measurements can be perfectly distinguished with only two queries.

We consider the problem of certifying arbitrary ensembles of pure states and projective measurements solely from the experimental statistics in the prepare-and-measure scenario, assuming an upper bound on the dimension of the Hilbert space. To this aim, we propose a universal and intuitive scheme based on establishing perfect correlations between target states and suitably chosen projective measurements. The method works in all finite dimensions and allows for robust certification of the overlaps between arbitrary preparation states and between the corresponding measurement operators. Finally, we prove that for qubits, our technique can be used to robustly self-test arbitrary configurations of pure quantum states and projective measurements. These results pave the way towards the practical application of the prepare-and-measure paradigm to certification of quantum devices.

Giving convincing experimental evidence of quantum supremacy over classical simulations is a challenging goal. Noise is considered to be the main obstacle to such a demonstration, hence it is urgent to understand its effect. Recently found classical algorithms can efficiently approximate, to any small error, the output of boson sampling with finite-amplitude noise. In this work it is shown analytically and confirmed by numerical simulations that one can efficiently distinguish the output distribution of such a noisy boson sampling from approximations accounting for low-order quantum multiboson interferences, which include the aforementioned classical algorithms. The number of samples required to tell apart the quantum and classical output distributions is strongly affected by a previously unexplored parameter: the density of bosons, i.e., the ratio of the total number of interfering bosons to the number of input ports of the interferometer. Such critical dependence is strikingly reminiscent of the quantum-to-classical transition in systems of identical particles, which sets in when the system size scales up while the density of particles vanishes.

Parametric quantum circuits play a crucial role in the performance of many variational quantum algorithms. To successfully implement such algorithms, one must design efficient quantum circuits that sufficiently approximate the solution space while maintaining a low parameter count and circuit depth. In this paper, we develop a method to analyze the dimensional expressivity of parametric quantum circuits. Our technique allows for identifying superfluous parameters in the circuit layout and for obtaining a maximally expressive ansatz with a minimum number of parameters. Using a hybrid quantum-classical approach, we show how to efficiently implement the expressivity analysis using quantum hardware, and we provide a proof-of-principle demonstration of this procedure on IBM's quantum hardware. We also discuss the effect of symmetries and demonstrate how to incorporate or remove symmetries from the parametrized ansatz.

Translations between the quantum circuit model and the measurement-based one-way model are useful for verification and optimisation of quantum computations. They make crucial use of a property known as gflow. While gflow is defined for one-way computations allowing measurements in three different planes of the Bloch sphere, most research so far has focused on computations containing only measurements in the XY-plane. Here, we give the first circuit-extraction algorithm to work for one-way computations containing measurements in all three planes and having gflow. The algorithm is efficient and the resulting circuits do not contain ancillae. One-way computations are represented using the ZX-calculus, hence the algorithm also represents the most general known procedure for extracting circuits from ZX-diagrams. In developing this algorithm, we generalise several concepts and results previously known for computations containing only XY-plane measurements. We bring together several known rewrite rules for measurement patterns and formalise them in a unified notation using the ZX-calculus. These rules are used to simplify measurement patterns by reducing the number of qubits while preserving both the semantics and the existence of gflow. The results can be applied to circuit optimisation by translating circuits to patterns and back again.

In this paper we consider deterministic nonlinear time evolutions satisfying the so-called convex quasi-linearity condition. Such evolutions preserve the equivalence of ensembles and are therefore free from problems with signaling. We show that if a family of linear non-trace-preserving maps satisfies the semigroup property, then the generated family of convex quasi-linear operations also possesses the semigroup property. Next we generalize the Gorini-Kossakowski-Sudarshan-Lindblad-type equation for the considered evolution. As examples we discuss the general qubit evolution in our model as well as an extension of the Jaynes-Cummings model. We apply our formalism to the spin density matrix of a charged particle moving in an electromagnetic field, as well as to the flavor evolution of solar neutrinos.
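For orientation, a standard linear GKSL (Lindblad) evolution can be integrated in a few lines. The sketch below is a minimal illustration of the ordinary linear case for a qubit under amplitude damping, using a simple Euler discretization; it is not the paper's convex quasi-linear model, and all parameter values are illustrative.

```python
import numpy as np

# Lowering operator |0><1| and Hamiltonian sigma_z / 2 for a single qubit.
sm = np.array([[0, 1], [0, 0]], dtype=complex)
H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
gamma, dt, steps = 1.0, 1e-3, 5000  # illustrative damping rate and time grid

def gksl_step(rho):
    # One Euler step of drho/dt = -i[H, rho] + gamma (L rho L+ - {L+L, rho}/2).
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (sm @ rho @ sm.conj().T
                    - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    return rho + dt * (comm + diss)

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state
for _ in range(steps):
    rho = gksl_step(rho)
# After t = 5 / gamma the population has decayed almost entirely to |0>.
```

Both the commutator and the dissipator are traceless, so the Euler scheme preserves the trace of the state exactly at every step.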

We define the type-independent resource theory of local operations and shared entanglement (LOSE). This allows us to formally quantify postquantumness in common-cause scenarios such as the Bell scenario. Any nonsignaling bipartite quantum channel which cannot be generated by LOSE operations requires a $\textit{postquantum common cause}$ to generate, and constitutes a valuable resource. Our framework allows LOSE operations that arbitrarily transform between different types of resources, which in turn allows us to undertake a systematic study of the different manifestations of postquantum common causes. Only three of these have been previously recognized, namely postquantum correlations, postquantum steering, and non-localizable channels, all of which are subsumed as special cases of resources in our framework. Finally, we prove several fundamental results regarding how the type of a resource determines what conversions into other resources are possible, and how it constrains the resource's ability to provide an advantage in distributed tasks such as nonlocal games, semiquantum games, steering games, etc.

Self-testing protocols are methods to determine the presence of shared entangled states in a device-independent scenario, where no assumptions are made on the measurements involved in the protocol. A particular type of self-testing protocol, called parallel self-testing, can certify the presence of many copies of a state; however, such protocols typically suffer from requiring a number of measurements that increases with the number of copies one aims to certify. Here we propose a procedure to transform single-copy self-testing protocols into a procedure that certifies the tensor product of an arbitrary number of (not necessarily equal) quantum states, without increasing the number of parties or measurement choices. Moreover, we prove that self-testing protocols that certify a state and rank-one measurements can always be parallelized to certify many copies of the state. Our results suggest a method to achieve device-independent unbounded randomness expansion with high-dimensional quantum states.

Here we study the comparative power of classical and quantum learners for generative modelling within the Probably Approximately Correct (PAC) framework. More specifically we consider the following task: Given samples from some unknown discrete probability distribution, output with high probability an efficient algorithm for generating new samples from a good approximation of the original distribution. Our primary result is the explicit construction of a class of discrete probability distributions which, under the decisional Diffie-Hellman assumption, is provably not efficiently PAC learnable by a classical generative modelling algorithm, but for which we construct an efficient quantum learner. This class of distributions therefore provides a concrete example of a generative modelling problem for which quantum learners exhibit a provable advantage over classical learning algorithms. In addition, we discuss techniques for proving classical generative modelling hardness results, as well as the relationship between the PAC learnability of Boolean functions and the PAC learnability of discrete probability distributions.

Today's most widely used method of encoding quantum information in optical qubits is the dual-rail basis, often carried out through the polarisation of a single photon. On the other hand, many stationary carriers of quantum information – such as atoms – couple to light via the single-rail encoding in which the qubit is encoded in the number of photons. As such, interconversion between the two encodings is paramount in order to achieve cohesive quantum networks. In this paper, we demonstrate this by generating an entangled resource between the two encodings and using it to teleport a dual-rail qubit onto its single-rail counterpart. This work completes the set of tools necessary for the interconversion between the three primary encodings of the qubit in the optical field: single-rail, dual-rail and continuous-variable.

Quantum computing systems need to be benchmarked in terms of practical tasks they would be expected to do. Here, we propose three "application-motivated" circuit classes for benchmarking: deep (relevant for state preparation in the variational quantum eigensolver algorithm), shallow (inspired by IQP-type circuits that might be useful for near-term quantum machine learning), and square (inspired by the quantum volume benchmark). We quantify the performance of a quantum computing system in running circuits from these classes using several figures of merit, all of which require exponential classical computing resources and a polynomial number of classical samples (bitstrings) from the system. We study how performance varies with the compilation strategy used and the device on which the circuit is run. Using systems made available by IBM Quantum, we examine their performance, showing that noise-aware compilation strategies may be beneficial, and that device connectivity and noise levels play a crucial role in the performance of the system according to our benchmarks.

Quantum measurement is a basic tool to manifest intrinsic quantum effects from fundamental tests to quantum information applications. While a measurement is typically performed to gain information on a quantum state, its role in quantum technology is indeed manifold. For instance, quantum measurement is a crucial process element in measurement-based quantum computation. It is also used to detect and correct errors, thereby protecting quantum information in error-correcting frameworks. It is therefore important to fully characterize the roles of quantum measurement encompassing information gain, state disturbance and reversibility, together with their fundamental relations. Numerous efforts have been made to obtain the trade-off between information gain and state disturbance, which becomes a practical basis for secure information processing. However, a complete information balance must also include the reversibility of quantum measurement, which constitutes an integral part of practical quantum information processing. Here we establish all pairs of trade-off relations involving information gain, disturbance, and reversibility, and crucially the one among all of them together. By doing so, we show that reversibility plays a vital role in completing the information balance. Remarkably, our result can be interpreted as an information-conservation law of quantum measurement in a nontrivial form. We completely identify the conditions for optimal measurements that satisfy the conservation for each trade-off relation, with their potential applications. Our work can provide a useful guideline for designing a quantum measurement in accordance with the aims of quantum information processors.

We introduce an approximate description of an $N$-qubit state, which contains sufficient information to estimate the expectation value of any observable to a precision that is upper bounded by the ratio of a suitably-defined seminorm of the observable to the square root of the number of the system's identical preparations $M$, with no explicit dependence on $N$. We describe an operational procedure for constructing the approximate description of the state that requires, besides the quantum state preparation, only single-qubit rotations followed by single-qubit measurements. We show that following this procedure, the cardinality of the resulting description of the state grows as $3MN$. We test the proposed method on Rigetti's quantum processor unit with 12, 16 and 25 qubits for random states and random observables, and find an excellent agreement with the theory, despite experimental errors.
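A single-qubit toy version of this kind of randomized-measurement estimator can be sketched in a few lines. The code below is a hypothetical illustration (the paper's construction covers general $N$-qubit states): it estimates $\langle Z\rangle$ for $|\psi\rangle = \cos t\,|0\rangle + \sin t\,|1\rangle$ from measurements in uniformly random Pauli bases, reweighted shadow-style by the inverse-channel factor 3; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t, M = 0.3, 100_000                                 # state angle, shot count
pauli_means = np.array([np.sin(2 * t), 0.0, np.cos(2 * t)])  # <X>, <Y>, <Z>

bases = rng.integers(0, 3, size=M)                  # 0 = X, 1 = Y, 2 = Z
p_up = 0.5 * (1.0 + pauli_means[bases])             # Born rule for outcome +1
outcomes = np.where(rng.random(M) < p_up, 1, -1)    # simulated shot results
# Shadow-style estimator for <Z>: weight 3 when the basis matches, 0 otherwise;
# the factor 3 inverts the depolarization caused by random basis choice.
estimate = np.where(bases == 2, 3 * outcomes, 0).mean()
```

The estimator is unbiased: the basis matches with probability 1/3, so the mean of the weighted outcomes equals $\langle Z\rangle$ up to statistical fluctuations of order $1/\sqrt{M}$.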

We consider the task of breaking down a quantum computation given as an isometry into C-NOTs and single-qubit gates, while keeping the number of C-NOT gates small. Although several decompositions are known for general isometries, here we focus on a method based on Householder reflections that adapts well in the case of sparse isometries. We show how to use this method to decompose an arbitrary isometry before illustrating that the method can lead to significant improvements in the case of sparse isometries. We also discuss the classical complexity of this method and illustrate its effectiveness in the case of sparse state preparation by applying it to randomly chosen sparse states.
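The elementary building block here is the Householder reflection. A minimal real-valued NumPy sketch (assuming `x[0] != 0`; the paper works with complex isometries) constructs the reflection $H = I - 2vv^{T}/(v^{T}v)$ that maps a given state vector onto $\pm e_1$:

```python
import numpy as np

def householder(x):
    # Reflection mapping x onto -sign(x[0]) * ||x|| * e_1 (assumes x[0] != 0).
    v = x.astype(float).copy()
    v[0] += np.sign(x[0]) * np.linalg.norm(x)  # sign chosen for stability
    v /= np.linalg.norm(v)
    return np.eye(len(x)) - 2.0 * np.outer(v, v)

x = np.array([3.0, 4.0, 0.0, 0.0])  # a sparse example state, norm 5
H = householder(x)
y = H @ x                           # -> (-5, 0, 0, 0)
```

Because the reflection only mixes amplitudes along `v`, sparse targets need a reflection supported on few coordinates, which is the structural feature the paper's decomposition exploits.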

Passivity is a fundamental concept that constitutes a necessary condition for any quantum system to attain thermodynamic equilibrium, and for a notion of temperature to emerge. While extensive work has been done that exploits this, the transition from passivity at a single-shot level to the completely passive Gibbs state is technically clear but lacks a good over-arching intuition. Here, we reformulate passivity for quantum systems in purely geometric terms. This description makes the emergence of the Gibbs state from passive states entirely transparent. Beyond clarifying existing results, it also provides novel analysis for non-equilibrium quantum systems. We show that, to every passive state, one can associate a simple convex shape in a $2$-dimensional plane, and that the area of this shape measures the degree to which the system deviates from the manifold of equilibrium states. This provides a novel geometric measure of athermality with relations to both ergotropy and $\beta$-athermality.
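For reference, the single-shot passivity criterion that this geometric picture builds on can be checked directly. The sketch below encodes the textbook definition (not the paper's geometric construction): a state diagonal in the energy eigenbasis is passive iff its populations are non-increasing with increasing energy, and Gibbs states satisfy this for any positive temperature.

```python
import numpy as np

def is_passive(energies, populations):
    # Passive iff populations, ordered by increasing energy, never increase.
    order = np.argsort(energies)
    p = np.asarray(populations)[order]
    return bool(np.all(np.diff(p) <= 1e-12))

beta = 1.0
E = np.array([0.0, 1.0, 2.0])        # illustrative three-level spectrum
gibbs = np.exp(-beta * E)
gibbs /= gibbs.sum()                  # Gibbs states are (completely) passive
```

Reversing the Gibbs populations puts more weight on higher energies, producing a non-passive state from which work can be extracted unitarily.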

Tensor networks represent the state-of-the-art in computational methods across many disciplines, including the classical simulation of quantum many-body systems and quantum circuits. Several applications of current interest give rise to tensor networks with irregular geometries. Finding the best possible contraction path for such networks is a central problem, with an exponential effect on computation time and memory footprint. In this work, we implement new randomized protocols that find very high quality contraction paths for arbitrary and large tensor networks. We test our methods on a variety of benchmarks, including the random quantum circuit instances recently implemented on Google quantum chips. We find that the paths obtained can be very close to optimal, and often many orders of magnitude better than the most established approaches. As different underlying geometries suit different methods, we also introduce a hyper-optimization approach, where both the method applied and its algorithmic parameters are tuned during the path finding. The increase in quality of contraction schemes found has significant practical implications for the simulation of quantum many-body systems and particularly for the benchmarking of new quantum chips. Concretely, we estimate a speed-up of over 10,000$\times$ compared to the original expectation for the classical simulation of the Sycamore `supremacy' circuits.
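The underlying problem can be illustrated with NumPy's built-in path finder, `np.einsum_path`, which is far simpler than the hyper-optimized protocols developed here: the order in which a tensor network is contracted changes the cost dramatically, while the result is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 40
a, b = rng.random((d, d)), rng.random((d, d))
v = rng.random(d)

# Matrix-matrix-vector chain: contracting (a @ b) first costs O(d^3),
# while (b @ v) first costs only O(d^2).
expr = "ij,jk,k->i"
path, report = np.einsum_path(expr, a, b, v, optimize="greedy")
result = np.einsum(expr, a, b, v, optimize=path)  # reuse the found path
direct = a @ (b @ v)                              # cheap right-to-left order
```

Here `report` is a human-readable summary comparing the naive and optimized FLOP counts; for large irregular networks this gap is what makes sophisticated path finding worthwhile.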

The framework of quantum invariants is an elegant generalization of adiabatic quantum control to control fields that do not need to change slowly. Due to the unavailability of invariants for systems with more than one spatial dimension, the benefits of this framework have not yet been exploited in multi-dimensional systems. We construct a multi-dimensional Gaussian quantum invariant that permits the design of time-dependent potentials that let the ground state of an initial potential evolve towards the ground state of a final potential. The scope of this framework is demonstrated with the task of shuttling an ion around a corner which is a paradigmatic control problem in achieving scalability of trapped ion quantum information technology.

In stochastic thermodynamics, work is a random variable whose average is bounded by the change in the free energy of the system. In most treatments, however, the work reservoir that absorbs this change is either tacitly assumed or modelled using unphysical systems with unbounded Hamiltonians (i.e. the ideal weight). In this work we describe the consequences of introducing the ground state of the battery, and hence of breaking its translational symmetry. The most striking consequence of this shift is that the Jarzynski identity is replaced by a family of inequalities. Using these inequalities we obtain corrections to the second law of thermodynamics which vanish exponentially with the distance of the initial state of the battery to the bottom of its spectrum. Finally, we study an exemplary thermal operation which realizes approximate Landauer erasure and demonstrate the consequences which arise when the ground state of the battery is explicitly introduced. In particular, we show that occupation of the vacuum state of any physical battery sets a lower bound on fluctuations of work, while batteries without a vacuum state allow for fluctuation-free erasure.
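The Jarzynski identity $\langle e^{-\beta W}\rangle = e^{-\beta \Delta F}$, which the bounded-battery setting replaces by inequalities, can be verified numerically in the standard unbounded setting. A minimal sketch for an instantaneous stiffness quench $k_1 \to k_2$ of a classical harmonic oscillator (parameters are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, k1, k2 = 1.0, 1.0, 2.0
n = 200_000

# Sample initial positions from the Boltzmann distribution of H1 = k1 x^2 / 2.
x = rng.normal(scale=np.sqrt(1.0 / (beta * k1)), size=n)

# For an instantaneous quench the work is W = H2(x) - H1(x).
W = 0.5 * (k2 - k1) * x**2

lhs = np.mean(np.exp(-beta * W))          # <exp(-beta W)> from sampling
dF = 0.5 / beta * np.log(k2 / k1)         # exact free-energy change
rhs = np.exp(-beta * dF)
print(lhs, rhs)  # agree up to Monte Carlo sampling error
```

Here the exponential average over work fluctuations exactly recovers $e^{-\beta \Delta F}$; the paper's point is that an explicit battery ground state turns this equality into a one-sided bound.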
