Neural networks have emerged as a powerful way to approach many practical problems in quantum physics. In this work, we illustrate the power of deep learning to predict the dynamics of a quantum many-body system, where the training is $\textit{based purely on monitoring expectation values of observables under random driving}$. The trained recurrent network is able to produce accurate predictions for driving trajectories entirely different from those observed during training. As a proof of principle, here we train the network on numerical data generated from spin models, showing that it can learn the dynamics of observables of interest without needing information about the full quantum state. This allows our approach eventually to be applied to actual experimental data generated from a quantum many-body system that might be open, noisy, or disordered, without any need for a detailed understanding of the system. This scheme provides considerable speedup for rapid explorations and pulse optimization. Remarkably, we show the network is able to extrapolate the dynamics to times longer than those it has been trained on, as well as to the infinite-system-size limit.

We describe a general approach to modeling rational decision-making agents who adopt either quantum or classical mechanics based on the Quantum Bayesian (QBist) approach to quantum theory. With the additional ingredient of a scheme by which the properties of one agent may influence another, we arrive at a flexible framework for treating multiple interacting quantum and classical Bayesian agents. We present simulations in several settings to illustrate our construction: quantum and classical agents receiving signals from an exogenous source, two interacting classical agents, two interacting quantum agents, and interactions between classical and quantum agents. A consistent treatment of multiple interacting users of quantum theory may allow us to properly interpret existing multi-agent protocols and could suggest new approaches in other areas such as quantum algorithm design.

We identify time-optimal laser pulses to implement the controlled-Z gate and its three-qubit generalization, the C$_2$Z gate, for Rydberg atoms in the blockade regime. Pulses are optimized using a combination of numerical and semi-analytical quantum optimal control techniques that result in smooth Ansätze with just a few variational parameters. For the CZ gate, the time-optimal implementation corresponds to a global laser pulse that does not require single-site addressability of the atoms, simplifying experimental implementation of the gate. We employ quantum optimal control techniques to mitigate errors arising due to the finite lifetime of Rydberg states and finite blockade strengths, while several other types of errors affecting the gates are directly mitigated by the short gate duration. For the considered error sources, we achieve theoretical gate fidelities compatible with error correction using reasonable experimental parameters for CZ and C$_2$Z gates.

Quantum low-density parity-check (LDPC) codes are an important class of quantum error correcting codes. In such codes, each qubit only affects a constant number of syndrome bits, and each syndrome bit only relies on some constant number of qubits. Constructing quantum LDPC codes is challenging. It is an open problem to understand if there exist good quantum LDPC codes, i.e., codes with constant rate and relative distance. Furthermore, techniques to perform fault-tolerant gates are poorly understood. We present a unified way to address these problems. Our main results are a) a bound on the distance, b) a bound on the code dimension and c) limitations on certain fault-tolerant gates that can be applied to quantum LDPC codes. All three of these bounds are cast as a function of the graph separator of the connectivity graph representation of the quantum code. We find that unless the connectivity graph contains an expander, the code is severely limited. This implies a necessary, but not sufficient, condition to construct good codes. This is the first bound that studies the limitations of quantum LDPC codes that does not rely on locality. As an application, we present novel bounds on quantum LDPC codes associated with local graphs in $D$-dimensional hyperbolic space.

In this work we propose a novel numerical approach to decompose general quantum programs in terms of single- and two-qubit quantum gates with a $CNOT$ gate count very close to the current theoretical lower bounds. In particular, it turns out that $15$ and $63$ $CNOT$ gates are sufficient to decompose a general $3$- and $4$-qubit unitary, respectively, with high numerical accuracy. Our approach is based on a sequential optimization of parameters related to the single-qubit rotation gates involved in a pre-designed quantum circuit used for the decomposition. In addition, the algorithm can be adapted to sparse inter-qubit connectivity architectures provided by current mid-scale quantum computers, needing only a few additional $CNOT$ gates to be implemented in the resulting quantum circuits.
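
For context, the lower bounds referred to here can be reproduced from the standard Shende-Bullock-Markov counting argument, which gives $\lceil (4^n - 3n - 1)/4 \rceil$ $CNOT$s for a general $n$-qubit unitary. A minimal sketch (the formula is a well-known result, not derived in this abstract):

```python
from math import ceil

def cnot_lower_bound(n: int) -> int:
    # Shende-Bullock-Markov lower bound on the CNOT count needed
    # to implement a generic n-qubit unitary: ceil((4^n - 3n - 1)/4).
    return ceil((4**n - 3*n - 1) / 4)

print(cnot_lower_bound(3))  # 14
print(cnot_lower_bound(4))  # 61
```

This gives 14 and 61 for $n = 3, 4$, so the reported counts of 15 and 63 sit within one or two $CNOT$s of the bound.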

We introduce a new open-source software library $Jet$, which uses task-based parallelism to obtain speed-ups in classical tensor-network simulations of quantum circuits. These speed-ups result from i) the increased parallelism introduced by mapping the tensor-network simulation to a task-based framework, ii) a novel method of reusing shared work between tensor-network contraction tasks, and iii) the concurrent contraction of tensor networks on all available hardware. We demonstrate the advantages of our method by benchmarking our code on several Sycamore-53 and Gaussian boson sampling (GBS) supremacy circuits against other simulators. We also provide and compare theoretical performance estimates for tensor-network simulations of Sycamore-53 and GBS supremacy circuits for the first time.
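
The pairwise contractions that a scheduler like Jet parallelizes can be illustrated, independently of Jet's API (which the abstract does not detail), with a plain NumPy `einsum` contraction of a small circuit tensor network:

```python
import numpy as np

# A 2-qubit circuit (H on qubit 0, then CNOT) written as a tensor
# network and contracted with einsum; a task-based simulator schedules
# many such pairwise contractions concurrently.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float).reshape(2, 2, 2, 2)
psi = np.zeros((2, 2)); psi[0, 0] = 1.0       # |00>
psi = np.einsum('ia,ab->ib', H, psi)          # H on qubit 0
psi = np.einsum('ijab,ab->ij', CNOT, psi)     # CNOT (control 0, target 1)
print(psi)  # amplitudes of the Bell state (|00> + |11>)/sqrt(2)
```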

We introduce finite-function-encoding (FFE) states which encode arbitrary $d$-valued logic functions, i.e., multivariate functions over the ring of integers modulo $d$, and investigate some of their structural properties. We also point out some differences between polynomial and non-polynomial function encoding states: The former can be associated with graphical objects, that we dub tensor-edge hypergraphs (TEH), which are a generalization of hypergraphs with a tensor attached to each hyperedge encoding the coefficients of the different monomials. To complete the framework, we also introduce a notion of finite-function-encoding Pauli (FP) operators, which correspond to elements of what is known as the generalized symmetric group in mathematics. First, using this machinery, we study the stabilizer group associated to FFE states and observe how qudit hypergraph states introduced in Ref. [1] admit stabilizers of a particularly simple form. Afterwards, we investigate the classification of FFE states under local unitaries (LU), and, after showing the complexity of this problem, we focus on the case of bipartite states and especially on the classification under local FP operations (LFP). We find all LU and LFP classes for two qutrits and two ququarts and study several other special classes, pointing out the relation between maximally entangled FFE states and complex Butson-type Hadamard matrices. Our investigation also showcases the relation between the properties of FFE states, especially their LU classification, and the theory of finite rings over the integers.

The quantum volume test is a full-system benchmark for quantum computers that is sensitive to qubit number, fidelity, connectivity, and other quantities believed to be important in building useful devices. The test was designed to produce a single-number measure of a quantum computer's general capability, but a complete understanding of its limitations and operational meaning is still missing. We explore the quantum volume test to better understand its design aspects, sensitivity to errors, passing criteria, and what passing implies about a quantum computer. We elucidate some transient behaviors the test exhibits for small qubit numbers, including the ideal measurement output distributions and the efficacy of common compiler optimizations. We then present an efficient algorithm for estimating the expected heavy output probability under different error models and compiler optimization options, which predicts performance goals for future systems. Additionally, we explore the original confidence interval construction and show that it underachieves the desired coverage level for single-shot experiments and overachieves for more typical numbers of shots. We propose a new confidence interval construction that reaches the specified coverage for typical numbers of shots and is more efficient in the number of circuits needed to pass the test. We demonstrate these savings with a $QV=2^{10}$ experimental dataset collected from Quantinuum System Model H1-1. Finally, we discuss what the quantum volume test implies about a quantum computer's practical or operational abilities especially in terms of quantum error correction.

Repository of code used in simulation and analysis: https://github.com/CQCL/qvtsim
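
The quantity at the heart of the test, the heavy output probability, has a compact definition; the sketch below is illustrative and is not taken from the qvtsim repository:

```python
import numpy as np

def heavy_output_probability(ideal_probs, counts):
    # Heavy outputs are bitstrings whose *ideal* probability exceeds
    # the median of the ideal output distribution; the quantum volume
    # test asks whether measured shots land on them more than 2/3 of
    # the time, with statistical confidence.
    median = np.median(ideal_probs)
    heavy = {b for b, p in enumerate(ideal_probs) if p > median}
    shots = sum(counts.values())
    return sum(c for b, c in counts.items() if b in heavy) / shots

ideal = [0.40, 0.30, 0.20, 0.10]        # ideal circuit distribution
counts = {0: 50, 1: 30, 2: 12, 3: 8}    # measured shot counts
print(heavy_output_probability(ideal, counts))  # 0.8
```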

Classical simulators play a major role in the development and benchmarking of quantum algorithms, and practically any software framework for quantum computation provides the option of running the algorithms on simulators. However, the development of quantum simulators was substantially separated from the rest of the software frameworks which, instead, focus on usability and compilation. Here, we demonstrate the advantage of co-developing and integrating simulators and compilers by proposing a specialized compiler pass to reduce the simulation time for arbitrary circuits. While the concept is broadly applicable, we present a concrete implementation based on the Intel Quantum Simulator, a high-performance distributed simulator. As part of this work, we extend its implementation with additional functionalities related to the representation of quantum states. The communication overhead is reduced by changing the order in which state amplitudes are stored in the distributed memory, a concept analogous to the distinction between local and global qubits for distributed Schrödinger-type simulators. We then implement a compiler pass to exploit the novel functionalities by introducing special instructions governing data movement as part of the quantum circuit. Those instructions target unique capabilities of simulators and have no analogue in actual quantum devices. To quantify the advantage, we compare the time required to simulate random circuits with and without our optimization. The simulation time is typically halved.

Open-source repository of Intel Quantum Simulator: https://github.com/iqusoft/intel-qs
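
The amplitude reordering alluded to above can be sketched, independently of the Intel Quantum Simulator API, as a permutation of state-vector indices that exchanges the bit positions of two qubits (a stand-in for moving a qubit between "local" and "global" roles; the function name is ours):

```python
def swap_qubits(state, q1, q2):
    # Permute the amplitudes of a state vector so that qubits q1 and
    # q2 exchange their positions in the bit layout of the index.
    # In a distributed simulator this corresponds to the data movement
    # between memory regions owned by different processes.
    new = [0.0] * len(state)
    for i in range(len(state)):
        b1 = (i >> q1) & 1
        b2 = (i >> q2) & 1
        j = i & ~((1 << q1) | (1 << q2))   # clear both bit positions
        j |= (b2 << q1) | (b1 << q2)       # write them back, swapped
        new[j] = state[i]
    return new

# Swapping qubits 0 and 1 exchanges the |01> and |10> amplitudes:
print(swap_qubits([0.0, 1.0, 2.0, 3.0], 0, 1))  # [0.0, 2.0, 1.0, 3.0]
```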

There exist severe limitations on the accuracy of low-temperature thermometry, which poses a major challenge for future quantum-technological applications. Low-temperature sensitivity might be manipulated by tailoring the interactions between probe and sample. Unfortunately, the tunability of these interactions is usually very restricted. Here, we focus on a more practical solution to boost thermometric precision – driving the probe. Specifically, we solve for the limit cycle of a periodically modulated linear probe in an equilibrium sample. We treat the probe-sample interactions $\textit{exactly}$ and hence, our results are valid for arbitrarily low temperatures $T$ and any spectral density. We find that weak near-resonant modulation strongly enhances the signal-to-noise ratio of low-temperature measurements, while causing minimal back action on the sample. Furthermore, we show that near-resonant driving changes the power law that governs thermal sensitivity over a broad range of temperatures, thus `bending' the fundamental precision limits and enabling more sensitive low-temperature thermometry. We then focus on a concrete example – impurity thermometry in an atomic condensate. We demonstrate that periodic driving allows for a sensitivity improvement of several orders of magnitude in sub-nanokelvin temperature estimates drawn from the density profile of the impurity atoms. We thus provide a feasible upgrade that can be easily integrated into low-$T$ thermometry experiments.
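
For reference, the precision limit being `bent' here is the Cramér-Rao bound; in standard notation (assumed rather than quoted from the paper), with $\mathcal{F}(T)$ the quantum Fisher information of the probe state with respect to temperature and $\nu$ the number of independent measurements,

```latex
\delta T \;\geq\; \frac{1}{\sqrt{\nu\,\mathcal{F}(T)}} ,
```

so any driving scheme that increases $\mathcal{F}(T)$ at low $T$ directly tightens the achievable temperature resolution.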

Entanglement is an indispensable quantum resource for quantum information technology. In continuous-variable quantum optics, photon subtraction can increase the entanglement between Gaussian states of light, but for mixed states the extent of this entanglement increase is poorly understood. In this work, we use an entanglement measure based on the Rényi-2 entropy to prove that single-photon subtraction increases bipartite entanglement by no more than $\log 2$. This value coincides with the maximal amount of bipartite entanglement that can be achieved with one photon. The upper bound is valid for all Gaussian input states, regardless of the number of modes and the purity.
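
In standard notation (assumed here rather than quoted from the paper), the entropy underlying the measure and the stated bound read

```latex
S_2(\rho) \;=\; -\log \operatorname{Tr}\rho^2 ,
\qquad
E_2\!\left(\frac{\hat{a}\,\rho\,\hat{a}^\dagger}{\operatorname{Tr}[\hat{a}\,\rho\,\hat{a}^\dagger]}\right)
\;\leq\; E_2(\rho) + \log 2 ,
```

where $\hat{a}$ is the annihilation operator of the subtracted mode and $E_2$ the Rényi-2-based bipartite entanglement measure.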

Simulating molecules using the Variational Quantum Eigensolver method is one of the promising applications for NISQ-era quantum computers. Designing an efficient ansatz to represent the electronic wave function is crucial in such simulations. The standard unitary coupled-cluster with singles and doubles (UCCSD) ansatz tends to have a large number of insignificant terms that do not lower the energy of the system. In this work, we present a unitary selective coupled-cluster method, a way to construct a unitary coupled-cluster ansatz iteratively using a selection procedure with excitations up to fourth order. This approach uses the electronic Hamiltonian matrix elements and the amplitudes for excitations already present in the ansatz to find the important excitations of higher order and to add them to the ansatz. The important feature of the method is that it systematically reduces the energy error with increasing ansatz size for a set of test molecules. The main advantage of the proposed method is that the effort to increase the ansatz does not require any additional measurements on a quantum computer.

Topological phases are characterized by their entanglement properties, as is manifest in a direct relation between entanglement spectra and edge states discovered by Li and Haldane. We propose to leverage the power of synthetic quantum systems for measuring entanglement via the Entanglement Hamiltonian to probe this relationship experimentally. This is made possible by exploiting the quasi-local structure of Entanglement Hamiltonians. The feasibility of this proposal is illustrated for two paradigmatic examples realizable with current technology, an integer quantum Hall state of non-interacting fermions on a 2D lattice and a symmetry protected topological state of interacting fermions on a 1D chain. Our results pave the way towards an experimental identification of topological order in strongly correlated quantum many-body systems.

The average entanglement entropy (EE) of the energy eigenstates in non-vanishing partitions has been recently proposed as a diagnostic of integrability in quantum many-body systems. For it to be a faithful characterization of quantum integrability, it should distinguish quantum systems with a well-defined classical limit in the same way as the unequivocal classical integrability criteria. We examine the proposed diagnostic in the class of collective spin models characterized by permutation symmetry in the spins. The well-known Lipkin-Meshkov-Glick (LMG) model is a paradigmatic integrable system in this class with a well-defined classical limit. Thus, this model is an excellent testbed for examining quantum integrability diagnostics. First, we calculate analytically the average EE of the Dicke basis $\{|j,m\rangle \}_{m=-j}^j$ in any non-vanishing bipartition, and show that in the thermodynamic limit, it converges to $1/2$ of the maximal EE in the corresponding bipartition. Using finite-size scaling, we numerically demonstrate that the aforementioned average EE in the thermodynamic limit is universal for all parameter values of the LMG model. Our analysis illustrates how a value of the average EE far away from the maximal in the thermodynamic limit could be a signature of integrability.
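
The analytic computation starts from the Schmidt decomposition of a Dicke state of $N$ spins with $k$ excitations across an $L$ versus $N-L$ bipartition, whose squared coefficients follow a hypergeometric distribution (a standard result; the function name below is ours):

```python
from math import comb, log

def dicke_entanglement_entropy(N, k, L):
    # Squared Schmidt coefficients of the Dicke state |N, k> across an
    # L vs (N - L) spin bipartition: a hypergeometric distribution,
    # lambda_l = C(L, l) C(N - L, k - l) / C(N, k).
    lam = [comb(L, l) * comb(N - L, k - l) / comb(N, k)
           for l in range(max(0, k - (N - L)), min(L, k) + 1)]
    # Von Neumann entanglement entropy of the reduced state.
    return -sum(p * log(p) for p in lam if p > 0)

# Two spins, one excitation, 1 vs 1 split: a maximally entangled pair.
print(dicke_entanglement_entropy(2, 1, 1))  # log 2 ~ 0.693
```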

We carefully examine critical metrology and present an improved critical quantum metrology protocol which relies on quenching a system exhibiting a superradiant quantum phase transition beyond its critical point. We show that this approach can lead to an exponential increase of the quantum Fisher information in time, compared with existing critical quantum metrology protocols, which rely on quenching close to the critical point and exhibit power-law behaviour. We demonstrate that the Cramér-Rao bound can be saturated in our protocol through the standard homodyne detection scheme. We explicitly show its advantage using the archetypal setting of the Dicke model and explore a quantum gas coupled to a single-mode cavity field as a potential platform. In this case an additional exponential enhancement of the quantum Fisher information can in practice be observed with the number of atoms $N$ in the cavity, even in the absence of $N$-body coupling terms.

Observed quantum correlations are known to determine in certain cases the underlying quantum state and measurements. This phenomenon is known as $\textit{(quantum) self-testing}$. Self-testing constitutes a significant research area with practical and theoretical ramifications for quantum information theory. But since its conception two decades ago by Mayers and Yao, the common way to rigorously formulate self-testing has been in terms of operator-algebraic identities, and this formulation lacks an operational interpretation. In particular, it is unclear how to formulate self-testing in other physical theories, in formulations of quantum theory not referring to operator algebra, or in scenarios causally different from the standard one. In this paper, we explain how to understand quantum self-testing operationally, in terms of causally structured dilations of the input-output channel encoding the correlations. These dilations model side-information which leaks to an environment according to a specific schedule, and we show how self-testing concerns the relative strength between such scheduled leaks of information. As such, the title of our paper has a double meaning: we recast conventional quantum self-testing in terms of information leaks to an environment, and this realises quantum self-testing as a special case within the surroundings of a general operational framework. Our new approach to quantum self-testing not only supplies an operational understanding apt for various generalisations, but also resolves some unexplained aspects of the existing definition, naturally suggests a distance measure suitable for robust self-testing, and points towards self-testing as a modular concept in a larger, cryptographic perspective.

We consider a topological stabilizer code on a honeycomb grid, the "XYZ$^2$" code. The code is inspired by the Kitaev honeycomb model and is a simple realization of a "matching code" discussed by Wootton [J. Phys. A: Math. Theor. 48, 215302 (2015)], with a specific implementation of the boundary. It utilizes weight-six ($XYZXYZ$) plaquette stabilizers and weight-two ($XX$) link stabilizers on a planar hexagonal grid composed of $2d^2$ qubits for code distance $d$, with weight-three stabilizers at the boundary, stabilizing one logical qubit. We study the properties of the code using maximum-likelihood decoding, assuming perfect stabilizer measurements. For pure $X$, $Y$, or $Z$ noise, we can solve for the logical failure rate analytically, giving a threshold of 50%. In contrast to the rotated surface code and the XZZX code, which have code distance $d^2$ only for pure $Y$ noise, here the code distance is $2d^2$ for both pure $Z$ and pure $Y$ noise. Thresholds for noise with finite $Z$ bias are similar to the XZZX code, but with markedly lower sub-threshold logical failure rates. The code possesses distinctive syndrome properties with unidirectional pairs of plaquette defects along the three directions of the triangular lattice for isolated errors, which may be useful for efficient matching-based or other approximate decoding.

Photon losses are intrinsic to any translationally invariant optical imaging system with a non-trivial point spread function, and the relation between the transmission factor and the coherence properties of an imaged object is universal; we give a rigorous proof of this statement based on the principles of quantum mechanics. The fundamental limit on the precision of estimating the separation between two partially coherent sources is then derived. A careful study of the role of photon losses allows us to resolve conflicting claims in previous works. We compute the quantum Fisher information for a generic model of an optical 4f imaging system, and use the preceding considerations to validate the result for a general, translationally invariant imaging apparatus. We prove that the spatial-mode demultiplexing (SPADE) measurement, optimal for non-coherent sources, remains optimal for an arbitrary degree of coherence. Moreover, we show that some approximations, omnipresent in theoretical works on optical imaging, inevitably lead to unphysical, zero-transmission models, resulting in misleading claims regarding fundamental resolution limits.

We describe an optimal procedure, as well as its efficient software implementation, for exact and approximate synthesis of two-qubit unitary operations into any prescribed discrete family of XX-type interactions and local gates. This arises from the analysis and manipulation of certain polyhedral subsets of the space of canonical gates. Using this, we analyze which small sets of XX-type interactions cause the greatest improvement in expected infidelity under experimentally-motivated error models. For the exact circuit synthesis of Haar-randomly selected two-qubit operations, we find an improvement in estimated infidelity by 31.4% when including alongside CX its square- and cube-roots, near to the optimal limit of 36.9% obtained by including all fractional applications of CX.

Entanglement shared among multiple parties presents complex challenges for the characterisation of different types of entanglement. One of the most fundamental insights is the fact that some mixed states can feature entanglement across every possible cut of a multipartite system yet can be produced via a mixture of states separable with respect to different partitions. To distinguish states that genuinely cannot be produced from mixing such partition-separable states, the term $\textit{genuine multipartite entanglement}$ was coined. All these considerations originate in a paradigm where only a single copy of the state is distributed and locally acted upon. In contrast, advances in quantum technologies prompt the question of how this picture changes when multiple copies of the same state become locally accessible. Here we show that multiple copies unlock genuine multipartite entanglement from partially separable states, i.e., mixtures of the partition-separable states, even from undistillable ensembles, and even more than two copies can be required to observe this effect. With these findings, we characterise the notion of genuine multipartite entanglement in the paradigm of multiple copies and conjecture a strict hierarchy of activatable states and an asymptotic collapse of the hierarchy.

Path integrals constitute powerful representations of both quantum and stochastic dynamics. Yet despite many decades of intensive study, there is no consensus on how to formulate them for dynamics in curved space, or how to make them covariant with respect to nonlinear transforms of variables (NTV). In this work, we construct a rigorous and covariant formulation of time-slicing path integrals for dynamics in curved space. We first establish a rigorous criterion for the equivalence of $\textit{time-slice Green's functions}$ (TSGFs) in the continuum limit (Lemma 1). This implies the existence of infinitely many equivalent representations of the time-slicing path integral. We then show that, for any dynamics with a second-order generator, all time-slice actions are equivalent to a Gaussian (Lemma 2). We further construct a continuous family of equivalent path-integral actions parameterized by an interpolation parameter $\alpha \in [0,1]$ (Lemma 3). The action generically contains a term linear in $\Delta \boldsymbol x$, whose concrete form depends on $\alpha$. Finally, we establish the covariance of our path-integral formalism by demonstrating how the action transforms under NTV. The $\alpha = 0$ representation of the time-slice action is particularly convenient because it is Gaussian and transforms as a scalar, as long as $\Delta \boldsymbol x$ transforms according to $\textit{Itô's formula}$.

We introduce a simple construction of boundary conditions for the honeycomb code [1] that uses only pairwise checks and allows parallelogram geometries at the cost of modifying the bulk measurement sequence. We discuss small instances of the code.

In this work, we present number-theoretic and algebraic-geometric techniques for bounding the stabilizer rank of quantum states. First, we refine a number-theoretic theorem of Moulton to exhibit an explicit sequence of product states with exponential stabilizer rank but constant approximate stabilizer rank, and to provide alternate (and simplified) proofs of the best-known asymptotic lower bounds on stabilizer rank and approximate stabilizer rank, up to a log factor. Second, we find the first non-trivial examples of quantum states with multiplicative stabilizer rank under the tensor product. Third, we introduce and study the generic stabilizer rank using algebraic-geometric techniques.

Understanding the dynamics of localized quantum systems embedded in engineered bosonic environments is a central problem in quantum optics and open quantum system theory. We present a formalism for studying few-particle scattering from a localized quantum system interacting with a bosonic bath described by an inhomogeneous wave equation. In particular, we provide exact relationships between the quantum scattering matrix of this interacting system and frequency-domain solutions of the inhomogeneous wave equation, thus providing access to the spatial distribution of the scattered few-particle wave-packet. The formalism developed in this paper paves the way to computationally understanding the impact of structured media on the scattering properties of localized quantum systems embedded in them, without simplifying assumptions on the physics of the structured media.

We propose a simple quantum algorithm for simulating highly oscillatory quantum dynamics, which does not require complicated quantum control logic for handling time-ordering operators. To our knowledge, this is the first quantum algorithm that is both insensitive to the rapid changes of the time-dependent Hamiltonian and exhibits commutator scaling. Our method can be used for efficient Hamiltonian simulation in the interaction picture. In particular, we demonstrate that for the simulation of the Schrödinger equation, our method exhibits superconvergence and achieves a surprising second-order convergence rate, whose proof rests on a careful application of pseudo-differential calculus. Numerical results verify the effectiveness and the superconvergence property of our method.

https://kitpcloud.s3-us-west-2.amazonaws.com/qcomp22/Fang_QComp22_KITP.mp4

Quantum coherence is an essential resource for gaining advantage over classical physics and technology. Recently, it has been proposed that a low-temperature environment can induce quantum coherence of a spin without an external coherent pump. We address the critical question of whether such coherence is extractable by a weak coupling to an output system that dynamically affects back the spin-environment coupling. Describing the entire mechanism, we prove that such extraction is generically possible for output spins (as well as oscillators or fields), and also in a fermionic analogue of the process. We compare the internal spin coherence and the output coherence over temperature and characteristic frequencies. The proposed optimal coherence extraction opens a path for upcoming experimental tests with atomic and solid-state systems.

Evaluating an expectation value of an arbitrary observable $A\in{\mathbb C}^{2^n\times 2^n}$ through naïve Pauli measurements requires a large number of terms to be measured. We approach this issue using a method based on Bell measurement, which we refer to as the extended Bell measurement method. This analytical method quickly assembles the $4^n$ matrix elements into at most $2^{n+1}$ groups for simultaneous measurements in $O(nd)$ time, where $d$ is the number of non-zero elements of $A$. The number of groups is particularly small when $A$ is a band matrix. When the bandwidth of $A$ is $k=O(n^c)$, the number of groups for simultaneous measurement reduces to $O(n^{c+1})$. In addition, when non-zero elements densely fill the band, the variance is $O((n^{c+1}/2^n)\,{\rm tr}(A^2))$, which is small compared with the variances of existing methods. The proposed method requires a few additional gates for each measurement, namely one Hadamard gate, one phase gate and at most $n-1$ CNOT gates. Experimental results on an IBM-Q system show the computational efficiency and scalability of the proposed scheme, compared with existing state-of-the-art approaches. Code is available at https://github.com/ToyotaCRDL/extended-bell-measurements.
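As a rough illustration of the baseline this abstract improves on: a dense observable decomposes into up to $4^n$ Pauli strings, each naively demanding its own measurement setting. The toy decomposition below is a hypothetical sketch, not the paper's method or code.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose(A):
    """Expand a 2^n x 2^n matrix in the n-qubit Pauli basis.
    Returns {pauli string: coefficient}, with c_P = tr(P A) / 2^n,
    keeping only non-negligible coefficients."""
    n = int(np.log2(A.shape[0]))
    coeffs = {}
    for labels in itertools.product("IXYZ", repeat=n):
        P = np.array([[1.0 + 0j]])
        for s in labels:
            P = np.kron(P, PAULIS[s])
        c = np.trace(P @ A) / 2**n
        if abs(c) > 1e-12:
            coeffs["".join(labels)] = c
    return coeffs

# A dense random Hermitian observable on n = 3 qubits generically
# requires all 4^3 = 64 Pauli expectation values.
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
A = (M + M.conj().T) / 2
print(len(pauli_decompose(A)))
```

Grouping these strings into simultaneously measurable sets is exactly the kind of bookkeeping the extended Bell measurement method streamlines.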

We study the correlation clustering problem using the quantum approximate optimization algorithm (QAOA) and qudits, which constitute a natural platform for such non-binary problems. Specifically, we consider a neutral atom quantum computer and propose a full stack approach for correlation clustering, including Hamiltonian formulation of the algorithm, analysis of its performance, identification of a suitable level structure for ${}^{87}{\rm Sr}$ and specific gate design. We show the qudit implementation is superior to the qubit encoding as quantified by the gate count. For single layer QAOA, we also prove (conjecture) a lower bound of $0.6367$ ($0.6699$) for the approximation ratio on 3-regular graphs. Our numerical studies evaluate the algorithm's performance by considering complete and Erdős-Rényi graphs of up to 7 vertices and clusters. We find that in all cases the QAOA surpasses the Swamy bound $0.7666$ for the approximation ratio for QAOA depths $p \geq 2$. Finally, by analysing the effect of errors when solving complete graphs we find that their inclusion severely limits the algorithm's performance.

Quantum coupling between mechanical oscillators and atomic gases, generating entanglement between them, has recently been demonstrated experimentally using their subsequent interaction with light. The next step is to build a hybrid atom-mechanical quantum gate showing bosonic interference effects of single quanta in the atoms and oscillators. We propose an experimental test of Hong-Ou-Mandel interference between a single phononic excitation and a single collective excitation of atoms using the optical connection between them. A single optical pulse is sufficient to build a hybrid quantum-nondemolition gate to observe the bunching of such different quanta. The output atomic-mechanical state exhibits a probability of a hybrid bunching effect that proves its nonclassical aspects. This proposal opens a feasible road to broadly testing such advanced quantum bunching phenomena in a hybrid system with different specific couplings.
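The Hong-Ou-Mandel bunching at the heart of this proposal can be reproduced in a few lines for the textbook photonic case. The following is a generic two-mode Fock-space sketch (assuming an ideal balanced beamsplitter; it is not specific to the atom-mechanical platform of the abstract): two indistinguishable single quanta entering opposite ports exit together, so coincidences vanish.

```python
import numpy as np

def create(dim):
    # Truncated bosonic creation operator: a†|n> = sqrt(n+1)|n+1>
    return np.diag(np.sqrt(np.arange(1, dim)), -1)

dim = 3                                  # Fock levels 0, 1, 2 per mode suffice
ad = np.kron(create(dim), np.eye(dim))   # a† on mode A
bd = np.kron(np.eye(dim), create(dim))   # b† on mode B
vac = np.zeros(dim * dim)
vac[0] = 1.0

# Balanced beamsplitter in the Heisenberg picture:
#   a† -> (a† + b†)/sqrt(2),  b† -> (a† - b†)/sqrt(2)
ad_out = (ad + bd) / np.sqrt(2)
bd_out = (ad - bd) / np.sqrt(2)

# Send one quantum into each port: |1,1> = a† b† |0,0>
psi = ad_out @ bd_out @ vac

p_coincidence = abs(psi[1 * dim + 1]) ** 2             # weight on |1,1>
p_bunched = abs(psi[2 * dim]) ** 2 + abs(psi[2]) ** 2  # weight on |2,0>, |0,2>
print(p_coincidence, p_bunched)  # coincidences vanish; all weight bunches
```

The output is $(|2,0\rangle - |0,2\rangle)/\sqrt{2}$, the ideal HOM dip; in the proposal the two modes are a phonon mode and an atomic collective mode rather than two photons.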

Fluctuation theorems allow one to make generalised statements about the behaviour of thermodynamic quantities in systems that are driven far from thermal equilibrium. In this article we use Crooks' fluctuation theorem to understand the entropy production of a continuously measured, zero temperature quantum system; namely an optical cavity measured via homodyne detection. At zero temperature, if one uses the classical definition of inverse temperature $\beta$, then the entropy production becomes divergent. Our analysis shows that the entropy production can be well defined at zero temperature by considering the entropy produced in the measurement record leading to an effective inverse temperature $\beta_{\rm eff}$ which does not diverge. We link this result to the Cramér-Rao inequality and show that the product of the Fisher information of the work distribution with the entropy production is bounded below by half of the square of the effective inverse temperature $\beta_{\rm eff}$. This inequality indicates that there is a minimal amount of entropy production that is paid to acquire information about the work done to a quantum system driven far from equilibrium.
