As increasingly impressive quantum information processors are realized in laboratories around the world, robust and reliable characterization of these devices is now more urgent than ever. These diagnostics can take many forms, but one of the most popular categories is $\textit{tomography}$, where an underlying parameterized model is proposed for a device and inferred by experiments. Here, we introduce and implement efficient operational tomography, which uses experimental observables as these model parameters. This addresses a problem of ambiguity in representation that arises in current tomographic approaches (the $\textit{gauge problem}$). Solving the gauge problem enables us to efficiently implement operational tomography computationally in a Bayesian framework, and hence gives us a natural way to include prior information and discuss uncertainty in fit parameters. We demonstrate this new tomography in a variety of different experimentally relevant scenarios, including standard process tomography, Ramsey interferometry, randomized benchmarking, and gate set tomography.

Any measurement is intended to provide $information$ on a system, namely knowledge about its state. However, we learn from quantum theory that it is generally impossible to extract information without disturbing the state of the system or its correlations with other systems. In this paper we address the issue of the interplay between information and disturbance for a general operational probabilistic theory. The traditional notion of disturbance considers the fate of the system state after the measurement. However, the fact that the system state is left untouched ensures that correlations are also preserved only in the presence of local discriminability. Here we provide the definition of disturbance that is appropriate for a general theory. Moreover, since in a theory without causality information can also be gathered on the effect, we generalise the notion of no-information test. We then prove an equivalent condition for no-information without disturbance---$\textit{atomicity of the identity}$---namely the impossibility of achieving the trivial evolution---the $identity$---as the coarse-graining of a set of nontrivial ones. We prove a general theorem showing that information that can be retrieved without disturbance corresponds to perfectly repeatable and discriminating tests. Based on this, we prove a structure theorem for operational probabilistic theories, showing that the set of states of any system decomposes as a direct sum of perfectly discriminable sets, and that such a decomposition is preserved under system composition. As a consequence, a theory is such that any information can be extracted without disturbance only if all its systems are classical. Finally, we show via concrete examples that no-information without disturbance is independent of both local discriminability and purification.

We propose a very large family of benchmarks for probing the performance of quantum computers. We call them $\textit{volumetric benchmarks}$ (VBs) because they generalize IBM's benchmark for measuring quantum volume [1]. The quantum volume benchmark defines a family of $\textit{square}$ circuits whose depth $d$ and width $w$ are the same. A volumetric benchmark defines a family of $\textit{rectangular}$ quantum circuits, for which $d$ and $w$ are uncoupled to allow the study of time/space performance trade-offs. Each VB defines a mapping from circuit shapes — $(w,d)$ pairs — to test suites $\mathcal{C}(w,d)$. A test suite is an ensemble of test circuits that share a common structure. The test suite $\mathcal{C}$ for a given circuit shape may be a single circuit $C$, a specific list of circuits $\{C_1\ldots C_N\}$ that must all be run, or a large set of possible circuits equipped with a distribution $Pr(C)$. The circuits in a given VB share a structure, which is limited only by designers' creativity. We list some known benchmarks, and other circuit families, that fit into the VB framework: several families of random circuits, periodic circuits, and algorithm-inspired circuits. The last ingredient defining a benchmark is a success criterion that defines when a processor is judged to have ``passed'' a given test circuit. We discuss several options. Benchmark data can be analyzed in many ways to extract many properties, but we propose a simple, universal graphical summary of results that illustrates the Pareto frontier of the $d$ vs $w$ trade-off for the processor being benchmarked.
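In the simplest pass/fail reading of benchmark results, the proposed graphical summary reduces to a Pareto frontier over circuit shapes. A minimal sketch of extracting that frontier (the function name and the encoding of results as a set of passed $(w,d)$ pairs are our illustrative assumptions, not the paper's):

```python
def pareto_frontier(passed):
    """Given the set of (width, depth) shapes a processor passed,
    return the shapes not dominated by another passed shape that is
    at least as wide AND at least as deep."""
    return {
        (w, d)
        for (w, d) in passed
        if not any(
            (w2, d2) != (w, d) and w2 >= w and d2 >= d
            for (w2, d2) in passed
        )
    }

# e.g. a processor that passes small square and skinny rectangular shapes
frontier = pareto_frontier({(1, 1), (2, 2), (1, 3), (3, 1)})
```

Here $(1,1)$ is dominated by $(2,2)$ and drops off the frontier, while the rectangular shapes $(1,3)$ and $(3,1)$ survive, illustrating the time/space trade-off the summary is meant to expose.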

We present a quantum eigenstate filtering algorithm based on quantum signal processing (QSP) and minimax polynomials. The algorithm allows us to efficiently prepare a target eigenstate of a given Hamiltonian, if we have access to an initial state with non-trivial overlap with the target eigenstate and have a reasonable lower bound for the spectral gap. We apply this algorithm to the quantum linear system problem (QLSP), and present two algorithms based on adiabatic quantum computation (AQC) and the quantum Zeno effect, respectively. Both algorithms prepare the final solution as a pure state, and achieve the near-optimal $\widetilde{\mathcal{O}}(d\kappa\log(1/\epsilon))$ query complexity for a $d$-sparse matrix, where $\kappa$ is the condition number, and $\epsilon$ is the desired precision. Neither algorithm uses phase estimation or amplitude amplification.

We address the problem of the existence of completely positive trace preserving (CPTP) maps between two sets of density matrices. We refine the result of Alberti and Uhlmann and derive a necessary and sufficient condition for the existence of a unital channel between two pairs of qubit states, which ultimately boils down to three simple inequalities.

We consider possible non-signaling composites of probabilistic models based on euclidean Jordan algebras (EJAs), satisfying some reasonable additional constraints motivated by the desire to construct dagger-compact categories of such models. We show that no such composite has the exceptional Jordan algebra as a direct summand, nor does any such composite exist if one factor has an exceptional summand, unless the other factor is a direct sum of one-dimensional Jordan algebras (representing essentially a classical system). Moreover, we show that any composite of simple, non-exceptional EJAs is a direct summand of their universal tensor product, sharply limiting the possibilities. These results warrant our focussing on concrete Jordan algebras of hermitian matrices, i.e., euclidean Jordan algebras with a preferred embedding in a complex matrix algebra. We show that these can be organized in a natural way as a symmetric monoidal category, albeit one that is not compact closed. We then construct a related category $\mbox{InvQM}$ of embedded euclidean Jordan algebras, having fewer objects but more morphisms, that is not only compact closed but dagger-compact. This category unifies finite-dimensional real, complex and quaternionic mixed-state quantum mechanics, except that the composite of two complex quantum systems comes with an extra classical bit. Our notion of composite requires neither tomographic locality, nor preservation of purity under tensor product. The categories we construct include examples in which both of these conditions fail. In such cases, the information capacity (the maximum number of mutually distinguishable states) of a composite is greater than the product of the capacities of its constituents.

We investigate quantum error correction using continuous parity measurements to correct bit-flip errors with the three-qubit code. Continuous monitoring of errors brings the benefit of a continuous stream of information, which facilitates passive error tracking in real time. It reduces overhead from the standard gate-based approach that periodically entangles and measures additional ancilla qubits. However, the noisy analog signals from continuous parity measurements mandate more complicated signal processing to interpret syndromes accurately. We analyze the performance of several practical filtering methods for continuous error correction and demonstrate that they are viable alternatives to the standard ancilla-based approach. As an optimal filter, we discuss an unnormalized (linear) Bayesian filter, with improved computational efficiency compared to the related Wonham filter introduced by Mabuchi [New J. Phys. 11, 105044 (2009)]. We compare this optimal continuous filter to two practical variations of the simplest periodic boxcar-averaging-and-thresholding filter, targeting real-time hardware implementations with low-latency circuitry. As variations, we introduce a non-Markovian ``half-boxcar'' filter and a Markovian filter with a second adjustable threshold; these filters eliminate the dominant source of error in the boxcar filter, and compare favorably to the optimal filter. For each filter, we derive analytic results for the decay in average fidelity and verify them with numerical simulations.
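The simplest periodic boxcar-averaging-and-thresholding filter mentioned above can be sketched in a few lines. This is a generic illustration, not the paper's implementation: the $\pm 1$ parity-readout convention, the function name, and the single-threshold decision rule are our assumptions.

```python
def boxcar_filter(signal, window, threshold):
    """Average a noisy parity-measurement record over consecutive
    non-overlapping windows of length `window`, and threshold each
    average to a binary syndrome (1 = parity flip declared).

    `signal` is a list of noisy readout samples nominally centered
    on +1 (even parity) or -1 (odd parity)."""
    syndromes = []
    for start in range(0, len(signal) - window + 1, window):
        mean = sum(signal[start:start + window]) / window
        syndromes.append(1 if mean < threshold else 0)
    return syndromes

# noiseless toy record: four samples of even parity, then four of odd
syndromes = boxcar_filter([1.0] * 4 + [-1.0] * 4, window=4, threshold=0.0)
```

A parity flip near a window boundary splits its signal across two averages, which is the dominant error source that the half-boxcar and two-threshold variants described above are designed to eliminate.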

In ``Playing Pool with $\pi$'' [1], Galperin invented an extraordinary method to learn the digits of $\pi$ by counting the collisions of billiard balls. Here I demonstrate an exact isomorphism between Galperin's bouncing billiards and Grover's algorithm for quantum search. This provides an illuminating way to visualize Grover's algorithm.
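The billiard side of this isomorphism is easy to check numerically. In Galperin's setup, a light ball sits between a wall and a heavy ball of mass ratio $100^N$; the total collision count reproduces the first $N+1$ digits of $\pi$. A sketch of that simulation (variable names and the unit initial velocity are our choices):

```python
def count_collisions(mass_ratio):
    """Count all elastic collisions in Galperin's billiard: a light
    ball (mass 1) at rest between a wall and a heavy ball (mass
    `mass_ratio`) that moves toward it with unit speed."""
    m, M = 1.0, float(mass_ratio)
    v_small, v_big = 0.0, -1.0   # negative velocity = toward the wall
    collisions = 0
    while True:
        if v_small < 0:
            v_small = -v_small   # light ball bounces off the wall
        elif v_small > v_big:
            # elastic ball-ball collision (1D, momentum + energy conserved)
            v_small, v_big = (
                ((m - M) * v_small + 2 * M * v_big) / (m + M),
                ((M - m) * v_big + 2 * m * v_small) / (m + M),
            )
        else:
            break  # both moving away, heavy ball at least as fast: done
        collisions += 1
    return collisions
```

With equal masses the count is 3; at mass ratio $100$ it is 31; at $100^2$ it is 314, and so on, matching the known result that the count is $\lfloor \pi/\arctan\sqrt{m/M} \rfloor$ for generic mass ratios.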

In this paper we propose a technique for distributing entanglement in architectures in which interactions between pairs of qubits are constrained to a fixed network $G$. This allows for two-qubit operations to be performed between qubits which are remote from each other in $G$, through gate teleportation. We demonstrate how adapting $\textit{quantum linear network coding}$ to this problem of entanglement distribution in a network of qubits can be used to solve the problem of distributing Bell states and GHZ states in parallel, when bottlenecks in $G$ would otherwise force such entangled states to be distributed sequentially. In particular, we show that by reduction to classical network coding protocols for the $k$-pairs problem or multiple multicast problem in a fixed network $G$, one can distribute entanglement between the transmitters and receivers with a Clifford circuit whose quantum depth is some (typically small and easily computed) constant, which does not depend on the size of $G$, regardless of how remote the transmitters and receivers are from one another or how many of them there are. These results also generalise straightforwardly to qudits of any prime dimension. We demonstrate our results using a specialised formalism, distinct from and more efficient than the stabiliser formalism, which is likely to be helpful to reason about and prototype such quantum linear network coding circuits.

Quantum resource theories (QRTs) provide a unified theoretical framework for understanding inherent quantum-mechanical properties that serve as resources in quantum information processing, but resources motivated by physics may possess structure whose analysis is mathematically intractable, such as non-uniqueness of maximally resourceful states, lack of convexity, and infinite dimension. We investigate state conversion and resource measures in general QRTs under minimal assumptions to uncover universal properties of physically motivated quantum resources that may exhibit such intractable mathematical structure. In the general setting, we prove the existence of maximally resourceful states in one-shot state conversion. Also analyzing asymptotic state conversion, we discover $\textit{catalytic replication}$ of quantum resources, where a resource state is infinitely replicable by free operations. In QRTs without assuming the uniqueness of maximally resourceful states, we formulate the tasks of distillation and formation of quantum resources, and introduce distillable resource and resource cost based on the distillation and the formation, respectively. Furthermore, we introduce $\textit{consistent resource measures}$ that quantify the amount of quantum resources without contradicting the rate of state conversion even in QRTs with non-unique maximally resourceful states. Progressing beyond the previous work showing a uniqueness theorem for additive resource measures, we prove the corresponding uniqueness inequality for the consistent resource measures; that is, consistent resource measures of a quantum state take values between the distillable resource and the resource cost of the state. These formulations and results establish a foundation of QRTs applicable in a unified way to physically motivated quantum resources whose analysis can be mathematically intractable.

Time in quantum mechanics is peculiar: it is an observable that cannot be associated with a Hermitian operator. As a consequence, it is impossible to explain dynamics in an isolated system without invoking an external classical clock, a fact that becomes particularly problematic in the context of quantum gravity. An unconventional solution was pioneered by Page and Wootters (PaW) in 1983. PaW showed that dynamics can be an emergent property of the entanglement between two subsystems of a static Universe. In this work we first investigate the possibility of introducing in this framework a Hermitian time operator complementary to a clock Hamiltonian having an equally-spaced energy spectrum. A Hermitian operator complementary to such a Hamiltonian was introduced by Pegg in 1998, who named it "Age". We show here that Age, when introduced in the PaW context, can be interpreted as a proper Hermitian time operator conjugate to a "good" clock Hamiltonian. We then show that, still following Pegg's formalism, it is possible to introduce in the PaW framework bounded clock Hamiltonians with an unequally-spaced energy spectrum with rational energy ratios. In this case time is described by a POVM, and we demonstrate that Pegg's POVM states provide a consistent dynamical evolution of the system even though they are not orthogonal, and therefore only partially distinguishable.

In order to reject the local hidden variables hypothesis, the usefulness of a Bell inequality can be quantified by how small a $p$-value it will give for a physical experiment. Here we show that to obtain a small expected $p$-value it is sufficient to have a large gap between the local and Tsirelson bounds of the Bell inequality, when it is formulated as a nonlocal game. We develop an algorithm for transforming an arbitrary Bell inequality into an equivalent nonlocal game with the largest possible gap, and show its results for the CGLMP and $I_{nn22}$ inequalities. We present explicit examples of Bell inequalities with gap arbitrarily close to one, and show that this makes it possible to reject local hidden variables with arbitrarily small $p$-value in a single shot, without needing to collect statistics. We also develop an algorithm for calculating local bounds of general Bell inequalities which is significantly faster than the naïve approach, which may be of independent interest.
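The naïve approach to a local bound that the faster algorithm above improves on is brute-force enumeration of deterministic local strategies. For CHSH this baseline is tiny (16 strategies) and recovers the familiar local bound of 2, versus the Tsirelson bound $2\sqrt{2}$; the function name and sign convention are our illustrative choices.

```python
from itertools import product

def chsh_local_bound():
    """Brute-force the local (deterministic) bound of the CHSH
    expression <A0 B0> + <A0 B1> + <A1 B0> - <A1 B1> by enumerating
    all deterministic +/-1 assignments to the four observables."""
    return max(
        a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
        for a0, a1, b0, b1 in product((-1, 1), repeat=4)
    )
```

For an inequality with $m$ settings and $k$ outcomes per party, this enumeration grows as $k^{2m}$, which is why a significantly faster local-bound algorithm is of independent interest.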

The surface code is a prominent topological error-correcting code exhibiting high fault-tolerance accuracy thresholds. Conventional schemes for error correction with the surface code place qubits on a planar grid and assume native CNOT gates between the data qubits with nearest-neighbor ancilla qubits. Here, we present surface code error-correction schemes using $\textit{only}$ Pauli measurements on single qubits and on pairs of nearest-neighbor qubits. In particular, we provide several qubit layouts that offer favorable trade-offs between qubit overhead, circuit depth and connectivity degree. We also develop minimized measurement sequences for syndrome extraction, enabling reduced logical error rates and improved fault-tolerance thresholds. Our work applies to topologically protected qubits realized with Majorana zero modes and to similar systems in which multi-qubit Pauli measurements rather than CNOT gates are the native operations.

We numerically compute renormalized expectation values of quadratic operators in a quantum field theory (QFT) of free Dirac fermions in curved two-dimensional (Lorentzian) spacetime. First, we use a staggered-fermion discretization to generate a sequence of lattice theories yielding the desired QFT in the continuum limit. Numerically-computed lattice correlators are then used to approximate, through extrapolation, those in the continuum. Finally, we use so-called point-splitting regularization and Hadamard renormalization to remove divergences, and thus obtain finite, renormalized expectation values of quadratic operators in the continuum. As illustrative applications, we show how to recover the Unruh effect in flat spacetime and how to compute renormalized expectation values in the Hawking-Hartle vacuum of a Schwarzschild black hole and in the Bunch-Davies vacuum of an expanding universe described by de Sitter spacetime. Although here we address a non-interacting QFT using free fermion techniques, the framework described in this paper lays the groundwork for a series of subsequent studies involving simulation of interacting QFTs in curved spacetime by tensor network techniques.

The Born rule provides a fundamental connection between theory and observation in quantum mechanics, yet its origin remains a mystery. We consider this problem within the context of quantum optics using only classical physics and the assumption of a quantum electrodynamic vacuum that is real rather than virtual. The connection to observation is made via classical intensity threshold detectors that are used as a simple, deterministic model of photon detection. By following standard experimental conventions of data analysis on discrete detection events, we show that this model is capable of reproducing several observed phenomena thought to be uniquely quantum in nature, thus providing greater elucidation of the quantum-classical boundary.

We introduce a three-player nonlocal game, with a finite number of classical questions and answers, such that the optimal success probability of $1$ in the game can only be achieved in the limit of strategies using arbitrarily high-dimensional entangled states. Precisely, there exists a constant $0 <c\leq 1$ such that to succeed with probability $1-\varepsilon $ in the game it is necessary to use an entangled state of at least $\Omega(\varepsilon ^{-c})$ qubits, and it is sufficient to use a state of at most $O(\varepsilon ^{-1})$ qubits. The game is based on the coherent state exchange game of Leung et al.\ (CJTCS 2013). In our game, the task of the quantum verifier is delegated to a third player by a classical referee. Our results complement those of Slofstra (arXiv:1703.08618) and Dykema et al.\ (arXiv:1709.05032), who obtained two-player games with similar (though quantitatively weaker) properties based on the representation theory of finitely presented groups and $C^*$-algebras respectively.

Critical to the construction of large-scale quantum networks, i.e., a quantum internet, is the development of fast algorithms for managing entanglement present in the network. One fundamental building block for a quantum internet is the distribution of Bell pairs between distant nodes in the network. Here we focus on the problem of transforming multipartite entangled states into the tensor product of bipartite Bell pairs between specific nodes using only a certain class of local operations and classical communication. In particular we study the problem of deciding whether a given graph state, and in general a stabilizer state, can be transformed into a set of Bell pairs on specific vertices using only single-qubit Clifford operations, single-qubit Pauli measurements and classical communication. We prove that this problem is $\mathbb{NP}$-complete.
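As a minimal illustration of the kind of transformation considered above (a single-edge example only, not the general decision procedure, which the paper shows is NP-complete): the two-qubit graph state on one edge is already a Bell pair up to a single-qubit Clifford, namely a Hadamard on the second qubit. A small statevector check in plain Python:

```python
import math

# Single-edge graph state on two qubits: CZ acting on |+>|+>.
# Amplitudes stored as a length-4 list; index 2*a + b encodes |a b>.
h = 1.0 / math.sqrt(2.0)
state = [h * h] * 4      # |++> has amplitude 1/2 on every basis state
state[3] *= -1           # CZ puts a -1 phase on |11>

# Apply the single-qubit Clifford H to the second qubit:
# H maps |b> -> (|0> + (-1)^b |1>)/sqrt(2).
out = [0.0] * 4
for a in (0, 1):
    for b in (0, 1):
        amp = state[2 * a + b]
        out[2 * a + 0] += h * amp
        out[2 * a + 1] += h * ((-1) ** b) * amp

bell = [h, 0.0, 0.0, h]  # (|00> + |11>)/sqrt(2)
print(max(abs(x - y) for x, y in zip(out, bell)) < 1e-12)  # -> True
```

The hardness result concerns deciding this kind of reachability for arbitrary graph states and target vertex pairs, where no such direct inspection is feasible.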

Considering two non-interacting qubits in the context of open quantum systems, it is well known that their common environment may act as an entangling agent. In a perturbative regime the influence of the environment on the system dynamics can effectively be described by a unitary and a dissipative contribution. For the two-spin boson model with (sub-) Ohmic spectral density considered here, the particular unitary contribution (Lamb shift) easily explains the buildup of entanglement between the two qubits. Furthermore it has been argued that in the adiabatic limit, adding the so-called counterterm to the microscopic model compensates the unitary influence of the environment and, thus, inhibits the generation of entanglement. Investigating this assertion is one of the main objectives of the work presented here. Using the hierarchy of pure states (HOPS) method to numerically calculate the exact reduced dynamics, we find and explain that the degree of inhibition crucially depends on the parameter $s$ determining the low-frequency power-law behavior of the spectral density $J(\omega) \sim \omega^s e^{-\omega/\omega_c}$. Remarkably, we find that for resonant qubits, even in the adiabatic regime (arbitrarily large $\omega_c$), the entanglement dynamics is still influenced by an environmentally induced Hamiltonian interaction. Further, we study the model in detail and present the exact entanglement dynamics for a wide range of coupling strengths, distinguishing between resonant and detuned qubits, as well as Ohmic and deep sub-Ohmic environments. Notably, we find that in all cases the asymptotic entanglement does not vanish, and conjecture a linear relation between the coupling strength and the asymptotic entanglement measured by means of concurrence. Finally, we discuss the suitability of various perturbative master equations for obtaining approximate entanglement dynamics.

$\textit{Self-testing}$ has been a rich area of study in quantum information theory. It allows an experimenter to interact classically with a black-box quantum system and to test that a specific entangled state was present and a specific set of measurements were performed. Recently, self-testing has been central to high-profile results in complexity theory as seen in the work on entangled games PCP of Natarajan and Vidick [26], iterated compression by Fitzsimons et al. [16], and NEEXP in MIP* due to Natarajan and Wright [27]. The most studied self-test is the CHSH game, which features a bipartite system with two isolated devices. This game certifies the presence of a single EPR entangled state and the use of anti-commuting Pauli measurements. Most of the self-testing literature has focused on extending these results to self-test for tensor products of EPR states and tensor products of Pauli measurements. In this work, we introduce an algebraic generalization of CHSH by viewing it as a linear constraint system (LCS) game, exhibiting self-testing properties that are qualitatively different. These provide the first example of LCS games that self-test non-Pauli operators, resolving an open question posed by Coladangelo and Stark [15]. Our games also provide a self-test for states other than the maximally entangled state, and hence resolve the open question posed by Cleve and Mittal [11]. Additionally, our games have $1$-bit questions and $\log n$-bit answers, making them suitable candidates for complexity-theoretic applications. This work is the first step towards a general theory of self-testing arbitrary groups. In order to obtain our results, we exploit connections between sum of squares proofs, non-commutative ring theory, and the Gowers-Hatami theorem from approximate representation theory.
A crucial part of our analysis is to introduce a sum of squares framework that generalizes the $\textit{solution group}$ of Cleve, Liu, and Slofstra [10] to the non-pseudo-telepathic regime. Finally, we give the first example of a game that is not a self-test. Our results suggest a richer landscape of self-testing phenomena than previously considered.

Based on an intuitive generalization of the Leibniz principle of `the identity of indiscernibles', we introduce a novel ontological notion of classicality, called bounded ontological distinctness. Formulated as a principle, bounded ontological distinctness equates the distinguishability of a set of operational physical entities to the distinctness of their ontological counterparts. Employing three instances of two-dimensional quantum preparations, we demonstrate the violation of bounded ontological distinctness or excess ontological distinctness of quantum preparations, without invoking any additional assumptions. Moreover, our methodology enables the inference of tight lower bounds on the extent of excess ontological distinctness of quantum preparations. Similarly, we demonstrate excess ontological distinctness of quantum transformations, using three two-dimensional unitary transformations. However, to demonstrate excess ontological distinctness of quantum measurements, an additional assumption such as outcome determinism or bounded ontological distinctness of preparations is required. Moreover, we show that quantum violations of other well-known ontological principles implicate quantum excess ontological distinctness. Finally, to showcase the operational vitality of excess ontological distinctness, we introduce two distinct classes of communication tasks powered by excess ontological distinctness.

An important problem in quantum information theory is that of bounding sets of correlations that arise from making local measurements on entangled states of arbitrary dimension. Currently, the best-known method to tackle this problem is the NPA hierarchy: an infinite sequence of semidefinite programs that provides increasingly tighter outer approximations to the desired set of correlations. In this work we consider a more general scenario in which one performs sequences of local measurements on an entangled state of arbitrary dimension. We show that a simple adaptation of the original NPA hierarchy provides an analogous hierarchy for this scenario, with comparable resource requirements and convergence properties. We then use the method to tackle some problems in device-independent quantum information. First, we show how one can robustly certify over 2.3 bits of device-independent local randomness from a two-qubit state using a sequence of measurements, going beyond the theoretical maximum of two bits that can be achieved with non-sequential measurements. Finally, we show tight upper bounds for two previously defined tasks in sequential Bell test scenarios.

We devise a method to certify nonclassical features via correlations of phase-space distributions by unifying the notions of quasiprobabilities and matrices of correlation functions. Our approach complements and extends recent results that were based on Chebyshev's integral inequality [65]. The method developed here correlates arbitrary phase-space functions at arbitrary points in phase space, including multimode scenarios and higher-order correlations. Furthermore, our approach provides necessary and sufficient nonclassicality criteria, applies to phase-space functions beyond $s$-parametrized ones, and is accessible in experiments. To demonstrate the power of our technique, the quantum characteristics of discrete- and continuous-variable, single- and multimode, as well as pure and mixed states are certified only employing second-order correlations and Husimi functions, which always resemble a classical probability distribution. Moreover, nonlinear generalizations of our approach are studied. Therefore, a versatile and broadly applicable framework is devised to uncover quantum properties in terms of matrices of phase-space distributions.

We propose a quantum algorithm for training nonlinear support vector machines (SVMs) for feature space learning where classical input data is encoded in the amplitudes of quantum states. Based on the classical SVM-perf algorithm of Joachims [1], our algorithm has a running time which scales linearly in the number of training examples $m$ (up to polylogarithmic factors) and applies to the standard soft-margin $\ell_1$-SVM model. In contrast, while classical SVM-perf has demonstrated impressive performance on both linear and nonlinear SVMs, its efficiency is guaranteed only in certain cases: it achieves linear $m$ scaling only for linear SVMs, where classification is performed in the original input data space, or for the special cases of low-rank or shift-invariant kernels. Similarly, previously proposed quantum algorithms either have super-linear scaling in $m$, or else apply to different SVM models such as the hard-margin or least squares $\ell_2$-SVM which lack certain desirable properties of the soft-margin $\ell_1$-SVM model. We classically simulate our algorithm and give evidence that it can perform well in practice, and not only for asymptotically large data sets.
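For orientation, the soft-margin $\ell_1$-SVM model targeted above minimizes $\tfrac{1}{2}\|w\|^2 + C\sum_i \max(0,\, 1 - y_i\, w\cdot x_i)$. A toy one-dimensional subgradient-descent sketch of this objective (illustrative only; it reflects neither SVM-perf's cutting-plane training nor the quantum algorithm):

```python
# Soft-margin l1-SVM on a toy 1-D dataset:
#   minimize 0.5*w**2 + C * sum_i max(0, 1 - y_i * w * x_i)
# solved by plain fixed-step subgradient descent.
X = [-2.0, -1.0, 1.0, 2.0]
Y = [-1, -1, 1, 1]
C, lr, w = 1.0, 0.01, 0.0
for _ in range(2000):
    g = w                                    # gradient of the regularizer
    for x, y in zip(X, Y):
        if y * w * x < 1.0:                  # margin violated: hinge term active
            g -= C * y * x                   # subgradient of the hinge loss
    w -= lr * g
print(all(y * w * x > 0 for x, y in zip(X, Y)))  # -> True: toy set separated
```

The hinge loss is what distinguishes this model from the least-squares $\ell_2$-SVM mentioned in the abstract, whose squared penalty charges even correctly classified points inside the margin differently.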

We introduce $\texttt{Yao}$, an extensible, efficient open-source framework for quantum algorithm design. $\texttt{Yao}$ features generic and differentiable programming of quantum circuits. It achieves state-of-the-art performance in simulating small to intermediate-sized quantum circuits that are relevant to near-term applications. We introduce the design principles and critical techniques behind $\texttt{Yao}$. These include the quantum block intermediate representation of quantum circuits, a built-in automatic differentiation engine optimized for reversible computing, and batched quantum registers with GPU acceleration. The extensibility and efficiency of $\texttt{Yao}$ help boost innovation in quantum algorithm design.

We extend the concept of transfer learning, widely applied in modern machine learning algorithms, to the emerging context of hybrid neural networks composed of classical and quantum elements. We propose different implementations of hybrid transfer learning, but we focus mainly on the paradigm in which a pre-trained classical network is modified and augmented by a final variational quantum circuit. This approach is particularly attractive in the current era of intermediate-scale quantum technology since it allows one to optimally pre-process high-dimensional data (e.g., images) with any state-of-the-art classical network and to embed a select set of highly informative features into a quantum processor. We present several proof-of-concept examples of the convenient application of quantum transfer learning for image recognition and quantum state classification. We use the cross-platform software library PennyLane to experimentally test a high-resolution image classifier with two different quantum computers, respectively provided by IBM and Rigetti.

We study the spectral properties of $D$-dimensional $N=2$ supersymmetric lattice models. We find systematic departures from the eigenstate thermalization hypothesis (ETH) in the form of a degenerate set of ETH-violating supersymmetric (SUSY) doublets, also referred to as many-body scars, that we construct analytically. These states are stable against arbitrary SUSY-preserving perturbations, including inhomogeneous couplings. For the specific case of two-leg ladders, we provide extensive numerical evidence that shows how those states are the only ones violating the ETH, and discuss their robustness to SUSY-violating perturbations. Our work suggests a generic mechanism to stabilize quantum many-body scars in lattice models in arbitrary dimensions.

We study the class of quantum measurements with the property that the image of the set of quantum states under the measurement map transforming states into probability distributions is similar to this set and call such measurements morphophoric. This leads to the generalisation of the notion of a qplex, where SIC-POVMs are replaced by the elements of the much larger class of morphophoric POVMs, containing in particular 2-design (rank-1 and equal-trace) POVMs. The intrinsic geometry of a generalised qplex is the same as that of the set of quantum states, so we explore its external geometry, investigating, inter alia, the algebraic and geometric form of the inner (basis) and the outer (primal) polytopes between which the generalised qplex is sandwiched. In particular, we examine generalised qplexes generated by MUB-like 2-design POVMs utilising their graph-theoretical properties. Moreover, we show how to extend the primal equation of QBism designed for SIC-POVMs to the morphophoric case.

Self-testing is a method to infer the underlying physics of a quantum experiment in a black box scenario. As such it represents the strongest form of certification for quantum systems. In recent years a considerable self-testing literature has developed, leading to progress in related device-independent quantum information protocols and deepening our understanding of quantum correlations. In this work we give a thorough and self-contained introduction and review of self-testing and its application to other areas of quantum information.

A universal scheme is introduced to speed up the dynamics of a driven open quantum system along a prescribed trajectory of interest. This framework generalizes counterdiabatic driving to open quantum processes. Shortcuts to adiabaticity designed in this fashion can be implemented in two alternative physical scenarios: one characterized by the presence of balanced gain and loss, the other involving non-Markovian dynamics with time-dependent Lindblad operators. As an illustration, we engineer superadiabatic cooling, heating, and isothermal strokes for a two-level system, and provide a protocol for the fast thermalization of a quantum oscillator.

We study the thermodynamic properties of a system of two-level dipoles that are coupled ultrastrongly to a single cavity mode. By using exact numerical and approximate analytical methods, we evaluate the free energy of this system at arbitrary interaction strengths and discuss strong-coupling modifications of derivative quantities such as the specific heat or the electric susceptibility. From this analysis we identify the lowest-order cavity-induced corrections to those quantities in the collective ultrastrong coupling regime and show that for even stronger interactions the presence of a single cavity mode can strongly modify extensive thermodynamic quantities of a large ensemble of dipoles. In this non-perturbative coupling regime we also observe a significant shift of the ferroelectric phase transition temperature and a characteristic broadening and collapse of the black-body spectrum of the cavity mode. Apart from a purely fundamental interest, these general insights will be important for identifying potential applications of ultrastrong-coupling effects, for example, in the field of quantum chemistry or for realizing quantum thermal machines.
