Quantum continues to grow and is hiring an editorial assistant. For now, this is a part-time position for six months based in Vienna.

To learn more, have a look at the job announcement and application page.

Treating reference frames fundamentally as quantum systems is inevitable in quantum gravity, and also in quantum foundations once laboratories are treated as physical systems. Both fields thereby face the question of how to describe physics relative to quantum reference systems and how the descriptions relative to different such choices are related. Here, we exploit a fruitful interplay of ideas from both fields to begin developing a unifying approach to transformations among quantum reference systems that ultimately aims at encompassing both quantum and gravitational physics. In particular, using a gravity-inspired symmetry principle, which enforces physical observables to be relational and leads to an inherent redundancy in the description, we develop a perspective-neutral structure which contains all frame perspectives at once and via which they are changed. We show that taking the perspective of a specific frame amounts to fixing the symmetry-related redundancies in both the classical and quantum theory, and that changing perspective corresponds to a symmetry transformation. We implement this using the language of constrained systems, which naturally encodes symmetries. Within a simple one-dimensional model, we recover some of the quantum frame transformations of [1], embedding them in a perspective-neutral framework. Using them, we illustrate how entanglement and classicality of an observed system depend on the quantum frame perspective. Our operational language also inspires a new interpretation of Dirac and reduced quantized theories within our model as perspective-neutral and perspectival quantum theories, respectively, and reveals the explicit link between them. In this light, we suggest a new take on the relation between a 'quantum general covariance' and the diffeomorphism symmetry in quantum gravity.

The Hodgkin-Huxley model describes the conduction of the nervous impulse through the axon, whose membrane's electric response can be modelled by multiple connected electric circuits containing capacitors, voltage sources, and conductances. These conductances depend on previous depolarizing membrane voltages, and can therefore be identified with a memory resistive element called a memristor. Inspired by the recent quantization of the memristor, a simplified Hodgkin-Huxley model including a single ion channel has been studied in the quantum regime. Here, we study the quantization of the complete Hodgkin-Huxley model, accounting for all three ion channels, and introduce a quantum source, together with an output waveguide as the connection to a subsequent neuron. Our system consists of two memristors and one resistor, describing the potassium, sodium, and chloride ion channel conductances, respectively, and a capacitor accounting for the axon's membrane capacitance. We study the behavior of both ion channel conductivities and the circuit voltage, and compare the results with those of the single-channel model, for a given quantum state of the source. Remarkably, in contrast to the single-channel model, we are able to reproduce the voltage spike in an adiabatic regime. Arguing that the circuit voltage is a quantum variable, we find a purely quantum-mechanical contribution to the second moment of the system voltage. This work represents a complete study of the Hodgkin-Huxley model in the quantum regime, establishing a recipe for constructing quantum neuron networks with quantum state inputs, and paves the way for advances in hardware-based neuromorphic quantum computing as well as quantum machine learning, which might be more resource-efficient.
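For reference, the classical dynamics being quantized here can be sketched in a few lines. This is the standard Hodgkin-Huxley model with the usual textbook parameters (the values below are the classical ones, not those of the paper's quantized circuit), integrated with a simple forward-Euler step:

```python
# Classical Hodgkin-Huxley membrane model: sodium and potassium channels
# (the memristive elements), a leak/chloride channel (resistive), and the
# membrane capacitance. Standard textbook parameters.
import math

g_Na, g_K, g_L = 120.0, 36.0, 0.3          # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.387      # reversal potentials (mV)
C_m = 1.0                                  # membrane capacitance (uF/cm^2)

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Membrane-voltage trace (mV) for a constant current step I_ext (uA/cm^2)."""
    V = -65.0  # resting potential
    # gating variables start at their steady-state values at rest
    m = alpha_m(V) / (alpha_m(V) + beta_m(V))
    h = alpha_h(V) / (alpha_h(V) + beta_h(V))
    n = alpha_n(V) / (alpha_n(V) + beta_n(V))
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium channel (memristive)
        I_K = g_K * n**4 * (V - E_K)         # potassium channel (memristive)
        I_L = g_L * (V - E_L)                # leak channel (resistive)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

voltages = simulate()
print(f"peak membrane voltage: {max(voltages):.1f} mV")  # spike rises above 0 mV
```

With a depolarizing current step the membrane produces the characteristic voltage spike; the quantized treatment replaces the two voltage-dependent conductances with quantum memristors.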

Investigating the classical simulability of quantum circuits provides a promising avenue towards understanding the computational power of quantum systems. Whether a class of quantum circuits can be efficiently simulated with a probabilistic classical computer, or is provably hard to simulate, depends quite critically on the precise notion of ``classical simulation'' and in particular on the required accuracy. We argue that a notion of classical simulation, which we call EPSILON-simulation (or $\epsilon$-simulation for short), captures the essence of possessing ``equivalent computational power'' as the quantum system it simulates: It is statistically impossible to distinguish an agent with access to an $\epsilon$-simulator from one possessing the simulated quantum system. We relate $\epsilon$-simulation to various alternative notions of simulation, predominantly focusing on a simulator we call a $\textit{poly-box}$. A poly-box outputs $1/poly$ precision additive estimates of Born probabilities and marginals. This notion of simulation has gained prominence through a number of recent simulability results. Accepting some plausible complexity-theoretic assumptions, we show that $\epsilon$-simulation is strictly stronger than a poly-box by showing that IQP circuits and unconditioned magic-state injected Clifford circuits are both hard to $\epsilon$-simulate and yet admit a poly-box. In contrast, we also show that these two notions are equivalent under an additional assumption on the sparsity of the output distribution ($\textit{poly-sparsity}$).
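The $1/poly$ additive precision a poly-box provides is exactly what sampling buys: by Hoeffding's inequality, $N$ samples estimate a probability to additive error $O(1/\sqrt{N})$ with high confidence, so $poly(n)$ samples give $1/poly(n)$ precision. A toy sketch, with a known outcome probability standing in for a Born probability:

```python
# Additive-error estimation of an outcome probability by sampling.
# By Hoeffding, N samples give error O(1/sqrt(N)) with high probability,
# i.e. 1/poly precision from poly many samples -- the poly-box guarantee.
import random

def poly_box_estimate(sample, n_samples):
    """Additive estimate of Pr[outcome = 1] from n_samples calls to sample()."""
    return sum(sample() for _ in range(n_samples)) / n_samples

random.seed(0)
p_true = 0.3  # stand-in for a Born probability
sample = lambda: 1 if random.random() < p_true else 0

for N in (100, 10_000):
    est = poly_box_estimate(sample, N)
    print(f"N={N:>6}: estimate={est:.3f}, additive error={abs(est - p_true):.3f}")
```

Note the gap the abstract highlights: such additive estimates do not by themselves let one *sample* the distribution (the $\epsilon$-simulation requirement) unless the distribution is sufficiently sparse.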

We show that it is impossible to perform ideal projective measurements on quantum systems using finite resources. We identify three fundamental features of ideal projective measurements and show that when limited by finite resources only one of these features can be salvaged. Our framework is general enough to accommodate any system and measuring device (pointer) models, but for illustration we use an explicit model of an $N$-particle pointer. For a pointer that perfectly reproduces the statistics of the system, we provide tight analytic expressions for the energy cost of performing the measurement. This cost may be broken down into two parts. First, the cost of preparing the pointer in a suitable state, and second, the cost of a global interaction between the system and pointer in order to correlate them. Our results show that, even under the assumption that the interaction can be controlled perfectly, achieving perfect correlation is infinitely expensive. We provide protocols for achieving optimal correlation given finite resources for the most general system and pointer Hamiltonians, phrasing our results as fundamental bounds in terms of the dimensions of these systems.

While recent work suggests that quantum computers can speed up the solution of semidefinite programs, little is known about the quantum complexity of more general convex optimization. We present a quantum algorithm that can optimize a convex function over an $n$-dimensional convex body using $\tilde{O}(n)$ queries to oracles that evaluate the objective function and determine membership in the convex body. This represents a quadratic improvement over the best-known classical algorithm. We also study limitations on the power of quantum computers for general convex optimization, showing that it requires $\tilde{\Omega}(\sqrt{n})$ evaluation queries and $\Omega(\sqrt{n})$ membership queries.

We study to what extent quantum algorithms can speed up solving convex optimization problems. Following the classical literature we assume access to a convex set via various oracles, and we examine the efficiency of reductions between the different oracles. In particular, we show how a separation oracle can be implemented using $\tilde{O}(1)$ quantum queries to a membership oracle, which is an exponential quantum speed-up over the $\Omega(n)$ membership queries that are needed classically. We show that a quantum computer can very efficiently compute an approximate subgradient of a convex Lipschitz function. Combining this with a simplification of recent classical work of Lee, Sidford, and Vempala gives our efficient separation oracle. This in turn implies, via a known algorithm, that $\tilde{O}(n)$ quantum queries to a membership oracle suffice to implement an optimization oracle (the best known classical upper bound on the number of membership queries is quadratic). We also prove several lower bounds: $\Omega(\sqrt{n})$ quantum separation (or membership) queries are needed for optimization if the algorithm knows an interior point of the convex set, and $\Omega(n)$ quantum separation queries are needed if it does not.
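The classical baseline behind the subgradient step is worth making concrete: with function evaluations only, an approximate gradient costs on the order of $n$ queries (one finite difference per coordinate), which is precisely the dependence the quantum algorithm removes. A minimal classical sketch via central differences:

```python
# Classical approximate gradient of a (smooth convex) function from
# evaluation queries only: 2n queries, one central difference per coordinate.
# The quantum separation oracle in the text gets by with O~(1) evaluations.

def approx_gradient(f, x, eps=1e-6):
    """Central-difference gradient estimate: 2 * len(x) evaluations of f."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

# Example: f(x) = sum of squares, whose exact gradient is 2x
f = lambda x: sum(t * t for t in x)
g = approx_gradient(f, [1.0, -2.0, 0.5])
print(g)  # close to [2.0, -4.0, 1.0]
```

A separating hyperplane at an exterior point can then be read off from such a (sub)gradient, which is the membership-to-separation reduction the abstract quantizes.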

The Kochen-Specker (KS) theorem reveals the inconsistency between quantum theory and any putative underlying model of it satisfying the constraint of KS-noncontextuality. A logical proof of the KS theorem is one that relies only on the compatibility relations amongst a set of projectors (a KS set) to witness this inconsistency. These compatibility relations can be represented by a hypergraph, referred to as a contextuality scenario. Here we consider contextuality scenarios that we term KS-uncolourable, e.g., those which appear in logical proofs of the KS theorem. We introduce a hypergraph framework to obtain noise-robust witnesses of contextuality from such scenarios. Our approach builds on the results of R. Kunjwal and R. W. Spekkens, Phys. Rev. Lett. 115, 110403 (2015), by providing new insights into the relationship between the structure of a contextuality scenario and the associated noise-robust noncontextuality inequalities that witness contextuality. The present work also forms a necessary counterpart to the framework presented in R. Kunjwal, Quantum 3, 184 (2019), which applies only to KS-colourable contextuality scenarios, i.e., those which do not admit logical proofs of the KS theorem but do admit statistical proofs. We rely on a single hypergraph invariant, defined in R. Kunjwal, Quantum 3, 184 (2019), that appears in our contextuality witnesses, namely, the weighted max-predictability. The present work can also be viewed as a study of this invariant. Significantly, unlike the case of R. Kunjwal, Quantum 3, 184 (2019), none of the graph invariants from the graph-theoretic framework for KS-contextuality due to Cabello, Severini, and Winter (the ``CSW framework'', Phys. Rev. Lett. 112, 040401 (2014)) are relevant for our noise-robust noncontextuality inequalities.

A video recording of a talk on this paper is available from http://pirsa.org/17070059/.

A leading choice of error correction for scalable quantum computing is the surface code with lattice surgery. The basic lattice surgery operations, the merging and splitting of logical qubits, act non-unitarily on the logical states and are not easily captured by standard circuit notation. This raises the question of how best to design, verify, and optimise protocols that use lattice surgery, in particular in architectures with complex resource management issues. In this paper we demonstrate that the operations of the ZX calculus --- a form of quantum diagrammatic reasoning based on bialgebras --- match exactly the operations of lattice surgery. Red and green ``spider'' nodes match rough and smooth merges and splits, and follow the axioms of a dagger special associative Frobenius algebra. Some lattice surgery operations require non-trivial correction operations, which are captured natively in the use of the ZX calculus in the form of ensembles of diagrams. We give a first taste of the power of the calculus as a language for lattice surgery by considering two operations (T gates and producing a CNOT) and show how ZX diagram re-write rules give lattice surgery procedures for these operations that are novel, efficient, and highly configurable.
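The workhorse rewrite in this correspondence is spider fusion: adjacent spiders of the same colour merge into one, their phases adding (mod $2\pi$), mirroring the merge of two surface-code patches along a rough or smooth boundary. A minimal sketch on a hand-rolled graph of coloured, phased spiders (an illustrative data structure, not the paper's notation):

```python
# Spider fusion, the basic ZX rewrite: connected same-colour spiders merge
# and their phases add mod 2*pi. Diagram = {node: (colour, phase)} plus a set
# of frozenset edges between nodes.
import math

def fuse(spiders, edges):
    """Repeatedly merge connected same-colour spiders, adding phases mod 2*pi."""
    changed = True
    while changed:
        changed = False
        for e in list(edges):
            a, b = tuple(e)
            if a in spiders and b in spiders and spiders[a][0] == spiders[b][0]:
                colour = spiders[a][0]
                phase = (spiders[a][1] + spiders[b][1]) % (2 * math.pi)
                spiders[a] = (colour, phase)
                del spiders[b]
                edges.discard(e)
                # reconnect b's remaining neighbours to the merged spider a
                for e2 in list(edges):
                    if b in e2:
                        other = next(iter(e2 - {b}))
                        edges.discard(e2)
                        if other != a:
                            edges.add(frozenset({a, other}))
                changed = True
                break
    return spiders, edges

# Two green (Z) pi/4 spiders joined by a wire fuse into a single pi/2 spider.
spiders, edges = fuse({0: ("Z", math.pi / 4), 1: ("Z", math.pi / 4)},
                      {frozenset({0, 1})})
print(spiders)  # one Z spider with phase pi/2
```

Tools such as pyzx automate exactly this kind of rewriting at scale; the point of the sketch is only that the rule itself is a local graph operation, which is what makes it a natural match for patch merges.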

We consider the phenomenon of quantum mechanical contextuality, and specifically parity-based proofs thereof. Mermin's square and star are representative examples. Part of the information invoked in such contextuality proofs is the commutativity structure among the pertaining observables. We investigate to what extent this commutativity structure alone determines the viability of a parity-based contextuality proof. We establish a topological criterion for this, generalizing an earlier result by Arkhipov.
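The parity obstruction in Mermin's square can be checked exhaustively: the nine two-qubit Pauli observables sit in a 3x3 grid whose rows each multiply to $+I$ and whose columns multiply to $+I$, $+I$, $-I$. A noncontextual model would assign each observable a fixed value $\pm 1$ satisfying all six parity constraints, which is impossible: every observable lies in exactly one row and one column, so the product of all row parities ($+1$) would have to equal the product of all column parities ($-1$). Brute force over the $2^9$ assignments confirms this:

```python
# Parity-based contextuality proof (Mermin's square), checked by brute force:
# no +/-1 value assignment satisfies all six row/column parity constraints.
from itertools import product

rows = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
cols = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]
row_parity = [+1, +1, +1]   # each row of observables multiplies to +I
col_parity = [+1, +1, -1]   # two columns multiply to +I, one to -I

def satisfying_assignments():
    count = 0
    for v in product((+1, -1), repeat=9):
        ok = all(v[a] * v[b] * v[c] == p
                 for (a, b, c), p in zip(rows, row_parity))
        ok = ok and all(v[a] * v[b] * v[c] == p
                        for (a, b, c), p in zip(cols, col_parity))
        count += ok
    return count

print(satisfying_assignments())  # 0 -- no noncontextual value assignment exists
```

The paper's question is which part of this setup (here, the commutativity structure encoded in the row/column groupings) already fixes whether such a parity proof can exist.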

A potential quantum internet would open up the possibility of realizing numerous new applications, including provably secure communication. Since photon losses limit long-distance, direct quantum communication and widespread quantum networks, quantum repeaters are needed. The so-called PLOB-repeaterless bound [Pirandola et al., Nat. Commun. 8, 15043 (2017)] is a fundamental limit on the quantum capacity of direct quantum communication. Here, we analytically derive the quantum-repeater gain for error-corrected, one-way quantum repeaters based on higher-dimensional qudits for two different physical encodings: Fock and multimode qudits. We identify parameter regimes in which such quantum repeaters can surpass the PLOB-repeaterless bound and systematically analyze how typical parameters manifest themselves in the quantum-repeater gain. This benchmarking provides a guideline for the implementation of error-corrected qudit repeaters.
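For a pure-loss channel of transmissivity $\eta$, the PLOB bound caps repeaterless rates at $-\log_2(1-\eta)$ bits per channel use. A short numerical sketch shows why a repeater helps: splitting the link halves the loss on each segment, so the per-link ceiling vastly exceeds the direct bound (the 0.2 dB/km fiber loss used here is a typical assumed value, not a figure from the paper):

```python
# PLOB repeaterless bound for a pure-loss channel, and the effect of
# splitting a fiber link at its midpoint. Fiber attenuation of 0.2 dB/km
# is an illustrative standard-fiber assumption.
import math

def plob_bound(eta):
    """PLOB bound: -log2(1 - eta) secret bits per channel use."""
    return -math.log2(1.0 - eta)

def transmissivity(length_km, loss_db_per_km=0.2):
    return 10 ** (-loss_db_per_km * length_km / 10)

L = 200.0  # km
direct = plob_bound(transmissivity(L))
half_link = plob_bound(transmissivity(L / 2))  # ceiling on each elementary link
print(f"direct: {direct:.2e} bits/use, per-half-link ceiling: {half_link:.2e} bits/use")
```

Surpassing the direct-transmission value of this bound is precisely the benchmark the abstract uses to certify a useful quantum repeater.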

Quantum error correction is widely thought to be the key to fault-tolerant quantum computation. However, determining the most suited encoding for unknown error channels or specific laboratory setups is highly challenging. Here, we present a reinforcement learning framework for optimizing and fault-tolerantly adapting quantum error correction codes. We consider a reinforcement learning agent tasked with modifying a family of surface code quantum memories until a desired logical error rate is reached. Using efficient simulations with about 70 data qubits with arbitrary connectivity, we demonstrate that such a reinforcement learning agent can determine near-optimal solutions, in terms of the number of data qubits, for various error models of interest. Moreover, we show that agents trained on one setting are able to successfully transfer their experience to different settings. This ability for transfer learning showcases the inherent strengths of reinforcement learning and the applicability of our approach for optimization from off-line simulations to on-line laboratory settings.
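The underlying trade-off the agent navigates can be illustrated with a much cruder stand-in: grow a surface-code distance until a target logical error rate is met, scoring candidates by qubit count. The logical error model below uses the widely-used scaling $p_L \approx A\,(p/p_{th})^{(d+1)/2}$; the constants $A$, $p_{th}$ and the $2d^2-1$ qubit count are illustrative assumptions, not the paper's simulated error models or its learned code deformations:

```python
# Toy stand-in for code optimization: find the cheapest odd code distance d
# whose modeled logical error rate meets a target. Uses the standard
# surface-code scaling p_L ~ A * (p/p_th)^((d+1)/2) with assumed constants.

def logical_error_rate(d, p, p_th=0.011, A=0.1):
    return A * (p / p_th) ** ((d + 1) / 2)

def smallest_code(p, target, max_d=25):
    """Return (distance, data-qubit count) of the cheapest code meeting target."""
    for d in range(3, max_d + 1, 2):
        if logical_error_rate(d, p) <= target:
            return d, 2 * d * d - 1  # rotated-surface-code qubit count
    raise ValueError("target unreachable below max_d")

d, qubits = smallest_code(p=0.005, target=1e-4)
print(f"distance {d} with {qubits} physical qubits")
```

The paper's agent explores a far richer action space (local code deformations on arbitrary-connectivity layouts) where no closed-form scaling is available, which is what motivates reinforcement learning in the first place.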

Parametrized quantum circuits initialized with random initial parameter values are characterized by barren plateaus where the gradient becomes exponentially small in the number of qubits. In this technical note we theoretically motivate and empirically validate an initialization strategy which can resolve the barren plateau problem for practical applications. The technique involves randomly selecting some of the initial parameter values, then choosing the remaining values so that the circuit is a sequence of shallow blocks that each evaluates to the identity. This initialization limits the effective depth of the circuits used to calculate the first parameter update so that they cannot be stuck in a barren plateau at the start of training. In turn, this makes some of the most compact ansätze usable in practice, which was not possible before even for rather basic problems. We show empirically that variational quantum eigensolvers and quantum neural networks initialized using this strategy can be trained using a gradient based method.
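The construction is simple to state concretely: pick the first half of each shallow block at random, then set the second half to its inverse, so every block, and hence the initial circuit, evaluates to the identity. A single-qubit sketch with explicit 2x2 matrices (the paper's blocks act on many qubits, but the idea is the same):

```python
# Identity-block initialization: random first half, second half chosen as its
# inverse, so the block evaluates to the identity at the first training step.
import cmath, math, random

def rz(t):
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def ry(t):
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

random.seed(1)
theta = [random.uniform(0, 2 * math.pi) for _ in range(2)]  # random first half
# block = RY(t0) RZ(t1) * RZ(-t1) RY(-t0): second half inverts the first
block = matmul(matmul(ry(theta[0]), rz(theta[1])),
               matmul(rz(-theta[1]), ry(-theta[0])))
print(block)  # identity matrix, up to floating-point error
```

All parameters remain free to train; only their initial values are correlated, which is what limits the effective depth at the first update.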

We develop a formalism for modelling $\textit{exact}$ time dynamics in waveguide quantum electrodynamics (QED) using the real-space approach. The formalism does not assume any specific configuration of emitters and allows the study of Markovian dynamics $\textit{fully analytically}$ and non-Markovian dynamics semi-analytically with a simple numerical integration step. We use the formalism to study subradiance, superradiance, and bound states in the continuum. We discuss new phenomena such as the subdivision of collective decay rates into symmetric and anti-symmetric subsets, and non-Markovian superradiance effects that can lead to collective decay stronger than Dicke superradiance. We also discuss possible applications such as pulse shaping and coherent absorption. We thus broaden the range of applicability of real-space approaches beyond steady-state photon transport.
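The symmetric/anti-symmetric subdivision is easiest to see in the textbook two-emitter case: in the Markovian regime, two emitters coupled to a waveguide at separation $d$ split the single-emitter rate $\Gamma$ into collective rates $\Gamma_\pm = \Gamma(1 \pm \cos kd)$ for the symmetric and anti-symmetric states (this is the standard waveguide-QED result, not the paper's exact non-Markovian model):

```python
# Collective decay rates of two emitters in a waveguide (Markovian regime):
# symmetric state decays at Gamma*(1 + cos(k*d)), anti-symmetric at
# Gamma*(1 - cos(k*d)). Textbook two-emitter result for illustration.
import math

def collective_rates(gamma, k, d):
    """Return (symmetric, anti-symmetric) collective decay rates."""
    return gamma * (1 + math.cos(k * d)), gamma * (1 - math.cos(k * d))

gamma = 1.0
for kd in (0.0, math.pi / 2, math.pi):
    sym, anti = collective_rates(gamma, 1.0, kd)
    print(f"k*d = {kd:.2f}: symmetric {sym:.2f}, anti-symmetric {anti:.2f}")
# At k*d = 0 the symmetric state is superradiant (rate 2*Gamma) while the
# anti-symmetric state is dark -- a bound state in the continuum.
```

The paper's formalism recovers this Markovian limit analytically and goes beyond it, where retardation can push collective decay above the Dicke value.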

Spin qubits in silicon quantum dots are one of the most promising building blocks for large-scale quantum computers thanks to their high qubit density and compatibility with existing semiconductor technologies. High-fidelity single-qubit gates exceeding the threshold of error correction codes like the surface code have been demonstrated, while two-qubit gates have reached 98% fidelity and are improving rapidly. However, there are other types of error --- such as charge leakage and propagation --- that may occur in quantum dot arrays and which cannot be corrected by quantum error correction codes, making them potentially damaging even when their probability is small. We propose a surface code architecture for silicon quantum dot spin qubits that is robust against leakage errors by incorporating multi-electron mediator dots. Charge leakage in the qubit dots is transferred to the mediator dots via charge relaxation processes and then removed using charge reservoirs attached to the mediators. A stabiliser-check cycle, optimised for our hardware, then removes the correlations between the residual physical errors. Through simulations we obtain the surface code threshold for the charge leakage errors and show that in our architecture the damage due to charge leakage errors is reduced to a level similar to that of the usual depolarising gate noise. Spin leakage errors in our architecture are constrained to only ancilla qubits and can be removed during quantum error correction via reinitialisations of ancillae, ensuring the robustness of our architecture against spin leakage as well. Our use of elongated mediator dots creates space throughout the quantum dot array for charge reservoirs, measuring devices and control gates, providing scalability in the design.

We present an operational and model-independent framework to investigate the concept of no-backwards-in-time signalling. We define no-backwards-in-time signalling conditions, closely related to the spatial no-signalling conditions. These allow for theoretical possibilities in which the future affects the past, nevertheless without signalling backwards in time. This is analogous to non-local but no-signalling spatial correlations. Furthermore, our results shed new light on situations with indefinite causal structure and their connection to quantum theory.

We show that a system of three trapped ultracold and strongly interacting atoms in one dimension can be emulated using an optical fiber with a graded-index profile and thin metallic slabs. While the wave nature of single quantum particles leads to direct and well-known analogies with classical optics, for interacting many-particle systems with unrestricted statistics such analogues are not straightforward. Here we study the symmetries present in the fiber eigenstates by using discrete group theory and show that, by spatially modulating the incident field, one can select the atomic statistics, i.e., emulate a system of three bosons, fermions, or two bosons or fermions plus an additional distinguishable particle. We also show that the optical system is able to produce classical non-separability resembling that found in the analogous atomic system.

Given two pairs of quantum states, we want to decide if there exists a quantum channel that transforms one pair into the other. The theory of quantum statistical comparison and quantum relative majorization provides necessary and sufficient conditions for such a transformation to exist, but such conditions are typically difficult to check in practice. Here, by building upon work by Keiji Matsumoto, we relax the problem by allowing for small errors in one of the transformations. In this way, a simple sufficient condition can be formulated in terms of one-shot relative entropies of the two pairs. In the asymptotic setting where we consider sequences of state pairs, under some mild convergence conditions, this implies that the quantum relative entropy is the only relevant quantity deciding when a pairwise state transformation is possible. More precisely, if the relative entropy of the initial state pair is strictly larger than the relative entropy of the target state pair, then a transformation with exponentially vanishing error is possible. On the other hand, if the relative entropy of the target state pair is strictly larger, then any such transformation will have an error converging exponentially to one. As an immediate consequence, we show that the rate at which pairs of states can be transformed into each other is given by the ratio of their relative entropies. We discuss applications to the resource theories of athermality and coherence, where our results imply an exponential strong converse for general state interconversion.
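The asymptotic criterion described above can be summarized compactly (a paraphrase of the abstract, with the standard definition of the quantum relative entropy assumed):

```latex
% Quantum relative entropy (standard definition):
\[
D(\rho \,\|\, \sigma) \;=\; \operatorname{Tr}\!\bigl[\rho\,(\log\rho - \log\sigma)\bigr].
\]
% For initial pair (rho_1, rho_2) and target pair (sigma_1, sigma_2):
% if D(rho_1||rho_2) > D(sigma_1||sigma_2), a transformation with
% exponentially vanishing error exists; if the inequality is reversed,
% the error of any transformation converges exponentially to one.
% The optimal interconversion rate is then the ratio
\[
R \;=\; \frac{D(\rho_1 \,\|\, \rho_2)}{D(\sigma_1 \,\|\, \sigma_2)}.
\]
```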

Recent work has dramatically reduced the gate complexity required to quantum simulate chemistry by using linear-combinations-of-unitaries based methods to exploit structure in the plane wave basis Coulomb operator. Here, we show that one can achieve similar scaling even for arbitrary basis sets (which can be hundreds of times more compact than plane waves) by using qubitized quantum walks in a fashion that takes advantage of structure in the Coulomb operator, either by directly exploiting sparseness, or via a low rank tensor factorization. We provide circuits for several variants of our algorithm (which all improve over the scaling of prior methods), including one with $\widetilde{\cal O}(N^{3/2} \lambda)$ T complexity, where $N$ is the number of orbitals and $\lambda$ is the 1-norm of the chemistry Hamiltonian. We deploy our algorithms to simulate the FeMoco molecule (relevant to nitrogen fixation) and obtain circuits requiring about seven hundred times less surface code spacetime volume than prior quantum algorithms for this system, despite using a larger and more accurate active space.

Quantum devices, such as quantum simulators, quantum annealers, and quantum computers, may be exploited to solve problems beyond what is tractable with classical computers. This may be achieved as the Hilbert space available to perform such `calculations' is far larger than that which may be classically simulated. In practice, however, quantum devices have imperfections, which may limit the accessibility to the whole Hilbert space. We thus argue that the dimension of the space of quantum states that are available to a quantum device is a meaningful measure of its functionality, though unfortunately this quantity cannot be directly experimentally determined. Here we outline an experimentally realisable approach to obtaining the Hilbert space dimension required to compute the time evolution of such a device, by exploiting the thermalization dynamics of a probe qubit. This is achieved by obtaining a fluctuation-dissipation theorem for high-temperature chaotic quantum systems, which facilitates the extraction of information on the Hilbert space dimension via measurements of the decay rate and time fluctuations.

It has been shown that it is theoretically possible for there to exist higher-order quantum processes in which the operations performed by separate parties cannot be ascribed a definite causal order. Some of these processes are believed to have a physical realization in standard quantum mechanics via coherent control of the times of the operations. A prominent example is the quantum SWITCH, which was recently demonstrated experimentally. However, the interpretation of such experiments as realizations of a process with indefinite causal structure as opposed to some form of simulation of such a process has remained controversial. Where exactly are the local operations of the parties in such an experiment? On what spaces do they act given that their times are indefinite? Can we probe them directly rather than assume what they ought to be based on heuristic considerations? How can we reconcile the claim that these operations really take place, each once as required, with the fact that the structure of the presumed process implies that they cannot be part of any acyclic circuit? Here, I offer a precise answer to these questions: the input and output systems of the operations in such a process are generally nontrivial subsystems of Hilbert spaces that are tensor products of Hilbert spaces associated with systems at different times---a fact that is directly experimentally verifiable. With respect to these time-delocalized subsystems, the structure of the process is one of a circuit with a causal cycle. This provides a rigorous sense in which processes with indefinite causal structure can be said to exist within the known quantum mechanics. I also identify a whole class of isometric processes, of which the quantum SWITCH is a special case, that admit a physical realization on time-delocalized subsystems. These results unveil a novel structure within quantum mechanics, which may have important implications for physics and information processing.

Despite significant overhead reductions since its first proposal, magic state distillation is often considered to be a very costly procedure that dominates the resource cost of fault-tolerant quantum computers. The goal of this work is to demonstrate that this is not true. By writing distillation circuits in a form that separates qubits that are capable of error detection from those that are not, most logical qubits used for distillation can be encoded at a very low code distance. This significantly reduces the space-time cost of distillation, as well as the number of qubits. In extreme cases, it can cost less to distill a magic state than to perform a logical Clifford gate on full-distance logical qubits.

The notions of $k$-separability and $k$-producibility are useful and expressive tools for the characterization of entanglement in multipartite quantum systems, when a more detailed analysis would be infeasible or simply unnecessary. In this work we reveal a partial duality between them, which is valid also for their correlation counterparts. This duality can be seen from a much wider perspective when we consider the entanglement and correlation properties which are invariant under permutations of the subsystems. These properties are labeled by Young diagrams, which we endow with a refinement-like partial order to build up their classification scheme. This general treatment reveals a new property, which we call $k$-stretchability, being sensitive in a balanced way to both the maximal size of correlated (or entangled) subsystems and the minimal number of subsystems uncorrelated with (or separable from) one another.
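For context, the two notions contrasted above have standard pure-state definitions (quoted here from the general literature, not from this paper; mixed states are then $k$-separable or $k$-producible if they are convex mixtures of such pure states):

```latex
% k-separable: the state factorizes into at least k tensor factors,
\[
|\psi\rangle \;=\; |\psi_1\rangle \otimes |\psi_2\rangle \otimes \cdots \otimes |\psi_m\rangle,
\qquad m \ge k,
\]
% k-producible: the state factorizes such that each tensor factor
% is supported on at most k elementary subsystems,
\[
|\psi\rangle \;=\; \bigotimes_i |\psi_i\rangle,
\qquad \text{each } |\psi_i\rangle \text{ involving at most } k \text{ subsystems}.
\]
```

Roughly, $k$-separability bounds the number of mutually separable groups, while $k$-producibility bounds the size of the largest entangled group; the partial duality mentioned in the abstract relates these two directions.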

The operator Schmidt rank is the minimum number of terms required to express a state as a sum of elementary tensor factors. Here we provide a new proof of the fact that any bipartite mixed state with operator Schmidt rank two is separable, and can be written as a sum of two positive semidefinite matrices per site. Our proof uses results from the theory of free spectrahedra and operator systems, and illustrates the use of a connection between decompositions of quantum states and decompositions of nonnegative matrices. In the multipartite case, we prove that any Hermitian Matrix Product Density Operator (MPDO) of bond dimension two is separable, and can be written as a sum of at most four positive semidefinite matrices per site. This implies that these states can only contain classical correlations, and very few of them, as measured by the entanglement of purification. In contrast, MPDOs of bond dimension three can contain an unbounded amount of classical correlations.
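The quantity in the opening sentence can be written out explicitly (a restatement of that sentence in formulas, for a bipartite state):

```latex
% Operator Schmidt rank of a bipartite state rho_AB: the minimal r such that
\[
\rho_{AB} \;=\; \sum_{i=1}^{r} A_i \otimes B_i ,
\]
% where the A_i are operators on site A and the B_i are operators on site B
% (the individual terms need not be positive semidefinite). The result above
% states that r = 2 forces separability, with a decomposition into two
% positive semidefinite matrices per site.
```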

In Newtonian mechanics, any closed-system dynamics of a composite system in a microstate will leave all its individual subsystems in distinct microstates; however, this fails dramatically in quantum mechanics due to the existence of quantum entanglement. Here we introduce the notion of a `coherent work process', and show that it is the direct extension of a work process in classical mechanics into quantum theory. This leads to the notion of `decomposable' and `non-decomposable' quantum coherence and gives a new perspective on recent results in the theory of asymmetry as well as early analysis in the theory of classical random variables. Within the context of recent fluctuation relations, originally framed in terms of quantum channels, we show that coherent work processes play the same role as their classical counterparts, and so provide a simple physical primitive for quantum coherence in such systems. We also introduce a pure state effective potential as a tool with which to analyze the coherent component of these fluctuation relations, which leads to a notion of temperature-dependent mean coherence, provides connections with multipartite entanglement, and gives a hierarchy of quantum corrections to the classical Crooks relation in powers of inverse temperature.

We consider classical and quantum algorithms which have a duality property: roughly, either the algorithm provides some nontrivial improvement over random, or there exist many solutions which are significantly worse than random. This enables one to give guarantees that the algorithm will find such a nontrivial improvement: if few solutions exist which are much worse than random, then a nontrivial improvement is guaranteed. The quantum algorithm is based on a sudden \textit{quench} of a Hamiltonian; while the algorithm is general, we analyze it in the specific context of MAX-$K$-LIN$2$, for both even and odd $K$. The classical algorithm is a ``dequantization'' of this algorithm, obtaining the same guarantee (indeed, some results which are only conjectured in the quantum case can be proven here); however, the quantum point of view helps in analyzing the performance of the classical algorithm and might in some cases perform better.

The phase of an optical field inside a linear amplifier is widely known to diffuse with a diffusion coefficient that is inversely proportional to the photon number. The same process occurs in lasers, where it limits the intrinsic linewidth and makes the phase uncertainty difficult to calculate. The most commonly used simplification is to assume a narrow photon-number distribution for the optical field (which we call the small-noise approximation). For coherent light, this condition is determined by the average photon number. The small-noise approximation relies on (i) the input having a good signal-to-noise ratio, and (ii) that such a signal-to-noise ratio can be maintained throughout the amplification process. Here we ask: for a coherent input, how many photons must be present in the input to a quantum linear amplifier for the phase noise at the output to be amenable to a small-noise analysis? We address these questions by showing how the phase uncertainty can be obtained without recourse to the small-noise approximation. It is shown that for an ideal linear amplifier (i.e. an amplifier most favourable to the small-noise approximation), the small-noise approximation breaks down with only a few photons on average. Interestingly, when the input strength is increased to tens of photons, the small-noise approximation performs much better and the process of phase diffusion permits a small-noise analysis. This demarcates the limit of the small-noise assumption in linear amplifiers, as such an assumption is even less accurate for a nonideal amplifier.

It is well-known that any quantum channel $\mathcal{E}$ satisfies the data processing inequality (DPI), with respect to various divergences, e.g., quantum $\chi^2_{\kappa}$ divergences and quantum relative entropy. More specifically, the data processing inequality states that the divergence between two arbitrary quantum states $\rho$ and $\sigma$ does not increase under the action of any quantum channel $\mathcal{E}$. For a fixed channel $\mathcal{E}$ and a state $\sigma$, the divergence between output states $\mathcal{E}(\rho)$ and $\mathcal{E}(\sigma)$ might be strictly smaller than the divergence between input states $\rho$ and $\sigma$, which is characterized by the strong data processing inequality (SDPI). Among various input states $\rho$, the largest value of the rate of contraction is known as the SDPI constant. An important and widely studied property for classical channels is that SDPI constants tensorize. In this paper, we extend the tensorization property to the quantum regime: we establish the tensorization of SDPIs for the quantum $\chi^2_{\kappa_{1/2}}$ divergence for arbitrary quantum channels and also for a family of $\chi^2_{\kappa}$ divergences (with $\kappa \ge \kappa_{1/2}$) for arbitrary quantum-classical channels.
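The DPI, SDPI constant, and tensorization property described in the abstract can be written out for a generic divergence $D$ (a standard formulation; the specific $\chi^2_\kappa$ divergences treated in the paper fit this template):

```latex
% Data processing inequality for a channel E and divergence D:
\[
D\bigl(\mathcal{E}(\rho) \,\|\, \mathcal{E}(\sigma)\bigr) \;\le\; D(\rho \,\|\, \sigma).
\]
% SDPI constant for fixed E and sigma: the largest contraction rate over inputs,
\[
\eta_D(\mathcal{E}, \sigma) \;=\; \sup_{\rho \,:\, D(\rho\|\sigma) \ne 0}
\frac{D\bigl(\mathcal{E}(\rho) \,\|\, \mathcal{E}(\sigma)\bigr)}{D(\rho \,\|\, \sigma)} \;\le\; 1.
\]
% Tensorization (in the classical setting, and as extended here to the
% quantum regime for the stated families of chi^2 divergences): the SDPI
% constant of a product channel equals the maximum of the constituents',
\[
\eta_D(\mathcal{E}_1 \otimes \mathcal{E}_2) \;=\; \max\bigl\{\eta_D(\mathcal{E}_1),\, \eta_D(\mathcal{E}_2)\bigr\}.
\]
```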

Bell inequalities are an important tool in device-independent quantum information processing because their violation can serve as a certificate of relevant quantum properties. Probably the best-known example of a Bell inequality is due to Clauser, Horne, Shimony and Holt (CHSH); it is defined in the simplest scenario involving two dichotomic measurements, and all of its key properties are well understood. There have been many attempts to generalise the CHSH Bell inequality to higher-dimensional quantum systems; however, for most of them the maximal quantum violation---the key quantity for most device-independent applications---remains unknown. On the other hand, the constructions for which the maximal quantum violation can be computed do not preserve the natural property of the CHSH inequality that the maximal quantum violation is achieved by the maximally entangled state and measurements corresponding to mutually unbiased bases (MUBs). In this work we propose a novel family of Bell inequalities which exhibit precisely these properties and whose maximal quantum violation can be computed analytically. These inequalities involve $d$ measurement settings, each having $d$ outcomes, for an arbitrary prime number $d\geq 3$; in the simplest scenario the family recovers the CHSH Bell inequality. We then show that in the three-outcome case our Bell inequality can be used to self-test the maximally entangled state of two qutrits and three mutually unbiased bases at each site. Yet, we demonstrate that in the case of more outcomes, the maximal violation does not allow for self-testing in the standard sense, which motivates the definition of a new, weaker form of self-testing. The ability to certify high-dimensional MUBs makes these inequalities attractive from the device-independent cryptography point of view.
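The CHSH benchmark the abstract generalises can be reproduced in a few lines: with the maximally entangled state and the standard mutually unbiased measurement choices (a textbook configuration, not the paper's $d$-outcome construction), the CHSH value reaches the Tsirelson bound $2\sqrt{2}$, above the classical bound of 2:

```python
import numpy as np

# Pauli observables and the maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def corr(A, B, psi):
    """Correlator <psi| A (x) B |psi>."""
    return (psi.conj() @ np.kron(A, B) @ psi).real

A0, A1 = Z, X                        # Alice measures Z and X (mutually unbiased)
B0 = (Z + X) / np.sqrt(2)            # Bob's optimal rotated settings
B1 = (Z - X) / np.sqrt(2)

S = (corr(A0, B0, phi_plus) + corr(A0, B1, phi_plus)
     + corr(A1, B0, phi_plus) - corr(A1, B1, phi_plus))
print(S)   # 2*sqrt(2) ~ 2.828, the Tsirelson bound; classical bound is 2
```

It is exactly this pairing of a maximally entangled state with mutually unbiased measurements that the proposed $d$-outcome family is designed to preserve.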

The dissipation generated during a quasistatic thermodynamic process can be characterised by introducing a metric on the space of Gibbs states, in such a way that minimally dissipating protocols correspond to geodesic trajectories. Here, we show how to generalise this approach to open quantum systems by finding the thermodynamic metric associated with a given Lindblad master equation. The obtained metric can be understood as a perturbation of the background geometry of equilibrium Gibbs states, which is induced by the Kubo-Mori-Bogoliubov (KMB) inner product. We illustrate this construction with two paradigmatic examples: an Ising chain and a two-level system interacting with a bosonic bath with different spectral densities.
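The KMB inner product that induces the background geometry is the standard integral $\langle A, B\rangle_{\mathrm{KMB}} = \int_0^1 \mathrm{Tr}[\rho^s A^\dagger \rho^{1-s} B]\,ds$. The sketch below evaluates it numerically for a qubit Gibbs state (an illustrative sanity check, not the paper's metric construction): when $A$ commutes with $\rho$, the integral collapses to $\mathrm{Tr}[\rho A B]$, which the code verifies.

```python
import numpy as np

def kmb_inner(rho, A, B, n_nodes=40):
    """Kubo-Mori-Bogoliubov inner product
    <A, B>_KMB = int_0^1 Tr[rho^s A^dag rho^(1-s) B] ds,
    for a full-rank state rho, via Gauss-Legendre quadrature."""
    w, v = np.linalg.eigh(rho)
    def rho_pow(s):
        return (v * w ** s) @ v.conj().T
    x, wts = np.polynomial.legendre.leggauss(n_nodes)
    s, wts = 0.5 * (x + 1), 0.5 * wts          # map nodes from [-1,1] to [0,1]
    vals = [np.trace(rho_pow(si) @ A.conj().T @ rho_pow(1 - si) @ B) for si in s]
    return np.dot(wts, vals)

# Qubit Gibbs state rho = exp(-beta H)/Z for H = Z, beta = 1
Z_op = np.diag([1.0, -1.0])
X_op = np.array([[0.0, 1.0], [1.0, 0.0]])
rho = np.diag(np.exp(-1.0 * np.diag(Z_op)))
rho /= np.trace(rho)

B_op = Z_op + X_op                              # a generic Hermitian observable
val = kmb_inner(rho, Z_op, B_op).real           # A = Z commutes with rho ...
expect = np.trace(rho @ Z_op @ B_op).real       # ... so this should match
print(val, expect)
```

For non-commuting $A$ the integral genuinely differs from $\mathrm{Tr}[\rho A B]$, which is what makes the KMB product (rather than, say, the Hilbert-Schmidt one) the natural choice on Gibbs states.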
