Tensor network states provide an efficient class of states that faithfully capture strongly correlated quantum models and systems in classical statistical mechanics. While tensor networks can now be seen as becoming standard tools in the description of such complex many-body systems, close-to-optimal variational principles based on such states are less obvious to come by. In this work, we generalize a recently proposed variational uniform matrix product state algorithm for capturing one-dimensional quantum lattices in the thermodynamic limit, to the study of regular two-dimensional tensor networks with a non-trivial unit cell. A key property of the algorithm is a computational effort that scales linearly rather than exponentially in the size of the unit cell. We demonstrate the performance of our approach on the computation of the classical partition functions of the antiferromagnetic Ising model and interacting dimers on the square lattice, as well as of a quantum doped resonating valence bond state.
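
The classical partition functions mentioned here arise as contractions of regular 2D tensor networks. As a point of reference (a brute-force baseline, not the variational algorithm of this work), the sketch below computes the object such methods approximate: the leading eigenvalue of a row-to-row transfer matrix for the 2D Ising model on a strip, whose dimension grows exponentially with the strip width — exactly the blow-up that uniform-MPS contraction avoids.

```python
import numpy as np

# Illustrative baseline only (not the variational MPS algorithm of the paper):
# exact row-to-row transfer matrix for the 2D classical Ising model on an
# infinitely long strip of finite width L, open boundaries across the strip.
# The free energy per site follows from the largest eigenvalue; tensor-network
# methods replace this exponentially large matrix with an MPS of fixed rank.
def ising_strip_free_energy(L, beta, J=1.0):
    """Free energy per site of a width-L Ising strip at inverse temperature beta."""
    dim = 2 ** L
    spins = [np.array([1 if (s >> i) & 1 else -1 for i in range(L)])
             for s in range(dim)]
    T = np.zeros((dim, dim))
    for a in range(dim):
        for b in range(dim):
            vertical = J * np.dot(spins[a], spins[b])             # bonds between rows
            horizontal = J * np.dot(spins[b][:-1], spins[b][1:])  # bonds within a row
            T[a, b] = np.exp(beta * (vertical + horizontal))
    lam = np.linalg.eigvals(T).real.max()  # Perron eigenvalue is real and positive
    return -np.log(lam) / (beta * L)

# Width 1 reduces to the exactly solvable 1D Ising chain, f = -ln(2 cosh(beta J)) / beta.
f1 = ising_strip_free_energy(1, 0.5)
f4 = ising_strip_free_energy(4, 0.5)
```

The cost of this exact contraction is O(4^L); the point of the uniform-MPS approach is a cost polynomial in the bond dimension instead.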

The manipulation of neutral atoms by light is at the heart of countless scientific discoveries in the field of quantum physics over the last three decades. The level of control that has been achieved at the single-particle level within arrays of optical traps, while preserving the fundamental properties of quantum matter (coherence, entanglement, superposition), makes these technologies prime candidates to implement disruptive computation paradigms. In this paper, we review the main characteristics of these devices from atoms/qubits to application interfaces, and propose a classification of a wide variety of tasks that can already be addressed in a computationally efficient manner in the current Noisy Intermediate-Scale Quantum (NISQ) era [1]. We illustrate how applications ranging from optimization challenges to simulation of quantum systems can be explored either at the digital level (programming gate-based circuits) or at the analog level (programming Hamiltonian sequences). We give evidence of the intrinsic scalability of neutral atom quantum processors in the 100-1,000 qubit range and introduce prospects for universal fault-tolerant quantum computing and applications beyond quantum computing.

Here we present a Lindblad master equation that approximates the Redfield equation, a well-known master equation derived from first principles, without significantly compromising the range of applicability of the Redfield equation. Instead of full-scale coarse-graining, this approximation only truncates terms in the Redfield equation that average out over a time-scale typical of the quantum system. The first step in this approximation is to properly renormalize the system Hamiltonian, to symmetrize the gains and losses of the state due to the environmental coupling. In the second step, we swap out an arithmetic mean of the spectral density with a geometric one, in these gains and losses, thereby restoring complete positivity. This completely positive approximation, GAME (geometric-arithmetic master equation), is adaptable between its time-independent, time-dependent, and Floquet forms. In the exactly solvable, three-level, Jaynes-Cummings model, we find that the error of the approximate state is almost an order of magnitude lower than that obtained by solving the coarse-grained stochastic master equation. As a test-bed, we use a ferromagnetic Heisenberg spin-chain with long-range dipole-dipole coupling between up to 25 spins, and study the differences between various master equations. We find that GAME has the highest accuracy per computational resource.
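
The second step admits a compact numerical illustration. Using a few hypothetical spectral-density values γ(ωᵢ) sampled at Bohr frequencies, the geometric-mean coefficient matrix is the Gram (outer-product) matrix of the vector √γ and hence always positive semidefinite — which is what guarantees a valid Lindblad form — whereas the Redfield-style arithmetic-mean matrix need not be:

```python
import numpy as np

# Hypothetical bath spectral-density values gamma(omega_i) at three Bohr frequencies.
gamma = np.array([0.1, 0.5, 2.0])

# Redfield-style coefficient matrix: arithmetic mean of the rates.
arithmetic = 0.5 * (gamma[:, None] + gamma[None, :])

# GAME-style coefficient matrix: geometric mean of the rates.
geometric = np.sqrt(gamma[:, None] * gamma[None, :])

# geometric == outer(sqrt(gamma), sqrt(gamma)), a rank-one Gram matrix, so it is
# positive semidefinite; the arithmetic-mean matrix can have negative eigenvalues.
min_eig_arith = np.linalg.eigvalsh(arithmetic).min()
min_eig_geom = np.linalg.eigvalsh(geometric).min()
print(min_eig_arith < 0, min_eig_geom >= -1e-12)  # True True
```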

We present in detail a statistical approach for the reference-frame-independent detection and characterization of multipartite entanglement based on moments of randomly measured correlation functions. We start by discussing how the corresponding moments can be evaluated with designs, linking methods from group and entanglement theory. Then, we illustrate the strengths of the presented framework with a focus on the multipartite scenario. We discuss a condition for characterizing genuine multipartite entanglement for three qubits, and we prove criteria that allow for a discrimination of $W$-type entanglement for an arbitrary number of qubits.
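
A toy Monte Carlo sketch of the kind of moment this framework evaluates analytically with designs: for the two-qubit singlet, the correlation along random local measurement directions u, v is ⟨σ_u ⊗ σ_v⟩ = −u·v, so the second moment over uniformly random settings equals E[(u·v)²] = 1/3 — a value that is manifestly independent of any shared reference frame.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """n uniformly random directions on the Bloch sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 20000
u, v = random_unit_vectors(n), random_unit_vectors(n)

# Singlet-state correlation along directions u and v: <sigma_u x sigma_v> = -u.v
corr = -np.sum(u * v, axis=1)

# Second moment of the randomly measured correlations; the design-based
# evaluation gives exactly 1/3, independent of the reference frame.
second_moment = np.mean(corr ** 2)
print(second_moment)
```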

One of the key applications for the emerging quantum simulators is to emulate the ground state of many-body systems, as it is of great interest in various fields from condensed matter physics to materials science. Traditionally, in an analog sense, adiabatic evolution has been proposed to slowly evolve a simple Hamiltonian, initialized in its ground state, to the Hamiltonian of interest such that the final state becomes the desired ground state. Recently, variational methods have also been proposed and realized in quantum simulators for emulating the ground state of many-body systems. Here, we first provide a quantitative comparison between the adiabatic and variational methods with respect to required quantum resources on digital quantum simulators, namely the depth of the circuit and the number of two-qubit quantum gates. Our results show that the variational methods are less demanding with respect to these resources. However, they need to be hybridized with a classical optimization which can converge slowly. Therefore, as the second result of the paper, we provide two different approaches for speeding up the convergence of the classical optimizer by taking a good initial guess for the parameters of the variational circuit. We show that these approaches are applicable to a wide range of Hamiltonians and provide significant improvement in the optimization procedure.
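
The warm-start idea can be sketched with a hypothetical one-parameter example (not one of the paper's specific approaches): for the ansatz |ψ(θ)⟩ = RY(θ)|0⟩ one has ⟨Z⟩ = cos θ and ⟨X⟩ = sin θ, so minimizing ⟨H(g)⟩ for H(g) = gZ + X is a smooth 1D problem, and seeding the optimizer with the optimum of a nearby Hamiltonian shortens convergence.

```python
import numpy as np

# Toy single-parameter ansatz |psi(theta)> = RY(theta)|0>, for which
# <Z> = cos(theta) and <X> = sin(theta).  Target Hamiltonian: H(g) = g Z + X.
def energy(theta, g):
    return g * np.cos(theta) + np.sin(theta)

def grad(theta, g):
    return -g * np.sin(theta) + np.cos(theta)

def minimize(theta0, g, lr=0.2, tol=1e-6, max_iter=10000):
    """Plain gradient descent; returns the optimum and the iteration count."""
    theta, steps = theta0, 0
    while abs(grad(theta, g)) > tol and steps < max_iter:
        theta -= lr * grad(theta, g)
        steps += 1
    return theta, steps

g_target = 1.0
# Cold start: arbitrary initial parameter.
theta_cold, cold_steps = minimize(0.0, g_target)
# Warm start: reuse the optimum of the nearby Hamiltonian H(g=0.8),
# mimicking an adiabatic-inspired initial guess.
theta_warm0, _ = minimize(0.0, 0.8)
theta_warm, warm_steps = minimize(theta_warm0, g_target)
print(warm_steps <= cold_steps)  # True: the warm start converges in fewer iterations
```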

Expansion testing aims to decide whether an $n$-node graph has expansion at least $\Phi$, or is far from any such graph. We propose a quantum expansion tester with complexity $\widetilde{O}(n^{1/3}\Phi^{-1})$. This accelerates the $\widetilde{O}(n^{1/2}\Phi^{-2})$ classical tester by Goldreich and Ron [Algorithmica '02] [12], and combines the $\widetilde{O}(n^{1/3}\Phi^{-2})$ and $\widetilde{O}(n^{1/2}\Phi^{-1})$ quantum speedups by Ambainis, Childs and Liu [RANDOM '11] and Apers and Sarlette [QIC '19] [8], respectively. The latter approach builds on a quantum fast-forwarding scheme, which we improve upon by initially growing a seed set in the graph. To grow this seed set we use a so-called evolving set process from the graph clustering literature, which allows us to grow an appropriately local seed set.
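
To make precise the quantity the tester decides about, here is a brute-force sketch of edge expansion: Φ is the minimum, over vertex subsets S with |S| ≤ n/2, of the number of edges leaving S divided by |S|. The computation below is exponential in n — avoiding exactly this cost is the point of property testing.

```python
from itertools import combinations

# Brute-force edge expansion of a small graph (exponential in n; a testing
# algorithm only distinguishes "expansion >= Phi" from "far from any such graph").
def edge_expansion(n, edges):
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            cut = sum(1 for u in s for v in adj[u] if v not in s)
            best = min(best, cut / len(s))
    return best

# 6-cycle: the worst cut takes 3 consecutive vertices, giving 2 cut edges.
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
print(edge_expansion(6, cycle6))  # 2/3
```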

Many applications of practical interest rely on time evolution of Hamiltonians that are given by a sum of Pauli operators. Quantum circuits for exact time evolution of single Pauli operators are well known, and can be extended trivially to sums of commuting Paulis by concatenating the circuits of individual terms. In this paper we reduce the circuit complexity of Hamiltonian simulation by partitioning the Pauli operators into mutually commuting clusters and exponentiating the elements within each cluster after applying simultaneous diagonalization. We provide a practical algorithm for partitioning sets of Paulis into commuting subsets, and show that the proposed approach can help to significantly reduce both the number of ${CNOT}$ operations and circuit depth for Hamiltonians arising in quantum chemistry. The algorithms for simultaneous diagonalization are also applicable in the context of stabilizer states; in particular we provide novel four- and five-stage representations, each containing only a single stage of conditional gates.
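
The partitioning step can be sketched with a simple first-fit heuristic (not necessarily the paper's specific algorithm): two Pauli strings commute precisely when they differ, with neither factor equal to the identity, on an even number of qubit positions.

```python
# Greedy partition of Pauli strings into mutually commuting clusters.
# Two Pauli strings commute iff the number of positions where both factors
# are non-identity and unequal is even.  First-fit heuristic for illustration.
def commutes(p, q):
    differ = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return differ % 2 == 0

def partition_paulis(paulis):
    clusters = []
    for p in paulis:
        for cluster in clusters:
            if all(commutes(p, q) for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

terms = ["XX", "YY", "ZZ", "XI", "IZ"]
print(partition_paulis(terms))  # [['XX', 'YY', 'ZZ'], ['XI', 'IZ']]
```

Each resulting cluster can then be simultaneously diagonalized and its terms exponentiated within a single diagonalizing circuit.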

Crosstalk occurs in most quantum computing systems with more than one qubit. It can cause a variety of correlated and nonlocal $\textit{crosstalk errors}$ that can be especially harmful to fault-tolerant quantum error correction, which generally relies on errors being local and relatively predictable. Mitigating crosstalk errors requires understanding, modeling, and detecting them. In this paper, we introduce a comprehensive framework for crosstalk errors and a protocol for detecting and localizing them. We give a rigorous definition of crosstalk errors that captures a wide range of disparate physical phenomena that have been called ``crosstalk'', and a concrete model for crosstalk-free quantum processors. Errors that violate this model are crosstalk errors. Next, we give an equivalent but purely operational (model-independent) definition of crosstalk errors. Using this definition, we construct a protocol for detecting a large class of crosstalk errors in a multi-qubit processor by finding conditional dependencies between observed experimental probabilities. It is highly efficient, in the sense that the number of unique experiments required scales at most cubically, and very often quadratically, with the number of qubits. We demonstrate the protocol using simulations of 2-qubit and 6-qubit processors.
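
A minimal toy version of the conditional-dependence idea (the counts and threshold below are synthetic, not the paper's protocol): if the outcome distribution of qubit B changes depending on which operation is applied to qubit A, the crosstalk-free model is violated.

```python
# Toy crosstalk check: does the outcome distribution of qubit B depend on the
# operation applied to qubit A?  Counts are synthetic; a real protocol would
# set the detection threshold from finite-sample statistics.
def tvd(counts1, counts2):
    """Total variation distance between two empirical outcome distributions."""
    n1, n2 = sum(counts1.values()), sum(counts2.values())
    outcomes = set(counts1) | set(counts2)
    return 0.5 * sum(abs(counts1.get(o, 0) / n1 - counts2.get(o, 0) / n2)
                     for o in outcomes)

def crosstalk_detected(counts_by_setting, threshold=0.05):
    settings = list(counts_by_setting)
    return any(tvd(counts_by_setting[a], counts_by_setting[b]) > threshold
               for i, a in enumerate(settings) for b in settings[i + 1:])

# Qubit B measured 1000 times while qubit A idles vs. while A is driven.
clean = {"A_idle": {"0": 903, "1": 97}, "A_driven": {"0": 896, "1": 104}}
noisy = {"A_idle": {"0": 903, "1": 97}, "A_driven": {"0": 702, "1": 298}}
print(crosstalk_detected(clean), crosstalk_detected(noisy))  # False True
```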

For any pair of quantum states (the hypotheses), the task of binary quantum hypothesis testing is to derive the tradeoff relation between the probability $p_{01}$ of rejecting the null hypothesis and $p_{10}$ of accepting the alternative hypothesis. The case when both hypotheses are explicitly given was solved in the pioneering work by Helstrom. Here, instead, for any given null hypothesis as a pure state, we consider the worst-case alternative hypothesis that maximizes $p_{10}$ under a constraint on the distinguishability of such hypotheses. Additionally, we restrict the optimization to separable measurements, in order to describe tests that are performed locally. The case $p_{01}=0$ has been recently studied under the name of ``quantum state verification''. We show that the problem can be cast as a semi-definite program (SDP). Then we study in detail the two-qubit case. A comprehensive study in parameter space is done by solving the SDP numerically. We also obtain analytical solutions in the case of commuting hypotheses, and in the case where the two hypotheses can be orthogonal (in the latter case, we prove that the restriction to separable measurements generically prevents perfect distinguishability). With regard to quantum state verification, our work shows the existence of more efficient strategies for noisy measurement scenarios.
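
The Helstrom baseline mentioned above has a closed form: for two explicitly given hypotheses ρ and σ with priors p and 1−p, the minimum average error probability over all (possibly entangled) measurements is P_err = ½(1 − ‖pρ − (1−p)σ‖₁). A short sketch:

```python
import numpy as np

# Helstrom bound: minimum average error probability for discriminating two
# given density matrices rho and sigma with priors p and 1 - p.
def helstrom_error(rho, sigma, p=0.5):
    diff = p * rho - (1 - p) * sigma
    trace_norm = np.abs(np.linalg.eigvalsh(diff)).sum()  # sum of |eigenvalues|
    return 0.5 * (1.0 - trace_norm)

ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0)
rho_plus = np.outer(ket_plus, ket_plus)

print(helstrom_error(rho0, rho0))                 # 0.5: identical states, pure guessing
print(round(helstrom_error(rho0, rho_plus), 6))   # 0.146447 for |0> vs |+>
```

The present work departs from this setting by fixing only the null hypothesis, taking the worst-case alternative, and restricting to separable measurements.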

Quantum coherence is one of the most important resources in quantum information theory. Indeed, preventing the loss of coherence is one of the most important technical challenges obstructing the development of large-scale quantum computers. Recently, there has been substantial progress in developing mathematical resource theories of coherence, paving the way towards its quantification and control. To date however, these resource theories have only been mathematically formalised within the realms of convex-geometry, information theory, and linear algebra. This approach is limited in scope, and makes it difficult to generalise beyond resource theories of coherence for single system quantum states. In this paper we take a complementary perspective, showing that resource theories of coherence can instead be defined purely compositionally, that is, working with the mathematics of process theories, string diagrams and category theory. This new perspective offers several advantages: i) it unifies various existing approaches to the study of coherence, for example, subsuming both speakable and unspeakable coherence; ii) it provides a general treatment of the compositional multi-system setting; iii) it generalises immediately to the case of quantum channels, measurements, instruments, and beyond rather than just states; iv) it can easily be generalised to the setting where there are multiple distinct sources of decoherence; and v) it directly extends to arbitrary process theories, for example, generalised probabilistic theories and Spekkens toy model---providing the ability to operationally characterise coherence rather than relying on specific mathematical features of quantum theory for its description. More importantly, by providing a new, complementary, perspective on the resource of coherence, this work opens the door to the development of novel tools which would not be accessible from the linear algebraic mindset.

Understanding the computational power of noisy intermediate-scale quantum (NISQ) devices is of both fundamental and practical importance to quantum information science. Here, we address the question of whether error-uncorrected noisy quantum computers can provide computational advantage over classical computers. Specifically, we study noisy random circuit sampling in one dimension (or 1D noisy RCS) as a simple model for exploring the effects of noise on the computational power of a noisy quantum device. In particular, we simulate the real-time dynamics of 1D noisy random quantum circuits via matrix product operators (MPOs) and characterize the computational power of the 1D noisy quantum system by using a metric we call MPO entanglement entropy. The latter metric is chosen because it determines the cost of classical MPO simulation. We numerically demonstrate that for the two-qubit gate error rates we considered, there exists a characteristic system size above which adding more qubits does not bring about an exponential growth of the cost of classical MPO simulation of 1D noisy systems. Specifically, we show that above the characteristic system size, there is an optimal circuit depth, independent of the system size, where the MPO entanglement entropy is maximized. Most importantly, the maximum achievable MPO entanglement entropy is bounded by a constant that depends only on the gate error rate, not on the system size. We also provide a heuristic analysis to get the scaling of the maximum achievable MPO entanglement entropy as a function of the gate error rate. The obtained scaling suggests that although the cost of MPO simulation does not increase exponentially in the system size above a certain characteristic system size, it does increase exponentially as the gate error rate decreases, possibly making classical simulation practically not feasible even with state-of-the-art supercomputers.
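
The metric in question is the operator-space analogue of ordinary entanglement entropy. A minimal sketch for the smallest nontrivial case (two qubits rather than an actual MPO): vectorize the density matrix, normalize it as a state in the doubled operator space, and take the entanglement entropy across the cut between the qubits. For an MPO, this quantity maximized over cuts controls the bond dimension needed for classical simulation.

```python
import numpy as np

# Operator-space entanglement entropy of a two-qubit density matrix across the
# A|B cut: Schmidt-decompose the vectorized operator and take the entropy of
# the normalized squared Schmidt coefficients (in bits).
def operator_entanglement_entropy(rho):
    # indices (a1, b1, a2, b2) -> group (a1, a2) vs (b1, b2)
    vec = rho.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    s = np.linalg.svd(vec, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)
    p = p[p > 1e-15]
    return -np.sum(p * np.log2(p))

# A product operator has zero operator entanglement ...
rho_prod = np.kron(np.diag([0.7, 0.3]), np.array([[0.5, 0.5], [0.5, 0.5]]))
# ... while a Bell state has the maximal value of 2 bits.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(bell, bell)
print(operator_entanglement_entropy(rho_prod))          # ~0.0
print(round(operator_entanglement_entropy(rho_bell), 6))  # 2.0
```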

In spite of their potential usefulness, Wigner functions for systems with SU(1,1) symmetry have not been explored thus far. We address this problem from a physically motivated perspective, with an eye towards applications in modern metrology. Starting from two independent modes, and after getting rid of the irrelevant degrees of freedom, we derive in a consistent way a Wigner distribution for SU(1,1). This distribution appears as the expectation value of the displaced parity operator, which suggests a direct way to experimentally sample it. We show how this formalism works in some relevant examples. $\textbf{Dedication}$: While this manuscript was under review, we learnt with great sadness of the untimely passing of our colleague and friend Jonathan Dowling. Through his outstanding scientific work, his kind attitude, and his inimitable humor, he leaves behind a rich legacy for all of us. Our work on SU(1,1) came as a result of long conversations during his frequent visits to Erlangen. We dedicate this paper to his memory.

Quantum correlations which violate a Bell inequality are presumed to power better-than-classical protocols for solving communication complexity problems (CCPs). How general is this statement? We show that violations of correlation-type Bell inequalities allow advantages in CCPs, when communication protocols are tailored to emulate the Bell no-signaling constraint (by not communicating measurement settings). Abandonment of this restriction on classical models allows us to disprove the main result of, inter alia, [22]; we show that quantum correlations obtained from these communication strategies assisted by a small quantum violation of the CGLMP Bell inequalities do not imply advantages in any CCP in the input/output scenario considered in the reference. More generally, we show that there exist quantum correlations, with nontrivial local marginal probabilities, which violate the $I_{3322}$ Bell inequality, but do not enable a quantum advantage in any CCP, regardless of the communication strategy employed in the quantum protocol, for a scenario with a fixed number of inputs and outputs.

We formulate an optimization problem of Hamiltonian design based on the variational principle. Given a variational ansatz for a Hamiltonian, we construct a loss function to be minimised as a weighted sum of relevant Hamiltonian properties, thereby specifying the search query. Using the fractional quantum Hall effect as a test system, we illustrate how the framework can be used to determine a generating Hamiltonian of a finite-size model wavefunction (Moore-Read Pfaffian and Read-Rezayi states), find optimal conditions for an experiment, or "extrapolate" given wavefunctions in a certain universality class from smaller to larger system sizes. We also discuss how the search for approximate generating Hamiltonians may be used to find simpler and more realistic models implementing the given exotic phase of matter via experimentally accessible interaction terms.

Within the context of hybrid quantum-classical optimization, gradient-descent-based optimizers typically require the evaluation of expectation values with respect to the outcome of parameterized quantum circuits. In this work, we explore the consequences of the prior observation that estimation of these quantities on quantum hardware results in a form of $\textit{stochastic}$ gradient descent optimization. We formalize this notion, which allows us to show that in many relevant cases, including VQE, QAOA and certain quantum classifiers, estimating expectation values with $k$ measurement outcomes results in optimization algorithms whose convergence properties can be rigorously well understood, for any value of $k$. In fact, even using single measurement outcomes for the estimation of expectation values is sufficient. Moreover, in many settings the required gradients can be expressed as linear combinations of expectation values -- originating, e.g., from a sum over local terms of a Hamiltonian, a parameter-shift rule, or a sum over data-set instances -- and we show that in these cases $k$-shot expectation value estimation can be combined with sampling over terms of the linear combination, to obtain ``doubly stochastic'' gradient descent optimizers. For all algorithms we prove convergence guarantees, providing a framework for the derivation of rigorous optimization results in the context of near-term quantum devices. Additionally, we explore these methods numerically on benchmark VQE, QAOA and quantum-enhanced machine learning tasks and show that treating the stochastic settings as hyper-parameters allows for state-of-the-art results with significantly fewer circuit executions and measurements.
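
As a toy illustration of this $k$-shot regime, the following is a minimal sketch (under simplifying assumptions, not the paper's implementation) of parameter-shift gradient descent on the one-parameter cost $f(\theta)=\langle Z\rangle=\cos\theta$, where each expectation value is replaced by a simulated $k$-shot estimate; the cost, learning rate, and shot count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_shot_estimate(theta, k):
    """Estimate <Z> for the state cos(theta/2)|0> + sin(theta/2)|1>
    from k simulated single-shot measurements (+1/-1 outcomes)."""
    p0 = np.cos(theta / 2) ** 2          # probability of outcome +1
    shots = rng.choice([1.0, -1.0], size=k, p=[p0, 1 - p0])
    return shots.mean()

def stochastic_gradient(theta, k):
    """Parameter-shift rule, with each expectation value replaced
    by a k-shot estimate; unbiased estimator of -sin(theta)."""
    return 0.5 * (k_shot_estimate(theta + np.pi / 2, k)
                  - k_shot_estimate(theta - np.pi / 2, k))

# Minimise f(theta) = cos(theta); the minimum sits at theta = pi.
theta, eta, k = 0.3, 0.1, 1              # even k = 1 shot per estimate
for _ in range(2000):
    theta -= eta * stochastic_gradient(theta, k)
```

Even with single-shot gradient estimates, the iterate drifts toward the minimiser $\theta=\pi$ and fluctuates around it, consistent with the convergence behaviour described above.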

Unitary $t$-designs are the bread and butter of quantum information theory and beyond. An important issue in practice is that of efficiently constructing good approximations of such unitary $t$-designs. Building on results by Aubrun (Comm. Math. Phys. 2009), we prove that sampling $d^t\mathrm{poly}(t,\log d, 1/\epsilon)$ unitaries from an exact $t$-design provides with positive probability an $\epsilon$-approximate $t$-design, if the error is measured in the one-to-one norm. As an application, we give a randomized construction of a quantum encryption scheme that has roughly the same key size and security as the quantum one-time pad, but possesses the additional property of being non-malleable against adversaries without quantum side information.

A fundamental pursuit in complexity theory concerns reducing worst-case problems to average-case problems. There exist complexity classes, such as PSPACE, that admit worst-case to average-case reductions. However, for many other classes, such as NP, the evidence so far is typically negative, in the sense that the existence of such reductions would cause collapses of the polynomial hierarchy (PH). Basing cryptographic primitives, e.g., the average-case hardness of inverting one-way permutations, on NP-completeness is a particularly intriguing instance. As there is evidence showing that classical reductions from NP-hard problems to breaking these primitives result in PH collapses, it seems unlikely that cryptographic primitives can be based on NP-hard problems. Nevertheless, these results do not rule out the possibility of quantum reductions. In this work, we initiate a study of the quantum analogues of these questions. Aside from formalizing basic notions of quantum reductions and demonstrating powers of quantum reductions by examples of separations, our main result shows that if NP-complete problems reduce to inverting one-way permutations using certain types of quantum reductions, then $\textsf{coNP} \subseteq \textsf{QIP}(2)$.

Unitary braiding operators can be used as robust entangling quantum gates. We introduce a solution-generating technique to solve the $(d,m,l)$-generalized Yang-Baxter equation, for $m/2\leq l \leq m$, which allows us to systematically construct such braiding operators. This is achieved by using partition algebras, a generalization of the Temperley-Lieb algebra encountered in statistical mechanics. We obtain families of unitary and non-unitary braiding operators that generate the full braid group. Explicit examples are given for 2-, 3-, and 4-qubit systems, including the classification of the entangled states generated by these operators based on Stochastic Local Operations and Classical Communication.

We still do not have perfect decoders for topological codes that can satisfy all needs of different experimental setups. Recently, a few neural-network-based decoders have been studied, with the motivation that they can adapt to a wide range of noise models and can easily run on dedicated chips without a full-fledged computer. The latter feature might lead to fast speed and the ability to operate at low temperatures. However, a question which has not been addressed in previous works is whether neural network decoders can handle 2D topological codes with large distances. In this work, we provide a positive answer for the toric code [1]. The structure of our neural network decoder is inspired by the renormalization group decoder [2,3]. With a fairly strict policy on training time, when the bit-flip error rate is lower than $9\%$ and syndrome extraction is perfect, the neural network decoder performs better as the code distance increases. With a less strict policy, we find it is not hard for the neural decoder to achieve a performance close to that of the minimum-weight perfect matching algorithm. The numerical simulation is done up to code distance $d=64$. Last but not least, we describe and analyze a few failed approaches. They guide us to the final design of our neural decoder, but also serve as a caution when we gauge the versatility of stock deep neural networks. The source code of our neural decoder can be found at [4].

The theory of relativity associates a proper time with each moving object via its world line. In quantum theory, however, such well-defined trajectories are forbidden. After introducing a general characterisation of quantum clocks, we demonstrate that, in the weak-field, low-velocity limit, all ``good'' quantum clocks experience time dilation as dictated by general relativity when their state of motion is classical (i.e., Gaussian). For nonclassical states of motion, on the other hand, we find that quantum interference effects may give rise to a significant discrepancy between the proper time and the time measured by the clock. The universality of this discrepancy implies that it is not simply a systematic error, but rather a quantum modification to the proper time itself. We also show how the clock's delocalisation leads to a larger uncertainty in the time it measures – a consequence of the unavoidable entanglement between the clock time and its center-of-mass degrees of freedom. We demonstrate how this lost precision can be recovered by performing a measurement of the clock's state of motion alongside its time reading.

The Kochen-Specker theorem is a fundamental result in quantum foundations that has spawned massive interest since its inception. We show that within every Kochen-Specker graph, there exist interesting subgraphs, which we term $01$-gadgets, that capture the essential contradiction necessary to prove the Kochen-Specker theorem, i.e., every Kochen-Specker graph contains a $01$-gadget and from every $01$-gadget one can construct a proof of the Kochen-Specker theorem. Moreover, we show that the $01$-gadgets form a fundamental primitive that can be used to formulate state-independent and state-dependent statistical Kochen-Specker arguments as well as to give simple constructive proofs of an ``extended'' Kochen-Specker theorem first considered by Pitowsky in [22].

We study the practical performance of quantum-inspired algorithms for recommendation systems and linear systems of equations. These algorithms were shown to have an exponential asymptotic speedup compared to previously known classical methods for problems involving low-rank matrices, but with complexity bounds that exhibit a hefty polynomial overhead compared to quantum algorithms. This raised the question of whether these methods were actually useful in practice. We conduct a theoretical analysis aimed at identifying their computational bottlenecks, then implement and benchmark the algorithms on a variety of problems, including applications to portfolio optimization and movie recommendations. On the one hand, our analysis reveals that the performance of these algorithms is better than the theoretical complexity bounds would suggest. On the other hand, their performance as seen in our implementation degrades noticeably as the rank and condition number of the input matrix are increased. Overall, our results indicate that quantum-inspired algorithms can perform well in practice provided that stringent conditions are met: low rank, low condition number, and very large dimension of the input matrix. By contrast, practical datasets are often sparse and high-rank, precisely the type that can be handled by quantum algorithms.

Please see this blog post for a summary of the work.
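
The basic primitive behind these quantum-inspired algorithms is length-squared ($\ell_2$-norm) importance sampling, which yields, for example, unbiased inner-product estimates from a few sampled entries. A minimal sketch of that primitive (illustrative, not the benchmarked implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_row(A):
    """Draw a row index of A with probability proportional to the
    squared l2-norm of that row (length-squared sampling)."""
    p = np.sum(A**2, axis=1) / np.sum(A**2)
    return rng.choice(A.shape[0], p=p)

def approx_inner(u, v, s):
    """Unbiased estimate of <u, v> from s entries of v, sampled with
    probability u_i^2 / ||u||^2 and importance-weighted accordingly."""
    sq = np.dot(u, u)
    p = u**2 / sq
    idx = rng.choice(len(u), size=s, p=p)
    return np.mean(v[idx] / u[idx]) * sq
```

The estimator is unbiased, since $\mathbb{E}\big[(v_i/u_i)\,\|u\|^2\big] = \sum_i (u_i^2/\|u\|^2)(v_i/u_i)\|u\|^2 = \langle u, v\rangle$, but its variance grows with the spread of the entries, consistent with the degradation at high rank and condition number reported above.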

The Schwinger model (quantum electrodynamics in 1+1 dimensions) is a testbed for the study of quantum gauge field theories. We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings. In particular, we perform a tight analysis of low-order Trotter formula simulations of the Schwinger model, using recently derived commutator bounds, and give upper bounds on the resources needed for simulations in both scenarios. In lattice units, we find a Schwinger model on $N/2$ physical sites with coupling constant $x^{-1/2}$ and electric field cutoff $x^{-1/2}\Lambda$ can be simulated on a quantum computer for time $2xT$ using a number of $T$-gates or CNOTs scaling as $\widetilde{O}( N^{3/2} T^{3/2} \sqrt{x} \Lambda )$ for fixed operator error. This scaling with the truncation $\Lambda$ is better than that expected from algorithms such as qubitization or QDRIFT. Furthermore, we give scalable measurement schemes and algorithms to estimate observables, which we cost in both the NISQ and fault-tolerant settings by assuming a simple target observable, the mean pair density. Finally, we bound the root-mean-square error in estimating this observable via simulation as a function of the diamond distance between the ideal and actual CNOT channels. This work provides a rigorous analysis of simulating the Schwinger model, while also providing benchmarks against which subsequent simulation algorithms can be tested.

Graph states, and the entanglement they possess, are central to modern quantum computing and communications architectures. Local complementation – the graph operation that links all local-Clifford equivalent graph states – allows us to classify all stabiliser states by their entanglement. Here, we study the structure of the orbits generated by local complementation, mapping them up to 9 qubits and revealing a rich hidden structure. We provide programs to compute these orbits, along with our data for each of the $587$ orbits up to $9$ qubits and a means to visualise them. We find direct links between the connectivity of certain orbits and the entanglement properties of their component graph states. Furthermore, we observe correlations between graph-theoretical orbit properties, such as diameter and colourability, and Schmidt measure and preparation complexity, and suggest potential applications. It is well known that graph theory and quantum entanglement have strong interplay – our exploration deepens this relationship, providing new tools with which to probe the nature of entanglement.
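
Local complementation itself is a simple operation on the adjacency matrix over GF(2): complement the subgraph induced by the neighbourhood of the chosen vertex. A minimal sketch (not the authors' released programs), using the textbook star-to-complete-graph example:

```python
import numpy as np

def local_complement(adj, v):
    """Local complementation at vertex v: toggle every edge between
    two neighbours of v (adjacency-matrix arithmetic over GF(2))."""
    n = adj[v]                              # 0/1 neighbourhood indicator of v
    out = (adj + np.outer(n, n)) % 2        # complement the neighbourhood subgraph
    np.fill_diagonal(out, 0)                # remove the self-loops introduced above
    return out

# Star graph on 4 vertices (centre 0); locally complementing at the centre
# yields the complete graph K4 -- the familiar star/complete-graph
# local-Clifford equivalence of GHZ-type graph states.
star = np.zeros((4, 4), dtype=int)
star[0, 1:] = star[1:, 0] = 1
k4 = local_complement(star, 0)
```

Repeatedly applying this operation at every vertex and collecting the distinct graphs reached enumerates the orbit of a given graph state; note that local complementation at a fixed vertex is an involution, so applying it twice returns the original graph.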

The error probability distribution associated with a given Clifford measurement circuit is described exactly in terms of the circuit error-equivalence group, or the circuit subsystem code previously introduced by Bacon, Flammia, Harrow, and Shi. This gives a prescription for maximum-likelihood decoding with a given measurement circuit. Marginal distributions for subsets of circuit errors are also analyzed; these generate a family of related asymmetric LDPC codes of varying degeneracy. More generally, such a family is associated with any quantum code. Implications for decoding highly degenerate quantum codes are discussed.

Error detection and correction are necessary prerequisites for any scalable quantum computing architecture. Given the inevitability of unwanted physical noise in quantum systems and the propensity for errors to spread as computations proceed, computational outcomes can become substantially corrupted. This observation applies regardless of the choice of physical implementation. In the context of photonic quantum information processing, there has recently been much interest in $\textit{passive}$ linear optics quantum computing, which includes boson-sampling, as this model eliminates the highly challenging requirements for feed-forward via fast, active control. That is, these systems are $\textit{passive}$ by definition. In typical scenarios, error detection and correction techniques are inherently $\textit{active}$, making them incompatible with this model and arousing suspicion that physical error processes may be an insurmountable obstacle. Here we explore a photonic error-detection technique, based on W-state encoding of photonic qubits, which is entirely passive, based on post-selection, and compatible with these near-term photonic architectures of interest. We show that this W-state redundant encoding technique enables the suppression of dephasing noise on photonic qubits via simple fan-out style operations, implemented by optical Fourier transform networks, which can be readily realised today. The protocol effectively maps dephasing noise into heralding failures, with zero failure probability in the ideal no-noise limit. We present our scheme in the context of a single photonic qubit passing through a noisy communication or quantum memory channel; it has not yet been generalised to full quantum computation.

Violation of a noncontextuality inequality, or the phenomenon referred to as `quantum contextuality', is a fundamental feature of quantum theory. In this article, we derive a novel family of noncontextuality inequalities, along with their sum-of-squares decompositions, in the simplest (odd-cycle) sequential-measurement scenario capable of demonstrating Kochen-Specker contextuality. The sum-of-squares decompositions allow us to obtain the maximal quantum violation of these inequalities and a set of algebraic relations necessarily satisfied by any state and measurements achieving it. With their help, we prove that our inequalities can be used for self-testing of a three-dimensional quantum state and measurements. Remarkably, the presented self-testing results rely on weaker assumptions than the ones considered in Kochen-Specker contextuality.

According to our current conception of physics, any valid physical theory is supposed to describe the objective evolution of a unique external world. However, this condition is challenged by quantum theory, which suggests that physical systems should not always be understood as having objective properties which are simply revealed by measurement. Furthermore, as argued below, several other conceptual puzzles in the foundations of physics and related fields point to limitations of our current perspective and motivate the exploration of an alternative: to start with the first-person (the observer) rather than the third-person perspective (the world). In this work, I propose a rigorous approach of this kind on the basis of algorithmic information theory. It is based on a single postulate: that $\textit{universal induction}$ determines the chances of what any observer sees next. That is, instead of a world or physical laws, it is the local state of the observer alone that determines those probabilities. Surprisingly, despite its solipsistic foundation, I show that the resulting theory recovers many features of our established physical worldview: it predicts that it appears to observers $\textit{as if there was an external world}$ that evolves according to simple, computable, probabilistic laws. In contrast to the standard view, objective reality is not assumed on this approach but rather provably emerges as an asymptotic statistical phenomenon. The resulting theory dissolves puzzles like cosmology's Boltzmann brain problem, makes concrete predictions for thought experiments like the computer simulation of agents, and suggests novel phenomena such as ``probabilistic zombies'' governed by observer-dependent probabilistic chances. It also suggests that some basic phenomena of quantum theory (Bell inequality violation and no-signalling) might be understood as consequences of this framework.

Central to entanglement theory is the characterization of local transformations among pure multipartite states. As a first step towards such a characterization, one needs to identify those states which can be transformed into each other via local operations with a non-vanishing probability. The classes obtained in this way are called SLOCC classes. They can be categorized into three disjoint types: the null-cone, the polystable states, and the strictly semistable states. Whereas the former two are well characterized, not much is known about strictly semistable states. We derive a criterion for the existence of the latter. In particular, we show that there exists a strictly semistable state if and only if there exist two polystable states whose orbits have different dimensions. We illustrate the usefulness of this criterion by applying it to tripartite states where one of the systems is a qubit. Moreover, we scrutinize all SLOCC classes of these systems and derive a complete characterization of the corresponding orbit types. We present representatives of strictly semistable classes and show to which polystable state they converge via local regular operators.
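As a reminder of the underlying definition (standard material, not specific to this work): two $n$-partite pure states belong to the same SLOCC class exactly when they are related by invertible local operators,

```latex
|\phi\rangle \;=\; (A_1 \otimes A_2 \otimes \cdots \otimes A_n)\,|\psi\rangle,
\qquad A_k \in GL(d_k,\mathbb{C}),
```

i.e. each party acts with a regular (invertible) matrix on its own subsystem; the orbit of a state under this group action is what the abstract's orbit-dimension criterion refers to.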

In this paper we initiate the study of entanglement-breaking (EB) superchannels. These are processes that always yield separable maps when acting on one side of a bipartite completely positive (CP) map. EB superchannels are a generalization of the well-known EB channels. We give several equivalent characterizations of EB supermaps and superchannels. Unlike in the channel case, we find that not every EB superchannel can be implemented as a measure-and-prepare superchannel. We also demonstrate that many EB superchannels can be superactivated, in the sense that they can output non-separable channels when wired in series. We then introduce the notions of CPTP- and CP-complete images of a superchannel, which capture deterministic and probabilistic channel convertibility, respectively. This allows us to characterize the power of EB superchannels for generating CP maps in different scenarios, and it reveals some fundamental differences between channels and superchannels. Finally, we relax the definition of separable channels to include $(p,q)$-non-entangling channels, which are bipartite channels that cannot generate entanglement using $p$- and $q$-dimensional ancillary systems. By introducing and investigating $k$-EB maps, we construct examples of $(p,q)$-EB superchannels that are not fully entanglement breaking. Partial results on the characterization of $(p,q)$-EB superchannels are also provided.
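For context, recall the channel-level notion being generalized here (a standard result, not a claim of this abstract): a channel $\Lambda$ is entanglement breaking if and only if it admits a measure-and-prepare form,

```latex
\Lambda(\rho) \;=\; \sum_k \operatorname{Tr}\!\left[M_k\,\rho\right]\,\sigma_k,
```

with a POVM $\{M_k\}$ and fixed output states $\sigma_k$; equivalently, $(\mathrm{id}\otimes\Lambda)(\rho_{AB})$ is separable for every bipartite input. The abstract's observation that some EB superchannels admit no measure-and-prepare implementation marks a departure from this equivalence.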
