Measurement-based quantum computing (MBQC) is a promising alternative to traditional circuit-based quantum computing predicated on the construction and measurement of cluster states. Recent work has demonstrated that MBQC provides a more general framework for fault-tolerance that extends beyond foliated quantum error-correcting codes. We systematically expand on that paradigm, and use combinatorial tiling theory to study and construct new examples of fault-tolerant cluster states derived from crystal structures. Included among these is a robust self-dual cluster state requiring only degree-$3$ connectivity. We benchmark several of these cluster states in the presence of circuit-level noise, and find a variety of promising candidates whose performance depends on the specifics of the noise model. By eschewing the distinction between data and ancilla, this malleable framework lays a foundation for the development of creative and competitive fault-tolerance schemes beyond conventional error-correcting codes.

The theory of cellular automata in operational probabilistic theories is developed. We start by introducing the composition of infinitely many elementary systems, and then use this notion to define update rules for such infinite composite systems. The notion of causal influence is introduced, and its relation with the usual property of signalling is discussed. We then introduce homogeneity, namely the property of an update rule to evolve every system in the same way, and prove that systems evolving under a homogeneous rule always correspond to vertices of a Cayley graph. Next, we define the notion of locality for update rules. Cellular automata are then defined as homogeneous and local update rules. Finally, we prove a general version of the wrapping lemma, which connects cellular automata on different Cayley graphs sharing some small-scale structure of neighbourhoods.

"Platonic solids" is the name traditionally given to the five regular convex polyhedra, namely the tetrahedron, the octahedron, the cube, the icosahedron and the dodecahedron. Perhaps strongly boosted by the towering historical influence of their namesake, these beautiful solids have, in well over two millennia, transcended traditional boundaries and entered the stage in a range of disciplines. Examples include natural philosophy and mathematics from classical antiquity, scientific modeling during the days of the European scientific revolution and visual arts ranging from the Renaissance to modernity. Motivated by mathematical beauty and a rich history, we consider the Platonic solids in the context of modern quantum mechanics. Specifically, we construct Bell inequalities whose maximal violations are achieved with measurements pointing to the vertices of the Platonic solids. These Platonic Bell inequalities are constructed only by inspecting the visible symmetries of the Platonic solids. We also construct Bell inequalities for more general polyhedra and find a Bell inequality that is more robust to noise than the celebrated Clauser-Horne-Shimony-Holt Bell inequality. Finally, we elaborate on the tension between mathematical beauty, which was our initial motivation, and experimental friendliness, which is necessary in all empirical sciences.
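As a quick numerical companion to the CHSH benchmark mentioned above (a minimal sketch using the standard textbook singlet correlations and measurement angles, not the paper's Platonic construction):

```python
import math

def singlet_correlation(a, b):
    """Correlation E(a, b) = -cos(a - b) for spin measurements
    along angles a and b on the two-qubit singlet state."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    E = singlet_correlation
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard optimal angles give |S| = 2*sqrt(2) (Tsirelson's bound),
# exceeding the classical bound of 2.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
```

Any inequality claimed to be more noise-robust than CHSH is measured against this $2\sqrt{2}$ vs. $2$ gap.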

Quantum metrology theory has up to now focused on the resolution gains obtainable thanks to the entanglement among $N$ probes. Typically, a quadratic gain in resolution is achievable, going from the $1/\sqrt{N}$ of the central limit theorem to the $1/N$ of the Heisenberg bound. Here we focus instead on quantum squeezing and provide a unified framework for metrology with squeezing, showing that, similarly, one can generally attain a quadratic gain when comparing the resolution achievable by a squeezed probe to the best $N$-probe classical strategy achievable with the same energy. Namely, here we give a quantification of the Heisenberg squeezing bound for arbitrary estimation strategies that employ squeezing. Our theory recovers known results (e.g. in quantum optics and spin squeezing), but it uses the general theory of squeezing and holds for arbitrary quantum systems.
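The $1/\sqrt{N}$ classical scaling quoted above can be seen in a toy simulation (a sketch of the central-limit baseline only; the quadratic squeezing gain is the paper's contribution and is not reproduced here):

```python
import math, random, statistics

random.seed(0)

def classical_estimate(phi, n):
    """Average n independent noisy readings of a phase phi
    (unit-variance Gaussian noise): error scales as 1/sqrt(n)."""
    return statistics.fmean(random.gauss(phi, 1.0) for _ in range(n))

def empirical_error(n, trials=2000):
    """Empirical standard deviation of the estimator over many trials."""
    return statistics.stdev(classical_estimate(0.0, n) for _ in range(trials))

# Standard quantum limit: quadrupling N halves the error (1/sqrt(N));
# a Heisenberg-limited strategy (error ~ 1/N) would quarter it instead.
ratio = empirical_error(100) / empirical_error(400)
```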

We present a methodology to price options and portfolios of options on a gate-based quantum computer using amplitude estimation, an algorithm which provides a quadratic speedup compared to classical Monte Carlo methods. The options that we cover include vanilla options, multi-asset options and path-dependent options such as barrier options. We put an emphasis on the implementation of the quantum circuits required to build the input states and operators needed by amplitude estimation to price the different option types. Additionally, we show simulation results to highlight how the circuits that we implement price the different option contracts. Finally, we examine the performance of option pricing circuits on quantum hardware using the IBM Q Tokyo quantum device. We employ a simple, yet effective, error mitigation scheme that allows us to significantly reduce the errors arising from noisy two-qubit gates.
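The classical baseline being quadratically outperformed is plain Monte Carlo. A minimal sketch for a vanilla European call under a lognormal model (illustrative parameters, not the paper's circuits; the closed-form Black-Scholes price is included only as a reference):

```python
import math, random

random.seed(42)

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes European call price (closed form, for reference)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

def mc_call(s0, k, r, sigma, t, paths):
    """Classical Monte Carlo: the error shrinks as 1/sqrt(paths);
    amplitude estimation improves this to roughly 1/paths."""
    disc = math.exp(-r * t)
    total = 0.0
    for _ in range(paths):
        z = random.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)  # call payoff
    return disc * total / paths

price = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, 100_000)
```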

We propose a variational quantum algorithm to prepare ground states of 1D lattice quantum Hamiltonians specifically tailored for programmable quantum devices where interactions among qubits are mediated by Quantum Data Buses (QDB). For trapped ions with the axial Center-Of-Mass (COM) vibrational mode as single QDB, our scheme uses resonant sideband optical pulses as resource operations, which are potentially faster than off-resonant couplings and thus less prone to decoherence. The disentangling of the QDB from the qubits by the end of the state preparation comes as a byproduct of the variational optimization. We numerically simulate the ground state preparation for the Su-Schrieffer-Heeger model in ions and show that our strategy is scalable while being tolerant to finite temperatures of the COM mode.

We consider the evolution of an arbitrary quantum dynamical semigroup of a finite-dimensional quantum system under frequent kicks, where each kick is a generic quantum operation. We develop a generalization of the Baker-Campbell-Hausdorff formula which allows us to reformulate such pulsed dynamics as a continuous one. This reveals an adiabatic evolution. We obtain a general type of quantum Zeno dynamics, which unifies all known manifestations in the literature as well as describing new types.

We derive a necessary and sufficient condition for the possibility of achieving the Heisenberg scaling in general adaptive multi-parameter estimation schemes in the presence of Markovian noise. In situations where the Heisenberg scaling is achievable, we provide a semidefinite program to identify the optimal quantum error correcting (QEC) protocol that yields the best estimation precision. We overcome the technical challenges associated with the potential incompatibility of the measurements optimally extracting information on different parameters by utilizing the Holevo Cramér-Rao (HCR) bound for pure states. We provide examples of significant advantages offered by our joint-QEC protocols, which sense all the parameters utilizing a single error-corrected subspace, over separate-QEC protocols where each parameter is effectively sensed in a separate subspace.

We present a detailed circuit implementation of Szegedy's quantization of the Metropolis-Hastings walk. This quantum walk is usually defined with respect to an oracle. We find that a direct implementation of this oracle requires costly arithmetic operations. We thus reformulate the quantum walk, circumventing its implementation altogether by closely following the classical Metropolis-Hastings walk. We also present heuristic quantum algorithms that use the quantum walk in the context of discrete optimization problems and numerically study their performances. Our numerical results indicate polynomial quantum speedups in heuristic settings.
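The classical walk being quantized can be sketched in a few lines. A toy discrete-optimization example with a uniform symmetric proposal (a sketch of the generic Metropolis-Hastings rule, not the paper's circuit construction):

```python
import math, random

random.seed(1)

def metropolis_hastings(cost, states, beta, steps):
    """Classical Metropolis-Hastings walk over a discrete state space:
    propose a uniformly random state, accept with probability
    min(1, exp(-beta * (cost(new) - cost(current))))."""
    x = random.choice(states)
    best = x
    for _ in range(steps):
        y = random.choice(states)  # symmetric proposal
        accept = math.exp(-beta * (cost(y) - cost(x)))
        if random.random() < min(1.0, accept):
            x = y
        if cost(x) < cost(best):
            best = x
    return best

# Toy optimization: the minimum of (x - 3)^2 over {0, ..., 7} is x = 3.
best = metropolis_hastings(lambda x: (x - 3) ** 2, list(range(8)), beta=1.0, steps=5000)
```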

We revisit the task of visible compression of an ensemble of quantum states with entanglement assistance in the one-shot setting. The protocols achieving the best compression use many more qubits of shared entanglement than the number of qubits in the states in the ensemble. Other compression protocols, with potentially larger communication cost, have entanglement cost bounded by the number of qubits in the given states. This motivates the question as to whether entanglement is truly necessary for compression, and if so, how much of it is needed. Motivated by questions in communication complexity, we lift certain restrictions that are placed on compression protocols in tasks such as state-splitting and channel simulation. We show that an ensemble of the form designed by Jain, Radhakrishnan, and Sen (ICALP'03) saturates the known bounds on the sum of communication and entanglement costs, even with the relaxed compression protocols we study. The ensemble and the associated one-way communication protocol have several remarkable properties. The ensemble is incompressible by more than a constant number of qubits without shared entanglement, even when constant error is allowed. Moreover, in the presence of shared entanglement, the communication cost of compression can be arbitrarily smaller than the entanglement cost. The quantum information cost of the protocol can thus be arbitrarily smaller than the cost of compression without shared entanglement. The ensemble can also be used to show the impossibility of reducing, via compression, the shared entanglement used in two-party protocols for computing Boolean functions.

Coherent and anticoherent states of spin systems up to spin $j=2$ are known to be optimal in order to detect rotations by a known angle but unknown rotation axis. These optimal quantum rotosensors are characterized by minimal fidelity, given by the overlap of a state before and after a rotation, averaged over all directions in space. We calculate a closed-form expression for the average fidelity in terms of anticoherent measures, valid for arbitrary values of the quantum number $j$. We identify optimal rotosensors (i) for arbitrary rotation angles in the case of spin quantum numbers up to $j=7/2$ and (ii) for small rotation angles in the case of spin quantum numbers up to $j=5$. The closed-form expression we derive allows us to explain the central role of anticoherence measures in the problem of optimal detection of rotation angles for arbitrary values of $j$.

We present new bounds on the existence of general quantum maximum distance separable codes (QMDS): the length $n$ of all QMDS codes with local dimension $D$ and distance $d \geq 3$ is bounded by $n \leq D^2 + d - 2$. We obtain their weight distribution and present additional bounds that arise from Rains' shadow inequalities. Our main result can be seen as a generalization of bounds that are known for the two special cases of stabilizer QMDS codes and absolutely maximally entangled states, and confirms the quantum MDS conjecture in the special case of distance-three codes. As the existence of QMDS codes is linked to that of highly entangled subspaces (in which every vector has uniform $r$-body marginals) of maximal dimension, our methods directly carry over to address questions in multipartite entanglement.
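The main bound is easy to evaluate. A minimal sketch (the function name is ours, not the paper's):

```python
def qmds_max_length(D, d):
    """Upper bound from the abstract: n <= D**2 + d - 2 for a QMDS code
    with local dimension D and distance d >= 3."""
    assert d >= 3, "the bound is stated for distance d >= 3"
    return D * D + d - 2

# Qubits (D = 2) at distance 3: length at most 2**2 + 3 - 2 = 5,
# saturated by the well-known [[5, 1, 3]] code.
```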

Physical observation is made relative to a reference frame. Given the universal validity of quantum mechanics, a reference frame is itself a quantum system, so a quantum system must be described relative to a quantum reference frame (QRF). Further requirements on a QRF include using only relational observables and not assuming the existence of an external reference frame. To address these requirements, two approaches have been proposed in the literature. The first is an operational approach (F. Giacomini, et al, Nat. Comm. 10:494, 2019), which focuses on the quantization of transformations between QRFs. The second attempts to derive the quantum transformation between QRFs from first principles (A. Vanrietvelde, et al, $\textit{Quantum}$ 4:225, 2020). This first-principles approach describes physical systems as symmetry-induced constrained Hamiltonian systems. The Dirac quantization of such systems, before removing redundancy, is interpreted as a perspective-neutral description; a systematic redundancy-reduction procedure is then introduced to derive the description from the perspective of a given QRF. The first-principles approach recovers some of the results of the operational approach, but does not yet include an important part of a quantum theory: the measurement theory. This paper is intended to bridge that gap. We show that the von Neumann quantum measurement theory can be embedded into the perspective-neutral framework. This allows us to recover the results found in the operational approach, with the advantage that the transformation operator can be derived from first principles. In addition, the formulation presented here reveals several interesting conceptual insights. For instance, the projection operation in measurement needs to be performed after redundancy reduction, and the projection operator must be transformed accordingly when switching QRFs. These results represent one step forward in understanding how quantum measurement should be formulated when the reference frame is also a quantum system.

We describe a two-player non-local game, with a fixed small number of questions and answers, such that an $\epsilon$-close to optimal strategy requires an entangled state of dimension $2^{\Omega(\epsilon^{-1/8})}$. Our non-local game is inspired by the three-player non-local game of Ji, Leung and Vidick [17]. It reduces the number of players from three to two, as well as the question and answer set sizes. Moreover, it provides an (arguably) elementary proof of the non-closure of the set of quantum correlations, based on embezzlement and self-testing. In contrast, previous proofs [26,16,19] involved representation theoretic machinery for finitely-presented groups and $C^*$-algebras.

We study the out-of-equilibrium properties of $1+1$ dimensional quantum electrodynamics (QED), discretized via the staggered-fermion Schwinger model with an Abelian $\mathbb{Z}_{n}$ gauge group. We look at two relevant phenomena: first, we analyze the stability of the Dirac vacuum with respect to particle/antiparticle pair production, both spontaneous and induced by an external electric field; then, we examine the string breaking mechanism. We observe a strong effect of confinement, which acts by suppressing both spontaneous pair production and string breaking into quark/antiquark pairs, indicating that the system dynamics displays a number of out-of-equilibrium features.

We take a resource-theoretic approach to the problem of quantifying nonclassicality in Bell scenarios. The resources are conceptualized as probabilistic processes from the setting variables to the outcome variables having a particular causal structure, namely, one wherein the wings are only connected by a common cause. We term them "common-cause boxes". We define the distinction between classical and nonclassical resources in terms of whether or not a classical causal model can explain the correlations. One can then quantify the relative nonclassicality of resources by considering their interconvertibility relative to the set of operations that can be implemented using a classical common cause (which correspond to local operations and shared randomness). We prove that the set of free operations forms a polytope, which in turn allows us to derive an efficient algorithm for deciding whether one resource can be converted to another. We moreover define two distinct monotones with simple closed-form expressions in the two-party binary-setting binary-outcome scenario, and use these to reveal various properties of the pre-order of resources, including a lower bound on the cardinality of any complete set of monotones. In particular, we show that the information contained in the degrees of violation of facet-defining Bell inequalities is not sufficient for quantifying nonclassicality, even though it is sufficient for witnessing nonclassicality. Finally, we show that the continuous set of convexly extremal quantumly realizable correlations are all at the top of the pre-order of quantumly realizable correlations. In addition to providing new insights on Bell nonclassicality, our work also sets the stage for quantifying nonclassicality in more general causal networks.

We present a completely new approach to quantum circuit optimisation, based on the ZX-calculus. We first interpret quantum circuits as ZX-diagrams, which provide a flexible, lower-level language for describing quantum computations graphically. Then, using the rules of the ZX-calculus, we give a simplification strategy for ZX-diagrams based on the two graph transformations of local complementation and pivoting and show that the resulting reduced diagram can be transformed back into a quantum circuit. While little is known about extracting circuits from arbitrary ZX-diagrams, we show that the underlying graph of our simplified ZX-diagram always has a graph-theoretic property called generalised flow, which in turn yields a deterministic circuit extraction procedure. For Clifford circuits, this extraction procedure yields a new normal form that is both asymptotically optimal in size and gives a new, smaller upper bound on gate depth for nearest-neighbour architectures. For Clifford+T and more general circuits, our technique enables us to `see around' gates that obstruct the Clifford structure and produce smaller circuits than naïve `cut-and-resynthesise' methods.
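Local complementation, one of the two graph transformations the simplification strategy is built on, is easy to state on an adjacency matrix: complement the subgraph induced by the neighbourhood of the chosen vertex. A minimal sketch, illustrative only and not the paper's implementation:

```python
import numpy as np

def local_complement(A, v):
    """Local complementation at vertex v: toggle every edge between
    pairs of neighbours of v; the rest of the graph is unchanged.
    A is a symmetric 0/1 adjacency matrix with zero diagonal."""
    A = A.copy()
    nbrs = np.flatnonzero(A[v])
    for i in nbrs:
        for j in nbrs:
            if i < j:
                A[i, j] ^= 1
                A[j, i] ^= 1
    return A

# Toy example: path graph 0 - 1 - 2; complementing at 1 adds edge 0 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
B = local_complement(A, 1)
print(B[0, 2])  # 1: vertices 0 and 2 are now adjacent
```

Applying the operation twice at the same vertex returns the original graph, which is why it can be used freely as a rewrite without losing information.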

Exactly solvable models are essential in physics. For many-body spin-$\mathbf{\sf{1}/{2}}$ systems, an important class of such models consists of those that can be mapped to free fermions hopping on a graph. We provide a complete characterization of models which can be solved this way. Specifically, we reduce the problem of recognizing such spin models to the graph-theoretic problem of recognizing line graphs, which has been solved optimally. A corollary of our result is a complete set of constant-sized commutation structures that constitute the obstructions to a free-fermion solution. We find that symmetries are tightly constrained in these models. Pauli symmetries correspond to either: (i) cycles on the fermion hopping graph, (ii) the fermion parity operator, or (iii) logically encoded qubits. Clifford symmetries within one of these symmetry sectors, with three exceptions, must be symmetries of the free-fermion model itself. We demonstrate how several exact free-fermion solutions from the literature fit into our formalism and give an explicit example of a new model previously unknown to be solvable by free fermions.
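The commutation structure driving this characterization can be encoded in a graph whose vertices are the Hamiltonian's Pauli terms and whose edges join anticommuting pairs; the question is then whether that graph is a line graph. A minimal sketch of building such a graph (the term list is a toy example, not one of the models from the paper):

```python
def anticommute(p, q):
    """Two Pauli strings anticommute iff they differ on an odd number
    of sites where both act nontrivially."""
    clashes = sum(1 for a, b in zip(p, q)
                  if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 1

def frustration_graph(terms):
    """Edges of the graph whose vertices are Hamiltonian terms and
    whose edges connect anticommuting pairs."""
    n = len(terms)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if anticommute(terms[i], terms[j])]

# Toy example: four terms of a 3-site XY-type chain.
terms = ['XXI', 'IXX', 'YYI', 'IYY']
print(frustration_graph(terms))  # [(0, 3), (1, 2)]
```

Recognizing whether the resulting graph is a line graph (and hence whether a free-fermion solution exists) can then be delegated to standard optimal-time line-graph recognition algorithms.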

We demonstrate how to carry out many computations for doubled topological phases with defects. These defects may be 1-dimensional domain walls or 0-dimensional point defects. Using $\operatorname{Vec}(S_3)$ as a guiding example, we demonstrate how domain wall fusion and associators can be computed using generalized tube algebra techniques. These domain walls may sit either between distinct phases or between identical ones. Additionally, we show how to compute all possible point defects, together with their fusion and associator data. Worked examples, tabulated data and Mathematica code are provided.

We introduce a fermion-to-qubit mapping defined on ternary trees, where any single Majorana operator on an $n$-mode fermionic system is mapped to a multi-qubit Pauli operator acting nontrivially on $\lceil \log_3(2n+1)\rceil$ qubits. The mapping has a simple structure and is optimal in the sense that it is impossible to construct Pauli operators in any fermion-to-qubit mapping acting nontrivially on fewer than $\log_3(2n)$ qubits on average. We apply it to the problem of learning $k$-fermion reduced density matrices (RDMs), a problem relevant in various quantum simulation applications. We show that one can determine individual elements of all $k$-fermion RDMs in parallel, to precision $\epsilon$, by repeating a single quantum circuit for $\lesssim (2n+1)^k \epsilon^{-2}$ times. This result is based on a method we develop here that allows one to determine individual elements of all $k$-qubit RDMs in parallel, to precision $\epsilon$, by repeating a single quantum circuit for $\lesssim 3^k \epsilon^{-2}$ times, independent of the system size. This improves over existing schemes for determining qubit RDMs.
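The stated qubit weight is the smallest $h$ with $3^h \ge 2n+1$. A tiny sketch computing it with integer arithmetic (avoiding floating-point rounding at exact powers of three):

```python
def pauli_weight(n):
    """Qubits a Majorana operator acts on in the ternary-tree mapping:
    ceil(log3(2n + 1)) for an n-mode fermionic system, computed as the
    smallest h with 3**h >= 2n + 1."""
    h, p = 0, 1
    while p < 2 * n + 1:
        h, p = h + 1, 3 * p
    return h

for n in (1, 4, 13, 100):
    print(n, pauli_weight(n))  # 1, 2, 3, 5 qubits respectively
```

For comparison, the Jordan-Wigner mapping gives Majorana operators of weight up to $n$, so the logarithmic scaling here is an exponential improvement in operator weight.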

We study the notion of causal orders for the cases of (classical and quantum) circuits and spacetime events. We show that every circuit can be immersed into a classical spacetime, preserving the compatibility between the two causal structures. Using the process matrix formalism, we analyse the realisations of the quantum switch using 4 and 3 spacetime events in classical spacetimes with fixed causal orders, and the realisation of a gravitational switch with only 2 spacetime events that features superpositions of different gravitational field configurations and their respective causal orders. We show that the current quantum switch experimental implementations do not feature superpositions of causal orders between spacetime events, and that these superpositions can only occur in the context of superposed gravitational fields. We also discuss a recently introduced operational notion of an event, which does allow for superpositions of respective causal orders in flat spacetime quantum switch implementations. We construct two observables that can distinguish between the quantum switch realisations in classical spacetimes, and gravitational switch implementations in superposed spacetimes. Finally, we discuss our results in the light of the modern relational approach to physics.

Passive states are special configurations of a quantum system which exhibit no energy decrement at the end of an arbitrary cyclic driving of the model Hamiltonian. When applied to an increasing number of copies of the initial density matrix, the requirement of passivity induces a hierarchical ordering which, in the asymptotic limit of infinitely many elements, pinpoints ground states and thermal Gibbs states. In particular, for large values of $N$ the energy content of an $N$-passive state which is also structurally stable (i.e. capable of maintaining its passivity status under small perturbations of the model Hamiltonian), is expected to be close to the corresponding value of the thermal Gibbs state which has the same entropy. In the present paper we provide a quantitative assessment of this fact, by producing an upper bound for the energy of an arbitrary $N$-passive, structurally stable state which only depends on the spectral properties of the Hamiltonian of the system. We also show the condition under which our inequality can be saturated. A generalization of the bound is finally presented that, for sufficiently large $N$, applies to states which are $N$-passive, but not necessarily structurally stable.
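The comparison state in this bound is the Gibbs state with the same entropy as the given state. A sketch of computing its energy for a toy spectrum by bisecting on the inverse temperature (the spectrum is made up for illustration, and entropy is measured in nats):

```python
import numpy as np

def gibbs(spectrum, beta):
    """Energy and von Neumann entropy (in nats) of the Gibbs state
    exp(-beta H)/Z for a given energy spectrum."""
    w = np.exp(-beta * np.asarray(spectrum, float))
    p = w / w.sum()
    return float(p @ spectrum), float(-(p * np.log(p)).sum())

def gibbs_energy_at_entropy(spectrum, s_target, lo=1e-6, hi=50.0):
    """Bisect on beta: Gibbs entropy decreases monotonically with beta."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        _, s = gibbs(spectrum, mid)
        if s > s_target:
            lo = mid  # still too hot: increase beta
        else:
            hi = mid
    return gibbs(spectrum, 0.5 * (lo + hi))[0]

# Toy three-level spectrum; target entropy 0.5 nats (max is ln 3).
spec = [0.0, 1.0, 2.0]
print(gibbs_energy_at_entropy(spec, 0.5))
```

Any passive state with the same entropy has energy at least that of this Gibbs state; the paper's bound controls the gap from above for structurally stable $N$-passive states.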

We introduce structured random matrix ensembles, constructed to model many-body quantum systems with local interactions. These ensembles are employed to study equilibration of isolated many-body quantum systems, showing that rather complex matrix structures, well beyond Wigner's full or banded random matrices, are required to faithfully model equilibration times. Viewing the random matrices as connectivities of graphs, we analyse the resulting network of classical oscillators in Hilbert space with tools from network theory. One of these tools, called the maximum flow value, is found to be an excellent proxy for equilibration times. Since maximum flow values are less expensive to compute, they give access to approximate equilibration times for system sizes beyond those accessible by exact diagonalisation.
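The maximum flow value of a capacitated graph can be computed with any standard augmenting-path algorithm. A compact Edmonds-Karp sketch, run on a toy graph rather than one of the Hamiltonian-derived networks studied here:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow. `cap` is a dict-of-dicts of integer
    edge capacities; returns the maximum s -> t flow value."""
    # Residual capacities, including zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck along the path and push flow through it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

# Toy graph: two disjoint unit-capacity paths from 's' to 't'.
g = {'s': {'a': 1, 'b': 1}, 'a': {'t': 1}, 'b': {'t': 1}, 't': {}}
print(max_flow(g, 's', 't'))  # 2
```

Since max flow runs in polynomial time in the graph size, it is far cheaper than exact diagonalisation, which is the point of using it as a proxy.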

We benchmark the accuracy of a variational quantum eigensolver based on a finite-depth quantum circuit encoding the ground state of local Hamiltonians. We show that in gapped phases, the accuracy improves exponentially with the depth of the circuit. When trying to encode the ground state of conformally invariant Hamiltonians, we observe two regimes. A $\textit{finite-depth}$ regime, where the accuracy improves slowly with the number of layers, and a $\textit{finite-size}$ regime where it improves again exponentially. The cross-over between the two regimes happens at a critical number of layers whose value increases linearly with the size of the system. We discuss the implications of these observations in the context of comparing different variational ansätze and their effectiveness in describing critical ground states.
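In the exponentially-improving regimes, the error appears as a straight line on a log scale, so the decay rate can be read off with a linear fit. A sketch on synthetic data (the numbers are illustrative, not the paper's):

```python
import numpy as np

# Synthetic accuracy-vs-depth data for a gapped phase:
# error ~ C * exp(-a * depth), linear in log scale.
depths = np.arange(1, 9)
errors = 0.5 * np.exp(-0.8 * depths)

# Degree-1 fit of log(error) against depth recovers the decay rate a.
slope, intercept = np.polyfit(depths, np.log(errors), 1)
print(round(-slope, 3))  # recovered decay rate, here 0.8 by construction
```

In the critical case, the same fit applied separately below and above the cross-over depth would distinguish the slow finite-depth regime from the exponential finite-size one.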

Speeding up the dynamics of a quantum system is of paramount importance for quantum technologies. However, in finite dimensions and without full knowledge of the details of the system, it is easily shown to be $\textit{impossible}$. In contrast, we show that continuous variable systems described by a certain class of quadratic Hamiltonians can be sped up without such detailed knowledge. We call the resultant procedure $\textit{Hamiltonian amplification}$ (HA). The HA method relies on the application of local squeezing operations allowing for amplifying even unknown or noisy couplings and frequencies by acting on individual modes. Furthermore, we show how to combine HA with dynamical decoupling to achieve amplified Hamiltonians that are free from environmental noise. Finally, we illustrate a significant reduction in gate times of cavity resonator qubits as one potential use of HA.
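The mechanism behind HA can be seen at the level of the symplectic action of squeezing on a quadratic Hamiltonian $H = \frac{1}{2}\xi^T M \xi$: averaging evolutions conjugated by opposite local squeezings rescales the coupling by $\cosh(2r)$, without any knowledge of the coupling strength itself. A numerical sketch of this identity (illustrating the principle only, not the paper's full protocol):

```python
import numpy as np

g, r = 0.3, 1.0  # coupling strength and squeezing parameter (toy values)

# Quadratic Hamiltonian H = (1/2) xi^T M xi with xi = (x1, p1, x2, p2)
# and a beam-splitter-like coupling g (x1 x2 + p1 p2).
M = np.zeros((4, 4))
M[0, 2] = M[2, 0] = g
M[1, 3] = M[3, 1] = g

def squeeze(r):
    # Local squeezing acts symplectically: x -> e^r x, p -> e^-r p.
    return np.diag([np.exp(r), np.exp(-r), np.exp(r), np.exp(-r)])

# Averaging the Hamiltonian conjugated by opposite local squeezings
# amplifies the coupling by cosh(2r), independently of g.
S1, S2 = squeeze(r), squeeze(-r)
M_avg = 0.5 * (S1.T @ M @ S1 + S2.T @ M @ S2)
print(np.allclose(M_avg, np.cosh(2 * r) * M))  # True
```

Because the squeezing acts locally on each mode, the same amplification applies even when `g` is unknown or noisy, which is the key feature of HA.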

Time crystals are genuinely non-equilibrium quantum phases of matter that break time-translational symmetry. While in non-equilibrium closed systems time crystals have been experimentally realized, it remains an open question whether or not such a phase survives when systems are coupled to an environment. Although dissipation caused by the coupling to a bath may stabilize time crystals in some regimes, the introduction of incoherent noise may also destroy the time crystalline order. Therefore, the mechanisms that stabilize a time crystal in open and closed systems are not necessarily the same. Here, we propose a way to identify an open system time crystal based on a single object: the Floquet propagator. Armed with such a description we show time-crystalline behavior in an explicitly short-range interacting open system and demonstrate the crucial role of the nature of the decay processes.

A quantum generalization of Natural Gradient Descent is presented as part of a general-purpose optimization framework for variational quantum circuits. The optimization dynamics is interpreted as moving in the steepest descent direction with respect to the Quantum Information Geometry, corresponding to the real part of the Quantum Geometric Tensor (QGT), also known as the Fubini-Study metric tensor. An efficient algorithm is presented for computing a block-diagonal approximation to the Fubini-Study metric tensor for parametrized quantum circuits, which may be of independent interest.
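A single step of the resulting optimizer preconditions the ordinary gradient with the (pseudo-)inverse of the metric tensor. A minimal sketch, where the diagonal metric stands in for the block-diagonal approximation and all numbers are illustrative:

```python
import numpy as np

def natural_gradient_step(theta, grad, metric, eta=0.1, eps=1e-8):
    """One natural-gradient update: precondition the ordinary gradient
    with the (regularized) inverse of the Fubini-Study metric tensor."""
    g_reg = metric + eps * np.eye(len(theta))  # guard against singular G
    return theta - eta * np.linalg.solve(g_reg, grad)

# Toy 2-parameter example with a diagonal metric.
theta = np.array([0.5, -0.2])
grad = np.array([1.0, 1.0])
G = np.diag([0.25, 1.0])
print(natural_gradient_step(theta, grad, G))
```

Compared to vanilla gradient descent, the metric rescales each direction by its geometric significance: here the first parameter, with the smaller metric entry, receives a four-times-larger update.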

We consider a large class of Ramsey interferometry protocols which are enhanced by squeezing and un-squeezing operations before and after a phase signal is imprinted on the collective spin of $N$ particles. We report an analytical optimization for any given particle number and strengths of (un-)squeezing. These results can be applied even when experimentally relevant decoherence processes during the squeezing and un-squeezing interactions are included. Noise between the two interactions is however not considered in this work. This provides a generalized characterization of squeezing echo protocols, recovering a number of known quantum metrological protocols as local sensitivity maxima, thereby proving their optimality. We discover a single new protocol. Its sensitivity enhancement relies on a double inversion of squeezing. In the general class of echo protocols, the newly found over-un-twisting protocol is singled out due to its Heisenberg scaling even at strong collective dephasing.

We present a comprehensive study of the impact of non-uniform, i.e. path-dependent, photonic losses on the computational complexity of linear-optical processes. Our main result states that, if each beam splitter in a network induces some loss probability, non-uniform network designs cannot circumvent the efficient classical simulations based on losses. To achieve our result we obtain new intermediate results that can be of independent interest. First we show that, for any network of lossy beam splitters, it is possible to extract a layer of non-uniform losses that depends on the network geometry. We prove that, for every input mode of the network it is possible to commute $s_i$ layers of losses to the input, where $s_i$ is the length of the shortest path connecting the $i$th input to any output. We then extend a recent classical simulation algorithm due to P. Clifford and R. Clifford to allow for arbitrary $n$-photon input Fock states (i.e. to include collision states). Consequently, we identify two types of input states where boson sampling becomes classically simulable: (A) when $n$ input photons occupy a constant number of input modes; (B) when all but $O(\log n)$ photons are concentrated on a single input mode, while an additional $O(\log n)$ modes contain one photon each.
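The quantity $s_i$ is a plain shortest-path length from input $i$ to the output layer, so it can be computed by breadth-first search on the network's connectivity. A sketch on a toy directed network (the graph is illustrative, not a real interferometer layout):

```python
from collections import deque

def shortest_depths(adj, inputs, outputs):
    """For each input mode i, the length s_i of the shortest path to
    any output: the number of loss layers commutable to that input."""
    out = set(outputs)
    depths = {}
    for i in inputs:
        seen, q = {i}, deque([(i, 0)])
        while q:
            u, d = q.popleft()
            if u in out:
                depths[i] = d
                break
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    q.append((v, d + 1))
    return depths

# Toy network: modes as nodes, beam splitters as edges toward later
# layers; mode 1 reaches output 3 in one hop, mode 0 needs two hops.
adj = {0: [2], 1: [2, 3], 2: [4], 3: [4], 4: []}
print(shortest_depths(adj, inputs=[0, 1], outputs=[3, 4]))
```

Inputs with larger $s_i$ accumulate more commutable loss layers, which is what makes deep non-uniform networks easier, not harder, to simulate classically.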

We show that every language in QMA admits a classical-verifier, quantum-prover zero-knowledge argument system which is sound against quantum polynomial-time provers and zero-knowledge for classical (and quantum) polynomial-time verifiers. The protocol builds upon two recent results: a computational zero-knowledge proof system for languages in QMA, with a quantum verifier, introduced by Broadbent et al. (FOCS 2016), and an argument system for languages in QMA, with a classical verifier, introduced by Mahadev (FOCS 2018).
