Non-Markovian dynamics are characterized by information backflows, where the evolving open quantum system retrieves part of the information previously lost to the environment. Hence, the very definition of non-Markovianity implies an initial time interval when the evolution is noisy; otherwise no backflow could take place. We identify two types of initial noise: the first merely degrades the information content of the system, while the second is essential for the appearance of non-Markovian phenomena. Therefore, all non-Markovian evolutions can be divided into two classes: noisy non-Markovian (NNM), showing both types of noise, and pure non-Markovian (PNM), implementing solely essential noise. We make this distinction through a timing analysis of fundamental non-Markovian features. First, we prove that all NNM dynamics can be simulated through a Markovian pre-processing of a PNM core. We quantify the gains in terms of information backflows and non-Markovianity measures provided by PNM evolutions. Similarly, we study how the entanglement-breaking property behaves in this framework and discuss a technique to activate correlation backflows. Finally, we show the applicability of our results through the study of several well-known dynamical models.


A common approach to studying the performance of quantum error correcting codes is to assume independent and identically distributed single-qubit errors. However, the available experimental data shows that realistic errors in modern multi-qubit devices are typically neither independent nor identical across qubits. In this work, we develop and investigate the properties of topological surface codes adapted to a known noise structure by Clifford conjugations. We show that the surface code locally tailored to non-uniform single-qubit noise in conjunction with a scalable matching decoder yields an increase in error thresholds and exponential suppression of sub-threshold failure rates when compared to the standard surface code. Furthermore, we study the behaviour of the tailored surface code under local two-qubit noise and show the role that code degeneracy plays in correcting such noise. The proposed methods do not require additional overhead in terms of the number of qubits or gates and use a standard matching decoder, and hence come at no extra cost compared to the standard surface-code error correction.
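The mechanism behind tailoring a code by Clifford conjugation can be illustrated with the single-qubit Hadamard: conjugating a qubit remaps the Pauli errors acting on it, so a qubit with strongly Z-biased noise can be rotated so that the code sees X-biased noise instead. The sketch below (hedged: the string representation and function names are illustrative, not the paper's construction, and the sign in $HYH=-Y$ is dropped since Pauli error probabilities are sign-insensitive) shows this remapping:

```python
# Single-qubit Pauli transfer under Hadamard conjugation: H P H = P'.
# (H Y H = -Y; the sign is irrelevant for Pauli error probabilities.)
H_CONJ = {"I": "I", "X": "Z", "Y": "Y", "Z": "X"}

def conjugate_pauli_string(pauli, qubits_to_rotate):
    """Apply Hadamard conjugation on the chosen qubits of a Pauli string."""
    return "".join(
        H_CONJ[p] if i in qubits_to_rotate else p
        for i, p in enumerate(pauli)
    )

def tailor_noise(pauli_probs, qubits_to_rotate):
    """Push a per-qubit Pauli error distribution through the conjugation."""
    return {
        conjugate_pauli_string(p, qubits_to_rotate): prob
        for p, prob in pauli_probs.items()
    }

# A 3-qubit example: qubit 1 suffers strongly Z-biased noise.
noise = {"IZI": 0.09, "XII": 0.01, "IIX": 0.01}
tailored = tailor_noise(noise, qubits_to_rotate={1})
print(tailored)  # {'IXI': 0.09, 'XII': 0.01, 'IIX': 0.01}
```

Because the conjugation is a local Clifford, it maps stabilizers to stabilizers, which is why this tailoring costs no extra qubits or gates.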


Recently, a class of fractal surface codes (FSCs) has been constructed on fractal lattices with Hausdorff dimension $2+\epsilon$, which admits a fault-tolerant non-Clifford CCZ gate [1]. We investigate the performance of such FSCs as fault-tolerant quantum memories. We prove that there exist decoding strategies with non-zero thresholds for bit-flip and phase-flip errors in the FSCs with Hausdorff dimension $2+\epsilon$. For the bit-flip errors, we adapt the sweep decoder, developed for string-like syndromes in the regular 3D surface code, to the FSCs by designing suitable modifications on the boundaries of the holes in the fractal lattice. Our adaptation of the sweep decoder for the FSCs maintains its self-correcting and single-shot nature. For the phase-flip errors, we employ the minimum-weight-perfect-matching (MWPM) decoder for the point-like syndromes. We report a sustainable fault-tolerant threshold ($\sim 1.7\%$) under phenomenological noise for the sweep decoder and the code capacity threshold (lower bounded by $2.95\%$) for the MWPM decoder for a particular FSC with Hausdorff dimension $D_H\approx2.966$. The latter can be mapped to a lower bound of the critical point of a confinement-Higgs transition on the fractal lattice, which is tunable via the Hausdorff dimension.


We generalize the Quantum Approximate Optimization Algorithm (QAOA) of Farhi et al. (2014) to allow for arbitrary separable initial states with corresponding mixers such that the starting state is the most excited state of the mixing Hamiltonian. We demonstrate this version of QAOA, which we call $QAOA-warmest$, by simulating Max-Cut on weighted graphs. We initialize the starting state as a $warm-start$ using $2$- and $3$-dimensional approximations obtained using randomized projections of solutions to Max-Cut's semi-definite program, and define a warm-start-dependent $custom mixer$. We show that these warm-starts initialize the QAOA circuit with constant-factor approximations of $0.658$ for $2$-dimensional and $0.585$ for $3$-dimensional warm-starts for graphs with non-negative edge weights, improving upon previously known trivial (i.e., $0.5$ for standard initialization) worst-case bounds at $p=0$. These factors in fact lower bound the approximation achieved for Max-Cut at higher circuit depths, since we also show that QAOA-warmest with any separable initial state converges to Max-Cut under the adiabatic limit as $p\rightarrow \infty$. However, the choice of warm-starts significantly impacts the rate of convergence to Max-Cut, and we show empirically that our warm-starts achieve a faster convergence compared to existing approaches. Additionally, our numerical simulations show higher quality cuts compared to standard QAOA, the classical Goemans-Williamson algorithm, and a warm-started QAOA without custom mixers for an instance library of $1148$ graphs (up to $11$ nodes) and depth $p=8$. We further show that QAOA-warmest outperforms the standard QAOA of Farhi et al. in experiments on current IBM-Q and Quantinuum hardware.
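The general idea of a $3$-dimensional warm-start can be sketched concretely: each vertex of the graph receives a unit vector from a randomized projection of the SDP solution, and the corresponding qubit is initialized pointing along that vector on the Bloch sphere. The following NumPy sketch (hedged: function names are illustrative and the paper's custom mixer is not shown) builds such a product state:

```python
import numpy as np

def bloch_qubit(theta, phi):
    """Single-qubit state at polar angle theta, azimuth phi on the Bloch sphere."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def warm_start_state(vectors):
    """Product state whose i-th qubit points along the i-th unit vector.

    `vectors` is an (n, 3) array of unit vectors, e.g. from a randomized
    projection of the Max-Cut SDP solution (a 3-dimensional warm-start).
    """
    state = np.array([1.0 + 0j])
    for x, y, z in vectors:
        theta = np.arccos(np.clip(z, -1.0, 1.0))
        phi = np.arctan2(y, x)
        state = np.kron(state, bloch_qubit(theta, phi))
    return state

# Two antipodal vectors give |0>|1>: the cut separating the two vertices.
vecs = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
psi = warm_start_state(vecs)
print(np.round(np.abs(psi) ** 2, 6))  # [0. 1. 0. 0.]
```

A mixer whose most excited state is this product state can then be built from the same Bloch vectors, which is the role of the custom mixer in the abstract above.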


Symmetry is a unifying concept in physics. In quantum information and beyond, it is known that quantum states possessing symmetry are not useful for certain information-processing tasks. For example, states that commute with a Hamiltonian realizing a time evolution are not useful for timekeeping during that evolution, and bipartite states that are highly extendible are not strongly entangled and thus not useful for basic tasks like teleportation. Motivated by this perspective, this paper details several quantum algorithms that test the symmetry of quantum states and channels. For the case of testing Bose symmetry of a state, we show that there is a simple and efficient quantum algorithm, while the tests for other kinds of symmetry rely on the aid of a quantum prover. We prove that the acceptance probability of each algorithm is equal to the maximum symmetric fidelity of the state being tested, thus giving a firm operational meaning to these latter resource quantifiers. Special cases of the algorithms test for incoherence or separability of quantum states. We evaluate the performance of these algorithms on choice examples by using the variational approach to quantum algorithms, replacing the quantum prover with a parameterized circuit. We demonstrate this approach for numerous examples using the IBM quantum noiseless and noisy simulators, and we observe that the algorithms perform well in the noiseless case and exhibit noise resilience in the noisy case. We also show that the maximum symmetric fidelities can be calculated by semi-definite programs, which is useful for benchmarking the performance of these algorithms for sufficiently small examples. Finally, we establish various generalizations of the resource theory of asymmetry, with the upshot being that the acceptance probabilities of the algorithms are resource monotones and thus well motivated from the resource-theoretic perspective.
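The simplest instance of such a test can be made concrete numerically. For two qubits, the probability of finding a state $\rho$ in the symmetric subspace is $\mathrm{Tr}(\Pi_{\mathrm{sym}}\rho)$ with $\Pi_{\mathrm{sym}}=(I+\mathrm{SWAP})/2$, which is the acceptance probability of the standard symmetrization (SWAP) test. The sketch below is a simplified stand-in for the paper's more general algorithms, not their exact construction:

```python
import numpy as np

# Projector onto the symmetric subspace of two qubits: Pi = (I + SWAP)/2.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
PI_SYM = (np.eye(4) + SWAP) / 2

def sym_overlap(rho):
    """Tr(Pi_sym rho): probability of finding rho in the symmetric subspace."""
    return float(np.real(np.trace(PI_SYM @ rho)))

# The singlet lies entirely in the antisymmetric subspace ...
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
print(sym_overlap(np.outer(singlet, singlet)))  # 0.0

# ... while a symmetric product state passes the test with certainty.
plus_plus = np.full(4, 0.5)
print(sym_overlap(np.outer(plus_plus, plus_plus)))  # 1.0
```

For small dimensions the same quantity can be checked against a semi-definite program over symmetric states, mirroring the benchmarking strategy described in the abstract.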


Quantum interference phenomena are widely viewed as posing a challenge to the classical worldview. Feynman even went so far as to proclaim that they are the $\textit{only mystery}$ and the $\textit{basic peculiarity}$ of quantum mechanics. Many have also argued that basic interference phenomena force us to accept a number of radical interpretational conclusions, including: that a photon is neither a particle nor a wave but rather a Jekyll-and-Hyde sort of entity that toggles between the two possibilities, that reality is observer-dependent, and that systems either do not have properties prior to measurements or else have properties that are subject to nonlocal or backwards-in-time causal influences. In this work, we show that such conclusions are not, in fact, forced on us by basic interference phenomena. We do so by describing an alternative to quantum theory, a statistical theory of a classical discrete field (the `toy field theory') that reproduces the relevant phenomenology of quantum interference while rejecting these radical interpretational claims. It also reproduces a number of related interference experiments that are thought to support these interpretational claims, such as the Elitzur-Vaidman bomb tester, Wheeler's delayed-choice experiment, and the quantum eraser experiment. The systems in the toy field theory are field modes, each of which possesses, at all times, $both$ a particle-like property (a discrete occupation number) and a wave-like property (a discrete phase). Although these two properties are jointly possessed, the theory stipulates that they cannot be jointly $known$. The phenomenology that is generally cited in favour of nonlocal or backwards-in-time $\textit{causal influences}$ ends up being explained in terms of $inferences$ about distant or past systems, and all that is observer-dependent is the observer's $knowledge$ of reality, not reality itself.

Contributed talk by Lorenzo Catani at QIP 2023. Contributed talk by Robert Spekkens at the "Conference on Quantum Information and Quantum Control IX", University of Toronto, 2022. Invited talk by Lorenzo Catani at the "Physics and the first-person perspective" Essentia Foundation conference, 2022. Seminar by Lorenzo Catani at IQOQI Vienna, 2022. Contributed talk by Lorenzo Catani at QPL 2022.



Superconducting quantum circuits are a promising hardware platform for realizing a fault-tolerant quantum computer. Accelerating progress in this field of research demands general approaches and computational tools to analyze and design more complex superconducting circuits. We develop a framework to systematically construct a superconducting quantum circuit's quantized Hamiltonian from its physical description. As is often the case with quantum descriptions of multicoordinate systems, the complexity rises rapidly with the number of variables. Therefore, we introduce a set of coordinate transformations with which we can find bases to diagonalize the Hamiltonian efficiently. Furthermore, we broaden our framework's scope to calculate the circuit's key properties required for optimizing and discovering novel qubits. We implement the methods described in this work in an open-source Python package $\tt{SQcircuit}$. In this manuscript, we introduce the reader to the $\tt{SQcircuit}$ environment and functionality. We show through a series of examples how to analyze a number of interesting quantum circuits and obtain features such as the spectrum, coherence times, transition matrix elements, coupling operators, and the phase coordinate representation of eigenfunctions.
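The kind of computation such a framework automates can be seen in its simplest special case: quantizing a single-mode circuit (a transmon) in the charge basis and diagonalizing the resulting Hamiltonian. The sketch below is plain NumPy, not the $\tt{SQcircuit}$ API, and is offered only as a hedged illustration of the one-mode problem that the package generalizes to arbitrary multi-mode circuits:

```python
import numpy as np

def transmon_levels(EC, EJ, ng=0.0, ncut=20, n_eig=3):
    """Transition frequencies of H = 4 EC (n - ng)^2 - EJ cos(phi).

    Charge basis |n>, n = -ncut..ncut; cos(phi) couples n <-> n+1,
    so H is tridiagonal with off-diagonal -EJ/2.
    """
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * EC * (n - ng) ** 2)
    off = -EJ / 2.0 * np.ones(2 * ncut)
    H += np.diag(off, 1) + np.diag(off, -1)
    evals = np.linalg.eigvalsh(H)
    return evals[:n_eig] - evals[0]  # spectrum relative to the ground state

# Deep transmon regime EJ/EC = 100: the 0->1 transition approaches
# sqrt(8 EJ EC) - EC (same frequency units assumed for both energies).
freqs = transmon_levels(EC=0.2, EJ=20.0)
print(np.round(freqs, 3))
```

For multi-mode circuits the analogous matrix grows exponentially in the number of coordinates, which is precisely the motivation for the coordinate transformations and truncation schemes described in the abstract.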


Sharing multi-partite quantum entanglement between parties allows for diverse secure communication tasks to be performed. Among them, conference key agreement (CKA) - an extension of key distribution to multiple parties - has received much attention recently. Interestingly, CKA can also be performed in a way that protects the identities of the participating parties, therefore providing $anonymity$. In this work, we propose an anonymous CKA protocol for three parties that is implemented in a highly practical network setting. Specifically, a line of quantum repeater nodes is used to build a linear cluster state among all nodes, which is then used to anonymously establish a secret key between any three of them. The nodes need only share maximally entangled pairs with their neighbours, therefore avoiding the necessity of a central server sharing entangled states. This linear chain setup makes our protocol an excellent candidate for implementation in future quantum networks. We explicitly prove that our protocol protects the identities of the participants from one another and perform an analysis of the key rate in the finite regime, contributing to the quest of identifying feasible quantum communication tasks for network architectures beyond point-to-point.


We consider the combined effect of readout errors and coherent errors, i.e., deterministic phase rotations, on the surface code. We use a recently developed numerical approach, via a mapping of the physical qubits to Majorana fermions. We show how to use this approach in the presence of readout errors, treated on the phenomenological level: perfect projective measurements with potentially incorrectly recorded outcomes, and multiple repeated measurement rounds. We find a threshold for this combination of errors, with an error rate close to the threshold of the corresponding incoherent error channel (random Pauli-Z and readout errors). The value of the threshold error rate, using the worst-case fidelity as the measure of logical errors, is 2.6%. Below the threshold, scaling up the code leads to the rapid loss of coherence in the logical-level errors, but to error rates greater than those of the corresponding incoherent error channel. We also vary the coherent and readout error rates independently, and find that the surface code is more sensitive to coherent errors than to readout errors. Our work extends the recent results on coherent errors with perfect readout to the experimentally more realistic situation where readout errors also occur.


It is widely accepted that the dynamics of entanglement in the presence of a generic circuit can be predicted from knowledge of the statistical properties of the entanglement spectrum. We test this assumption by applying a Metropolis-like entanglement cooling algorithm, generated by different sets of local gates, to states sharing the same statistics. We employ the ground states of a single model, namely the one-dimensional Ising chain in a transverse field, but belonging to different macroscopic phases: the paramagnetic, the magnetically ordered, and the topologically frustrated ones. Quite surprisingly, we observe that the entanglement dynamics depend strongly not just on the set of gates but also on the phase, indicating that different phases can possess different types of entanglement (which we characterize as purely local, GHZ-like, and W-state-like) with different degrees of resilience against the cooling process. Our work highlights that knowledge of the entanglement spectrum alone is not sufficient to determine its dynamics, thereby demonstrating its incompleteness as a characterization tool. Moreover, it shows a subtle interplay between locality and non-local constraints.
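The skeleton of such a cooling procedure is easy to sketch. Below is a hedged, zero-temperature toy version (accept a random two-qubit gate only if the half-chain entanglement entropy does not increase) acting on a random state vector; the paper's algorithm, gate sets, and initial states differ in detail:

```python
import numpy as np

rng = np.random.default_rng(7)

def half_chain_entropy(psi, n):
    """Von Neumann entropy across the middle cut of an n-qubit state vector."""
    m = psi.reshape(2 ** (n // 2), -1)
    s = np.linalg.svd(m, compute_uv=False)
    p = s[s > 1e-12] ** 2
    return float(-np.sum(p * np.log2(p)))

def apply_two_qubit(psi, U, i, n):
    """Apply a 4x4 unitary U to neighbouring qubits (i, i+1)."""
    t = np.moveaxis(psi.reshape([2] * n), (i, i + 1), (0, 1)).reshape(4, -1)
    t = (U @ t).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(t, (0, 1), (i, i + 1)).reshape(-1)

def random_unitary(dim):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(dim, dim))
                        + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 6
psi = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
psi /= np.linalg.norm(psi)
S0 = half_chain_entropy(psi, n)

for _ in range(200):  # zero-temperature Metropolis: accept only cooling moves
    i = rng.integers(n - 1)
    trial = apply_two_qubit(psi, random_unitary(4), i, n)
    if half_chain_entropy(trial, n) <= half_chain_entropy(psi, n):
        psi = trial

print(S0, "->", half_chain_entropy(psi, n))
```

The point of the abstract is that how fast (and how far) this entropy can be cooled depends on the phase the initial state comes from, not only on the entanglement-spectrum statistics.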


The device-independent paradigm has had spectacular successes in randomness generation, key distribution and self-testing; however, most of these results have been obtained under the assumption that parties hold trusted and private random seeds. In efforts to relax the assumption of measurement independence, Hardy's non-locality tests have been proposed as ideal candidates. In this paper, we introduce a family of tilted Hardy paradoxes that allow one to self-test general pure two-qubit entangled states, as well as to certify up to $1$ bit of local randomness. We then use these tilted Hardy tests to obtain an improvement in the generation rate in the state-of-the-art randomness amplification protocols for Santha-Vazirani (SV) sources with arbitrarily limited measurement independence. Our result shows that device-independent randomness amplification is possible for arbitrarily biased SV sources and from almost separable states. Finally, we introduce a family of Hardy tests for maximally entangled states of local dimension $4$ and $8$ as potential candidates for DI randomness extraction to certify up to the maximum possible $2 \log d$ bits of global randomness.


Although tensor networks are powerful tools for simulating low-dimensional quantum physics, tensor network algorithms are very computationally costly in higher spatial dimensions. We introduce $\textit{quantum gauge networks}$: a different kind of tensor network ansatz for which the computational cost of simulations does not explicitly increase for larger spatial dimensions. We take inspiration from the gauge picture of quantum dynamics, which consists of a local wavefunction for each patch of space, with neighboring patches related by unitary connections. A quantum gauge network (QGN) has a similar structure, except the Hilbert space dimensions of the local wavefunctions and connections are truncated. We describe how a QGN can be obtained from a generic wavefunction or matrix product state (MPS). All $2k$-point correlation functions of any wavefunction for $M$ many operators can be encoded exactly by a QGN with bond dimension $O(M^k)$. In comparison, for just $k=1$, an exponentially larger bond dimension of $2^{M/6}$ is generically required for an MPS of qubits. We provide a simple QGN algorithm for approximate simulations of quantum dynamics in any spatial dimension. The approximate dynamics can achieve exact energy conservation for time-independent Hamiltonians, and spatial symmetries can also be maintained exactly. We benchmark the algorithm by simulating the quantum quench of fermionic Hamiltonians in up to three spatial dimensions.
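The MPS side of the comparison is easy to make concrete: an exact MPS is obtained from a generic wavefunction by sequential SVDs, and its bond dimensions are the Schmidt ranks across each cut. The sketch below (hedged: this illustrates only the MPS conversion the paper starts from, not the QGN construction or its truncation) reads off those bond dimensions:

```python
import numpy as np

def mps_bond_dims(psi, n, tol=1e-10):
    """Exact MPS bond dimensions of an n-qubit state via sequential SVDs."""
    dims = []
    m = psi.reshape(1, -1)
    for _ in range(n - 1):
        m = m.reshape(m.shape[0] * 2, -1)  # absorb the next physical leg
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        keep = s > tol
        dims.append(int(np.sum(keep)))
        m = s[keep, None] * vh[keep]  # pass the remainder down the chain
    return dims

n = 8
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(mps_bond_dims(ghz, n))  # [2, 2, 2, 2, 2, 2, 2]
```

For generic states the bond dimension grows exponentially with system size, which is the regime where the abstract's $O(M^k)$ QGN encoding of $2k$-point correlations becomes advantageous.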

A recent paper by two of us and co-workers [1], based on an extended Wigner's friend scenario, demonstrated that certain empirical correlations predicted by quantum theory (QT) violate inequalities derived from a set of metaphysical assumptions we called "Local Friendliness" (LF). These assumptions are strictly weaker than those used for deriving Bell inequalities. Crucial to the theorem was the premise that a quantum system with reversible evolution could be an observer (colloquially, a "friend"). However, that paper was noncommittal on what would constitute an observer for the purpose of an experiment. Here, we present a new LF no-go theorem which takes seriously the idea that a system's having $thoughts$ is a sufficient condition for it to be an observer. Our new derivation of the LF inequalities uses four metaphysical assumptions, three of which are thought-related, including one that is explicitly called "Friendliness". These four assumptions, in conjunction, allow one to derive LF inequalities for experiments involving the type of system that "Friendliness" refers to. In addition to these four metaphysical assumptions, this new no-go theorem requires two assumptions about what is $technologically$ feasible: Human-Level Artificial Intelligence, and Universal Quantum Computing which is fast and large scale. The latter is often motivated by the belief that QT is universal, but this is $not$ an assumption of the theorem. The intent of the new theorem is to give a clear goal for future experimentalists, and a clear motivation for trying to achieve that goal. We review various approaches to QT in light of our theorem. The popular stance that "quantum theory needs no interpretation" does not question any of our assumptions and so is ruled out. Finally, we quantitatively discuss how difficult the experiment we envisage would be, and briefly discuss milestones on the paths towards it.

Challenging combinatorial optimization problems are ubiquitous in science and engineering. Several quantum methods for optimization have recently been developed, in different settings including both exact and approximate solvers. Addressing this field of research, this manuscript has three distinct purposes. First, we present an intuitive method for synthesizing and analyzing discrete ($i.e.,$ integer-based) optimization problems, wherein the problem and corresponding algorithmic primitives are expressed using a discrete quantum intermediate representation (DQIR) that is encoding-independent. This compact representation often allows for more efficient problem compilation, automated analyses of different encoding choices, easier interpretability, more complex runtime procedures, and richer programmability, as compared to previous approaches, which we demonstrate with a number of examples. Second, we perform numerical studies comparing several qubit encodings; the results exhibit a number of preliminary trends that help guide the choice of encoding for a particular set of hardware and a particular problem and algorithm. Our study includes problems related to graph coloring, the traveling salesperson problem, factory/machine scheduling, financial portfolio rebalancing, and integer linear programming. Third, we design low-depth graph-derived partial mixers (GDPMs) up to 16-level quantum variables, demonstrating that compact (binary) encodings are more amenable to QAOA than previously understood. We expect this toolkit of programming abstractions and low-level building blocks to aid in designing quantum algorithms for discrete combinatorial problems.

We present two quantum interior point methods for semidefinite optimization problems, building on recent advances in quantum linear system algorithms. The first scheme, more similar to a classical solution algorithm, computes an inexact search direction and is not guaranteed to explore only feasible points; the second scheme uses a nullspace representation of the Newton linear system to ensure feasibility even with inexact search directions. The second is a novel scheme that might seem impractical in the classical world, but it is well-suited for a hybrid quantum-classical setting. We show that both schemes converge to an optimal solution of the semidefinite optimization problem under standard assumptions. By comparing the theoretical performance of classical and quantum interior point methods with respect to various input parameters, we show that our second scheme obtains a speedup over classical algorithms in terms of the dimension of the problem $n$, but has worse dependence on other numerical parameters.

In the lead-up to fault tolerance, the utility of quantum computing will be determined by how adequately the effects of noise can be circumvented in quantum algorithms. Hybrid quantum-classical algorithms such as the variational quantum eigensolver (VQE) have been designed for the short-term regime. However, as problems scale, VQE results are generally scrambled by noise on present-day hardware. While error mitigation techniques alleviate these issues to some extent, there is a pressing need to develop algorithmic approaches with higher robustness to noise. Here, we explore the robustness properties of the recently introduced quantum computed moments (QCM) approach to ground state energy problems, and show through an analytic example how the underlying energy estimate explicitly filters out incoherent noise. Motivated by this observation, we implement QCM for a model of quantum magnetism on IBM Quantum hardware to examine the noise-filtering effect with increasing circuit depth. We find that QCM maintains a remarkably high degree of error robustness where VQE completely fails. On instances of the quantum magnetism model up to 20 qubits for ultra-deep trial state circuits of up to 500 CNOTs, QCM is still able to extract reasonable energy estimates. The observation is bolstered by an extensive set of experimental results. To match these results, VQE would need hardware improvement by some 2 orders of magnitude on error rates.

Efficient methods for the representation and simulation of quantum states and quantum operations are crucial for the optimization of quantum circuits. Decision diagrams (DDs), a well-studied data structure originally used to represent Boolean functions, have proven capable of capturing relevant aspects of quantum systems, but their limits are not well understood. In this work, we investigate and bridge the gap between existing DD-based structures and the stabilizer formalism, an important tool for simulating quantum circuits in the tractable regime. We first show that although DDs were suggested to succinctly represent important quantum states, they actually require exponential space for certain stabilizer states. To remedy this, we introduce a more powerful decision diagram variant, called Local Invertible Map-DD (LIMDD). We prove that the set of quantum states represented by poly-sized LIMDDs strictly contains the union of stabilizer states and other decision diagram variants. Finally, there exist circuits which LIMDDs can efficiently simulate, while their output states cannot be succinctly represented by two state-of-the-art simulation paradigms: the stabilizer decomposition techniques for Clifford + $T$ circuits and Matrix-Product States. By uniting two successful approaches, LIMDDs thus pave the way for fundamentally more powerful solutions for simulation and analysis of quantum computing.

Recently, quantum computing experiments have for the first time exceeded the capability of classical computers to perform certain computations – a milestone termed "quantum computational advantage." However, verifying the output of the quantum device in these experiments required extremely large classical computations. An exciting next step for demonstrating quantum capability would be to implement tests of quantum computational advantage with efficient classical verification, such that larger system sizes can be tested and verified. One of the first proposals for an efficiently-verifiable test of quantumness consists of hiding a secret classical bitstring inside a circuit of the class IQP, in such a way that samples from the circuit's output distribution are correlated with the secret. The classical hardness of this protocol has been supported by evidence that directly simulating IQP circuits is hard, but the security of the protocol against other (non-simulating) classical attacks has remained an open question. In this work we demonstrate that the protocol is not secure against classical forgery. We describe a classical algorithm that can not only convince the verifier that the (classical) prover is quantum, but can in fact extract the secret key underlying a given protocol instance. Furthermore, we show that the key extraction algorithm is efficient in practice for problem sizes of hundreds of qubits. Finally, we provide an implementation of the algorithm, and give the secret vector underlying the "\$25 challenge" posted online by the authors of the original paper.

We introduce distance measures between quantum states, measurements, and channels based on their statistical distinguishability in generic experiments. Specifically, we analyze the average Total Variation Distance (TVD) between output statistics of protocols in which quantum objects are intertwined with random circuits and measured in the standard basis. We show that for circuits forming approximate 4-designs, the average TVDs can be approximated by simple explicit functions of the underlying objects – the average-case distances (ACDs). We apply them to analyze the effects of noise in quantum advantage experiments and for efficient discrimination of high-dimensional states and channels without quantum memory. We argue that ACDs are better suited for assessing the quality of NISQ devices than common distance measures such as trace distance or the diamond norm.
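The total variation distance underlying these average-case comparisons is elementary to compute once the output statistics are in hand; the sketch below, using made-up example distributions, is only meant to fix the definition, not to reproduce the paper's random-circuit protocol.

```python
import numpy as np

def tvd(p, q):
    """Total variation distance between two probability distributions:
    TVD(p, q) = (1/2) * sum_i |p_i - q_i|, ranging from 0 to 1."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * float(np.abs(p - q).sum())

# Illustrative output statistics of an ideal vs. a noisy device (made-up numbers).
ideal = [0.5, 0.25, 0.125, 0.125]
noisy = [0.4, 0.3, 0.2, 0.1]
d = tvd(ideal, noisy)   # -> 0.125
```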

We study the problem of testing identity of a collection of unknown quantum states given sample access to this collection, each state appearing with some known probability. We show that for a collection of $d$-dimensional quantum states of cardinality $N$, the sample complexity is $O(\sqrt{N}d/\epsilon^2)$, with a matching lower bound, up to a multiplicative constant. The test is obtained by estimating the mean squared Hilbert-Schmidt distance between the states, thanks to a suitable generalization of the estimator of the Hilbert-Schmidt distance between two unknown states by Bădescu, O'Donnell, and Wright [13].
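The squared Hilbert-Schmidt distance at the heart of the test, $\mathrm{Tr}[(\rho-\sigma)^2]$, is straightforward to evaluate when the density matrices are known; the hard part addressed by the paper is estimating it from samples of unknown states. The sketch below, with illustrative single-qubit states, only fixes the quantity being estimated.

```python
import numpy as np

def hs_dist_sq(rho, sigma):
    """Squared Hilbert-Schmidt distance Tr[(rho - sigma)^2] between density matrices."""
    diff = rho - sigma
    return float(np.trace(diff @ diff).real)

# Illustrative single-qubit density matrices.
rho   = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
d2 = hs_dist_sq(rho, sigma)                  # -> 1.0
```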

A powerful operational paradigm for distributed quantum information processing involves manipulating pre-shared entanglement by local operations and classical communication (LOCC). The LOCC round complexity of a given task describes how many rounds of classical communication are needed to complete the task. Despite some results separating one-round versus two-round protocols, very little is known about higher round complexities. In this paper, we revisit the task of one-shot random-party entanglement distillation as a way to highlight some interesting features of LOCC round complexity. We first show that for random-party distillation in three qubits, the number of communication rounds needed in an optimal protocol depends on the entanglement measure used; for the same fixed state some entanglement measures need only two rounds to maximize whereas others need an unbounded number of rounds. In doing so, we construct a family of LOCC instruments that require an unbounded number of rounds to implement. We then prove explicit tight lower bounds on the LOCC round number as a function of distillation success probability. Our calculations show that the original W-state random distillation protocol by Fortescue and Lo is essentially optimal in terms of round complexity.

We show that the proof of the generalised quantum Stein's lemma [Brandão & Plenio, Commun. Math. Phys. 295, 791 (2010)] is not correct due to a gap in the argument leading to Lemma III.9. Hence, the main achievability result of Brandão & Plenio is not known to hold. This puts into question a number of established results in the literature, in particular the reversibility of quantum entanglement [Brandão & Plenio, Commun. Math. Phys. 295, 829 (2010); Nat. Phys. 4, 873 (2008)] and of general quantum resources [Brandão & Gour, Phys. Rev. Lett. 115, 070503 (2015)] under asymptotically resource non-generating operations. We discuss potential ways to recover variants of the newly unsettled results using other approaches.

Integral representations of quantum relative entropy, and of the directional second and higher order derivatives of von Neumann entropy, are established, and used to give simple proofs of fundamental, known data processing inequalities: the Holevo bound on the quantity of information transmitted by a quantum communication channel, and, much more generally, the monotonicity of quantum relative entropy under trace-preserving positive linear maps – complete positivity of the map need not be assumed. The latter result was first proved by Müller-Hermes and Reeb, based on work of Beigi. For a simple application of such monotonicities, we consider any `divergence' that is non-increasing under quantum measurements, such as the concavity of von Neumann entropy, or various known quantum divergences. An elegant argument due to Hiai, Ohya, and Tsukada is used to show that the infimum of such a `divergence' on pairs of quantum states with prescribed trace distance is the same as the corresponding infimum on pairs of binary classical states. Applications of the new integral formulae to the general probabilistic model of information theory, and a related integral formula for the classical Rényi divergence, are also discussed.
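For concreteness, the quantum relative entropy in question is $D(\rho\|\sigma) = \mathrm{Tr}\,\rho(\log\rho - \log\sigma)$. The sketch below evaluates it via eigendecomposition for illustrative full-rank qubit states (diagonal, so it reduces to a classical KL divergence) and checks Klein's inequality $D \geq 0$; it illustrates only the definition, not the integral representations of the paper.

```python
import numpy as np

def matrix_log(rho):
    """Matrix logarithm of a positive definite Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    return (v * np.log(w)) @ v.conj().T   # v diag(log w) v^dagger

def rel_entropy(rho, sigma):
    """Quantum relative entropy D(rho||sigma) = Tr rho (log rho - log sigma),
    assuming both states are full rank."""
    return float(np.trace(rho @ (matrix_log(rho) - matrix_log(sigma))).real)

# Illustrative full-rank qubit states.
rho   = np.diag([0.9, 0.1])
sigma = np.diag([0.5, 0.5])
D = rel_entropy(rho, sigma)   # nonnegative by Klein's inequality
```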

An orthogonal set of states in a multipartite system is said to exhibit strong quantum nonlocality if it is locally irreducible under every bipartition of the subsystems [46]. In this work, we study a subclass of locally irreducible sets: those for which the only orthogonality-preserving measurements on each subsystem are trivial measurements. We call a set with this property locally stable. We find that in two-qubit systems, locally stable sets coincide with locally indistinguishable sets. Then we present a characterization of locally stable sets via the dimensions of certain state-dependent spaces. Moreover, we construct two orthogonal sets in general multipartite quantum systems which are locally stable under every bipartition of the subsystems. As a consequence, we obtain a lower bound and an upper bound on the size of the smallest set which is locally stable for each bipartition of the subsystems. Our results provide a complete answer to an open question (namely, can we show strong quantum nonlocality in $\mathbb{C}^{d_1} \otimes \mathbb{C}^{d_2}\otimes \cdots \otimes \mathbb{C}^{d_N}$ for any $d_i \geq 2$ and $1\leq i\leq N$?) raised in a recent paper [54]. Compared with all previous relevant proofs, our proof here is quite concise.

The Zeno effect, in which repeated observation freezes the dynamics of a quantum system, stands as an iconic oddity of quantum mechanics. When a measurement is unable to distinguish between states in a subspace, the dynamics within that subspace can be profoundly altered, leading to non-trivial behavior. Here we show that such a measurement can turn a non-interacting system with only single-qubit control into a two- or multi-qubit entangling gate, which we call a Zeno gate. The gate works by imparting a geometric phase on the system, conditioned on it lying within a particular nonlocal subspace. We derive simple closed-form expressions for the gate fidelity under a number of non-idealities and show that the gate is viable for implementation in circuit and cavity QED systems. More specifically, we illustrate the functioning of the gate via dispersive readout in both the Markovian and non-Markovian readout regimes, and derive conditions for longitudinal readout to ideally realize the gate.

]]>Spin-photon interfaces (SPIs) are key devices of quantum technologies, aimed at coherently transferring quantum information between spin qubits and propagating pulses of polarized light. We study the potential of an SPI for quantum nondemolition (QND) measurements of a spin state. After being initialized and scattered by the SPI, the state of a light pulse depends on the spin state. It thus plays the role of a pointer state, information being encoded in the light's temporal and polarization degrees of freedom. Building on the fully Hamiltonian resolution of the spin-light dynamics, we show that quantum superpositions of zero- and single-photon states outperform coherent pulses of light, producing pointer states which are more distinguishable with the same photon budget. The energetic advantage provided by quantum pulses over coherent ones is maintained when information on the spin state is extracted at the classical level by performing projective measurements on the light pulses. The proposed schemes are robust against imperfections in state-of-the-art semiconducting devices.

]]>In standard quantum mechanics, reference frames are treated as abstract entities. We can think of them as idealized, infinite-mass subsystems which decouple from the rest of the system. In nature, however, all reference frames are realized through finite-mass systems that are subject to the laws of quantum mechanics and must be included in the dynamical evolution. A fundamental physical theory should take this fact seriously. In this paper, we further develop a symmetry-inspired approach to describe physics from the perspective of quantum reference frames. We find a unifying framework allowing us to systematically derive a broad class of perspective-dependent descriptions and the transformations between them. Working with a translation-invariant toy model of three free particles, we discover that the introduction of relative coordinates leads to a Hamiltonian structure with two non-commuting constraints. This structure can be said to contain all observer perspectives at once, while the redundancies prevent an immediate operational interpretation. We show that the operationally meaningful perspective-dependent descriptions are given by Darboux coordinates on the constraint surface and that reference frame transformations correspond to reparametrizations of the constraint surface. We conclude by constructing a quantum perspective-neutral structure, via which we can derive and change perspective-dependent descriptions without referring to the classical theory. In addition to the physical findings, this work illuminates the interrelation of first- and second-class constrained systems and their respective quantization procedures.

]]>Linear optical quantum circuits with photon number resolving (PNR) detectors are used for both Gaussian Boson Sampling (GBS) and for the preparation of non-Gaussian states such as Gottesman-Kitaev-Preskill (GKP), cat and NOON states. They are crucial in many schemes of quantum computing and quantum metrology. Classically optimizing circuits with PNR detectors is challenging due to their exponentially large Hilbert space, and quadratically more challenging in the presence of decoherence, as state vectors are replaced by density matrices. To tackle this problem, we introduce a family of algorithms that calculate detection probabilities and conditional states (as well as their gradients with respect to circuit parametrizations) with a complexity that is comparable to the noiseless case. As a consequence, we can simulate and optimize circuits with twice the number of modes as we could before, using the same resources. More precisely, for an $M$-mode noisy circuit with detected modes $D$ and undetected modes $U$, the complexity of our algorithm is $O(M^2 \prod_{i \mskip2mu \in \mskip2mu U} C_i^2 \prod_{i \mskip2mu \in \mskip2mu D} C_i)$, rather than $O(M^2 \prod_{\mskip2mu i \mskip2mu \in \mskip2mu D \mskip3mu \cup \mskip3mu U} C_i^2)$, where $C_i$ is the Fock cutoff of mode $i$. As a particular case, our approach offers a full quadratic speedup for calculating detection probabilities, as in that case all modes are detected. Finally, these algorithms are implemented and ready to use in the open-source photonic optimization library MrMustard.

Animated versions of some figures in the manuscript (GIFs) are included in the Supplementary Materials, as well as here: https://github.com/rdprins/GIFs_NoisyCircuits
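
As a rough illustration of the stated scaling (the function names, mode count, and cutoffs below are our own choices, not MrMustard code), the two complexity formulas can be compared directly:

```python
from math import prod

# Illustrative cost models transcribed from the stated complexities
# (not actual MrMustard internals): in the new algorithm, detected
# modes D contribute C_i while undetected modes U contribute C_i^2;
# in the old approach, every mode contributes C_i^2.
def cost_old(M, cutoffs_D, cutoffs_U):
    return M**2 * prod(c**2 for c in cutoffs_D + cutoffs_U)

def cost_new(M, cutoffs_D, cutoffs_U):
    return M**2 * prod(c for c in cutoffs_D) * prod(c**2 for c in cutoffs_U)

# A 4-mode circuit with every mode detected (the full quadratic
# speedup case), Fock cutoff 10 in each mode:
print(cost_old(4, [10] * 4, []))  # 1600000000
print(cost_new(4, [10] * 4, []))  # 160000
```

With all modes detected, the detected-mode factor drops from $\prod_i C_i^2$ to $\prod_i C_i$, which is the quadratic speedup claimed in the abstract.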

]]>Neural network approaches to approximate the ground state of quantum Hamiltonians require the numerical solution of a highly nonlinear optimization problem. We introduce a statistical learning approach that makes the optimization trivial by using kernel methods. Our scheme is an approximate realization of the power method, where supervised learning is used to learn the next step of the power iteration. We show that the ground state properties of arbitrary gapped quantum Hamiltonians can be reached with polynomial resources under the assumption that the supervised learning is efficient. Using kernel ridge regression, we provide numerical evidence that the learning assumption is verified by applying our scheme to find the ground states of several prototypical interacting many-body quantum systems, both in one and two dimensions, showing the flexibility of our approach.
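
The backbone of such a scheme, stripped of the learning component, is ordinary power iteration on a shifted Hamiltonian: the dominant eigenvector of $A = s\mathbb{1} - H$ (for a large enough shift $s$) is the ground state of $H$. A minimal pure-Python sketch on a toy $2\times 2$ Hamiltonian of our own choosing (in the paper, the application of $A$ at each step is *learned* via kernel ridge regression rather than computed exactly):

```python
# Toy power method: the ground state of H is the dominant eigenvector
# of the shifted operator A = s*I - H when s exceeds the largest
# eigenvalue of H.
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

H = [[1.0, 0.5], [0.5, -1.0]]  # toy Hamiltonian, eigenvalues +-sqrt(1.25)
s = 3.0                        # shift larger than the spectral radius of H
A = [[s * (i == j) - H[i][j] for j in range(2)] for i in range(2)]

v = normalize([1.0, 1.0])
for _ in range(200):
    v = normalize(matvec(A, v))  # in the paper, this step is learned

# Rayleigh quotient gives the ground-state energy, here -sqrt(1.25)
energy = sum(v[i] * matvec(H, v)[i] for i in range(2))
```

The interesting part of the paper is precisely that the exact `matvec(A, v)` step above can be replaced by a regression model trained on samples, while keeping the iteration convergent.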

]]>Recent studies showed the finite-size security of binary-modulation CV-QKD protocols against general attacks. However, they gave poor key-rate scaling with transmission distance. Here, we extend the security proof based on complementarity, which is used in discrete-variable QKD, to the previously developed binary-modulation CV-QKD protocols with reverse reconciliation in the finite-size regime, and obtain large improvements in the key rates. Notably, the key rate in the asymptotic limit scales linearly with the attenuation rate, which is known to be the optimal scaling but was not achieved in previous finite-size analyses. This refined security approach may offer full-fledged security proofs for other discrete-modulation CV-QKD protocols.

]]>