Conventional methods of quantum simulation involve trade-offs that limit their applicability to specific contexts where their use is optimal. In particular, interaction picture simulation has been found to provide substantial asymptotic advantages for some Hamiltonians, but it incurs prohibitive constant factors and is incompatible with methods like qubitization. We provide a framework that allows different simulation methods to be hybridized, thereby improving performance for interaction picture simulations over known algorithms. These hybrid approaches show asymptotic improvements over the individual methods that comprise them and further make interaction picture simulation methods practical in the near term. Physical applications of these hybridized methods yield a gate complexity scaling as $\log^2 \Lambda$ in the electric cutoff $\Lambda$ for the Schwinger model and independent of the electron density for collective neutrino oscillations, outperforming the scaling of all current algorithms with these parameters. For the general problem of Hamiltonian simulation subject to dynamical constraints, these methods yield a query complexity independent of the penalty parameter $\lambda$ used to impose an energy cost on time evolution into an unphysical subspace.

Recently, table-top experiments involving massive quantum systems have been proposed to test the interface of quantum theory and gravity. In particular, the crucial point of the debate is whether it is possible to conclude anything on the quantum nature of the gravitational field, provided that two quantum systems become entangled solely due to the gravitational interaction. Typically, this question has been addressed by assuming a specific physical theory to describe the gravitational interaction, but no systematic approach to characterise the set of possible gravitational theories which are compatible with the observation of entanglement has been proposed. Here, we remedy this by introducing the framework of Generalised Probabilistic Theories (GPTs) to the study of the nature of the gravitational field. This framework enables us to systematically study all theories compatible with the detection of entanglement generated via the gravitational interaction between two systems. We prove a no-go theorem stating that the following statements are incompatible: i) gravity is able to generate entanglement; ii) gravity mediates the interaction between the systems; iii) gravity is classical. We analyse the violation of each condition, in particular with respect to alternative non-linear models such as the Schrödinger-Newton equation and Collapse Models.

Mutually unbiased bases correspond to highly useful pairs of measurements in quantum information theory. In the smallest composite dimension, six, it is known that between three and seven mutually unbiased bases exist, with a decades-old conjecture, known as Zauner's conjecture, stating that there exist at most three. Here we tackle Zauner's conjecture numerically through the construction of Bell inequalities for every pair of integers $n,d \ge 2$ that can be maximally violated in dimension $d$ if and only if $n$ MUBs exist in that dimension. Hence we turn Zauner's conjecture into an optimisation problem, which we address by means of three numerical methods: see-saw optimisation, non-linear semidefinite programming and Monte Carlo techniques. All three methods correctly identify the known cases in low dimensions and all suggest that there do not exist four mutually unbiased bases in dimension six, with all finding the same bases that numerically optimise the corresponding Bell inequality. Moreover, these numerical optimisers appear to coincide with the ``four most distant bases'' in dimension six, found through numerically optimising a distance measure in [P. Raynal, X. Lü, B.-G. Englert, Phys. Rev. A 83, 062303 (2011)]. Finally, the Monte Carlo results suggest that at most three MUBs exist in dimension ten.
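
Two orthonormal bases $\{|e_i\rangle\}$ and $\{|f_j\rangle\}$ of $\mathbb{C}^d$ are mutually unbiased when $|\langle e_i|f_j\rangle|^2 = 1/d$ for all $i,j$. As a minimal self-contained illustration (not part of the paper's numerics), the computational and discrete-Fourier bases form such a pair in dimension six:

    import numpy as np

    # Verify |<e_i|f_j>|^2 = 1/d for the computational and Fourier bases.
    d = 6
    computational = np.eye(d)  # rows are the basis vectors e_i
    fourier = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
    overlaps = np.abs(computational @ fourier.conj().T) ** 2
    assert np.allclose(overlaps, 1 / d)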

We propose and assess an alternative quantum generator architecture in the context of generative adversarial learning for Monte Carlo event generation, used to simulate particle physics processes at the Large Hadron Collider (LHC). We validate this methodology by implementing the quantum network on artificial data generated from known underlying distributions. The network is then applied to Monte Carlo-generated datasets of specific LHC scattering processes. The new quantum generator architecture leads to a generalization of the state-of-the-art implementations, achieving smaller Kullback-Leibler divergences even with shallow-depth networks. Moreover, the quantum generator successfully learns the underlying distribution functions even if trained with small training sample sets; this is particularly interesting for data augmentation applications. We deploy this novel methodology on two different quantum hardware architectures, trapped-ion and superconducting technologies, to test its hardware-independent viability.
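
As a stand-in for the figure of merit used above, the Kullback-Leibler divergence between binned reference and generated samples can be computed as follows (illustrative Gaussian data, not the paper's LHC datasets):

    import numpy as np
    from scipy.stats import entropy

    ref = np.random.default_rng(1).normal(0.0, 1.0, 10_000)  # stand-in training data
    gen = np.random.default_rng(2).normal(0.1, 1.1, 10_000)  # stand-in generator output
    bins = np.linspace(-5.0, 5.0, 51)
    p, _ = np.histogram(ref, bins=bins)
    q, _ = np.histogram(gen, bins=bins)
    eps = 1e-12  # regularize empty bins
    print(entropy(p + eps, q + eps))  # KL(p || q); smaller means a better generator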

The classical shadows protocol, recently introduced by Huang, Kueng, and Preskill [Nat. Phys. 16, 1050 (2020)], is a quantum-classical protocol to estimate properties of an unknown quantum state. Unlike full quantum state tomography, the protocol can be implemented on near-term quantum hardware and requires few quantum measurements to make many predictions with a high success probability. In this paper, we study the effects of noise on the classical shadows protocol. In particular, we consider the scenario in which the quantum circuits involved in the protocol are subject to various known noise channels and derive an analytical upper bound for the sample complexity in terms of a shadow seminorm for both local and global noise. Additionally, by modifying the classical post-processing step of the noiseless protocol, we define a new estimator that remains unbiased in the presence of noise. As applications, we show that our results can be used to prove rigorous sample complexity upper bounds in the cases of depolarizing noise and amplitude damping.
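
For intuition, in the noiseless protocol with random single-qubit Pauli measurements the measurement channel is inverted by $\mathcal{M}^{-1}(X) = 3X - \mathrm{tr}(X)\,I$ on each qubit. A one-qubit toy simulation of that standard protocol (a sketch for orientation, not this paper's noise-adapted estimator):

    import numpy as np

    rng = np.random.default_rng(0)
    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    rho = 0.5 * (I2 + 0.6 * Z + 0.3 * X)  # the "unknown" state to be learned

    snapshots = []
    for _ in range(20_000):
        P = [X, Y, Z][rng.integers(3)]  # random Pauli measurement basis
        _, evecs = np.linalg.eigh(P)
        probs = np.real([v.conj() @ rho @ v for v in evecs.T])  # Born rule
        k = rng.choice(2, p=probs / probs.sum())
        v = evecs[:, k]
        snapshots.append(3 * np.outer(v, v.conj()) - I2)  # inverted snapshot

    rho_hat = np.mean(snapshots, axis=0)
    print(np.trace(Z @ rho_hat).real)  # concentrates near the true <Z> = 0.6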

The task of determining whether a given quantum channel has positive capacity to transmit quantum information is a fundamental open problem in quantum information theory. In general, the coherent information needs to be computed for an unbounded number of copies of a channel in order to detect a positive value of its quantum capacity. However, in this paper, we show that the coherent information of a $\textit{single copy}$ of a $\textit{randomly selected channel}$ is positive almost surely if the channel's output space is larger than its environment. Hence, in this case, a single copy of the channel typically suffices to determine positivity of its quantum capacity. Put differently, channels with zero coherent information have measure zero in the subset of channels for which the output space is larger than the environment. On the other hand, if the environment is larger than the channel's output space, identical results hold for the channel's complement.
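
The claim can be probed numerically by sampling a Stinespring isometry $V:\mathbb{C}^d \to \mathbb{C}^{d_{\rm out}}\otimes\mathbb{C}^{d_{\rm env}}$ at random and evaluating the coherent information of a single copy at the maximally mixed input (a hypothetical sketch, with Haar-random sampling chosen purely for illustration):

    import numpy as np
    from scipy.stats import unitary_group

    def entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-np.sum(w * np.log2(w)))

    d, d_out, d_env = 3, 4, 2  # output space larger than the environment
    V = unitary_group.rvs(d_out * d_env, random_state=0)[:, :d]  # random isometry
    sigma = (V @ (np.eye(d) / d) @ V.conj().T).reshape(d_out, d_env, d_out, d_env)
    rho_B = np.trace(sigma, axis1=1, axis2=3)  # channel output N(rho)
    rho_E = np.trace(sigma, axis1=0, axis2=2)  # complementary output N^c(rho)
    print(entropy(rho_B) - entropy(rho_E))     # coherent information; typically > 0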

We introduce Mitiq, a Python package for error mitigation on noisy quantum computers. Error mitigation techniques can reduce the impact of noise on near-term quantum computers with minimal overhead in quantum resources by relying on a mixture of quantum sampling and classical post-processing techniques. Mitiq is an extensible toolkit of different error mitigation methods, including zero-noise extrapolation, probabilistic error cancellation, and Clifford data regression. The library is designed to be compatible with generic backends and interfaces with different quantum software frameworks. We describe Mitiq using code snippets to demonstrate usage and discuss features and contribution guidelines. We present several examples demonstrating error mitigation on IBM and Rigetti superconducting quantum processors as well as on noisy simulators.
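
A minimal zero-noise-extrapolation sketch in the spirit of the paper's snippets, assuming the mitiq and cirq packages are installed (the depolarizing-noise executor below is a user-supplied stand-in, not part of Mitiq):

    import cirq
    from mitiq import zne

    qubit = cirq.LineQubit(0)
    circuit = cirq.Circuit([cirq.X(qubit)] * 10)  # equals the identity when noiseless

    def executor(circ: cirq.Circuit) -> float:
        """Run on a noisy density-matrix simulator; return the |0><0| population."""
        noisy = circ.with_noise(cirq.depolarize(p=0.01))
        rho = cirq.DensityMatrixSimulator().simulate(noisy).final_density_matrix
        return rho[0, 0].real

    print(executor(circuit))                        # noisy value, below 1
    print(zne.execute_with_zne(circuit, executor))  # extrapolated value, closer to 1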

Quantum state preparation is an important ingredient for other higher-level quantum algorithms, such as Hamiltonian simulation, or for loading distributions into a quantum device to be used e.g. in the context of optimization tasks such as machine learning. Starting with a generic "black box" method devised by Grover in 2000, which employs amplitude amplification to load coefficients calculated by an oracle, there has been a long series of results and improvements with various additional conditions on the amplitudes to be loaded, culminating in Sanders et al.'s work which avoids almost all arithmetic during the preparation stage. In this work, we construct an optimized black box state loading scheme with which various important sets of coefficients can be loaded in significantly fewer than $O(\sqrt N)$ rounds of amplitude amplification, down to as few as $O(1)$. We achieve this with two variants of our algorithm. The first employs a modification of the oracle from Sanders et al., which requires fewer ancillas ($\log_2 g$ vs. $g+2$ in the bit precision $g$) and fewer non-Clifford operations per amplitude amplification round within the context of our algorithm. The second utilizes the same oracle, but at a slightly increased cost in terms of ancillas ($g+\log_2 g$) and non-Clifford operations per amplification round. As the number of amplitude amplification rounds enters as a multiplicative factor, our black box state loading scheme yields an up to exponential speedup as compared to prior methods. This speedup translates beyond the black box case.
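
For orientation, the round count in Grover-style black-box loading follows from standard amplitude amplification (this accounting is general background, not a formula quoted from the paper): if a single oracle-driven attempt succeeds with probability $p$, then

$$ R = O\!\left(1/\sqrt{p}\right), \qquad p = \frac{\sum_{i=0}^{N-1}|a_i|^2}{N\,a_{\max}^2}, $$

so coefficient families with $\sum_i |a_i|^2 = \Theta(N a_{\max}^2)$ (e.g., nearly flat amplitude profiles) require only $O(1)$ rounds, while a single dominant amplitude forces the $O(\sqrt N)$ worst case.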

We provide a stochastic interpretation of non-commutative Dirichlet forms in the context of quantum filtering. For stochastic processes motivated by quantum optics experiments, we derive an optimal finite time deviation bound expressed in terms of the non-commutative Dirichlet form. Introducing and developing new non-commutative functional inequalities, we deduce concentration inequalities for these processes. Examples satisfying our bounds include tensor products of quantum Markov semigroups as well as Gibbs samplers above a threshold temperature.

Detection-efficiency mismatch is a common problem in practical quantum key distribution (QKD) systems. Current security proofs of QKD with detection-efficiency mismatch rely either on the assumption of a single-photon light source on the sender side or on the assumption of a single-photon input on the receiver side. These assumptions impose restrictions on the class of possible eavesdropping strategies. Here we present a rigorous security proof without these assumptions and, thus, solve this important problem and prove the security of QKD with detection-efficiency mismatch against general attacks (in the asymptotic regime). In particular, we adapt the decoy state method to the case of detection-efficiency mismatch.

We introduce a quantum algorithm to compute the market risk of financial derivatives. Previous work has shown that quantum amplitude estimation can accelerate derivative pricing quadratically in the target error, and we extend this to a quadratic error scaling advantage in market risk computation. We show that employing quantum gradient estimation algorithms can deliver a further quadratic advantage in the number of associated market sensitivities, usually called $\textit{greeks}$. By numerically simulating the quantum gradient estimation algorithms on financial derivatives of practical interest, we demonstrate that not only can we successfully estimate the greeks in the examples studied, but that the resource requirements can be significantly lower in practice than what is expected by theoretical complexity bounds. This additional advantage in the computation of financial market risk lowers the estimated logical clock rate required for financial quantum advantage from Chakrabarti et al. [Quantum 5, 463 (2021)] by a factor of ~7, from 50 MHz to 7 MHz, even for a modest number of greeks by industry standards (four). Moreover, we show that if we have access to enough resources, the quantum algorithm can be parallelized across 60 QPUs, in which case the logical clock rate of each device required to achieve the same overall runtime as the serial execution would be ~100 kHz. Throughout this work, we summarize and compare several different combinations of quantum and classical approaches that could be used for computing the market risk of financial derivatives.
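
Schematically, with $k$ greeks and target error $\epsilon$, the two quadratic advantages described above compose as follows (a heuristic summary of the scalings in this abstract, ignoring logarithmic factors):

$$ O\!\left(k/\epsilon^{2}\right) \;\xrightarrow{\text{amplitude estimation}}\; O\!\left(k/\epsilon\right) \;\xrightarrow{\text{gradient estimation}}\; O\!\left(\sqrt{k}/\epsilon\right), $$

with the left-hand side the classical Monte Carlo cost of estimating all $k$ sensitivities.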

We present an algorithm to reliably generate various quantum states critical to quantum error correction and universal continuous-variable (CV) quantum computing, such as Schrödinger cat states and Gottesman-Kitaev-Preskill (GKP) grid states, out of Gaussian CV cluster states. Our algorithm is based on the Photon-counting-Assisted Node-Teleportation Method (PhANTM), which uses standard Gaussian information processing on the cluster state with the only addition of local photon-number-resolving measurements. We show that PhANTM can apply polynomial gates and embed cat states within the cluster. This method stabilizes cat states against Gaussian noise and perpetuates non-Gaussianity within the cluster. We show that existing protocols for breeding cat states can be embedded into cluster state processing using PhANTM.

Quantum Phase Estimation is one of the most useful quantum computing algorithms for quantum chemistry, and as such, significant effort has been devoted to designing efficient implementations. In this article, we introduce TFermion, a library designed to estimate the T-gate cost of such algorithms for an arbitrary molecule. As examples of usage, we estimate the T-gate cost of a few simple molecules and compare the same Taylorization algorithms using Gaussian and plane-wave bases.

Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered the individual code qubits being encoded with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code, and then imposed an outer discrete-variable code such as the surface code on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands a lot of resources with growing distance. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms. We first show the noise thresholds for two lifted product QLDPC code families, and then show the improvements of noise thresholds when the iterative decoder, a hardware-friendly min-sum algorithm (MSA), utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder escape harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions that arise from this work on channel capacity under GKP analog information, and on improving decoder design and analysis.
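
For orientation, a plain (flooding-schedule) min-sum decoder over a binary parity-check matrix is sketched below; this is textbook background rather than the paper's implementation, and in the scheme above the GKP analog information would enter through the initial log-likelihood ratios, with the paper's further gains coming from a sequential rather than flooding update schedule:

    import numpy as np

    def min_sum_decode(H, llr, iters=50):
        """Min-sum decoding for parity checks H; llr[i] > 0 favors bit i = 0.
        Analog (e.g., GKP) soft information sets the initial llr values."""
        m, n = H.shape
        msg_cv = np.zeros((m, n))  # check-to-variable messages
        for _ in range(iters):
            total = llr + msg_cv.sum(axis=0)
            msg_vc = np.where(H == 1, total - msg_cv, 0.0)  # variable-to-check
            for c in range(m):
                idx = np.flatnonzero(H[c])
                v = msg_vc[c, idx]
                sgn_all = np.prod(np.sign(v))
                for j, i in enumerate(idx):
                    others = np.abs(np.delete(v, j))
                    msg_cv[c, i] = sgn_all * np.sign(v[j]) * others.min()
            hard = (llr + msg_cv.sum(axis=0)) < 0  # hard decision per bit
            if not np.any((H @ hard) % 2):         # all checks satisfied
                break
        return hard.astype(int)

    H = np.array([[1, 1, 0], [0, 1, 1]])  # 3-bit repetition code
    print(min_sum_decode(H, np.array([-2.0, 1.5, 1.0])))  # -> [0 0 0]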

We study a three-fold variant of the hypergraph product code construction, differing from the standard homological product of three classical codes. When instantiated with 3 classical LDPC codes, this "XYZ product" yields a non-CSS quantum LDPC code which might display a large minimum distance. The simplest instance of this construction, corresponding to the product of 3 repetition codes, is a non-CSS variant of the 3-dimensional toric code known as the Chamon code. The general construction was introduced in Denise Maurice's PhD thesis, but has remained poorly understood so far. The reason is that while hypergraph product codes can be analyzed with combinatorial tools, the XYZ product codes also depend crucially on the algebraic properties of the parity-check matrices of the three classical codes, making their analysis much more involved. Our main motivation for studying XYZ product codes is that the natural representatives of logical operators are two-dimensional objects. This contrasts with standard hypergraph product codes in 3 dimensions, which always admit one-dimensional logical operators. In particular, specific instances of XYZ product codes with constant rate might display a minimum distance as large as $\Theta(N^{2/3})$. While we do not prove this result here, we obtain the dimension of a large class of XYZ product codes, and when restricting to codes with dimension 1, we reduce the problem of computing the minimum distance to a more elementary combinatorial problem involving binary 3-tensors. We also discuss in detail some families of XYZ product codes that can be embedded in three dimensions with local interactions. Some of these codes seem to share properties with Haah's cubic codes and might be interesting candidates for self-correcting quantum memories with a logarithmic energy barrier.

It is well-known that in a Bell experiment, the observed correlation between measurement outcomes, as predicted by quantum theory, can be stronger than that allowed by local causality, yet not fully constrained by the principle of relativistic causality. In practice, the characterization of the set $Q$ of quantum correlations is often carried out through a converging hierarchy of outer approximations. On the other hand, some subsets of $Q$ arising from additional constraints [e.g., originating from quantum states having positive-partial-transposition (PPT) or being finite-dimensional maximally entangled (MES)] turn out to be also amenable to similar numerical characterizations. How, then, at a quantitative level, are all these naturally restricted subsets of nonsignaling correlations different? Here, we consider several bipartite Bell scenarios and numerically estimate their volume relative to that of the set of nonsignaling correlations. Within the number of cases investigated, we have observed that (1) for a given number of inputs $n_s$ (outputs $n_o$), the relative volume of both the Bell-local set and the quantum set increases (decreases) rapidly with increasing $n_o$ ($n_s$); (2) although the so-called macroscopically local set $Q_1$ may approximate $Q$ well in the two-input scenarios, it can be a very poor approximation of the quantum set when $n_s > n_o$; (3) the almost-quantum set $\tilde{Q}_1$ is an exceptionally good approximation to the quantum set; (4) the difference between $Q$ and the set of correlations originating from MES is most significant when $n_o=2$; whereas (5) the difference between the Bell-local set and the PPT set generally becomes more significant with increasing $n_o$. This last comparison, in particular, allows us to identify Bell scenarios where there is little hope of realizing the Bell violation by PPT states and those that deserve further exploration.

Measuring time means counting the occurrences of periodic phenomena. Over the past centuries, major effort has been devoted to making stable and precise oscillators to be used as clock regulators. Here we consider a different class of clocks based on stochastic clicking processes. We provide a rigorous statistical framework to study the performance of such devices and apply our results to a single coherently driven two-level atom under photodetection, as an extreme example of a non-periodic clock. Quantum jump Monte Carlo simulations and the photon-counting waiting-time distribution provide independent checks on the main results.

In general, for a bipartite quantum system $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$ and an integer $k$ such that $4\leq k\le d$, there are few necessary and sufficient conditions for local discrimination of sets of $k$ generalized Bell states (GBSs), and it is difficult to locally distinguish $k$-GBS sets. The purpose of this paper is to completely solve the problem of local discrimination of GBS sets in some bipartite quantum systems. Firstly, three practical and effective sufficient conditions are given; Fan's and Wang et al.'s results [Phys. Rev. Lett. 92, 177905 (2004); Phys. Rev. A 99, 022307 (2019)] can be deduced as special cases of these conditions. Secondly, in $\mathbb{C}^{4}\otimes\mathbb{C}^{4}$, a necessary and sufficient condition for local discrimination of GBS sets is provided, a list of all locally indistinguishable 4-GBS sets is given, and the problem of local discrimination of GBS sets in this system is thus completely solved. Thirdly, in $\mathbb{C}^{5}\otimes\mathbb{C}^{5}$, a concise necessary and sufficient condition for one-way local discrimination of GBS sets is obtained, which gives an affirmative answer to the case $d=5$ of the problem proposed by Wang et al.

We consider a quasi-probability distribution of work for an isolated quantum system coupled to an energy-storage device given by an ideal weight. Specifically, we analyze a trade-off between changes in average energy and changes in the weight's variance, where work is extracted from the coherent and incoherent ergotropy of the system. Primarily, we reveal that the extraction of positive coherent ergotropy can be accompanied by a reduction of work fluctuations (quantified by a variance loss) by utilizing the non-classical states of the work reservoir. On the other hand, we derive a fluctuation-decoherence relation for a quantum weight, defining a lower bound on its energy dispersion via a damping function of the coherent contribution to the system's ergotropy. Specifically, it reveals that unlocking ergotropy from coherences results in high fluctuations, which diverge when the total coherent energy is unlocked. The proposed autonomous protocol of work extraction shows a significant difference between extracting coherent and incoherent ergotropy: the former can decrease the variance, but its absolute value diverges if more and more energy is extracted, whereas for the latter, the gain is always non-negative, but the total (incoherent) ergotropy can be extracted with finite work fluctuations. Furthermore, we present the framework in terms of the introduced quasi-probability distribution, which has a physical interpretation of its cumulants, is free from the invasive nature of measurements, and reduces to the two-point measurement (TPM) scheme for incoherent states. Finally, we analytically solve the work-variance trade-off for a qubit, explicitly revealing all the above quantum and classical regimes.

Significant effort in applied quantum computing has been devoted to the problem of ground state energy estimation for molecules and materials. Yet, for many applications of practical value, additional properties of the ground state must be estimated. These include Green's functions used to compute electron transport in materials and the one-particle reduced density matrices used to compute electric dipoles of molecules. In this paper, we propose a quantum-classical hybrid algorithm to efficiently estimate such ground state properties with high accuracy using low-depth quantum circuits. We provide an analysis of various costs (circuit repetitions, maximal evolution time, and expected total runtime) as a function of target accuracy, spectral gap, and initial ground state overlap. This algorithm suggests a concrete approach to using early fault tolerant quantum computers for carrying out industry-relevant molecular and materials calculations.

Walgate and Scott have determined the maximum number of generic pure quantum states that can be unambiguously discriminated by an LOCC measurement [J. Phys. A: Math. Theor. 41, 375305 (2008)]. In this work, we determine this number in a more general setting in which the local parties have access to pre-shared entanglement in the form of a resource state. We find that, for an arbitrary pure resource state, this number is equal to the Krull dimension of (the closure of) the set of pure states obtainable from the resource state by SLOCC. Surprisingly, a generic resource state maximizes this number. Local state discrimination is closely related to the topic of entangled subspaces, which we study in its own right. We introduce $r$-entangled subspaces, which naturally generalize previously studied spaces to higher multipartite entanglement. We use algebraic-geometric methods to determine the maximum dimension of an $r$-entangled subspace, and present novel explicit constructions of such spaces. We obtain similar results for symmetric and antisymmetric $r$-entangled subspaces, which correspond to entangled subspaces of bosonic and fermionic systems, respectively.

The Quantum Approximate Optimization Algorithm (QAOA) is a general-purpose algorithm for combinatorial optimization problems whose performance can only improve with the number of layers $p$. While QAOA holds promise as an algorithm that can be run on near-term quantum computers, its computational power has not been fully explored. In this work, we study the QAOA applied to the Sherrington-Kirkpatrick (SK) model, which can be understood as energy minimization of $n$ spins with all-to-all random signed couplings. There is a recent classical algorithm by Montanari that, assuming a widely believed conjecture, can efficiently find an approximate solution for a typical instance of the SK model to within $(1-\epsilon)$ times the ground state energy. We hope to match its performance with the QAOA. Our main result is a novel technique that allows us to evaluate the typical-instance energy of the QAOA applied to the SK model. We produce a formula for the expected value of the energy, as a function of the $2p$ QAOA parameters, in the infinite size limit that can be evaluated on a computer with $O(16^p)$ complexity. We evaluate the formula up to $p=12$, and find that the QAOA at $p=11$ outperforms the standard semidefinite programming algorithm. Moreover, we show concentration: With probability tending to one as $n\to\infty$, measurements of the QAOA will produce strings whose energies concentrate at our calculated value. As an algorithm running on a quantum computer, there is no need to search for optimal parameters on an instance-by-instance basis since we can determine them in advance. What we have here is a new framework for analyzing the QAOA, and our techniques can be of broad interest for evaluating its performance on more general problems where classical algorithms may fail.
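
For reference, the standard objects involved are the SK cost operator and the depth-$p$ QAOA state (textbook definitions, stated here for completeness):

$$ C = \frac{1}{\sqrt{n}} \sum_{i<j} J_{ij} Z_i Z_j, \qquad |\gamma,\beta\rangle = e^{-i\beta_p B} e^{-i\gamma_p C} \cdots e^{-i\beta_1 B} e^{-i\gamma_1 C}\,|s\rangle, $$

where $B=\sum_i X_i$, $|s\rangle$ is the uniform superposition, and the couplings $J_{ij}$ are i.i.d. mean-zero, unit-variance random variables; the formula referred to above evaluates $\lim_{n\to\infty}\mathbb{E}\,\langle\gamma,\beta|C/n|\gamma,\beta\rangle$ as an explicit function of the $2p$ angles.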

Several noisy intermediate-scale quantum computations can be regarded as logarithmic-depth quantum circuits on a sparse quantum computing chip, where two-qubit gates can be directly applied on only some pairs of qubits. In this paper, we propose a method to efficiently verify such noisy intermediate-scale quantum computation. To this end, we first characterize small-scale quantum operations with respect to the diamond norm. Then by using these characterized quantum operations, we estimate the fidelity $\langle\psi_t|\hat{\rho}_{\rm out}|\psi_t\rangle$ between an actual $n$-qubit output state $\hat{\rho}_{\rm out}$ obtained from the noisy intermediate-scale quantum computation and the ideal output state (i.e., the target state) $|\psi_t\rangle$. Although the direct fidelity estimation method requires $O(2^n)$ copies of $\hat{\rho}_{\rm out}$ on average, our method requires only $O(D^32^{12D})$ copies even in the worst case, where $D$ is the denseness of $|\psi_t\rangle$. For logarithmic-depth quantum circuits on a sparse chip, $D$ is at most $O(\log{n})$, and thus $O(D^32^{12D})$ is a polynomial in $n$. By using the IBM Manila 5-qubit chip, we also perform a proof-of-principle experiment to observe the practical performance of our method.

We consider the power of local algorithms for approximately solving Max $k$XOR, a generalization of two constraint satisfaction problems previously studied with classical and quantum algorithms (MaxCut and Max E3LIN2). In Max $k$XOR each constraint is the XOR of exactly $k$ variables and a parity bit. On instances with either random signs (parities) or no overlapping clauses and $D+1$ clauses per variable, we calculate the expected satisfying fraction of the depth-1 QAOA from Farhi et al. [arXiv:1411.4028] and compare with a generalization of the local threshold algorithm from Hirvonen et al. [arXiv:1402.2543]. Notably, the quantum algorithm outperforms the threshold algorithm for $k>4$. On the other hand, we highlight potential difficulties for the QAOA to achieve computational quantum advantage on this problem. We first compute a tight upper bound on the maximum satisfying fraction of nearly all large random regular Max $k$XOR instances by numerically calculating the ground state energy density $P(k)$ of a mean-field $k$-spin glass [arXiv:1606.02365]. The upper bound grows with $k$ much faster than the performance of both one-local algorithms. We also identify a new obstruction result for low-depth quantum circuits (including the QAOA) when $k=3$, generalizing a result of Bravyi et al. [arXiv:1910.08980] when $k=2$. We conjecture that a similar obstruction exists for all $k$.

In this paper we study the Platonic Bell inequalities for all possible dimensions. There are five Platonic solids in three dimensions, but there are also solids with Platonic properties (also known as regular polyhedra) in four and higher dimensions. The concept of Platonic Bell inequalities in three-dimensional Euclidean space was introduced by Tavakoli and Gisin [Quantum 4, 293 (2020)]. Each three-dimensional Platonic solid is associated with an arrangement of projective measurements whose directions point toward the vertices of the solid. For the higher-dimensional regular polyhedra, we use the correspondence of the vertices to the measurements in the abstract Tsirelson space. We give a remarkably simple formula for the quantum violation of all the Platonic Bell inequalities, which we prove to attain the maximum possible quantum violation, i.e. the Tsirelson bound. To construct Bell inequalities with a large number of settings, it is crucial to compute the local bound efficiently. In general, the computation time required to compute the local bound grows exponentially with the number of measurement settings. We find a method to compute the local bound exactly for any bipartite two-outcome Bell inequality, where the dependence on the number of settings becomes polynomial, with degree equal to the rank of the Bell matrix. To show that this algorithm can be used in practice, we compute the local bound of a 300-setting Platonic Bell inequality based on the halved dodecaplex. In addition, we use a diagonal modification of the original Platonic Bell matrix to increase the ratio of the quantum to the local bound. In this way, we obtain a four-dimensional 60-setting Platonic Bell inequality based on the halved tetraplex for which the quantum violation exceeds the $\sqrt{2}$ ratio.
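
For contrast with the polynomial-time method described above, here is the exponential brute-force baseline as a short NumPy sketch (illustrative only): the local bound of a correlation Bell matrix $M$ equals $\max_{a \in \{\pm 1\}^m} \sum_j |(a^T M)_j|$, since for fixed $a$ the optimal $b$ simply picks the sign of each column sum.

```python
import numpy as np
from itertools import product

# Brute-force local bound of a bipartite two-outcome correlation Bell
# inequality with matrix M (illustrative baseline only): maximize
# a^T M b over a in {-1,1}^m, b in {-1,1}^n. For fixed a the optimal b
# takes the sign of each column sum, so only the 2^m choices of a are
# enumerated; this exponential cost is what the rank-based method removes.

def local_bound(M):
    m = M.shape[0]
    return max(np.abs(np.asarray(a) @ M).sum()
               for a in product((-1, 1), repeat=m))

# Example: the CHSH matrix has local bound 2 (Tsirelson bound 2*sqrt(2)).
chsh = np.array([[1, 1], [1, -1]])
print(local_bound(chsh))  # 2.0
```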

We consider a bipartite transformation that we call \emph{self-embezzlement} and use it to prove a constant gap between the capabilities of two models of quantum information: the conventional model, where bipartite systems are represented by tensor products of Hilbert spaces; and a natural model of quantum information processing for abstract states on C*-algebras, where joint systems are represented by tensor products of C*-algebras. We call this the \emph{C*-circuit} model and show that it is a special case of the commuting-operator model (in that it can be translated into such a model). For the conventional model, we show that there exists a constant $\epsilon_0 > 0$ such that self-embezzlement cannot be achieved with precision parameter less than $\epsilon_0$ (i.e., the fidelity cannot be greater than $1 - \epsilon_0$); whereas, in the C*-circuit model---as well as in a commuting-operator model---the precision can be $0$ (i.e., fidelity $1$). Self-embezzlement is not a non-local game, hence our results do not impact the celebrated Connes Embedding conjecture. Instead, the significance of these results is to exhibit a reasonably natural quantum information processing problem for which there is a constant gap between the capabilities of the conventional Hilbert space model and the commuting-operator or C*-circuit model.
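
For readers unfamiliar with embezzlement, the following NumPy sketch illustrates the ordinary van Dam-Hayden version in the conventional Hilbert space model, not the self-embezzlement task of this paper: under local unitaries the best overlap of two bipartite pure states is the dot product of their sorted Schmidt amplitudes, and for the embezzling state this tends to $1$ as $n$ grows.

```python
import numpy as np

# Ordinary van Dam-Hayden embezzlement (illustration only, not the
# paper's self-embezzlement task): local unitaries preserve Schmidt
# amplitudes, so the best overlap of |mu_n>|00> with |mu_n>|phi> is
# the dot product of their sorted Schmidt-amplitude vectors.

def embezzling_fidelity(n, target):
    mu = 1.0 / np.sqrt(np.arange(1, n + 1))
    mu /= np.linalg.norm(mu)
    before = np.sort(np.kron(mu, [1.0, 0.0]))[::-1]   # |mu_n>|00>
    after = np.sort(np.kron(mu, target))[::-1]        # |mu_n>|phi>
    return float(before @ after)

# Embezzling a Bell pair: the fidelity approaches 1 as n grows.
bell = np.array([1.0, 1.0]) / np.sqrt(2)
print(embezzling_fidelity(2**16, bell))  # close to 1, improving with n
```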

We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters'09] for low-rank matrices [Wossnig, Zhao, and Prakash, Physical Review Letters'18], when the input matrix $A$ is stored in a data structure applicable for QRAM-based state preparation. Namely, suppose we are given an $A \in \mathbb{C}^{m\times n}$ with minimum non-zero singular value $\sigma$ which supports certain efficient $\ell_2$-norm importance sampling queries, along with a $b \in \mathbb{C}^m$. Then, for some $x \in \mathbb{C}^n$ satisfying $\|x - A^+b\| \leq \varepsilon\|A^+b\|$, we can output a measurement of $|x\rangle$ in the computational basis or output an entry of $x$ with classical algorithms that run in $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^6}{\sigma^{12}\varepsilon^4}\big)$ and $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^2}{\sigma^8\varepsilon^4}\big)$ time, respectively. This improves on previous "quantum-inspired" algorithms in this line of research by at least a factor of $\frac{\|A\|^{16}}{\sigma^{16}\varepsilon^2}$ [Chia, Gilyén, Li, Lin, Tang, and Wang, STOC'20]. As a consequence, we show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting and related settings. Our work applies techniques from sketching algorithms and optimization to the quantum-inspired literature. Unlike earlier works, this is a promising avenue that could lead to feasible implementations of classical regression in quantum-inspired settings, for comparison against future quantum computers.
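
As a toy illustration of the access model (a plain sketch-and-solve row sampler, not the paper's algorithm), the snippet below samples rows of $A$ with probability proportional to their squared $\ell_2$ norm, precisely the kind of importance-sampling query the data structure supports, and solves the reweighted subsampled problem; the function name is hypothetical.

```python
import numpy as np

# Toy sketch-and-solve regression (illustrative only, not the paper's
# algorithm): sample rows of A proportionally to their squared l2 norms,
# reweight so the subsampled problem is unbiased, then solve it exactly.

def sampled_regression(A, b, s, seed=0):
    rng = np.random.default_rng(seed)
    p = np.linalg.norm(A, axis=1) ** 2 / np.linalg.norm(A, "fro") ** 2
    rows = rng.choice(A.shape[0], size=s, p=p)
    w = 1.0 / np.sqrt(s * p[rows])          # importance weights
    x, *_ = np.linalg.lstsq(A[rows] * w[:, None], b[rows] * w, rcond=None)
    return x                                # approximates A^+ b
```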

We present a formalism that captures the process of proving quantum advantage to skeptics as an interactive game between two agents, supervised by a referee. One player, Bob, samples from a probability distribution on a quantum device that is supposed to demonstrate a quantum advantage. The other player, the skeptical Alice, is then allowed to propose mock distributions supposed to reproduce the statistics of Bob's device. Bob then needs to provide witness functions to prove that Alice's proposed mock distributions cannot properly approximate his device. Within this framework, we establish three results. First, for random quantum circuits, Bob's ability to efficiently distinguish his distribution from Alice's implies efficient approximate simulation of the distribution. Second, a polynomial-time function that distinguishes the output of random circuits from the uniform distribution could also be used to spoof the heavy output generation problem in polynomial time. This pinpoints that exponential resources may be unavoidable for even the most basic verification tasks in the setting of random quantum circuits. Beyond this setting, by employing strong data processing inequalities, our framework allows us to analyse the effect of noise on classical simulability and verification of more general near-term quantum advantage proposals.
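
A toy example of a witness function in this spirit (illustrative only, not the paper's formal definition) is a heavy-output score: rate samples by whether their ideal output probability exceeds the median. Against Porter-Thomas statistics an ideal sampler scores about $(1+\ln 2)/2 \approx 0.85$, while the uniform distribution scores $1/2$.

```python
import numpy as np

# Toy heavy-output witness (illustrative only): the score of a sampler is
# the fraction of its samples whose ideal probability exceeds the median.

def hog_witness(samples, ideal_probs):
    heavy = ideal_probs > np.median(ideal_probs)
    return float(np.mean(heavy[samples]))

# Porter-Thomas-like toy distribution over 2^10 outcomes.
probs = np.random.default_rng(0).exponential(size=2**10)
probs /= probs.sum()

uniform = np.random.default_rng(1).integers(0, 2**10, size=5000)
honest = np.random.default_rng(2).choice(2**10, size=5000, p=probs)
print(hog_witness(uniform, probs))  # ~0.5
print(hog_witness(honest, probs))   # ~0.85
```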

Quantum finite automata (QFAs) are basic computational devices that make binary decisions using quantum operations. They are known to be exponentially memory efficient compared to their classical counterparts. Here, we demonstrate an experimental implementation of multi-qubit QFAs using the orbital angular momentum (OAM) of single photons. We implement different high-dimensional QFAs encoded on a single photon, where multiple qubits operate in parallel without the need for complicated multi-partite operations. Using two to eight OAM quantum states to implement up to four parallel qubits, we show that a high-dimensional QFA is able to detect the prime numbers 5 and 11 while outperforming classical finite automata in terms of the required memory. Our work benefits from the ease of encoding, manipulating, and deciphering multi-qubit states encoded in the OAM degree of freedom of single photons, demonstrating the advantages structured photons provide for complex quantum information tasks.
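
As a minimal model of what such an automaton computes (a textbook one-qubit MOD-$p$ QFA in the style of Ambainis and Freivalds, not the OAM implementation of this work), each input letter rotates a qubit by $2\pi k/p$ in the $\{|0\rangle, |1\rangle\}$ plane, and for an odd prime $p$ the word is accepted with probability $1$ exactly when $p$ divides its length.

```python
import numpy as np

# One-qubit MOD-p QFA (illustration only, not the OAM experiment): each
# input letter rotates the qubit by 2*pi*k/p in the |0>,|1> plane; after
# reading a word of the given length, measuring |0> means "accept".

def accept_probability(length, p, k=1):
    angle = 2 * np.pi * k * length / p
    return np.cos(angle) ** 2   # amplitude on |0> is cos(angle)

print(accept_probability(10, 5))  # 1.0: 5 divides 10
print(accept_probability(11, 5))  # < 1: rejected with positive probability
```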
