We present an end-to-end and practical randomness amplification and privatisation protocol based on Bell tests. This allows the building of device-independent random number generators which output (near-)perfectly unbiased and private numbers, even if using an uncharacterised quantum device potentially built by an adversary. Our generation rates are linear in the repetition rate of the quantum device and the classical randomness post-processing has quasi-linear complexity – making it efficient on a standard personal laptop. The statistical analysis is also tailored for real-world quantum devices. Our protocol is then showcased on several different quantum computers. Although not purposely built for the task, we show that quantum computers can run faithful Bell tests by adding minimal assumptions. In this semi-device-independent manner, our protocol generates (near-)perfectly unbiased and private random numbers on today's quantum computers.
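The quasi-linear classical post-processing mentioned above can be illustrated with a standard randomness-extraction primitive. The sketch below is a naive Toeplitz two-universal hash (the abstract does not specify the protocol's actual extractor, and `toeplitz_extract` is an illustrative name); the Toeplitz structure is what makes an FFT-based quasi-linear-time evaluation possible, though this direct version is quadratic for clarity:

```python
def toeplitz_extract(bits, seed, m):
    """Compress n raw bits to m near-uniform bits with a Toeplitz
    two-universal hash: the matrix entry T[j][i] is seed[i - j + m - 1],
    so a seed of n + m - 1 bits defines the whole matrix."""
    n = len(bits)
    assert len(seed) == n + m - 1
    out = []
    for j in range(m):
        acc = 0
        for i in range(n):
            # GF(2) inner product of row j of T with the raw bits
            acc ^= seed[i - j + m - 1] & bits[i]
        out.append(acc)
    return out
```

In a real deployment the seed is a short uniform string and the matrix-vector product is evaluated via FFT-based convolution to reach the quasi-linear complexity claimed above.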

We compare the proposals that have appeared in the literature to describe a measurement of the time of arrival of a quantum particle at a detector. We show that there are multiple regimes where different proposals give inequivalent, experimentally discriminable, predictions. This analysis paves the way for future experimental tests.

We present a family of electron-based coupled-wire models of bosonic orbifold topological phases, referred to as twist liquids, in two spatial dimensions. All local fermion degrees of freedom are gapped and removed from the topological order by many-body interactions. Bosonic chiral spin liquids and anyonic superconductors are constructed on an array of interacting wires, each supporting emergent massless Majorana fermions that are non-local (fractional) and constitute the $SO(N)$ Kac-Moody Wess-Zumino-Witten algebra at level 1. We focus on the dihedral $D_k$ symmetry of $SO(2n)_1$, and its promotion to a gauge symmetry by manipulating the locality of fermion pairs. Gauging the symmetry (sub)group generates the $\mathcal{C}/G$ twist liquids, where $G=\mathbb{Z}_2$ for $\mathcal{C}=U(1)_l$, $SU(n)_1$, and $G=\mathbb{Z}_2$, $\mathbb{Z}_k$, $D_k$ for $\mathcal{C}=SO(2n)_1$. We construct exactly solvable models for all of these topological states. We prove the presence of a bulk excitation energy gap and demonstrate the appearance of edge orbifold conformal field theories corresponding to the twist liquid topological orders. We analyze the statistical properties of the anyon excitations, including the non-Abelian metaplectic anyons and a new class of quasiparticles referred to as Ising-fluxons. We show an eight-fold periodic gauging pattern in $SO(2n)_1/G$ by identifying the non-chiral components of the twist liquids with discrete gauge theories.

We study variational quantum algorithms from the perspective of free fermions. By deriving the explicit structure of the associated Lie algebras, we show that the Quantum Approximate Optimization Algorithm (QAOA) on a one-dimensional lattice – with and without decoupled angles – is able to prepare all fermionic Gaussian states respecting the symmetries of the circuit. Leveraging these results, we numerically study the interplay between these symmetries and the locality of the target state, and find that an absence of symmetries makes nonlocal states easier to prepare. An efficient classical simulation of Gaussian states, with system sizes up to $80$ and deep circuits, is employed to study the behavior of the circuit when it is overparameterized. In this regime of optimization, we find that the number of iterations to converge to the solution scales linearly with system size. Moreover, we observe that the number of iterations to converge to the solution decreases exponentially with the depth of the circuit, until it saturates at a depth which is quadratic in system size. Finally, we conclude that the improvement in the optimization can be explained in terms of better local linear approximations provided by the gradients.

Thanks to common-mode noise rejection, differential configurations are crucial for realistic applications of phase and frequency estimation with atom interferometers. Currently, differential protocols with uncorrelated particles and mode-separable settings reach a sensitivity bounded by the standard quantum limit (SQL). Here we show that differential interferometry can be understood as a distributed multiparameter estimation problem and can benefit from both mode and particle entanglement. Our protocol uses a single spin-squeezed state that is mode-swapped among common interferometric modes. The mode swapping is optimized to estimate the differential phase shift with sub-SQL sensitivity. Numerical calculations are supported by analytical approximations that guide the optimization of the protocol. The scheme is also tested with simulations of noise in atomic clocks and interferometers.

We develop and analyze a method for simulating quantum circuits on classical computers by representing quantum states as rooted tree tensor networks. Our algorithm first determines a suitable, fixed tree structure adapted to the expected entanglement generated by the quantum circuit. The gates are sequentially applied to the tree by absorbing single-qubit gates into leaf nodes, and splitting two-qubit gates via singular value decomposition and threading the resulting virtual bond through the tree. We theoretically analyze the applicability of the method as well as its computational cost and memory requirements, and identify advantageous scenarios in terms of required bond dimensions as compared to a matrix product state representation. The study is complemented by numerical experiments for different quantum circuit layouts up to 37 qubits.
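The two-qubit-gate splitting step can be sketched with an operator-Schmidt decomposition: reshuffle the $4\times 4$ gate so that an SVD separates the two qubits, and the number of nonzero singular values is the bond dimension threaded through the tree. A minimal NumPy illustration (the standard reshuffle-then-SVD construction, not the authors' code):

```python
import numpy as np

# CNOT = |0><0| ⊗ I + |1><1| ⊗ X; its operator-Schmidt rank is 2.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Reorder indices from U[(a,a'),(b,b')] to M[(a,b),(a',b')] so the SVD
# M = sum_k s_k vec(A_k) vec(B_k)^T gives CNOT = sum_k s_k A_k ⊗ B_k.
M = CNOT.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
s = np.linalg.svd(M, compute_uv=False)
rank = int(np.sum(s > 1e-12))  # bond dimension carried into the tree
```

Gates with higher operator-Schmidt rank (e.g. generic two-qubit unitaries, rank 4) multiply the bond dimension accordingly, which is what drives the cost analysis.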

Semiclassically, laser pulses can be used to implement arbitrary transformations on atomic systems; quantum mechanically, residual atom-field entanglement spoils this promise. Transcoherent states are field states that fix this problem in the fully quantized regime by generating perfect coherence in an atom initially in its ground or excited state. We extend this fully quantized paradigm in four directions: First, we introduce field states that transform an atom from its ground or excited state to any point on the Bloch sphere without residual atom-field entanglement. The best strong pulses for carrying out rotations by angle $\theta$ are squeezed in photon-number variance by a factor of $\rm{sinc}\theta$. Next, we investigate implementing rotation gates, showing that the optimal Gaussian field state for enacting a $\theta$ pulse on an atom in an arbitrary, unknown initial state is number squeezed by less: $\rm{sinc}\tfrac{\theta}{2}$. Third, we extend these investigations to fields interacting with multiple atoms simultaneously, discovering once again that number squeezing by $\tfrac{\pi}{2}$ is optimal for enacting $\tfrac{\pi}{2}$ pulses on all of the atoms simultaneously, with small corrections on the order of the ratio of the number of atoms to the average number of photons. Finally, we find field states that best perform arbitrary rotations by $\theta$ through nonlinear interactions involving $m$-photon absorption, where the same optimal squeezing factor is found to be $\rm{sinc}\theta$. Backaction in a wide variety of atom-field interactions can thus be mitigated by squeezing the control fields by optimal amounts.

The implementation of time-evolution operators $U(t)$, called Hamiltonian simulation, is one of the most promising uses of quantum computers. For time-independent Hamiltonians, qubitization has recently established an efficient realization of the time evolution $U(t)=e^{-iHt}$, achieving the optimal computational cost in both the time $t$ and the allowable error $\varepsilon$. In contrast, simulations of time-dependent systems require a larger cost due to the difficulty of handling the time dependency. In this paper, we establish optimal/nearly-optimal Hamiltonian simulation for generic time-dependent systems with time periodicity, known as Floquet systems. By using a so-called Floquet-Hilbert space equipped with auxiliary states labeling Fourier indices, we develop a way to obtain the target time-evolved state with certainty, without relying on either a time-ordered product or a Dyson-series expansion. Consequently, the query complexity, which measures the cost of implementing the time evolution, has optimal and nearly-optimal dependence on the time $t$ and the inverse error $\varepsilon$, respectively, and becomes sufficiently close to that of qubitization. Thus, our protocol shows that, among generic time-dependent systems, time-periodic systems provide a class that can be accessed as efficiently as time-independent systems despite the presence of time dependency. As we also provide applications to the simulation of nonequilibrium phenomena and to adiabatic state preparation, our results will shed light on nonequilibrium phenomena in condensed matter physics and quantum chemistry, and on quantum tasks involving time dependency in quantum computation.
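The Floquet-Hilbert space construction can be sketched for a toy monochromatic drive $H(t) = H_0 + 2H_1\cos(\omega t)$: auxiliary Fourier indices $m$ label copies of the system, and the quasi-energy operator has blocks $K_{m,m'} = H_{m-m'} + m\omega\,\delta_{m,m'}$. The truncation $M$ and the example Hamiltonians below are illustrative assumptions, not the paper's construction details:

```python
import numpy as np

def quasi_energy_operator(H0, H1, w, M):
    """Truncated Sambe-space quasi-energy operator for
    H(t) = H0 + 2*H1*cos(w*t), with Fourier modes m = -M..M and
    harmonics H_{+1} = H_{-1} = H1 coupling neighboring modes."""
    d = H0.shape[0]
    modes = list(range(-M, M + 1))
    K = np.zeros(((2 * M + 1) * d, (2 * M + 1) * d), dtype=complex)
    for i, m in enumerate(modes):
        for j, mp in enumerate(modes):
            if m == mp:
                K[i*d:(i+1)*d, j*d:(j+1)*d] = H0 + m * w * np.eye(d)
            elif abs(m - mp) == 1:
                K[i*d:(i+1)*d, j*d:(j+1)*d] = H1
    return K

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
K = quasi_energy_operator(Z, 0.3 * X, w=2.0, M=3)  # a driven qubit
```

The paper's protocol block-encodes an operator of this extended-space form so that time evolution over one period reduces to a time-independent simulation problem.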

We demonstrate an information erasure protocol that resets $N$ qubits at once. The method displays exceptional performance in terms of energy cost (it operates nearly at the Landauer energy cost $kT \ln 2$), time duration ($\sim \mu s$), and erasure success rate ($\sim 99.9\%$). The method departs from the standard algorithmic cooling paradigm by exploiting cooperative effects associated with the mechanism of spontaneous symmetry breaking, which are amplified by quantum tunnelling phenomena. This cooperative quantum erasure protocol is experimentally demonstrated on a commercial quantum annealer and could be readily applied in next-generation hybrid gate-based/quantum-annealing quantum computers for fast, effective, and energy-efficient initialisation of quantum processing units.
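For scale, the Landauer bound $kT\ln 2$ can be evaluated directly; the temperature below is an illustrative assumption (typical annealer mixing-chamber temperatures are in the tens of millikelvin), not a value taken from the abstract:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 0.015           # assumed illustrative temperature: 15 mK

# Minimum energy dissipated to erase one bit at temperature T
E_landauer = kB * T * math.log(2)   # ≈ 1.4e-25 J per bit
```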

Presentation of some early results of this work at the Quantum Thermodynamics Conference 2022 by Michele Campisi

The fidelity susceptibility is a tool for studying quantum phase transitions in Hermitian condensed matter systems. Recently, it has been generalized with the biorthogonal basis for non-Hermitian quantum systems. From the general perturbation description with the constraint of parity-time (PT) symmetry, we show that the fidelity $\mathcal{F}$ is always real for the PT-unbroken states. For the PT-broken states, the real part of the fidelity susceptibility $\mathrm{Re}[\mathcal{X}_F]$ corresponds to considering both of the PT partner states, and perturbation theory shows that it diverges to negative infinity as the parameter approaches the exceptional point (EP). Moreover, at a second-order EP, we prove that the real part of the fidelity between PT-unbroken and PT-broken states is $\mathrm{Re}\mathcal{F}=\frac{1}{2}$. Based on these general properties, we study the two-legged non-Hermitian Su-Schrieffer-Heeger (SSH) model and the non-Hermitian XXZ spin chain. We find that, for both interacting and non-interacting systems, the real part of the fidelity susceptibility density goes to negative infinity as the parameter approaches the EP, and we verify that the EP is second order through $\mathrm{Re}\mathcal{F}=\frac{1}{2}$.
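The PT-breaking transition at an exceptional point can be seen in the textbook two-level toy model $H = \begin{pmatrix} i\gamma & 1 \\ 1 & -i\gamma \end{pmatrix}$ (chosen here for illustration; this is not the SSH or XXZ model studied in the paper). Its eigenvalues $\pm\sqrt{1-\gamma^2}$ are real for $\gamma<1$ (PT unbroken), coalesce at the EP $\gamma=1$, and become a complex-conjugate pair for $\gamma>1$ (PT broken):

```python
import cmath

def eigs(gamma):
    """Eigenvalues of the PT-symmetric two-level Hamiltonian
    H = [[i*gamma, 1], [1, -i*gamma]] (traceless, det = gamma^2 - 1),
    namely lambda = ±sqrt(1 - gamma^2)."""
    lam = cmath.sqrt(1 - gamma ** 2)
    return lam, -lam
```

At $\gamma = 1$ the two eigenvectors also coalesce, which is why biorthogonal quantities such as the fidelity susceptibility can diverge there.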

One of the challenges of quantum computers in the near and mid term is the limited number of qubits available for computations. Finding methods that achieve useful quantum improvements under size limitations is thus a key question in the field. In this vein, it was recently shown that a hybrid classical-quantum method can provide polynomial speed-ups to classical divide-and-conquer algorithms, even when given access only to a quantum computer much smaller than the problem itself. In this work, we study the hybrid divide-and-conquer method in the context of tree search algorithms, and extend it by including quantum backtracking, which allows better results than previous Grover-based methods. Further, we provide general criteria for polynomial speed-ups in the tree search context, along with a number of examples where polynomial speed-ups can be obtained using arbitrarily smaller quantum computers. We provide conditions for speed-ups for the well-known DPLL algorithm, and we prove threshold-free speed-ups for the PPSZ algorithm (the core of the fastest exact Boolean satisfiability solver) for well-behaved classes of formulas. We also provide a simple example where speed-ups can be obtained in an algorithm-independent fashion, under certain well-studied complexity-theoretic assumptions. Finally, we briefly discuss the fundamental limitations of hybrid methods in providing speed-ups for larger problems.
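For readers unfamiliar with the classical side of the hybrid scheme, a minimal DPLL backtracking search is sketched below (a textbook version with unit propagation only; the hybrid method would hand subtrees below a chosen depth to the quantum backtracking routine):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT search. Clauses are lists of nonzero ints,
    with a negative literal meaning the negated variable. Returns a
    satisfying assignment dict {var: bool} or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                   # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                 # all clauses satisfied
    # Unit propagation: a unit clause forces its literal.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    # Branch on the first unassigned variable.
    v = abs(simplified[0][0])
    for val in (True, False):
        result = dpll(clauses, {**assignment, v: val})
        if result is not None:
            return result
    return None
```

The two branches of the `for val` loop are exactly the tree nodes that quantum backtracking explores with a quadratic advantage in the tree size.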

Field mediated entanglement experiments probe the quantum superposition of macroscopically distinct field configurations. We show that this phenomenon can be described by using a transparent quantum field theoretical formulation of electromagnetism and gravity in the field basis. The strength of such a description is that it explicitly displays the superposition of macroscopically distinct states of the field. In the case of (linearised) quantum general relativity, this formulation exhibits the quantum superposition of geometries giving rise to the effect.

Isometry operations encode the quantum information of the input system into a larger output system, while the corresponding decoding operation is the inverse of the encoding isometry operation. Given an encoding operation as a black box from a $d$-dimensional system to a $D$-dimensional system, we propose a universal protocol for isometry inversion that constructs a decoder from multiple calls of the encoding operation. This is a probabilistic but exact protocol whose success probability is independent of $D$. For a qubit ($d=2$) encoded in $n$ qubits, our protocol achieves an exponential improvement over any tomography-based or unitary-embedding method, which cannot avoid $D$-dependence. We present a quantum operation that converts multiple parallel calls of any given isometry operation to random parallelized unitary operations, each of dimension $d$. Applied to our setup, it universally compresses the encoded quantum information to a $D$-independent space, while keeping the initial quantum information intact. This compressing operation is combined with a unitary inversion protocol to complete the isometry inversion. We also discover a fundamental difference between our isometry inversion protocol and the known unitary inversion protocols by analyzing isometry complex conjugation and isometry transposition. General protocols including indefinite causal order are searched using semidefinite programming for any improvement in the success probability over the parallel protocols. We find a sequential "success-or-draw" protocol of universal isometry inversion for $d = 2$ and $D = 3$, whose success probability thus improves exponentially over parallel protocols in the number of calls of the input isometry operation for this case.

As a cornerstone for many quantum linear algebra and quantum machine learning algorithms, controlled quantum state preparation (CQSP) aims to provide the transformation $|i\rangle |0^n\rangle \to |i\rangle |\psi_i\rangle $ for all $i\in \{0,1\}^k$ for the given $n$-qubit states $|\psi_i\rangle$. In this paper, we construct a quantum circuit for implementing CQSP, with depth $O\left(n+k+\frac{2^{n+k}}{n+k+m}\right)$ and size $O(2^{n+k})$ for any given number $m$ of ancillary qubits. These bounds, which can also be viewed as a time-space tradeoff for the transformation, are optimal for any integer parameters $m,k\ge 0$ and $n\ge 1$. When $k=0$, the problem becomes the canonical quantum state preparation (QSP) problem with ancillary qubits, which asks for efficient implementations of the transformation $|0^n\rangle|0^m\rangle \to |\psi\rangle |0^m\rangle$. This problem has many applications and has been extensively investigated, yet its circuit complexity remained open. Our construction completely solves this problem, pinning down its depth complexity to $\Theta(n+2^{n}/(n+m))$ and its size complexity to $\Theta(2^{n})$ for any $m$. Another fundamental problem, unitary synthesis, asks to implement a general $n$-qubit unitary by a quantum circuit. Previous work shows a lower bound of $\Omega(n+4^n/(n+m))$ and an upper bound of $O(n2^n)$ for $m=\Omega(2^n/n)$ ancillary qubits. In this paper, we quadratically shrink this gap by presenting a quantum circuit of depth $O\left(n2^{n/2}+\frac{n^{1/2}2^{3n/2}}{m^{1/2}}\right)$.
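The CQSP transformation itself is easy to see in the smallest case $k = n = 1$: the target operation is the block-diagonal unitary $\mathrm{diag}(V_0, V_1)$, where each $V_i$ is any unitary whose first column is $|\psi_i\rangle$. A NumPy check of this definition (a matrix-level illustration only, not the paper's circuit construction):

```python
import numpy as np

def prep_unitary(psi):
    """2x2 unitary whose first column is the normalized state psi."""
    a, b = psi
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])

psi0 = np.array([1.0, 0.0])                  # |psi_0> = |0>
psi1 = np.array([1.0, 1.0]) / np.sqrt(2)     # |psi_1> = |+>

# Block-diagonal CQSP unitary: control selects which V_i acts.
U = np.zeros((4, 4))
U[:2, :2] = prep_unitary(psi0)
U[2:, 2:] = prep_unitary(psi1)

# Apply to |1>|0>; the result should be |1>|psi_1>.
state = U @ np.kron(np.array([0.0, 1.0]), np.array([1.0, 0.0]))
```

The circuit-complexity question above is how shallow such a $2^{n+k}$-dimensional block-diagonal unitary can be made given $m$ ancillas.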

]]>The time-marching strategy, which propagates the solution from one time step to the next, is a natural strategy for solving time-dependent differential equations on classical computers, as well as for solving the Hamiltonian simulation problem on quantum computers. For more general homogeneous linear differential equations $\frac{\mathrm{d}}{\mathrm{d} t} |\psi(t)\rangle = A(t) |\psi(t)\rangle, |\psi(0)\rangle = |\psi_0\rangle$, a time-marching based quantum solver can suffer from exponentially vanishing success probability with respect to the number of time steps and is thus considered impractical. We solve this problem by repeatedly invoking a technique called uniform singular value amplification, so that the overall success probability can be lower bounded by a quantity independent of the number of time steps. The success probability can be further improved using a compression gadget lemma. This provides a path for designing quantum differential equation solvers that is an alternative to those based on quantum linear systems algorithms (QLSA). We demonstrate the performance of the time-marching strategy with a high-order integrator based on the truncated Dyson series. The complexity of the algorithm depends linearly on the amplification ratio, which quantifies the deviation from unitary dynamics. We prove that the linear dependence on the amplification ratio attains the query complexity lower bound and thus cannot be improved in the worst case. This algorithm also surpasses existing QLSA based solvers in three aspects: (1) $A(t)$ does not need to be diagonalizable. (2) $A(t)$ can be non-smooth, requiring only bounded variation. (3) It can use fewer queries to the initial state $|\psi_0\rangle$. Finally, we demonstrate the time-marching strategy with a first-order truncated Magnus series, which simplifies the implementation compared to the high-order truncated Dyson series approach while retaining the aforementioned benefits. Our analysis also raises some open questions concerning the differences between time-marching and QLSA based methods for solving differential equations.
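The classical version of the time-marching strategy is easy to make concrete. The sketch below (my own minimal first-order stepper, not the paper's quantum algorithm) propagates $\frac{\mathrm{d}}{\mathrm{d}t}|\psi(t)\rangle = A(t)|\psi(t)\rangle$ one short step at a time; the quantum solver replaces these steps with amplified short-time evolutions:

```python
import numpy as np

# Classical time-marching for d/dt psi = A(t) psi: propagate the
# solution one short step at a time via the first-order update
# psi <- (I + dt * A(t)) psi.
def time_march(A, psi0, t_final, steps):
    dt = t_final / steps
    psi = psi0.astype(complex)
    for j in range(steps):
        psi = psi + dt * (A(j * dt) @ psi)
    return psi

# Example: A(t) = -i X (time-independent), whose exact solution is
# psi(t) = cos(t) psi0 - i sin(t) X psi0.
X = np.array([[0, 1], [1, 0]], dtype=complex)
psi = time_march(lambda t: -1j * X,
                 np.array([1, 0], dtype=complex), 1.0, 20000)
exact = np.array([np.cos(1.0), -1j * np.sin(1.0)])
assert np.linalg.norm(psi - exact) < 1e-3
```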

]]>Entanglement is the key resource for quantum technologies and is at the root of exciting many-body phenomena. However, quantifying the entanglement between two parts of a real-world quantum system is challenging when it interacts with its environment, as the latter mixes cross-boundary classical correlations with quantum ones. Here, we efficiently quantify quantum correlations in such realistic open systems using the operator space entanglement spectrum of a mixed state. If the system possesses a fixed charge, we show that a subset of the spectral values encodes coherence between different cross-boundary charge configurations. The sum over these values, which we call "configuration coherence", can be used as a quantifier for cross-boundary coherence. Crucially, we prove that for purity non-increasing maps, e.g., Lindblad-type evolutions with Hermitian jump operators, the configuration coherence is an entanglement measure. Moreover, it can be efficiently computed using a tensor network representation of the state's density matrix. We showcase the configuration coherence for spinless particles moving on a chain in the presence of dephasing. Our approach can quantify coherence and entanglement in a broad range of systems and motivates efficient entanglement detection.
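For a small system the operator space entanglement spectrum can be computed directly. The sketch below (my own minimal construction for two qubits, not the paper's tensor network code) reorders the density matrix indices across the boundary and takes singular values:

```python
import numpy as np

# Operator Schmidt (operator space entanglement) spectrum of a
# two-qubit density matrix: group rho's indices so rows carry
# subsystem A's (bra, ket) pair and columns carry B's, then SVD.
def operator_schmidt_spectrum(rho):
    # rho has indices (iA iB, jA jB); regroup to (iA jA, iB jB).
    M = rho.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    return np.linalg.svd(M, compute_uv=False)

# A product state has a single nonzero operator Schmidt coefficient ...
rho_product = np.kron(np.diag([1.0, 0.0]), np.diag([0.5, 0.5]))
s_prod = operator_schmidt_spectrum(rho_product)
assert int(np.sum(s_prod > 1e-12)) == 1

# ... while a Bell state |Phi+> has four equal coefficients of 1/2,
# signalling cross-boundary quantum correlations.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
s_bell = operator_schmidt_spectrum(np.outer(phi, phi))
assert np.allclose(s_bell, 0.5)
```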

]]>Quantum contextual sets have been recognized as resources for universal quantum computation, quantum steering and quantum communication. Therefore, we focus on engineering the sets that support those resources and on determining their structures and properties. Such engineering and subsequent implementation rely on discrimination between statistics of measurement data of quantum states and those of their classical counterparts. The discriminators considered are inequalities defined for hypergraphs whose structure and generation are determined by their basic properties. The generation is inherently random but with the predetermined quantum probabilities of obtainable data. Two kinds of statistics of the data are defined for the hypergraphs and six kinds of inequalities. One kind of statistics, often applied in the literature, turns out to be inappropriate, and two kinds of inequalities turn out not to be noncontextuality inequalities. Results are obtained by making use of universal automated algorithms which generate hypergraphs with both odd and even numbers of hyperedges in any odd- or even-dimensional space – in this paper, from the smallest contextual set with just three hyperedges and three vertices to arbitrarily many contextual sets in up to 8-dimensional spaces. Higher dimensions are computationally demanding although feasible.

]]>We present benchmarks of the parity transformation for the Quantum Approximate Optimization Algorithm (QAOA). We analyse the gate resources required to implement a single QAOA cycle for real-world scenarios. In particular, we consider random spin models with higher-order terms, as well as the problems of predicting financial crashes and finding the ground states of electronic structure Hamiltonians. For the spin models studied, our findings imply a significant advantage of the parity mapping compared to the standard gate model. In combination with full parallelizability of gates, this has the potential to boost the race for demonstrating quantum advantage.

]]>Constraints make hard optimization problems even harder to solve on quantum devices because they are implemented with large energy penalties and additional qubit overhead. The parity mapping, which has been introduced as an alternative to the spin encoding, translates the problem to a representation using only parity variables that encode products of spin variables. By combining exchange interaction and single spin flip terms in the parity representation, constraints on sums and products of arbitrary $k$-body terms can be implemented without additional overhead in two-dimensional quantum systems.

]]>We introduce parity quantum optimization with the aim of solving optimization problems consisting of arbitrary $k$-body interactions and side conditions using planar quantum chip architectures. The method introduces a decomposition of the problem graph with arbitrary $k$-body terms using generalized closed cycles of a hypergraph. Side conditions of the optimization problem in the form of hard constraints can be included as open cycles containing the terms involved in the side conditions. The generalized parity mapping thus circumvents the need to translate optimization problems to a quadratic unconstrained binary optimization (QUBO) problem and allows for the direct encoding of higher-order constrained binary optimization (HCBO) problems on a square lattice and full parallelizability of gates.
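The core idea behind the parity mapping can be illustrated in a few lines. In the toy sketch below (my own illustration, not the paper's full hypergraph construction) each product $s_i s_j$ of spin variables becomes one parity variable, and physical spin configurations are exactly those parity assignments whose product around every closed cycle equals $+1$:

```python
import itertools

# Toy parity mapping: one parity variable per edge of the problem
# graph, equal to the product of the spins it connects.
def parity_vars(spins, edges):
    return {e: spins[e[0]] * spins[e[1]] for e in edges}

edges = [(0, 1), (1, 2), (0, 2)]  # a 3-cycle of the problem graph
for spins in itertools.product([-1, 1], repeat=3):
    p = parity_vars(spins, edges)
    # Closed-cycle constraint: s0*s1 * s1*s2 * s0*s2 = +1 always,
    # since every spin appears an even number of times.
    assert p[(0, 1)] * p[(1, 2)] * p[(0, 2)] == 1
```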

]]>Variational quantum algorithms, which have risen to prominence in the noisy intermediate-scale quantum setting, require the implementation of a stochastic optimizer on classical hardware. To date, most research has employed algorithms based on the stochastic gradient iteration as the stochastic classical optimizer. In this work we propose instead using stochastic optimization algorithms that yield stochastic processes emulating the dynamics of classical deterministic algorithms. This approach results in methods with theoretically superior worst-case iteration complexities, at the expense of greater per-iteration sample (shot) complexities. We investigate this trade-off both theoretically and empirically and conclude that preferences for a choice of stochastic optimizer should explicitly depend on a function of both latency and shot execution times.
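The concluding trade-off can be made concrete with a toy wall-clock model (my own, consistent with the abstract's conclusion but not taken from the paper): total time is iterations times per-iteration cost, where each iteration pays a fixed latency plus a per-shot execution time.

```python
# Toy cost model comparing two stochastic optimizers: one with many
# cheap iterations (SGD-like) and one with fewer iterations but far
# more shots per iteration. Which is faster depends on both the
# per-iteration latency and the per-shot execution time.
def wall_clock(iterations, shots_per_iter, latency, shot_time):
    return iterations * (latency + shots_per_iter * shot_time)

# When latency dominates, the few-iteration/many-shot method wins ...
assert wall_clock(10_000, 10, latency=1.0, shot_time=1e-4) > \
       wall_clock(100, 2_000, latency=1.0, shot_time=1e-4)
# ... when shot time dominates, the ordering flips.
assert wall_clock(10_000, 10, latency=1e-3, shot_time=1.0) < \
       wall_clock(100, 2_000, latency=1e-3, shot_time=1.0)
```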

]]>In the framework of ontological models, the inherently nonclassical features of quantum theory always seem to involve properties that are fine tuned, i.e., properties that hold at the operational level but break at the ontological level. Their appearance at the operational level is due to unexplained special choices of the ontological parameters, which is what we mean by a fine tuning. Famous examples of such features are contextuality and nonlocality. In this article, we develop a theory-independent mathematical framework for characterizing operational fine tunings. These are distinct from causal fine tunings – already introduced by Wood and Spekkens in [NJP 17, 033002 (2015)] – as the definition of an operational fine tuning does not involve any assumptions about the underlying causal structure. We show how known examples of operational fine tunings, such as Spekkens' generalized contextuality, violation of parameter independence in Bell experiments, and ontological time asymmetry, fit into our framework. We discuss the possibility of finding new fine tunings, and we use the framework to shed new light on the relation between nonlocality and generalized contextuality. Although nonlocality has often been argued to be a form of contextuality, this is only true when nonlocality consists of a violation of parameter independence. We also formulate our framework in the language of category theory using the concept of functors.

Superdeterminism and Retrocausality – International Centre for Philosophy, Bonn (Germany), 17-20/05/2022. Contributed talk at Quantum Physics and Logic (online due to the pandemic), 1-5/06/2020. Seminar at Perimeter Institute, Waterloo (Canada), 13/09/2019.

]]>In the minimal scenario of quantum correlations, two parties can choose from two observables with two possible outcomes each. Probabilities are specified by four marginals and four correlations. The resulting four-dimensional convex body of correlations, denoted $\mathcal{Q}$, is fundamental for quantum information theory. We review and systematize what is known about $\mathcal{Q}$, and add many details, visualizations, and complete proofs. In particular, we provide a detailed description of the boundary, which consists of three-dimensional faces isomorphic to elliptopes and sextic algebraic manifolds of exposed extreme points. These patches are separated by cubic surfaces of non-exposed extreme points. We provide a trigonometric parametrization of all extreme points, along with their exposing Tsirelson inequalities and quantum models. All non-classical extreme points (exposed or not) are self-testing, i.e., realized by an essentially unique quantum model. Two principles, which are specific to the minimal scenario, allow a quick and complete overview: The first is the pushout transformation, i.e., the application of the sine function to each coordinate. This transforms the classical correlation polytope exactly into the correlation body $\mathcal{Q}$, also identifying the boundary structures. The second principle, self-duality, is an isomorphism between $\mathcal{Q}$ and its polar dual, i.e., the set of affine inequalities satisfied by all quantum correlations (``Tsirelson inequalities''). The same isomorphism links the polytope of classical correlations contained in $\mathcal{Q}$ to the polytope of no-signalling correlations, which contains $\mathcal{Q}$. We also discuss the sets of correlations achieved with fixed Hilbert space dimension, fixed state or fixed observables, and establish a new non-linear inequality for $\mathcal{Q}$ involving the determinant of the correlation matrix.
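A concrete anchor for this scenario is the Tsirelson bound, which marks an exposed extreme point of $\mathcal{Q}$. The sketch below (my own helper names; the standard optimal angles for the $|\Phi^+\rangle$ state with measurements in the X-Z plane, for which $E(a,b)=\cos(a-b)$) checks numerically that the CHSH combination of the four correlations reaches $2\sqrt{2}$:

```python
import numpy as np

# Correlation E(a, b) = cos(a - b) for the |Phi+> state with both
# measurements taken in the X-Z plane at angles a and b.
def correlation(a, b):
    return np.cos(a - b)

a0, a1 = 0.0, np.pi / 2          # Alice's two settings
b0, b1 = np.pi / 4, -np.pi / 4   # Bob's two settings
chsh = (correlation(a0, b0) + correlation(a0, b1)
        + correlation(a1, b0) - correlation(a1, b1))
assert np.isclose(chsh, 2 * np.sqrt(2))  # Tsirelson's bound
```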

]]>Atomic clock interferometers are a valuable tool to test the interface between quantum theory and gravity, in particular via the measurement of gravitational time dilation in the quantum regime. Here, we investigate whether gravitational time dilation may also be used as a resource in quantum information theory. In particular, we show that for a freely falling interferometer and for a Mach-Zehnder interferometer, the gravitational time dilation may enhance the precision in estimating the gravitational acceleration for long interferometric times. To this end, the interferometric measurements should be performed on both the path and the clock degrees of freedom.

]]>The quantum switch is a quantum computational primitive that provides computational advantage by applying operations in a superposition of orders. In particular, it can reduce the number of gate queries required for solving promise problems where the goal is to discriminate between a set of properties of a given set of unitary gates. In this work, we use complex Hadamard matrices to introduce more general promise problems, which reduce to the known Fourier and Hadamard promise problems as limiting cases. Our generalization loosens the restrictions on the size of the matrices, number of gates and dimension of the quantum systems, providing more parameters to explore. In addition, it leads to the conclusion that a continuous variable system is necessary to implement the most general promise problem. In the finite dimensional case, the family of matrices is restricted to the so-called Butson-Hadamard type, and the complexity of the matrix enters as a constraint. We introduce the ``query per gate'' parameter and use it to prove that the quantum switch provides computational advantage for both the continuous and discrete cases. Our results should inspire implementations of promise problems using the quantum switch, where parameters and therefore experimental setups can be chosen much more freely.
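The matrix families involved are easy to verify numerically. The short check below (illustrative, with my own helper name) confirms that the $d \times d$ discrete Fourier matrix is a Butson-Hadamard matrix $BH(d, d)$: every entry is a $d$-th root of unity and distinct rows are orthogonal.

```python
import numpy as np

# The d x d Fourier matrix F[j, k] = w^(j k) with w = exp(2 pi i / d).
def fourier_matrix(d):
    w = np.exp(2j * np.pi / d)
    return np.array([[w ** (j * k) for k in range(d)] for j in range(d)])

F = fourier_matrix(4)
assert np.allclose(np.abs(F), 1)                   # unimodular entries
assert np.allclose(F @ F.conj().T, 4 * np.eye(4))  # orthogonal rows
```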

]]>A proof of work (PoW) is an important cryptographic construct enabling a party to convince others that they invested some effort in solving a computational task. Arguably, its main impact has been in the setting of cryptocurrencies such as Bitcoin and its underlying blockchain protocol, which received significant attention in recent years due to its potential for various applications as well as for solving fundamental distributed computing questions in novel threat models. PoWs enable the linking of blocks in the blockchain data structure, and thus the problem of interest is the feasibility of obtaining a sequence (chain) of such proofs. In this work, we examine the hardness of finding such a chain of PoWs against quantum strategies. We prove that the chain-of-PoWs problem reduces to a problem we call multi-solution Bernoulli search, for which we establish its quantum query complexity. Effectively, this is an extension of a threshold direct product theorem to an average-case unstructured search problem. Our proof, adding to active recent efforts, simplifies and generalizes the recording technique of Zhandry (Crypto'19). As an application, we revisit the formal treatment of security of the core of the Bitcoin consensus protocol, the Bitcoin backbone (Eurocrypt'15), against quantum adversaries while honest parties are classical, and show that the protocol's security holds under a quantum analogue of the classical ``honest majority'' assumption. Our analysis indicates that the security of the Bitcoin backbone is guaranteed provided the number of adversarial quantum queries is bounded so that each quantum query is worth $O(p^{-1/2})$ classical ones, where $p$ is the success probability of a single classical query to the protocol's underlying hash function. Somewhat surprisingly, the wait time for safe settlement in the case of quantum adversaries matches the safe settlement time in the classical case.
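A back-of-the-envelope reading of the $O(p^{-1/2})$ statement (my own arithmetic illustration with constants dropped, not the paper's bound): finding one PoW solution takes about $1/p$ classical hash queries in expectation versus $1/\sqrt{p}$ Grover-type quantum queries, so one quantum query is worth roughly $p^{-1/2}$ classical ones.

```python
import math

# Expected query counts to find one PoW solution, constants dropped.
def classical_queries(p):
    return 1 / p                # expected classical hash queries

def quantum_queries(p):
    return 1 / math.sqrt(p)    # Grover-type scaling

p = 2 ** -20  # success probability of a single classical hash query
ratio = classical_queries(p) / quantum_queries(p)
assert ratio == math.sqrt(1 / p)  # each quantum query ~ p^(-1/2) classical
```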

]]>We consider a nonlinearly coupled electromechanical system, and develop a quantitative theory for two-phonon cooling. In the presence of two-phonon cooling, the mechanical Hilbert space is effectively reduced to its ground and first excited states, allowing for quantum operations at the level of individual phonons and preparing nonclassical mechanical states with negative Wigner functions. We propose a scheme for performing arbitrary Bloch sphere rotations, and derive the fidelity in the specific case of a $\pi$-pulse. We characterise detrimental processes that reduce the coherence in the system, and demonstrate that our scheme can be implemented in state-of-the-art electromechanical devices.

]]>Active quantum error correction is a central ingredient to achieve robust quantum processors. In this paper we investigate the potential of quantum machine learning for quantum error correction in a quantum memory. Specifically, we demonstrate how quantum neural networks, in the form of quantum autoencoders, can be trained to learn optimal strategies for active detection and correction of errors, including spatially correlated computational errors as well as qubit losses. We highlight that the denoising capabilities of quantum autoencoders are not limited to the protection of specific states but extend to the entire logical codespace. We also show that quantum neural networks can be used to discover new logical encodings that are optimally adapted to the underlying noise. Moreover, we find that, even in the presence of moderate noise in the quantum autoencoders themselves, they may still be successfully used to perform beneficial quantum error correction and thereby extend the lifetime of a logical qubit.

A simple scheme is presented for realizing robust optically controlled quantum gates for scalable atomic quantum processors by driving the qubits with optical standing waves. Atoms localized close to the antinodes of the standing wave can realize phase-controlled quantum operations that are potentially more than an order of magnitude less sensitive to the local optical phase and atomic motion than corresponding travelling-wave configurations. The scheme is compatible with robust optimal control techniques and spatial qubit addressing in atomic arrays to realize phase-controlled operations without the need for tight focusing and precise positioning of the control lasers. This will be particularly beneficial for quantum gates involving Doppler-sensitive optical frequency transitions and provides an all-optical route to scaling up atomic quantum processors.
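The claimed suppression of phase and motional sensitivity can be seen from a back-of-the-envelope comparison (illustrative numbers, not taken from the paper): a travelling wave imprints a control phase $kx$ that drifts linearly with atomic displacement, whereas at a standing-wave antinode the drive amplitude varies as $\cos(kx)$, so the leading error is quadratic in the displacement.

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
delta = 0.01 * wavelength  # assumed atomic displacement: 1% of a wavelength

# Travelling wave: the control phase shifts by k*delta (first order in delta).
phase_error_travelling = k * delta

# Standing wave at an antinode: drive amplitude ~ cos(k x), so the fractional
# Rabi-frequency error is 1 - cos(k*delta) ~ (k*delta)^2 / 2 (second order).
amplitude_error_standing = 1 - np.cos(k * delta)

print(phase_error_travelling, amplitude_error_standing)
print(phase_error_travelling / amplitude_error_standing)  # > 10x suppression here
```

For this (assumed) 1%-of-a-wavelength displacement the second-order standing-wave error is more than an order of magnitude below the first-order travelling-wave phase error, consistent with the abstract's scaling claim.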

Self-correcting quantum memories demonstrate robust properties that can be exploited to improve active quantum error-correction protocols. Here we propose a cellular automaton decoder for a variation of the color code where the bases of the physical qubits are locally rotated, which we call the XYZ color code. The local transformation means our decoder demonstrates key properties of a two-dimensional fractal code if the noise acting on the system is infinitely biased towards dephasing, namely, no string-like logical operators. As such, in the high-bias limit, our local decoder reproduces the behavior of a partially self-correcting memory. At low error rates, our simulations show that the memory time diverges polynomially with system size without intervention from a global decoder, up to some critical system size that grows as the error rate is lowered. Furthermore, although we find that we cannot reproduce partially self-correcting behavior at finite bias, our numerics demonstrate improved memory times at realistic noise biases. Our results therefore motivate the design of tailored cellular automaton decoders that help to reduce the bandwidth demands of global decoding for realistic noise models.
