We introduce a simple sub-universal quantum computing model, which we call the Hadamard-classical circuit with one-qubit (HC1Q) model. It consists of a classical reversible circuit sandwiched between two layers of Hadamard gates, and therefore it is in the second level of the Fourier hierarchy. We show that output probability distributions of the HC1Q model cannot be classically efficiently sampled within a multiplicative error unless the polynomial-time hierarchy collapses to the second level. The proof technique is different from those used for previous sub-universal models, such as IQP, Boson Sampling, and DQC1, and therefore the technique itself might be useful for finding other sub-universal models that are hard to classically simulate. We also study the classical verification of quantum computing in the second level of the Fourier hierarchy. To this end, we define a promise problem, which we call probability distribution distinguishability with maximum norm (PDD-Max): decide whether the output probability distributions of two quantum circuits are far apart or close. We show that PDD-Max is BQP-complete, but if the two circuits are restricted to some types in the second level of the Fourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a Merlin-Arthur system with quantum polynomial-time Merlin and classical probabilistic polynomial-time Arthur.

Quantum 2, 106 (2018). https://doi.org/10.22331/q-2018-11-15-106
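
The HC1Q sandwich structure can be illustrated numerically. The sketch below (not from the paper; the exact placement of gates in the HC1Q definition should be taken from the text itself) builds a circuit of the form "Hadamards on all qubits, then a classical reversible circuit, then Hadamards again", where the classical circuit is represented as a permutation of computational basis states. The CNOT-cascade function `f` is a hypothetical example of a reversible classical circuit.

```python
import numpy as np
from itertools import product

n = 3                      # number of qubits (illustrative)
dim = 2 ** n

# Layer of Hadamards on every qubit: H^{\otimes n}.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)

# A classical reversible circuit is a permutation of basis states.
# Hypothetical example: a CNOT cascade x -> (x0, x0^x1, x1^x2).
def f(x):
    return (x[0], x[0] ^ x[1], x[1] ^ x[2])

perm = np.zeros((dim, dim))
for x in product([0, 1], repeat=n):
    i = int(''.join(map(str, x)), 2)
    j = int(''.join(map(str, f(x))), 2)
    perm[j, i] = 1.0

# Sandwich: Hadamards, classical circuit, Hadamards, applied to |0...0>.
state = H @ perm @ H @ np.eye(dim)[:, 0]
probs = state ** 2         # amplitudes are real here, so |.|^2 = square
print(np.round(probs, 3))  # the output distribution of the sandwich circuit
```

The hardness result above concerns sampling from exactly this kind of output distribution; brute-force simulation, as here, costs time exponential in `n`.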

Markov chain methods are remarkably successful in computational physics, machine learning, and combinatorial optimization. The cost of such methods often reduces to the mixing time, i.e., the time required to reach the steady state of the Markov chain, which scales as $\delta^{-1}$, the inverse of the spectral gap. It has long been conjectured that quantum computers offer nearly generic quadratic improvements for mixing problems. However, except in special cases, quantum algorithms achieve a run-time of $\mathcal{O}(\sqrt{\delta^{-1}} \sqrt{N})$, which introduces a costly dependence on the Markov chain size $N$, not present in the classical case. Here, we re-address the problem of mixing of Markov chains when they form a slowly evolving sequence. This setting is akin to the simulated annealing setting and is commonly encountered in physics, materials science, and machine learning. We provide a quantum memory-efficient algorithm with a run-time of $\mathcal{O}(\sqrt{\delta^{-1}} \sqrt[4]{N})$, neglecting logarithmic terms, which is an important improvement for large state spaces. Moreover, our algorithms output quantum encodings of distributions, which has advantages over classical outputs. Finally, we discuss the run-time bounds of mixing algorithms and show that, under certain assumptions, our algorithms are optimal.

Quantum 2, 105 (2018). https://doi.org/10.22331/q-2018-11-09-105
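
The central quantity here, the spectral gap $\delta = 1 - |\lambda_2|$, is easy to compute for small chains. A minimal sketch (illustrative, not from the paper), using a lazy random walk on a cycle as the example chain:

```python
import numpy as np

# Transition matrix of a lazy random walk on a cycle of N states
# (rows sum to 1; laziness makes the chain aperiodic, so it mixes).
N = 64
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5
    P[i, (i - 1) % N] = 0.25
    P[i, (i + 1) % N] = 0.25

# Spectral gap: delta = 1 - |lambda_2|, with lambda_2 the
# second-largest eigenvalue in absolute value.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
delta = 1.0 - eigvals[1]

# Classical mixing time scales as ~ 1/delta; the quantum algorithms
# discussed above target ~ sqrt(1/delta) times N-dependent factors.
print(delta, 1.0 / delta)
```

For this cycle, $\delta = \tfrac{1}{2}(1 - \cos(2\pi/N)) \approx \pi^2/N^2$, so the classical $\delta^{-1}$ cost grows quadratically with the cycle length.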

Using the existing classification of all alternatives to the measurement postulates of quantum theory, we study the properties of bipartite systems in these alternative theories. We prove that in all these theories the purification principle is violated, meaning that some mixed states are not the reduction of a pure state in a larger system. This allows us to derive the measurement postulates of quantum theory from the structure of pure states and reversible dynamics, together with the requirement that the purification principle holds. The violation of the purification principle implies that there is some irreducible classicality in these theories, which appears to be an important clue for the problem of deriving the Born rule within the many-worlds interpretation. We also prove that in all such modifications the task of state tomography with local measurements is impossible, and we present a simple toy theory displaying all these exotic non-quantum phenomena. This toy model shows that, contrary to previous claims, it is possible to modify the Born rule without violating the no-signalling principle. Finally, we argue that the quantum measurement postulates are the most non-classical amongst all alternatives.

Quantum 2, 104 (2018). https://doi.org/10.22331/q-2018-11-06-104

We discuss the possibility of creating money that is physically impossible to counterfeit. Of course, "physically impossible" depends on which theory is a faithful description of nature. Currently there are several proposals for quantum money whose security is based on the validity of quantum mechanics. In this work, we examine Wiesner's money scheme in the framework of generalised probabilistic theories. This framework is broad enough to allow for essentially any potential theory of nature, provided that it admits an operational description. We prove that under a quantifiable version of the no-cloning theorem, one can create physical money which has an exponentially small chance of being counterfeited. Our proof relies on cone programming, a natural generalisation of semidefinite programming. Moreover, we discuss some of the difficulties that arise when considering non-quantum theories.

Quantum 2, 103 (2018). https://doi.org/10.22331/q-2018-11-02-103
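
The "exponentially small chance of counterfeiting" can be made concrete for the quantum case. The numbers below are from the standard literature on Wiesner's scheme (not from this abstract): a naive measure-and-resend attack on BB84-encoded money passes verification of both counterfeit copies with probability 5/8 per qubit, while the known optimal attack achieves 3/4 per qubit. A minimal per-qubit bookkeeping sketch:

```python
# Naive measure-and-resend attack on Wiesner money (BB84 states):
# the attacker measures each qubit in a uniformly random basis and
# sends two copies of the outcome state to be verified.
p_right = 0.5        # guessed the encoding basis correctly
p_wrong = 0.5        # wrong basis: each copy passes with prob 1/2
both_pass = p_right * 1.0 + p_wrong * (0.5 * 0.5)
print(both_pass)     # 5/8 per qubit for this naive attack

# Over n qubits the success probability decays exponentially,
# which is the quantitative content of "exponentially small chance".
n = 10
print(both_pass ** n)
```

The paper's contribution is to show that an analogous exponential bound survives in any generalised probabilistic theory satisfying a quantitative no-cloning property, with cone programming replacing the semidefinite programs used in the quantum analysis.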

Fault tolerance is a prerequisite for scalable quantum computing. Architectures based on 2D topological codes are effective for near-term implementations of fault tolerance. To obtain high performance with these architectures, we require a decoder which can adapt to the wide variety of error models present in experiments. The typical approach to the problem of decoding the surface code is to reduce it to minimum-weight perfect matching in a way that provides a suboptimal threshold error rate, and is specialized to correct a specific error model. Recently, optimal threshold error rates for a variety of error models have been obtained by methods which do not use minimum-weight perfect matching, showing that such thresholds can be achieved in polynomial time. It is an open question whether these results can also be achieved by minimum-weight perfect matching. In this work, we use belief propagation and a novel algorithm for producing edge weights to increase the utility of minimum-weight perfect matching for decoding surface codes. This allows us to correct depolarizing errors using the rotated surface code, obtaining a threshold of $17.76 \pm 0.02 \%$. This is larger than the threshold achieved by previous matching-based decoders ($14.88 \pm 0.02 \%$), though still below the known upper bound of $\sim 18.9 \%$.

Quantum 2, 102 (2018). https://doi.org/10.22331/q-2018-10-19-102
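
The matching step at the heart of such decoders can be sketched in a few lines. This toy example (illustrative only, not the paper's decoder; the defect coordinates are hypothetical) pairs syndrome defects of a 1D repetition code so that total pairing weight is minimized, using the standard trick of running maximum-weight matching on negated weights:

```python
import networkx as nx

# Hypothetical defect coordinates on a line of a repetition code.
defects = [1, 2, 7, 8]

G = nx.Graph()
for i, a in enumerate(defects):
    for j, b in enumerate(defects):
        if i < j:
            # Negate weights: maximum-weight matching on -w yields
            # the minimum-weight perfect matching.
            G.add_edge(i, j, weight=-abs(a - b))

matching = nx.max_weight_matching(G, maxcardinality=True)
pairs = sorted(tuple(sorted(e)) for e in matching)
print(pairs)  # nearest defects are matched pairwise
```

The paper's contribution sits upstream of this step: belief propagation produces better edge weights `w`, so that the same matching routine corrects correlated (e.g., depolarizing) errors far more effectively.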

The color code is both an interesting example of an exactly solved topologically ordered phase of matter and also among the most promising candidate models to realize fault-tolerant quantum computation with minimal resource overhead. The contributions of this work are threefold. First of all, we build upon the abstract theory of boundaries and domain walls of topological phases of matter to comprehensively catalog the objects realizable in color codes. Together with our classification we also provide lattice representations of these objects which include three new types of boundaries as well as a generating set for all 72 color code twist defects. Our work thus provides an explicit toy model that will help to better understand the abstract theory of domain walls. Secondly, we discover a number of interesting new applications of the cataloged objects for quantum information protocols. These include improved methods for performing quantum computations by code deformation, a new four-qubit error-detecting code, as well as families of new quantum error-correcting codes we call stellated color codes, which encode logical qubits at the same distance as the next best color code, but using approximately half the number of physical qubits. To the best of our knowledge, our new topological codes have the highest encoding rate of local stabilizer codes with bounded-weight stabilizers in two dimensions. Finally, we show how the boundaries and twist defects of the color code are represented by multiple copies of other phases. Indeed, in addition to the well-studied comparison between the color code and two copies of the surface code, we also compare the color code to two copies of the three-fermion model. In particular, we find that this analogy offers a very clear lens through which we can view the symmetries of the color code which give rise to its multitude of domain walls.

Quantum 2, 101 (2018). https://doi.org/10.22331/q-2018-10-19-101

Coherent superposition is a key feature of quantum mechanics that underlies the advantage of quantum technologies over their classical counterparts. Recently, coherence has been recast as a resource theory in an attempt to identify and quantify it in an operationally well-defined manner. Here we study how the coherence present in a state can be used to implement a quantum channel via incoherent operations and, in turn, to assess its degree of coherence. We introduce the robustness of coherence of a quantum channel, which reduces to the homonymous measure for states when computed on constant-output channels, and prove that: i) it quantifies the minimal rank of a maximally coherent state required to implement the channel; ii) its logarithm quantifies the amortized cost of implementing the channel provided some coherence is recovered at the output; iii) its logarithm also quantifies the zero-error asymptotic cost of implementation of many independent copies of a channel. We also consider the generalized problem of imperfect implementation with arbitrary resource states. Using the robustness of coherence, we find that in general a quantum channel can be implemented without employing a maximally coherent resource state. In fact, we prove that every pure coherent state in dimension larger than $2$, however weakly coherent, turns out to be a valuable resource to implement some coherent unitary channel. We illustrate our findings for the case of single-qubit unitary channels.

Quantum 2, 100 (2018). https://doi.org/10.22331/q-2018-10-19-100
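
For orientation, the state version of this measure has a simple closed form in the smallest case: for a single qubit the robustness of coherence coincides with the $\ell_1$-norm of coherence, $C(\rho) = 2|\rho_{01}|$ (a known result for qubit states; the channel measure introduced in the paper generalizes this). A minimal sketch:

```python
import numpy as np

# Robustness of coherence for a single-qubit state, using the known
# qubit closed form C(rho) = 2*|rho_01| (equal to the l1-coherence).
def robustness_of_coherence_qubit(rho):
    return 2.0 * abs(rho[0, 1])

# Maximally coherent qubit state |+><+|:
plus = np.full((2, 2), 0.5)
print(robustness_of_coherence_qubit(plus))      # 1.0

# Partial dephasing halves the off-diagonal, and hence the robustness.
dephased = np.array([[0.5, 0.25], [0.25, 0.5]])
print(robustness_of_coherence_qubit(dephased))  # 0.5
```

In higher dimensions, and for channels as in the paper, no such closed form is available and the measure is evaluated by convex optimization.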

One of the reasons for the heated debates around the interpretations of quantum theory is a simple confusion between the notions of formalism $\textit{versus}$ interpretation. In this note, we make a clear distinction between them and show that there are actually two $\textit{inequivalent}$ quantum formalisms, namely the relative-state formalism and the standard formalism with the Born and measurement-update rules. We further propose a different probability rule for the relative-state formalism and discuss how Wigner's-friend-type experiments could show the inequivalence with the standard formalism. The feasibility in principle of such experiments, however, remains an open question.

Quantum 2, 99 (2018). https://doi.org/10.22331/q-2018-10-15-99

The $\textit{quantum strategy}$ (or $\textit{quantum combs}$) framework is a useful tool for reasoning about interactions among entities that process and exchange quantum information over the course of multiple turns. We prove a time-reversal property for a class of linear functions, defined on quantum strategy representations within this framework, that corresponds to the set of rank-one positive semidefinite operators on a certain space. This time-reversal property states that the maximum value obtained by such a function over all valid quantum strategies is also obtained when the direction of time for the function is reversed, despite the fact that the strategies themselves are generally not time reversible. An application of this fact is an alternative proof of a known relationship between the conditional min- and max-entropy of bipartite quantum states, along with generalizations of this relationship.

Quantum 2, 98 (2018). https://doi.org/10.22331/q-2018-10-04-98

Quantum emitters coupled to structured photonic reservoirs experience unconventional individual and collective dynamics emerging from the interplay between dimensionality and non-trivial photon energy dispersions. In this work, we systematically study several paradigmatic three-dimensional structured baths with qualitative differences in their bath spectral density. We discover non-Markovian individual and collective effects absent in simplified descriptions, such as perfect subradiant states or long-range anisotropic interactions. Furthermore, we show how to implement these models using only cold atoms in state-dependent optical lattices and how these unconventional dynamics can be observed in such systems.

Quantum 2, 97 (2018). https://doi.org/10.22331/q-2018-10-01-97

For both unitary and open qubit dynamics, we compare asymmetry monotone-based bounds on the minimal time required for an initial qubit state to evolve to a final qubit state from which it is probabilistically distinguishable with fixed minimal error probability (i.e., the minimal error distinguishability time). For the case of unitary dynamics generated by a time-independent Hamiltonian, we derive a necessary and sufficient condition on two asymmetry monotones that guarantees that an arbitrary state of a two-level quantum system or a separable state of $N$ two-level quantum systems will unitarily evolve to another state from which it can be distinguished with a fixed minimal error probability $\delta \in [0,1/2]$. This condition is used to order the set of qubit states based on their distinguishability time, and to derive an optimal release time for driven two-level systems such as those that occur, e.g., in the Landau-Zener problem. For the case of non-unitary dynamics, we compare three lower bounds to the distinguishability time, including a new type of lower bound which is formulated in terms of the asymmetry of the uniformly time-twirled initial system-plus-environment state with respect to the generator $H_{SE}$ of the Stinespring isometry corresponding to the dynamics, specifically, in terms of $\Vert [H_{SE},\rho_{\text{av}}(\tau)]\Vert_{1}$, where $\rho_{\text{av}}(\tau):={1\over \tau}\int_{0}^{\tau}dt\, e^{-iH_{SE}t}\rho \otimes \vert 0\rangle_{E}\langle 0\vert_{E} e^{iH_{SE}t}$.

Quantum 2, 96 (2018). https://doi.org/10.22331/q-2018-10-01-96

We provide a classification of translation invariant one-dimensional quantum walks with respect to continuous deformations preserving unitarity, locality, translation invariance, a gap condition, and some symmetry of the tenfold way. The classification largely matches the one recently obtained (arXiv:1611.04439) for a similar setting leaving out translation invariance. However, the translation invariant case has some finer distinctions, because some walks may be connected only by breaking translation invariance along the way, retaining only invariance by an even number of sites. Similarly, if walks are considered equivalent when they differ only by adding a trivial walk, i.e., one that allows no jumps between cells, then the classification also collapses to the general one. The indices of the general classification can be computed in practice only for walks closely related to some translation invariant ones. We prove a complete collection of simple formulas in terms of winding numbers of band structures covering all symmetry types. Furthermore, we determine the strength of the locality conditions, and show that the continuity of the band structure, which is a minimal requirement for topological classifications in terms of winding numbers to make sense, implies the compactness of the commutator of the walk with a half-space projection, a condition which was also the basis of the general theory. In order to apply the theory to the joining of large but finite bulk pieces, one needs to determine the asymptotic behaviour of solutions of a stationary Schrödinger equation. We show exponential behaviour, and give a practical method for computing the decay constants.

Quantum 2, 95 (2018). https://doi.org/10.22331/q-2018-09-24-95
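
The winding numbers of band structures that appear in these index formulas are straightforward to evaluate numerically. A minimal sketch (illustrative only; the SSH-like band vector below is a hypothetical stand-in for an actual walk's band structure): track the angle of the closed curve $k \mapsto d(k)$ in the plane and count how often it encircles the origin.

```python
import numpy as np

# Numerical winding number of a closed planar curve d(k) around the
# origin -- the kind of index that classifies chiral-symmetric
# translation-invariant models by their band structure.
def winding_number(dx, dy):
    angles = np.unwrap(np.arctan2(dy, dx))
    return round((angles[-1] - angles[0]) / (2 * np.pi))

k = np.linspace(0, 2 * np.pi, 501)

# Hypothetical SSH-like band vector d(k) = (t1 + t2*cos k, t2*sin k):
t1, t2 = 0.5, 1.0   # |t2| > |t1|: curve encircles the origin once
print(winding_number(t1 + t2 * np.cos(k), t2 * np.sin(k)))   # 1

t1, t2 = 1.0, 0.5   # |t2| < |t1|: origin not enclosed, trivial index
print(winding_number(t1 + t2 * np.cos(k), t2 * np.sin(k)))   # 0
```

The continuity requirement on the band structure discussed above is exactly what makes such a winding number well defined: a discontinuous $d(k)$ has no meaningful accumulated angle.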

Feynman's circuit-to-Hamiltonian construction connects quantum computation and ground states of many-body quantum systems. Kitaev applied this construction to demonstrate QMA-completeness of the local Hamiltonian problem, and Aharonov et al. used it to show the equivalence of adiabatic computation and the quantum circuit model. In this work, we analyze the low energy properties of a class of modified circuit Hamiltonians, which include features like complex weights and branching transitions. For history states with linear clocks and complex weights, we develop a method for modifying the circuit propagation Hamiltonian to implement any desired distribution over the time steps of the circuit in a frustration-free ground state, and show that this can be used to obtain a constant output probability for universal adiabatic computation while retaining the $\Omega(T^{-2})$ scaling of the spectral gap, and without any additional overhead in terms of numbers of qubits. Furthermore, we establish limits on the increase in the ground energy due to input and output penalty terms for modified tridiagonal clocks with non-uniform distributions on the time steps by proving a tight $O(T^{-2})$ upper bound on the product of the spectral gap and ground state overlap with the endpoints of the computation. Using variational techniques which go beyond the $\Omega(T^{-3})$ scaling that follows from the usual geometrical lemma, we prove that the standard Feynman-Kitaev Hamiltonian already saturates this bound. We review the formalism of unitary labeled graphs which replace the usual linear clock by graphs that allow branching and loops, and we extend the $O(T^{-2})$ bound from linear clocks to this more general setting. In order to achieve this, we apply Chebyshev polynomials to generalize an upper bound on the spectral gap in terms of the graph diameter to the context of arbitrary Hermitian matrices.

Quantum 2, 94 (2018). https://doi.org/10.22331/q-2018-09-19-94

We present a quantum repeater scheme that is based on individual erbium and europium ions. Erbium ions are attractive because they emit photons at telecommunication wavelength, while europium ions offer exceptional spin coherence for long-term storage. Entanglement between distant erbium ions is created by photon detection. The photon emission rate of each erbium ion is enhanced by a microcavity with high Purcell factor, as has recently been demonstrated. Entanglement is then transferred to nearby europium ions for storage. Gate operations between nearby ions are performed using dynamically controlled electric-dipole coupling. These gate operations allow entanglement swapping to be employed in order to extend the distance over which entanglement is distributed. The deterministic character of the gate operations allows improved entanglement distribution rates in comparison to atomic ensemble-based protocols. We also propose an approach that utilizes multiplexing in order to enhance the entanglement distribution rate.

Quantum 2, 93 (2018). https://doi.org/10.22331/q-2018-09-13-93

Bell-inequality violations establish that two systems share some quantum entanglement. We give a simple test to certify that two systems share an asymptotically large amount of entanglement, $n$ EPR states. The test is efficient: unlike earlier tests that play many games, in sequence or in parallel, our test requires only one or two CHSH games. One system is directed to play a CHSH game on a random specified qubit $i$, and the other is told to play games on qubits $\{i,j\}$, without knowing which index is $i$. The test is robust: a success probability within $\delta$ of optimal guarantees distance $O(n^{5/2} \sqrt{\delta})$ from $n$ EPR states. However, the test does not tolerate constant $\delta$; it breaks down for $\delta = \tilde\Omega (1/\sqrt{n})$. We give an adversarial strategy that succeeds within $\delta$ of the optimum probability using only $\tilde O(\delta^{-2})$ EPR states.

Quantum 2, 92 (2018). https://doi.org/10.22331/q-2018-09-03-92
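As a concrete reference point (our own illustration, not code from the paper), the single-game optimum against which the test's success probability is measured can be computed directly: the standard optimal CHSH strategy on one EPR pair attains Tsirelson's bound $2\sqrt{2}$, i.e. a winning probability of $\cos^2(\pi/8) \approx 0.854$.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# EPR state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)

# Optimal CHSH observables: Alice measures Z or X,
# Bob measures the rotated combinations (Z ± X)/sqrt(2)
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

def corr(a, b):
    """Correlator <a ⊗ b> in the EPR state."""
    return np.real(phi.conj() @ np.kron(a, b) @ phi)

# CHSH value and the corresponding game-winning probability
S = corr(A[0], B[0]) + corr(A[0], B[1]) + corr(A[1], B[0]) - corr(A[1], B[1])
p_win = 1 / 2 + S / 8

print(S)      # 2*sqrt(2) up to numerical error (Tsirelson's bound)
print(p_win)  # cos^2(pi/8) ≈ 0.854
```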

The dynamical Casimir effect is an intriguing phenomenon in which photons are generated from vacuum due to a non-adiabatic change in some boundary conditions. In particular, it connects the motion of an accelerated mechanical mirror to the generation of photons. While pioneering experiments demonstrating this effect exist, a conclusive measurement involving a mechanical generation is still missing. We show that a hybrid system consisting of a piezoelectric mechanical resonator coupled to a superconducting cavity may allow one to electro-mechanically generate measurable photons from vacuum, intrinsically associated with the dynamical Casimir effect. Such an experiment may be achieved with current technology, based on film bulk acoustic resonators directly coupled to a superconducting cavity. Our results predict a measurable photon generation rate, which can be further increased through additional improvements such as using superconducting metamaterials.

Quantum 2, 91 (2018). https://doi.org/10.22331/q-2018-09-03-91

Correlations between different partitions of quantum systems play a central role in a variety of many-body quantum systems, and they have been studied exhaustively in experimental and theoretical research. Here, we investigate dynamical correlations in the time evolution of multiple parts of a composite quantum system. A rigorous measure to quantify correlations in quantum dynamics based on a full tomographic reconstruction of the quantum process has been introduced recently [Á. Rivas et al., New Journal of Physics, 17(6) 062001 (2015)]. In this work, we derive a lower bound for this correlation measure, which does not require full knowledge of the quantum dynamics. Furthermore, we extend the correlation measure to multipartite systems. We directly apply the developed methods to a trapped ion quantum information processor to experimentally characterize the correlations in quantum dynamics for two- and four-qubit systems. The method proposed and demonstrated in this work is scalable, platform-independent and applicable to other composite quantum systems and quantum information processing architectures. We apply the method to estimate spatial correlations in environmental noise processes, which are crucial for the performance of quantum error correction procedures.

Quantum 2, 90 (2018). https://doi.org/10.22331/q-2018-09-03-90

We introduce a definition of the fidelity function for multi-round quantum strategies, which we call the $\textit{strategy fidelity}$, that is a generalization of the fidelity function for quantum states. We provide many properties of the strategy fidelity including a Fuchs-van de Graaf relationship with the strategy norm. We also provide a general monotonicity result for both the strategy fidelity and strategy norm under the actions of strategy-to-strategy linear maps. We illustrate an operational interpretation of the strategy fidelity in the spirit of Uhlmann's Theorem and discuss its application to the security analysis of quantum protocols for interactive cryptographic tasks such as bit-commitment and oblivious string transfer. Our analysis is general in the sense that the actions of the protocol need not be fully specified, which is in stark contrast to most other security proofs. Lastly, we provide a semidefinite programming formulation of the strategy fidelity.

Quantum 2, 89 (2018). https://doi.org/10.22331/q-2018-09-03-89

Majorana-based quantum computing seeks to use the non-local nature of Majorana zero modes to store and manipulate quantum information in a topologically protected way. While noise is anticipated to be significantly suppressed in such systems, finite temperature and system size result in residual errors. In this work, we connect the underlying physical error processes in Majorana-based systems to the noise models used in a fault tolerance analysis. Standard qubit-based noise models built from Pauli operators do not capture leading order noise processes arising from quasiparticle poisoning events, thus it is not obvious $\textit{a priori}$ that such noise models can be usefully applied to a Majorana-based system. We develop stochastic Majorana noise models that are generalizations of the standard qubit-based models and connect the error probabilities defining these models to parameters of the physical system. Using these models, we compute pseudo-thresholds for the $d=5$ Bacon-Shor subsystem code. Our results emphasize the importance of correlated errors induced in multi-qubit measurements. Moreover, we find that for sufficiently fast quasiparticle relaxation the errors are well described by Pauli operators. This work bridges the divide between physical errors in Majorana-based quantum computing architectures and the significance of these errors in a quantum error correcting code.

Quantum 2, 88 (2018). https://doi.org/10.22331/q-2018-09-03-88

Ernst Specker considered a particular feature of quantum theory to be especially fundamental, namely that pairwise joint measurability of sharp measurements implies their global joint measurability ($\href{https://vimeo.com/52923835}{vimeo.com/52923835}$). To date, Specker's principle seemed incapable of singling out quantum theory from the space of all general probabilistic theories. In particular, its well-known consequence for experimental statistics, the principle of consistent exclusivity, does not rule out the set of correlations known as almost quantum, which is strictly larger than the set of quantum correlations. Here we show that, contrary to popular belief, Specker's principle cannot be satisfied in any theory that yields almost quantum correlations.

Quantum 2, 87 (2018). https://doi.org/10.22331/q-2018-08-27-87

In quantum cryptography, device-independent (DI) protocols can be certified secure without requiring assumptions about the inner workings of the devices used to perform the protocol. In order to display nonlocality, which is an essential feature in DI protocols, the device must consist of at least two separate components sharing entanglement. This raises a fundamental question: how much entanglement is needed to run such DI protocols? We present a two-device protocol for DI random number generation (DIRNG) which produces approximately $n$ bits of randomness starting from $n$ pairs of arbitrarily weakly entangled qubits. We also consider a variant of the protocol where $m$ singlet states are diluted into $n$ partially entangled states before performing the first protocol, and show that the number $m$ of singlet states need only scale sublinearly with the number $n$ of random bits produced. Operationally, this leads to a DIRNG protocol between distant laboratories that requires only a sublinear amount of quantum communication to prepare the devices.

Quantum 2, 86 (2018). https://doi.org/10.22331/q-2018-08-22-86

Randomized benchmarking provides a tool for obtaining precise quantitative estimates of the average error rate of a physical quantum channel. Here we define real randomized benchmarking, which enables a separate determination of the average error rate in the real and complex parts of the channel. This provides more fine-grained information about average error rates with approximately the same cost as the standard protocol. The protocol requires only averaging over the real Clifford group, a subgroup of the full complex Clifford group, and makes use of the fact that it forms an orthogonal 2-design. It therefore allows benchmarking of fault-tolerant gates for an encoding which does not contain the full Clifford group transversally. Furthermore, our results are especially useful when considering quantum computations on rebits (or real encodings of complex computations), in which case the real Clifford group now plays the role of the complex Clifford group when studying stabilizer circuits.

Quantum 2, 85 (2018). https://doi.org/10.22331/q-2018-08-22-85

A discrete-time quantum walk (QW) is essentially a unitary operator driving the evolution of a single particle on the lattice. Some QWs have familiar physics PDEs as their continuum limit. Slight generalizations of them (allowing for prior encoding and larger neighbourhoods) even have the curved spacetime Dirac equation as their continuum limit. In the $(1+1)$-dimensional massless case, this equation decouples into scalar transport equations with tunable speeds. We characterise and construct all those QWs that lead to scalar transport with tunable speeds. The local coin operator dictates that speed; we provide concrete techniques to tune the speed of propagation by making use only of a finite number of coin operators, in contrast to previous models, in which the speed of propagation depends upon a continuous parameter of the quantum coin. The interest of such a discretization is twofold: to allow for easier experimental implementations on the one hand, and to evaluate ways of quantizing the metric field on the other.

Quantum 2, 84 (2018). https://doi.org/10.22331/q-2018-08-22-84
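A minimal coined-walk sketch (our own illustration; the paper's walks generalize this construction) shows the two basic ingredients, a coin rotation and a coin-conditioned shift, with the coin angle as the tunable parameter: $\theta = \pi/4$ gives the Hadamard walk, while $\theta = 0$ gives purely ballistic transport at speed 1.

```python
import numpy as np

def dtqw(steps, theta=np.pi / 4):
    """Discrete-time quantum walk on a line with a reflection coin C(theta).

    State: amplitudes psi[x, c] for lattice position x and coin c in {0, 1}.
    One step = coin rotation followed by a coin-conditioned shift.
    theta = pi/4 reproduces the Hadamard walk; theta tunes the spread.
    Returns the position probability distribution after `steps` steps.
    """
    n = 2 * steps + 1                 # positions -steps .. +steps
    psi = np.zeros((n, 2), dtype=complex)
    psi[steps, 0] = 1.0               # walker starts at the origin, coin |0>

    C = np.array([[np.cos(theta),  np.sin(theta)],
                  [np.sin(theta), -np.cos(theta)]])  # Hadamard at theta = pi/4

    for _ in range(steps):
        psi = psi @ C.T               # coin rotation on the internal space
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]  # coin |0> component moves right
        shifted[:-1, 1] = psi[1:, 1]  # coin |1> component moves left
        psi = shifted

    return (np.abs(psi) ** 2).sum(axis=1)

probs = dtqw(50)
print(round(probs.sum(), 6))  # 1.0: the evolution is unitary
```

With `theta=0` the coin is diagonal, so the initial coin-$|0\rangle$ component simply hops right once per step, concentrating all probability at position $+$`steps`.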

We show that a two-level atom resonantly coupled to one of the modes of a cavity field can be used as a sensitive tool to measure the proper acceleration of a combined atom-cavity system. To achieve this, we investigate the transition probability of a two-level atom placed within an ideal cavity and study how it is affected by the acceleration of the whole system. We indicate how to choose the position of the atom as well as its characteristic frequency in order to maximize the sensitivity to acceleration.

Quantum 2, 83 (2018). https://doi.org/10.22331/q-2018-08-18-83

The detection of nonlocal correlations in a Bell experiment implies almost by definition some intrinsic randomness in the measurement outcomes. For given correlations, or for a given Bell violation, the amount of randomness predicted by quantum physics, quantified by the guessing probability, can generally be bounded numerically. However, currently only a few exact analytic solutions are known for violations of the bipartite Clauser-Horne-Shimony-Holt Bell inequality. Here, we study the randomness in a Bell experiment where three parties test the tripartite Mermin-Bell inequality. We give tight upper bounds on the guessing probabilities associated with one and two of the parties' measurement outcomes as a function of the Mermin inequality violation. Finally, we discuss the possibility of device-independent secret sharing based on the Mermin inequality and argue that the idea seems unlikely to work.

Quantum 2, 82 (2018). https://doi.org/10.22331/q-2018-08-17-82
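For reference (our own illustration, not code from the paper), the maximal quantum violation of the tripartite Mermin inequality can be verified directly: on the GHZ state the Mermin operator attains the value 4, against the local-hidden-variable bound of 2.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    """Three-fold tensor product of single-qubit operators."""
    return np.kron(a, np.kron(b, c))

# GHZ state (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

# Mermin operator M = XXX - XYY - YXY - YYX
M = kron3(X, X, X) - kron3(X, Y, Y) - kron3(Y, X, Y) - kron3(Y, Y, X)

value = np.real(ghz.conj() @ M @ ghz)
print(value)  # 4.0 up to numerical error; the LHV bound is 2
```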

All articles published in Quantum and their meta-data are now securely and permanently preserved in the CLOCKSS Archive.

In addition to our own backup strategy, which involves off-site storage of backups in multiple physically separate locations and a thoroughly tested backup-and-restore procedure, and the availability of all papers on the arXiv, this adds another layer of protection to ensure that all scientific works published in Quantum are permanently preserved and remain accessible to the scientific community.

We provide a fine-grained definition for monogamous measures of entanglement that does not invoke any particular monogamy relation. Our definition is given in terms of an equality, as opposed to an inequality, that we call the "disentangling condition". We relate our definition to the more traditional one, by showing that it generates standard monogamy relations. We then show that all quantum Markov states satisfy the disentangling condition for any entanglement monotone. In addition, we demonstrate that entanglement monotones that are given in terms of a convex roof extension are monogamous if they are monogamous on pure states, and show that for any quantum state that satisfies the disentangling condition, its entanglement of formation equals the entanglement of assistance. We characterize all bipartite mixed states with this property, and use it to show that the G-concurrence is monogamous. In the case of two qubits, we show that the equality between entanglement of formation and assistance holds if and only if the state is a rank-2 bipartite state that can be expressed as the marginal of a pure 3-qubit state in the W class.

Quantum 2, 81 (2018). https://doi.org/10.22331/q-2018-08-13-81
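A standard monogamy relation of the kind this definition generates, the Coffman-Kundu-Wootters inequality for squared concurrence, can be checked numerically (our own numpy sketch, not code from the paper). The W state, whose marginals the abstract singles out, in fact saturates it with equality:

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
YY = np.kron(Y, Y)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    rho_t = YY @ rho.conj() @ YY                       # spin-flipped state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_t)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# W state (|001> + |010> + |100>)/sqrt(3)
W = np.zeros(8, dtype=complex)
W[[1, 2, 4]] = 1 / np.sqrt(3)
rho = np.outer(W, W.conj())

# Partial traces via einsum: indices (a,b,c),(a',b',c')
r6 = rho.reshape(2, 2, 2, 2, 2, 2)
rho_AB = np.einsum('abcdec->abde', r6).reshape(4, 4)   # trace out qubit C
rho_AC = np.einsum('abcdbf->acdf', r6).reshape(4, 4)   # trace out qubit B
rho_A = np.einsum('abcdbc->ad', r6)                    # trace out B and C

C_AB = concurrence(rho_AB)
C_AC = concurrence(rho_AC)
tangle = 4 * np.real(np.linalg.det(rho_A))             # tau(A|BC) for a pure state

print(C_AB, C_AC)                      # both 2/3 up to numerical error
print(C_AB**2 + C_AC**2, tangle)       # both 8/9: CKW holds with equality
```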

Quantum simulators, machines that can replicate the dynamics of quantum systems, are being built as useful devices and are seen as a stepping stone to universal quantum computers. A key difference between the two is that computers have the ability to perform the logic gates that make up algorithms. We propose a method for learning how to construct these gates efficiently by using the simulator to perform optimal control on itself. This bypasses two major problems of purely classical approaches to the control problem: the need to have an accurate model of the system, and a classical computer more powerful than the quantum one to carry out the required simulations. Strong evidence that the scheme scales polynomially in the number of qubits, for systems of up to 9 qubits with Ising interactions, is presented from numerical simulations carried out in different topologies. This suggests that this in situ approach is a practical way of upgrading quantum simulators to computers.

Quantum 2, 80 (2018). https://doi.org/10.22331/q-2018-08-08-80

Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away; we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.

Quantum 2, 79 (2018). https://doi.org/10.22331/q-2018-08-06-79

Xanadu becomes the first quantum computing startup to support fair open-access publishing in quantum science by supporting Quantum with a small yearly donation that will contribute to cover the operating costs of the journal.

Xanadu is thereby joining IQOQI Vienna, CQT Singapore, and the Stanhill Foundation, who are also helping to ensure that Quantum can keep offering an unconditional waiver of the standard 200€ article processing charge, to make publishing in Quantum affordable for everyone. We strongly believe that there should never be financial barriers to publishing great scientific results and would like to wholeheartedly thank Xanadu for their commitment!

Xanadu is a quantum technologies company powered by light. Xanadu designs and integrates quantum silicon photonic chips into existing hardware to create truly full-stack quantum computing.
