Motivated by recent work showing that a quantum error correcting code can be generated by hybrid dynamics of unitaries and measurements, we study the long-time behavior of such systems. We demonstrate that even in the ``mixed'' phase, a maximally mixed initial density matrix is purified on a time scale equal to the Hilbert space dimension (i.e., exponential in system size), albeit with noisy dynamics at intermediate times, which we connect to Dyson Brownian motion. In contrast, we show that free fermion systems, i.e., ones where the unitaries are generated by quadratic Hamiltonians and the measurements are of fermion bilinears, purify in a time quadratic in the system size. In particular, a volume law phase for the entanglement entropy cannot be sustained in a free fermion system.
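
As a rough illustration of measurement-induced purification (a toy sketch, not the circuit model studied in the paper), the following simulates a few qubits under Haar-random two-qubit unitaries interleaved with projective $Z$ measurements and tracks the purity of an initially maximally mixed state; the sizes and measurement rate are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                      # number of qubits (toy size)
dim = 2 ** n
rho = np.eye(dim) / dim    # maximally mixed initial state

def haar_unitary(d, rng):
    # Haar-random unitary via QR decomposition of a Ginibre matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def apply_on_pair(rho, u2, i):
    # embed a 2-qubit unitary acting on neighboring qubits (i, i+1)
    full = np.kron(np.kron(np.eye(2 ** i), u2), np.eye(2 ** (n - i - 2)))
    return full @ rho @ full.conj().T

def measure_z(rho, i):
    # projective Z measurement of qubit i, outcome sampled via the Born rule
    p0 = np.kron(np.kron(np.eye(2 ** i), np.diag([1.0, 0.0])),
                 np.eye(2 ** (n - i - 1)))
    p1 = np.eye(dim) - p0
    prob0 = np.trace(p0 @ rho).real
    proj = p0 if rng.random() < prob0 else p1
    new = proj @ rho @ proj
    return new / np.trace(new).real

purity0 = np.trace(rho @ rho).real
for _ in range(200):
    rho = apply_on_pair(rho, haar_unitary(4, rng), rng.integers(n - 1))
    if rng.random() < 0.3:             # illustrative measurement rate
        rho = measure_z(rho, rng.integers(n))
purity = np.trace(rho @ rho).real
print(purity0, purity)   # purity grows from 1/8 toward 1
```

Unitaries leave the purity unchanged; only the measurements drive it upward, which is the mechanism behind the purification time scales discussed in the abstract.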

There are many different types of timekeeping devices. We use the phrase $\textit{ticking clock}$ to describe those which – simply put – ``tick'' at approximately regular intervals. Various important results have been derived for ticking clocks, and more are in the pipeline. It is thus important to understand the underlying models on which these results are founded. The aim of this paper is to introduce a new ticking clock model from axiomatic principles that overcomes concerns in the community about the physicality of the assumptions made in previous models. The ticking clock model in [1] achieves high accuracy, yet lacks the autonomy of the less accurate model in [2]. Importantly, the model we introduce here achieves the best of both: it retains the autonomy of [2] while allowing for the high accuracies of [1]. What is more, [2] is revealed to be a special case of the new ticking clock model.

We present a simple but general framework for constructing quantum circuits that implement the multiply-controlled unitary $\text{Select}(H) := \sum_\ell |\ell\rangle\langle\ell|\otimes H_\ell$, where $H = \sum_\ell H_\ell$ is the Jordan-Wigner transform of an arbitrary second-quantised fermionic Hamiltonian. $\text{Select}(H)$ is one of the main subroutines of several quantum algorithms, including state-of-the-art techniques for Hamiltonian simulation. If each term in the second-quantised Hamiltonian involves at most $k$ spin-orbitals and $k$ is a constant independent of the total number of spin-orbitals $n$ (as is the case for the majority of quantum chemistry and condensed matter models considered in the literature, for which $k$ is typically 2 or 4), our implementation of $\text{Select}(H)$ requires no ancilla qubits and uses $\mathcal{O}(n)$ Clifford+T gates, with the Clifford gates applied in $\mathcal{O}(\log^2 n)$ layers and the $T$ gates in $\mathcal{O}(\log n)$ layers. This achieves an exponential improvement in both Clifford- and T-depth over previous work, while maintaining linear gate count and reducing the number of ancillae to zero.
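
To make the Select operator concrete, here is a minimal numerical sketch with an illustrative two-term Hamiltonian. Under one common sign convention, the Jordan-Wigner image of the hopping term $a_0^\dagger a_1 + \mathrm{h.c.}$ is $(X_0X_1 + Y_0Y_1)/2$; the bare Pauli strings serve as the $H_\ell$, with the coefficients deferred to a separate Prepare routine (as is standard in this family of algorithms):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Illustrative two-term Hamiltonian from the Jordan-Wigner image of a
# hopping term; Select acts with the bare Pauli strings, the 1/2
# coefficients being absorbed by a separate Prepare routine.
H0 = np.kron(X, X)
H1 = np.kron(Y, Y)
terms = [H0, H1]
L = len(terms)

# Select(H) = sum_l |l><l| (x) H_l, block-diagonal over the control register
select = np.zeros((L * 4, L * 4), dtype=complex)
for l, Hl in enumerate(terms):
    ctrl = np.zeros((L, L))
    ctrl[l, l] = 1.0
    select += np.kron(ctrl, Hl)

# each H_l is a Pauli string, hence unitary, so Select is unitary as well
assert np.allclose(select.conj().T @ select, np.eye(L * 4))
```

The block-diagonal structure is exactly what the circuit constructions in the paper realise with controlled Pauli strings.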

Quantum backflow is usually understood as a quantum interference phenomenon in which the probability current of a quantum particle points in the direction opposite to the particle's momentum. Here, we quantify the amount of quantum backflow for arbitrary momentum distributions, paving the way towards its experimental verification. We give examples of backflow in gravitational and harmonic potentials, and discuss the experimental procedures required for a demonstration using atomic gravimeters. Such an experiment would show that the probability of finding a free-falling particle above its initial level can grow for a suitably prepared quantum state with most of its momentum directed downwards.
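
A standard two-plane-wave example (illustrative, not the specific states considered in this work) shows how a state built entirely from positive momenta can nevertheless carry negative probability current in some regions:

```python
import numpy as np

# Two plane waves with strictly positive momenta k1 = 1, k2 = 4 (hbar = m = 1);
# the relative amplitude b is chosen so the current dips negative:
# psi(x) = e^{i k1 x} + b e^{i k2 x}
k1, k2, b = 1.0, 4.0, 0.625
x = np.linspace(0, 2 * np.pi, 2001)
psi = np.exp(1j * k1 * x) + b * np.exp(1j * k2 * x)
dpsi = 1j * k1 * np.exp(1j * k1 * x) + 1j * k2 * b * np.exp(1j * k2 * x)
current = np.imag(np.conj(psi) * dpsi)   # j = Im(psi* psi') for hbar/m = 1
print(current.min())   # minimum is -9/16, negative despite positive momenta
```

Analytically, $j(x) = k_1 + k_2 b^2 + (k_1+k_2)\,b\cos\big((k_2-k_1)x\big)$, whose minimum $k_1 + k_2 b^2 - (k_1+k_2)b = -9/16$ is negative for these parameters.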

The year 2021 has begun and Plan S, an initiative aimed at pushing open-access publishing that is backed by an international consortium of funding agencies and research organisations called cOAlition S, is coming into effect. In short, Plan S requires that scientific publications that result from research funded (at least partially) by grants of cOAlition S members starting in 2021 must be published in compliant Open Access journals or platforms.

Plan S defines a set of organisational and technical criteria that journals and other platforms must fulfill to be Plan S compliant. The organisational criteria are easily fulfilled by a non-profit, fully open-access journal such as Quantum:

- Full and immediate Open access
- Authors retaining copyright
- Transparent and fair publication fees
- Fee waiver policy

In addition to this, Plan S includes a long list of technical requirements. At Quantum we have been working hard over the past years to make the journal fully compliant with all of them. Some requirements leave room for interpretation, so no definite and final assessment of the compliance of journals is possible while the Plan S journal checker tool is still in beta. Quantum is currently recognized as compliant with Plan S by this tool and is on a very good track to be fully compliant with all technical requirements before the first research funded by 2021 cOAlition S grants is ready for publication. Quantum will aim to fulfill all requirements for all papers published in the upcoming 2021 volume.

The mandatory technical requirements are:

- **Creative Commons license:** Quantum publishes all works under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
- **DOAJ:** Quantum is listed in the Directory of Open Access Journals (DOAJ).
- **Long-term archiving:** Quantum deposits the full text of all publications in the CLOCKSS archive, in addition to being an arXiv overlay journal.
- **DOIs:** Quantum uses Digital Object Identifiers (DOIs) for all its publications and deposits high-quality meta-data into the Crossref database.
- **Editorial and peer review processes and policies:** Quantum has well-defined, publicly available peer-review and editorial policies in place and has been recognized for this by the European Physical Society.
- **License information embedded in published articles:** Quantum embeds machine-readable license information in the PDF documents produced by the arXiv prior to publication.
- **Rationale for publication fees:** Quantum practices open accounting, provides an extensive rationale for its article processing charges, and is by legal construction not allowed to run sustained profits.
- **Funding information in meta-data:** Journals must deposit the name of the funder and the grant number/identifier as part of the meta-data.

With the exception of the last point, Quantum is already fully compliant today.

To collect funder information from our authors and submit it as part of our meta-data to Crossref, Quantum will roll out, in the first quarter of 2021, a new online form through which authors will be able to submit their final manuscript for publication. We will retroactively add such meta-data for all works published after January 1st, 2021.

With this, Quantum should be fully compliant with all Plan S requirements. We will continue to monitor changes in the Plan S requirements and ensure that Quantum remains an attractive publishing venue for cOAlition S funded authors.

A sequential effect algebra (SEA) is an effect algebra equipped with a $\textit{sequential product}$ operation modeled after the Lüders product $(a,b)\mapsto \sqrt{a}b\sqrt{a}$ on C$^*$-algebras. A SEA is called $\textit{normal}$ when it has all suprema of directed sets, and the sequential product interacts suitably with these suprema. The effects on a Hilbert space and the unit interval of a von Neumann or JBW algebra are examples of normal SEAs that are in addition $\textit{convex}$, i.e. possess a suitable action of the real unit interval on the algebra. Complete Boolean algebras form normal SEAs too, which are convex only when $0=1$. We show that any normal SEA $E$ splits as a direct sum $E= E_b\oplus E_c \oplus E_{ac}$ of a complete Boolean algebra $E_b$, a convex normal SEA $E_c$, and a newly identified type of normal SEA $E_{ac}$ we dub $\textit{purely almost-convex}$. Along the way we show, among other things, that a SEA which contains only idempotents must be a Boolean algebra; and we establish a spectral theorem with which we settle, for the class of normal SEAs, a problem of Gudder regarding the uniqueness of square roots. After establishing our main result, we propose a simple extra axiom for normal SEAs that excludes the seemingly pathological a-convex SEAs. We conclude the paper with a study of SEAs with an associative sequential product. We find that associativity forces normal SEAs satisfying our new axiom to be commutative, shedding light on the question of why the sequential product in quantum theory should be non-associative.
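
One can check numerically the basic fact that the Lüders product sends effects to effects: if $0 \le a \le 1$ and $0 \le b \le 1$, then $0 \le \sqrt{a}\,b\,\sqrt{a} \le a \le 1$. The following sketch with random effects is purely illustrative and not part of the paper's formalism:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_effect(d, rng):
    # random Hermitian matrix with spectrum rescaled into [0, 1]
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    h = (g + g.conj().T) / 2
    w, v = np.linalg.eigh(h)
    w = (w - w.min()) / (w.max() - w.min())
    return (v * w) @ v.conj().T

def sqrtm_psd(a):
    # principal square root of a positive semidefinite matrix
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

d = 4
a, b = random_effect(d, rng), random_effect(d, rng)
ra = sqrtm_psd(a)
luders = ra @ b @ ra          # sequential product: sqrt(a) b sqrt(a)
w = np.linalg.eigvalsh(luders)
# the result is again an effect: spectrum contained in [0, 1]
assert w.min() >= -1e-10 and w.max() <= 1 + 1e-10
```

The upper bound follows since $\sqrt{a}\,b\,\sqrt{a} \le \sqrt{a}\,\mathbb{1}\,\sqrt{a} = a \le \mathbb{1}$.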

Quantum refrigerators pump heat from a cold to a hot reservoir. In the few-particle regime, counter-diabatic (CD) driving of originally adiabatic work-exchange strokes is a promising candidate to overcome the bottleneck of vanishing cooling power. Here, we present a finite-time many-body quantum refrigerator that yields finite cooling power at a high coefficient of performance and considerably outperforms its non-adiabatic counterpart. We employ multi-spin CD driving and numerically investigate the scaling behavior of the refrigeration performance with system size. We further prove that optimal refrigeration via the exact CD protocol is a catalytic process.

Photons offer the potential to carry large amounts of information in their spectral, spatial, and polarisation degrees of freedom. While state-of-the-art classical communication systems routinely aim to maximize this information-carrying capacity via wavelength and spatial-mode division multiplexing, quantum systems based on multi-mode entanglement usually suffer from low state quality, long measurement times, and limited encoding capacity. At the same time, entanglement certification methods often rely on assumptions that compromise security. Here we demonstrate the certification of photonic high-dimensional entanglement in the transverse position-momentum degree of freedom with a record quality, measurement speed, and entanglement dimensionality, without making any assumptions about the state or channels. Using a tailored macro-pixel basis, precise spatial-mode measurements, and a modified entanglement witness, we demonstrate state fidelities of up to 94.4% in a 19-dimensional state-space, entanglement in up to 55 local dimensions, and an entanglement-of-formation of up to 4 ebits. Furthermore, our measurement times show an improvement of more than two orders of magnitude over previous state-of-the-art demonstrations. Our results pave the way for noise-robust quantum networks that saturate the information-carrying capacity of single photons.

[video width="640" height="384" mp4="https://quantum-journal.org/wp-content/uploads/2020/12/VID-20201224-WA0003.mp4"][/video]

The minimal-coupling quantum heat engine is a thermal machine consisting of an explicit energy storage system, heat baths, and a working body, which alternately couples to these subsystems through discrete strokes: energy-conserving two-body quantum operations. Within this paradigm, we present a general framework of quantum thermodynamics, where a work extraction process is fundamentally limited by a flow of non-passive energy (ergotropy), while energy dissipation is expressed through a flow of passive energy. It turns out that the small dimensionality of the working body and the restriction to two-body operations make the engine fundamentally irreversible. Our main result is finding the optimal efficiency and work production per cycle within the whole class of irreversible minimal-coupling engines composed of three strokes and with a two-level working body, where we take into account all possible quantum correlations between the working body and the battery. One of the key new tools is the introduced ``control-marginal state'', which acts only on the working body Hilbert space but encapsulates all features regarding work extraction of the total working body and battery system. In addition, we propose a generalization to many-stroke engines, and we analyze efficiency vs. extracted work trade-offs, as well as work fluctuations after many cycles of engine operation.
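
The ergotropy (non-passive energy) mentioned above can be computed for any finite-dimensional state by comparing against the corresponding passive state, whose populations are sorted decreasingly against increasing energy levels. A minimal sketch of this textbook quantity, not of the engine model itself:

```python
import numpy as np

def ergotropy(rho, H):
    # ergotropy = tr(rho H) - min_U tr(U rho U^dag H); the minimum is
    # attained by the passive state: populations sorted in decreasing
    # order against the increasing energy eigenvalues
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]   # populations, descending
    e = np.sort(np.linalg.eigvalsh(H))           # energies, ascending
    passive_energy = float(p @ e)
    return float(np.trace(rho @ H).real) - passive_energy

H = np.diag([0.0, 1.0])              # qubit Hamiltonian
rho = np.diag([0.3, 0.7])            # population-inverted state
print(ergotropy(rho, H))             # 0.7 - 0.3 = 0.4 extractable as work
```

A passive state (e.g. `np.diag([0.7, 0.3])` for this Hamiltonian) has zero ergotropy: no work can be extracted from it by unitaries alone.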

We show that there exist non-relativistic scattering experiments which, if successful, freeze out, speed up or even reverse the free dynamics of any ensemble of quantum systems present in the scattering region. This ``time translation'' effect is universal, i.e., it is independent of the particular interaction between the scattering particles and the target systems, or the (possibly non-Hermitian) Hamiltonian governing the evolution of the latter. The protocols require careful preparation of the probes which are scattered, and success is heralded by projective measurements of these probes at the conclusion of the experiment. We fully characterize the possible time translations which we can effect on multiple target systems through a scattering protocol of fixed duration. The core results are: a) when the target is a single system, we can translate it backwards in time for an amount proportional to the experimental runtime; b) when $n$ targets are present in the scattering region, we can make a single system evolve $n$ times faster (backwards or forwards), at the cost of keeping the remaining $n-1$ systems stationary in time. For high $n$ our protocols therefore allow one to map, in short experimental time, a system to the state it would have reached with a very long unperturbed evolution in either positive or negative time.

The variational principle of quantum mechanics is the backbone of hybrid quantum computing for a range of applications. However, as the problem size grows, quantum logic errors and the effect of barren plateaus overwhelm the quality of the results. There is now a clear focus on strategies that require fewer quantum circuit steps and are robust to device errors. Here we present an approach in which problem complexity is transferred to dynamic quantities computed on the quantum processor – Hamiltonian moments, $\langle H^n\rangle$. From these quantum computed moments, an estimate of the ground-state energy can be obtained using the ``infimum'' theorem from Lanczos cumulant expansions which manifestly corrects the associated variational calculation. With higher order effects in Hilbert space generated via the moments, the burden on the trial-state quantum circuit depth is eased. The method is introduced and demonstrated on 2D quantum magnetism models on lattices up to $5\times 5$ (25 qubits) implemented on IBM Quantum superconducting qubit devices. Moments were quantum computed to fourth order with respect to a parameterised antiferromagnetic trial-state. A comprehensive comparison with benchmark variational calculations was performed, including over an ensemble of random coupling instances. The results showed that the infimum estimate consistently outperformed the benchmark variational approach for the same trial-state. These initial investigations suggest that the quantum computed moments approach has a high degree of stability against trial-state variation, quantum gate errors and shot noise, all of which bodes well for further investigation and applications of the approach.
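
The moments-to-infimum pipeline can be illustrated classically on a toy model. Below, the cumulants and the infimum estimate are written in the Hollenberg-Witte form used in the Lanczos-expansion literature (an assumption about the exact formula intended by the abstract); for this effectively two-level trial state the corrected estimate happens to be exact:

```python
import numpy as np

# two-qubit Heisenberg model H = XX + YY + ZZ; the singlet ground energy is -3
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)

psi = np.zeros(4, dtype=complex)
psi[1] = 1.0                             # antiferromagnetic trial state |01>

# moments <H^n>, here evaluated exactly up to fourth order
m1, m2, m3, m4 = (np.real(psi.conj() @ np.linalg.matrix_power(H, k) @ psi)
                  for k in range(1, 5))
# cumulants from the moments
c1 = m1
c2 = m2 - m1**2
c3 = m3 - 3 * m2 * m1 + 2 * m1**3
c4 = m4 - 4 * m3 * m1 - 3 * m2**2 + 12 * m2 * m1**2 - 6 * m1**4
# infimum estimate (Hollenberg-Witte form), correcting the variational value c1
e_inf = c1 - c2**2 / (c3**2 - c2 * c4) * (np.sqrt(3 * c3**2 - 2 * c2 * c4) - c3)
print(c1, e_inf)   # variational estimate -1.0 vs corrected estimate -3.0
```

Here the trial state lives in the two-dimensional invariant subspace spanned by $|01\rangle, |10\rangle$, for which the infimum formula recovers the exact ground energy; in general it is a (non-variational) improved estimate.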

Preparing the ground state of a given Hamiltonian and estimating its ground energy are important but computationally hard tasks. However, given some additional information, these problems can be solved efficiently on a quantum computer. We assume that an initial state with non-trivial overlap with the ground state can be efficiently prepared, and that the spectral gap between the ground energy and the first excited energy is bounded from below. With these assumptions we design an algorithm that prepares the ground state when an upper bound on the ground energy is known, whose runtime has a logarithmic dependence on the inverse error. When such an upper bound is not known, we propose a hybrid quantum-classical algorithm to estimate the ground energy, where the dependence of the number of queries to the initial state on the desired precision is exponentially improved compared to the current state-of-the-art algorithm proposed in [Ge et al. 2019]. These two algorithms can then be combined to prepare a ground state without knowing an upper bound on the ground energy. We also prove that our algorithms match the complexity lower bounds by applying them to the unstructured search problem and the quantum approximate counting problem.
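
The role of the known upper bound $\mu$ on the ground energy can be emulated classically: applying a spectral filter that decays above $\mu$ amplifies the ground-state overlap of the initial state. On a quantum computer such filtering is implemented with far more sophisticated circuit techniques; the exponential filter and parameters below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# random symmetric Hamiltonian; ground state and gap via exact diagonalization
d = 32
A = rng.normal(size=(d, d))
H = (A + A.T) / 2
w, v = np.linalg.eigh(H)
ground = v[:, 0]

# initial state with a modest, non-trivial ground-state overlap gamma
gamma = 0.3
rest = rng.normal(size=d)
rest -= (ground @ rest) * ground       # orthogonalize against the ground state
rest /= np.linalg.norm(rest)
psi = gamma * ground + np.sqrt(1 - gamma**2) * rest

# filter decaying above mu, a value between the ground and first excited
# energies (standing in for the assumed upper bound on the ground energy)
mu = (w[0] + w[1]) / 2
t = 50.0
filt = (v * np.exp(-t * (w - mu))) @ v.T
phi = filt @ psi
phi /= np.linalg.norm(phi)
overlap = abs(float(ground @ phi))
print(gamma, overlap)   # overlap is amplified toward 1
```

Since the filter multiplies the ground component by a factor larger than every excited factor, the normalized overlap strictly increases, with the amplification rate set by the spectral gap.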

The Prime state of $n$ qubits, ${|\mathbb{P}_n{\rangle}}$, is defined as the uniform superposition of all the computational-basis states corresponding to prime numbers smaller than $2^n$. This state encodes, quantum mechanically, arithmetic properties of the primes. We first show that the Quantum Fourier Transform of the Prime state provides direct access to Chebyshev-like biases in the distribution of prime numbers. We next study the entanglement entropy of ${|\mathbb{P}_n{\rangle}}$ up to $n=30$ qubits, and find a relation between its scaling and the Shannon entropy of the density of square-free integers. This relation also holds when the Prime state is constructed using a qudit basis, showing that this property is intrinsic to the distribution of primes. The same feature is found when considering states built from the superposition of primes in arithmetic progressions. Finally, we explore the properties of other number-theoretical quantum states, such as those defined from odd composite numbers, square-free integers and starry primes. For this study, we have developed an open-source library that diagonalizes matrices using floats of arbitrary precision.
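
A minimal construction of the Prime state and its half-chain entanglement entropy; the sieve, the size $n=6$, and the bipartition are illustrative choices (the paper goes up to $n=30$ with arbitrary-precision arithmetic):

```python
import numpy as np

def primes_below(N):
    # simple sieve of Eratosthenes
    sieve = np.ones(N, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.flatnonzero(sieve)

n = 6
N = 2 ** n
ps = primes_below(N)                    # primes smaller than 2^n
state = np.zeros(N)
state[ps] = 1.0 / np.sqrt(len(ps))      # |P_n> as a uniform superposition

# half-chain entanglement entropy from the Schmidt (singular) values
s = np.linalg.svd(state.reshape(2 ** (n // 2), 2 ** (n - n // 2)),
                  compute_uv=False)
p2 = s[s > 1e-12] ** 2
entropy = float(-np.sum(p2 * np.log2(p2)))
print(len(ps), entropy)   # 18 primes below 64; entropy in bits
```

For $n=6$ there are $\pi(64)=18$ primes, and the half-chain entropy is bounded by $n/2 = 3$ bits.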

The teleportation model of quantum computation introduced by Gottesman and Chuang (1999) motivated the development of the Clifford hierarchy. Despite its intrinsic value for quantum computing, the widespread use of magic state distillation, which is closely related to this model, emphasizes the importance of comprehending the hierarchy. There is currently a limited understanding of the structure of this hierarchy, apart from the case of diagonal unitaries (Cui et al., 2017; Rengaswamy et al. 2019). We explore the structure of the second and third levels of the hierarchy, the first level being the ubiquitous Pauli group, via the Weyl (i.e., Pauli) expansion of unitaries at these levels. In particular, we characterize the support of the standard Clifford operations on the Pauli group. Since conjugation of a Pauli by a third level unitary produces traceless Hermitian Cliffords, we characterize their Pauli support as well. Semi-Clifford unitaries are known to have ancilla savings in the teleportation model, and we explore their Pauli support via symplectic transvections. Finally, we show that, up to multiplication by a Clifford, every third level unitary commutes with at least one Pauli matrix. This can be used inductively to show that, up to a multiplication by a Clifford, every third level unitary is supported on a maximal commutative subgroup of the Pauli group. Additionally, it can be easily seen that the latter implies the generalized semi-Clifford conjecture, proven by Beigi and Shor (2010). We discuss potential applications in quantum error correction and the design of flag gadgets.
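
A single-qubit instance of the statement that conjugating a Pauli by a third-level unitary yields a traceless Hermitian Clifford: with $U = T$, one finds $TXT^\dagger = (X+Y)/\sqrt{2}$, which the sketch below verifies numerically:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
T = np.diag([1.0 + 0j, np.exp(1j * np.pi / 4)])

def proportional_to_pauli(U):
    # a 2x2 unitary equals a Pauli (or the identity) up to a global phase
    # iff |tr(P^dag U)| = 2 for one of the four candidates
    return any(np.isclose(abs(np.trace(P.conj().T @ U)), 2)
               for P in (I2, X, Y, Z))

def is_clifford_1q(U):
    # Clifford iff conjugation sends the Pauli generators X, Z to Paulis
    return all(proportional_to_pauli(U @ P @ U.conj().T) for P in (X, Z))

C = T @ X @ T.conj().T     # equals (X + Y)/sqrt(2)
assert np.allclose(C, C.conj().T)      # Hermitian
assert np.isclose(np.trace(C), 0)      # traceless
assert is_clifford_1q(C)               # Clifford ...
assert not proportional_to_pauli(C)    # ... but not itself a Pauli
```

Indeed $C\,Z\,C^\dagger = -Z$ and $C\,X\,C^\dagger = Y$, so $C$ normalizes the Pauli group without belonging to it.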

]]>In this note we present explicit canonical forms for all the elements in the two-qubit CNOT-Dihedral group, with minimal numbers of controlled-$S$ ($CS$) and controlled-$X$ ($CX$) gates, using the generating set of quantum gates $[X, T, CX, CS]$. We provide an algorithm to successively construct the $n$-qubit CNOT-Dihedral group, ensuring an optimal number of $CX$ gates. These results are needed to estimate gate errors via non-Clifford randomized benchmarking and may have further applications to circuit optimization over fault-tolerant gate sets.
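For orientation, the two controlled gates named in the abstract can be written out and their basic relations checked numerically. This sketch uses plain Python lists as $4\times4$ matrices and only illustrates the gate definitions, not the canonical forms themselves:

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    """k-fold matrix power via repeated multiplication."""
    n = len(A)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = matmul(R, A)
    return R

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
# CS: controlled-S, puts a phase of i on |11>
CS = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1j]]
# CX: controlled-X, flips the target when the control is |1>
CX = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(4) for j in range(4))

cs4_is_identity = close(mat_pow(CS, 4), I4)  # CS has order 4
cx2_is_identity = close(mat_pow(CX, 2), I4)  # CX is an involution
```

The two generators do not commute, which is why minimizing $CS$ and $CX$ counts in a canonical form is non-trivial.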

]]>Discretizing spacetime is often a natural step towards modelling physical systems. For quantum systems, if we also demand a strict bound on the speed of information propagation, we get quantum cellular automata (QCAs). These originally arose as an alternative paradigm for quantum computation, though more recently they have found application in understanding topological phases of matter and have been proposed as models of periodically driven (Floquet) quantum systems, where QCA methods were used to classify their phases. QCAs have also been used as a natural discretization of quantum field theory, and some interesting examples of QCAs have been introduced that become interacting quantum field theories in the continuum limit. This review discusses all of these applications, as well as some other interesting results on the structure of quantum cellular automata, including the tensor-network unitary approach, the index theory and higher dimensional classifications of QCAs.

]]>A fully relational quantum theory necessarily requires an account of changes of quantum reference frames, where quantum reference frames are quantum systems relative to which other systems are described. By introducing a relational formalism which identifies coordinate systems with elements of a symmetry group $G$, we define a general operator for reversibly changing between quantum reference frames associated to a group $G$. This generalises the known operator for translations and boosts to arbitrary finite and locally compact groups, including non-Abelian groups. We show under which conditions one can uniquely assign coordinate choices to physical systems (to form reference frames) and how to reversibly transform between them, providing transformations between coordinate systems which are `in a superposition' of other coordinate systems. We obtain the change of quantum reference frame from the principles of relational physics and of coherent change of reference frame. We prove a theorem stating that the change of quantum reference frame consistent with these principles is unitary if and only if the reference systems carry the left and right regular representations of $G$. We also define irreversible changes of reference frame for classical and quantum systems in the case where the symmetry group $G$ is a semi-direct product $G = N \rtimes P$ or a direct product $G = N \times P$, providing multiple examples of both reversible and irreversible changes of quantum reference system along the way. Finally, we apply the relational formalism and changes of reference frame developed in this work to the Wigner's friend scenario, finding similar conclusions to those in relational quantum mechanics using an explicit change of reference frame as opposed to indirect reasoning using measurement operators.
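The role of the left and right regular representations can be illustrated on a small non-Abelian group: for any finite group, the two regular actions commute with each other, which is the structural fact behind the unitarity theorem. A minimal sketch for $S_3$ (the encoding of the group below is our own):

```python
from itertools import permutations

# Elements of S3 as tuples; composition (p.q)(i) = p[q[i]]
G = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

idx = {g: i for i, g in enumerate(G)}

def left_action(g):
    # Left regular representation: h -> g*h, as a permutation of G
    return [idx[compose(g, h)] for h in G]

def right_action(g):
    # Right regular representation: h -> h*g^{-1}
    return [idx[compose(h, inverse(g))] for h in G]

def act(perm_a, perm_b):
    # Compose two permutations of the six group elements (b first, then a)
    return [perm_a[perm_b[i]] for i in range(len(perm_b))]

# Left and right regular actions commute for every pair (g, h),
# even though S3 itself is non-Abelian.
commute = all(
    act(left_action(g), right_action(h)) == act(right_action(h), left_action(g))
    for g in G for h in G
)
```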

]]>Parametrized quantum optical circuits are a class of quantum circuits in which the carriers of quantum information are photons and the gates are optical transformations. Classically optimizing these circuits is challenging due to the infinite dimensionality of the photon number vector space that is associated to each optical mode. Truncating the space dimension is unavoidable, and it can lead to incorrect results if the gates populate photon number states beyond the cutoff. To tackle this issue, we present an algorithm that is orders of magnitude faster than the current state of the art, to recursively compute the exact matrix elements of Gaussian operators and their gradient with respect to a parametrization. These operators, when augmented with a non-Gaussian transformation such as the Kerr gate, achieve universal quantum computation. Our approach brings two advantages: first, by computing the matrix elements of Gaussian operators directly, we don't need to construct them by combining several other operators; second, we can use any variant of the gradient descent algorithm by plugging our gradients into an automatic differentiation framework such as TensorFlow or PyTorch. Our results will find applications in quantum optical hardware research, quantum machine learning, optical data processing, device discovery and device design.
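The truncation issue the abstract mentions can be seen already for a coherent state, whose Fock-basis amplitudes obey a simple one-term recursion. This is only a stand-in for the paper's general recursion for Gaussian operators; the value of $\alpha$ and the cutoffs below are arbitrary:

```python
from math import exp, sqrt

def coherent_amplitudes(alpha, cutoff):
    """Fock amplitudes of |alpha>, via the recursion
    a_n = a_{n-1} * alpha / sqrt(n), with a_0 = exp(-|alpha|^2 / 2)."""
    amps = [exp(-abs(alpha) ** 2 / 2)]
    for n in range(1, cutoff):
        amps.append(amps[-1] * alpha / sqrt(n))
    return amps

alpha = 2.0  # mean photon number |alpha|^2 = 4
# A generous cutoff captures essentially all of the norm...
norm_tight = sum(abs(a) ** 2 for a in coherent_amplitudes(alpha, 30))
# ...while a cutoff below the populated photon numbers loses a large fraction.
norm_small = sum(abs(a) ** 2 for a in coherent_amplitudes(alpha, 5))
```

The loss of norm at a too-small cutoff is exactly the "incorrect results" failure mode that motivates computing exact matrix elements directly.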

]]>The de Broglie-Bohm theory is a hidden-variable interpretation of quantum mechanics which involves particles moving through space along deterministic trajectories. This theory singles out position as the primary ontological variable. Mathematically, it is possible to construct a similar theory where particles are moving through momentum-space, and momentum is singled out as the primary ontological variable. In this paper, we construct the putative particle trajectories for a two-slit experiment in both the position and momentum-space theories by simulating particle dynamics with coherent light. Using a method for constructing trajectories in the primary and non-primary spaces, we compare the phase-space dynamics offered by the two theories and show that they do not agree. This contradictory behaviour underscores the difficulty of selecting one picture of reality from the infinite number of possibilities offered by Bohm-like theories.

]]>As increasingly impressive quantum information processors are realized in laboratories around the world, robust and reliable characterization of these devices is now more urgent than ever. These diagnostics can take many forms, but one of the most popular categories is $\textit{tomography}$, where an underlying parameterized model is proposed for a device and inferred by experiments. Here, we introduce and implement efficient operational tomography, which uses experimental observables as these model parameters. This addresses a problem of ambiguity in representation that arises in current tomographic approaches (the $\textit{gauge problem}$). Solving the gauge problem enables an efficient computational implementation of operational tomography in a Bayesian framework, which gives us a natural way to include prior information and discuss uncertainty in fit parameters. We demonstrate this new tomography in a variety of different experimentally-relevant scenarios, including standard process tomography, Ramsey interferometry, randomized benchmarking, and gate set tomography.

]]>Any measurement is intended to provide $information$ on a system, namely knowledge about its state. However, we learn from quantum theory that it is generally impossible to extract information without disturbing the state of the system or its correlations with other systems. In this paper we address the issue of the interplay between information and disturbance for a general operational probabilistic theory. The traditional notion of disturbance considers the fate of the system state after the measurement. However, the fact that the system state is left untouched guarantees that correlations are also preserved only in the presence of local discriminability. Here we provide the definition of disturbance that is appropriate for a general theory. Moreover, since in a theory without causality information can be gathered also on the effect, we generalise the notion of no-information test. We then prove an equivalent condition for no-information without disturbance---$\textit{atomicity of the identity}$---namely the impossibility of achieving the trivial evolution---the $identity$---as the coarse-graining of a set of non-trivial ones. We prove a general theorem showing that information that can be retrieved without disturbance corresponds to perfectly repeatable and discriminating tests. Based on this, we prove a structure theorem for operational probabilistic theories, showing that the set of states of any system decomposes as a direct sum of perfectly discriminable sets, and such decomposition is preserved under system composition. As a consequence, a theory is such that any information can be extracted without disturbance only if all its systems are classical. Finally, we show via concrete examples that no-information without disturbance is independent of both local discriminability and purification.

]]>We propose a very large family of benchmarks for probing the performance of quantum computers. We call them $\textit{volumetric benchmarks}$ (VBs) because they generalize IBM's benchmark for measuring quantum volume [1]. The quantum volume benchmark defines a family of $\textit{square}$ circuits whose depth $d$ and width $w$ are the same. A volumetric benchmark defines a family of $\textit{rectangular}$ quantum circuits, for which $d$ and $w$ are uncoupled to allow the study of time/space performance trade-offs. Each VB defines a mapping from circuit shapes — $(w,d)$ pairs — to test suites $\mathcal{C}(w,d)$. A test suite is an ensemble of test circuits that share a common structure. The test suite $\mathcal{C}$ for a given circuit shape may be a single circuit $C$, a specific list of circuits $\{C_1\ldots C_N\}$ that must all be run, or a large set of possible circuits equipped with a distribution $Pr(C)$. The circuits in a given VB share a structure, which is limited only by designers' creativity. We list some known benchmarks, and other circuit families, that fit into the VB framework: several families of random circuits, periodic circuits, and algorithm-inspired circuits. The last ingredient defining a benchmark is a success criterion that defines when a processor is judged to have ``passed'' a given test circuit. We discuss several options. Benchmark data can be analyzed in many ways to extract many properties, but we propose a simple, universal graphical summary of results that illustrates the Pareto frontier of the $d$ vs $w$ trade-off for the processor being benchmarked.

]]>We present a quantum eigenstate filtering algorithm based on quantum signal processing (QSP) and minimax polynomials. The algorithm allows us to efficiently prepare a target eigenstate of a given Hamiltonian, if we have access to an initial state with non-trivial overlap with the target eigenstate and have a reasonable lower bound for the spectral gap. We apply this algorithm to the quantum linear system problem (QLSP), and present two algorithms based on quantum adiabatic computing (AQC) and the quantum Zeno effect, respectively. Both algorithms prepare the final solution as a pure state, and achieve the near-optimal $\widetilde{\mathcal{O}}(d\kappa\log(1/\epsilon))$ query complexity for a $d$-sparse matrix, where $\kappa$ is the condition number, and $\epsilon$ is the desired precision. Neither algorithm uses phase estimation or amplitude amplification.
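The filtering idea can be sketched with a toy diagonal Hamiltonian: apply a polynomial that equals 1 at the target eigenvalue 0 and is small on the rest of the spectrum. We use the simple polynomial $(1-x^2)^k$ rather than the paper's minimax construction, which achieves comparable suppression at a much lower degree ($O(\mathrm{gap}^{-1}\log(1/\epsilon))$ instead of $O(\mathrm{gap}^{-2}\log(1/\epsilon))$):

```python
def apply_filter(eigs, amps, k):
    """Apply f(H) with f(x) = (1 - x**2)**k to a state written in the
    eigenbasis, then renormalize. A stand-in for the minimax filter."""
    out = [a * (1 - e * e) ** k for e, a in zip(eigs, amps)]
    norm = sum(x * x for x in out) ** 0.5
    return [x / norm for x in out]

# Diagonal toy Hamiltonian: target eigenvalue 0, spectral gap 0.2.
eigs = [0.0, 0.2, 0.5]
amps = [1 / 3 ** 0.5] * 3           # initial state with 1/sqrt(3) overlap
filtered = apply_filter(eigs, amps, k=100)
overlap = filtered[0]               # overlap with the target eigenstate
```

After filtering, the unwanted components at eigenvalues 0.2 and 0.5 are suppressed by factors $0.96^{100}$ and $0.75^{100}$, so the state is almost entirely the target eigenstate.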

]]>We address the problem of existence of completely positive trace preserving (CPTP) maps between two sets of density matrices. We refine the result of Alberti and Uhlmann and derive a necessary and sufficient condition for the existence of a unital channel between two pairs of qubit states which ultimately boils down to three simple inequalities.

]]>We consider possible non-signaling composites of probabilistic models based on euclidean Jordan algebras (EJAs), satisfying some reasonable additional constraints motivated by the desire to construct dagger-compact categories of such models. We show that no such composite has the exceptional Jordan algebra as a direct summand, nor does any such composite exist if one factor has an exceptional summand, unless the other factor is a direct sum of one-dimensional Jordan algebras (representing essentially a classical system). Moreover, we show that any composite of simple, non-exceptional EJAs is a direct summand of their universal tensor product, sharply limiting the possibilities. These results warrant our focussing on concrete Jordan algebras of hermitian matrices, i.e., euclidean Jordan algebras with a preferred embedding in a complex matrix algebra. We show that these can be organized in a natural way as a symmetric monoidal category, albeit one that is not compact closed. We then construct a related category $\mbox{InvQM}$ of embedded euclidean Jordan algebras, having fewer objects but more morphisms, that is not only compact closed but dagger-compact. This category unifies finite-dimensional real, complex and quaternionic mixed-state quantum mechanics, except that the composite of two complex quantum systems comes with an extra classical bit. Our notion of composite requires neither tomographic locality, nor preservation of purity under tensor product. The categories we construct include examples in which both of these conditions fail. In such cases, the information capacity (the maximum number of mutually distinguishable states) of a composite is greater than the product of the capacities of its constituents.

]]>We investigate quantum error correction using continuous parity measurements to correct bit-flip errors with the three-qubit code. Continuous monitoring of errors brings the benefit of a continuous stream of information, which facilitates passive error tracking in real time. It reduces overhead from the standard gate-based approach that periodically entangles and measures additional ancilla qubits. However, the noisy analog signals from continuous parity measurements mandate more complicated signal processing to interpret syndromes accurately. We analyze the performance of several practical filtering methods for continuous error correction and demonstrate that they are viable alternatives to the standard ancilla-based approach. As an optimal filter, we discuss an unnormalized (linear) Bayesian filter, with improved computational efficiency compared to the related Wonham filter introduced by Mabuchi [New J. Phys. 11, 105044 (2009)]. We compare this optimal continuous filter to two practical variations of the simplest periodic boxcar-averaging-and-thresholding filter, targeting real-time hardware implementations with low-latency circuitry. As variations, we introduce a non-Markovian ``half-boxcar'' filter and a Markovian filter with a second adjustable threshold; these filters eliminate the dominant source of error in the boxcar filter, and compare favorably to the optimal filter. For each filter, we derive analytic results for the decay in average fidelity and verify them with numerical simulations.
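The simplest filter in the family above can be sketched directly: average a window of the noisy measurement record and threshold the result. The signal model below (unit-strength parity plus unit-variance Gaussian noise per sample) is a placeholder, not the calibrated measurement model of the paper:

```python
import random

random.seed(1)

def boxcar_parity(signal, window, threshold=0.0):
    """Boxcar filter: average the last `window` samples and threshold
    the result to estimate the parity sign (+1 or -1)."""
    avg = sum(signal[-window:]) / window
    return +1 if avg > threshold else -1

# Simulated continuous parity measurement record: true parity +1,
# each sample corrupted by independent Gaussian noise.
true_parity = +1
record = [true_parity + random.gauss(0.0, 1.0) for _ in range(200)]

estimate = boxcar_parity(record, window=100)
```

Averaging 100 samples shrinks the noise standard deviation tenfold, so the thresholded estimate recovers the parity reliably; the paper's half-boxcar and two-threshold variants refine exactly this step.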

]]>In ``Playing Pool with $\pi$'' [1], Galperin invented an extraordinary method to learn the digits of $\pi$ by counting the collisions of billiard balls. Here I demonstrate an exact isomorphism between Galperin's bouncing billiards and Grover's algorithm for quantum search. This provides an illuminating way to visualize Grover's algorithm.
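The classical side of the isomorphism is easy to reproduce: simulating Galperin's billiard with mass ratio $100^N$ and counting collisions yields the first $N+1$ digits of $\pi$. A minimal sketch (positions can be ignored, since only the velocities determine the collision sequence):

```python
def count_collisions(N):
    """Count collisions in Galperin's billiard with mass ratio 100**N.
    Wall at the left, light ball (mass 1) at rest, heavy ball incoming
    from the right with unit speed."""
    M, m = float(100 ** N), 1.0
    v_heavy, v_light = -1.0, 0.0
    count = 0
    while True:
        if v_heavy < v_light:
            # Elastic ball-ball collision (both velocities updated at once)
            v_light, v_heavy = (
                ((m - M) * v_light + 2 * M * v_heavy) / (m + M),
                ((M - m) * v_heavy + 2 * m * v_light) / (m + M),
            )
            count += 1
        elif v_light < 0:
            v_light = -v_light  # elastic bounce off the wall
            count += 1
        else:
            break  # both moving right, heavy at least as fast: no more hits
    return count
```

Each collision corresponds to one Grover-like rotation step in the isomorphism, which is what makes the correspondence exact rather than merely suggestive.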

]]>In this paper we propose a technique for distributing entanglement in architectures in which interactions between pairs of qubits are constrained to a fixed network $G$. This allows for two-qubit operations to be performed between qubits which are remote from each other in $G$, through gate teleportation. We demonstrate how adapting $\textit{quantum linear network coding}$ to this problem of entanglement distribution in a network of qubits can be used to solve the problem of distributing Bell states and GHZ states in parallel, when bottlenecks in $G$ would otherwise force such entangled states to be distributed sequentially. In particular, we show that by reduction to classical network coding protocols for the $k$-pairs problem or multiple multicast problem in a fixed network $G$, one can distribute entanglement between the transmitters and receivers with a Clifford circuit whose quantum depth is some (typically small and easily computed) constant, which does not depend on the size of $G$, on how far apart the transmitters and receivers are, or on how many of them there are. These results also generalise straightforwardly to qudits of any prime dimension. We demonstrate our results using a specialised formalism, distinct from and more efficient than the stabiliser formalism, which is likely to be helpful to reason about and prototype such quantum linear network coding circuits.
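The classical protocol underlying the reduction can be seen in the textbook butterfly network for the $2$-pairs problem: the bottleneck edge carries the XOR of the two messages, and each receiver decodes with one further XOR. A minimal sketch (our own encoding):

```python
def butterfly(b1, b2):
    """Classical network coding on the butterfly network.
    Source 1 holds bit b1, source 2 holds bit b2; receiver 1 wants b2
    and receiver 2 wants b1, but the middle edge can carry only one bit."""
    m = b1 ^ b2        # the bottleneck edge carries the XOR of both bits
    r1 = b1 ^ m        # receiver 1 sees b1 directly plus m, recovers b2
    r2 = b2 ^ m        # receiver 2 sees b2 directly plus m, recovers b1
    return r1, r2
```

In the quantum version, the same coding pattern is lifted to a constant-depth Clifford circuit that distributes Bell pairs across the bottleneck in parallel.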

]]>Quantum resource theories (QRTs) provide a unified theoretical framework for understanding inherent quantum-mechanical properties that serve as resources in quantum information processing, but resources motivated by physics may possess structure whose analysis is mathematically intractable, such as non-uniqueness of maximally resourceful states, lack of convexity, and infinite dimension. We investigate state conversion and resource measures in general QRTs under minimal assumptions, in order to identify universal properties of physically motivated quantum resources that may exhibit such intractable structure. In the general setting, we prove the existence of maximally resourceful states in one-shot state conversion. Also analyzing asymptotic state conversion, we discover $\textit{catalytic replication}$ of quantum resources, where a resource state is infinitely replicable by free operations. In QRTs without assuming the uniqueness of maximally resourceful states, we formulate the tasks of distillation and formation of quantum resources, and introduce the distillable resource and resource cost based on the distillation and the formation, respectively. Furthermore, we introduce $\textit{consistent resource measures}$ that quantify the amount of quantum resources without contradicting the rate of state conversion, even in QRTs with non-unique maximally resourceful states. Progressing beyond previous work showing a uniqueness theorem for additive resource measures, we prove the corresponding uniqueness inequality for consistent resource measures; that is, consistent resource measures of a quantum state take values between the distillable resource and the resource cost of the state. These formulations and results establish a foundation for QRTs applicable in a unified way to physically motivated quantum resources whose analysis can be mathematically intractable.

]]>Time in quantum mechanics is peculiar: it is an observable that cannot be associated with a Hermitian operator. As a consequence, it is impossible to describe dynamics in an isolated system without invoking an external classical clock, a fact that becomes particularly problematic in the context of quantum gravity. An unconventional solution was pioneered by Page and Wootters (PaW) in 1983. PaW showed that dynamics can be an emergent property of the entanglement between two subsystems of a static Universe. In this work we first investigate the possibility of introducing into this framework a Hermitian time operator that is the complement of a clock Hamiltonian with an equally-spaced energy spectrum. A Hermitian operator complement of such a Hamiltonian was introduced by Pegg in 1998, who named it "Age". We show here that Age, when introduced in the PaW context, can be interpreted as a proper Hermitian time operator conjugate to a "good" clock Hamiltonian. We then show that, again following Pegg's formalism, it is possible to introduce into the PaW framework bounded clock Hamiltonians with an unequally-spaced energy spectrum whose energy ratios are rational. In this case time is described by a POVM, and we demonstrate that Pegg's POVM states provide a consistent dynamical evolution of the system even though they are not orthogonal, and are therefore only partially distinguishable.

]]>