One of the basic distinctions between classical and quantum mechanics is the existence of fundamentally incompatible quantities. Such quantities are present on all levels of quantum objects: states, measurements, quantum channels, and even higher-order dynamics. In this manuscript, we show that two seemingly different aspects of quantum incompatibility, the quantum marginal problem of states and the incompatibility of quantum channels, are in many-to-one correspondence. Importantly, as incompatibility of measurements is a special case of the latter, it also forms an instance of the quantum marginal problem. The generality of the connection is harnessed by solving the marginal problem for Gaussian and Bell-diagonal states, as well as for pure states under depolarizing noise. Furthermore, we derive entropic criteria for channel compatibility and develop a converging hierarchy of semidefinite programs for quantifying the strength of quantum memories.

A Physical Unclonable Function (PUF) is a device with unique behaviour that is hard to clone, hence providing a secure fingerprint. A variety of PUF structures and PUF-based applications have been explored theoretically as well as implemented in practical settings. Recently, the inherent unclonability of quantum states has been exploited to derive the quantum analogue of PUFs, as well as new proposals for the implementation of PUFs. We present the first comprehensive study of quantum Physical Unclonable Functions (qPUFs) with quantum cryptographic tools. We formally define qPUFs, encapsulating all requirements of classical PUFs as well as introducing a new testability feature inherent to the quantum setting only. We use a quantum game-based framework to define different levels of security for qPUFs: quantum exponential unforgeability, quantum existential unforgeability and quantum selective unforgeability. We introduce a new quantum attack technique based on the universal quantum emulator algorithm of Marvian and Lloyd to prove that no qPUF can provide quantum existential unforgeability. On the other hand, we prove that a large family of qPUFs (called unitary PUFs) can provide quantum selective unforgeability, which is the desired level of security for most PUF-based applications.

At the dawn of quantum physics, Wigner and Weisskopf obtained a full analytical description (a $\textit{photon portrait}$) of the emission of a single photon by a two-level system, using the basis of frequency modes (Weisskopf and Wigner, Zeitschrift für Physik, 63, 1930). A direct experimental reconstruction of this portrait demands an accurate measurement of a time-resolved fluorescence spectrum, with high sensitivity to the off-resonant frequencies and the ultrafast dynamics describing the photon creation. In this work we demonstrate such an experimental technique in a superconducting waveguide Quantum Electrodynamics (wQED) platform, using a single transmon qubit and two coupled transmon qubits as quantum emitters. In both scenarios, the photon portraits agree quantitatively with the predictions of input-output theory and qualitatively with Wigner-Weisskopf theory. We believe that our technique not only allows for an interesting visualization of fundamental principles, but may also serve as a tool, e.g. to realize multi-dimensional spectroscopy in waveguide Quantum Electrodynamics.

Recently, a new class of quantum algorithms based on the quantum computation of the connected moment expansion has been reported for finding ground- and excited-state energies. In particular, the Peeters-Devreese-Soldatov (PDS) formulation is found to be variational and bears the potential for further combination with the existing variational quantum infrastructure. Here we find that the PDS formulation can be considered as a new energy functional, whose energy gradient can be employed in a conventional variational quantum solver. In comparison with the usual variational quantum eigensolver (VQE) and the original static PDS approach, this new variational quantum solver offers an effective way to keep the optimization dynamics from getting trapped in local minima that correspond to different states, and achieves high accuracy in finding the ground state and its energy through the rotation of a trial wave function of modest quality, thus improving the accuracy and efficiency of the quantum simulation. We demonstrate the performance of the proposed variational quantum solver for toy models, the H$_2$ molecule, and the strongly correlated planar H$_4$ system in some challenging situations. In all the case studies, the proposed variational quantum approach outperforms the usual VQE and static PDS calculations even at the lowest order. We also discuss the limitations of the proposed approach and its preliminary execution for a model Hamiltonian on a NISQ device.
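
As background for the PDS functional, here is a minimal classical sketch (not the paper's quantum implementation) of the PDS(K) idea as described in the connected-moments literature: the moments $\langle H^k\rangle$ for $k \le 2K-1$ determine a degree-$K$ polynomial whose roots estimate eigenvalues, and for a $2\times 2$ Hamiltonian PDS(2) recovers the spectrum exactly whenever the trial state overlaps both eigenvectors. The matrix convention and the toy numbers below are our own reconstruction.

```python
import numpy as np

def pds_energies(H, psi, K):
    # PDS(K): build the moment system M a = -b from <H^k>, k <= 2K-1,
    # then take the roots of E^K + a_1 E^{K-1} + ... + a_K
    m = [np.real(psi.conj() @ np.linalg.matrix_power(H, k) @ psi)
         for k in range(2 * K)]
    M = np.array([[m[2 * K - i - j] for j in range(1, K + 1)]
                  for i in range(1, K + 1)])
    b = np.array([m[2 * K - i] for i in range(1, K + 1)])
    a = np.linalg.solve(M, -b)
    return np.sort(np.roots(np.concatenate(([1.0], a))).real)

# toy check: for a 2x2 Hamiltonian and a generic trial state,
# PDS(2) reproduces both eigenvalues exactly
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
psi = np.array([0.8, 0.6])             # normalized trial state
est = pds_energies(H, psi, K=2)
exact = np.sort(np.linalg.eigvalsh(H))
print(np.allclose(est, exact))         # -> True
```

At higher orders (and in the quantum setting), the moments are estimated on a device rather than computed exactly, which is where the gradient-based solver of the abstract enters.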

This work analyzes correlations arising from quantum systems subject to sequential projective measurements to certify that the system in question has a quantum dimension greater than some $d$. We refine previously known methods and show that dimension greater than two can be certified in scenarios considerably simpler than those presented before and, for the first time in this sequential projective scenario, we certify quantum systems with dimension strictly greater than three. We also perform a systematic numerical analysis in terms of robustness and conclude that performing random projective measurements on random pure qutrit states allows a robust certification of quantum dimensions with very high probability.

Negativity of the Wigner function is arguably one of the most striking non-classical features of quantum states. Beyond its fundamental relevance, it is also a necessary resource for quantum speedup with continuous variables. As quantum technologies emerge, the need to identify and characterize the resources which provide an advantage over existing classical technologies becomes more pressing. Here we derive witnesses for Wigner negativity of single-mode and multimode quantum states, based on fidelities with Fock states, which can be reliably measured using standard detection setups. They possess a threshold expectation value indicating whether the measured state has a negative Wigner function. Moreover, the amount of violation provides an operational quantification of Wigner negativity. We phrase the problem of finding the threshold values for our witnesses as an infinite-dimensional linear optimisation. By relaxing and restricting the corresponding linear programs, we derive two hierarchies of semidefinite programs, which provide numerical sequences of increasingly tighter upper and lower bounds for the threshold values. We further show that both sequences converge to the threshold value. Moreover, our witnesses form a complete family: each Wigner-negative state is detected by at least one witness, thus providing a reliable method for experimentally witnessing Wigner negativity of quantum states from few measurements. From a foundational perspective, our findings provide insights on the set of positive Wigner functions, which still lacks a proper characterisation.
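
The witnesses above come from an SDP hierarchy; as a far simpler illustration of why Fock-state fidelities can reveal Wigner negativity, recall the textbook identity $W(0) = (2/\pi)\sum_n (-1)^n \langle n|\rho|n\rangle$: the Wigner function at the phase-space origin is proportional to the mean photon-number parity. A sketch (the helper name is ours):

```python
import numpy as np

def wigner_at_origin(fock_probs):
    # W(0) = (2/pi) * sum_n (-1)^n p_n, with p_n = <n|rho|n>.
    # A negative value certifies Wigner negativity of rho.
    n = np.arange(len(fock_probs))
    return (2 / np.pi) * np.sum((-1.0) ** n * np.asarray(fock_probs))

# single-photon Fock state |1>: maximally negative at the origin
print(wigner_at_origin([0.0, 1.0]))    # -> -2/pi ≈ -0.6366

# |1> mixed with vacuum: the negativity at the origin vanishes at p_0 = 1/2
print(wigner_at_origin([0.5, 0.5]))    # -> 0.0
```

Unlike this single-point formula, the paper's witnesses have certified thresholds valid for any state, which is what makes them experimentally reliable.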

Recently, it was shown that when reference frames are associated with quantum systems, the transformation laws between such quantum reference frames need to be modified to take into account the quantum and dynamical features of the reference frames. This led to a relational description of the phase-space variables of the quantum system of which the quantum reference frames are part. While such transformations were shown to be symmetries of the system's Hamiltonian, the question remained open as to whether they enjoy a group structure, similar to that of the Galilei group relating classical reference frames in quantum mechanics. In this work, we identify the canonical transformations on the phase space of the quantum systems comprising the quantum reference frames, and show that these transformations close a group structure defined by a Lie algebra, which is different from the usual Galilei algebra of quantum mechanics. We further find that the elements of this new algebra are in fact the building blocks of the previously identified quantum reference frame transformations, which we recover. Finally, we show how the transformations between classical reference frames described by the standard Galilei group symmetries can be obtained from the group of transformations between quantum reference frames by taking the zero limit of the parameter that governs the additional noncommutativity introduced by the quantum nature of inertial transformations.

We describe an efficient implementation of Bayesian quantum phase estimation in the presence of noise and multiple eigenstates. The main contribution of this work is the dynamic switching between different representations of the phase distributions, namely truncated Fourier series and normal distributions. The Fourier-series representation has the advantage of being exact in many cases, but suffers from increasing complexity with each update of the prior. This necessitates truncation of the series, which eventually causes the distribution to become unstable. We derive bounds on the error in representing normal distributions with a truncated Fourier series, and use these to decide when to switch to the normal-distribution representation. This representation is much simpler, and was proposed in conjunction with rejection filtering for approximate Bayesian updates. We show that, in many cases, the update can be done exactly using analytic expressions, thereby greatly reducing the time complexity of the updates. Finally, when dealing with a superposition of several eigenstates, we need to estimate the relative weights. This can be formulated as a convex optimization problem, which we solve using a gradient-projection algorithm. By updating the weights at exponentially scaled iterations we greatly reduce the computational complexity without affecting the overall accuracy.
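
For orientation, here is a grid-based toy version of the Bayesian update (not the Fourier-series or normal-distribution machinery of the abstract), using the standard iterative-phase-estimation likelihood $P(0\mid\varphi; M,\theta) = (1+\cos(M\varphi+\theta))/2$. All parameter choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
true_phase = 2.2
grid = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
posterior = np.ones_like(grid) / len(grid)   # flat prior on [0, 2*pi)

for _ in range(400):
    M = int(rng.integers(1, 20))             # repetitions of the unitary
    theta = rng.uniform(0, 2 * np.pi)        # auxiliary phase offset
    p0_true = (1 + np.cos(M * true_phase + theta)) / 2
    outcome = int(rng.random() > p0_true)    # simulated measurement
    p0_grid = (1 + np.cos(M * grid + theta)) / 2
    posterior *= p0_grid if outcome == 0 else 1 - p0_grid
    posterior /= posterior.sum()             # Bayes rule on the grid

estimate = grid[np.argmax(posterior)]
print(estimate)   # posterior mode, close to true_phase
```

The grid representation costs O(grid size) per update; the point of the paper is to replace it with compact Fourier-series and normal-distribution representations that make each update cheap or even analytic.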

We study the implications of the anyon fusion equation $a\times b=c$ for global properties of $2+1$D topological quantum field theories (TQFTs). Here $a$ and $b$ are anyons that fuse together to give a unique anyon, $c$. As is well known, when at least one of $a$ and $b$ is abelian, such equations describe aspects of the one-form symmetry of the theory. When $a$ and $b$ are non-abelian, the most obvious way such fusions arise is when a TQFT can be resolved into a product of TQFTs with trivial mutual braiding, and $a$ and $b$ lie in separate factors. More generally, we argue that the appearance of such fusions for non-abelian $a$ and $b$ can also be an indication of zero-form symmetries in a TQFT, of what we term "quasi-zero-form symmetries" (as in the case of discrete gauge theories based on the largest Mathieu group, $M_{24}$), or of the existence of non-modular fusion subcategories. We study these ideas in a variety of TQFT settings from (twisted and untwisted) discrete gauge theories to Chern-Simons theories based on continuous gauge groups and related cosets. Along the way, we prove various useful theorems.

Correctly estimating the quantum phase of a physical system is a central problem in quantum parameter estimation theory, owing to its wide range of applications from quantum metrology to cryptography. Ideally, the optimal quantum estimator is given by the so-called quantum Cramér-Rao bound, so any measurement strategy aims to obtain estimates as close as possible to it. However, more often than not, current state-of-the-art methods for estimating quantum phases fail to reach this bound, as they rely on maximum-likelihood estimators of non-identifiable likelihood functions. In this work we thoroughly review various schemes for estimating the phase of a qubit, identify the underlying problem that prevents these methods from reaching the quantum Cramér-Rao bound, and propose a new adaptive scheme based on covariant measurements to circumvent it. Our findings are carefully checked by Monte Carlo simulations, showing that the method we propose is both mathematically and experimentally more realistic and more efficient than the methods currently available.
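
The non-identifiability issue can be made concrete in the simplest qubit example: if the phase $\varphi$ of $(|0\rangle + e^{i\varphi}|1\rangle)/\sqrt{2}$ is estimated from $\sigma_x$ statistics alone, the likelihood is even in $\varphi$, so $+\varphi$ and $-\varphi$ are indistinguishable; a second measurement basis breaks the degeneracy. A toy check (our example, not the paper's scheme):

```python
import numpy as np

def p_plus_x(phi):
    # probability of the +1 outcome when measuring sigma_x on
    # |psi(phi)> = (|0> + e^{i phi}|1>) / sqrt(2)
    return (1 + np.cos(phi)) / 2

def p_plus_y(phi):
    # the same state measured along sigma_y
    return (1 + np.sin(phi)) / 2

phi = 1.1
# sigma_x statistics alone cannot tell phi from -phi ...
print(np.isclose(p_plus_x(phi), p_plus_x(-phi)))   # -> True
# ... but sigma_y statistics break the degeneracy
print(np.isclose(p_plus_y(phi), p_plus_y(-phi)))   # -> False
```

A maximum-likelihood estimator built on the $\sigma_x$ data alone therefore cannot converge to a unique phase, which is the pathology the covariant-measurement scheme is designed to avoid.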

In this paper, we propose a general scheme for analyzing the gradient vanishing phenomenon, also known as the barren plateau phenomenon, in training quantum neural networks with the ZX-calculus. More precisely, we extend the barren plateau theorem from unitary 2-design circuits to arbitrary parameterized quantum circuits under certain reasonable assumptions. The main technical contribution of this paper is representing certain integrals as ZX-diagrams and computing them with the ZX-calculus. The method is used to analyze four concrete quantum neural networks with different structures. It is shown that barren plateaus exist for the hardware-efficient ansatz and the MPS-inspired ansatz, while they do not for the QCNN ansatz and the tree tensor network ansatz.
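
Independently of the ZX-calculus machinery, the phenomenon itself can be probed numerically by estimating the variance of a parameter-shift gradient over random parameter settings. The sketch below uses a small statevector simulator and a hardware-efficient-style RY/CZ ansatz of our own choosing, so the numbers are only indicative:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_ry(state, theta, q, n):
    # RY(theta) on qubit q of an n-qubit state vector
    s = state.reshape([2] * n)
    c, si = np.cos(theta / 2), np.sin(theta / 2)
    a, b = np.take(s, 0, axis=q), np.take(s, 1, axis=q)
    return np.stack([c * a - si * b, si * a + c * b], axis=q).reshape(-1)

def apply_cz(state, q1, q2, n):
    # controlled-Z between qubits q1 and q2
    s = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    s[tuple(idx)] *= -1
    return s.reshape(-1)

def cost(thetas, n):
    # n layers of per-qubit RY rotations followed by a CZ chain;
    # cost function is the local observable <Z_0 Z_1>
    state = np.zeros(2 ** n); state[0] = 1.0
    t = iter(thetas)
    for _ in range(n):
        for q in range(n):
            state = apply_ry(state, next(t), q, n)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    p = (np.abs(state) ** 2).reshape(2, 2, -1).sum(axis=2)
    return p[0, 0] - p[0, 1] - p[1, 0] + p[1, 1]

def grad_variance(n, samples=200):
    # variance of the parameter-shift gradient w.r.t. the first
    # parameter, over uniformly random parameter settings
    g = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, size=n * n)
        tp, tm = th.copy(), th.copy()
        tp[0] += np.pi / 2; tm[0] -= np.pi / 2
        g.append((cost(tp, n) - cost(tm, n)) / 2)
    return np.var(g)

v2, v6 = grad_variance(2), grad_variance(6)
print(v2, v6)   # the gradient variance shrinks as the qubit count grows
```

On typical runs the small-system variance is markedly larger than the large-system one, consistent with the plateau behaviour the abstract establishes analytically for the hardware-efficient ansatz.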

The problem of sampling outputs of quantum circuits has been proposed as a candidate for demonstrating a quantum computational advantage (sometimes referred to as quantum "supremacy"). In this work, we investigate whether quantum advantage demonstrations can be achieved for more physically motivated sampling problems, related to measurements of physical observables. We focus on the problem of sampling the outcomes of an energy measurement performed on a simple-to-prepare product quantum state, a problem we refer to as energy sampling. For different regimes of measurement resolution and measurement errors, we provide complexity-theoretic arguments showing that the existence of efficient classical algorithms for energy sampling is unlikely. In particular, we describe a family of Hamiltonians with nearest-neighbour interactions on a 2D lattice that can be efficiently measured with high resolution using a quantum circuit of commuting gates (IQP circuit), whereas an efficient classical simulation of this process should be impossible. In this high-resolution regime, which can only be achieved for Hamiltonians that can be $\textit{exponentially fast-forwarded}$, it is possible to use current theoretical tools tying quantum advantage statements to a polynomial-hierarchy collapse, whereas for lower-resolution measurements such arguments fail. Nevertheless, we show that efficient classical algorithms for low-resolution energy sampling can still be ruled out if we assume that quantum computers are strictly more powerful than classical ones. We believe our work brings a new perspective to the problem of demonstrating quantum advantage and leads to interesting new questions in Hamiltonian complexity.
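
To fix intuition for the energy-sampling task, consider the classically easy extreme where every term of $H$ is diagonal in the computational basis: an ideal (infinite-resolution) energy measurement on a product state then amounts to sampling bitstrings qubit by qubit and evaluating the energy. The hard instances in the paper instead involve X-type IQP structure; the toy model below is ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# toy commuting Hamiltonian: nearest-neighbour ZZ couplings on a line
J = rng.normal(size=n - 1)

def energy(bits):
    z = 1 - 2 * np.asarray(bits)            # bit 0 -> z = +1, bit 1 -> z = -1
    return float(np.sum(J * z[:-1] * z[1:]))

# product state: qubit i is |0> with probability p[i]
p = rng.uniform(size=n)

def sample_energy():
    bits = (rng.random(n) > p).astype(int)  # measure each qubit independently
    return energy(bits)

samples = [sample_energy() for _ in range(5000)]
print(np.mean(samples))   # concentrates around <H> for this product state
```

Because all terms commute with the computational basis, each bitstring yields an exact energy sample; the complexity-theoretic hardness in the abstract arises only once the commuting terms are no longer simultaneously diagonal with the preparation basis.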

Measurement noise is one of the main sources of errors in currently available quantum devices based on superconducting qubits. At the same time, the complexity of its characterization and mitigation often exhibits exponential scaling with the system size. In this work, we introduce a correlated measurement noise model that can be efficiently described and characterized, and which admits effective noise mitigation on the level of marginal probability distributions. Noise mitigation can be performed up to some error, for which we derive upper bounds. Characterization of the model is done efficiently using Diagonal Detector Overlapping Tomography, a generalization of the recently introduced Quantum Overlapping Tomography to the problem of reconstructing readout noise with restricted locality. The procedure allows one to characterize $k$-local measurement cross-talk on an $N$-qubit device using $O(k 2^k \log(N))$ circuits containing random combinations of X and identity gates. We perform experiments on 15 (23) qubits using IBM's (Rigetti's) devices to test both the noise model and the error-mitigation scheme, and obtain an average reduction of errors by a factor $>22$ ($>5.5$) compared to no mitigation. Interestingly, we find that correlations in the measurement noise do not correspond to the physical layout of the device. Furthermore, we study numerically the effects of readout noise on the performance of the Quantum Approximate Optimization Algorithm (QAOA). We observe in simulations that for numerous objective Hamiltonians, including random MAX-2-SAT instances and the Sherrington-Kirkpatrick model, the noise mitigation improves the quality of the optimization. Finally, we provide arguments for why, in the course of QAOA optimization, the estimates of the local energy (or cost) terms often behave like uncorrelated variables, which greatly reduces the sampling complexity of the energy estimation compared to the pessimistic error analysis. We also show that similar effects are expected for Haar-random quantum states and states generated by shallow-depth random circuits.
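
For readers new to readout-error mitigation, the simplest (uncorrelated, tensor-product) version of the idea inverts a measured response matrix on the outcome distribution; the correlated model of the abstract generalizes this to marginals with cross-talk. The error rates below are hypothetical:

```python
import numpy as np

def response(p01, p10):
    # single-qubit response matrix Lambda[i][j] = P(read i | true j),
    # with p01 = P(read 1 | true 0) and p10 = P(read 0 | true 1)
    return np.array([[1 - p01, p10],
                     [p01, 1 - p10]])

# assumed (hypothetical) readout error rates for two qubits
L0 = response(0.02, 0.05)
L1 = response(0.03, 0.08)
Lam = np.kron(L0, L1)                      # uncorrelated two-qubit noise

p_true = np.array([0.5, 0.0, 0.0, 0.5])    # ideal Bell-type statistics
p_noisy = Lam @ p_true                     # what the device would report
p_mitigated = np.linalg.solve(Lam, p_noisy)

print(np.allclose(p_mitigated, p_true))    # -> True
```

With finite sampling, the inverted vector can contain small negative entries; projecting back onto the probability simplex is a common fix, and the error bounds mentioned in the abstract control how far such mitigated marginals can be from the ideal ones.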

]]>Measurement noise is one of the main sources of errors in currently available quantum devices based on superconducting qubits. At the same time, the complexity of its characterization and mitigation often exhibits exponential scaling with the system size. In this work, we introduce a correlated measurement noise model that can be efficiently described and characterized, and which admits effective noise-mitigation on the level of marginal probability distributions. Noise mitigation can be performed up to some error for which we derive upper bounds. Characterization of the model is done efficiently using Diagonal Detector Overlapping Tomography – a generalization of the recently introduced Quantum Overlapping Tomography to the problem of reconstruction of readout noise with restricted locality. The procedure allows to characterize $k$-local measurement cross-talk on $N$-qubit device using $O(k2^klog(N))$ circuits containing random combinations of X and identity gates. We perform experiments on 15 (23) qubits using IBM's (Rigetti's) devices to test both the noise model and the error-mitigation scheme, and obtain an average reduction of errors by a factor $>22$ ($>5.5$) compared to no mitigation. Interestingly, we find that correlations in the measurement noise do not correspond to the physical layout of the device. Furthermore, we study numerically the effects of readout noise on the performance of the Quantum Approximate Optimization Algorithm (QAOA). We observe in simulations that for numerous objective Hamiltonians, including random MAX-2-SAT instances and the Sherrington-Kirkpatrick model, the noise-mitigation improves the quality of the optimization. Finally, we provide arguments why in the course of QAOA optimization the estimates of the local energy (or cost) terms often behave like uncorrelated variables, which greatly reduces sampling complexity of the energy estimation compared to the pessimistic error analysis. 
We also show that similar effects are expected for Haar-random quantum states and states generated by shallow-depth random circuits.
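As a rough illustration of the calibration circuits mentioned above, the sketch below (function names and the coverage check are our own constructions, not taken from the paper) generates random X/identity settings as bitstrings and verifies the coverage condition that overlapping-tomography-style schemes rely on: every $k$-subset of qubits must see all $2^k$ input patterns.

```python
import itertools
import random

def random_xi_circuits(n_qubits, n_circuits, seed=0):
    """Each calibration circuit applies X ('1') or identity ('0') to every
    qubit before measurement; returns the settings as bitstrings."""
    rng = random.Random(seed)
    return ["".join(rng.choice("01") for _ in range(n_qubits))
            for _ in range(n_circuits)]

def covers_all_k_local(settings, n_qubits, k=2):
    """True if every k-subset of qubits has seen all 2^k input patterns --
    the coverage needed to reconstruct k-local readout noise."""
    for subset in itertools.combinations(range(n_qubits), k):
        seen = {tuple(s[q] for q in subset) for s in settings}
        if len(seen) < 2 ** k:
            return False
    return True

# Exhaustive settings on 2 qubits trivially cover all 2-local patterns:
print(covers_all_k_local(["00", "01", "10", "11"], 2))  # -> True
```

With random settings, the number of circuits needed for coverage grows only logarithmically in the number of qubits, consistent with the $O(k2^k\log(N))$ count quoted in the abstract.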

]]>We give an upper bound on the resources required for valuable quantum advantage in pricing derivatives. To do so, we give the first complete resource estimates for useful quantum derivative pricing, using autocallable and Target Accrual Redemption Forward (TARF) derivatives as benchmark use cases. We uncover blocking challenges in known approaches and introduce a new method for quantum derivative pricing – the $\textit{re-parameterization method}$ – that avoids them. This method combines pre-trained variational circuits with fault-tolerant quantum computing to dramatically reduce resource requirements. We find that the benchmark use cases we examine require 8k logical qubits and a T-depth of 54 million. We estimate that quantum advantage would require executing this program in about a second. While the resource requirements given here are out of reach of current systems, we hope they will provide a roadmap for further improvements in algorithms, implementations, and planned hardware architectures.

]]>In this paper, we derive sharp lower bounds, also known as quantum speed limits, for the time it takes to transform a quantum system into a state such that an observable assumes its lowest average value. We assume that the system is initially in an incoherent state relative to the observable and that the state evolves according to a von Neumann equation with a Hamiltonian whose bandwidth is uniformly bounded. The transformation time depends intricately on the observable's and the initial state's eigenvalue spectrum and the relative constellation of the associated eigenspaces. The problem of finding quantum speed limits consequently divides into different cases requiring different strategies. We derive quantum speed limits in a large number of cases, and we simultaneously develop a method to break down complex cases into manageable ones. The derivations involve both combinatorial and differential geometric techniques. We also study multipartite systems and show that allowing correlations between the parts can speed up the transformation time. In a final section, we use the quantum speed limits to obtain upper bounds on the power with which energy can be extracted from quantum batteries.

]]>Nonlinear SU(1,1) interferometers are fruitful and promising tools for spectral engineering and precise measurements with phase sensitivity below the classical bound. Such interferometers have been successfully realized in bulk and fiber-based configurations. However, rapidly developing integrated technologies provide higher efficiencies and smaller footprints, and pave the way to quantum-enhanced on-chip interferometry. In this work, we theoretically develop an integrated architecture for a multimode SU(1,1) interferometer that can be applied to various integrated platforms. The presented interferometer includes a polarization converter between two photon sources and utilizes a continuous-wave (CW) pump. Based on the potassium titanyl phosphate (KTP) platform, we show that this configuration results in almost perfect destructive interference at the output and supersensitivity regions below the classical limit. In addition, we discuss the fundamental difference between single-mode and highly multimode SU(1,1) interferometers in the properties of phase sensitivity and its limits. Finally, we explore how to improve the phase sensitivity by filtering the output radiation and by using different seeding states in different modes with various detection strategies.

]]>The power of a quantum circuit is determined by the number of two-qubit entangling gates that can be performed within the coherence time of the system. In the absence of parallel quantum gate operations, this would limit quantum simulators to shallow circuits. Here, we propose a protocol to parallelize the implementation of two-qubit entangling gates between multiple spatially separated users who share a common spin-chain data-bus. Our protocol works by inducing an effective interaction between each pair of qubits without disturbing the others; it therefore increases the rate of gate operations without creating crosstalk. This is achieved by appropriately tuning the Hamiltonian parameters, following two different strategies. The tuning of the parameters makes different bilocalized eigenstates responsible for the realization of the entangling gates between different pairs of distant qubits. Remarkably, the performance of our protocol is robust against increasing the length of the data-bus and the number of users. Moreover, we show that this protocol can tolerate various types of disorder and is applicable in the context of superconductor-based systems. The proposed protocol can serve for realizing two-way quantum communication.

]]>The accuracy of quantum dynamics simulation is usually measured by the error of the unitary evolution operator in the operator norm, which in turn depends on a certain norm of the Hamiltonian. For unbounded operators, after suitable discretization, the norm of the Hamiltonian can be very large, which significantly increases the simulation cost. However, the operator norm measures the worst-case error of the quantum simulation, while practical simulation concerns the error with respect to a given initial vector at hand. We demonstrate that under suitable assumptions on the Hamiltonian and the initial vector, if the error is measured in terms of the vector norm, the computational cost of Trotter-type methods may not increase at all as the norm of the Hamiltonian increases. In this sense, our result outperforms all previous error bounds in the quantum simulation literature. Our result extends that of [Jahnke, Lubich, BIT Numer. Math. 2000] to the time-dependent setting. We also clarify the existence and the importance of commutator scalings of Trotter and generalized Trotter methods for time-dependent Hamiltonian simulations.
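A toy numerical check of the distinction drawn above (our own construction, not the setting or bounds of the paper): for a "stiff" diagonal part $A$ with a rapidly growing spectrum and a bounded Hermitian perturbation $B$, the first-order Trotter error measured on a low-energy initial vector can be much smaller than the worst-case operator-norm error, and is always bounded by it.

```python
import numpy as np

def expm_hermitian(H, t):
    """Compute e^{-i t H} for a Hermitian matrix H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def trotter(A, B, t, n):
    """First-order Trotter approximation (e^{-iA t/n} e^{-iB t/n})^n."""
    step = expm_hermitian(A, t / n) @ expm_hermitian(B, t / n)
    U = np.eye(A.shape[0], dtype=complex)
    for _ in range(n):
        U = step @ U
    return U

d = 16
rng = np.random.default_rng(1)
# "Stiff" diagonal part mimicking a discretized unbounded operator,
# plus a bounded Hermitian perturbation.
A = np.diag(np.arange(d, dtype=float) ** 2)
B = rng.standard_normal((d, d))
B = (B + B.T) / 2

t, n = 1.0, 200
U_exact = expm_hermitian(A + B, t)
U_trot = trotter(A, B, t, n)

psi = np.zeros(d, dtype=complex)
psi[0] = 1.0  # low-energy initial vector

op_err = np.linalg.norm(U_exact - U_trot, 2)        # worst-case (operator norm)
vec_err = np.linalg.norm((U_exact - U_trot) @ psi)  # error on this initial state
print(vec_err <= op_err)  # vector-norm error never exceeds the operator-norm error
```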

]]>We theoretically analyze the phase sensitivity of the Induced-Coherence (Mandel-Type) Interferometer, including the case where the sensitivity is "boosted" into the bright input regime with coherent-light seeding. We find scaling which reaches below the shot noise limit, even when seeding the spatial mode which does not interact with the sample – or when seeding the undetected mode. It is a hybrid of a linear and a non-linear (Yurke-Type) interferometer, and aside from the supersensitivity, is distinguished from other systems by "preferring" an imbalance in the gains of the two non-linearities (with the second gain being optimal at $\textit{low}$ values), and non-monotonic behavior of the sensitivity as a function of the gain of the second non-linearity. Furthermore, the setup allows use of subtracted intensity measurements, instead of direct (additive) or homodyne measurements – a significant practical advantage. Bright, super-sensitive phase estimation of an object with different light fields for interaction and detection is possible, with various potential applications, especially in cases where the sample may be sensitive to light, or is most interesting in frequency domains outside what is easily detected, or when desiring bright-light phase estimation with sensitive/delicate detectors. We use an analysis in terms of general squeezing and discover that super-sensitivity occurs only in this case – that is, the effect is not present with the spontaneous-parametric-down-conversion approximation, which many previous analyses and experiments have focused on.

]]>We introduce a general framework for analysing general probabilistic theories, which emphasises the distinction between the dynamical and probabilistic structures of a system. The dynamical structure is the set of pure states together with the action of the reversible dynamics, whilst the probabilistic structure determines the measurements and the outcome probabilities. For transitive dynamical structures whose dynamical group and stabiliser subgroup form a Gelfand pair we show that all probabilistic structures are rigid (cannot be infinitesimally deformed) and are in one-to-one correspondence with the spherical representations of the dynamical group. We apply our methods to classify all probabilistic structures when the dynamical structure is that of complex Grassmann manifolds acted on by the unitary group. This is a generalisation of quantum theory where the pure states, instead of being represented by one-dimensional subspaces of a complex vector space, are represented by subspaces of a fixed dimension larger than one. We also show that systems with compact two-point homogeneous dynamical structures (i.e. every pair of pure states with a given distance can be reversibly transformed to any other pair of pure states with the same distance), which include systems corresponding to Euclidean Jordan Algebras, all have rigid probabilistic structures.

]]>We describe the $\textit{contextual subspace variational quantum eigensolver}$ (CS-VQE), a hybrid quantum-classical algorithm for approximating the ground state energy of a Hamiltonian. The approximation to the ground state energy is obtained as the sum of two contributions. The first contribution comes from a noncontextual approximation to the Hamiltonian, and is computed classically. The second contribution is obtained by using the variational quantum eigensolver (VQE) technique to compute a contextual correction on a quantum processor. In general the VQE computation of the contextual correction uses fewer qubits and measurements than the VQE computation of the original problem. Varying the number of qubits used for the contextual correction adjusts the quality of the approximation. We simulate CS-VQE on tapered Hamiltonians for small molecules, and find that the number of qubits required to reach chemical accuracy can be reduced by more than a factor of two. The number of terms required to compute the contextual correction can be reduced by more than a factor of ten, without the use of other measurement reduction schemes. This indicates that CS-VQE is a promising approach for eigenvalue computations on noisy intermediate-scale quantum devices.

]]>A number of physically intuitive results for the calculation of multi-time correlations in phase-space representations of quantum mechanics are obtained. They relate time-dependent stochastic samples to multi-time observables, and rely on the presence of derivative-free operator identities. In particular, expressions for time-ordered normal-ordered observables in the positive-P distribution are derived which replace Heisenberg operators with the bare time-dependent stochastic variables, confirming extension of earlier such results for the Glauber-Sudarshan P. Analogous expressions are found for the anti-normal-ordered case of the doubled phase-space Q representation, along with conversion rules among doubled phase-space s-ordered representations. The latter are then shown to be readily exploited to further calculate anti-normal and mixed-ordered multi-time observables in the positive-P, Wigner, and doubled-Wigner representations. Which mixed-order observables are amenable and which are not is indicated, and explicit tallies are given up to 4th order. Overall, the theory of quantum multi-time observables in phase-space representations is extended, allowing non-perturbative treatment of many cases. The accuracy, usability, and scalability of the results to large systems is demonstrated using stochastic simulations of the unconventional photon blockade system and a related Bose-Hubbard chain. In addition, a robust but simple algorithm for integration of stochastic equations for phase-space samples is provided.

]]>We propose and analyze a set of variational quantum algorithms for solving quadratic unconstrained binary optimization problems where a problem consisting of $n_c$ classical variables can be implemented on $\mathcal O(\log n_c)$ qubits. The underlying encoding scheme allows for a systematic increase in correlations among the classical variables captured by a variational quantum state by progressively increasing the number of qubits involved. We first examine the simplest limit where all correlations are neglected, i.e. when the quantum state can only describe statistically independent classical variables. We apply this minimal encoding to find approximate solutions of a general problem instance comprising $64$ classical variables using $7$ qubits. Next, we show how two-body correlations between the classical variables can be incorporated in the variational quantum state and how they can improve the quality of the approximate solutions. We give an example by solving a $42$-variable Max-Cut problem using only $8$ qubits, where we exploit the specific topology of the problem. We analyze whether these cases can be optimized efficiently given the limited resources available in state-of-the-art quantum platforms. Lastly, we present the general framework for extending the expressibility of the probability distribution to any multi-body correlations.
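A logarithmic encoding of this kind can be sketched as follows (a paraphrase under our own conventions; the ancilla-plus-register layout and normalization are assumptions, not details taken from the paper): one ancilla qubit plus $\lceil\log_2 n_c\rceil$ register qubits store each classical variable's marginal probability, which gives the $64$-variable, $7$-qubit count quoted above.

```python
import numpy as np

def encode(probs):
    """Build a statevector storing P(x_i = 1) = p_i in the conditional
    ancilla amplitudes, using 1 ancilla + ceil(log2 n) register qubits."""
    n = len(probs)
    n_r = int(np.ceil(np.log2(n)))
    state = np.zeros(2 ** (n_r + 1))
    for i, p in enumerate(probs):
        state[i] = np.sqrt((1 - p) / n)        # ancilla = 0, register = |i>
        state[2 ** n_r + i] = np.sqrt(p / n)   # ancilla = 1, register = |i>
    return state

def decode(state, n_c):
    """Read back P(x_i = 1) from the conditional ancilla amplitudes."""
    amps = state.reshape(2, -1)  # axis 0: ancilla bit, axis 1: register index
    p0 = amps[0, :n_c] ** 2
    p1 = amps[1, :n_c] ** 2
    return p1 / (p0 + p1)

probs = [0.1, 0.9, 0.5, 0.3]
print(np.round(decode(encode(probs), 4), 6))  # recovers the input marginals
```

Note that the quantum state here can only represent statistically independent variables, which is exactly the "minimal encoding" limit the abstract describes before correlations are added.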

]]>It is well known that a quantum circuit on $N$ qubits composed of Clifford gates with the addition of $k$ non-Clifford gates can be simulated on a classical computer by an algorithm scaling as $\text{poly}(N)\exp(k)$ [1]. We show that, for a quantum circuit to simulate quantum chaotic behavior, it is both necessary and sufficient that $k=\Theta(N)$. This result implies the impossibility of simulating quantum chaos on a classical computer.

]]>Photon loss is destructive to the performance of quantum photonic devices, and therefore suppressing its effects is paramount to photonic quantum technologies. We present two schemes to mitigate the effects of photon loss for a Gaussian Boson Sampling device, in particular, to improve the estimation of the sampling probabilities. Instead of using error correction codes, which are expensive in terms of their hardware resource overhead, our schemes require only minor hardware modifications, or even none. Our loss-suppression techniques rely either on collecting additional measurement data or on classical post-processing once the measurement data is obtained. We show that, with a moderate cost of classical post-processing, the effects of photon loss can be significantly suppressed for a certain amount of loss. The proposed schemes are thus a key enabler for applications of near-term photonic quantum devices.

]]>Master equations are a vital tool to model heat flow through nanoscale thermodynamic systems. Most practical devices are made up of interacting sub-systems, and are often modelled using either $\textit{local}$ master equations (LMEs) or $\textit{global}$ master equations (GMEs). While the limiting cases in which either the LME or the GME breaks down are well understood, there exists a 'grey area' in which both equations capture steady-state heat currents reliably, but predict very different $\textit{transient}$ heat flows. In such cases, which one should we trust? Here, we show that, when it comes to dynamics, the local approach can be more reliable than the global one for weakly interacting open quantum systems. This is due to the fact that the $\textit{secular approximation}$, which underpins the GME, can destroy key dynamical features. To illustrate this, we consider a minimal transport setup and show that its LME displays $\textit{exceptional points}$ (EPs). These singularities have been observed in a superconducting-circuit realisation of the model [1]. However, in stark contrast to experimental evidence, no EPs appear within the global approach. We then show that the EPs are a feature built into the Redfield equation, which is more accurate than the LME and the GME. Finally, we show that the local approach emerges as the weak-interaction limit of the Redfield equation, and that it entirely avoids the secular approximation.
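A minimal toy example of an exceptional point (our own $2\times 2$ illustration, not the transport model of the paper): the non-Hermitian matrix below has eigenvalues $\pm\sqrt{g^2-\gamma^2}$, which coalesce at $g=\gamma$, where the matrix also ceases to be diagonalizable.

```python
import numpy as np

def toy_generator(g, gamma=1.0):
    """Toy non-Hermitian generator: eigenvalues are ±sqrt(g^2 - gamma^2),
    coalescing at the exceptional point g = gamma."""
    return np.array([[1j * gamma, g], [g, -1j * gamma]])

def eigen_gap(g, gamma=1.0):
    """Absolute splitting between the two eigenvalues."""
    lam = np.linalg.eigvals(toy_generator(g, gamma))
    return abs(lam[0] - lam[1])

for g in (0.5, 1.0, 2.0):
    print(g, eigen_gap(g))  # the gap closes at the EP, g = gamma = 1
```

Below the EP the splitting is purely imaginary (damped oscillation-free dynamics), above it purely real (oscillatory dynamics); the qualitative change at the EP is the kind of dynamical feature that, per the abstract, the secular approximation can wash out.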

One of the key ingredients of many LOCC protocols in quantum information is a multiparticle (locally) maximally entangled quantum state, also known as a critical state, that possesses local symmetries. We show how to design critical states with arbitrarily large local unitary symmetry. We explain that such states can be realised in a quantum system of distinguishable traps with bosons or fermions occupying a finite number of modes. Then, local symmetries of the designed quantum state are equal to the unitary group of local mode operations acting diagonally on all traps. Therefore, such a group of symmetries is naturally protected against errors that occur in a physical realisation of mode operators. We also link our results with the existence of so-called strictly semistable states with particular asymptotic diagonal symmetries. Our main technical result states that the $N$th tensor power of any irreducible representation of $\mathrm{SU}(N)$ contains a copy of the trivial representation. This is established via a direct combinatorial analysis of Littlewood-Richardson rules utilising certain combinatorial objects which we call telescopes.

Squeezed states in harmonic systems can be generated through a variety of techniques, including varying the oscillator frequency or using a nonlinear two-photon Raman interaction. We focus on these two techniques to drive an initial thermal state into a final squeezed thermal state with controlled squeezing parameters (amplitude and phase) in arbitrary time. The protocols are designed through reverse engineering for both unitary and open dynamics. Control of the dissipation is achieved using stochastic processes, readily implementable via, e.g., continuous quantum measurements. Importantly, this allows controlling the state entropy and can be used for fast thermalization. The developed protocols are thus suited to generate squeezed thermal states at controlled temperature in arbitrary time.

We identify necessary and sufficient conditions for a quantum channel to be optimal for any convex optimization problem in which the optimization is taken over the set of all quantum channels of a fixed size. Optimality conditions for convex optimization problems over the set of all quantum measurements of a given system having a fixed number of measurement outcomes are obtained as a special case. In the case of linear objective functions for measurement optimization problems, our conditions reduce to the well-known Holevo-Yuen-Kennedy-Lax measurement optimality conditions. We illustrate how our conditions can be applied to various state transformation problems having non-linear objective functions based on the fidelity, trace distance, and quantum relative entropy.

Quantum data locking is a quantum phenomenon that allows a long message to be encrypted with a short secret key while retaining information-theoretic security. This is in sharp contrast with classical information theory where, according to Shannon, the secret key needs to be at least as long as the message. Here we explore photonic architectures for quantum data locking, where information is encoded in multi-photon states and processed using multi-mode linear optics and photo-detection, with the goal of extending an initial secret key into a longer one. The secret key consumption depends on the number of modes and photons employed. In the no-collision limit, where the likelihood of photon bunching is suppressed, the key consumption is shown to be logarithmic in the dimensions of the system. Our protocol can be viewed as an application of the physics of Boson Sampling to quantum cryptography. Experimental realisations are challenging but feasible with state-of-the-art technology, as techniques recently used to demonstrate Boson Sampling can be adapted to our scheme (e.g., Phys. Rev. Lett. 123, 250503, 2019).