Fisher Information: A Crucial Tool for NISQ Research

This is a Perspective on "Fisher Information in Noisy Intermediate-Scale Quantum Applications" by Johannes Jakob Meyer, published in Quantum 5, 539 (2021).

By Kishor Bharti (Centre for Quantum Technologies, National University of Singapore 117543, Singapore).

 

Quantum computers available today consist of a few dozen qubits. Each quantum gate operation is afflicted by a certain level of noise, and hence computations are restricted to shallow circuit depths. Such quantum devices are commonly known as noisy intermediate-scale quantum (NISQ) devices [1,2]. While NISQ devices are believed to outperform classical devices for certain sampling problems, a similar advantage for optimization problems has not been established so far. The quest for quantum advantage on practically relevant problems is one of the major research directions. Variational quantum algorithms (VQAs) have been proposed as a prominent approach to harness the potential of NISQ devices [2,3]. Canonical examples of VQAs are the variational quantum eigensolver (VQE) [4,5,6] and the quantum approximate optimization algorithm (QAOA) [7,8]. VQAs are hybrid quantum-classical algorithms that employ a classical device to tune the parameters of a parameterized quantum state using measurements performed on the quantum device. As the name suggests, a parameterized quantum state depends on a set of parameters $\boldsymbol{\theta}\in\mathbb{R}^{d}$; such states are denoted by $\vert\psi(\boldsymbol{\theta})\rangle$ when pure and $\rho(\boldsymbol{\theta})$ when mixed. To harness the potential of parameterized quantum states, it is crucial to understand how such a state changes as the underlying parameters $\boldsymbol{\theta}$ are varied.

Parameterized quantum states have long been studied in the field of quantum sensing, where the parameters of the state encode the quantities to be estimated; common examples are temperature and magnetic field strength. Parameterized quantum states have also been employed in quantum optimal control theory. To study such states, tools like the classical and quantum Fisher information have been heavily employed in quantum sensing. These techniques have recently established a firm foothold in the study of parameterized quantum states for VQAs and other NISQ applications in general. The review titled “Fisher Information in Noisy Intermediate-Scale Quantum Applications,” recently published in Quantum, presents a survey of recent applications of the Fisher information, both classical and quantum, in the setting of NISQ devices [9].

Classical and Quantum Fisher Information

To understand the classical Fisher information, let us define an $n$-outcome measurement via the positive operator-valued measure (POVM) $\mathcal{M}=\left\{ \Pi_{i}\right\} _{i=1}^{n}$, where the positive operator $\Pi_{i}$ corresponds to the $i$th outcome of the measurement. The probability of observing the $k$th outcome for a parameterized quantum state is given by
\begin{equation}
p_{k}(\boldsymbol{\theta})=\textrm{Tr}\left\{ \rho(\boldsymbol{\theta})\Pi_{k}\right\} \label{eq:probabilities}
\end{equation}
for $k\in\left\{ 1,2,\cdots,n\right\} $. Let us represent the full outcome distribution by $p_{\mathcal{M}}(\boldsymbol{\theta})$. The classical Fisher information matrix can be thought of as a metric which one can use to measure distances between probability distributions. A standard approach to measure distance between two probability distributions $p_{\mathcal{M}}\left(\boldsymbol{\theta}\right)$ and $p_{\mathcal{M}}\left(\boldsymbol{\theta}^{\prime}\right)$ is the Kullback-Leibler (KL) divergence $d_{KL}\left(p_{\mathcal{M}}\left(\boldsymbol{\theta}\right),p_{\mathcal{M}}\left(\boldsymbol{\theta}^{\prime}\right)\right)$ given by
\begin{equation}
d_{KL}\left(p_{\mathcal{M}}\left(\boldsymbol{\theta}\right),p_{\mathcal{M}}\left(\boldsymbol{\theta}^{\prime}\right)\right)=\sum_{k=1}^{n}p_{k}(\boldsymbol{\theta})\log\left(\frac{p_{k}(\boldsymbol{\theta})}{p_{k}\left(\boldsymbol{\theta}^{\prime}\right)}\right).\label{eq:KL_divergence}
\end{equation}
Starting from the above distance measure, one gets the following expression for the classical Fisher information matrix,
\begin{equation}
\left[M_{KL}\right]_{i,j}=\sum_{k=1}^{n}\frac{1}{p_{k}(\boldsymbol{\theta})}\frac{\partial p_{k}(\boldsymbol{\theta})}{\partial{\theta}_{i}}\frac{\partial p_{k}(\boldsymbol{\theta})}{\partial{\theta}_{j}}.\label{eq:Classical_Fisher_Info}
\end{equation}
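One way to see where this expression comes from is to expand the KL divergence to second order in a small parameter shift $\delta\boldsymbol{\theta}=\boldsymbol{\theta}^{\prime}-\boldsymbol{\theta}$; the first-order term vanishes because the probabilities sum to one, leaving
\begin{equation}
d_{KL}\left(p_{\mathcal{M}}\left(\boldsymbol{\theta}\right),p_{\mathcal{M}}\left(\boldsymbol{\theta}+\delta\boldsymbol{\theta}\right)\right)\approx\frac{1}{2}\sum_{i,j=1}^{d}\left[M_{KL}\right]_{i,j}\delta\theta_{i}\delta\theta_{j},\label{eq:KL_expansion}
\end{equation}
so the classical Fisher information matrix is the Hessian of the KL divergence at $\delta\boldsymbol{\theta}=0$.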
The classical Fisher information matrix is unique in the sense that one obtains the same matrix, up to a constant factor, even if one starts from a different monotone distance measure. In contrast, the quantum Fisher information matrix is not unique and depends on the chosen distance measure. Starting from the distance measure $1-f$, where $f$ is the fidelity, the quantum Fisher information matrix for pure states $\vert\psi(\boldsymbol{\theta})\rangle$ is given by
\begin{equation}
\mathcal{F}_{i,j}=4\textrm{Re}\left[\langle\partial_{i}\psi(\boldsymbol{\theta})\vert\partial_{j}\psi(\boldsymbol{\theta})\rangle-\langle\partial_{i}\psi(\boldsymbol{\theta})\vert\psi(\boldsymbol{\theta})\rangle\langle\psi(\boldsymbol{\theta})\vert\partial_{j}\psi(\boldsymbol{\theta})\rangle\right].\label{eq:QFI_pure}
\end{equation}
For mixed states, one can use the Bures distance to obtain the expression for the quantum Fisher information. Any device that can estimate the output probabilities and their derivatives can estimate the classical Fisher information. The quantum Fisher information, in contrast, can be considerably more challenging to estimate. For example, the expression in the equation above requires estimating both the real and imaginary parts of terms such as $\langle\partial_{i}\psi(\boldsymbol{\theta})\vert\partial_{j}\psi(\boldsymbol{\theta})\rangle$ and $\langle\partial_{i}\psi(\boldsymbol{\theta})\vert\psi(\boldsymbol{\theta})\rangle$, which often necessitates circuits with controlled multi-qubit unitaries; implementing such circuits on a NISQ device is experimentally challenging. In the case of general density matrices, estimating the quantum Fisher information matrix typically requires full tomography of the underlying quantum state, and the number of samples needed scales exponentially with the number of qubits, rendering the estimation of the quantum Fisher information matrix challenging for NISQ applications. Devising methods to estimate the quantum Fisher information is an active area of research [10,11,12,13,14,15].
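As a concrete illustration, here is a minimal sketch (not taken from the review): it assumes a toy single-qubit state $\vert\psi(\boldsymbol{\theta})\rangle=R_{z}(\theta_{2})R_{y}(\theta_{1})\vert0\rangle$ measured in the computational basis, simulated classically with NumPy and differentiated by finite differences, and computes the classical Fisher information matrix from the outcome probabilities together with the quantum Fisher information matrix from the pure-state formula above.

import numpy as np

# Toy example (illustrative only): |psi(theta)> = Rz(theta_2) Ry(theta_1) |0>,
# measured in the computational basis.

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]], dtype=complex)

def rz(a):
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def state(theta):
    return rz(theta[1]) @ ry(theta[0]) @ np.array([1.0, 0.0], dtype=complex)

def probs(theta):
    return np.abs(state(theta)) ** 2  # p_k(theta) = |<k|psi(theta)>|^2

def classical_fim(theta, eps=1e-6):
    """Classical Fisher information matrix via central finite differences."""
    d = len(theta)
    p = probs(theta)
    dp = np.zeros((d, p.size))
    for i in range(d):
        shift = np.zeros(d)
        shift[i] = eps
        dp[i] = (probs(theta + shift) - probs(theta - shift)) / (2 * eps)
    return np.einsum('ik,jk,k->ij', dp, dp, 1.0 / p)

def quantum_fim(theta, eps=1e-6):
    """Pure-state QFI: 4 Re[<d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi>]."""
    d = len(theta)
    psi = state(theta)
    dpsi = []
    for i in range(d):
        shift = np.zeros(d)
        shift[i] = eps
        dpsi.append((state(theta + shift) - state(theta - shift)) / (2 * eps))
    F = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            overlap = np.vdot(dpsi[i], dpsi[j]) - np.vdot(dpsi[i], psi) * np.vdot(psi, dpsi[j])
            F[i, j] = 4 * np.real(overlap)
    return F

theta = np.array([0.7, 0.3])
print(classical_fim(theta))  # approx. diag(1, 0): the probabilities do not depend on theta_2
print(quantum_fim(theta))    # approx. diag(1, sin^2(theta_1))

In this toy example the computational-basis probabilities are insensitive to $\theta_{2}$, so the corresponding entry of the classical matrix vanishes while the quantum Fisher information remains nonzero, illustrating that the classical Fisher information of any fixed measurement is upper bounded by the quantum Fisher information.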

Cramér-Rao Bound

The applications of the classical and quantum Fisher information in quantum sensing are often based on the famous Cramér-Rao bound. Suppose we are interested in learning a vector of parameters $\boldsymbol{\phi}\in\mathbb{R}^{d}$. An estimator $\hat{\boldsymbol{\phi}}$ of the parameter $\boldsymbol{\phi}$ is said to be unbiased if the estimation is correct in expectation, i.e., $\mathbb{E}\left[\hat{\boldsymbol{\phi}}\right]=\boldsymbol{\phi}$. The Cramér-Rao bound limits the precision of any unbiased estimator:
\begin{equation}
\textrm{Cov}\left[\hat{\boldsymbol{\phi}}\right]\succcurlyeq\frac{1}{n}I_{\mathcal{M}}(\boldsymbol{\phi})^{-1}\succcurlyeq\frac{1}{n}\mathcal{F}(\boldsymbol{\phi})^{-1}.\label{eq:Cramer_Rao}
\end{equation}
Here, $\textrm{Cov}\left[\cdot\right]$ denotes the covariance matrix, $n$ is the number of observations, $I_{\mathcal{M}}(\boldsymbol{\phi})$ is the classical Fisher information and $\mathcal{F}(\boldsymbol{\phi})$ denotes the quantum Fisher information. Note that $\textrm{Cov}\left[\hat{\boldsymbol{\phi}}\right]\propto\frac{1}{n}$: the variance of the estimate can be reduced by repeating the same experiment $n$ times. This $\frac{1}{n}$ scaling in the number of probes used is known as the standard quantum limit. Using quantum resources such as entanglement, an experiment with an overall budget of $\nu=nN$ qubits, grouped into $n$ repetitions of $N$ entangled probes, can achieve a scaling of $\frac{1}{nN^{2}}$ instead of $\frac{1}{nN}$, provided that the sample size $n$ is still sufficiently large. This $\frac{1}{nN^{2}}$ scaling is known as the Heisenberg limit [16].
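A minimal numerical sketch of the single-parameter case (purely illustrative, not from the review): it assumes the probe state $\cos(\phi/2)\vert0\rangle+\sin(\phi/2)\vert1\rangle$ measured in the computational basis, for which the classical Fisher information equals $1$, and a simple plug-in estimator whose empirical variance approaches the Cramér-Rao limit $1/n$.

import numpy as np

# Illustrative only: estimate phi from computational-basis measurements of
# |psi(phi)> = cos(phi/2)|0> + sin(phi/2)|1>. Here I(phi) = 1, so the
# Cramér-Rao bound reads Var[phi_hat] >= 1/n for n shots.

rng = np.random.default_rng(seed=1)
phi_true = 0.8
n_shots = 2000        # measurement shots per experiment
n_repeats = 5000      # independent experiments used to estimate the variance

p1 = np.sin(phi_true / 2) ** 2                    # probability of outcome 1
ones = rng.binomial(n_shots, p1, size=n_repeats)  # counts of outcome 1
phi_hat = 2 * np.arcsin(np.sqrt(ones / n_shots))  # plug-in (maximum-likelihood) estimate

print("empirical variance:", phi_hat.var())       # approaches 1/n for large n
print("Cramér-Rao limit  :", 1 / n_shots)

Doubling n_shots halves the attainable variance, which is precisely the standard-quantum-limit scaling discussed above.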

Applications

Apart from the well-known applications of the classical and quantum Fisher information in quantum sensing and quantum control, several NISQ applications have recently emerged. In particular, variational quantum algorithms for quantum sensing have been designed around the classical or quantum Fisher information [17,18,19]. The quantum Fisher information has also been applied to the optimization of VQAs via quantum natural gradient (QNG) descent [20]. At iteration $t$, the parameter update via gradient descent is given by
\begin{equation}
\boldsymbol{\theta}^{(t+1)}=\boldsymbol{\theta}^{(t)}-\eta\nabla C\left(\boldsymbol{\theta}^{(t)}\right).\label{eq:gradient_descent}
\end{equation}
Here, $\eta$ is the learning rate and the cost function $C$ depends on $\boldsymbol{\theta}$. The idea of QNG is to use the inverse of the quantum Fisher information matrix to update the parameters in the following manner,
\begin{equation}
\boldsymbol{\theta}^{(t+1)}=\boldsymbol{\theta}^{(t)}-\eta\mathcal{F}\left(\boldsymbol{\theta}^{(t)}\right)^{-1}\nabla C\left(\boldsymbol{\theta}^{(t)}\right).\label{eq:quantum_natural_gradient}
\end{equation}
Optimization via QNG has been shown to be advantageous over plain gradient descent [20,21]. The measurement effort needed to estimate the quantum Fisher information has been demonstrated to be asymptotically negligible compared to the overall resources required to train VQAs [21], so QNG-based training offers a lower total cost because it approaches the optimum faster [21].
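A minimal sketch of the two update rules is given below; the functions `cost_gradient` and `quantum_fisher_matrix` are hypothetical placeholders for routines that would be evaluated on the quantum device (e.g., via parameter-shift rules) and are not defined in the review.

import numpy as np

def gradient_descent_step(theta, cost_gradient, eta=0.05):
    """Plain gradient descent update."""
    return theta - eta * cost_gradient(theta)

def qng_step(theta, cost_gradient, quantum_fisher_matrix, eta=0.05, reg=1e-4):
    """Quantum natural gradient update."""
    F = quantum_fisher_matrix(theta)
    # Regularize before inverting: in practice the QFI matrix is often
    # singular or ill-conditioned for over-parameterized circuits.
    F_reg = F + reg * np.eye(theta.size)
    natural_gradient = np.linalg.solve(F_reg, cost_gradient(theta))
    return theta - eta * natural_gradient

In practice, block-diagonal or diagonal approximations of the quantum Fisher information matrix are often used to reduce the estimation cost [20].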

The classical Fisher information has recently been applied to understand the trainability and expressibility of quantum neural networks [22]. Using the spectrum of the quantum Fisher information matrix, a recent approach suggests methods to design expressible and trainable parametric quantum circuits [23]. The quantum Fisher information matrix has further been used in large-scale quantum machine learning based on kernel methods [24]. Besides the QNG, the quantum Fisher information matrix can also be used to adaptively determine the learning rate of gradient descent and thereby boost the training of VQAs [25]. A special type of parametric quantum circuit, called the natural parametric quantum circuit, has a trivial quantum Fisher information matrix $\mathcal{F}=I$, with $I$ the identity matrix, at a particular set of parameters; this improves training and enhances accuracy for multi-parameter quantum sensing [26].

Outlook

The classical and quantum Fisher information will act as lenses through which a more in-depth understanding of NISQ devices can emerge. It is envisaged that they will also be used in error mitigation and benchmarking tasks. Furthermore, it would be intriguing to develop more efficient methods for approximating the quantum Fisher information.


References

[1] John Preskill, Quantum Computing in the NISQ era and beyond, Quantum 2, 79 (2018).
https://doi.org/10.22331/q-2018-08-06-79

[2] Kishor Bharti, Alba Cervera-Lierta, Thi Ha Kyaw, Tobias Haug, Sumner Alperin-Lea, Abhinav Anand, Matthias Degroote, Hermanni Heimonen, Jakob S. Kottmann, Tim Menke, Wai-Keong Mok, Sukin Sim, Leong-Chuan Kwek, and Alán Aspuru-Guzik, Noisy intermediate-scale quantum (NISQ) algorithms, arXiv:2101.08448 [quant-ph] (2021).
arXiv:2101.08448

[3] Marco Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles, Variational quantum algorithms, Nature Reviews Physics 3, 625 (2021).
https://doi.org/10.1038/s42254-021-00348-9

[4] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J. Love, Alán Aspuru-Guzik, and Jeremy L. O’Brien, A variational eigenvalue solver on a photonic quantum processor, Nature Communications 5, 4213 (2014).
https://doi.org/10.1038/ncomms5213

[5] Jarrod R. McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik, The theory of variational hybrid quantum-classical algorithms, New Journal of Physics 18, 023023 (2016).
https://doi.org/10.1088/1367-2630/18/2/023023

[6] Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M. Chow, and Jay M. Gambetta, Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets, Nature 549, 242 (2017).
https://doi.org/10.1038/nature23879

[7] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann, A quantum approximate optimization algorithm, arXiv:1411.4028 [quant-ph] (2014).
arXiv:1411.4028

[8] Edward Farhi and Aram W. Harrow, Quantum supremacy through the quantum approximate optimization algorithm, arXiv:1602.07674 [quant-ph] (2016).
arXiv:1602.07674

[9] Johannes Jakob Meyer, Fisher Information in Noisy Intermediate-Scale Quantum Applications, Quantum 5, 539 (2021).
https://doi.org/10.22331/q-2021-09-09-539

[10] Yong-Xin Yao, Niladri Gomes, Feng Zhang, Cai-Zhuang Wang, Kai-Ming Ho, Thomas Iadecola, and Peter P. Orth, Adaptive variational quantum dynamics simulations, PRX Quantum 2, 030307 (2021).
https://doi.org/10.1103/PRXQuantum.2.030307

[11] David Wierichs, Josh Izaac, Cody Wang, and Cedric Yen-Yu Lin, General parameter-shift rules for quantum gradients, arXiv:2107.12390 [quant-ph] (2021).
arXiv:2107.12390

[12] Andrea Mari, Thomas R. Bromley, and Nathan Killoran, Estimating the gradient and higher-order derivatives on quantum hardware, Physical Review A 103, 012405 (2021).
https://doi.org/10.1103/PhysRevA.103.012405

[13] Ying Li and Simon C. Benjamin, Efficient variational quantum simulator incorporating active error minimization, Physical Review X 7, 021050 (2017).
https://doi.org/10.1103/PhysRevX.7.021050

[14] Xiao Yuan, Suguru Endo, Qi Zhao, Ying Li, and Simon C. Benjamin, Theory of variational quantum simulation, Quantum 3, 191 (2019).
https://doi.org/10.22331/q-2019-10-07-191

[15] Kosuke Mitarai and Keisuke Fujii, Methodology for replacing indirect measurements with direct measurements, Physical Review Research 1, 013006 (2019).
https://doi.org/10.1103/PhysRevResearch.1.013006

[16] John J. Bollinger, Wayne M. Itano, David J. Wineland, and Daniel J. Heinzen, Optimal frequency measurements with maximally correlated states, Physical Review A 54, R4649 (1996).
https://doi.org/10.1103/PhysRevA.54.R4649

[17] Bálint Koczor, Suguru Endo, Tyson Jones, Yuichiro Matsuzaki, and Simon C. Benjamin, Variational-state quantum metrology, New Journal of Physics 22, 083038 (2020).
https://doi.org/10.1088/1367-2630/ab965e

[18] Johannes Jakob Meyer, Johannes Borregaard, and Jens Eisert, A variational toolbox for quantum multi-parameter estimation, npj Quantum Information 7, 1 (2021).
https://doi.org/10.1038/s41534-021-00425-y

[19] Ziqi Ma, Pranav Gokhale, Tian-Xing Zheng, Sisi Zhou, Xiaofei Yu, Liang Jiang, Peter Maurer, and Frederic T. Chong, Adaptive circuit learning for quantum metrology, arXiv:2010.08702 [quant-ph] (2020).
arXiv:2010.08702

[20] James Stokes, Josh Izaac, Nathan Killoran, and Giuseppe Carleo, Quantum natural gradient, Quantum 4, 269 (2020).
https://doi.org/10.22331/q-2020-05-25-269

[21] Barnaby van Straaten and Bálint Koczor, Measurement cost of metric-aware variational quantum algorithms, PRX Quantum 2, 030324 (2021).
https://doi.org/10.1103/PRXQuantum.2.030324

[22] Amira Abbas, David Sutter, Christa Zoufal, Aurélien Lucchi, Alessio Figalli, and Stefan Woerner, The power of quantum neural networks, Nature Computational Science 1, 403 (2021).
https://doi.org/10.1038/s43588-021-00084-1

[23] Tobias Haug, Kishor Bharti, and M. S. Kim, Capacity and quantum geometry of parametrized quantum circuits, arXiv:2102.01659 [quant-ph] (2021a).
arXiv:2102.01659

[24] Tobias Haug, Chris N. Self, and M. S. Kim, Large-scale quantum machine learning, arXiv:2108.01039 [quant-ph] (2021b).
arXiv:2108.01039

[25] Tobias Haug and M. S. Kim, Optimal training of variational quantum algorithms without barren plateaus, arXiv:2104.14543 [quant-ph] (2021a).
arXiv:2104.14543

[26] Tobias Haug and M. S. Kim, Natural parameterized quantum circuit, arXiv:2107.14063 [quant-ph] (2021b).
arXiv:2107.14063
