# Approximating Hamiltonian dynamics with the Nyström method

Alessandro Rudi1, Leonard Wossnig2,3, Carlo Ciliberto2, Andrea Rocchetto2,4,5, Massimiliano Pontil6, and Simone Severini2

1INRIA - Sierra project team, Paris, France
2Department of Computer Science, University College London, London, United Kingdom
3Rahko Ltd., London, United Kingdom
4Department of Computer Science, University of Texas at Austin, Austin, United States
5Department of Computer Science, University of Oxford, Oxford, United Kingdom
6Computational Statistics and Machine Learning, IIT, Genoa, Italy

### Abstract

Simulating the time evolution of quantum mechanical systems is BQP-hard and is expected to be one of the foremost applications of quantum computers. We consider classical algorithms for approximating Hamiltonian dynamics using subsampling methods from randomized numerical linear algebra. We derive a simulation technique whose runtime scales polynomially in the number of qubits and in the Frobenius norm of the Hamiltonian. As an immediate application, we show that sample-based quantum simulation, a type of evolution where the Hamiltonian is a density matrix, can be efficiently simulated classically under specific structural conditions. Our main technical contribution is a randomized algorithm for approximating Hermitian matrix exponentials. The proof leverages a low-rank, symmetric approximation obtained via the Nyström method. Our results suggest that, under strong sampling assumptions, there exist classical poly-logarithmic-time simulations of quantum computations.
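To illustrate the core idea, the following is a minimal sketch of a plain Nyström low-rank approximation of a positive semidefinite Hermitian matrix, used to apply a matrix exponential to a state vector. The matrix sizes, the synthetic fast-decaying spectrum, and the uniform column sampling are assumptions chosen for the demo; the paper's actual algorithm and its error guarantees differ in detail (for instance, it uses norm-based sampling and proves explicit bounds).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, t = 200, 40, 1.0              # dimension, sampled columns, evolution time

# Synthetic PSD Hamiltonian with fast spectral decay (low effective rank).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.exp(-0.3 * np.arange(n))
H = (Q * lam) @ Q.T

# Nyström approximation from m uniformly sampled columns:
#   H_nys = C W^+ C^T,  with C = H[:, S] and W = H[S, S].
S = rng.choice(n, size=m, replace=False)
C, W = H[:, S], H[np.ix_(S, S)]

# Factor H_nys = B B^T via a pseudo-inverse square root of W.
w, V = np.linalg.eigh(W)
keep = w > 1e-10 * w.max()
B = C @ ((V[:, keep] / np.sqrt(w[keep])) @ V[:, keep].T)

# An SVD of B gives H_nys = U diag(s^2) U^T, so the exponential acts in a
# rank-m subspace:  e^{-i H_nys t} psi = psi + U (e^{-i s^2 t} - 1) U^T psi.
U, s, _ = np.linalg.svd(B, full_matrices=False)
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)
psi_out = psi + U @ ((np.exp(-1j * s**2 * t) - 1.0) * (U.T @ psi))

# Reference evolution from the known eigendecomposition of H.
psi_exact = Q @ (np.exp(-1j * lam * t) * (Q.T @ psi))
print(np.linalg.norm(psi_out - psi_exact))  # small when the spectrum decays fast
```

Note that the approximate evolution is still exactly unitary (H_nys is Hermitian), so the state norm is preserved; the approximation error in the state is controlled by t times the spectral-norm error of the Nyström approximation.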

