An improved quantum-inspired algorithm for linear regression

András Gilyén1, Zhao Song2, and Ewin Tang3

1Alfréd Rényi Institute of Mathematics
2Adobe Research
3University of Washington



We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters'09] for low-rank matrices [Wossnig, Zhao, and Prakash, Physical Review Letters'18], when the input matrix $A$ is stored in a data structure applicable for QRAM-based state preparation.

Namely, suppose we are given an $A \in \mathbb{C}^{m\times n}$ with minimum non-zero singular value $\sigma$ which supports certain efficient $\ell_2$-norm importance sampling queries, along with a $b \in \mathbb{C}^m$. Then, for some $x \in \mathbb{C}^n$ satisfying $\|x - A^+b\| \leq \varepsilon\|A^+b\|$, we can output a measurement of $|x\rangle$ in the computational basis and output an entry of $x$ with classical algorithms that run in $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^6}{\sigma^{12}\varepsilon^4}\big)$ and $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^2}{\sigma^8\varepsilon^4}\big)$ time, respectively. This improves on previous "quantum-inspired" algorithms in this line of research by at least a factor of $\frac{\|A\|^{16}}{\sigma^{16}\varepsilon^2}$ [Chia, Gilyén, Li, Lin, Tang, and Wang, STOC'20]. As a consequence, we show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting and related settings. Our work applies techniques from sketching algorithms and optimization to the quantum-inspired literature. Unlike earlier works, this is a promising avenue that could lead to feasible implementations of classical regression in a quantum-inspired setting, for comparison against future quantum computers.
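The $\ell_2$-norm importance sampling queries assumed above are typically supported by a binary-tree data structure over the squared entries of a vector (the classical analogue of the QRAM state-preparation structure). The following is a minimal illustrative sketch of such a structure, not the paper's exact implementation; class and method names are our own:

```python
import random

class SampleTree:
    """Complete binary tree over the squared entries of a vector v:
    leaves hold |v_i|^2, internal nodes hold the sum of their children.
    Supports O(log n) l2-norm importance sampling. Illustrative sketch
    of the QRAM-style access model, not the paper's implementation."""

    def __init__(self, v):
        self.n = len(v)
        size = 1
        while size < self.n:
            size *= 2
        self.size = size
        self.tree = [0.0] * (2 * size)
        for i, x in enumerate(v):
            self.tree[size + i] = abs(x) ** 2
        for i in range(size - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def norm_sq(self):
        # The root stores ||v||^2.
        return self.tree[1]

    def sample(self):
        """Draw index i with probability |v_i|^2 / ||v||^2 by descending
        from the root, branching left or right with probability
        proportional to the subtree sums."""
        i = 1
        r = random.random() * self.tree[1]
        while i < self.size:
            left = self.tree[2 * i]
            if r < left:
                i = 2 * i
            else:
                r -= left
                i = 2 * i + 1
        return i - self.size
```

For a matrix $A$, one keeps such a tree over each row plus one tree over the vector of row norms, which yields the row- and entry-sampling queries the algorithm needs.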

In this work, we combine two powerful ideas: stochastic gradient descent and the "quantum-inspired" access model of vectors and matrices. Algorithms in this access model have been previously used to demonstrate that certain quantum machine learning algorithms cannot give exponential speedups, and faster algorithms in this model imply stronger barriers to quantum speedup. So, to analyze the potential for quantum speedup in machine learning, we study the problem of linear regression, i.e., solving a linear system $Ax=b$. We notice that, in the quantum-inspired setting, the quantum-like operations we can perform enable us to efficiently sample gradients of $f(x) = \tfrac12\|Ax-b\|^2$ when the matrix $A$ is low rank. This fits nicely with stochastic gradient descent techniques, so we use them to develop algorithms much faster than prior work, which was bottlenecked by its use of costly singular value decompositions. We break through this barrier and obtain a quartic dependence on precision, making significant progress towards practically applicable quantum-inspired algorithms. Later, Shao and Montanaro showed that in certain cases, variants of stochastic gradient descent can run even faster, with quadratic dependence on precision.
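To make the connection between $\ell_2$-norm sampling and stochastic gradient methods concrete, here is a sketch of randomized Kaczmarz, an importance-sampled SGD method for least squares (cf. Strohmer–Vershynin and Needell–Srebro–Ward in the references). Rows are sampled with probability $\|A_i\|^2/\|A\|_{\mathrm{F}}^2$, exactly the kind of query the quantum-inspired access model provides. This is an illustrative sketch under those assumptions, not the paper's algorithm:

```python
import numpy as np

def kaczmarz_regression(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.
    Each step samples row i with probability ||A_i||^2 / ||A||_F^2
    and projects the iterate onto the hyperplane A_i . x = b_i;
    with this sampling it is a weighted SGD step on (1/2)||Ax - b||^2.
    Illustrative sketch, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)   # ||A_i||^2 for each row
    probs = row_norms / row_norms.sum()       # l2-norm importance distribution
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

The key point mirrored from the summary above: the method only ever touches one sampled row per step, so with sampling access it never needs a singular value decomposition of $A$.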

References

[1] John Preskill. ``Quantum Computing in the NISQ era and beyond''. Quantum 2, 79 (2018). arXiv:1801.00862.

[2] Andrew M Childs. ``Equation solving by simulation''. Nature Physics 5, 861–861 (2009).

[3] Scott Aaronson. ``Read the fine print''. Nature Physics 11, 291–293 (2015).

[4] Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. ``Quantum machine learning''. Nature 549, 195–202 (2017). arXiv:1611.09347.

[5] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. ``Quantum algorithm for linear systems of equations''. Physical Review Letters 103, 150502 (2009). arXiv:0811.3171.

[6] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. ``Quantum singular value transformation and beyond: Exponential improvements for quantum matrix arithmetics''. In Proceedings of the 51st ACM Symposium on the Theory of Computing (STOC). Pages 193–204. (2019). arXiv:1806.01838.

[7] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. ``Quantum random access memory''. Physical Review Letters 100, 160501 (2008). arXiv:0708.1879.

[8] Anupam Prakash. ``Quantum algorithms for linear algebra and machine learning''. PhD thesis. University of California at Berkeley. (2014). url: Pubs/TechRpts/2014/EECS-2014-211.pdf.

[9] Nai-Hui Chia, András Gilyén, Tongyang Li, Han-Hsuan Lin, Ewin Tang, and Chunhao Wang. ``Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning''. In Proceedings of the 52nd ACM Symposium on the Theory of Computing (STOC). Page 387–400. (2020). arXiv:1910.06151.

[10] Iordanis Kerenidis and Anupam Prakash. ``Quantum recommendation systems''. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS). Pages 49:1–49:21. (2017). arXiv:1603.08675.

[11] Ewin Tang. ``A quantum-inspired classical algorithm for recommendation systems''. In Proceedings of the 51st ACM Symposium on the Theory of Computing (STOC). Pages 217–228. (2019). arXiv:1807.04271.

[12] Iordanis Kerenidis, Jonas Landman, Alessandro Luongo, and Anupam Prakash. ``q-means: A quantum algorithm for unsupervised machine learning''. In Advances in Neural Information Processing Systems. Volume 32. (2019). arXiv:1812.03584.

[13] Iordanis Kerenidis and Anupam Prakash. ``Quantum gradient descent for linear systems and least squares''. Physical Review A 101, 022316 (2020). arXiv:1704.04992.

[14] Danial Dervovic, Mark Herbster, Peter Mountney, Simone Severini, Naïri Usher, and Leonard Wossnig. ``Quantum linear systems algorithms: a primer'' (2018). arXiv:1802.08227.

[15] Leonard Wossnig, Zhikuan Zhao, and Anupam Prakash. ``Quantum linear system algorithm for dense matrices''. Physical Review Letters 120, 050502 (2018). arXiv:1704.06174.

[16] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. ``Quantum support vector machine for big data classification''. Physical Review Letters 113, 130503 (2014). arXiv:1307.0471.

[17] Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. ``The power of block-encoded matrix powers: Improved regression techniques via faster Hamiltonian simulation''. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming (ICALP). Pages 33:1–33:14. (2019). arXiv:1804.01973.

[18] Nai-Hui Chia, András Gilyén, Han-Hsuan Lin, Seth Lloyd, Ewin Tang, and Chunhao Wang. ``Quantum-inspired algorithms for solving low-rank linear equation systems with logarithmic dependence on the dimension''. In Proceedings of the 31st International Symposium on Algorithms and Computation (ISAAC). Pages 47:1–47:17. (2020). arXiv:1811.04852 and 1811.04909 (merged).

[19] Juan Miguel Arrazola, Alain Delgado, Bhaskar Roy Bardhan, and Seth Lloyd. ``Quantum-inspired algorithms in practice''. Quantum 4, 307 (2020). arXiv:1905.10415.

[20] Ewin Tang. ``Quantum principal component analysis only achieves an exponential speedup because of its state preparation assumptions''. Physical Review Letters 127, 060503 (2021). arXiv:1811.00414.

[21] Carlo Ciliberto, Mark Herbster, Alessandro Davide Ialongo, Massimiliano Pontil, Andrea Rocchetto, Simone Severini, and Leonard Wossnig. ``Quantum machine learning: a classical perspective''. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474, 20170551 (2018). arXiv:1707.08561.

[22] Srinivasan Arunachalam, Vlad Gheorghiu, Tomas Jochym-O'Connor, Michele Mosca, and Priyaa Varshinee Srinivasan. ``On the robustness of bucket brigade quantum RAM''. New Journal of Physics 17, 123010 (2015). arXiv:1502.03450.

[23] Neha Gupta and Aaron Sidford. ``Exploiting numerical sparsity for efficient learning: Faster eigenvector computation and regression''. In Advances in Neural Information Processing Systems. Pages 5269–5278. (2018). arXiv:1811.10866.

[24] Yair Carmon, Yujia Jin, Aaron Sidford, and Kevin Tian. ``Coordinate methods for matrix games''. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS). Pages 283–293. IEEE (2020). arXiv:2009.08447.

[25] Sébastien Bubeck. ``Convex optimization: Algorithms and complexity''. Foundations and Trends in Machine Learning 8, 231–357 (2015). arXiv:1405.4980.

[26] Roy Frostig, Rong Ge, Sham Kakade, and Aaron Sidford. ``Un-regularizing: Approximate proximal point and faster stochastic algorithms for empirical risk minimization''. In International Conference on Machine Learning. Pages 2540–2548. (2015). arXiv:1506.07512.

[27] Francis Bach and Eric Moulines. ``Non-asymptotic analysis of stochastic approximation algorithms for machine learning''. In Advances in Neural Information Processing Systems. Pages 451–459. (2011). url: http://paper/4316-non-asymptotic-analysis-of-stochastic-approximation-algorithms-for-machine-learning.pdf.

[28] András Gilyén, Seth Lloyd, and Ewin Tang. ``Quantum-inspired low-rank stochastic regression with logarithmic dependence on the dimension'' (2018). arXiv:1811.04909.

[29] Nai-Hui Chia, Han-Hsuan Lin, and Chunhao Wang. ``Quantum-inspired sublinear classical algorithms for solving low-rank linear systems'' (2018). arXiv:1811.04852.

[30] David P. Woodruff. ``Sketching as a tool for numerical linear algebra''. Foundations and Trends in Theoretical Computer Science 10, 1–157 (2014).

[31] Nadiia Chepurko, Kenneth L. Clarkson, Lior Horesh, Honghao Lin, and David P. Woodruff. ``Quantum-inspired algorithms from randomized numerical linear algebra'' (2020). arXiv:2011.04125.

[32] Changpeng Shao and Ashley Montanaro. ``Faster quantum-inspired algorithms for solving linear systems'' (2021). arXiv:2103.10309.

[33] Thomas Strohmer and Roman Vershynin. ``A randomized Kaczmarz algorithm with exponential convergence''. Journal of Fourier Analysis and Applications 15, 262–278 (2008). arXiv:math/​0702226.

[34] Deanna Needell, Nathan Srebro, and Rachel Ward. ``Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm''. Mathematical Programming 155, 549–573 (2015). arXiv:1310.5715.

[35] Petros Drineas, Ravi Kannan, and Michael W Mahoney. ``Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix''. SIAM Journal on Computing 36, 158–183 (2006).

Cited by

[1] Xiao-Ming Zhang, Tongyang Li, and Xiao Yuan, "Quantum State Preparation with Optimal Circuit Depth: Implementations and Applications", arXiv:2201.11495, Physical Review Letters 129 23, 230504 (2022).

[2] Steven Herbert, "Quantum computing for data-centric engineering and science", Data-Centric Engineering 3, e36 (2022).

[3] Benjamin A. Cordier, Nicolas P. D. Sawaya, Gian G. Guerreschi, and Shannon K. McWeeney, "Biology and medicine in the landscape of quantum advantages", arXiv:2112.00760.

[4] Quynh T. Nguyen, Bobak T. Kiani, and Seth Lloyd, "Quantum algorithm for dense and full-rank kernels using hierarchical matrices", arXiv:2201.11329.

[5] Adam Bouland, Wim van Dam, Hamed Joorati, Iordanis Kerenidis, and Anupam Prakash, "Prospects and challenges of quantum finance", arXiv:2011.06492.

[6] Amirhossein Nourbakhsh, Mark Nicholas Jones, Kaur Kristjuhan, Deborah Carberry, Jay Karon, Christian Beenfeldt, Kyarash Shahriari, Martin P. Andersson, Mojgan A. Jadidi, and Seyed Soheil Mansouri, "Quantum Computing: Fundamentals, Trends and Perspectives for Chemical and Biochemical Engineers", arXiv:2201.02823.

[7] Bujiao Wu, Maharshi Ray, Liming Zhao, Xiaoming Sun, and Patrick Rebentrost, "Quantum-classical algorithms for skewed linear systems with an optimized Hadamard test", Physical Review A 103 4, 042422 (2021).

[8] Franco D. Albareti, Thomas Ankenbrand, Denis Bieri, Esther Hänggi, Damian Lötscher, Stefan Stettler, and Marcel Schöngens, "A Structured Survey of Quantum Computing for the Financial Industry", arXiv:2204.10026.

[9] Patrick Rall, "Faster Coherent Quantum Algorithms for Phase, Energy, and Amplitude Estimation", arXiv:2103.09717.

[10] Sevag Gharibian and François Le Gall, "Dequantizing the Quantum Singular Value Transformation: Hardness and Applications to Quantum Chemistry and the Quantum PCP Conjecture", arXiv:2111.09079.

[11] Shantanav Chakraborty, Aditya Morolia, and Anurudh Peduri, "Quantum Regularized Least Squares", arXiv:2206.13143.

[12] Iordanis Kerenidis and Anupam Prakash, "Quantum machine learning with subspace states", arXiv:2202.00054.

[13] Changpeng Shao and Ashley Montanaro, "Faster quantum-inspired algorithms for solving linear systems", arXiv:2103.10309.

[14] Chenyi Zhang, Jiaqi Leng, and Tongyang Li, "Quantum algorithms for escaping from saddle points", arXiv:2007.10253.

[15] Daniel Chen, Yekun Xu, Betis Baheri, Samuel A. Stein, Chuan Bi, Ying Mao, Qiang Quan, and Shuai Xu, "Quantum-Inspired Classical Algorithm for Slow Feature Analysis", arXiv:2012.00824.

[16] Kazuya Kaneko, Koichi Miyamoto, Naoyuki Takeda, and Kazuyoshi Yoshino, "Linear regression by quantum amplitude estimation and its extension to convex optimization", Physical Review A 104 2, 022430 (2021).

[17] Ebrahim Ardeshir-Larijani, "Parametrized Complexity of Quantum Inspired Algorithms", arXiv:2112.11686.

The above citations are from Crossref's cited-by service (last updated successfully 2022-12-08 09:20:15) and SAO/NASA ADS (last updated successfully 2022-12-08 09:20:16). The list may be incomplete as not all publishers provide suitable and complete citation data.