Towards real-world implementations of quantum machine learning

This is a Perspective on "Quantum Deep Hedging" by El Amine Cherrat, Snehal Raj, Iordanis Kerenidis, Abhishek Shekhar, Ben Wood, Jon Dee, Shouvanik Chakrabarti, Richard Chen, Dylan Herman, Shaohan Hu, Pierre Minssen, Ruslan Shaydulin, Yue Sun, Romina Yalovetzky, and Marco Pistoia, published in Quantum 7, 1191 (2023).

By Junyu Liu (Pritzker School of Molecular Engineering, The University of Chicago, Chicago, IL 60637, USA, Department of Computer Science, The University of Chicago, Chicago, IL 60637, USA, Kadanoff Center for Theoretical Physics, The University of Chicago, Chicago, IL 60637, USA, qBraid Co., Chicago, IL 60615, USA, and SeQure, Chicago, IL 60615, USA).

Quantum machine learning, in which algorithms akin to classical machine learning run on quantum devices, is anticipated to be a flagship application of quantum technologies. In the short term, experiments and benchmarks [1,2,3,4,5,6,7,8,9,10] indicate that quantum algorithms related to machine learning may offer solutions to specific challenges across a range of fields, including quantum physics [11], quantum chemistry [1], optimization [2,12], deep learning [13,14,15], and sensing [16,17], among other directions. In the long term, theoretical frameworks for various quantum machine-learning algorithms suggest potential speedups over their classical counterparts [18,19,20], contingent on certain conditions related to quantum hardware. These conditions include the likely integration of quantum error correction and the development of efficient interfaces between classical and quantum processors [21].

It is a well-founded notion that quantum computers are capable of addressing problems that are inherently quantum in nature, a concept stemming from Richard Feynman's original proposition on the quantum simulation of physical processes [22]. This raises the intriguing question of whether quantum algorithms could also be beneficial for other types of tasks. Quantum machine-learning algorithms are prime examples of this potential, as they relate to numerous real-world problems, including those in finance. Notably, a recent paper [23] highlights significant progress, discussing quantum-enhanced hedging and the application of reinforcement-learning algorithms to actual financial markets.

It can be difficult to establish, using computational complexity theory, whether a quantum algorithm truly holds an advantage over all classical alternatives [24]. Similarly, many classical machine-learning algorithms in practical use lack provable guarantees. For example, the theoretical underpinnings of the emergent capabilities of advanced classical large language models, such as GPT-3 and its successors, remain an unresolved question [25], despite their considerable practical success. Consequently, some argue that rather than focusing solely on demonstrating computational superiority, it might be equally valuable to prioritize the development of end-to-end applications [26]. This approach involves hands-on experimentation with machine-learning models, either on actual machines or through simulations, to ascertain any practical benefits or enhancements that quantum methods may offer. This philosophy aligns with the practical methodologies employed by our classical counterparts.

The paper [23] follows a similar spirit toward practical, real-world implementations of quantum machine learning. The underlying quantum neural-network architecture is similar to that of a series of previous works, which include end-to-end data uploading and training [27,28,29]. The authors argue, first, that the architecture itself is computationally efficient: with certain locality designs, it can avoid the so-called barren-plateau problem, as gradients vanish only polynomially rather than exponentially in the system size [4,9]. Second, the architecture has a clear classical counterpart, akin to classical orthogonal neural networks. Finally, the quantum architecture appears to match the performance of its classical counterpart with an equal or smaller number of training parameters as the number of qubits increases. This experimental observation might be due to the nature of high-dimensional Hilbert spaces.
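To see why such architectures have an orthogonal classical counterpart, recall that the circuits of [27] are built from two-qubit RBS gates, each of which acts as a planar rotation on the unary (Hamming-weight-1) subspace. A minimal numpy sketch, assuming this RBS construction (the gate schedule below is a simple hypothetical layout, not the authors' exact pyramid circuit):

```python
import numpy as np

def rbs_unary(n, i, theta):
    """Action of an RBS(theta) gate on neighbouring qubits (i, i+1),
    restricted to the unary subspace: a Givens rotation on R^n."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i:i + 2, i:i + 2] = [[c, s], [-s, c]]
    return g

n = 4
rng = np.random.default_rng(7)
# A pyramid circuit needs only n(n-1)/2 angles; here we simply apply a
# few sweeps of nearest-neighbour gates with random angles.
gates = [(i, rng.uniform(0.0, 2.0 * np.pi)) for _ in range(n) for i in range(n - 1)]

W = np.eye(n)
for i, theta in gates:
    W = rbs_unary(n, i, theta) @ W

# Any product of such rotations is orthogonal, so the effective weight
# matrix on unary-encoded data is orthogonal by construction.
assert np.allclose(W.T @ W, np.eye(n))
```

Because each RBS gate preserves Hamming weight, training the circuit parameters amounts to training an orthogonal weight matrix, which is the sense in which the quantum and classical models are directly comparable.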

The authors have also developed a novel quantum-native reinforcement-learning method [30] for hedging using these architectures. There are reasons from classical machine learning to believe that, despite its complexity, distributional reinforcement learning can yield superior models compared to standard methods. In their work, the authors transform such classical pipelines into quantum-native ones. They argue that quantum computing may be inherently suitable for these tasks: quantum circuits provide mappings for exponentially large distributions, while measurements reveal only a subset of this information. They find this advantageous for the hedging task, where net jumps in stochastic trajectories can be naturally interpreted through the Hamming weights of the encoding. Finally, their end-to-end implementations are verified on Quantinuum processors with 16 qubits.
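The Hamming-weight idea above can be illustrated with a toy classical sketch (this stands in for a measured quantum register and is not the authors' circuit): if each of n qubits marks one up-move of a discretised binomial price path, then the Hamming weight of a measured bitstring determines the net jump of that path, so a single shot samples the terminal distribution that the 2^n-dimensional state encodes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16  # one bit per time step of the discretised trajectory

# Hypothetical stand-in for shots measured on an n-qubit register:
# each row is a bitstring whose 1-bits mark up-moves of the path.
shots = rng.integers(0, 2, size=(10_000, n))

hamming = shots.sum(axis=1)   # Hamming weight = number of up-moves
net_jump = 2 * hamming - n    # net displacement after n steps

# Each measurement thus samples the terminal-jump distribution directly,
# without ever writing out all 2^n trajectories.
print(net_jump.mean(), net_jump.std())
```

The design point is that the exponentially large set of paths lives in the state, while each measurement reveals only one sample, which is exactly what a distribution-matching training objective consumes.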

The paper [23] sets an excellent example of a real-world, end-to-end implementation of quantum machine-learning experiments, and it opens up many future directions for the field. First, further experiments are needed to compare and benchmark against state-of-the-art classical machine-learning models, revealing possible quantum speedups for specific tasks; optimizing multiple subroutines, particularly data encoding and transfer between classical and quantum processors, could be crucial. Second, exploring the impact of noise in these quantum experiments [31], and determining to what extent noise should be suppressed, mitigated, or corrected in near-term and future quantum technologies, could be an interesting direction. Lastly, it would be valuable to understand theoretically why quantum models could offer benefits over classical ones, such as requiring fewer training parameters. In sum, coupled with the development of fault-tolerant, large-scale quantum hardware, high-quality quantum machine-learning experiments, alongside robust theoretical research and informed conjectures, could advance the field of quantum machine learning and might contribute to real-world applications of quantum devices.

Acknowledgements

JL would like to thank various people for discussions about quantum technologies connecting to machine learning, especially Yuri Alexeev, Frederic Chong, Jens Eisert, Xu Han, Hsin-Yuan Huang, Hansheng Jiang, Liang Jiang, Risi Kondor, Jin-Peng Liu, Zi-Wen Liu, Antonio Mezzacapo, John Preskill, Max Zuo-Jun Shen, David Simmons-Duffin, Changchun Zhong, and Quntao Zhuang. JL is supported in part by International Business Machines (IBM) Quantum through the Chicago Quantum Exchange, and by the Pritzker School of Molecular Engineering at the University of Chicago through AFOSR MURI (FA9550-21-1-0209).

References

[1] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O'Brien, A variational eigenvalue solver on a photonic quantum processor, Nature Communications 5, 4213 (2014).
https://doi.org/10.1038/ncomms5213

[2] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann, A quantum approximate optimization algorithm, arXiv preprint arXiv:1411.4028 (2014).
https://doi.org/10.48550/arXiv.1411.4028

[3] Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik, The theory of variational hybrid quantum-classical algorithms, New Journal of Physics 18, 023023 (2016).
https://doi.org/10.1088/1367-2630/18/2/023023

[4] Jarrod R McClean, Sergio Boixo, Vadim N Smelyanskiy, Ryan Babbush, and Hartmut Neven, Barren plateaus in quantum neural network training landscapes, Nature Communications 9, 4812 (2018).
https://doi.org/10.1038/s41467-018-07090-4

[5] Sam McArdle, Suguru Endo, Alán Aspuru-Guzik, Simon C Benjamin, and Xiao Yuan, Quantum computational chemistry, Reviews of Modern Physics 92, 015003 (2020).
https://doi.org/10.1103/RevModPhys.92.015003

[6] Marco Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, et al., Variational quantum algorithms, Nature Reviews Physics 3, 625–644 (2021).
https://doi.org/10.1038/s42254-021-00348-9

[7] Junyu Liu, Francesco Tacchino, Jennifer R Glick, Liang Jiang, and Antonio Mezzacapo, Representation Learning via Quantum Neural Tangent Kernels, PRX Quantum 3, 030323 (2022).
https://doi.org/10.1103/PRXQuantum.3.030323

[8] Junyu Liu, Khadijeh Najafi, Kunal Sharma, Francesco Tacchino, Liang Jiang, and Antonio Mezzacapo, Analytic Theory for the Dynamics of Wide Quantum Neural Networks, Physical Review Letters 130, 150601 (2023).
https://doi.org/10.1103/PhysRevLett.130.150601

[9] Junyu Liu, Zexi Lin, and Liang Jiang, Laziness, Barren Plateau, and Noise in Machine Learning, arXiv preprint arXiv:2206.09313 (2022).
https://doi.org/10.48550/arXiv.2206.09313

[10] Junyu Liu, Minzhao Liu, Jin-Peng Liu, Ziyu Ye, Yunfei Wang, Yuri Alexeev, Jens Eisert, and Liang Jiang, Towards provably efficient quantum algorithms for large-scale machine-learning models, arXiv preprint arXiv:2303.03428 (2023).
https://doi.org/10.48550/arXiv.2303.03428

[11] Hsin-Yuan Huang, Michael Broughton, Jordan Cotler, Sitan Chen, Jerry Li, Masoud Mohseni, Hartmut Neven, Ryan Babbush, Richard Kueng, John Preskill, et al., Quantum advantage in learning from experiments, Science 376, 1182 (2022).
https://doi.org/10.1126/science.abn7293

[12] Sepehr Ebadi, Alexander Keesling, Madelyn Cain, Tout T Wang, Harry Levine, Dolev Bluvstein, Giulia Semeghini, Ahmed Omran, J-G Liu, Rhine Samajdar, et al., Quantum optimization of maximum independent set using Rydberg atom arrays, Science 376, 1209 (2022).
https://doi.org/10.1126/science.abo6587

[13] Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd, Quantum machine learning, Nature 549, 195 (2017).
https://doi.org/10.1038/nature23474

[14] Vojtěch Havlíček, Antonio D Córcoles, Kristan Temme, Aram W Harrow, Abhinav Kandala, Jerry M Chow, and Jay M Gambetta, Supervised learning with quantum-enhanced feature spaces, Nature 567, 209 (2019).
https://doi.org/10.1038/s41586-019-0980-2

[15] Amira Abbas, David Sutter, Christa Zoufal, Aurélien Lucchi, Alessio Figalli, and Stefan Woerner, The power of quantum neural networks, Nature Computational Science 1, 403 (2021).
https://doi.org/10.1038/s43588-021-00084-1

[16] Quntao Zhuang and Zheshen Zhang, Physical-Layer Supervised Learning Assisted by an Entangled Sensor Network, Physical Review X 9, 041023 (2019).
https://doi.org/10.1103/PhysRevX.9.041023

[17] Yi Xia, Wei Li, Quntao Zhuang, and Zheshen Zhang, Quantum-Enhanced Data Classification with a Variational Entangled Sensor Network, Physical Review X 11, 021047 (2021).
https://doi.org/10.1103/PhysRevX.11.021047

[18] Aram W Harrow, Avinatan Hassidim, and Seth Lloyd, Quantum Algorithm for Linear Systems of Equations, Physical Review Letters 103, 150502 (2009).
https://doi.org/10.1103/PhysRevLett.103.150502

[19] Yunchao Liu, Srinivasan Arunachalam, and Kristan Temme, A rigorous and robust quantum speed-up in supervised machine learning, Nature Physics 17, 1013-1017 (2021).
https://doi.org/10.1038/s41567-021-01287-z

[20] Jin-Peng Liu, Herman Øie Kolden, Hari K Krovi, Nuno F Loureiro, Konstantina Trivisa, and Andrew M Childs, Efficient quantum algorithm for dissipative nonlinear differential equations, Proceedings of the National Academy of Sciences 118, e2026805118 (2021).
https://doi.org/10.1073/pnas.2026805118

[21] Scott Aaronson, Read the fine print, Nature Physics 11, 291 (2015).
https://doi.org/10.1038/nphys3272

[22] Richard P. Feynman, Simulating physics with computers, International Journal of Theoretical Physics 21, 467-488 (1982).
https://doi.org/10.1007/BF02650179

[23] El Amine Cherrat, Snehal Raj, Iordanis Kerenidis, Abhishek Shekhar, Ben Wood, Jon Dee, Shouvanik Chakrabarti, Richard Chen, Dylan Herman, Shaohan Hu, et al., Quantum deep hedging, Quantum 7, 1191 (2023).
https://doi.org/10.22331/q-2023-11-29-1191

[24] Hsin-Yuan Huang, Richard Kueng, and John Preskill, Information-Theoretic Bounds on Quantum Advantage in Machine Learning, Physical Review Letters 126, 190505 (2021).
https://doi.org/10.1103/PhysRevLett.126.190505

[25] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al., Language Models are Few-Shot Learners, arXiv preprint arXiv:2005.14165 (2020).
https://doi.org/10.48550/arXiv.2005.14165

[26] Maria Schuld and Nathan Killoran, Is Quantum Advantage the Right Goal for Quantum Machine Learning?, PRX Quantum 3, 030101 (2022).
https://doi.org/10.1103/PRXQuantum.3.030101

[27] Iordanis Kerenidis, Jonas Landman, and Natansh Mathur, Classical and Quantum Algorithms for Orthogonal Neural Networks, arXiv preprint arXiv:2106.07198 (2021).
https://doi.org/10.48550/arXiv.2106.07198

[28] Natansh Mathur, Jonas Landman, Yun Yvonna Li, Martin Strahm, Skander Kazdaghli, Anupam Prakash, and Iordanis Kerenidis, Medical image classification via quantum neural networks, arXiv preprint arXiv:2109.01831 (2021).
https://doi.org/10.48550/arXiv.2109.01831

[29] El Amine Cherrat, Iordanis Kerenidis, Natansh Mathur, Jonas Landman, Martin Strahm, and Yun Yvonna Li, Quantum Vision Transformers, arXiv preprint arXiv:2209.08167 (2022).
https://doi.org/10.48550/arXiv.2209.08167

[30] Hansheng Jiang, Zuo-Jun Max Shen, and Junyu Liu, Quantum Computing Methods for Supply Chain Management, in 2022 IEEE/ACM 7th Symposium on Edge Computing (SEC) (IEEE, 2022) pp. 400–405.
https://doi.org/10.1109/SEC54971.2022.00059

[31] Junyu Liu, Frederik Wilde, Antonio Anna Mele, Liang Jiang, and Jens Eisert, Noise can be helpful for variational quantum algorithms, arXiv preprint arXiv:2210.06723 (2022).
https://doi.org/10.48550/arXiv.2210.06723
