Scalable Neural Network Decoders for Higher Dimensional Quantum Codes

Nikolas P. Breuckmann1 and Xiaotong Ni1,2

1Institute for Quantum Information, RWTH Aachen University, Germany
2Max Planck Institute of Quantum Optics, Germany


Machine learning has the potential to become an important tool in quantum error correction, as it allows the decoder to adapt to the error distribution of a quantum chip. An additional motivation for using neural networks is that they can be evaluated on dedicated hardware that is very fast and consumes little power. Machine learning has previously been applied to decode the surface code. However, these approaches are not scalable, since training has to be redone for every system size, which becomes increasingly difficult. In this work, the existence of local decoders for higher-dimensional codes leads us to use a low-depth convolutional neural network that locally assigns a likelihood of error to each qubit. For noiseless syndrome measurements, numerical simulations show that the decoder has a threshold of around 7.1% when applied to the 4D toric code. When the syndrome measurements are noisy, the decoder performs better for larger code sizes when the error probability is low. We also give a theoretical and numerical analysis showing how a convolutional neural network differs from the 1-nearest-neighbor algorithm, a baseline machine learning method.
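To illustrate the kind of local decoding described above, the following sketch applies a single convolutional layer with periodic (toric) boundary conditions to a 2D syndrome lattice and outputs a per-site error likelihood via a sigmoid. This is a minimal toy in NumPy, not the authors' architecture: the kernel weights, bias, lattice size, and 2D (rather than 4D) setting are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_periodic(syndrome, kernel, bias=0.0):
    """One convolutional layer on a lattice with periodic (toric) boundaries."""
    L = syndrome.shape[0]
    r = kernel.shape[0] // 2
    out = np.zeros_like(syndrome, dtype=float)
    for i in range(L):
        for j in range(L):
            # gather the local neighbourhood, wrapping around the torus
            rows = [(i + di) % L for di in range(-r, r + 1)]
            cols = [(j + dj) % L for dj in range(-r, r + 1)]
            patch = syndrome[np.ix_(rows, cols)]
            out[i, j] = np.sum(patch * kernel) + bias
    return out

# toy syndrome on an 8x8 periodic lattice with two violated checks
L = 8
syndrome = np.zeros((L, L))
syndrome[2, 3] = 1.0
syndrome[2, 4] = 1.0

kernel = np.ones((3, 3)) / 9.0   # hypothetical (untrained) weights
probs = sigmoid(conv2d_periodic(syndrome, kernel, bias=-2.0))
```

Because the layer is purely local and shares its weights across the lattice, the same trained kernel can be applied to any lattice size, which is the property that makes this style of decoder scalable; sites near the violated checks receive a higher error likelihood than sites far away.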



