In quantum computing there is no such thing as a perfect qubit; interaction with the environment is inevitable, and quantum error correction must be used to ensure fault tolerance. Fortunately, the design of quantum codes can draw upon a vast suite of classical error correction techniques developed to support our global communication infrastructure. Of particular interest are low-density parity-check (LDPC) codes, a family of classical error correction protocols that can achieve performance close to the Shannon limit. Various modern communication technologies – including WiFi and 5G – use LDPC encodings. In the context of quantum computing, it is hoped that quantum LDPC codes will enable fault tolerance with much lower qubit overheads than are possible with the surface code.
Research into quantum LDPC codes is still in its infancy. Many prior studies have focused on their theoretical performance in the asymptotic limit. The paper Degenerate Quantum LDPC Codes With Good Finite Length Performance, recently published in Quantum, takes a more practical approach. First, the authors propose a solution to a long-standing problem regarding how to efficiently decode quantum LDPC codes. Second, new constructions are proposed that enable the discovery of high-rate quantum LDPC codes of a size suitable for near-term devices.
1 A belief propagation decoder for quantum codes
In both the classical and quantum setting, the essential principle underpinning error correction is that data is redundantly encoded across an expanded space of (qu)bits [2,3]. This redundancy affords the system extra degrees of freedom that can be exploited to detect errors via a series of parity measurements that yield a syndrome. The syndrome is then decoded to determine the best course of action to return the encoded information to its intended state. A particular challenge lies in finding efficient decoding algorithms to minimise the time-scale of the error correction cycle.
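To make the idea of a syndrome concrete, here is a minimal illustrative sketch (not taken from the paper): the classical three-bit repetition code, with a hypothetical parity-check matrix `H` whose rows each compare a pair of adjacent bits.

```python
# Illustrative sketch: syndrome extraction for the 3-bit repetition code.
H = [[1, 1, 0],
     [0, 1, 1]]  # parity-check matrix: each row is one parity check

def syndrome(H, word):
    """Compute the syndrome s = H * word (mod 2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

codeword = [0, 0, 0]           # a valid codeword: trivial syndrome
assert syndrome(H, codeword) == [0, 0]

corrupted = [0, 1, 0]          # a bit-flip on the middle bit
print(syndrome(H, corrupted))  # -> [1, 1]: both checks touching that bit fire
```

The decoder's job is precisely to invert this map: given `[1, 1]`, infer that the middle bit was the most likely culprit.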
In the paper, a new decoder for quantum LDPC codes is presented based on a modification of classical belief propagation (BP). The BP algorithm is commonly deployed in classical communication, and formulates decoding as a probabilistic inference problem. To efficiently calculate marginals, BP exploits knowledge of the inter-dependencies between bits and parity checks mapped out by a visual representation of the code known as a factor graph. In general, the BP algorithm can decode syndromes in time linearly proportional to the code length provided the following conditions hold:
- The code is described by a sparse factor graph. By definition, this is automatically the case for LDPC codes.
- The code’s factor graph is sufficiently loop free. In simple terms, this means that the code should not have too many cyclic dependencies between bits and parity checks.
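The shortest possible loops in a factor graph have length four, and correspond to a pair of parity checks that share two or more bits. As a small illustration (a sketch, not code from the paper), such 4-cycles can be detected directly from the parity-check matrix:

```python
from itertools import combinations

def has_four_cycle(H):
    """True if the factor (Tanner) graph of H contains a length-4 cycle,
    i.e. two parity checks whose supports overlap on two or more bits."""
    supports = [{j for j, v in enumerate(row) if v} for row in H]
    return any(len(a & b) >= 2 for a, b in combinations(supports, 2))

# Two checks both touching bits 1 and 2 -> a 4-cycle in the factor graph
H_loopy = [[1, 1, 1, 0],
           [0, 1, 1, 1]]
print(has_four_cycle(H_loopy))   # -> True

# Repetition-code checks overlap on at most one bit -> no 4-cycles
H_tree = [[1, 1, 0],
          [0, 1, 1]]
print(has_four_cycle(H_tree))    # -> False
```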
In the context of quantum LDPC codes, the second requirement is problematic due to degenerate quantum errors. This is an effect that is unique to quantum error correction, and arises due to the fact that qubits can encode information in a superposition over basis states. To illustrate this, consider the following logical qubit state
$$|\psi\rangle_L=\alpha|00\rangle + \beta|11\rangle,$$
where $\alpha$ and $\beta$ are two complex numbers satisfying $|\alpha|^2+|\beta|^2=1$. Now, imagine this state is subject to a phase-flip $Z_1$ so that it is transformed to
$$Z_1|\psi\rangle_L=\alpha|00\rangle - \beta|11\rangle.$$
To restore the corrupted state we have two options: 1) we can simply undo the phase-flip by applying a recovery $R=Z_1$; 2) we can apply a phase-flip on the second qubit $R=Z_2$, again restoring the original logical state
$$RZ_1|\psi\rangle_L=Z_2Z_1|\psi\rangle_L=\alpha|00\rangle + \beta|11\rangle.$$
From this we see that there are two degenerate recovery operations: both solutions have weight one and can be considered correct in that they restore the logical state.
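The degeneracy above is easy to verify numerically. The following sketch (purely illustrative, using equal amplitudes for concreteness) represents the two-qubit state as a map from basis strings to amplitudes and applies phase-flips directly:

```python
import math

# The logical state |psi>_L = alpha|00> + beta|11>, with alpha = beta for simplicity
alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)
psi = {"00": alpha, "11": beta}

def apply_Z(state, qubit):
    """Phase-flip Z on the given qubit: |b> -> (-1)^b |b>."""
    return {b: (-a if b[qubit] == "1" else a) for b, a in state.items()}

corrupted = apply_Z(psi, 0)          # the Z_1 error: |11> picks up a minus sign

# Two degenerate weight-one recoveries restore the same logical state:
assert apply_Z(corrupted, 0) == psi  # R = Z_1 undoes the error directly
assert apply_Z(corrupted, 1) == psi  # R = Z_2 works equally well
```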
On the factor graph representation of a quantum code, degenerate errors manifest themselves as loops that cause BP to fail. Fortunately, it is straightforward to verify whether the output of a BP decoding is valid. This makes it possible to combine the BP decoder with post-processing techniques to improve performance. In the paper, the authors propose the use of a post-processing method known as ordered statistics decoding (OSD). This is invoked when BP fails, and uses a matrix inversion method to find a unique decoding solution amongst the degenerate possibilities. To ensure that a low-weight solution is found, OSD uses the soft output of BP to select the best subset of qubits over which to perform the inversion. Numerical simulations of the BP-OSD decoder presented in the paper show that the OSD post-processing improves decoding performance by several orders of magnitude.
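The core of the zeroth-order OSD idea can be sketched in a few lines. The following is a simplified illustration, not the authors' implementation: columns of the parity-check matrix are ranked by the BP soft output (the estimated error probability of each bit), and Gaussian elimination over GF(2) then selects an invertible subset from the most likely columns and solves for a unique error consistent with the syndrome.

```python
def osd0(H, syndrome, prob):
    """Simplified OSD-0 sketch: solve H*e = s (mod 2) on the columns
    that BP deems most likely to be in error; all other bits are set to 0."""
    m, n = len(H), len(H[0])
    order = sorted(range(n), key=lambda j: -prob[j])   # most likely first
    A = [row[:] + [s] for row, s in zip(H, syndrome)]  # augmented matrix [H | s]
    pivots, r = [], 0
    for j in order:                                    # Gauss-Jordan over GF(2),
        if r == m:                                     # visiting columns in rank order
            break
        k = next((i for i in range(r, m) if A[i][j]), None)
        if k is None:
            continue
        A[r], A[k] = A[k], A[r]
        for i in range(m):
            if i != r and A[i][j]:
                A[i] = [x ^ y for x, y in zip(A[i], A[r])]
        pivots.append((r, j))
        r += 1
    e = [0] * n
    for i, j in pivots:                                # read off the unique solution
        e[j] = A[i][-1]
    return e

H = [[1, 1, 0],
     [0, 1, 1]]
# BP believes bit 1 is the most likely error; the syndrome [1, 1] agrees
print(osd0(H, [1, 1], prob=[0.1, 0.9, 0.1]))   # -> [0, 1, 0]
```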
2 Quantum LDPC codes with good finite-length performance
A quantum error correction code can be thought of as a fusion of two classical codes: one to correct bit-flips and the other to correct phase-flips. However, these classical codes must be carefully chosen so that the resultant stabilisers (the quantum analogue of classical parity checks) commute with one another. In practice this is achieved via a quantum code construction which transforms pairs of classical codes into a set of commuting stabilisers for quantum error correction. A well-known code construction method for quantum LDPC codes is the hypergraph product due to Tillich and Zémor. This is considered especially powerful as there are no restrictions on the form of the classical binary codes it takes as input. However, the drawback of the hypergraph product is that it produces large quantum codes: the hypergraph product of two classical codes of length $n_1$ and $n_2$ respectively gives a quantum code of length $O(n_1n_2)$. In the paper, a generalised hypergraph product is introduced that yields codes with higher rate.
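The hypergraph product construction itself is short enough to sketch. The following is a minimal illustration (the standard Tillich-Zémor form, written here for demonstration rather than taken from the paper): given two classical parity-check matrices, it builds the $X$- and $Z$-stabiliser matrices and checks that they commute over GF(2).

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Sketch of the Tillich-Zemor hypergraph product: build the X- and
    Z-stabiliser matrices of a quantum code from two classical
    parity-check matrices (all arithmetic is over GF(2))."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return Hx, Hz

# Example: the product of two 3-bit repetition codes
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
Hx, Hz = hypergraph_product(H_rep, H_rep)

# Code length is O(n1*n2): here 3*3 + 2*2 = 13 qubits
assert Hx.shape[1] == 13
# The tensor product structure guarantees commuting stabilisers:
assert not (Hx @ Hz.T % 2).any()
```

For this input the product reproduces the distance-3 surface code on 13 qubits, illustrating both the appeal (any classical input codes) and the drawback (quadratic blow-up in length) described above.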
Figure 1: Decoding simulation from the paper comparing generalised hypergraph product (GHP) codes with the surface code. The physical error rate is shown on the $X$-axis, whilst the logical error rate is on the $Y$-axis. The GHP codes are decoded using BP-OSD, whilst the surface code is decoded using the MPS method. The surface code has parameters $[[1201,1,25]]$, and is chosen so that it is comparable in size to the $[[1270,28]]$ GHP code.
In developing the generalised hypergraph product, the authors take inspiration from classical quasicyclic codes. Quasicyclic codes are a state-of-the-art family of classical LDPC protocols with excellent performance under BP decoding. Error correction codes are usually represented by a binary parity-check matrix. The key differentiating feature of quasicyclic codes is that, in place of a binary parity-check matrix, they are instead defined in terms of a polynomial matrix where each entry is drawn from a ring of circulant permutation matrices. A benefit of this change from binary to ring algebra is that it allows for a much more compact representation of the code. In the standard hypergraph product, the quantum code is constructed via tensor products over binary classical codes. The tensor product structure guarantees that the resultant stabilisers commute. For generalised hypergraph product codes, the stabilisers are instead constructed via tensor products of quasicyclic codes defined in terms of the ring algebra. The commutativity guarantee still holds, whilst the resultant codes have improved rate. In fact, the block length of a generalised hypergraph product is reduced by a factor of $L$ with respect to the standard hypergraph product, where $L$ is the size of the circulant matrices in the ring used to define the quasicyclic codes.
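The compactness of the polynomial representation is easy to see in code. Below is an illustrative sketch (not the authors' construction): each polynomial entry, written as a set of shift exponents, is "lifted" to a sum of $L\times L$ circulant permutation matrices, turning a tiny polynomial matrix into a full binary parity-check matrix.

```python
import numpy as np

def circulant(shift, L):
    """L x L circulant permutation matrix: the identity with its
    columns cyclically shifted, representing the monomial x^shift."""
    return np.roll(np.eye(L, dtype=int), shift, axis=1)

def lift(exponents, L):
    """Expand a matrix of polynomial entries into a binary parity-check
    matrix. Each entry is a set of shift exponents (e.g. {0, 2} means
    1 + x^2); an empty set denotes the zero entry."""
    blocks = [[sum(circulant(e, L) for e in entry) % 2 if entry
               else np.zeros((L, L), dtype=int)
               for entry in row] for row in exponents]
    return np.block(blocks)

# A 1 x 2 polynomial matrix [1 + x, x^2] lifted with circulant size L = 3:
H = lift([[{0, 1}, {2}]], L=3)
print(H.shape)   # -> (3, 6): an L-fold larger binary matrix from a compact description
```

A single row of exponents thus stands in for $L$ rows of the binary matrix, which is exactly the factor-$L$ saving mentioned above.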
In addition to their higher rate, a further advantage of generalised hypergraph product codes is that both their bit- and phase-flip correcting components are themselves quasicyclic codes. As such, they have excellent decoding performance under BP-OSD. The above figure from the paper shows the performance of two generalised hypergraph product codes compared to a surface code of similar size. Here, the $X$-axis shows the physical error rate of the qubits, whilst the $Y$-axis gives the decoded logical error rate. The generalised hypergraph product codes are decoded using the new BP-OSD decoder, whilst the surface code is decoded using the MPS decoder. We see that the decoding curve for the $[[1270,28]]$ generalised hypergraph product code intersects that of the surface code at a physical error rate of $\approx10\%$. The surface code encodes only a single logical qubit, whilst the $[[1270,28]]$ hypergraph product code encodes $28$. At a physical error rate of $10\%$, where the two codes achieve the same logical error rate with similar block lengths, the generalised hypergraph product code therefore reduces the qubit overhead by a factor of approximately $28$ compared to the surface code.
Surface codes and their 2D topological cousins can be viewed as the ideal error correction protocols for experiment due to the fact that they can be embedded onto a geometrically local configuration of qubits. However, for qubit platforms with the ability to perform long-range interactions, quantum LDPC codes promise a much more efficient route to fault tolerance. The generalised hypergraph product demonstrates that high-rate quantum LDPC codes can be constructed with moderate block lengths, making them a viable prospect for near-term quantum computers. In addition, the new BP-OSD decoder shows that quantum LDPC codes can outperform surface codes in practice. One caveat is that all decoding simulations in the paper are performed under an idealised error model in which it is assumed that only the code qubits are subject to error: all auxiliary qubits associated with syndrome extraction are treated as error free. To properly benchmark quantum LDPC codes against the surface code, it will be necessary to perform decoding simulations under circuit-level noise.
Whilst the focus of the paper is on the practical design and decoding of quantum LDPC codes, the authors have written two follow-up papers in which the asymptotic scaling of generalised hypergraph product codes is explored [10,11]. In their latest paper, they prove the remarkable result that generalised hypergraph product codes (re-branded as lifted product codes) can have distance scaling linearly with block length. This answers a major open problem in quantum LDPC code design and shows that, in principle, fault tolerance can be achieved efficiently.
I would like to thank Nicolai Friis, Peter-Jan Derks, Bohan Lu and Shozab Qasim for comments on this manuscript.
 R. W. Hamming, Error detecting and error correcting codes, Bell System Technical Journal 29, 147 (1950).
 Judea Pearl, Reverend Bayes on inference engines: A distributed hierarchical approach (Cognitive Systems Laboratory, School of Engineering and Applied Science, 1982).
 Frank R. Kschischang, Brendan J. Frey, and Hans-Andrea Loeliger, Factor graphs and the sum-product algorithm, IEEE Transactions on Information Theory 47, 498 (2001).
 M.P.C. Fossorier and Shu Lin, Soft-decision decoding of linear block codes based on ordered statistics, IEEE Transactions on Information Theory 41, 1379 (1995).
 Jean-Pierre Tillich and Gilles Zémor, Quantum LDPC codes with positive rate and minimum distance proportional to the square root of the blocklength, IEEE Transactions on Information Theory 60, 1193 (2013).
 Sergey Bravyi, Martin Suchara, and Alexander Vargo, Efficient algorithms for maximum likelihood decoding in the surface code, Physical Review A 90, 032326 (2014).
 Marc P. C. Fossorier, Quasicyclic low-density parity-check codes from circulant permutation matrices, IEEE Transactions on Information Theory 50, 1788 (2004).
 Pavel Panteleev and Gleb Kalachev, Quantum LDPC codes with almost linear minimum distance, IEEE Transactions on Information Theory (Early Access, 2021b).
This View is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.