Pauli channels can be estimated from syndrome measurements in quantum error correction

The performance of quantum error correction can be significantly improved if detailed information about the noise is available, allowing both codes and decoders to be optimized. It has been proposed to estimate error rates from the syndrome measurements done anyway during quantum error correction. While these measurements preserve the encoded quantum state, it is currently not clear how much information about the noise can be extracted in this way. So far, apart from the limit of vanishing error rates, rigorous results have only been established for some specific codes. In this work, we rigorously resolve the question for arbitrary stabilizer codes. The main result is that a stabilizer code can be used to estimate Pauli channels with correlations across a number of qubits given by the pure distance. This result does not rely on the limit of vanishing error rates, and applies even if high weight errors occur frequently. Moreover, it also allows for measurement errors within the framework of quantum data-syndrome codes. Our proof combines Boolean Fourier analysis, combinatorics and elementary algebraic geometry. It is our hope that this work opens up interesting applications, such as the online adaptation of a decoder to time-varying noise.


Introduction
Quantum error correction is an essential part of most quantum computing schemes. It can be significantly improved if detailed knowledge about the noise affecting a device is available, as it is possible to optimize codes for specific noise [1,2]. A prominent example is the XZZX-surface code [3]. Furthermore, common decoding algorithms, such as minimum weight matching [4][5][6][7] and belief propagation (see e.g. [8]), can incorporate information about error rates to return more accurate corrections. Other examples of decoders that can incorporate information about error rates are weighted union find [9] and tensor network decoders [10,11]. The latter can also deal with correlated noise models, but are relatively slow. In the context of stabilizer codes, noise is usually modeled using Pauli channels, which are simple to understand and simulate. However, Pauli noise is more than a mere toy model, since randomized compiling can be used to project general noise onto a Pauli channel [12], which has also been demonstrated experimentally [13]. Furthermore, it is known that quantum error correction decoheres noise on the logical level [14]. Consequently, there has been much interest and progress in the estimation of Pauli channels [15][16][17][18]. Complementary to the standard benchmarking approaches, it has been suggested to perform (online) estimation of channels just from the syndromes of a quantum error correction code itself [2,7,[19][20][21][22][23][24]. Such a scheme uses only measurements that do not destroy the logical information. It is thus suited for online adaptation of a decoder to varying noise, for example by adapting weights in a minimum weight matching decoder [7,21]. It furthermore results in a noise model that can be directly used by the decoder [20]. Experimentally, online optimization of control parameters in a 9-qubit superconducting quantum processor has been demonstrated in an experiment by Google [25].
Since the state of the logical qubit is not measured, some assumptions are necessary in order for this estimation to be feasible. However, the precise nature of these assumptions is currently not well understood. In the general case, it is unclear for which combinations of noise models and codes the parameters can be identified using only the syndrome information. Apart from heuristics in the limit of very low error rates [20][21][22], only two special cases have been rigorously treated. For codes admitting a minimum weight matching decoder, such as the toric code, identifiability of a circuit noise model was shown by Spitz et al. [7]. In [24], identifiability results for the restricted class of perfect codes were proven.
It is not a priori clear that an estimation of the error rates just from the syndrome statistics should be possible at all for arbitrary stabilizer codes. There are several objections one could raise. For one, as mentioned above, the state of the logical qubit is not measured, so there is only limited information contained in the syndromes. Phrased another way, there are generally exponentially many errors with the same syndrome, which we therefore cannot distinguish. This also implies that the probability of a syndrome is an exponentially large sum of different error rates, and solving such a system for the error rates appears difficult at best. While it has been suggested to simplify the problem by only taking into account the lowest weight error compatible with each syndrome [20][21][22], this approximation strategy leads to demonstrably sub-optimal estimators [24].

Results
In this work, we show that the estimation task can be solved for arbitrary stabilizer codes. For any given quantum code, we describe a general class of Pauli channels whose parameters can be identified from the corresponding syndrome statistics. Our results also take into account measurement errors by using the framework of quantum data-syndrome codes [26][27][28]. We prove that a large amount of information can be extracted from the syndrome statistics. In the following theorem we make this statement more precise in terms of the pure distance, which is defined as the minimum weight of an undetectable error (see Section 2.3 for details). The pure distance measures up to which weight errors can necessarily be distinguished by their syndrome, and for most codes it coincides with the weight of the smallest stabilizer. Since there can be undetectable but logically trivial errors, the pure distance is usually much smaller than the distance. As an example, for the family of toric codes the pure distance is 4 independent of code size. More generally, the pure distance will be constant for any family of quantum low density parity check (LDPC) codes, since the stabilizer weights are constant.
We are interested in estimating a Pauli channel, i.e. a probability distribution over Pauli errors, which describes the new error occurring before each round of error correction (working in a phenomenological noise model). It is commonly assumed that errors on each qubit occur independently, in which case the channel is described by one error rate for each qubit. In this case, we say the noise is uncorrelated. In a more general setting, the Pauli channel could act on many qubits, such that errors on these qubits are not independent. If, e.g., two-qubit errors occur that are not a combination of independent single qubit errors, we say that the noise is correlated over 2 qubits. The corresponding two-qubit error rates are then not simply a product of the single qubit error rates and must be additionally specified. For example, if P(X_1 X_2) ≠ P(X_1)P(X_2), the errors on the first two qubits are correlated. This notion of correlations is made precise in Section 2.2.3 for classical and Section 2.3.2 for quantum codes. We now give an informal statement of our main results. A formal statement is given in Theorem 7, which also takes into account many detectable errors beyond the pure distance.

Theorem 1 (informal version of Theorem 7). Consider a stabilizer code with pure distance d′. If d′ ≥ 2t + 1, then the error rates of any Pauli channel with correlations across at most t qubits can be identified from the syndrome statistics.
While this result is stated in a non-constructive fashion, its derivation suggests a concrete estimation protocol. We give a first heuristic discussion of the resulting estimators in Section 2.4. A detailed analysis of such estimators is ongoing work. The key idea behind Theorem 1 is that the error distribution is fully described by a set of moments. We will show that these moments can be estimated up to their sign by solving a polynomial system of equations. The appearance of multiple discrete solutions, differing in some signs, reflects the symmetries of the problem. For example, in the case of single qubit noise, there is one symmetry for each logical operator of the code. However, under the additional mild assumption that all error rates are smaller than 1/2, all moments must be positive, and thus the estimate is unique. We stress that our result does not rely on the limit of vanishing rates, and still applies in the presence of high weight errors. Perhaps surprisingly, even different errors with the same syndrome can occur frequently, as long as they arise as a combination of independent lower weight errors. Our result is arguably the strongest one can reasonably expect, since error rates cannot be identifiable if multiple independently occurring errors have the same syndrome.
The adaptive weight estimator by Spitz et al. [7] can be viewed as solving a special instance of the general equation system we present here, which is applicable only for codes that admit a minimum weight matching decoder. This connection is explained in Section 2.1.
Our arguments combine Boolean Fourier analysis, combinatorics and elementary algebraic geometry. Interestingly, a connection between Pauli channel learning and Boolean Fourier analysis was recently pointed out in an independent work by Flammia and O'Donnell [15].
We start our discussion with the motivating example of the toric code with independent Pauli-X errors, where our results take a particularly simple form. We then discuss the general results. We first discuss the setting of classical codes, and later extend the results to quantum codes. This is for ease of exposition, since the presentation is considerably simplified in the classical setting and thus the underlying concepts become more apparent. We stress that, in contrast to the quantum setting, classically one can measure the individual bits of a codeword without destroying the encoded information. Thus, in the classical setting one does not have to rely solely on the syndrome information and can use easier techniques for error rate estimation. However, for a quantum code this is indeed the only information that can be measured without destroying the encoded state, and this is where our results are most relevant. Classically, our approach might still be useful in the setting of distributed source coding [29].

Example: the toric code
As a motivating example for the methods and proofs in the following sections, we derive an estimator for the simple setting of completely uncorrelated bit-flip noise on the toric code. That is, we assume that errors on different qubits occur independently, possibly with a different rate for each qubit, and that only bit-flip (Pauli-X) errors occur on each qubit. Thus, we consider the toric code essentially as a classical code. This constitutes a simple alternative derivation of the solution given by Spitz et al. [7].
We focus on a single qubit and two adjacent Z-stabilizers, as illustrated in fig. 1. Our task is to estimate the error rate p_4 of the marked qubit, or equivalently, the expectation value E(Z_4) = 1 − 2p_4. Since errors on each qubit are independent, the expectation of a stabilizer measurement is simply the product of expectations of the adjacent bits. Denoting the two stabilizers by S_1 = Z_1 Z_2 Z_3 Z_4 and S_2 = Z_4 Z_5 Z_6 Z_7, we therefore obtain the following three equations,

E(S_1) = E(Z_1) E(Z_2) E(Z_3) E(Z_4) ,
E(S_2) = E(Z_4) E(Z_5) E(Z_6) E(Z_7) ,    (1)
E(S_1 S_2) = E(Z_1) E(Z_2) E(Z_3) E(Z_5) E(Z_6) E(Z_7) .

This system admits a straightforward solution for the expectation of Z_4,

E(Z_4) = ± √( E(S_1) E(S_2) / E(S_1 S_2) ) .    (2)

This coincides with the solution given by Spitz et al. [7, eq. (14)], as explained in Appendix E.
Notice that there is a choice of sign, which corresponds to deciding whether p_4 > 1/2 or p_4 < 1/2. However, under the assumption p_4 < 1/2 the solution is unique.
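As a sanity check, this estimator can be simulated. The sketch below assumes a hypothetical labeling in which qubit 4 is shared by the stabilizers Z_1 Z_2 Z_3 Z_4 and Z_4 Z_5 Z_6 Z_7, with made-up error rates p_i = 0.01·i; it recovers p_4 from sampled syndrome statistics via the positive root:

```python
import math
import random

# Hypothetical layout: qubit 4 is shared by the two Z-stabilizers
# S1 = Z1 Z2 Z3 Z4 and S2 = Z4 Z5 Z6 Z7; the rates p_i are made up.
p = {i: 0.01 * i for i in range(1, 8)}
S1, S2 = {1, 2, 3, 4}, {4, 5, 6, 7}

random.seed(0)
N = 200_000
sum1 = sum2 = sum12 = 0
for _ in range(N):
    e = {i for i in p if random.random() < p[i]}   # independent bit-flips
    s1 = (-1) ** len(e & S1)                       # stabilizer outcomes +-1
    s2 = (-1) ** len(e & S2)
    sum1 += s1
    sum2 += s2
    sum12 += s1 * s2

E1, E2, E12 = sum1 / N, sum2 / N, sum12 / N
E_Z4 = math.sqrt(E1 * E2 / E12)    # positive root, i.e. assuming p_4 < 1/2
p4_hat = (1 - E_Z4) / 2
print(round(p4_hat, 3))            # close to the true value p[4] = 0.04
```

Note that only products of stabilizer outcomes enter, so the same loop works on recorded syndrome data instead of simulated errors.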

Identifiability for classical codes
Let us now turn to the general setting, first for classical codes. We are interested in whether an error distribution can be estimated from repeated syndrome measurements alone. This is certainly not possible for completely arbitrary noise, since we do not measure the state of the logical qubit. However, we will show that for Pauli (and measurement) noise with limited correlations, the estimation is possible. As mentioned above, we will start with the setting of classical codes and perfect syndrome measurements. Then, we will show how to extend these results to quantum (data-syndrome) codes.
The key insight underlying our proof is that the estimation problem is best phrased in terms of moments instead of error rates. The proof then proceeds in the following steps. First, we notice that Fourier coefficients of the error distribution correspond to moments, and see that some of these moments can be estimated from the syndrome statistics. Then, we show that under certain independence assumptions, the full error distribution is characterized completely by a set of low weight transformed moments. We find that these transformed moments are related to the measured moments via a polynomial equation system. This system is described by a coefficient matrix D whose rows essentially correspond to elements of the dual code. We then use local randomness properties of the dual code to find an explicit expression for the symmetric squared coefficient matrix D^T D, and finally show that D^T D has full rank by using iterated Schur complements. This implies that the equation system has discrete solutions.

Notation
We use the short-hand notation [n] := {1, . . . , n} for the set of the first n natural numbers. For any set A, we denote its powerset as 2^A := {B | B ⊆ A}. The field with two elements is denoted by F_2. Often we will use F_2^n as a vector space over that field. That is, for a, b ∈ F_2^n the sum a + b is understood to be taken component-wise modulo 2. All vectors are to be understood as column vectors. By wt(a) := |{i : a_i ≠ 0}| we denote the weight of a. For any logical statement X we denote by 1[X] the Iverson bracket of X, which assumes the value 1 if X is true and 0 otherwise. Naturally, trivial products, i.e. products over the empty set, are set to 1, as in ∏_{x∈∅} f(x) := 1. By I_k we denote the k × k identity matrix, and we suppress k when it can be inferred from the context.

Classical codes and Boolean Fourier analysis
Let us start with some basic elements of Boolean Fourier analysis. A detailed review of the topic is given in [30] (note that we use a different normalization convention here). We frequently identify a vector s ∈ F_2^n with its indicator set {i ∈ [n] : s_i ≠ 0}. For example, for s = (0, 1, 0, 1, 1, 0) ∈ F_2^6 we also write s = {2, 4, 5}, such that we can write 4 ∈ s. For each s ∈ F_2^n, we define the parity function

χ_s(x) := (−1)^{s·x} ,

which is a group character of the abelian group (F_2^n, +). For a function f : F_2^n → R, its Boolean Fourier transform f̂ is the function

f̂(s) := Σ_{x∈F_2^n} f(x) χ_s(x) .    (3)

The Boolean Fourier transform is also known under the name Walsh-Hadamard transformation. However, there are different conventions, which lead to transforms with different orderings of bit strings and different notations for the characters χ_s. This transformation is invertible, and the inverse is given by

f(x) = 2^{−n} Σ_{s∈F_2^n} f̂(s) χ_s(x) .    (4)

For two functions f, g : F_2^n → R, their Boolean convolution f ∗ g is defined by

(f ∗ g)(x) := Σ_{y∈F_2^n} f(y) g(x + y) .    (5)

As expected, convolutions become products under the Boolean Fourier transform,

(f ∗ g)^(s) = f̂(s) ĝ(s) .    (6)

A classical linear code C, encoding k bits into n bits, is a k-dimensional subspace of F_2^n. It can be described by its parity check matrix H ∈ F_2^{(n−k)×n}, whose rows span the dual code C⊥ = {a ∈ F_2^n : a · c = 0 ∀c ∈ C}. C⊥ can alternatively be interpreted as the set of parity functions which are 1 on all codewords. When an error e ∈ F_2^n occurs, the outcomes of the parity measurements H can be summarized by the syndrome

O(e) := He .    (7)
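These definitions are easy to verify numerically. The following sketch (helper names are ours; bit strings are encoded as integer bitmasks, so that addition in F_2^n becomes XOR) checks the inversion formula and the convolution theorem on random functions for n = 4:

```python
import random

n = 4
N = 2 ** n

def chi(s, x):
    """Parity character (-1)^(s.x) with s, x encoded as bitmasks."""
    return -1 if bin(s & x).count("1") % 2 else 1

def fourier(f):
    """f_hat(s) = sum_x f(x) chi_s(x) (unnormalized convention)."""
    return [sum(f[x] * chi(s, x) for x in range(N)) for s in range(N)]

def inverse_fourier(fh):
    """f(x) = 2^-n sum_s f_hat(s) chi_s(x)."""
    return [sum(fh[s] * chi(s, x) for s in range(N)) / N for x in range(N)]

def convolve(f, g):
    """(f*g)(x) = sum_y f(y) g(x + y); '+' in F_2^n is XOR on bitmasks."""
    return [sum(f[y] * g[x ^ y] for y in range(N)) for x in range(N)]

random.seed(1)
f = [random.random() for _ in range(N)]
g = [random.random() for _ in range(N)]

# Inversion: applying the transform and its inverse recovers f.
f_rec = inverse_fourier(fourier(f))
assert all(abs(a - b) < 1e-9 for a, b in zip(f, f_rec))

# Convolution theorem: transform of f*g equals the entry-wise product.
lhs = fourier(convolve(f, g))
rhs = [a * b for a, b in zip(fourier(f), fourier(g))]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```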
From O(e), we can calculate the value of any parity measurement s ∈ C⊥. We will assume that repeated rounds of syndrome measurements and corrections are performed. In each round, an error occurs according to the error distribution P : F_2^n → [0, 1]. We call the corresponding distribution of syndromes the syndrome statistics. The error correction capabilities of a classical linear code are indicated by its distance, which is defined as the smallest weight of an undetectable error, or equivalently the smallest weight of a non-zero codeword,

d := min { wt(c) : c ∈ C, c ≠ 0 } .    (8)

Moments and noise model
We denote the Fourier transform of the error distribution P by

E(a) := P̂(a) = Σ_{e∈F_2^n} P(e) χ_a(e) .    (9)

We interpret this as a moment of the distribution P, since E(a) is exactly the expectation value of the parity measurement a if one were to measure it repeatedly on errors distributed according to P. In particular, for s ∈ C⊥, the corresponding moment E(s) can be computed from the measured syndrome statistics, i.e. from the empirically measured frequency with which each syndrome occurs in repeated rounds of error correction. We will always assume E(a) ≠ 0 for all a ⊆ [n], which is, for example, fulfilled if P(0) > 0.5. Thus, our task is to find the distribution P, given some of its Fourier coefficients.
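Concretely, the moment E(s) for a dual codeword s is just the empirical average of the parity (−1)^{s·e} over repeated rounds. A minimal sketch, assuming the [3,1] repetition code and made-up error rates:

```python
import random

# Hypothetical setup: [3,1] repetition code with made-up single-bit
# error rates.  We estimate the moment E(s) for the dual codeword
# s = (1, 1, 0) from simulated syndrome rounds.
random.seed(3)
p = [0.05, 0.10, 0.15]
N = 100_000

total = 0
for _ in range(N):
    e = [int(random.random() < pi) for pi in p]
    total += (-1) ** (e[0] + e[1])          # chi_s(e) for s = {1, 2}
E_emp = total / N

# By independence, the exact moment factorizes into (1 - 2 p_i) terms.
E_exact = (1 - 2 * p[0]) * (1 - 2 * p[1])   # = 0.72
print(E_emp, E_exact)
```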
We will show that this task is feasible if there is some independence between errors on different subsets of bits, such that there are no correlations across a large number of qubits at once. To formalize this idea, we introduce a set of channel supports Γ ⊆ 2^[n] (we will consider a concrete choice of Γ later). For each γ ∈ Γ, there is an error channel that acts only on this subset, i.e. its error distribution P_γ : F_2^n → [0, 1] is only supported on errors e ∈ F_2^n that are 0 outside of γ. Equivalently, identifying vectors with their indicator sets, we can view this as a distribution P_γ : 2^[n] → [0, 1] which is only supported on 2^γ. Since all the individual channels act independently, and the total error is the sum of the individual errors, the total distribution of errors is then given by the convolution (5) of the individual distributions,

P = ∗_{γ∈Γ} P_γ .    (10)

Physically, this expresses the action of several independent sources of errors, each only acting on a limited number of qubits. If Γ = {{1}, . . . , {n}}, this reduces to the commonly used model where each bit is affected by an independent channel. As a note, mathematically, this model corresponds to a convolutional factor graph, as introduced by Mao and Kschischang [31]. This representation in terms of individual channels is an over-parametrization of the total distribution if some supports in Γ overlap. A first idea for a non-redundant parametrization would be to use the set of all moments, but these are not all independent. Therefore, we define transformed moments F(a) via a (generalized) inclusion-exclusion computation. Heuristically, we want to divide out all correlations on proper subsets of a in order to capture only correlations across the full size of a. This bears some similarity to the canonical parametrization of Markov random fields, see [32].
For b ⊆ a ⊆ [n], the Möbius function (on the partially ordered set 2^[n]) is given by

μ(b, a) := (−1)^{|a|−|b|} .    (11)

For each a ⊆ [n], we define a transformed moment F(a) via the inclusion-exclusion transform

F(a) := ∏_{b⊆a} E(b)^{μ(b,a)} .    (12)

It follows from the (multiplicative) generalized inclusion-exclusion principle that this transform is inverted by

E(a) = ∏_{b⊆a} F(b) .    (13)

Now we will show that, depending on the choice of Γ, a small subset of transformed moments is already sufficient to uniquely specify the distribution. Remember E_γ(a) = P̂_γ(a) from (9). Since P is given by the convolution (10) of all individual error distributions P_γ, we have

E(a) = ∏_{γ∈Γ} E_γ(a) = ∏_{γ∈Γ} E_γ(a ∩ γ) ,    (14)

where we used that P_γ is only supported on 2^γ, and thus E_γ(a) = E_γ(a ∩ γ). We can now express the transformed moments by the parameters of the individual channels. The proof is given in Appendix A.

Lemma 2. Let F_γ be the inclusion-exclusion transform (12) of the moments E_γ from (14). Then, the inclusion-exclusion transform (12) of the total distribution satisfies

F(a) = ∏_{γ∈Γ : a⊆γ} F_γ(a) ;    (15)

in particular, F(a) = 1 if a is not contained in any channel support γ ∈ Γ.

Notice that trivially F(∅) = 1. Thus, in conclusion, it suffices to determine the transformed moments F(a) for a in

Γ̄ := {a : ∅ ≠ a ⊆ γ for some γ ∈ Γ} .    (16)

The standard moments (13) are then determined by

E(a) = ∏_{b∈Γ̄ : b⊆a} F(b) .    (17)

Finally, the error distribution P is determined from the standard moments by applying the inverse Fourier transform (4) to (9).
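The inclusion-exclusion transform and its inversion can be checked directly. In the sketch below, subsets are encoded as bitmasks and the table E is filled with arbitrary positive values rather than moments of an actual code; helper names are ours:

```python
import math
import random

n = 3
random.seed(2)
# Random positive "moments" with E(emptyset) = 1; any such table works
# for checking the algebraic inversion.
E = {a: 1.0 if a == 0 else random.uniform(0.1, 0.9) for a in range(2 ** n)}

def subsets(a):
    """Iterate over all subsets b of the bitmask a."""
    b = a
    while True:
        yield b
        if b == 0:
            return
        b = (b - 1) & a

def popcount(a):
    return bin(a).count("1")

# F(a) = prod_{b subset of a} E(b)^mu(b,a) with mu(b,a) = (-1)^(|a|-|b|)
F = {a: math.prod(E[b] ** ((-1) ** (popcount(a) - popcount(b)))
                  for b in subsets(a))
     for a in range(2 ** n)}

assert abs(F[0] - 1.0) < 1e-12          # trivially F(emptyset) = 1

# Inversion: E(a) = prod_{b subset of a} F(b)
for a in range(2 ** n):
    assert abs(E[a] - math.prod(F[b] for b in subsets(a))) < 1e-9
```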

Identifiability and binomial systems
As discussed in the previous section, learning the transformed moments (F(a))_{a∈Γ̄} is sufficient to uniquely determine the error distribution. However, the only thing we can measure are the standard moments (E(s))_{s∈C⊥} corresponding to elements of the dual code, since these fully describe the syndrome statistics. Thus, learning the error distribution boils down to determining the transformed moments (F(a))_{a∈Γ̄} from this information. In particular, the error distribution is identifiable from the syndrome statistics if and only if the following system of polynomial equations admits a unique solution,

E(s) = ∏_{a∈Γ̄ : a⊆s} F(a)   for all s ∈ C⊥ \ {0} ,    (18)

where we have omitted the trivial equation arising from the zero element of the dual code. This is a binomial system, i.e. each equation only has two terms, a constant on the left-hand side and a monomial on the right-hand side. In the notation of [35], the binomial system (18) can be expressed by the coefficient matrix D whose rows are labeled by elements of C⊥ \ {0} and whose columns are labeled by elements of Γ̄. The entry D[s, a] is the exponent of F(a) in the monomial on the right-hand side of equation (18) for E(s). We have transposed the notation compared to [35]. Explicitly, this means that

D[s, a] = 1[a ⊆ s] ,    (19)

and the system is given by E = F^D, understood entry-wise as E(s) = ∏_{a∈Γ̄} F(a)^{D[s,a]}, where E is the vector containing the elements E(s) and F the vector containing the elements F(a), as appearing in the system (18). In the special case of single bit noise, i.e. Γ = Γ̄ = {{i} : i ∈ [n]}, the rows of the coefficient matrix D are exactly the dual codewords s ∈ C⊥. For now, let us assume that D has full rank. It then follows from the theory of binomial system solving [35, Proposition 2] that the system (18) has a finite number of solutions, and these solutions only differ by multiplying some parameters with complex roots of unity. Since we are only interested in real solutions, this means we can determine the transformed moments F(a), and thus also the standard moments E(a), up to a sign.
Thus, if we restrict all moments to be positive, the error distribution is uniquely determined by the syndromes. A simple condition for all moments to be positive is that the error probabilities of all channels are smaller than 1/2. In conclusion, the error distribution can be estimated uniquely from the syndromes, assuming that D has full rank and that the error probability of each channel is smaller than 1/2. Let us stress that the appearance of multiple solutions, differing by the signs of some moments, reflects actual symmetries of the syndrome statistics as a function of the error rates. For example, consider again the case of independent single bit errors, such that Γ = Γ̄ = {{i} : i ∈ [n]}. Then the transformed moments F({i}) are equal to the standard moments E({i}), and the rows of D are simply the elements of the dual code. Thus, each row of D has even overlap with every codeword c ∈ C. Hence, by (18), flipping the signs of all moments (E({i}))_{i∈c} on the support of a codeword c does not change the measured moments (E(s))_{s∈C⊥}. Since E({i}) = 1 − 2p_i, where p_i is the rate of errors on the i-th qubit, this simply means that flipping the error rates around 1/2 on the support of a codeword does not affect the syndrome statistics, i.e. we cannot distinguish these two sets of error rates. This observation shows that each codeword corresponds to a symmetry of the identifiability problem.
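The codeword symmetry can be made concrete on a toy example. The sketch below uses the [3,1] repetition code with single-bit noise and made-up moment values; the rows of D are the nonzero dual codewords, and flipping all signs on the support of the codeword 111 leaves the measured moments unchanged:

```python
import math

# [3,1] repetition code: C = {000, 111}.  For single-bit noise the rows
# of D are the nonzero dual codewords (all even-weight strings).
D = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

assert det3(D) != 0                  # full rank: rates identifiable up to signs

F = [0.9, 0.8, 0.7]                  # moments E({i}) = 1 - 2 p_i (made up)
E = [math.prod(f ** d for f, d in zip(F, row)) for row in D]

# Flipping all signs on the support of the codeword 111 leaves the
# measured moments unchanged, since every row of D has even weight.
E_flip = [math.prod((-f) ** d for f, d in zip(F, row)) for row in D]
assert all(abs(a - b) < 1e-12 for a, b in zip(E, E_flip))
```

Restricting to positive moments (all p_i < 1/2) discards the flipped solution and makes the estimate unique.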

The rank of the coefficient matrix
We will now establish the most general set of error distributions for which unique identification of the error rates is possible. By the discussion in the previous section, this corresponds to finding the most general choice of Γ for which the system (18) is solvable, i.e. for which the coefficient matrix D is of full rank.
Denote the sets of bits that only support detectable errors as

Γ^(D) := {γ ⊆ [n] : O(e) ≠ 0 for all e ∈ F_2^n \ {0} with e_i = 0 ∀i ∉ γ} ,    (20)

where O(e) is the syndrome (7) of e. It is clear that, if one of the channels γ ∈ Γ supports an undetectable error, one cannot estimate the corresponding error rate. Thus, identifiability can only hold if γ ∈ Γ^(D) for all γ ∈ Γ. Similarly, if two different channels γ_1, γ_2 contain two errors with the same syndrome, the syndrome statistics only depend on the combined rate of those errors and thus the rates are not identifiable. Identifiability of the noise channel can only hold if γ_1 ∪ γ_2 ∈ Γ^(D) for all γ_1, γ_2 ∈ Γ. We will show that these are in fact the only restrictions that must be fulfilled.
Theorem 3 (Classical identifiability condition). Consider a classical code with n bits subject to noise with an error distribution described by channel supports Γ ⊆ 2^[n]. Then the coefficient matrix D from (19) has full rank, provided that γ_1 ∪ γ_2 ∈ Γ^(D) for all γ_1, γ_2 ∈ Γ.

The proof is given in Section 2.2.6. Note that even a combination of two channels must not support an undetectable error. We can also give an equivalent condition in terms of Γ̄, which represents the set of errors that can occur "independently". As explained in Appendix B, the assumption γ_1 ∪ γ_2 ∈ Γ^(D) for all γ_1, γ_2 ∈ Γ is equivalent to O(e_1) ≠ 0 and O(e_1 + e_2) ≠ 0 for all distinct e_1, e_2 ∈ Γ̄ (viewed as binary vectors). In other words, the error distribution is identifiable if undetectable errors and errors that have the same syndrome only occur as a combination of independent errors. We stress that this is substantially weaker than the assumption that errors with the same syndrome never occur (which was made in some previous works such as [20,21]). Indeed, in the total error distribution different errors with the same syndrome can occur frequently, and we are still able to identify the error rates. We only assume that such errors arise as a combination of independent errors.
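The reformulated condition is easy to check mechanically for a small code. The sketch below verifies, for the [5,1] repetition code with independent single-bit channels (our own encoding of the independent errors as unit vectors), that every such error is detectable and that no two distinct ones share a syndrome:

```python
from itertools import combinations

# Parity check matrix of the [5,1] repetition code (distance 5).
H = [[1, 1, 0, 0, 0],
     [0, 1, 1, 0, 0],
     [0, 0, 1, 1, 0],
     [0, 0, 0, 1, 1]]

def syndrome(e):
    return tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)

# Independent single-bit errors: the five unit vectors.
errors = [[int(i == j) for i in range(5)] for j in range(5)]

# Every independent error is detectable ...
assert all(syndrome(e) != (0, 0, 0, 0) for e in errors)
# ... and no two distinct independent errors share a syndrome,
# i.e. O(e1 + e2) != 0 for all distinct e1, e2.
assert all(syndrome([(a + b) % 2 for a, b in zip(e1, e2)]) != (0, 0, 0, 0)
           for e1, e2 in combinations(errors, 2))
```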
The most important example is the following. Consider an error model where there is an independent channel on every subset of at most t bits, i.e. choose Γ as

Γ_{≤t} := {γ ⊆ [n] : 1 ≤ |γ| ≤ t} .    (22)

This means we only have correlations across at most t bits. Then, Theorem 3 implies that the error distribution is identifiable as long as d ≥ 2t + 1.

Corollary 4 (Distance based identifiability condition). If the noise is described by channel supports Γ = Γ_{≤t} from (22) and the code has distance d ≥ 2t + 1, then the error distribution is identifiable from the syndrome statistics.
Proof. For a code with distance d, Γ^(D) contains every set of size at most d − 1. Thus, for any γ_1, γ_2 ∈ Γ_{≤t} we have |γ_1 ∪ γ_2| ≤ 2t ≤ d − 1, so γ_1 ∪ γ_2 ∈ Γ^(D), and the claim follows from Theorem 3.
For t = 1, we see that error rates of the standard single bit noise model can be identified as long as d ≥ 3. Informally, identification is possible if error correction is possible.
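For t = 1 this can be verified explicitly on the [7,4] Hamming code, which has distance 3. The sketch below (helper names are ours) enumerates the seven nonzero dual codewords, which form the rows of D for single-bit noise, and checks that they are linearly independent over the reals:

```python
from itertools import product

# Parity check matrix of the [7,4] Hamming code: its columns are the
# binary representations of 1..7.
H = [[(i >> k) & 1 for i in range(1, 8)] for k in range(3)]

# All nonzero dual codewords, i.e. nonzero combinations of the rows of H.
dual = sorted({tuple(sum(a * h for a, h in zip(coef, col)) % 2
                     for col in zip(*H))
               for coef in product([0, 1], repeat=3)} - {(0,) * 7})
D = [list(row) for row in dual]

def rank(M):
    """Rank over the reals via Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > 1e-9), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > 1e-9:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

assert len(D) == 7
assert rank(D) == 7   # single-bit rates identifiable, since d = 3 >= 2*1 + 1
```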

Orthogonal array properties of the coefficient matrix
This section is devoted to the proof of Theorem 3. Readers not interested in the proofs and mathematical techniques can skip to Section 2.3 for the results in the quantum setting. The first part of the proof is based on local randomness properties of the dual code. The elements of the dual code form a so-called orthogonal array. Since the entries of the coefficient matrix D are related to the dual code, we can use this property to derive an explicit expression for the symmetric squared coefficient matrix D^T D. The proof is then finished by computing the rank of D^T D, which is possible with the help of combinatorial results derived in the next section.
By T we denote the |C ⊥ | × n-matrix formed by all elements in the span of the rows of the parity check matrix H, i.e. the rows of T are exactly the elements of C ⊥ . It is known that the rows of T look "locally uniformly random" on any subset of up to d − 1 bits. One says that T is an orthogonal array of strength d − 1. This property is relatively easy to see for linear codes (e.g. [36]) and was shown for general (non-linear) classical codes by Delsarte [37]. We use a slightly extended version of the result for linear codes using the set Γ (D) from (20) instead of the distance. A proof is given in Appendix C.
Lemma 5. Let γ ∈ Γ (D) . In the restriction T |γ of T to columns in γ, every bit-string appears equally often as a row.
In other words, the rows of T look locally uniformly random on any choice of bits that only supports detectable errors.
In Appendix D.2 we prove the following statement, as a corollary of Lemma 11.
Lemma 6 (Positive-definiteness of intersection matrix). Let M_t be the matrix whose rows and columns are labeled by the elements of Γ_{≤t} (from (22)) and which is defined entry-wise by

M_t[a, b] := 2^{−|a∪b|} = 2^{|a∩b|} · 2^{−|a|} · 2^{−|b|} .    (23)

Then M_t is positive-definite.
We call the matrix M_t the intersection matrix. The dimensions of M_t depend on n, but we do not make this dependence explicit in our notation. Using Lemma 6, we can finish the proof of Theorem 3.
Proof of Theorem 3. We will prove that the coefficient matrix D from (19) has full rank (over R) by proving that D^T D has full rank. First, note that by the definition (20) of Γ^(D), for any γ ∈ Γ^(D), all subsets a ⊆ γ are also elements of Γ^(D). Thus, the assumption γ_1 ∪ γ_2 ∈ Γ^(D) from Theorem 3 implies a ∪ b ∈ Γ^(D) for all a, b ∈ Γ̄, by the definition (16) of Γ̄. Hence Lemma 5 applies to every union a ∪ b of two elements of Γ̄, and counting the dual codewords containing a ∪ b yields

(D^T D)[a, b] = |{s ∈ C⊥ \ {0} : a ∪ b ⊆ s}| = |C⊥| · 2^{−|a∪b|} = |C⊥| · M_t[a, b]

with t := max_{γ∈Γ} |γ|, as in (23). Thus, up to the positive factor |C⊥|, D^T D is the principal sub-matrix of M_t with rows and columns restricted to Γ̄ ⊆ Γ_{≤t}. By Lemma 6, M_t is positive-definite. As a principal sub-matrix of a positive-definite matrix, D^T D is then also positive-definite and, in particular, has full rank. This implies that D has full rank, which finishes the proof of Theorem 3.
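Positive-definiteness can also be checked numerically for small parameters. The sketch below assumes the entry-wise definition M_t[a, b] = 2^(−|a ∪ b|), consistent with the orthogonal-array counting of the entries of D^T D, and tests it via a Cholesky factorization (which succeeds exactly for positive-definite matrices):

```python
from itertools import combinations
from math import sqrt

# Assumed entries: M[a, b] = 2^(-|a union b|), over all nonempty subsets
# of [n] of size at most t.
n, t = 5, 2
subs = [frozenset(c) for k in range(1, t + 1)
        for c in combinations(range(n), k)]
M = [[2.0 ** -len(a | b) for b in subs] for a in subs]

def is_positive_definite(A):
    """Cholesky factorization succeeds iff A is positive-definite."""
    m = len(A)
    L = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= 0:
                    return False
                L[i][i] = sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return True

assert is_positive_definite(M)
```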

Extension to the quantum case
We will now consider the quantum setting, starting with a short overview of stabilizer codes. We also explain the concept of quantum data-syndrome codes [26][27][28], following Ashikhmin et al. [26], which allow for a unified treatment of data and measurement errors. Then, we state and explain our main result, Theorem 7, which is the formal version of Theorem 1. Finally, we explain how to prove this theorem by extending our arguments from the classical to the quantum case.

Preliminaries
The Pauli group P_n on n qubits is the group of Pauli strings generated by the single-qubit Pauli operators {I, X, Y, Z} together with the phases {±1, ±i},

P_n := { i^k Q_1 ⊗ · · · ⊗ Q_n : k ∈ {0, 1, 2, 3}, Q_j ∈ {I, X, Y, Z} } .

The weight wt(P) of a Pauli string P = Q_1 ⊗ · · · ⊗ Q_n is the number of non-identity components Q_j. Modding out phases, one obtains the effective Pauli group

P̄_n := P_n / {±1, ±i} .

We define the symplectic inner product ⟨·, ·⟩_P on P̄_n by

⟨e, e′⟩_P = 1 if e and e′ anti-commute in P_n, and 0 if e and e′ commute in P_n ;    (25)

note that this expression is well-defined since the commutation relation does not depend on the choice of representatives in P_n. We identify P̄_1 with F_2^2 via the phase space representation, which extends coordinate-wise to define a group isomorphism P̄_n → F_2^{2n}. Thus, an element of P̄_n is represented by n "X-bits" and n "Z-bits". Explicitly, X^{x_1}Z^{z_1} ⊗ · · · ⊗ X^{x_n}Z^{z_n} is mapped to (x_1, . . . , x_n, z_1, . . . , z_n)^T. For example, X ⊗ Z ⊗ Y is mapped to (1, 0, 1, 0, 1, 1)^T. This identification will allow for the application of Boolean Fourier analysis to the Pauli group. We denote the operation of swapping the X-bits and Z-bits by a bar, ē = (z_1, . . . , z_n, x_1, . . . , x_n) for e = (x_1, . . . , x_n, z_1, . . . , z_n). The symplectic inner product (25) then corresponds to ⟨e, e′⟩_P = ē · e′, where the dot product is evaluated in F_2^{2n}, i.e. modulo 2. We define the Pauli weight wt_P(e) of e ∈ P̄_n ≅ F_2^{2n} as the weight of the corresponding Pauli operator, i.e. wt_P(e) = |{i ∈ [n] : e_i ≠ 0 ∨ e_{i+n} ≠ 0}|.
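The phase-space representation is straightforward to implement. A minimal sketch (helper names are ours) that maps Pauli strings to binary vectors and evaluates the symplectic product as ē · e′ mod 2:

```python
# Phase-space representation: each qubit's Pauli X^x Z^z contributes an
# X-bit x and a Z-bit z; a string maps to (x_1..x_n, z_1..z_n) over F_2.
BITS = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}

def to_vec(pauli):
    """E.g. "XZY" -> (1, 0, 1, 0, 1, 1)."""
    xs, zs = zip(*(BITS[q] for q in pauli))
    return xs + zs

def bar(e):
    """Swap the X-bits and the Z-bits."""
    n = len(e) // 2
    return e[n:] + e[:n]

def symplectic(e, f):
    """<e, f>_P = bar(e) . f mod 2; equals 1 iff the Paulis anti-commute."""
    return sum(a * b for a, b in zip(bar(e), f)) % 2

assert to_vec("XZY") == (1, 0, 1, 0, 1, 1)
assert symplectic(to_vec("XI"), to_vec("ZI")) == 1   # X_1, Z_1 anti-commute
assert symplectic(to_vec("XX"), to_vec("ZZ")) == 0   # XX and ZZ commute
assert symplectic(to_vec("XZ"), to_vec("ZX")) == 0   # two anti-commuting pairs
```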
We describe errors using Pauli channels, i.e. quantum channels of the form

E(ρ) = Σ_{e∈P̄_n} P(e) e ρ e† ,

where P : P̄_n → [0, 1] is a normalized probability distribution, the error distribution of the channel; note again that this expression is independent of the choice of representatives of e in P_n. A stabilizer code is defined by a set of commuting Pauli operators g^(1), . . . , g^(l) ∈ P_n. They generate an abelian subgroup S ⊆ P_n, called the stabilizer group, which must fulfill −1 ∉ S. This is the analogue of the classical dual code C⊥. The codespace is defined as the simultaneous +1 eigenspace of the operators in S. Standard error correction with a stabilizer code proceeds as follows. In each round, all generators are measured. If an error e ∈ P̄_n occurred, then the vector of all measurement outcomes can be represented by the syndrome O(e), defined as

O(e)_i = ⟨g^(i), e⟩_P   ∀i = 1, . . . , l .
Using the measured syndrome, and based on information about the error rates, one approximates the most likely logical error and applies it as a correction. The outcomes of all stabilizers are determined by the measurements of the generators alone (in the case of perfect measurements) via linearity of ⟨·, e⟩_P. As in the classical case, we can define the distance of a stabilizer code. However, in contrast to the classical case, there are many errors that act trivially on the code space and do not affect the logical information. We define the distance as the smallest weight of an undetectable error that affects the state of the logical qubit, i.e.

d := min { wt_P(e) : e ∈ N(S) \ S̄ } ,

where N(S) := {e ∈ P̄_n : ⟨g^(i), e⟩_P = 0 ∀i} is the set of undetectable errors and S̄ is the image of S in P̄_n. The pure distance is defined analogously, but without excluding the stabilizers, i.e. as the smallest weight of any non-trivial undetectable error.
The pure distance and the distance can differ significantly. For example, the distance of the toric code is equal to the lattice size. The pure distance, on the other hand, is 4 independent of the lattice size, since the weight of any star or plaquette stabilizer is 4. In practice, the stabilizer measurements themselves can also be faulty. In this case, one should measure additional elements of S to mitigate the effect of measurement errors. This is captured by the framework of quantum data-syndrome codes, which allow for a unified treatment of data and measurement errors [26][27][28]. We now give the basic definitions, following [26]. In the context of data-syndrome codes, errors are described by a data and a measurement part, e = (e_d, e_m) ∈ P̄_n × F_2^m. The swapping of X- and Z-bits now only applies to the data bits, i.e. ē = (ē_d, e_m) for e ∈ F_2^{2n+m}. We extend the symplectic product to data-syndrome codes by

  ⟨e, e′⟩_DS := ⟨e_d, e′_d⟩_P + e_m · e′_m.

A quantum data-syndrome code is defined by a stabilizer code on n physical qubits with generators g^(1), …, g^(l) ∈ P_n ≅ F_2^{2n} and a classical code that encodes l bits into m bits. We can always write the generator matrix of the classical code in the systematic form G = (1_l | A). Instead of just the generators g^(1), …, g^(l), we measure the stabilizers f^(1), …, f^(m) defined by

  f^(j) := Σ_{i=1}^l G_{ij} g^(i)  (in the phase space representation),

so that the first l measured stabilizers are the generators themselves. The measurements of the stabilizers can then be described by the parity check matrix

  H_DS = (F | 1_m),

where the rows of F ∈ F_2^{m×2n} are the phase space representations of f^(1), …, f^(m), and the identity part describes the effect of measurement errors, as seen by the following discussion. If an error e = (e_d, e_m) ∈ P̄_n × F_2^m occurred, the syndrome O(e) is then described by

  O(e) = H_DS ē = F ē_d + e_m.

Naturally, the distance of a data-syndrome code cannot be larger than that of the underlying quantum code. Furthermore, it is not hard to see from the definitions that the pure distance of a data-syndrome code is the minimum of its distance and the pure distance of the underlying stabilizer code. Thus, the pure distance is limited primarily by the underlying quantum code.
All in all, measurement errors and data errors can be treated in a unified way, analogous to a standard stabilizer code.
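This unified treatment can be sketched numerically. The matrices below are illustrative toy values for a 3-qubit repetition code whose measured stabilizers are ZZI, IZZ and their (redundant) product ZIZ; the structure shown is that the identity block routes each measurement-bit flip to exactly one syndrome bit.

```python
import numpy as np

# Data-syndrome syndrome map O(e) = H_DS @ bar(e) over F_2, with H_DS = (F | 1_m).
n, m = 3, 3
# Rows of F: phase-space vectors (x-bits | z-bits) of ZZI, IZZ, ZIZ.
F = np.array([[0, 0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1]])
H_DS = np.hstack([F, np.eye(m, dtype=int)])

def bar(e_d):
    """Swap X-bits and Z-bits of the data part."""
    return np.concatenate([e_d[n:], e_d[:n]])

def syndrome(e_d, e_m):
    return (H_DS @ np.concatenate([bar(e_d), e_m])) % 2

x1 = np.array([1, 0, 0, 0, 0, 0])   # X error on qubit 1
no_flip = np.zeros(m, dtype=int)
flip2 = np.array([0, 1, 0])         # faulty readout of the second stabilizer
print(syndrome(x1, no_flip))        # data error alone
print(syndrome(x1, flip2))          # same data error plus a measurement error
```

Comparing the two printed syndromes shows the measurement error flipping exactly the second syndrome bit, as dictated by the identity block.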

Identifiability results for quantum codes
Now we extend the identifiability results from classical to quantum data-syndrome codes, by applying analogous arguments to the phase space representation. The explicit example of the toric code is discussed in Section 2.1.
We consider a quantum data-syndrome code on n qubits and m measurement bits, and set N := 2n + m. Similar to the classical case, we consider an error model described by independent channels on some selections of qubits and measurement bits. We also allow that some of the channels contain only X- or Z-errors on some qubits. Thus, in the phase space representation, we consider a set of channel supports Γ ⊆ 2^[N], and for each γ ∈ Γ an error distribution P_γ : 2^[N] → [0, 1] that is only supported on 2^γ, where we again identify binary vectors in F_2^N with subsets of [N]. Since the total error is a sum of independently occurring errors, the total error distribution is again given by a Boolean convolution, P = ∗_{γ∈Γ} P_γ. We denote the collection of bit sets in phase space representation that only support detectable errors as

  Γ^(D) := {γ ⊆ [N] : O(e) ≠ 0 ∀ e ⊆ γ, e ≠ 0}  (38)

and denote Γ̄^(D) = {γ̄ : γ ∈ Γ^(D)}. Using these definitions, we can state our main result.
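The Boolean convolution combining independent channels is just XOR-convolution of their distributions. The following sketch uses illustrative toy supports and rates.

```python
# Total error distribution as a Boolean (XOR) convolution of independent channels.
def convolve_xor(P, Q):
    """(P * Q)(e) = sum over e1 ^ e2 = e of P(e1) Q(e2)."""
    R = {}
    for e1, p1 in P.items():
        for e2, p2 in Q.items():
            e = tuple(a ^ b for a, b in zip(e1, e2))
            R[e] = R.get(e, 0.0) + p1 * p2
    return R

# Two independent channels on N = 3 phase-space bits: one supported on bit 0,
# one on bit 2 (toy values).
P0 = {(0, 0, 0): 0.9, (1, 0, 0): 0.1}
P2 = {(0, 0, 0): 0.8, (0, 0, 1): 0.2}
P = convolve_xor(P0, P2)
print(P[(1, 0, 1)])  # both errors occur together with probability 0.1 * 0.2
```

The resulting distribution is again normalized, and high-weight errors such as (1, 0, 1) appear even though each individual channel has low weight, which is exactly the situation covered by the main theorem.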
Theorem 7 (General identifiability condition). Consider a quantum data-syndrome code with n qubits and m measurement bits subject to noise described by the channel supports Γ ⊆ 2 [N ] , where N := 2n + m. Assume that any union of two channel supports only supports detectable errors, i.e. for all γ 1 , γ 2 ∈ Γ, γ 1 ∪ γ 2 ∈ Γ (D) . Furthermore assume P γ (0) > 0.5 for all γ ∈ Γ. Then the total error distribution P is identifiable from the syndrome statistics.
As in the classical case, we can also consider the set of "independently occurring errors" instead of the set of channel supports Γ. As explained in Appendix B, an equivalent condition to γ_1 ∪ γ_2 ∈ Γ^(D) for all γ_1, γ_2 ∈ Γ in terms of these independent errors is that O(e_1) ≠ 0 and O(e_1 + e_2) ≠ 0 for all distinct non-zero e_1, e_2 ∈ Γ̂. Thus, we require that independently occurring errors have different syndromes. As also discussed in the classical case, this does not preclude that different errors with the same syndrome occur frequently, but they must arise as a combination of independent errors. The most important implication of Theorem 7 is the following.

Corollary 8 (Pure distance and identifiability). Consider a quantum data-syndrome code of pure distance d_p on n qubits and m measurement bits, subject to noise described by channel supports Γ ⊆ 2^[N] such that every γ ∈ Γ has Pauli weight at most t := ⌊(d_p − 1)/2⌋ and P_γ(0) > 0.5. Then the total error distribution P is identifiable from the syndrome statistics.
Proof. If the quantum data-syndrome code has pure distance d_p, then Γ^(D) contains any γ ⊆ [N] with Pauli weight at most d_p − 1 (when viewed as a binary vector). For any γ_1, γ_2 ∈ Γ, the union γ_1 ∪ γ_2 has Pauli weight at most 2t ≤ d_p − 1, so γ_1 ∪ γ_2 ∈ Γ^(D) and Theorem 7 applies.
In other words, with a code of pure distance d_p we can estimate Pauli noise that is correlated across ⌊(d_p − 1)/2⌋ combined qubits and measurement bits. This proves the informal Theorem 1. One could also make stronger independence assumptions, for example that data and measurement errors occur independently of each other. In this case, one can consider the pure distance d_Q of the underlying quantum code and the distance d_C of the measurement code separately, and can estimate correlations across at least ⌊(d_Q − 1)/2⌋ data qubits and ⌊(d_C − 1)/2⌋ measurement bits. Similarly, for CSS codes, one can consider separate X- and Z-distances if one assumes that X- and Z-errors occur independently.
Let us now discuss the proof of Theorem 7, which consists of carefully applying the arguments from the classical case to the phase space representation. In this framework, the main difference to the classical case is that the moments must be defined using the symplectic product instead of the normal dot product if we want them to match the measured expectation values. Thus, for any subset a ⊆ [N], we define

  E(a) := Σ_{e ∈ F_2^N} (−1)^{⟨a, e⟩_DS} P(e).

For s ∈ S_DS, E(s) is again exactly the expectation value of the measurement of the stabilizer in repeated rounds of error correction. However, the relation between moments and Fourier coefficients now contains a "twist", i.e.

  E(a) = P̂(ā).

A similar connection between measurements and Fourier coefficients has recently been pointed out by Flammia and O'Donnell [15]. Because P_γ is only supported on 2^γ, P̂_γ(a) = P̂_γ(a ∩ γ) and thus E_γ(a) = E_γ(a ∩ γ̄), where E_γ denotes the moments of P_γ. It follows that

  E(a) = Π_{γ∈Γ} E_γ(a ∩ γ̄).

We can define the transformed moments F(b) via the inclusion-exclusion transform (12) as in the classical case and obtain

  E(a) = Π_{b∈Γ̂, b⊆a} F(b).  (43)

By replacing γ with γ̄ in Lemma 2, we obtain F(a) = 1 if there is no γ ∈ Γ such that a ⊆ γ̄. It thus suffices to know the transformed moments F(b) for b ∈ Γ̂, where

  Γ̂ := {b ⊆ [N] : b ≠ ∅ and b ⊆ γ̄ for some γ ∈ Γ}.

The problem of learning the error rates has thus been reduced to the problem of learning the transformed moments from the measured expectation values. Explicitly, we need to solve the following equation system, which is analogous to the classical case. Given the expectation values E(s) for all s ∈ S_DS, find (F(b))_{b∈Γ̂} such that

  E(s) = Π_{b∈Γ̂} F(b)^{D_{s,b}}  ∀ s ∈ S_DS.  (44)

This system can be described by the coefficient matrix D, whose rows are labeled by elements of S_DS and whose columns are labeled by elements of Γ̂, with entries

  D_{s,b} = 1 if b ⊆ s, and D_{s,b} = 0 otherwise.  (45)

We are now in a position to finish the proof of Theorem 7.
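The "twist" can be checked directly on a toy single-qubit distribution (the rates below are arbitrary illustrative values): the symplectic moment at a equals the plain Fourier coefficient at ā.

```python
from itertools import product

def bar(a):
    """Swap X-bits and Z-bits."""
    n = len(a) // 2
    return a[n:] + a[:n]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % 2

def moment(P, a):
    """Symplectic moment: <a, e>_P = bar(a) . e."""
    return sum(p * (-1) ** dot(bar(a), e) for e, p in P.items())

def fourier(P, b):
    """Coefficient with the ordinary dot product."""
    return sum(p * (-1) ** dot(b, e) for e, p in P.items())

# Toy error rates for I, X, Z, Y in the (x, z) representation.
P = {(0, 0): 0.85, (1, 0): 0.05, (0, 1): 0.07, (1, 1): 0.03}
for a in product((0, 1), repeat=2):
    assert abs(moment(P, a) - fourier(P, bar(a))) < 1e-12
print(moment(P, (0, 1)))  # E(Z): signs flip for the anticommuting X and Y rates
```

The printed moment is 0.85 − 0.05 + 0.07 − 0.03, i.e. the X and Y rates enter with a minus sign because they anticommute with Z.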
Proof of Theorem 7. First, we show that the coefficient matrix D defined in (45) has full rank. Note that, by the definition of Γ^(D), if γ ∈ Γ^(D), then also a ∈ Γ^(D) for any a ⊆ γ. The assumption that γ_1 ∪ γ_2 ∈ Γ^(D) for all γ_1, γ_2 ∈ Γ thus implies that a ∪ b ∈ Γ̄^(D) for all a, b ∈ Γ̂. It is proven in Appendix C (Lemma 10) that the elements of S_DS look locally uniformly random on any element of Γ̄^(D). The same arguments as in the proof of Theorem 3 thus imply that D^T D can be re-scaled to be a principal sub-matrix of the matrix M_t from (23) (for t = max{|a| : a ∈ Γ̂}). Since M_t is positive-definite by Lemma 6, we conclude that D^T D, and thus D, is of full rank. By [35, Proposition 2], the re-scaled moments F(a), and thus also the standard moments E(a), can be estimated up to their sign. If we assume all moments E(a) to be positive, then the estimate is unique. A sufficient condition for all moments E(a) to be positive is that P_γ(0) > 0.5 for all γ ∈ Γ.
Similar to the classical case, the appearance of multiple solutions, differing by the signs of some moments, reflects the symmetries of the problem. This becomes especially apparent when considering the simple case of a stabilizer code with single-qubit noise and no measurement errors. In this case, errors on each qubit are independent. Thus, for a stabilizer S = S_1 ⊗ ··· ⊗ S_n ∈ P_n, we have E(S) = Π_{i∈[n]} E(S_i), which is a simpler form of the equation system (44). Consider a representative L ∈ P_n of a logical operator (including stabilizers). Replacing each single-qubit moment E(S_i) by (−1)^{⟨L_i, S_i⟩_P} E(S_i) leaves all stabilizer expectation values invariant, since Π_{i∈[n]} (−1)^{⟨L_i, S_i⟩_P} = (−1)^{⟨L, S⟩_P} = 1 for every stabilizer S. Every logical operator thus generates a sign symmetry of the equation system. In summary, the estimation problem is best phrased in terms of moments instead of error rates. From this perspective, estimating the error distribution boils down to solving the polynomial system (44). If the correlations in the error distribution are small enough, this system has a finite number of discrete solutions. These solutions are described by symmetries related to the logical operators of the code. Under the additional mild assumption that the error rates are smaller than 1/2, the solution is unique.
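This sign symmetry can be verified numerically. The sketch below uses the 3-qubit repetition code with logical operator L = XXX; the single-qubit moments are arbitrary toy values.

```python
# Sign symmetry generated by a logical operator: flipping the sign of every
# single-qubit moment E_i(P) with <L_i, P>_P = 1 leaves all stabilizer
# expectations E(S) = prod_i E_i(S_i) unchanged, since L commutes with S.
ANTI = {("X", "Z"), ("Z", "X"), ("X", "Y"), ("Y", "X"),
        ("Y", "Z"), ("Z", "Y")}  # anticommuting single-qubit pairs

def sp(a, b):
    """Single-qubit symplectic product."""
    return 1 if (a, b) in ANTI else 0

# Toy single-qubit moments for three qubits.
E = [{"I": 1.0, "X": 0.9, "Y": 0.85, "Z": 0.8},
     {"I": 1.0, "X": 0.95, "Y": 0.9, "Z": 0.7},
     {"I": 1.0, "X": 0.8, "Y": 0.75, "Z": 0.85}]

def expectation(E, stab):
    out = 1.0
    for Ei, Si in zip(E, stab):
        out *= Ei[Si]
    return out

L = "XXX"  # logical operator of the 3-qubit repetition code
E_flipped = [{P: (-1) ** sp(Li, P) * v for P, v in Ei.items()}
             for Ei, Li in zip(E, L)]

for stab in ["ZZI", "IZZ", "ZIZ", "III"]:  # all stabilizers of the code
    assert abs(expectation(E, stab) - expectation(E_flipped, stab)) < 1e-12
```

Every Z-type stabilizer overlaps the support of XXX on an even number of qubits, so the pairwise sign flips cancel in the product.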

Practical estimation
While the main result of this paper lies in establishing identifiability conditions in principle, our proofs also suggest a practical method to actually perform the estimation. Here, we briefly sketch this method and heuristically comment on its sample complexity. A detailed analysis is ongoing work.
We suggest a method of moments estimator, i.e. to use empirical expectation values Ê(s) in place of E(s) in (44). The basic steps (for the quantum case, but the classical case is exactly analogous) are the following:
1. Perform M rounds of syndrome measurements, and use them to compute the empirical expectation value Ê(s) for each stabilizer s.
2. Insert these empirical expectation values into (44) and solve the resulting binomial system to obtain an estimate (F̂(b))_{b∈Γ̂} of the transformed moments.
3. Insert this estimate into (43) to obtain an estimate of the moments.
4. Perform the inverse Fourier transform (4) to obtain an estimate P̂ of the Pauli error rates.
The relevant binomial system can be solved, e.g., using the methods described by Chen and Li [35]. It will generally be over-determined. In principle, one can select a subset of equations such that the system is exactly determined, and then solve it analytically, resulting in a closed-form expression for the estimate of the error rates in terms of the empirically measured expectation values. This is illustrated by the example of the toric code (Section 2.1), for which our method reproduces the estimator suggested by Spitz et al. [7]. This estimator has also been applied by Varbanov et al. [38]. However, since the empirical expectations contain a certain error, this solution might not always yield a proper probability distribution if the number of samples used is low. Furthermore, for an over-determined system, selecting a subset of equations discards some of the measured information, which can reduce the accuracy of the estimate. In such cases, it might be preferable to instead use a least-squares solver for the over-determined system.
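When all moments are known to be positive (error rates below 1/2), taking logarithms turns the over-determined binomial system into a linear least-squares problem. The coefficient matrix and moments below are illustrative toy values, not derived from a specific code.

```python
import numpy as np

# log E = D log F: solve the over-determined binomial system in log space.
D = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 0],
              [1, 1, 1]])            # toy coefficient matrix, full column rank
F_true = np.array([0.9, 0.8, 0.95])  # toy transformed moments
E = np.prod(F_true ** D, axis=1)     # noiseless "measured" expectation values

log_F, *_ = np.linalg.lstsq(D, np.log(E), rcond=None)
F_est = np.exp(log_F)
print(np.max(np.abs(F_est - F_true)))  # the moments are recovered
```

With noisy empirical expectation values in place of E, the same least-squares step uses all retained equations instead of discarding information, at the price of propagating the noise through the logarithm.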
Since our algorithm is designed for an online setting where only syndrome information is available, the sample complexity must necessarily be worse than that of algorithms using arbitrary measurements, such as [17] and [15]. Each syndrome measurement contains a measurement of each stabilizer generator; therefore, the expectation values of all stabilizers can be estimated simultaneously and no large measurement overhead is needed. However, the estimation error has to be propagated through the binomial system, the inclusion-exclusion transform and the inverse Fourier transform. Heuristically, we expect a sample complexity similar to that of factor graph learning [39], which was applied to Pauli channel estimation by Flammia and Wallman [17, Result 3]. However, there will be a complicating factor accounting for the conditioning of the binomial system described by the coefficient matrix D. Compared to other algorithms designed for the online setting [2,7,[19][20][21][22][23][24], we expect that our method compares favorably, since those algorithms are either designed for a limited class of codes and Pauli noise models, rely on likelihood functions based on syndrome probabilities (rather than moments), which quickly become intractable, or only work in the limit of vanishing error rates.
In practice, it will not be necessary to estimate the expectation values of all stabilizers; a limited subset will be sufficient, depending on how many equations of the binomial system are retained. We expect that it is sufficient to consider only a selection of neighboring stabilizers for each qubit. For example, in the toric code example of Section 2.1, only three expectation values are needed per qubit, independent of the system size.
Finally, in case the Pauli noise model is not strictly identifiable, we expect that the estimate will combine the rates of indistinguishable errors, but still give a good estimate of these total error rates.

Discussion
In this work, we considered the estimation of Pauli channels just from the measurements done anyway during the decoding of a quantum code. We established a general condition for the feasibility of this estimation and explained the relation to the pure distance of the code. Essentially, the estimation is possible as long as the noise has no correlations which exceed the detection capabilities of the code. This result does not rely on the limit of vanishing error rates, and applies even if high weight errors occur frequently, as long as these high weight errors arise as a combination of independent lower weight errors. Our results cover general stabilizer codes and quantum data-syndrome codes [26][27][28] and also take into account measurement errors. The previously proposed adaptive weight estimator [7] can be seen as a special case of our general results, since it solves a specific instance of the system (18) for single qubit noise and a limited class of codes, those that admit a minimum weight matching decoder.
An interesting new problem is an analogue of our results on the logical level. Since we cannot identify the rates of undetectable errors, there is no full identifiability for correlations larger than the pure distance. However, we expect that the logical channel can still be identified, as long as there are no correlations larger than the actual distance of the code. This would practically allow for an analogue of Theorem 1, using the distance instead of the pure distance. Furthermore, our proof naturally suggests a method of moments estimator for the Pauli error rates by inserting the empirical moments in (18). A detailed analysis of this method, including rigorous performance guarantees as well as a better assessment of its sample complexity, is ongoing work.

B Equivalent condition on channel supports
Below Theorems 3 and 7, we stated two equivalent characterizations of detectable errors in terms of channel supports. The following lemma formalizes this statement. The proof is the same in both the classical and the quantum case.

Lemma 9. Let γ_1, γ_2 ⊆ [N]. Then γ_1 ∪ γ_2 ∈ Γ^(D) if and only if O(e_1) ≠ 0 and O(e_1 + e_2) ≠ 0 for all non-zero e_1 ⊆ γ_1 and e_2 ⊆ γ_2 with e_1 ≠ e_2.

Proof. Note that, by the definition of Γ^(D), if γ ∈ Γ^(D) then e ∈ Γ^(D) for all e ⊆ γ. Thus, γ_1 ∪ γ_2 ∈ Γ^(D) implies that e_1, e_2 ∈ Γ^(D) for all e_1 ⊆ γ_1 and e_2 ⊆ γ_2 and, in particular, O(e_1) ≠ 0 for non-zero e_1. Furthermore, for any such e_1, e_2, we have that e_1 + e_2 ⊆ γ_1 ∪ γ_2, since the addition of binary vectors corresponds to the symmetric difference of the indicator sets. This implies, in particular, O(e_1 + e_2) ≠ 0 for e_1 ≠ e_2. The other direction of the equivalence is proven similarly by noting that any subset e ⊆ γ_1 ∪ γ_2 can be written as e = e_1 + e_2 for appropriate choices of e_1 ⊆ γ_1 and e_2 ⊆ γ_2, where either e_1 or e_2 may be the empty set.

C Orthogonal array properties
In this section, we prove local randomness properties of stabilizer codes. Consider a quantum data-syndrome code with stabilizers S DS on n qubits and m measurement bits. Set N := 2n + m. By T we denote the |S DS | × N -matrix formed by all elements in the span of the rows of the parity check matrix H DS , i.e. the rows of T are exactly the elements of S DS . Then, the following local randomness property holds.

Lemma 10.
Let γ ∈ Γ̄^(D). In the restriction T|_γ of T to the columns in γ, every bit-string appears equally often as a row.
We adapt the proof of [36, Theorem 3.29] to our situation.

Proof. Since γ ∈ Γ̄^(D), we have γ = δ̄ for some δ ∈ Γ^(D), so O(e) = H_DS ē ≠ 0 for every non-zero e ⊆ δ; equivalently, the restriction H_DS|_γ of the parity check matrix to the columns in γ is of rank |γ|. The rows of T|_γ are, by definition of T, the vectors ζ^T H_DS|_γ for ζ ∈ F_2^l, where l is the number of rows of H_DS. The number of times a bit-string z appears as a row in T|_γ is thus equal to the number of vectors ζ ∈ F_2^l with ζ^T H_DS|_γ = z. Since H_DS|_γ has rank |γ|, this number is 2^{l−|γ|}, independent of z.
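The counting argument can be checked on a toy parity check matrix (the matrix and column choice below are illustrative, not taken from a specific code): on any set of columns where the matrix has full rank, the row-span restricted to those columns is uniformly distributed.

```python
import numpy as np
from itertools import product
from collections import Counter

# Local randomness: every bit-string on gamma appears equally often among the
# rows of the span of H restricted to gamma, provided H|gamma has rank |gamma|.
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]])  # toy parity check matrix
l, N = H.shape
rows = [tuple((np.array(z) @ H) % 2) for z in product([0, 1], repeat=l)]

gamma = [0, 2]  # H restricted to these columns has rank |gamma| = 2
counts = Counter(tuple(r[i] for i in gamma) for r in rows)
print(counts)  # each of the 4 bit-strings on gamma appears 2^(l-|gamma|) times
```

Here l = |γ| = 2, so each of the four restricted bit-strings appears exactly once, i.e. 2^{l−|γ|} = 1 times.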
Lemma 5 for classical codes is a direct corollary, which follows by considering only the classical (measurement) part of the data-syndrome code. The computation at the end of the proof of Lemma 6 proceeds by using α_i = n_i + α_{i−1}, inserting the definitions of u^(i) and T^(i) from (60) and simplifying via Pascal's identity; substituting the result back into (72) finishes the proof.

E Connection to Adaptive Weight Estimator
Here, we show that the solution for the toric code derived in Section 2.1 coincides with the solution given by Spitz et al. [7]. The solution derived in Section 2.1 is

  p_4 = 1/2 (1 − √( E(S_1) E(S_2) / E(S_1 S_2) )).  (76)

On the other hand, the solution given by Spitz et al. [7, eq. (14)] is

  p_4 = 1/2 − 1/2 √( (1 − 2P(S_1 = 1))(1 − 2P(S_2 = 1)) / (1 − 2P(S_1 = 1) − 2P(S_2 = 1) + 4P(S_1 = S_2 = 1)) ).  (77)

Here, P is used to denote the probabilities of events under random sampling of the errors. Note that in [7] the result is phrased in terms of expectation values of the stabilizer outcomes and errors viewed as taking values 0 or 1, which directly translates into the probabilities above. To see that these two solutions coincide, first notice that E(Z_4) = 1 − 2p_4. Thus, (76) can be rewritten as

  E(Z_4)^2 = E(S_1) E(S_2) / E(S_1 S_2).

Similarly, we have E(S_i) = 1 − 2P(S_i = 1) and E(S_1 S_2) = 1 − 2P(S_1 ≠ S_2). Using these equations, equality of the solutions can be shown as follows:

  E(S_1) E(S_2) / E(S_1 S_2)
  = (1 − 2P(S_1 = 1))(1 − 2P(S_2 = 1)) / (1 − 2P(S_1 ≠ S_2))
  = (1 − 2P(S_1 = 1))(1 − 2P(S_2 = 1)) / (1 − 2P(S_1 = 1) − 2P(S_2 = 1) + 4P(S_1 = S_2 = 1)),

where in the last equality we used that

  P(S_1 = 1) + P(S_2 = 1) − P(S_1 ≠ S_2)
  = P(S_1 = 1, S_2 = 1) + P(S_1 = 1, S_2 = 0) + P(S_1 = 1, S_2 = 1) + P(S_1 = 0, S_2 = 1) − P(S_1 = 1, S_2 = 0) − P(S_1 = 0, S_2 = 1)
  = 2P(S_1 = S_2 = 1).
This shows that the two solutions (76) and (77) are indeed equivalent.
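As a numerical sanity check of this equivalence, one can simulate i.i.d. Z-errors on the qubits of two star stabilizers sharing one qubit and evaluate the moment-based form E(S_1)E(S_2)/E(S_1S_2) alongside the probability-based form. The error rates and supports below are illustrative toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.03, 0.05, 0.02, 0.08, 0.04, 0.06, 0.01])  # per-qubit Z rates
S1_support = [0, 1, 2, 3]   # index 3 plays the role of the shared qubit "4"
S2_support = [3, 4, 5, 6]

shots = 200_000
errors = rng.random((shots, 7)) < p         # Z-error indicator per qubit
s1 = errors[:, S1_support].sum(axis=1) % 2  # stabilizer outcomes 0/1
s2 = errors[:, S2_support].sum(axis=1) % 2

E1, E2 = 1 - 2 * s1.mean(), 1 - 2 * s2.mean()
E12 = 1 - 2 * (s1 != s2).mean()
p_moment = 0.5 * (1 - np.sqrt(E1 * E2 / E12))

P11 = np.mean((s1 == 1) & (s2 == 1))
p_prob = 0.5 - 0.5 * np.sqrt(
    (1 - 2 * s1.mean()) * (1 - 2 * s2.mean())
    / (1 - 2 * s1.mean() - 2 * s2.mean() + 4 * P11))
print(p_moment, p_prob)  # both close to the true shared-qubit rate 0.08
```

Both expressions are evaluated from the same empirical frequencies, so they agree up to floating-point rounding, and both converge to the shared-qubit error rate as the number of shots grows.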