The factor graph of the [7,4,3] Hamming code corresponding to the parity-check matrix given in Eq. (12). Error nodes are represented as circles and check nodes as squares.

Source publication
Article
Full-text available
Quantum low-density parity-check codes can be decoded using a syndrome-based GF(4) belief propagation decoder (where GF denotes Galois field). However, the performance of this decoder is limited both by unavoidable 4-cycles in the code's factor graph and by the degenerate nature of quantum errors. For the subclass of CSS codes, the number of 4-cycles...

Contexts in source publication

Context 1
... gives the factor graph shown in Fig. 1. In general, a given code does not have a unique factor graph, as the parity-check matrix from which it is defined is not unique. Furthermore, except in the case of a binary code, the mapping from a parity-check matrix to its corresponding factor graph is not one-to-one, as an edge only indicates that H_ij ≠ 0; it does not give the value ...
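To make the mapping concrete, the following Python sketch (not taken from the source publication) builds the edge list of a factor graph directly from a parity-check matrix, placing an edge between check node c_j and error node e_i exactly when H[j, i] ≠ 0. The Hamming parity-check matrix used here is one standard choice and may differ from the matrix in Eq. (12).

```python
import numpy as np

# One standard parity-check matrix for the [7,4,3] Hamming code
# (illustrative; it may differ from the matrix given in Eq. (12)).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# An edge connects check node c_j and error node e_i iff H[j, i] != 0.
edges = [(f"c{j + 1}", f"e{i + 1}")
         for j in range(H.shape[0])
         for i in range(H.shape[1])
         if H[j, i] != 0]

print(edges)  # 12 edges for this matrix (each row has weight four)
```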
Context 2
... number of edges it contains. A path is a walk containing no repeated nodes or edges, with the exception that the first and last node can be the same, in which case the path is called a cycle. The bipartite nature of a code's factor graph ensures that the size of all cycles is even and greater than or equal to four. As an example, the walk c_1, …, c_1 in Fig. 1 is a 4-cycle (that is, a cycle of length four). Typically a code's factor graph will not be cycle free (that is, it will not be a tree), as if a code has such a representation then its distance is bounded by [15] ...
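Because the factor graph is bipartite, every 4-cycle corresponds to a pair of check nodes sharing two error nodes, i.e., a pair of rows of H whose nonzero entries overlap in two or more columns. The sketch below illustrates this standard counting argument; it is an illustrative helper, not code from the source publication.

```python
from itertools import combinations
import numpy as np

def count_4cycles(H):
    """Count 4-cycles in the factor graph of a check matrix H.

    Each pair of rows sharing t nonzero columns contributes C(t, 2)
    4-cycles, since any two shared columns close a length-4 cycle.
    """
    H = (np.asarray(H) != 0).astype(int)
    total = 0
    for r1, r2 in combinations(range(H.shape[0]), 2):
        t = int(np.sum(H[r1] & H[r2]))
        total += t * (t - 1) // 2
    return total
```

For the illustrative Hamming matrix shown earlier, each of the three row pairs overlaps in exactly two columns, so this count gives three 4-cycles.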
Context 3
... results presented for this code and all codes that follow are on the depolarizing channel. The effect of augmentation density and random perturbation strength for decoders with N = 10 on this code is shown in Fig. 10. Based on these results, we have selected values of δ = 0.3 for all augmented decoders, δ = 200 for the random perturbation GF(4) decoder, and δ = 400 for the random perturbation supernode ...
Context 4
... FER performance and average required iterations for decoders with N = 100 maximum attempts are shown in Figs. 11 and 12, respectively. The results here are quite similar to those for the bicycle code. Again, the adjusted decoder gives performance similar to that of the supernode decoder. Furthermore, the random perturbation and EFB decoders perform similarly to one another. The augmented GF(2) decoder is outperformed by all modified GF(4) and supernode ...
Context 5
... effect of these undetected errors can also be seen in Fig. 13. For the bicycle code, the reduction in FER with increasing maximum number of iterations was approximately linear on a log-log plot. However, the reduction in FER for the BIBD code can be seen to taper off; that is, increasing the maximum number of attempts has diminishing returns. Partially as a result of this, we only require ...
Context 6
... the elements of the J × L and K × L base matrices H̃_X and H̃_Z, respectively, where 1 ≤ J, K ≤ L/2. To construct our code, we have used the perfume (23,8,20) (this gives L = 22) and have chosen J = K = 6. The effect of augmentation density and random perturbation strength for decoders with N = 10 on this code is shown in Fig. 14. Note that for this code we can only use GF(2) and GF(4) based decoders as it is not dual containing. Based on these results, we have selected values of δ = 0.07 for the augmented GF(2) decoder, δ = 0.05 for the augmented GF(4) decoder, δ = 50 for the random perturbation GF(4) ...
Context 7
... FER performance and average required iterations for decoders with N = 100 maximum attempts are shown in Figs. 15 and 16, respectively. On the previous two codes, the augmented GF(2) decoder gave a similar or lower FER than the adjusted decoder. This is not the case here, with the adjusted decoder giving a significantly lower FER. This suggests that the augmented decoder has some success in alleviating the effect of 4-cycles in the code's factor graph ...
Context 8
... are present when using a GF(2) decoder for this code]. The random perturbation, EFB, and augmented GF(4) decoders all perform similarly on this code. The combined decoder performs worse than the modified GF(4) decoders. Like the bicycle code, all decoding errors observed for this code were detected errors. This is reflected in Fig. 17, which shows an approximately linear reduction in FER with an increasing number of maximum attempts on a log-log plot for all decoders considered. ...
Context 9
... constructed H̃_X from four 100 × 100 circulant matrices of weight five. Each of H̃_X and H̃_Z yields a factor graph with 1700 4-cycles, compared to the 2737 4-cycles of the bicycle code considered in Sec. IV A. The effect of augmentation density and random perturbation strength for decoders with N = 10 on this code is shown in Fig. 18. Based on these results, we have selected values of δ = 0.1 for the augmented GF(2) decoder, δ = 0.15 for the augmented GF(4) decoder, and δ = 100 for the random perturbation GF(4) decoder. The FER performance and average required iterations for decoders with N = 100 maximum attempts are shown in Figs. 19 and 20, respectively. Again, the ...
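As a rough illustration of the circulant building blocks mentioned in this excerpt, the sketch below generates a fixed-weight circulant block. The size matches the description above, but the offsets are arbitrary placeholders, not the ones used to construct the code in the source publication.

```python
import numpy as np

def circulant(size, offsets):
    """Binary circulant matrix whose row r has ones at columns (r + o) mod size."""
    M = np.zeros((size, size), dtype=int)
    for r in range(size):
        for o in offsets:
            M[r, (r + o) % size] = 1
    return M

# Illustrative 100 x 100 weight-five block; the offsets are hypothetical.
C = circulant(100, offsets=[0, 3, 17, 42, 81])
assert all(int(w) == 5 for w in C.sum(axis=1))  # every row has weight five
```

Four such blocks placed side by side would give a 100 × 400 matrix of the kind described, and the 4-cycle count of the resulting factor graph can be checked with the counting function sketched earlier.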
Context 10
... for decoders with N = 10 on this code is shown in Fig. 18. Based on these results, we have selected values of δ = 0.1 for the augmented GF(2) decoder, δ = 0.15 for the augmented GF(4) decoder, and δ = 100 for the random perturbation GF(4) decoder. The FER performance and average required iterations for decoders with N = 100 maximum attempts are shown in Figs. 19 and 20, respectively. Again, the adjusted decoder outperforms the augmented GF(2) decoder; however, the gap in their performance is smaller than for the quasicyclic code of Sec. IV C. The EFB and augmented GF(4) decoders perform similarly on this code, both outperforming the random perturbation decoder. The combined decoder is again outperformed by all ...
Context 11
... to the code's moderately low distance of d ≤ 10. For example, at p = 0.015 approximately 15% of errors are undetected for both the EFB and augmented GF(4) decoders. However, this is not a significant enough fraction of errors to prevent the FER from reducing near linearly on a log-log plot with an increasing maximum number of attempts, as shown in Fig. ...

Citations

... However, quantum LDPC codes are known to be classically degenerate [1], thus the sparsity of their Tanner graph no longer fulfills its role of enabling efficient MP decoding, but merely acts as an enabler for fault tolerance (e.g., fault-tolerant syndrome extraction and fault-tolerant operations on logical qubits). Consequently, a significant effort has been devoted over the last few years to efficient decoding of quantum LDPC codes, by combining the MP decoding with a post-processing step [2][3][4][5] and/or improving the MP decoding performance itself [6][7][8][9][10]. ...
... Numerically, MS+OSD is a good approximation of the classical ML for the toric code (see for example [3]). However since the OSD post-processing is very costly, namely O(n^3), it is interesting to run the best possible decoder beforehand to avoid unnecessary use of OSD. Looking at the curves from Figure 10, this yields a quadratic improvement in the number of calls of OSD, only needing to call OSD with probability ≈ p^4 compared to ≈ p^2. ...
Preprint
Full-text available
Kitaev's toric code is one of the most prominent models for fault-tolerant quantum computation, currently regarded as the leading solution for connectivity constrained quantum technologies. Significant effort has been recently devoted to improving the error correction performance of the toric code under message-passing decoding, a class of low-complexity, iterative decoding algorithms that play a central role in both theory and practice of classical low-density parity-check codes. Here, we provide a theoretical analysis of the toric code under min-sum (MS) decoding, a message-passing decoding algorithm known to solve the maximum-likelihood decoding problem in a localized manner, for codes defined by acyclic graphs. Our analysis reveals an intrinsic limitation of the toric code, which confines the propagation of local information during the message-passing process. We show that if the unsatisfied checks of an error syndrome are at distance greater or equal to 5 from each other, then the MS decoding is locally blind: the qubits in the direct neighborhood of an unsatisfied check are never aware of any other unsatisfied checks, except their direct neighbor. Moreover, we show that degeneracy is not the only cause of decoding failures for errors of weight at least 4, that is, the MS non-degenerate decoding radius is equal to 3, for any toric code of distance greater or equal to 9. Finally, complementing our theoretical analysis, we present a pre-processing method of practical relevance. The proposed method, referred to as stabiliser-blowup, has linear complexity and allows correcting all (degenerate) errors of weight up to 3, providing quadratic improvement in the logical error rate performance, as compared to MS only.
... Hardware-efficient fault-tolerant quantum computation demonstrations using high-rate QLDPC codes are exciting and also compatible with recently demonstrated experimental capabilities [10][11][12]. Similarly, there has been significant progress in improving the iterative decoding performance of finite-length QLDPC codes using postprocessing and heuristic techniques [13][14][15][16][17]. However, the QLDPC decoding problem still has unanswered questions and, in particular, faster decoders for QLDPC codes are needed to meet the stringent timing constraints in hardware. ...
Article
Full-text available
In practical quantum error correction implementations, the measurement of syndrome information is an unreliable step—typically modeled as a binary measurement outcome flipped with some probability. However, the measured syndrome is in fact a discretized value of the continuous voltage or current values obtained in the physical implementation of the syndrome extraction. In this paper, we use this “soft” or analog information to benefit iterative decoders for decoding quantum low-density parity-check (QLDPC) codes. Syndrome-based iterative belief propagation decoders are modified to utilize the soft syndrome to correct both data and syndrome errors simultaneously. We demonstrate the advantages of the proposed scheme not only in terms of comparison of thresholds and logical error rates for quasi-cyclic lifted-product QLDPC code families but also with faster convergence of iterative decoders. Additionally, we derive hardware (FPGA) architectures of these soft syndrome decoders and obtain similar performance in terms of error correction to the ideal models even with reduced precision in the soft information. The total latency of the hardware architectures is about 600 ns (for the QLDPC codes considered) in a 20 nm CMOS process FPGA device, and the area overhead is almost constant—less than 50% compared to min-sum decoders with noisy syndromes.
... Several methods have been proposed in the literature to generalize belief propagation to quantum codes [50,[81][82][83][84][85][86]. For instance, one can decode X and Z errors separately using the classical version of the BP. ...
... For instance, one can decode X and Z errors separately using the classical version of the BP. The potential correlations between X and Z errors can be taken into account by first decoding X errors, adjusting the channel probabilities based on the correction, and decoding Z errors with this adjusted probability, as proposed in Ref. [84]. It has also been proposed to send vector instead of scalar messages, to compute the probability P(e i = W|s) that a Pauli error W ∈ {I , X , Y, Z} has occurred on each qubit i [82]. ...
Article
Full-text available
Tailored topological stabilizer codes in two dimensions have been shown to exhibit high-storage-threshold error rates and improved subthreshold performance under biased Pauli noise. Three-dimensional (3D) topological codes can allow for several advantages including a transversal implementation of non-Clifford logical gates, single-shot decoding strategies, and parallelized decoding in the case of fracton codes, as well as construction of fractal-lattice codes. Motivated by this, we tailor 3D topological codes for enhanced storage performance under biased Pauli noise. We present Clifford deformations of various 3D topological codes, such that they exhibit a threshold error rate of 50% under infinitely biased Pauli noise. Our examples include the 3D surface code on the cubic lattice, the 3D surface code on a checkerboard lattice that lends itself to a subsystem code with a single-shot decoder, and the 3D color code, as well as fracton models such as the X-cube model, the Sierpiński model, and the Haah code. We use the belief propagation with ordered statistics decoder (BP OSD) to study threshold error rates at finite bias. We also present a rotated layout for the 3D surface code, which uses roughly half the number of physical qubits for the same code distance under appropriate boundary conditions. Imposing coprime periodic dimensions on this rotated layout leads to logical operators of weight O(n) at infinite bias and a corresponding exp[−O(n)] subthreshold scaling of the logical failure rate, where n is the number of physical qubits in the code. Even though this scaling is unstable due to the existence of logical representations with O(1) low-rate and O(n^{2/3}) high-rate Pauli errors, the number of such representations scales only polynomially for the Clifford-deformed code, leading to an enhanced effective distance.
... A variety of methods have been proposed to address this issue, which can be categorized into two categories. The first category contains approaches that modify the BP decoder itself, for example, message normalization and offsets [9,10], layered scheduling [11,12], and matrix augmentation [13]. The second category focuses on post-processing the output of the BP decoder, e.g., ordered statistics decoding (OSD) [11,14], random perturbation [15], enhanced feedback [16], and stabilizer inactivation [17]. ...
... For QLDPC codes, this approach resembles matrix augmentation [13], where a fraction of the rows of the check matrix are duplicated. The advantage is that the messages associated with the duplicated check node are magnified which helps in breaking the symmetry during decoding and leading to a (hopefully) correct error estimate [13]. ...
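As a hedged sketch of the row-duplication idea described in these excerpts (the augmentation-density parameter and the random selection rule below are illustrative assumptions, not the exact procedure of the cited matrix-augmentation reference), one possible implementation is:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(H, density, rng=rng):
    """Return an augmented check matrix with a random fraction of rows duplicated.

    `density` plays the role of an augmentation density delta: each row is
    independently duplicated with probability `density`. The duplicated rows
    are appended below H, so the augmented matrix describes the same code but
    the corresponding check nodes carry extra weight during BP decoding.
    """
    H = np.asarray(H)
    mask = rng.random(H.shape[0]) < density
    return np.vstack([H, H[mask]])

# Example with a placeholder check matrix; the source publication reports
# selecting delta = 0.3 for its augmented decoders on one of its codes.
H_placeholder = np.eye(10, dtype=int)
H_aug = augment(H_placeholder, density=0.3)
```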
Preprint
Quantum low-density parity-check (QLDPC) codes are promising candidates for error correction in quantum computers. One of the major challenges in implementing QLDPC codes in quantum computers is the lack of a universal decoder. In this work, we first propose to decode QLDPC codes with a belief propagation (BP) decoder operating on overcomplete check matrices. Then, we extend the neural BP (NBP) decoder, which was originally studied for suboptimal binary BP decoding of QLDPC codes, to quaternary BP decoders. Numerical simulation results demonstrate that both approaches as well as their combination yield a low-latency, high-performance decoder for several short to moderate length QLDPC codes.
... Poulin and Chung first applied BP to the decoding of quantum codes [30]. For an introduction on the BP algorithm, see for example [31,12]. Here, we show how to get the standard BP equations from the generalized ansatz. ...
Article
Full-text available
Belief propagation (BP) is well known as a low-complexity decoding algorithm with strong performance for important classes of quantum error correcting codes, notably for the quantum low-density parity check (LDPC) code class of random expander codes. However, it is also well known that the performance of BP breaks down when facing topological codes such as the surface code, where naive BP fails entirely to reach a below-threshold regime, i.e., the regime where error correction becomes useful. Previous works have shown that this can be remedied by resorting to post-processing decoders outside the framework of BP. In this work, we present a generalized belief propagation method with an outer re-initialization loop that successfully decodes surface codes, i.e., as opposed to naive BP it recovers the sub-threshold regime known from decoders tailored to the surface code and from statistical-mechanical mappings. We report a threshold of 17% under independent bit- and phase-flip data noise (to be compared to the ideal threshold of 20.6%) and a threshold value of 14% under depolarizing data noise (compared to the ideal threshold of 18.9%), which are on par with thresholds achieved by non-BP post-processing methods.
... These can be grouped into two categories. The first category contains approaches that modify the BP decoder itself, for example, message normalization and offsets [5], [6], layered scheduling [7], [8], and matrix augmentation [9]. The second category contains methods that apply post-processing to the BP decoder output, e.g., ordered statistics decoding (OSD) [7], [10], random perturbation [11], enhanced feedback [12], and stabilizer inactivation [13]. ...
... One of the initial motivations is to perform more node updates in parallel to reduce the effect of short cycles. For QLDPC codes, this approach resembles matrix augmentation [9], where a fraction of the rows of the check matrix are duplicated. The advantage is that the messages associated with the duplicated check node are magnified which helps in breaking the symmetry during decoding and leading to a (hopefully) correct error estimation [9]. ...
... For QLDPC codes, this approach resembles matrix augmentation [9], where a fraction of the rows of the check matrix are duplicated. The advantage is that the messages associated with the duplicated check node are magnified which helps in breaking the symmetry during decoding and leading to a (hopefully) correct error estimation [9]. We demonstrate the extra benefit of the proposed method with a toy example. ...
Preprint
The recent success in constructing asymptotically good quantum low-density parity-check (QLDPC) codes makes this family of codes a promising candidate for error-correcting schemes in quantum computing. However, conventional belief propagation (BP) decoding of QLDPC codes does not yield satisfying performance due to the presence of unavoidable short cycles in their Tanner graph and the special degeneracy phenomenon. In this work, we propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix. This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency. Furthermore, we propose a novel neural belief propagation decoder based on the quaternary BP decoder of QLDPC codes which leads to further decoding performance improvements.
... Poulin and Chung first applied BP to the decoding of quantum codes [27]. For an introduction on the BP algorithm, see for example [28], [9]. Here, we show how to get the standard BP equations from the generalized ansatz. ...
Preprint
Belief propagation (BP) is well known as a low-complexity decoding algorithm with strong performance for important classes of quantum error correcting codes, notably for the quantum low-density parity check (LDPC) code class of random expander codes. However, it is also well known that the performance of BP breaks down when facing topological codes such as the surface code, where naive BP fails entirely to reach a below-threshold regime, i.e., the regime where error correction becomes useful. Previous works have shown that this can be remedied by resorting to post-processing decoders outside the framework of BP. In this work, we present a generalized belief propagation method with an outer re-initialization loop that successfully decodes surface codes, i.e., as opposed to naive BP it recovers the sub-threshold regime known from decoders tailored to the surface code and from statistical-mechanical mappings. We report a threshold of 17% under independent bit- and phase-flip data noise (to be compared to the ideal threshold of 20.6%) and a threshold value of 14% under depolarizing data noise (compared to the ideal threshold of 18.9%), which are on par with thresholds achieved by non-BP post-processing methods.
... While the approximation is often acceptable when the girth of the graph is large, the presence of short cycles tends to be detrimental to the performance of BP [70,71]. Several methods have been proposed in the literature to generalize belief propagation to quantum codes [49,[72][73][74][75][76][77]. For instance, one can decode X and Z errors separately using the classical version of BP. ...
... For instance, one can decode X and Z errors separately using the classical version of BP. The potential correlations between X and Z errors can be taken into account by first decoding X errors, adjusting the channel probabilities based on the correction, and decoding Z errors with this adjusted probability, as proposed in Ref. [75]. It has also been proposed to send vector instead of scalar messages, to compute the probability P (e i = W |s) that a Pauli error W ∈ {I, X, Y, Z} has occurred on each qubit i [73]. ...
Preprint
Full-text available
Tailored topological stabilizer codes in two dimensions have been shown to exhibit high storage threshold error rates and improved subthreshold performance under biased Pauli noise. Three-dimensional (3D) topological codes can allow for several advantages including a transversal implementation of non-Clifford logical gates, single-shot decoding strategies, parallelized decoding in the case of fracton codes as well as construction of fractal lattice codes. Motivated by this, we tailor 3D topological codes for enhanced storage performance under biased Pauli noise. We present Clifford deformations of various 3D topological codes, such that they exhibit a threshold error rate of $50\%$ under infinitely biased Pauli noise. Our examples include the 3D surface code on the cubic lattice, the 3D surface code on a checkerboard lattice that lends itself to a subsystem code with a single-shot decoder, the 3D color code, as well as fracton models such as the X-cube model, the Sierpinski model and the Haah code. We use the belief propagation with ordered statistics decoder (BP-OSD) to study threshold error rates at finite bias. We also present a rotated layout for the 3D surface code, which uses roughly half the number of physical qubits for the same code distance under appropriate boundary conditions. Imposing coprime periodic dimensions on this rotated layout leads to logical operators of weight $O(n)$ at infinite bias and a corresponding $\exp[-O(n)]$ subthreshold scaling of the logical failure rate, where $n$ is the number of physical qubits in the code. Even though this scaling is unstable due to the existence of logical representations with $O(1)$ low-rate Pauli errors, the number of such representations scales only polynomially for the Clifford-deformed code, leading to an enhanced effective distance.
... The notion of belief propagation has been applied in the quantum setting in many different previous works; here we comment on their relation to the present work. Most closely related is the use of belief propagation for decoding quantum information subject to Pauli errors, as studied by Poulin in [14,15], as well as many others since [16][17][18][19][20][21][22][23]. Here, however, classical BP is sufficient: The task is to infer which error occurred from the classical syndrome information, which only involves the classical conditional probability of the former given the latter. ...
... This is necessary, since we implicitly traced out the systems A_e, Z_e, and S_e. We can store a state of the form in (189) as a list of 2^(k_e−1) tuples (p_i, ρ_i, c_i), i = 0, ..., 2^(k_e−1)−1, where the p_i are non-negative and sum up to one, the ρ_i are real 2×2 matrices, and the c_i ∈ A_B are angle cosines. For simplicity, we actually store the c_i as floating-point numbers and always round them to the closest value in A_B. ...
Article
Full-text available
Recently, Renes proposed a quantum algorithm called belief propagation with quantum messages (BPQM) for decoding classical data encoded using a binary linear code with tree Tanner graph that is transmitted over a pure-state CQ channel [renes_2017], i.e., a channel with classical input and pure-state quantum output. The algorithm presents a genuine quantum counterpart to decoding based on the classical belief propagation algorithm, which has found wide success in classical coding theory when used in conjunction with LDPC or Turbo codes. More recently, Rengaswamy et al. [rengaswamy_2020] observed that BPQM implements the optimal decoder on a small example code, in that it implements the optimal measurement that distinguishes the quantum output states for the set of input codewords with highest achievable probability. Here we significantly expand the understanding, formalism, and applicability of the BPQM algorithm with the following contributions. First, we prove analytically that BPQM realizes optimal decoding for any binary linear code with tree Tanner graph. We also provide the first formal description of the BPQM algorithm in full detail and without any ambiguity. In so doing, we identify a key flaw overlooked in the original algorithm and subsequent works which implies quantum circuit realizations will be exponentially large in the code dimension. Although BPQM passes quantum messages, other information required by the algorithm is processed globally. We remedy this problem by formulating a truly message-passing algorithm which approximates BPQM and has quantum circuit complexity O(poly n, polylog 1/ϵ), where n is the code length and ϵ is the approximation error. Finally, we also propose a novel method for extending BPQM to factor graphs containing cycles by making use of approximate cloning. We show some promising numerical results that indicate that BPQM on factor graphs with cycles can significantly outperform the best possible classical decoder.
... At the same time, the distance bound d ≤ n^{1/2}, which limits the parameters of all QHP codes, does not apply to GB codes; we show in this work that this family includes codes with linear distances. Third, the regular structure of GB codes simplifies both their implementation and linear-complexity iterative decoding [20,[22][23][24]. Moreover, GB codes have naturally overcomplete sets of minimum-weight stabilizer generators, which may improve their performance in the fault-tolerant (FT) setting. ...
Article
Full-text available
Generalized bicycle (GB) codes are a class of quantum error-correcting codes constructed from a pair of binary circulant matrices. Unlike for other simple quantum code ansätze, unrestricted GB codes may have linear distance scaling. In addition, low-density parity-check GB codes have a naturally overcomplete set of low-weight stabilizer generators, which is expected to improve their performance in the presence of syndrome measurement errors. For such GB codes with a given maximum generator weight w, we constructed upper distance bounds by mapping them to codes local in D ≤ w−1 dimensions, and lower existence bounds which give d ≥ O(n^{1/2}). We have also conducted an exhaustive enumeration of GB codes for certain prime circulant sizes in a family of two-qubit encoding codes with row weights 4, 6, and 8; the observed distance scaling is consistent with A(w)n^{1/2} + B(w), where n is the code length and A(w) is increasing with w.