Fig 1 - uploaded by Troels Pedersen
Quantization scheme for channel messages.


Source publication
Article
Full-text available
Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results in the well-known Gallager B algorithm, and increasing...

Context in source publication

Context 1
... ζ_W] where 0 ≤ ζ_0 < ζ_1 < ··· < ζ_W. Such a quantization scheme is depicted in Figure 1. ...
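A threshold quantizer of this kind can be sketched in a few lines. The signed-index output alphabet, the function name, and the threshold values below are illustrative assumptions, not taken from the paper:

```python
# Sketch of a symmetric L-value quantizer with thresholds
# 0 <= z[0] < z[1] < ... < z[W] (the zeta_i of the text).
# The magnitude of the output is the number of thresholds |l| exceeds;
# the sign is the sign of l. All names and values are illustrative.

def quantize(l, thresholds):
    """Map an L-value l to a signed index in {-W-1, ..., 0, ..., W+1}."""
    magnitude = sum(1 for z in thresholds if abs(l) > z)
    return magnitude if l >= 0 else -magnitude

# Example: with thresholds [0.0, 1.0, 2.0], an L-value of -1.5 falls
# between the second and third threshold and maps to -2.
```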

Similar publications

Article
Full-text available
The design of low-density parity-check (LDPC) code ensembles optimized for a finite number of decoder iterations is investigated. Our approach employs EXIT chart analysis and differential evolution to design such ensembles for the binary erasure channel and additive white Gaussian noise channel. The error rates of codes optimized for various number...
Article
Full-text available
We consider the decoding of LDPC codes over GF(q) with the low-complexity majority algorithm from [1]. A modification of this algorithm with multiple thresholds is suggested. A lower estimate on the decoding radius realized by the new algorithm is derived. The estimate is shown to be better than the estimate for a single threshold majority decoder....
Article
Full-text available
Non-binary low-density parity-check codes are robust to various channel impairments. However, based on the existing decoding algorithms, the decoder implementations are expensive because of their excessive computational complexity and memory usage. Based on combinatorial optimization, we present an approximation method for the check node proces...
Article
Full-text available
Finite alphabet iterative decoders (FAIDs) for LDPC codes were recently shown to be capable of surpassing the Belief Propagation (BP) decoder in the error floor region on the Binary Symmetric channel (BSC). More recently, the technique of decimation which involves fixing the values of certain bits during decoding, was proposed for FAIDs in order to...

Citations

... To address these demands, many coarsely quantized LDPC decoders [6][7][8][9][10][11][12] have emerged, offering an enhanced balance between performance and complexity. The finite alphabet iterative decoder (FAID) emerged as a robust solution to the error floor issues prevalent in LDPC codes [6], specifically when operating over binary symmetric channels. ...
Article
Full-text available
Efficient error correction in high-speed communication networks, such as the 50G passive optical network (50G-PON), is paramount. This Letter focuses on optimizing a layered non-surjective finite alphabet iterative decoder (LNS-FAID) for 50G-PON, with an emphasis on high throughput and low power consumption. We propose using a distinct lookup table (LUT) for each iteration to enhance decoding performance and lower error floors. Additionally, we improve the 2-bit LNS-FAID architecture by adding operational states and a sign backtracking (SBT) strategy. This Letter also introduces a hybrid precision model that merges 3-bit and 2-bit LNS-FAIDs, balancing error correction with computational efficiency. Our simulation results show that these approaches significantly improve the performance of the LDPC code in 50G-PON.
... To balance error correction capability and decoding complexity, current practical methods use a certain amount of channel information. In [20], it was mentioned that new binary information can be reconstructed from the output of BDD. In addition, channel soft information has been used for product-like codes in [21]. ...
Article
Full-text available
This paper aims to provide reduced-complexity decoding for satellite communication systems to enhance system performance and transmission efficiency. Addressing the complexity, error rate, and latency issues of satellite communications, a novel 3-dimensional product decoding scheme is proposed. The density evolution algorithm is also introduced to analyze and optimize the binary message passing decoding, which further reduces the computational complexity and latency. The simulation results show that the proposed method obtains about 0.3- and 0.15-dB performance gains over comparable decoders for conventional 2-dimensional BCH product codes and staircase codes, respectively. Our proposed coding scheme offers broad prospects for addressing the critical issues of transmission latency and data throughput in satellite networks.
... In this paper, we demonstrate that the generalized mutual information (GMI) is a suitable metric to determine the parameters to be used in the post-processing of the component code soft-output. The underlying idea is closely related to reconstructing soft information in min-sum decoding [5] and coarsely quantized message passing [6], [7] for (generalized) low-density parity-check (LDPC) codes. The authors of [8] derive a post-processing for a bit-interleaved coded modulation (BICM) system using a cost function based on the GMI. ...
... First, we simulate the performance of product codes based on (64, 51, 6), (128, 113, 6), and (256, 239, 6) extended Bose-Chaudhuri-Hocquenghem (eBCH) component codes. The rates of the product codes are 0.635, 0.779, and 0.872, respectively. ...
Preprint
Chase-Pyndiah decoding is widely used for decoding product codes. However, this method is suboptimal and requires scaling the soft information exchanged during the iterative processing. In this paper, we propose a framework for obtaining the scaling coefficients based on maximizing the generalized mutual information. Our approach yields gains up to 0.11 dB for product codes with two-error correcting extended BCH component codes over the binary-input additive white Gaussian noise channel compared to the original Chase-Pyndiah decoder with heuristically obtained coefficients. We also introduce an extrinsic version of the Chase-Pyndiah decoder and associate product codes with a turbo-like code ensemble to derive a Monte Carlo-based density evolution analysis. The resulting iterative decoding thresholds accurately predict the onset of the waterfall region.
... To meet the growing demand for data throughput, the bit-width of the intermediate signals of an LDPC decoder needs to be minimized with high priority. An important improvement to the quantization strategy is to apply different bit-widths to the input channel LLRs and the intermediate LLRs exchanged between the variable node unit (VNU) and check node unit (CNU), and to choose a very low bit-width for message passing, e.g., binary message passing (BMP) decoding [32], ternary message passing (TMP) decoding [33], and quaternary message passing (QMP) decoding [34]. From a similar perspective, the dual quantization-domain (DQD) [35], [36] and reconstruction-computation-quantization (RCQ) [37], [38] schemes have been proposed with larger message passing bit-widths, e.g., 4-6 bits. Another potential improvement to the quantization strategy is adaptive quantization parameters, including the bit-width of intermediate signals [39] and the quantization intervals and reconstruction levels [40], [41]. ...
Article
Full-text available
The 5G new radio (5G-NR) enhanced Mobile Broadband (eMBB) scenario demands a data throughput of up to 20 Gb/s, leading to an urgent need for low-complexity, high-throughput decoding algorithms for 5G-NR low-density parity-check (LDPC) codes, the coding scheme of the 5G eMBB data channel. Quantization of the input and intermediate signals enables the data throughput enhancement and reduces the computation and storage overhead of the LDPC decoder, but it confronts the problem of performance degradation. In addition, conventional analysis tools show limitations in accurately predicting the asymptotic performance of quantized and normalized LDPC decoding algorithms. In this paper, we first introduce modified multi-edge type density evolution (MET-DE) for the asymptotic performance prediction of the quantized and adaptive normalized min-sum algorithm (ANMSA). Then we adopt an adaptive asymmetric quantization strategy for MET LDPC codes, combine it with the conventional adaptive normalization technique, and further propose the adaptive quantized and normalized min-sum algorithm (AQNMSA), which significantly alleviates the performance degradation of the fixed quantized and normalized min-sum algorithm (NMSA) with a negligible increase in computational complexity. Concurrently, we provide a novel look-up table design algorithm for AQNMSA based on the modified MET-DE analysis tool. Finally, we apply AQNMSA to 5G-NR LDPC codes with a very low quantization bit-width of 4 bits for intermediate signals, and observe comparable or superior decoding performance relative to its floating-point non-adaptive counterparts at multiple code rates.
... We also show that the reason for this floor is the degree-3 VNs present in the 802.3ca EPON LDPC code and the difficulty that very-low-complexity versions of the SP-MS decoder have in dealing with such low-degree VNs [7]. The second contribution of this paper is to propose a method to reduce the error floor for the SP-MS decoder. ...
Preprint
Full-text available
Some low-complexity LDPC decoders suffer from error floors. We apply iteration-dependent weights to the degree-3 variable nodes to solve this problem. When the 802.3ca EPON LDPC code is considered, an error floor decrease of more than 3 orders of magnitude is achieved.
... with L_a(y) = log p(y|a). While the transition probability densities p(y|a) of the communication channel are given in (1), the transition probabilities of the extrinsic channel are in general unknown, but accurate estimates can be obtained via DE analysis [9]. ...
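For the binary case, turning a hard extrinsic message into an L-value under a BSC model of the extrinsic channel reduces to a single log-ratio. The sketch below assumes the convention L(y) = log p(y|0) - log p(y|1); the function name and the crossover probability are illustrative (in practice the crossover probability would be estimated via DE, as the snippet above notes):

```python
import math

# Sketch: convert a hard binary extrinsic message y into an L-value,
# modeling the extrinsic channel as a BSC with crossover probability
# delta. The value of delta is illustrative; DE analysis would supply
# the actual estimate.

def bsc_llr(y, delta):
    """L(y) = log p(y|0) - log p(y|1) for a BSC with crossover delta."""
    if y == 0:
        return math.log((1 - delta) / delta)
    return math.log(delta / (1 - delta))

# Example: with delta = 0.1, observing y = 0 yields L = log(9) > 0,
# and the two observations are antisymmetric: bsc_llr(1, d) = -bsc_llr(0, d).
```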
Conference Paper
A simple decoder for q-ary low-density parity-check codes is studied, termed symbol message passing. The decoder passes hard decisions from a q-ary alphabet. For orthogonal modulations over the additive white Gaussian channel for which the modulation order and the field order q are equal, it is shown that the extrinsic messages can be modelled as observations of a q-ary symmetric channel, allowing one to work out density evolution equations. A stability analysis is provided which emphasizes the influence of degree-3 variable nodes. Simulation results show performance gains for increasing q w.r.t. binary low-density parity-check codes with bit-interleaved coded modulation, and potential savings in decoding complexity.
... The design of coarsely quantized LDPC decoders has attracted considerable attention [5]-[10]. In [8], the authors developed an algorithm with binary messages, referred to as binary message passing (BMP) decoding, which makes it possible to exploit the channel soft information while retaining the one-bit message representation of the Gallager A and Gallager B decoding algorithms [11]. An extension of BMP to ternary and quaternary message alphabets was studied in [9], [10]. ...
... The VNs can turn these observations into soft messages if the transition probabilities of the extrinsic channel are known. In [8], it was suggested to employ density evolution (DE) to estimate the transition probabilities of the extrinsic DMCs, showing that this is an effective approach, i.e., it provides good estimates down to moderate block lengths. The principle was then extended in [9], [10] to TMP and QMP decoding. ...
... In particular, we derive the DE analysis for unstructured irregular LDPC code ensembles. Following the observations of [8], we further investigate the role of degree-2 and degree-3 VNs in the error floor performance by performing a trapping sets analysis. This result is particularly insightful, since the design of LDPC codes is often performed for unquantized, unsaturated message passing algorithms. ...
Conference Paper
We revisit a coarsely quantized message passing decoding algorithm for low density parity-check (LDPC) code ensembles, named quaternary message passing (QMP). Particularly, we analyze the performance of unstructured LDPC codes under QMP decoding by means of density evolution. The impact of degree-2 and degree-3 variable nodes on the error floor performance is also discussed. We design a code for QMP that performs within 0.55 dB of the 5G LDPC code at a block error rate of 10⁻⁴.
... The SMP algorithm is a message-passing algorithm where each message exchanged by a VN/CN pair is a symbol, i.e., a hard estimate of the codeword symbol associated with the VN. Following the principle outlined in [14], the messages sent by CNs to VNs are modeled as observations at the output of a q-ary input, q-ary output DMC. By doing so, the messages at the input of each VN can be combined by multiplying the respective likelihoods (or by summing the respective log-likelihoods), providing a simple update rule at the VNs. ...
... The choice of the DMC used to model the extrinsic channel plays a crucial role in the definition of the SMP algorithm, especially from a decoding performance viewpoint. In [14], for the case of binary message-passing (BMP) decoding, it was suggested to model the VN inbound messages as observations of a binary symmetric channel (BSC), whose transition probability was estimated by means of DE analysis. The approach was followed in [13] for SMP, with the VN inbound messages modelled as observations of a q-ary symmetric channel (q-SC). ...
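The VN combining rule described in these snippets can be sketched as follows, assuming a q-ary symmetric channel model for both the channel observation and the extrinsic messages. The function names and the error probabilities are illustrative, not from the cited papers:

```python
import math

# Sketch of the SMP variable-node rule: each inbound hard symbol is
# mapped to a log-likelihood vector under a q-ary symmetric channel
# (q-SC) model, the vectors are summed, and the argmax gives the
# outbound symbol. Error probabilities here are illustrative; in the
# cited works the extrinsic one is estimated via density evolution.

def qsc_logliks(q, observed, eps):
    """Log-likelihood vector log p(observed | a) for a q-SC, a = 0..q-1."""
    hit, miss = math.log(1 - eps), math.log(eps / (q - 1))
    return [hit if a == observed else miss for a in range(q)]

def vn_update(q, channel_sym, eps_ch, cn_syms, eps_ext):
    """Combine channel and CN messages by summing log-likelihood vectors."""
    total = qsc_logliks(q, channel_sym, eps_ch)
    for s in cn_syms:
        total = [t + l for t, l in zip(total, qsc_logliks(q, s, eps_ext))]
    return max(range(q), key=lambda a: total[a])
```

With a fairly reliable channel symbol, a single agreeing CN message can outweigh two disagreeing ones, while three unanimous CN messages override the channel observation.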
Preprint
Full-text available
We study the performance of low-density parity-check (LDPC) codes over finite integer rings, over two channels that arise from the Lee metric. The first channel is a discrete memoryless channel (DMC) matched to the Lee metric. The second channel adds to each codeword an error vector of constant Lee weight, where the error vector is picked uniformly at random from the set of vectors of constant Lee weight. It is shown that the marginal conditional distribution of the two channels coincides, in the limit of large blocklengths. The performance of selected LDPC code ensembles is analyzed by means of density evolution and finite-length simulations, with belief propagation decoding and with a low-complexity symbol message passing algorithm.
... Among the algorithms in [7], [8], [11], iBDD with scaled reliability (iBDD-SR) [8] is the one yielding the smallest increase in complexity, yet it achieves about a 0.3 dB performance improvement compared to iBDD for binary transmission. Following the principle introduced in [12], iBDD-SR is based on binary message passing (BMP) between component decoders and generates reliability information at the BDD output by scaling the decisions according to a reliability estimate of the decision. The reliability information, in the form of log-likelihood ratios, is then added to the corresponding channel LLRs to form refined bit estimates (see [8, Fig. 2]). ...
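The per-bit iBDD-SR rule described in this snippet can be sketched in a few lines. The function name is hypothetical and the reliability weight below is an illustrative placeholder (in the cited work the weight follows from a reliability analysis of the BDD output), assuming the convention that a nonnegative LLR maps to bit 0:

```python
# Sketch of the iBDD-SR refinement: the hard BDD decision (+1/-1 for
# bits 0/1, or 0 on decoding failure) is scaled by a reliability
# weight w and added to the channel LLR; the sign of the sum gives
# the refined hard bit. The weight value is illustrative.

def ibdd_sr_bit(bdd_decision, channel_llr, w):
    """bdd_decision in {+1, -1, 0}; returns the refined bit in {0, 1}."""
    s = w * bdd_decision + channel_llr
    return 0 if s >= 0 else 1

# Example: a confident BDD decision can flip an unreliable channel
# observation, while a BDD failure (0) leaves the channel LLR in charge.
```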
... The decoding schemes in [9]- [11], [13], on the other hand, yield some additional coding gains but require the knowledge of the least reliable bits in the decoding process, and hence entail further complexity. Alternative constructions for high-throughput applications include coarsely-quantized low-density parity-check decoders [4], [12], [14], two-stage decoders [15], and SD-HD hybrid schemes based on concatenating a relatively weak SD code and an outer HD product-like code [16], [17]. ...
... The derivations of μ are given in Table I and Table II, respectively, where the message error probability at the row-type and column-type CN at the ℓ-th iteration is given by (11) and (12), respectively, with x^{c,(0)} = p_ch, and the values of Tables I-II are derived in (20)-(25). ...
Article
We propose a novel soft-aided iterative decoding algorithm for product codes (PCs). The proposed algorithm, named iterative bounded distance decoding with combined reliability (iBDD-CR), enhances the conventional iterative bounded distance decoding (iBDD) of PCs by exploiting some level of soft information. In particular, iBDD-CR can be seen as a modification of iBDD where the hard decisions of the row and column decoders are made based on a reliability estimate of the BDD outputs. The reliability estimates are derived by analyzing the extrinsic message passing of generalized low-density parity-check (GLDPC) ensembles, which encompass PCs. We perform a density evolution analysis of iBDD-CR for transmission over the additive white Gaussian noise channel for the GLDPC ensemble. We consider both binary transmission and bit-interleaved coded modulation with quadrature amplitude modulation. We show that iBDD-CR achieves performance gains up to 0.51 dB compared to iBDD with the same internal decoder data flow. This makes the algorithm an attractive solution for very high-throughput applications such as fiber-optic communications.
... Whereas, for SRLMP the channel and CN messages are converted to log-likelihood vectors at the VNs by modeling the extrinsic channel as a discrete memoryless channel (DMC) whose transition probabilities may be estimated via density evolution (DE). This technique is similar to the ternary message passing (TMP) decoder [16] for binary codes, and lays its foundation in the binary message passing (BMP) algorithm originally proposed in [17]. ...
... In the decoding algorithm, we will use the L-vector L(y) of the communication channel observation and the L-vectors of the CN messages L(m), ∀m ∈ M_Γ. While the transition probabilities of the communication channel, which is a QSC with error probability ε, are given in (1), the transition probabilities of the extrinsic channel are in general unknown, but accurate estimates can be obtained via DE analysis, as suggested in [17]. ...
Preprint
A decoding algorithm for q-ary low-density parity-check codes over the q-ary symmetric channel is introduced. The exchanged messages are lists of symbols from F_q. A density evolution analysis for maximum list sizes 1 and 2 is developed. Thresholds for selected regular low-density parity-check code ensembles are computed, showing gains with respect to a similar algorithm in the literature. Finite-length simulation results confirm the asymptotic analysis.