Fig. 1. (a) A parity-check matrix and (b) the corresponding bipartite graph, which has a girth of four. A cycle of six (continuous bold edges) and a cycle of four (dashed bold edges) are shown.


Source publication
Article
This survey guides the reader through the extensive open literature covering the family of low-density parity-check (LDPC) codes and their rateless relatives. In doing so, we will identify the most important milestones that have occurred since their conception until the current era and elucidate the related design problems and their respectiv...

Contexts in source publication

Context 1
... in Table II that the first four bits of a codeword are the systematic information bits, followed by three parity bits, each of which checks the parity of the specific information bits as determined by the generator matrix represented in (2). The H matrix can also be represented graphically by what is known as a bipartite graph, as exemplified in Figure 1. Let us consider as an example the LDPC code having N = 6, associated with the H matrix shown in Figure 1(a). ...
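To make the matrix-to-graph correspondence concrete, the short sketch below builds the check-node and variable-node adjacency lists directly from an H matrix. Since the exact matrix of Figure 1(a) is not reproduced in these excerpts, the H used here is a hypothetical stand-in chosen to match the properties described in the surrounding contexts (the connections of c1, left-regularity of degree 2, and a girth of four):

```python
import numpy as np

# Hypothetical stand-in for the H matrix of Figure 1(a): row 1 matches
# the connections of c1 described below (v1, v3, v5, v6), every variable
# node has degree 2 (left-regular) and the Tanner graph has girth four.
H = np.array([[1, 0, 1, 0, 1, 1],
              [0, 1, 1, 1, 1, 0],
              [1, 1, 0, 1, 0, 1]])

# Each row of H is a parity-check node, each column a variable node;
# a one at (i, j) is an edge between check c_{i+1} and variable v_{j+1}.
check_neighbors = {f"c{i + 1}": [f"v{j + 1}" for j in np.flatnonzero(row)]
                   for i, row in enumerate(H)}
var_neighbors = {f"v{j + 1}": [f"c{i + 1}" for i in np.flatnonzero(col)]
                 for j, col in enumerate(H.T)}

print(check_neighbors["c1"])  # ['v1', 'v3', 'v5', 'v6'], i.e. row 1 of H
```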
Context 2
... H matrix can also be represented graphically by what is known as a bipartite graph, as exemplified in Figure 1. Let us consider as an example the LDPC code having N = 6, associated with the H matrix shown in Figure 1(a). The corresponding graph is then illustrated in Figure 1(b). ...
Context 3
... us consider as an example the LDPC code having N = 6, associated with the H matrix shown in Figure 1(a). The corresponding graph is then illustrated in Figure 1(b). It can be observed that this graph can be divided into two parts (hence the name bipartite), whereby the right-hand side of the graph shows the so-called parity-check nodes, each of which corresponds to a row of H, whilst the left-hand side (LHS) contains the variable nodes, which relate to the columns of H. ...
Context 4
... variable node is essentially a transmitted bit in the codeword z. The ones in the H matrix of Figure 1 ...
Table II. The codewords for the code C(7, 4) and its dual code C⊥(7, 3), given the generator matrix and parity-check matrix.
Context 5
... the edges that interconnect the parity-check nodes and the variable nodes located on the graph of Figure 1(b). For example, one can observe from Figure 1(b) that the first parity-check node c1 is checking the result of the modulo-2 sum (called the parity) of v1, v3, v5 and v6, which is also seen in the first row of the corresponding H matrix; i.e., if the transmitted bits represented by v1, v3, v5 and v6 are received correctly, then v1 ⊕ v3 ⊕ v5 ⊕ v6 ⊕ c1 = 0, where '⊕' denotes the modulo-2 sum. ...
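A parity check of this kind is just a modulo-2 dot product, so the whole set of checks can be evaluated at once as a syndrome. A minimal sketch, reusing the hypothetical stand-in H introduced above:

```python
import numpy as np

# Hypothetical stand-in H from the earlier sketch.
H = np.array([[1, 0, 1, 0, 1, 1],
              [0, 1, 1, 1, 1, 0],
              [1, 1, 0, 1, 0, 1]])

def syndrome(H, r):
    """Modulo-2 syndrome of a received word r: entry i is the parity
    computed by check node c_{i+1}; all-zero means every check holds."""
    return (H @ r) % 2

r = np.array([1, 0, 1, 0, 1, 1])  # a codeword of this particular H
print(syndrome(H, r))             # [0 0 0]: all parity checks satisfied
```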
Context 6
... us once again focus our attention on the bipartite graph illustrated in Figure 1(b). The bipartite graph representing an LDPC code is also said to be undirected, since its edges do not possess any sense of direction. ...
Context 7
... girth in a bipartite graph is always even and its smallest value is four. The graph depicted in Figure 1(b) has a girth of four, and the corresponding cycle of four is shown by the dashed bold edges. A cycle of six, marked by the continuous bold edges, is also shown. ...
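The girth itself is cheap to compute for graphs of this size: run a breadth-first search from every node and record the shortest cycle closed by a non-tree edge. A minimal sketch, again using the hypothetical stand-in H:

```python
import numpy as np
from collections import deque

def girth(H):
    """Shortest cycle length in the Tanner graph of H, found by running
    a BFS from every node; in a bipartite graph the result is even."""
    m, n = H.shape
    adj = [[] for _ in range(m + n)]    # checks 0..m-1, variables m..m+n-1
    for i in range(m):
        for j in range(n):
            if H[i, j]:
                adj[i].append(m + j)
                adj[m + j].append(i)
    best = float("inf")
    for s in range(m + n):
        dist, parent = {s: 0}, {s: -1}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif v != parent[u]:    # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Hypothetical stand-in H from the earlier sketches.
H = np.array([[1, 0, 1, 0, 1, 1],
              [0, 1, 1, 1, 1, 0],
              [1, 1, 0, 1, 0, 1]])
print(girth(H))  # 4, the smallest possible girth of a bipartite graph
```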
Context 8
... this is not the case, the code (and its associated graph) is termed irregular. For example, the graph shown in Figure 1(b) can be described as being left-regular, since all the variable nodes located in the graph have the same degree. ...
Context 9
... technique, which is commonly referred to as cycle conditioning - as opposed to girth conditioning - requires the identification of the so-called stopping sets, which are particular groups of variable nodes that are connected to a group of neighboring parity-check nodes more than once. One example of a stopping set in Figure 1(b) is constituted by the variable nodes v2, v3 and v6, because each of the neighboring parity-check nodes c1, c2 and c3 is connected to this variable node set twice. If the underlying graph does not contain any degree-one variable nodes, then it will not be possible to locate any cycle-free stopping set in that graph. ...
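This membership condition is easy to test mechanically: restrict H to the columns in the candidate set and check that every row sum is either zero or at least two. A minimal sketch with the hypothetical stand-in H, for which {v2, v3, v6} is likewise a stopping set:

```python
import numpy as np

def is_stopping_set(H, var_set):
    """A set S of variable nodes is a stopping set if every check node
    with at least one neighbor in S has at least two neighbors in S."""
    counts = H[:, sorted(var_set)].sum(axis=1)
    return bool(np.all((counts == 0) | (counts >= 2)))

# Hypothetical stand-in H from the earlier sketches.
H = np.array([[1, 0, 1, 0, 1, 1],
              [0, 1, 1, 1, 1, 0],
              [1, 1, 0, 1, 0, 1]])

# Variable nodes v2, v3, v6 correspond to zero-based columns 1, 2, 5.
print(is_stopping_set(H, {1, 2, 5}))  # True: each check sees the set twice
```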
Context 10
... the underlying graph does not contain any degree-one variable nodes, then it will not be possible to locate any cycle-free stopping set in that graph. Furthermore, most stopping sets are constituted by multiple cycles, unless the variable nodes in the stopping set have a degree of 2. This can also be verified from the previously mentioned stopping-set example containing v2, v3 and v6 in the graph of Figure 1(b), which only contains one cycle of six. By avoiding small stopping sets, the technique of Tian et al. [125] succeeded in significantly reducing the error floor of irregular LDPC codes, whilst only suffering from a slight BER degradation in the waterfall region. ...
Context 11
... trapping set (a, b) refers to that particular set of a variable nodes in the associated bipartite graph which are connected to b odd-degree and an arbitrary number of even-degree parity-check nodes. For example, a trapping set (5, 2) can be observed in the bipartite graph of Figure 1(b), constituted by the variable nodes v1, v2, v3, v4 and v6 and the parity-check nodes c2 and c3. When the values of a and b are relatively small, the variable nodes in the trapping set are not well-connected to the rest of the graph and therefore the corresponding bits are weakly protected. ...
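Classifying a candidate trapping set is just as mechanical: a is the size of the variable-node set and b counts the check nodes that see it an odd number of times. With the hypothetical stand-in H, the set {v1, v2, v3, v4, v6} also comes out as a (5, 2) trapping set, although the two odd-degree checks need not be the same ones as in Figure 1(b):

```python
import numpy as np

def trapping_set_params(H, var_set):
    """Return (a, b): a = |var_set|, b = number of check nodes with an
    odd number of neighbors inside var_set."""
    counts = H[:, sorted(var_set)].sum(axis=1)
    return len(var_set), int(np.sum(counts % 2 == 1))

# Hypothetical stand-in H from the earlier sketches.
H = np.array([[1, 0, 1, 0, 1, 1],
              [0, 1, 1, 1, 1, 0],
              [1, 1, 0, 1, 0, 1]])

# Variable nodes v1, v2, v3, v4, v6 -> zero-based columns 0, 1, 2, 3, 5.
print(trapping_set_params(H, {0, 1, 2, 3, 5}))  # (5, 2) for this stand-in H
```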

Citations

... Type of LDPC code used and purpose of the implementation, whether to improve performance, reduce complexity, increase throughput, and so on. For example, [1]-[3], [11], [15], [17]-[21] conducted comprehensive surveys on LDPC encoding and decoding techniques (in both academia and industry); [22], [23] propose a simplified LDPC decoding algorithm to lower implementation complexity and implement it on an FPGA. In a similar manner, [24] compares the performance and implementation complexity of LDPC decoders. ...
Article
Coding theory is an excellent and well-known branch of study that has produced various crucial solutions to the insoluble challenges of secure data transfer. Recent improvements in error-detection techniques have resulted in a significant increase in the use of low-density parity-check (LDPC) codes to address critical concerns connected to secure data transfer. Until now, considerable effort has been devoted to LDPC codes targeting low complexity, high performance, and low bit-error-rate goals. The aim of this review is to provide an understanding of the recent literature on the aforementioned modern improvements in LDPC encoding and decoding techniques (both applied and theoretical). A comparative survey of many remarkable LDPC decoding algorithms, 5G standard requirements, popular power-management methods, and low-energy LDPC design studies is also presented. Lastly, conclusions are drawn by outlining key study results, current concerns, and general thoughts on possible new research directions.
... As a result, q^k distinct legitimate codewords correspond to the q^k possible message blocks. The set of q^k codewords is referred to as a C(n, k) block code [40]. Unlike source coding mechanisms such as MP4 (used for compressing data), in error-correcting codes, n − k redundant digits are purposefully added to the information block to attain error correction. ...
... Definition 4 (Hamming weight and Hamming distance [40]). The number of non-zero bits in a codeword is called the Hamming weight. ...
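These two definitions translate directly into code; a minimal sketch:

```python
def hamming_weight(c):
    """Number of non-zero bits in the codeword."""
    return sum(bit != 0 for bit in c)

def hamming_distance(c1, c2):
    """Number of positions in which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(c1, c2))

print(hamming_weight([1, 0, 1, 1, 0, 0, 1]))         # 4
print(hamming_distance([1, 0, 1, 1], [1, 1, 0, 1]))  # 2
```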
Article
It is a matter of time before quantum computers will break the cryptosystems like RSA and ECC underpinning today’s internet protocols. As Post-Quantum Cryptography (PQC) is a low-cost approach compared to others like quantum key distribution, the National Institute of Standards and Technology (NIST) has recently reviewed and analyzed numerous approaches to PQC. As a PQC candidate, Bit Flipping Key Encapsulation (BIKE) is expected to be standardized as a general-purpose Key Encapsulation Mechanism (KEM) by NIST. However, it lacks a comprehensive review of BIKE associated with technical analysis. This paper aims to present an in-depth review and analysis of the BIKE scheme with respect to relevant attacks. We provide a comprehensive review of the original McEliece (ME) scheme and present a detailed discussion on its practical challenges. Furthermore, we provide an in-depth study on the challenges of ME and BIKE cryptosystems in achieving the Indistinguishability under Chosen-Ciphertext Attack (IND-CCA) security. We provide an analysis of these cryptosystems and their security against several attacks before pointing out the research gaps for strengthening BIKE.
... LDPC codes are linear block codes that can be characterized by either a generator matrix or a parity-check matrix. It is well-known that there is a one-to-one mapping between the generator matrix and the parity-check matrix (see, e.g., Section II.B of the survey paper [6]). In particular, for an (n, k)-LDPC code, there are k information bits in a codeword of n bits. ...
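One direction of this mapping can be sketched concretely: if row reduction over GF(2) brings H to the systematic form [A | I_{n-k}] (possibly after a column permutation), then G = [I_k | A^T] generates the same code, since G H^T = A^T + A^T = 0 (mod 2). A minimal sketch using a systematic-form (7, 4) parity-check matrix; the matrix is a hypothetical example, not one taken from the cited works:

```python
import numpy as np

def systematic_G_from_H(H):
    """Row-reduce H over GF(2) to [A | I] and return G = [I | A^T].
    Assumes the last n - k columns of H form an invertible submatrix;
    otherwise a column permutation would be needed first."""
    H = H.copy() % 2
    m, n = H.shape
    k = n - m
    for i in range(m):                    # pivot on the last m columns
        col = k + i
        rows = np.flatnonzero(H[i:, col]) + i
        if rows.size == 0:
            raise ValueError("last n - k columns are not invertible")
        if rows[0] != i:
            H[[i, rows[0]]] = H[[rows[0], i]]
        for r in range(m):                # clear the column elsewhere
            if r != i and H[r, col]:
                H[r] ^= H[i]
    A = H[:, :k]                          # H is now [A | I_{n-k}]
    return np.hstack([np.eye(k, dtype=int), A.T])

# A systematic-form parity-check matrix of a (7, 4) code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
G = systematic_G_from_H(H)
print((G @ H.T) % 2)  # all-zero 4x3 matrix: every row of G satisfies H
```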
Preprint
Most existing works on analyzing the performance of a random ensemble of low-density parity-check (LDPC) codes assume that the degree distributions of the two ends of a randomly selected edge are independent. In the paper, we take one step further and consider ensembles of LDPC codes with degree-degree correlations. For this, we propose two methods to construct an ensemble of degree-degree correlated LDPC codes. We then derive a system of density evolution equations for such degree-degree correlated LDPC codes over a binary erasure channel (BEC). By conducting extensive numerical experiments, we show how the degree-degree correlation affects the performance of LDPC codes. Our numerical results show that LDPC codes with negative degree-degree correlation could improve the maximum tolerable erasure probability. Moreover, increasing the negative degree-degree correlation could lead to better unequal error protection (UEP) design.
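For reference, the uncorrelated (conventional-ensemble) density evolution recursion over the BEC that this work generalizes is compact enough to sketch: for a regular (d_v, d_c) ensemble the erasure probability evolves as x <- eps * (1 - (1 - x)^(d_c - 1))^(d_v - 1). A minimal sketch for the regular (3, 6) ensemble:

```python
def bec_density_evolution(eps, dv=3, dc=6, iters=200):
    """Erasure-probability recursion for a regular (dv, dc) LDPC ensemble
    on the BEC: x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

# The threshold is the largest eps for which x -> 0; for the regular
# (3, 6) ensemble it is roughly 0.4294.
for eps in (0.40, 0.42, 0.43, 0.45):
    print(eps, bec_density_evolution(eps))
```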
... Definition 2 (Hamming weight [30]). The Hamming weight of a codeword is defined as the number of non-zero bits in the codeword. ...
Preprint
The evolution of quantum computers poses a serious threat to contemporary public-key encryption (PKE) schemes. To address this impending issue, the National Institute of Standards and Technology (NIST) is currently undertaking the Post-Quantum Cryptography (PQC) standardization project, intending to evaluate and subsequently standardize the suitable PQC scheme(s). One such attractive approach, called Bit Flipping Key Encapsulation (BIKE), has made it to the final round of the competition. Despite its attractive features, the IND-CCA security of BIKE depends on the average decoder failure rate (DFR), a higher value of which can facilitate a particular type of side-channel attack. Although BIKE adopts a Black-Grey-Flip (BGF) decoder that offers a negligible DFR, the effect of weak keys on the average DFR has not been fully investigated. Therefore, in this paper, we first implement the BIKE scheme and then, through extensive experiments, show that weak keys can be a potential threat to the IND-CCA security of the BIKE scheme and thus need attention from the research community prior to standardization. We also propose a key-check algorithm that can potentially supplement the BIKE mechanism and prevent users from generating and adopting weak keys to address this issue.
... Broadly speaking, the PCM of an LDPC code can be constructed in either an unstructured (random) or a structured manner [88]. In the unstructured manner the goal is to construct a PCM whose resultant code can exhibit the best achievable performance, but subject to some predefined constraints, imposed by the designer on the PCM. ...
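As a toy illustration of the unstructured approach, one can grow a column-regular PCM at random while rejecting columns that would violate a designer-imposed constraint, here the absence of length-4 cycles; practical constructions such as progressive edge growth (PEG) are far more systematic. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_regular_pcm(n, m, dv, tries_per_col=1000):
    """Grow an n-column PCM with column weight dv, resampling each new
    column until it shares at most one row with every earlier column
    (which rules out length-4 cycles)."""
    H = np.zeros((m, n), dtype=int)
    for j in range(n):
        for _ in range(tries_per_col):
            col = np.zeros(m, dtype=int)
            col[rng.choice(m, size=dv, replace=False)] = 1
            if j == 0 or (H[:, :j].T @ col).max() <= 1:
                H[:, j] = col
                break
        else:
            raise RuntimeError("stuck; relax n, m, or dv")
    return H

H = random_regular_pcm(n=16, m=12, dv=3)
print(H.sum(axis=0))  # every column has weight 3
```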
Thesis
Wireless communication has become an indispensable part of our life and the demand for achieving higher throughput with lower energy consumption is ever growing. The ambitious throughput of 100 Gb/s and beyond is now becoming a modest goal thanks to comprehensive advances in transmission technologies and protocols. One important aspect of these advances is with regard to channel coding methods and the ability to detect and correct errors at the receiver. Computations needed by such methods become generally more complicated as they become more powerful in their performance. This imposes a great challenge for researchers attempting to devise practical methods for encoding and decoding Forward-error Correction (FEC) techniques tailored for high-throughput scenarios. In this work we focus on high-throughput Quasi-Cyclic LDPC (QC-LDPC) codes, as they have been selected as one of the main FEC techniques for the two major next generation wireless technologies, namely Wi-Fi 6 (IEEE 802.11ax) and 5G. Our target is to develop complete encoding and decoding design for these codes in order to reach the throughput of 100 Gb/s with affordable power consumption. Toward this goal, we investigate first the appropriate encoder design for these codes which can be used at such high data-rate with reasonably low power consumption. Then we propose several novel ideas for improving the decoding performance and complexity of QC-LDPC codes. The proposed novel ideas collectively facilitate a decoder able to run at 50 Gb/s with less than 12 pJ/b energy consumption for a Latin squares QC-LDPC code. All the proposed methods are practical and implementable and their effectiveness are showcased by either Field Programmable Gate Array (FPGA) or Application-Specific Integrated Circuit (ASIC) synthesis.
... N. Bonello, S. Chen, and L. Hanzo in [62] have offered a glimpse of six decades of research pertaining to LDPC codes, as well as of the more recent efforts concentrated on rateless coding. Ten years later, based on the improvements in LDPC code construction over the past decade, this paper gives the following predictions: with the progress of electronic technology, the implementation of traditional LDPC codes is no longer a difficult problem. ...
... For image or video transmission, a high data rate is needed. LDPC and turbo codes are near-capacity codes and are the most widely used for encoding and decoding [24, 25]. However, LDPC provides better performance at a higher code rate and minimizes memory requirements. ...
Article
A low-density parity-check (LDPC) coded single-carrier frequency-division multiple access (SC-FDMA) system is proposed for effective image transmission. The employment of the Fejér-Korovkin wavelet transform (FKWT) in combination with a discrete cosine transform (DCT) has been conceived in the proposed system. SC-FDMA is capable of mitigating the channel-induced dispersion at a low peak-to-average power ratio (PAPR). The performance of the proposed scheme is compared to that of the traditional wavelet transform-based SC-FDMA system in terms of different performance metrics. The scheme provides a remarkable performance gain compared to its LDPC coded wavelet transform-based SC-FDMA counterpart. Additionally, FKWT provides improved performance compared to the commonly used Haar wavelet transform-based system.
... Moreover, Andrade et al. [23] report decoder implementations on both GPUs and FPGAs, while Guilloud et al. [16] and Thameur et al. [28] report implementations on both FPGAs and ASICs. Only binary LDPC decoding is reported in [19], [21], without implementations. ...
Article
Non-binary low-density parity-check (NB-LDPC) codes show higher error-correcting performance than binary codes when the codeword length is moderate and/or the channel has bursts of errors. The need for high-speed decoders for future digital communications led to the investigation of optimized NB-LDPC decoding algorithms and efficient implementations that target high throughput and low energy consumption. We carried out a comprehensive survey of existing NB-LDPC decoding hardware that targets the optimization of these parameters. Even though existing NB-LDPC decoders are optimized with respect to computational complexity and memory requirements, they still lag behind their binary counterparts in terms of throughput, power and area optimization. This study contributes to an overall understanding of the state of the art and highlights the current challenges that still have to be overcome on the path to more efficient NB-LDPC decoder architectures.
... For the sake of exposition, the zero-padded concatenated BS-RIS-UE channels are defined without regard to the given RIS's effect. In fact, the adjustable delay resolution depends on the specific hardware design, which generally causes grid-mismatch errors in time synchronization. Fortunately, with the employment of millimeter-wave and terahertz frequency bands, the mismatch errors will be substantially eliminated owing to the increased sampling rate. ...
Preprint
Reconfigurable intelligent surface (RIS) is a promising technology for establishing spectral- and energy-efficient wireless networks. In this paper, we study RIS-enhanced orthogonal frequency division multiplexing (OFDM) communications, which generalize the existing RIS-driven context focusing only on frequency-flat channels. Firstly, we introduce the delay adjustable metasurface (DAM) relying on varactor diodes. In contrast to existing reflecting elements, each one in DAM is capable of storing and retrieving the impinging electromagnetic waves upon dynamically controlling its electromagnetically induced transparency (EIT) properties, thus additionally imposing an extra delay onto the reflected incident signals. Secondly, we formulate the rate-maximization problem by jointly optimizing the transmit power allocation and the RIS reflection coefficients as well as the RIS delays. Furthermore, to address the coupling among optimization variables, we propose an efficient algorithm to achieve a high-quality solution for the formulated non-convex design problem by alternately optimizing the transmit power allocation and the RIS reflection pattern, including the reflection coefficients and the delays. Thirdly, to circumvent the high complexity for optimizing the RIS reflection coefficients, we conceive a low-complexity scheme upon aligning the strongest taps of all reflected channels, while ensuring that the maximum delay spread after introducing extra RIS delays does not exceed the length of the cyclic prefix (CP). Finally, simulation results demonstrate that the proposed design significantly improves the OFDM rate performance as well as the RIS's adaptability to wideband signals compared to baseline schemes without employing DAM.
... It placed the data for the subsequent iteration into embedded memory blocks to reduce LUT-RAM usage, achieving a maximum throughput of 332 Mb/s. Furthermore, Bonello et al. [36] presented an overview of the essentials of LDPC codes, briefly indicating different features of LDPC codes, such as the desirable properties of encoding and decoding along with their practical effects. ...
Article
Low-Density Parity-Check (LDPC) error-correction decoders emerge as a suitable path, as they offer resilient error-correction performance and are well suited to parallel hardware implementation. This paper presents a case study evaluating LDPC code designs against various features, such as flexibility, high processing speed, and the parallelism of Field-Programmable Gate Array (FPGA) devices. It categorizes the key factors that differentiate FPGA-based LDPC decoder designs and defines three crucial performance features: processing throughput, processing latency, and hardware resource requirements. Furthermore, this work helps interested researchers to understand the differences between the various related works and the results of the most popular techniques.