IPN Progress Report 42-156 February 15, 2004
Concatenation of Hamming Codes and Accumulator
Codes with High-Order Modulations for
High-Speed Decoding
D. Divsalar¹ and S. Dolinar¹
We propose a concatenated code structure combined with high-order modulations suitable for implementation with iterative decoders operating at gigabit-per-second (Gbps) data rates. The examples considered in this article are serial/parallel concatenations of Hamming and accumulator codes. Performance results are given for binary phase-shift keyed (BPSK) modulation, quadrature phase-shift keyed (QPSK) modulation, 8 phase-shift keyed (8PSK) modulation, and 16 quadrature amplitude modulation (16QAM) over additive white Gaussian noise (AWGN) and Rayleigh fading channels.
I. Introduction
In this article, we design coding schemes based on serial concatenations [7,10] of a high-rate Hamming
outer code and a simple 2-state, rate-1 accumulator inner code. The interleaver between the outer and
inner codes has a block structure that allows easy parallelization of the decoding algorithm for applications
requiring high-speed decoding. The serially concatenated codes are combined with binary phase-shift
keyed (BPSK) modulation, quadrature phase-shift keyed (QPSK) modulation, 8 phase-shift keyed (8PSK)
modulation, or 16 quadrature amplitude modulation (16QAM) to give high overall bandwidth efficiencies.
The general structure of a serial concatenation with a "parallel interleaver" is shown in Fig. 1. In this construction of an (n_O n_I, k_O k_I) code, we concatenate k_I outer codes {C_i^O, i = 1, ..., k_I} of length and dimension (n_O, k_O) with n_O inner codes {C_j^I, j = 1, ..., n_O} of length and dimension (n_I, k_I), via an interleaver π of total size k_I n_O. Instead of allowing a completely general interleaver of size k_I n_O, we require that π be decomposable into a set of permutations {π_i^O, i = 1, ..., k_I}, each of size n_O, applied to the outputs of each of the k_I outer codes separately, followed by a k_I × n_O rectangular interleaver, followed by a set of permutations {π_j^I, j = 1, ..., n_O}, each of size k_I, applied to the inputs of each of the n_O inner codes separately. In the interests of allowing more irregularity, this structure can be generalized to allow the outer codes {C_i^O, i = 1, ..., k_I} to have different dimensions {k_i^O, i = 1, ..., k_I} and the inner codes {C_j^I, j = 1, ..., n_O} to have different lengths {n_j^I, j = 1, ..., n_O}; however, in this article, our attention is restricted to component codes with non-varying dimensions and lengths.
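As a sanity check on this decomposition, the composed permutation π can be built explicitly. The sketch below is an illustration, not the authors' implementation; random permutations stand in for the designed {π_i^O} and {π_j^I}, and indices are 0-based. It confirms that the three stages compose to a single permutation of the k_I n_O bits:

```python
import random

def build_parallel_interleaver(n_O, k_I, seed=0):
    """Compose the structured permutation pi: per-outer-code permutations
    (each of size n_O), a k_I x n_O rectangular interleaver, then
    per-inner-code permutations (each of size k_I).  Returns pi as a list:
    outer-stream position p is sent to inner-stream position pi[p]."""
    rng = random.Random(seed)
    pi_O = [rng.sample(range(n_O), n_O) for _ in range(k_I)]  # outer-side permutations
    pi_I = [rng.sample(range(k_I), k_I) for _ in range(n_O)]  # inner-side permutations
    pi = [0] * (k_I * n_O)
    for i in range(k_I):          # i-th outer codeword
        for t in range(n_O):      # t-th bit of that codeword
            col = pi_O[i][t]      # permute within the outer codeword
            # rectangular interleaver sends (row i, column col) to inner code col;
            # the inner-side permutation then reorders that code's k_I inputs
            row = pi_I[col][i]
            pi[i * n_O + t] = col * k_I + row
    return pi

pi = build_parallel_interleaver(n_O=15, k_I=372)
assert sorted(pi) == list(range(15 * 372))  # pi is a valid permutation of 5580 positions
```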
¹ Communications Systems and Research Section.

The research described in this publication was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
Fig. 1. Serial concatenation with parallel interleaver structure. (Figure: k_I outer codes and interleavers feed a k_I × n_O rectangular interleaver, which feeds n_O inner codes and interleavers.)
This structure gives a natural generalization of parallel concatenated codes [11], rather than just an interesting subclass of serially concatenated codes. To see this, simply let all of the k_I outer codes be (q, 1) repetition codes. In this case, the interleavers at the outputs of the outer codes have no effect and can be omitted. The rectangular interleaver simply ensures that each of the n_O = q inner codes receives a copy of the full package of k_I distinct input bits. Then the random interleavers at the inputs of the inner codes feed different permutations of these k_I bits to each inner code. The net effect is the same as the usual definition of a parallel concatenated (turbo) code.

When the outer codes are not repetition codes, the interleavers attached to the outer codes can provide some interleaving gain, although this gain will be fairly small if the outer codes are small block codes.
For example, an (8,4) extended Hamming outer code has 14 codewords of weight 4. After a random permutation, each of these 14 codewords could be scattered into any of C(8,4) = 70 weight-4 sequences of length 8. This effectively thins the weight-4 part of the Hamming code's weight spectrum by a factor of 70/14 = 5.
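This weight count is easy to verify by enumeration. The sketch below builds the (8,4) extended Hamming code from one standard systematic generator matrix (the particular matrix is an assumption; any equivalent code has the same weight distribution):

```python
from itertools import product
from math import comb

# A standard systematic generator for the (7,4) Hamming code; each row is
# extended by an overall parity bit to form the (8,4) extended code.
G7 = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
G8 = [row + [sum(row) % 2] for row in G7]

weights = []
for msg in product([0, 1], repeat=4):
    cw = [0] * 8
    for m, row in zip(msg, G8):
        if m:
            cw = [c ^ r for c, r in zip(cw, row)]  # add row mod 2
    weights.append(sum(cw))

assert weights.count(4) == 14                 # 14 codewords of weight 4
assert comb(8, 4) == 70                       # 70 possible weight-4 patterns of length 8
assert comb(8, 4) // weights.count(4) == 5    # the thinning factor 70/14 = 5
```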
We were interested in the case where the outer component codes {C_i^O, i = 1, ..., k_I} are small Hamming codes or extended Hamming codes, and the inner codes are 2-state, rate-1 accumulator codes [1]. Our aim was to produce codes with good power and bandwidth efficiencies, over a range of bandwidth utilizations, that can be decoded with a parallel architecture at very high speeds, on the order of 1 gigabit per second (Gbps). We wanted outer codes with high code rates for teaming with high-order modulations and with minimum distance 3 or 4 to achieve high interleaving gain. These considerations led to our choice of Hamming or extended Hamming codes as the outer codes that could achieve minimum distance 3 or 4 with the highest code rates for given code dimensions. For the inner codes, we selected 2-state, rate-1 accumulator codes 1/(1 + D) for their extreme simplicity of decoding. Since the accumulator codes have the same complexity per bit independent of their length, whereas the decoding complexity of the Hamming codes increases with their lengths, the overall design is asymmetric in the number and sizes of the outer and inner codes; specifically, n_O ≪ k_I. A goal is to select these sizes such that k_I parallel copies of the outer decoder can run at approximately the same speed as n_O parallel copies of the inner decoder.
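The inner code's simplicity is visible directly in its encoding rule. A minimal sketch of the rate-1 accumulator 1/(1 + D):

```python
def accumulate(bits):
    """Rate-1 accumulator 1/(1+D): each output bit is the running mod-2
    sum of all inputs so far, i.e. y[k] = x[k] XOR y[k-1], y[-1] = 0."""
    out, y = [], 0
    for x in bits:
        y ^= x
        out.append(y)
    return out

# running XOR of 1,0,1,1,0 is 1,1,0,1,1
assert accumulate([1, 0, 1, 1, 0]) == [1, 1, 0, 1, 1]
```

Because the recursion carries a single bit of state, the decoder trellis has only two states regardless of block length, which is what makes the per-bit decoding cost independent of n_I.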
II. Coded Modulations Using the (15,11) Hamming and Accumulator Codes
In this article, we concentrate on one example of the general structure in Fig. 1 in which the outer codes
are (15,11) Hamming codes. We team this rate-11/15 code with QPSK modulation, 8PSK modulation,
or 16QAM, producing (ideal) bandwidth efficiencies of 1.47 bps/Hz, 2.20 bps/Hz, and 2.93 bps/Hz,
respectively.
Figure 2 shows the encoder for a (15,11) Hamming outer code concatenated with a 2-state, rate-1 accumulator inner code. For this code, k_O = 11, n_O = 15, and we selected k_I = n_I = 372 to yield an overall code of length 5580 and dimension 4092. In our implementation, we used k_I = 372 random interleavers {π_i^O, i = 1, ..., k_I} at the output of the Hamming outer codes and n_O = 15 S-random [2] interleavers {π_j^I, j = 1, ..., n_O} to permute the inputs to the accumulator inner codes. In the figure, we indicate that a desired input data rate of 1.116 Gbps can be obtained by running the 372 Hamming encoders at 3 Mbps each and the 15 accumulator encoders at 101 Mbps each.
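The rates quoted in Fig. 2 follow from simple bookkeeping. A back-of-the-envelope check (ideal rates, no framing overhead assumed):

```python
k_O, n_O, k_I = 11, 15, 372

data_rate = k_I * 3_000_000                  # 372 Hamming encoders at 3 Mbps each
assert data_rate == 1_116_000_000            # = 1.116 Gbps of input data

coded_rate = data_rate * n_O / k_O           # rate-11/15 expansion of the bit stream
assert abs(coded_rate / 15 - 101e6) < 1e6    # ~101 Mbps per accumulator encoder

# symbol rates for 2-, 3-, and 4-bit-per-symbol modulations:
assert abs(coded_rate / 2 - 760e6) < 2e6     # ~760 Msps QPSK
assert abs(coded_rate / 3 - 507e6) < 2e6     # ~507 Msps 8PSK
assert abs(coded_rate / 4 - 380e6) < 2e6     # ~380 Msps 16QAM
```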
The outputs of the 15 accumulator codes are mapped to a high-order modulation. With 8PSK, the
first output bits of the first three accumulators are mapped to a 3-bit 8PSK symbol using a Gray code
mapping. Then the first output bits of the next three accumulators are mapped to the second 3-bit 8PSK
symbol, and so on, until the first five 8PSK symbols are produced. Then the next five 8PSK symbols are
created using the second bits from three accumulators at a time, and so forth. With QPSK or 16QAM,
we take output bits two or four at a time from two or four separate accumulators to form each QPSK or
16QAM symbol, stepping through the 15 accumulator codes in a cyclic order until all 5580 output bits
are mapped. This type of mapping eliminates the need for a channel interleaver within the length of a
code block.
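The round-robin mapping can be sketched as follows for 8PSK. The specific 3-bit Gray labeling below is an assumption (the article specifies Gray mapping but not the exact label order), and bit/stream indices are 0-based:

```python
from cmath import exp, pi

GRAY3 = [0, 1, 3, 2, 6, 7, 5, 4]  # one common 3-bit Gray sequence (assumed labeling)

def map_8psk(streams):
    """Round-robin mapping of 15 accumulator output streams onto 8PSK:
    at time step t, bits from accumulators (0,1,2), (3,4,5), ..., (12,13,14)
    form five 3-bit symbols, so adjacent code bits land in different symbols
    and no separate channel interleaver is needed within a code block."""
    n_acc, length = len(streams), len(streams[0])
    symbols = []
    for t in range(length):
        for g in range(n_acc // 3):
            b = (streams[3 * g][t], streams[3 * g + 1][t], streams[3 * g + 2][t])
            idx = GRAY3.index(b[0] * 4 + b[1] * 2 + b[2])  # Gray label -> phase index
            symbols.append(exp(2j * pi * idx / 8))
    return symbols

streams = [[(i + t) % 2 for t in range(372)] for i in range(15)]  # dummy code bits
syms = map_8psk(streams)
assert len(syms) == 5580 // 3 == 1860   # all 5580 code bits mapped to 1860 symbols
```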
Figure 3 shows the decoder for the code in Fig. 2. In this figure, the demapper provides the reliability
of those bits assigned to the modulation symbols, given in-phase and quadrature received observations.
The soft-in, soft-out (SISO) modules [8,9] for the accumulator also can interact with the demapper for
further performance enhancement if desired.
Fig. 2. Encoder for the Hamming and accumulator code. (Figure: data at 1.116 Gbps is demultiplexed into 372 Hamming (15,11) encoders at 3 Mbps each, interleaved, accumulated, and mapped to a QPSK, 8PSK, or 16QAM modulator at 760 megasymbols per second (Msps), 507 Msps, or 380 Msps, respectively.)

Fig. 3. Decoder for the Hamming and accumulator code. (Figure: a demodulator and demapper feed accumulator SISO modules, which iterate with Hamming SISO modules through the interleavers.)

An (n, k) Hamming code has a cyclic representation and can be equivalently generated as a recursive terminated convolutional code with primitive feedback polynomial. If the feedforward polynomial is chosen to be the same as the feedback polynomial, this produces a systematic version of the Hamming code. This convolutional encoding of the (15,11) Hamming code is shown in Fig. 4. The first k bits enter the systematic convolutional encoder and emerge as the first k encoded symbols. After k bits, the switch in Fig. 4 is toggled, and the encoder continues running long enough to produce n − k additional encoded symbols, which are the parity symbols for the (n, k) Hamming code. We note that an extended Hamming code, shortened by one bit, also can be implemented by a recursive convolutional code, where the primitive feedback and feedforward polynomials are multiplied by 1 + D.
The convolutional encoding of an (n, k) Hamming code gives a time-invariant trellis representation with 2^(n−k) states. Alternatively, a time-varying minimal trellis representation can be used for decoding to gain speed, if necessary, over that provided by the time-invariant trellis representation. Figure 5 shows a minimal trellis representation for the (15,11) Hamming code. The time-invariant trellis corresponding to the convolutional encoder of Fig. 4 has 32 edges per decoded bit, whereas the time-varying trellis of Fig. 5 has only 17.8 trellis edges (on the average) per decoded bit, a reduction of nearly a factor of 2.
The inner accumulator code can be decoded using the Bahl–Cocke–Jelinek–Raviv (BCJR) algo-
rithm [3,8,9] on a time-invariant trellis with only two states. Alternatively, the accumulator code also
can be decoded using belief propagation on a loop-free Tanner graph [4] representing the same code, as
shown in Fig. 6.
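For concreteness, a didactic probability-domain BCJR for the 2-state accumulator trellis is sketched below. Conventions are assumptions (LLR = log P(0)/P(1), encoder starting in state 0); this is an illustration, not an optimized implementation:

```python
import math

def accumulator_siso(chan_llrs, prior_llrs):
    """Exact BCJR on the 2-state accumulator trellis (state = last output bit).
    chan_llrs are LLRs for the accumulator outputs, prior_llrs are a priori
    LLRs for its inputs; returns a posteriori LLRs for the input bits."""
    N = len(chan_llrs)

    def p(llr, bit):  # probability that a bit with this LLR equals `bit`
        return 1.0 / (1.0 + math.exp(-llr if bit == 0 else llr))

    alpha = [[0.0, 0.0] for _ in range(N + 1)]
    alpha[0][0] = 1.0                      # encoder starts in state 0
    for k in range(N):                     # forward recursion
        for s in (0, 1):
            for x in (0, 1):
                s2 = s ^ x                 # next state equals the output bit
                alpha[k + 1][s2] += alpha[k][s] * p(prior_llrs[k], x) * p(chan_llrs[k], s2)
        t = sum(alpha[k + 1]); alpha[k + 1] = [a / t for a in alpha[k + 1]]

    beta = [[0.5, 0.5] for _ in range(N + 1)]
    for k in range(N - 1, -1, -1):         # backward recursion
        beta[k] = [0.0, 0.0]
        for s in (0, 1):
            for x in (0, 1):
                s2 = s ^ x
                beta[k][s] += beta[k + 1][s2] * p(prior_llrs[k], x) * p(chan_llrs[k], s2)
        t = sum(beta[k]); beta[k] = [b / t for b in beta[k]]

    out = []
    for k in range(N):                     # combine into a posteriori input LLRs
        num = den = 1e-300
        for s in (0, 1):
            for x in (0, 1):
                s2 = s ^ x
                m = alpha[k][s] * p(prior_llrs[k], x) * p(chan_llrs[k], s2) * beta[k + 1][s2]
                if x == 0: num += m
                else: den += m
        out.append(math.log(num / den))
    return out

# sanity check: strong noiseless observations of y recover the input exactly
x = [1, 0, 1, 1, 0, 0, 1]
y, acc = [], 0
for b in x:
    acc ^= b
    y.append(acc)
llrs = accumulator_siso([8.0 if b == 0 else -8.0 for b in y], [0.0] * len(x))
assert [0 if L > 0 else 1 for L in llrs] == x
```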
If higher speeds and many iterations are necessary, the inner or outer decoders can pipeline their
iterations as needed.
Fig. 4. A recursive convolutional code with primitive feedback polynomial that generates a time-invariant trellis representation that can be used for decoding the (15,11) Hamming code. (Figure: 11 input data bits pass through a recursive encoder with four delay elements to produce a 15-bit systematic Hamming output codeword.)

A minimal-span generator matrix for the (15,11) Hamming code:

111100000000000
010110100000000
001111000000000
000110001000000
000011110000000
000001011010000
000000111100000
000000001111000
000000000110100
000000000011110
000000000000111

Fig. 5. A time-varying minimal trellis representation that can be used for decoding the (15,11) Hamming code. (Figure legend: edge styles distinguish encoded symbol 0 from encoded symbol 1; the upper branch from each state carries information bit 0, the lower branch information bit 1.)
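The 17.8 edges-per-decoded-bit figure can be recomputed from the row spans of the minimal-span generator matrix above, using the standard state-space argument: the state dimension at each trellis boundary is the number of rows whose span crosses it, and a row starting in a section doubles that section's branches. A short check:

```python
# Rows of the minimal-span generator matrix for the (15,11) Hamming code.
ROWS = [
    "111100000000000", "010110100000000", "001111000000000",
    "000110001000000", "000011110000000", "000001011010000",
    "000000111100000", "000000001111000", "000000000110100",
    "000000000011110", "000000000000111",
]
n, k = 15, 11
starts = [r.index("1") for r in ROWS]    # first nonzero position of each row
ends = [r.rindex("1") for r in ROWS]     # last nonzero position of each row

total_edges = 0
for i in range(n):                       # trellis section for code symbol i
    # state dimension entering section i = rows started before i, not yet ended
    s = sum(1 for a, b in zip(starts, ends) if a < i <= b)
    branches = 2 if i in starts else 1   # a row starting here doubles the branches
    total_edges += (2 ** s) * branches

assert total_edges == 196
assert round(total_edges / k, 1) == 17.8   # edges per decoded bit, as in the text
```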
Fig. 6. A Tanner graph representation of the accumulator code that can be used for high-speed parallelizable decoding. (Figure: channel-symbol messages λ_{c,k} and information-bit messages λ_{i,k} are exchanged on a chain-structured, loop-free graph.)
III. Performance on an Additive White Gaussian Noise Channel
First we applied Gaussian density evolution [5,6] (see also [12]) to analyze the performance of concatenated Hamming and accumulator codes under iterative decoding on an additive white Gaussian noise (AWGN) channel. The results are shown in Fig. 7, which plots the output signal-to-noise ratio SNR_out versus the input signal-to-noise ratio SNR_in of the extrinsic log-likelihood messages computed by the outer and inner component decoders. The iterative decoding threshold for this code on the binary-input AWGN channel is E_b/N_0 = 2.1 dB, which is only 0.6 dB worse than the capacity threshold of 1.505 dB for any code of rate 11/15.
Next we simulated the performance of the (5580,4092) Hamming and accumulator code when combined
with various modulations. Figures 8 through 10 show the bit-error rate (BER) and codeword-error rate
(WER) performance of this code with QPSK, 8PSK, and 16QAM, respectively, on the AWGN channel.
The capacity thresholds for QPSK, 8PSK, and 16QAM are 1.505 dB, 3.47 dB, and 4.36 dB, at throughputs
of 1.47 bps/Hz, 2.20 bps/Hz, and 2.93 bps/Hz, respectively.
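The 1.505-dB capacity figure for rate 11/15 with binary (or, equivalently per dimension, QPSK) signaling can be reproduced with a short Monte Carlo estimate of binary-input AWGN capacity. This is a numerical check with an assumed sample count and seed:

```python
import math, random

def bpsk_awgn_capacity(ebno_db, rate, n_samples=200_000, seed=1):
    """Monte Carlo estimate of binary-input AWGN capacity (bits/channel use)
    at a given Eb/N0 and code rate, for unit-energy BPSK (Es = rate * Eb)."""
    ebno = 10 ** (ebno_db / 10)
    sigma2 = 1.0 / (2 * rate * ebno)      # noise variance per real dimension
    rng = random.Random(seed)
    ln2, acc = math.log(2), 0.0
    for _ in range(n_samples):
        y = 1.0 + rng.gauss(0.0, math.sqrt(sigma2))   # transmit x = +1
        llr = 2 * y / sigma2
        # numerically safe log2(1 + exp(-llr))
        acc += (max(0.0, -llr) + math.log1p(math.exp(-abs(llr)))) / ln2
    return 1.0 - acc / n_samples

# at the quoted threshold of ~1.505 dB, capacity should be ~11/15
c = bpsk_awgn_capacity(1.505, 11 / 15)
assert abs(c - 11 / 15) < 0.01
```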
IV. Performance on a Rayleigh Fading Channel
We applied Gaussian density evolution [5,6] to analyze the performance of concatenated Hamming and accumulator codes under iterative decoding on an independent Rayleigh fading channel with perfect channel state information. The results are shown in Fig. 11. The iterative decoding threshold for this code and channel is E_b/N_0 = 5.5 dB.

We simulated the performance of the (5580,4092) Hamming and accumulator code when combined with QPSK modulation, 8PSK modulation, and 16QAM on a correlated Rayleigh fading channel, where the ratio of the maximum Doppler rate f_d to the modulation transmission rate R_s is set as a parameter. The fading process was generated by passing a complex white Gaussian process through a filter with transfer function H(f) = √S(f), where S(f) is the power spectral density of the fading process and is modeled for an omnidirectional antenna as S(f) = C/√(1 − (f/f_d)²), where C is a constant; see Fig. 12.

Figures 13 through 15 show the BER and WER with QPSK, 8PSK, and 16QAM, respectively, and no channel interleaving, for f_d/R_s = 0.05. For slower fading, with f_d/R_s significantly less than 0.05, channel interleaving over multiple codewords is required to obtain good performance. The capacity thresholds for QPSK, 8PSK, and 16QAM over independent Rayleigh fading with perfect channel state information are 4.66 dB, 6.49 dB, and 7.25 dB, at throughputs of 1.47 bps/Hz, 2.20 bps/Hz, and 2.93 bps/Hz, respectively.
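A minimal sketch of the Fig. 12 simulator follows. Assumptions: C = 1, a direct O(n²) inverse DFT over a coarse frequency grid, and unit-power normalization; a practical simulator would use an FFT with finer spectral sampling:

```python
import cmath, math, random

def rayleigh_fading(n, fd_rs=0.05, seed=2):
    """Correlated Rayleigh fading via the spectral method in the text:
    shape complex white Gaussian noise with |H(f)| = sqrt(S(f)), where
    S(f) = 1/sqrt(1 - (f/fd)^2) for |f| < fd (Doppler fd normalized to
    the symbol rate), then inverse-DFT back to the time domain."""
    rng = random.Random(seed)
    spec = []
    for k in range(n):
        f = k / n if k < n / 2 else k / n - 1.0      # frequencies in [-1/2, 1/2)
        if abs(f) < fd_rs:
            g = (1.0 - (f / fd_rs) ** 2) ** -0.25    # sqrt of S(f), with C = 1
            spec.append(g * complex(rng.gauss(0, 1), rng.gauss(0, 1)))
        else:
            spec.append(0j)                          # no energy beyond the Doppler band
    # direct inverse DFT (O(n^2), so keep n modest)
    h = [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
         for t in range(n)]
    p = math.sqrt(sum(abs(x) ** 2 for x in h) / n)   # normalize to unit average power
    return [x / p for x in h]

h = rayleigh_fading(256)
assert len(h) == 256
assert abs(sum(abs(x) ** 2 for x in h) / 256 - 1.0) < 1e-9
```

The bandlimiting to |f| < f_d is what produces the time correlation: for f_d/R_s = 0.05, the fading changes slowly over many consecutive symbols, which is why the cyclic symbol mapping of Section II matters.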
Fig. 7. Gaussian density evolution for the Hamming and accumulator code on the AWGN channel. (Figure: SNR_out versus SNR_in transfer curves for the (15,11) Hamming decoder and the accumulator decoder, rate 11/15, BPSK or QPSK modulation, with an iterative decoding threshold at E_b/N_0 = 2.1 dB.)

Fig. 8. Performance of the (15,11) Hamming code concatenated via a parallelized interleaver with the 2-state accumulator code, when combined with QPSK modulation on an AWGN channel. (Figure: BER and WER versus E_b/N_0 for 8 and 10 iterations; 372 parallel (15,11) Hamming outer codes, 15 parallel accumulator inner codes, input block 11 × 372 = 4092 bits, throughput 22/15 = 1.47 bps/Hz.)
Fig. 9. Performance of the (15,11) Hamming code concatenated via a parallelized interleaver with the 2-state accumulator code, when combined with 8PSK modulation on an AWGN channel. (Figure: BER and WER versus E_b/N_0 for 8 and 10 iterations; Gray-labeled 8PSK constellation; input block 11 × 372 = 4092 bits, throughput 33/15 = 2.20 bps/Hz.)

Fig. 10. Performance of the (15,11) Hamming code concatenated via a parallelized interleaver with the 2-state accumulator code, when combined with 16QAM on an AWGN channel. (Figure: BER and WER versus E_b/N_0 for 8 and 10 iterations; input block 11 × 372 = 4092 bits, throughput 44/15 = 2.93 bps/Hz.)
Fig. 11. Gaussian density evolution for the Hamming and accumulator code on an independent Rayleigh fading channel. (Figure: transfer curves for the rate-11/15 code with binary modulation and coherent reception with perfect channel state information; the iterative decoding threshold is E_b/N_0 = 5.5 dB.)

Fig. 12. Fading channel simulator to model a correlated Rayleigh fading channel. (Figure: a complex white Gaussian process is passed through the filter H(f) = [1 − (f/f_d)²]^(−1/4) to produce the complex fading process.)
V. Other Performance Examples Using Different Hamming Codes
Figure 16 shows performance with BPSK modulation over an AWGN channel for rate-1/2 codes with input block sizes k = 1024, 4096, and 16,384, formed from concatenating k_I = 128, 512, and 2048 (8,4) Hamming outer codes with n_O = 8 accumulator codes. In this case, the minimum distance of the outer codes is 4, thus yielding higher interleaving gain. As seen in Fig. 16, very low word-error rates are achieved for the large block size, k = 16,384, at E_b/N_0 of about 1.2 dB, which is about 1 dB above the capacity threshold for this code. The Shannon capacity limit of 0.187 dB also is shown in the figure.

Figure 17 shows performance with BPSK modulation over an AWGN channel for a very high-rate (57/63 ≈ 0.905) code with input block size k = 1824, formed from concatenating k_I = 32 (63,57) Hamming outer codes with n_O = 63 accumulator codes. The Shannon capacity threshold for rate 57/63 is 3.27 dB.
Fig. 13. Performance of the (15,11) Hamming code concatenated via a parallelized interleaver with the 2-state accumulator code, when combined with QPSK modulation on a correlated Rayleigh fading channel. (Figure: BER and WER versus E_b/N_0 for 8 and 10 iterations; Doppler/QPSK baud rate = 0.05; throughput 1.47 bps/Hz.)

Fig. 14. Performance of the (15,11) Hamming code concatenated via a parallelized interleaver with the 2-state accumulator code, when combined with 8PSK modulation on a correlated Rayleigh fading channel. (Figure: BER and WER versus E_b/N_0 for 8 and 10 iterations; Doppler/8PSK baud rate = 0.05; throughput 2.20 bps/Hz.)

Fig. 15. Performance of the (15,11) Hamming code concatenated via a parallelized interleaver with the 2-state accumulator code, when combined with 16QAM on a correlated Rayleigh fading channel. (Figure: BER and WER versus E_b/N_0 for 8 and 10 iterations; Doppler/16QAM baud rate = 0.05; throughput 2.93 bps/Hz.)
VI. Conclusion
The concatenated codes described in this article achieve medium to high code rates using very simple
component codes and can be combined with high-order modulations to obtain good power and bandwidth
efficiency at very high decoding rates.
References

[1] D. Divsalar, H. Jin, and R. J. McEliece, "Coding Theorems for 'Turbo-Like' Codes," 1998 Allerton Conference, September 23–25, 1998.

[2] S. Dolinar and D. Divsalar, "Weight Distributions for Turbo Codes Using Random and Nonrandom Permutations," The Telecommunications and Data Acquisition Progress Report 42-122, April–June 1995, Jet Propulsion Laboratory, Pasadena, California, pp. 56–65, August 15, 1995. http://tmo.jpl.nasa.gov/tmo/progress_report/42-122/122B.pdf

[3] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, vol. IT-20, pp. 284–287, 1974.

[4] R. M. Tanner, "A Recursive Approach to Low Complexity Codes," IEEE Transactions on Information Theory, vol. 27, issue 5, pp. 533–547, September 1981.
Fig. 16. Performance with BPSK modulation for a rate-1/2 code formed from concatenating an (8,4) Hamming outer code with the 2-state accumulator code. (Figure: word-error probability versus E_b/N_0 for input blocks of 1024, 4096, and 16,384 bits at 10, 20, and 30 iterations; the capacity limit of 0.187 dB is marked.)
[5] H. El Gamal and A. R. Hammons, Jr., "Analyzing the Turbo Decoder Using the Gaussian Approximation," IEEE Transactions on Information Theory, vol. 47, issue 2, pp. 671–686, February 2001.

[6] D. Divsalar, S. Dolinar, and F. Pollara, "Iterative Turbo Decoder Analysis Based on Density Evolution," IEEE Journal on Selected Areas in Communications, vol. 19, no. 5, pp. 891–907, May 2001.

[7] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding," IEEE Transactions on Information Theory, vol. 44, pp. 909–926, May 1998.

[8] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "A Soft-Input Soft-Output Maximum A Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes," The Telecommunications and Data Acquisition Progress Report 42-127, July–September 1996, Jet Propulsion Laboratory, Pasadena, California, pp. 1–20, November 15, 1996. http://tmo.jpl.nasa.gov/tmo/progress_report/42-127/127H.pdf

[9] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "A Soft-Input Soft-Output APP Module for Iterative Decoding of Concatenated Codes," IEEE Communications Letters, vol. 1, issue 1, pp. 22–24, January 1997.

[10] D. Divsalar, S. Dolinar, and F. Pollara, "Serial Concatenated Trellis Coded Modulation with Rate-1 Inner Code," Global Telecommunications Conference, 2000, GLOBECOM '00, IEEE, vol. 2, November 27–December 1, 2000, pp. 777–782.

[11] C. Berrou and A. Glavieux, "Near Optimum Error Correcting Coding and Decoding: Turbo-Codes," IEEE Transactions on Communications, vol. 44, issue 10, pp. 1261–1271, October 1996.

[12] S. ten Brink, "Convergence Behavior of Iteratively Decoded Parallel Concatenated Codes," IEEE Transactions on Communications, vol. 49, issue 10, pp. 1727–1737, October 2001.
Fig. 17. Performance with BPSK modulation for a very high rate code formed from concatenating a (63,57) Hamming outer code with a 2-state, rate-1 accumulator inner code. (Figure: BER versus E_b/N_0 for 5 and 10 iterations on an AWGN channel; code rate ≈ 0.9.)
... where R is the code rate, is the energy per bit, and is the one-sided power spectral density of the noise. We evaluate the best possible performance of the codec by using the recursion form The dashed curves in Figure 6 are obtained from the recursive method for a set of rate 1/2 codes with degrees (8,9), (9,10), (10,11), (11,12), (12,13), (13,14), and for a set of rate around 1/3 codes with degrees (9,6), (10,6), (11,6), and (11,7). The solid curves in Figure 6 are obtained from the non-recursive low bound (2.11). ...
... where R is the code rate, is the energy per bit, and is the one-sided power spectral density of the noise. We evaluate the best possible performance of the codec by using the recursion form The dashed curves in Figure 6 are obtained from the recursive method for a set of rate 1/2 codes with degrees (8,9), (9,10), (10,11), (11,12), (12,13), (13,14), and for a set of rate around 1/3 codes with degrees (9,6), (10,6), (11,6), and (11,7). The solid curves in Figure 6 are obtained from the non-recursive low bound (2.11). ...
... Ten iteration in the MB algorithm is used. Two randomly constructed (8,9) and (9, 10) LDGM codes with length 6000 are simulated. To render a good threshold property while maintaining a low error-floor for the MB decoding algorithm, we use the following strategy for selecting the weight m. ...
... The upper plot refers to block codes of length N = 1024 which are encoded by 768 information bits (so the rate is 1.5 bits channel use ), and the lower plot refers to block codes of length N = 300 which are encoded by 270 bits whose rate is therefore 1. [24], the Valembois-Fossorier (VF) bound [35], the improved sphere-packing (ISP) bound, and the random-coding upper bound of Gallager [11]. reflected from the results plotted in Figure 4 that a gap of about 1.5 dB between the ISP lower bound and the performance of the iteratively decoded codes in [6] is mainly due to the imperfectness of these codes and their sub-optimal iterative decoding algorithm; this conclusion follows in light of the fact that for random codes of the same block length and rate, the gap between the ISP bound and the random coding bound is reduced to less than 0.4 dB. ...
... 8 bits channel use .between the ISP lower bound and the random-coding upper bound of Gallager does not exceed 0.4 dB. In[6], Divsalar and Dolinar design codes with the considered parameters by using concatenated Hamming and accumulate codes. They also present computer simulations of the performance of these codes under iterative decoding, when the transmission takes place over the AWGN channel and several common modulation schemes are applied. ...
Article
This paper derives an improved sphere-packing (ISP) bound for finite-length codes whose transmission takes place over symmetric memoryless channels. We first review classical results, i.e., the 1959 sphere-packing (SP59) bound of Shannon for the Gaussian channel, and the 1967 sphere-packing (SP67) bound of Shannon et al. for discrete memoryless channels. A recent improvement on the SP67 bound, as suggested by Valembois and Fossorier, is also discussed. These concepts are used for the derivation of a new lower bound on the decoding error probability (referred to as the ISP bound) which is uniformly tighter than the SP67 bound and its recent improved version. The ISP bound is applicable to symmetric memoryless channels, and some of its applications are exemplified. Its tightness is studied by comparing it with bounds on the ML decoding error probability, and computer simulations of iteratively decoded turbo-like codes. The paper also presents a technique which performs the entire calculation of the SP59 bound in the logarithmic domain, thus facilitating the exact calculation of this bound for moderate to large block lengths without the need for the asymptotic approximations provided by Shannon.
... The article [33] introduces a concatenated coding scheme which utilizes Hamming codes and accumulator codes in conjunction with high-order modulations to attain rapid decoding. The proposed methodology has been formulated to enhance the balance between the intricacy of decoding, the efficacy of error correction, and the rate of code. ...
Article
Full-text available
Error control coding improves reliability and efficiency of wireless communication systems. This research delves into the latest advancements in error control coding schemes for wireless communication systems, with a specific focus on their application within the domain of Low Power Wide Area Networks (LPWANs) such as Narrowband Internet of Things (NB-IoT) systems, LoRa energy-efficient hardware design, random access control systems, and wireless sensor networks. In the context of LPWANs, particularly NB-IoT systems, we investigate the adaptive coding and modulation (ACM) in NB-IoT systems, which adapts coding rates and modulation techniques to changing channel circumstances. Comprehensive analysis of the literature review shows that the proposed approaches are more energy efficient and less error-prone than fixed coding and modulation schemes. Using Cell Design Methodology (CDM), Single Error Correction (SEC) codes are optimized for energy efficiency. Power and energy efficiency have improved in standard CMOS and CDM logic systems. This impacts IoT devices, memory storage, and security systems. Next, Forward Error Correction (FEC) is examined for ALOHA-based random-access control systems wherein packet loss rate and spectral efficiency metrics are mathematically calculated. FEC is beneficial, especially when time and frequency synchronization diverge. Monte Carlo simulations studied in the literature shows that FEC improves satellite communication systems. This review establishes benefits of FEC, notably in satellite communications systems. In addition, Wireless Sensor Networks (WSNs) error control techniques and network reliability and performance are examined along with Convolutional, Turbo, Low density parity check codes(LDPC), and Polar error correction coding (ECC) methods. We learn about their benefits, drawbacks, and implementation challenges by assessing their efficacy, efficiency, and applicability in wireless sensor networks (WSN) applications. 
This study further highlights advancements of energy-efficient error control coding when, integrated into network designs alongwith the trade-offs between efficacy and intricacy to help improve wireless communication networks. Finally, this study summarizes current error control coding approaches for wireless communication systems. We provide insights into improving reliability, energy conservation, and efficacy by examining ACM, CDM, FEC, and error correction coding (ECC) across varied domains including LPWAN and IoT technologies with earnest hope to facilitate researchers, engineers, and professionals working in the field of error control coding in wireless communication.
... This is why it is still widely used today in many applications related to the digital data transmission and commuication networks. [17,25,4,29,33,34,14,18]. In 2021, Falcone and Pavone [6] studied the weight distribution problem of the binary Hamming code. ...
Preprint
Full-text available
In this work, we present a new simple way to encode/decode messages transmitted via a noisy channel and protected against errors by the Hamming method. We also propose a fast and efficient algorithm for the encoding and the decoding process which do not use neither the generator matrix nor the parity-check matrix of the Hamming code.
... Parallel implementations of Hamming codes have been considered before [3,4]. Mitarai and McCluskey [3] refer to a hardware architecture in which several input bits are transformed simultaneously into a number of output bits using some integrated circuit logic. ...
Article
Full-text available
The Hamming code is a well-known error correction code and can correct a single error in an input vector of size n bits by adding logn parity checks. A new parallel implementation of the code is presented, using a hierarchical structure of n processors in logn layers. All the processors perform similar simple tasks, and need only a few bytes of internal memory.
... Although this has been a major step forward, there is still a practical need for improvements in terms of versatility, throughput and simplicity. Recently, the serial concatenation of an outer Hamming code with an inner rate-1 accumulator has received attention in [3, 4]. This structure uses very simple component codes with low decoding complexity to achieve high code rates with performance close to the Shannon limit in the waterfall region. ...
Conference Paper
In this paper, we propose a rate-compatible serially concatenated structure consisting of an outer linear extended BCH code and an inner recursive systematic convolutional code. Rate flexibility is achieved by puncturing the inner code. A two- step code design procedure combining analytical union bounds with Extrinsic Information Transfer charts is used to obtain codes offering very good performance in both the waterfall and the error floor regions over a wide range of code rates. The resulting codes show interesting advantages in terms of convergence and error floor compared to similar structures using convolutional codes as outer codes.
Article
The block product turbo code (BPTC) is one form of block turbo code concatenation. A Hamming code can detect two-bit errors and correct one-bit errors; the BPTC uses two Hamming codes, one for "column" coding and one for "row" coding, improving on a single Hamming code's ability to correct only one error. In addition, the BPTC applies block interleaving to disorganize the transmission sequence before transmission, so as to avoid burst errors when the signal encounters a multipath channel. This paper discusses the decoding mechanism of the BPTC and analyzes the efficiency of using a soft decoding algorithm in the decoding process. The soft Hamming decoder is based on error patterns belonging to the same syndrome. It is shown that investigating error patterns with one and two errors is sufficient to gain up to 1.2 dB compared to hard-decision decoding. Here we also consider error patterns with three errors belonging to the determined syndrome, which increases the gain and improves the quality of the soft output due to the increased number of comparisons with valid codewords, although it increases the complexity of the decoding process. The system combines two Hamming block channel codes, which can be similar or different, with block interleaving to construct a BPSK-modulated BPTC coding system using the feedback-encoding concept of turbo codes over an AWGN channel. To observe the coding improvement, we present simulation results for soft decoding of BPTC codes with codeword lengths from 49 bits (using two (7,4) codes) up to 1440 bits (using two (127,120) codes).
Conference Paper
Serial concatenation of Hamming codes and an accumulator has been shown to achieve near-capacity performance at high code rates. However, these codes usually exhibit poor error floor performance due to their small minimum distances. To overcome this weakness, we propose to replace the outer Hamming codes by product codes constructed from Hamming codes and single-parity-check codes. In this way, the minimum distance of the outer code can be doubled, which is expected to increase the minimum distance of the serially concatenated code and thus to improve error floor performance. A three-dimensional EXIT chart is used for convergence analysis, and the derived thresholds are shown to be close to the Shannon limit. The low-weight distance spectrum of the proposed code is also calculated and compared with that of the original code. Simulation results show that the proposed codes can lower the error floor by two orders of magnitude without waterfall performance degradation at short block length.
Conference Paper
This paper determines mechanisms to tackle errors when implementing Boolean functions in nano-circuits. Nano-fabrics are expected to have very high defect rates, as atomic variations directly impact such materials. This paper develops a coding mechanism that uses a combination of cheap but unreliable nano-devices for the main function and reliable but expensive CMOS devices to implement the coding mechanism. The unique feature of this paper is that it exploits the don't-cares that naturally occur in Boolean functions to construct better codes. The reliable Boolean function problem is cast as a constraint satisfaction problem and then solved using a tree-based dynamic programming algorithm.
Article
Full-text available
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of the constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as √(2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are "semirandom" permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
Article
Full-text available
In this paper we discuss AWGN coding theorems for ensembles of coding systems which are built from fixed convolutional codes interconnected with random interleavers. We call these systems "turbo-like" codes, and they include as special cases both the classical turbo codes [1,2,3] and the serial concatenation of interleaved convolutional codes [4]. We offer a general conjecture about the behavior of the ensemble (maximum-likelihood decoder) word error probability as the word length approaches infinity. We prove this conjecture for a simple class of rate 1/q serially concatenated codes where the outer code is a q-fold repetition code and the inner code is a rate-1 convolutional code with transfer function 1/(1 + D). We believe this represents the first rigorous proof of a coding theorem for turbo-like codes.
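The rate-1 inner code with transfer function 1/(1+D) is the accumulator: since Y(D)(1+D) = X(D), each output bit is the running mod-2 sum of the inputs so far. A sketch of a repeat-accumulate encoder along these lines (the function names and the explicit `perm` argument are our illustrative assumptions, not notation from the paper):

```python
# The transfer function 1/(1+D) means y[k] = x[k] XOR y[k-1]:
# each output bit is the running mod-2 sum of all inputs so far.

def accumulate(bits):
    """Rate-1 accumulator (2-state, transfer function 1/(1+D))."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ra_encode(data, q, perm):
    """Repeat-accumulate sketch: q-fold repetition, interleave, accumulate."""
    repeated = [b for b in data for _ in range(q)]
    interleaved = [repeated[i] for i in perm]  # perm: permutation of indices
    return accumulate(interleaved)
```

For example, `accumulate([1, 0, 1, 1])` yields `[1, 1, 0, 1]`, and an RA code with q = 3 maps 2 data bits to a 6-bit codeword, i.e., rate 1/q overall since the accumulator itself is rate 1.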
Article
Full-text available
Concatenated coding schemes with interleavers consist of a combination of two simple constituent encoders and an interleaver. The parallel concatenation known as "turbo code" has been shown to yield remarkable coding gains close to theoretical limits, yet admitting a relatively simple iterative decoding technique. The recently proposed serial concatenation of interleaved codes may offer performance superior to that of turbo codes. In both coding schemes, the core of the iterative decoding structure is a soft-input soft-output (SISO) module. In this article, we describe the SISO module in a form that continuously updates the maximum a posteriori (MAP) probabilities of input and output code symbols and show how to embed it into iterative decoders for parallel and serially concatenated codes. Results are focused on codes yielding very high coding gain for space applications. The recent proposal of "turbo codes" [2], with their astonishing performance close to the theoretical Shannon capacity limits, has once again shown the great potential of coding schemes formed by two or more codes working in a concurrent way. Turbo codes are parallel concatenated convolutional codes (PCCCs) in which the information bits are first encoded by a recursive systematic convolutional code and then, after passing through an interleaver, are encoded by a second systematic convolutional encoder. The code sequences are formed by the information bits, followed by the parity check bits generated by both encoders. Using the same ingredients, namely convolutional encoders and interleavers, serially concatenated convolutional codes (SCCCs) have been shown to yield performance comparable, and in some cases superior, to turbo codes [5].
Article
Full-text available
Concatenated coding schemes consist of the combination of two or more simple constituent encoders and interleavers. The parallel concatenation known as "turbo code" has been shown to yield remarkable coding gains close to theoretical limits, yet admitting a relatively simple iterative decoding technique. The recently proposed serial concatenation of interleaved codes may offer performance superior to that of turbo codes. In both coding schemes, the core of the iterative decoding structure is a soft-input soft-output (SISO) a posteriori probability (APP) module. In this letter, we describe the SISO APP module that updates the APPs corresponding to the input and output bits of a code, and show how to embed it into an iterative decoder for a new hybrid concatenation of three codes, to fully exploit the benefits of the proposed SISO APP module.
Conference Paper
Full-text available
We develop new, low-complexity turbo codes suitable for bandwidth- and power-limited systems with very low bit and word error rate requirements. Motivated by the structure of previously discovered low-complexity codes such as repeat-accumulate (RA) codes with low-density parity-check matrices, we extend the structure to high-order modulations such as 8PSK and 16QAM. The structure consists of a simple 4-state convolutional or short block code as an outer code, and a rate-1, 2- or 4-state inner code. Two design criteria are proposed: a maximum-likelihood design criterion for short to moderate block sizes, and an iterative decoding design criterion for very long block sizes.
Article
Full-text available
We track the density of extrinsic information in iterative turbo decoders by actual density evolution, and also approximate it by symmetric Gaussian density functions. The approximate model is verified by experimental measurements. We view the evolution of these density functions through an iterative decoder as a nonlinear dynamical system with feedback. Iterative decoding of turbo codes and of serially concatenated codes is analyzed by examining whether a signal-to-noise ratio (SNR) for the extrinsic information keeps growing with iterations. We define a “noise figure” for the iterative decoder, such that the turbo decoder will converge to the correct codeword if the noise figure is bounded by a number below zero dB. By decomposing the code's noise figure into individual curves of output SNR versus input SNR corresponding to the individual constituent codes, we gain many new insights into the performance of the iterative decoder for different constituents. Many mysteries of turbo codes are explained based on this analysis. For example, we show why certain codes converge better with iterative decoding than more powerful codes which are only suitable for maximum likelihood decoding. The roles of systematic bits and of recursive convolutional codes as constituents of turbo codes are crystallized. The analysis is generalized to serial concatenations of mixtures of complementary outer and inner constituent codes. Design examples are given to optimize mixture codes to achieve low iterative decoding thresholds on the signal-to-noise ratio of the channel
Article
Mutual information transfer characteristics of soft in/soft out decoders are proposed as a tool to better understand the convergence behavior of iterative decoding schemes. The exchange of extrinsic information is visualized as a decoding trajectory in the extrinsic information transfer chart (EXIT chart). This allows the prediction of turbo cliff position and bit error rate after an arbitrary number of iterations. The influence of code memory, code polynomials as well as different constituent codes on the convergence behavior is studied for parallel concatenated codes. A code search based on the EXIT chart technique has been performed, yielding new recursive systematic convolutional constituent codes exhibiting turbo cliffs at lower signal-to-noise ratios than attainable by previously known constituent codes.
Article
This paper presents a new family of convolutional codes, nicknamed turbo-codes, built from a particular concatenation of two recursive systematic codes, linked together by nonuniform interleaving. Decoding calls on iterative processing in which each component decoder takes advantage of the work of the other at the previous step, with the aid of the original concept of extrinsic information. For sufficiently large interleaving sizes, the correcting performance of turbo-codes, investigated by simulation, appears to be close to the theoretical limit predicted by Shannon.
Article
A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles
Article
The general problem of estimating the a posteriori probabilities of the states and transitions of a Markov source observed through a discrete memoryless channel is considered. The decoding of linear block and convolutional codes to minimize symbol error probability is shown to be a special case of this problem. An optimal decoding algorithm is derived.
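The estimation problem has a compact numerical illustration: for a symmetric two-state Markov source observed through a binary symmetric channel, the forward-backward recursions below compute the a posteriori state probabilities. This is a minimal instance under our own parameterization, not the paper's general derivation:

```python
# Minimal instance of the estimation problem: a symmetric two-state
# Markov source (P(stay) = p_stay) observed through a binary symmetric
# channel (P(flip) = p_flip). The forward-backward recursions yield the
# a posteriori probability of each state given the whole observation.

def forward_backward(obs, p_flip, p_stay):
    n = len(obs)
    trans = [[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]
    lik = lambda s, y: (1 - p_flip) if s == y else p_flip

    # forward pass: alpha[t][s] ~ P(obs[0..t], state_t = s)
    alpha = [[0.0, 0.0] for _ in range(n)]
    for s in (0, 1):
        alpha[0][s] = 0.5 * lik(s, obs[0])  # uniform prior on start state
    for t in range(1, n):
        for s in (0, 1):
            alpha[t][s] = lik(s, obs[t]) * sum(
                alpha[t - 1][r] * trans[r][s] for r in (0, 1))

    # backward pass: beta[t][s] ~ P(obs[t+1..] | state_t = s)
    beta = [[1.0, 1.0] for _ in range(n)]
    for t in range(n - 2, -1, -1):
        for s in (0, 1):
            beta[t][s] = sum(
                trans[s][r] * lik(r, obs[t + 1]) * beta[t + 1][r]
                for r in (0, 1))

    # combine and normalize to posteriors P(state_t = s | all obs)
    post = []
    for t in range(n):
        u = [alpha[t][s] * beta[t][s] for s in (0, 1)]
        z = u[0] + u[1]
        post.append([u[0] / z, u[1] / z])
    return post
```

With `obs = [0, 0, 1, 0]`, `p_flip = 0.1`, and `p_stay = 0.9`, smoothing over the whole block pulls the posterior at the isolated "1" back toward state 0, which is exactly the symbol-by-symbol MAP behavior that makes the algorithm useful for decoding.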