Binary Message Passing Decoding of Product Codes Based on
Generalized Minimum Distance Decoding

Downloaded from: https://research.chalmers.se, 2019-04-22 20:57 UTC

Citation for the original published paper (version of record):
Sheikh, A., Graell i Amat, A., Liva, G. (2019). Binary Message Passing Decoding of Product Codes Based on Generalized Minimum Distance Decoding. 53rd Annual Conference on Information Sciences and Systems (CISS). Invited paper.
http://dx.doi.org/10.1109/CISS.2019.8692862

N.B. When citing this work, cite the original published paper.

Binary Message Passing Decoding of Product Codes
Based on Generalized Minimum Distance Decoding
                     Alireza Sheikh§, Alexandre Graell i Amat§, and Gianluigi Liva†
        § Department of Electrical Engineering, Chalmers University of Technology, Sweden
        † Institute of Communications and Navigation of the German Aerospace Center (DLR), Germany

                                        (Invited Paper)

   Abstract—We propose a binary message passing decoding algorithm for product codes based on generalized minimum distance decoding (GMDD) of the component codes, where the last stage of the GMDD makes a decision based on the Hamming distance metric. The proposed algorithm closes half of the gap between conventional iterative bounded distance decoding (iBDD) and turbo product decoding based on the Chase–Pyndiah algorithm, at the expense of some increase in complexity. Furthermore, the proposed algorithm entails only a limited increase in data flow compared to iBDD.

   This work was financially supported by the Knut and Alice Wallenberg Foundation, the Swedish Research Council under grant 2016-04253, and the Ericsson Research Foundation.

                              I. INTRODUCTION

   Applications requiring very high throughputs, such as fiber-optic communications and high-speed wireless communications, have recently triggered a significant amount of research on low-complexity decoders. While codes-on-graphs such as low-density parity-check (LDPC) codes and turbo codes have been shown to provide close-to-capacity performance under belief propagation (BP) decoding, scaling their BP decoders to yield throughputs of the order of several Gbps or even 1 Tbps, as required for example for future optical metro networks, is a very challenging task. One of the main bottlenecks is the data flow required by the exchange of soft information in iterative BP decoding. This has spurred a great deal of research on novel low-complexity decoding algorithms.

   Several works have attempted to reduce the decoding complexity of BP decoding of LDPC codes, see, e.g., [1]–[4]. For high-throughput applications, an alternative to LDPC codes with (BP) soft decision decoding (SDD) is to consider hard decision decoding (HDD). Product codes (PCs) [5], half-product codes [6], staircase codes [7], braided codes [8], and other product-like code structures [9] with HDD based on bounded distance decoding (BDD) of the component codes (which we refer to here as iterative BDD (iBDD)) yield excellent performance with a significantly reduced data flow, hence achieving very high throughputs. However, this comes at the expense of a performance loss (typically larger than 1 dB) compared to SDD.

   To close the performance gap between iBDD of product-like codes and SDD of LDPC codes or product-like codes, yet with throughputs and energy consumption close to those of iBDD, another line of research recently explored is to improve the performance of conventional iBDD. In [10], an algorithm was proposed that exploits conflicts between component codes in order to assess their reliabilities even when no channel reliability information is available. The algorithm, dubbed anchor decoding (AD), improves the performance of iBDD at the expense of some increase in decoding complexity. In [11], a decoding algorithm based on marking the least reliable bits was proposed for staircase codes. In [12], we proposed a decoding algorithm based on BDD of the component codes, named iBDD with scaled reliability (iBDD-SR). The algorithm in [12] improves the performance of iBDD by exploiting channel reliabilities as proposed in [13] for LDPC codes, while still only exchanging binary (i.e., hard-decision) messages between component decoders, similar to iBDD. iBDD-SR improves upon iBDD and AD, and achieves the same throughput as iBDD with a slight increase in energy consumption [14]. In [15], we proposed an algorithm based on generalized minimum distance decoding (GMDD) of the component codes. The proposed algorithm closes over 50% of the performance gap between iBDD and turbo product decoding (TPD) based on the Chase–Pyndiah algorithm [16], with lower complexity than TPD. However, the algorithm, which we referred to as iterative GMDD with scaled reliability (iGMDD-SR), requires the exchange of soft information between the component decoders and hence entails a decoder data flow equivalent to that of TPD and significantly higher than that of iBDD.

   In this paper, we propose a novel binary message passing (BMP) decoding algorithm for product codes based on GMDD of the component codes, which we refer to as BMP-GMDD. The proposed algorithm follows the same principle as the iGMDD-SR algorithm proposed in [15], but a crucial difference is that the Hamming distance metric is used at the final stage of the GMDD of the component codes. In contrast to iGMDD-SR, the resulting algorithm does not require the exchange of soft information, but the exchange of the hard decisions on the code bits (as in conventional iBDD) and an ordered list of the dmin − 1 least reliable code bits for each component code, where dmin is the minimum Hamming distance of the component code. This list can be represented by a small number of bits. The proposed algorithm yields performance very close to that of iGMDD-SR, closing around 50% of
the performance gap between iBDD and TPD, while entailing only a small increase in data flow compared to iBDD (between 8.5% and 34.3%, depending on the code parameters).

   Notation: We use boldface letters to denote vectors and matrices, e.g., x and X = [xi,j]. The i-th row and the j-th column of matrix X are denoted by Xi,: and X:,j, respectively. |a| denotes the absolute value of a, ⌊a⌋ the largest integer smaller than or equal to a, and ⌈a⌉ the smallest integer larger than or equal to a. A Gaussian distribution with mean µ and variance σ² is denoted by N(µ, σ²).

Fig. 1. Code array (left) and simplified Tanner graph (right) of a PC with identical component code of length n = 6 for row and column codes. In the simplified Tanner graph, the CNs are represented by squares (the CNs on the left represent the column codes and the CNs on the right represent the row codes) and degree-2 VNs are represented as simple edges. The third column code and the fourth row code are highlighted.

                              II. PRELIMINARIES

   Let C be an (n, k, dmin) binary linear code, where n, k, and dmin are the code length, dimension, and minimum distance, respectively. We consider two-dimensional PCs with identical binary Bose–Chaudhuri–Hocquenghem (BCH) component code C for the row and column codes. Such a PC, of parameters (n², k², dmin²) and rate R = k²/n², is defined as the set of all n × n arrays such that each row and each column of the array is a codeword of C. Thus, a codeword of the PC can be represented by an n × n binary matrix C = [ci,j]. Alternatively, a PC can be defined via a Tanner graph with 2n constraint nodes (CNs), where n CNs correspond to the row codes and n CNs correspond to the column codes. The graph has n² variable nodes (VNs) corresponding to the n² code bits. The code array and (simplified) Tanner graph of a two-dimensional PC with n = 6 are shown in Fig. 1.

   We assume transmission over the binary-input additive white Gaussian noise (AWGN) channel. The channel observation corresponding to code bit ci,j is given by

                              yi,j = xi,j + zi,j,

where xi,j = (−1)^ci,j and zi,j ∼ N(0, (2REb/N0)^−1), with Eb/N0 being the signal-to-noise ratio. We denote by L = [Li,j] the matrix of channel log-likelihood ratios (LLRs) and by R = [ri,j] the matrix of hard decisions at the channel output, where ri,j is obtained by mapping the sign of Li,j according to 1 ↦ 0 and −1 ↦ 1. We denote this mapping by B(·), i.e., ri,j = B(Li,j). With some abuse of notation, we also write R = B(L).
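   For concreteness, the following minimal Python sketch generates the channel LLR matrix L = [Li,j] and the hard-decision matrix R = B(L) for a given PC codeword, using the noise variance (2REb/N0)^−1 stated above. The function name channel_llrs and the use of NumPy are our own choices and are not part of the paper.

```python
import numpy as np

def channel_llrs(C, R_code, EbN0_dB, seed=0):
    """BI-AWGN channel of Section II for an n x n PC codeword C (entries 0/1).

    Returns the LLR matrix L = [L_ij] and the hard-decision matrix R = B(L),
    using the noise variance (2 R Eb/N0)^{-1} given in the text.
    """
    rng = np.random.default_rng(seed)
    EbN0 = 10.0 ** (EbN0_dB / 10.0)
    sigma2 = 1.0 / (2.0 * R_code * EbN0)   # noise variance of z_ij
    X = (-1.0) ** C                        # BPSK mapping x_ij = (-1)^{c_ij}
    Y = X + rng.normal(0.0, np.sqrt(sigma2), C.shape)
    L = 2.0 * Y / sigma2                   # channel LLRs (positive favors bit 0)
    R = (L < 0).astype(int)                # B(.): sign +1 -> 0, sign -1 -> 1
    return L, R
```

   For the (256, 239, 6) eBCH-based PC considered in Section V, R_code = 239²/256² ≈ 0.8716.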
A. Generalized Minimum Distance Decoding

   Consider the decoding of a BCH component code of length n and the vector of channel LLRs l = (L1, ..., Ln) corresponding to the received vector r = (r1, ..., rn). GMDD is based on multiple algebraic error-erasure decoding attempts [17]. In particular, the decoder ranks the coded bits in terms of their reliabilities |L1|, ..., |Ln|. Then, the m least reliable bits in r are erased, where m ∈ Modd ≜ {dmin − 1, dmin − 3, ..., 2} if dmin is odd and m ∈ Meven ≜ {dmin − 1, dmin − 3, ..., 3} if dmin is even. For later use, we denote by L the ordered list of the dmin − 1 least reliable code bits. It can be readily checked that |Modd| = |Meven| = t, where t = ⌊(dmin − 1)/2⌋ is the error correcting capability of the code. Together with the received vector r, this gives a list of t + 1 trial vectors r̃i, i = 1, ..., t + 1, out of which t vectors contain both erasures and (possibly) errors. Finally, algebraic error-erasure decoding [18, Sec. 6.6] is applied to each trial vector r̃i, resulting in a set of candidate codewords, of size at most t + 1, denoted by S. If decoding fails for all t + 1 vectors in the list, a failure is declared. Otherwise, the decoder picks among all candidate codewords in S the one that minimizes the generalized distance dGD(r, c) [17],

        ĉ = arg min_{c ∈ S} dGD(r, c)
          = arg min_{c ∈ S} [ Σ_{i: ri = ci} (1 − αi) + Σ_{i: ri ≠ ci} (1 + αi) ],                (1)

where αi ≜ |Li| / max_{1 ≤ j ≤ n} |Lj|. Note that if all input LLRs L1, ..., Ln have the same magnitude, we have αi = 1 for all i = 1, ..., n and (1) reverts to 2dH(r, ĉ), where dH(r, ĉ) is the Hamming distance between r and ĉ.

   By introducing erasures and performing multiple error-erasure component decoding attempts, GMDD can decode beyond half the minimum distance of the code.
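   The trial-vector construction and the selection rule (1) can be sketched as follows in Python. This is only an illustration under stated assumptions: errors_erasures_decode stands in for an algebraic error-erasure decoder of the component BCH code [18, Sec. 6.6], which is not implemented here, and the function names are ours.

```python
import numpy as np

def gmdd(r, L, dmin, errors_erasures_decode):
    """Sketch of GMDD (Section II-A) for one component codeword.

    r: hard-decision vector (0/1), L: channel LLRs, dmin: minimum distance.
    errors_erasures_decode(r, erasure_mask) is an assumed algebraic
    error-erasure decoder returning a codeword (numpy 0/1 array) or None.
    Returns the candidate in S minimizing the generalized distance (1),
    or None if all t+1 decoding attempts fail.
    """
    n = len(r)
    t = (dmin - 1) // 2
    order = np.argsort(np.abs(L))                 # least reliable positions first
    alpha = np.abs(L) / np.max(np.abs(L))         # normalized reliabilities

    # t+1 trials: no erasures, then dmin-1, dmin-3, ... least reliable bits erased
    erasure_counts = [0] + list(range(dmin - 1, 0, -2))[:t]
    S = []
    for m in erasure_counts:
        mask = np.zeros(n, dtype=bool)
        mask[order[:m]] = True                    # erase the m least reliable bits
        c = errors_erasures_decode(r, mask)
        if c is not None:
            S.append(c)
    if not S:
        return None                               # decoding failure

    def d_gd(c):                                  # generalized distance (1)
        agree = (r == c)
        return np.sum(1 - alpha[agree]) + np.sum(1 + alpha[~agree])

    return min(S, key=d_gd)
```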
     III. BINARY MESSAGE PASSING DECODING BASED ON GENERALIZED MINIMUM DISTANCE DECODING

   In this section, we propose a BMP decoding algorithm for PCs based on GMDD of the component codes. We refer to it as BMP-GMDD. The algorithm follows the same principle as the iGMDD-SR algorithm that we proposed in [15]. However, compared to iGMDD-SR, the proposed BMP-GMDD does not require the exchange of the reliabilities on the code bits between the row and column decoders. To achieve that, rather than considering the generalized distance in (1) to perform the decision at the last stage of GMDD of the row and column decoders as in [15], we perform the decision based on the Hamming distance, i.e., among all candidate codewords in S (see Section II-A), the decoder selects the one that minimizes dH(r, c), i.e., the decision in (1) is substituted by

                              ĉ = arg min_{c ∈ S} dH(r, c).                (2)

   Making the decision based on the Hamming distance instead of the generalized distance entails a small performance loss, as the decision does not take into consideration the normalized
reliabilities αi. However, this allows us to significantly reduce the decoder data flow, as explained later.

Fig. 2. Block diagram showing the information flow from the i-th row decoder to the j-th column decoder in BMP-GMDD. The message at the input of the i-th row decoder is the vector of hard decisions on the code bits Ψ_{i,:}^{c,(ℓ−1)} and the ordered list of the dmin − 1 least reliable bits L_i^{r,(ℓ−1)} from the decoding of the column codes at the previous iteration.

   The proposed BMP-GMDD algorithm works as follows. Without loss of generality, assume that the decoding starts with the row codes and let us consider the decoding of the i-th row code at iteration ℓ. Let Ψ^{c,(ℓ−1)} = [ψ_{i,j}^{c,(ℓ−1)}] be the matrix of hard decisions on code bits ci,j after the decoding of the n column codes at iteration ℓ − 1. Also, let L_i^{r,(ℓ−1)} be the ordered list of the dmin − 1 least reliable bits of codeword Ci,: from the decoding of the column codes at iteration ℓ − 1. Note that in the first iteration the list L_i^{r,(ℓ−1)} is built according to the ordering of the channel reliabilities Li,: = (Li,1, ..., Li,n). Row decoding of the i-th row code is then performed based on Ψ_{i,:}^{c,(ℓ−1)} and L_i^{r,(ℓ−1)}. First, GMDD of the i-th row code based on the Hamming distance is performed based on Ψ_{i,:}^{c,(ℓ−1)} and L_i^{r,(ℓ−1)}, as explained in Section II-A (see (2)). Note that GMDD does not provide reliability information about the decoded bits, i.e., it is a soft-input hard-output decoding algorithm. In order to provide the column decoders with the list of the m least reliable bits for each codeword C:,j after the decoding of the row codes at iteration ℓ, we do the following. The output bits of GMDD are mapped according to 0 ↦ +1 and 1 ↦ −1 if GMDD is successful, and mapped to 0 if GMDD fails. Let μ̄_{i,j}^{r,(ℓ)} ∈ {±1, 0} be the result of this mapping for the decoded bit corresponding to code bit ci,j. The reliability information is then formed according to

                              μ_{i,j}^{r,(ℓ)} = w_i^{r,(ℓ)} · μ̄_{i,j}^{r,(ℓ)} + Li,j,                (3)

where w_i^{r,(ℓ)} > 0 is a scaling factor that needs to be optimized. Then, the hard decision on ci,j made by the i-th row decoder is formed as

                              ψ_{i,j}^{r,(ℓ)} = B(μ_{i,j}^{r,(ℓ)}).                (4)

   The hard decision ψ_{i,j}^{r,(ℓ)} is the binary message on code bit ci,j passed from the i-th row code to the j-th column code, i.e., from the i-th row CN to the j-th column CN (see Fig. 1). In particular, after applying this procedure to all row codes, the matrix Ψ^{r,(ℓ)} = [ψ_{i,j}^{r,(ℓ)}] is formed and used as the input for the n column decoders. Furthermore, after the decoding of all row codes, for each column codeword C:,j, the corresponding code bits are ranked according to the reliabilities (μ_{1,j}^{r,(ℓ)}, ..., μ_{n,j}^{r,(ℓ)}). Then the m least reliable bits are stored in the list L_j^{c,(ℓ)}, which is passed to the j-th column decoder.

   The decoding of the n column codes at iteration ℓ is then performed based on the hard decisions Ψ^{r,(ℓ)} and the lists of least reliable bits L_1^{c,(ℓ)}, ..., L_n^{c,(ℓ)}, as explained for the i-th row decoder above. After the decoding of the n column codes at iteration ℓ, the matrix Ψ^{c,(ℓ)} = [ψ_{i,j}^{c,(ℓ)}] of hard-decision bits and the lists L_1^{r,(ℓ)}, ..., L_n^{r,(ℓ)} are passed to the n row decoders for the next decoding iteration. The iterative process continues until a maximum number of iterations is reached. The BMP-GMDD decoding of PCs is schematized in Fig. 2.

   Remark: With reference to Fig. 2, the iGMDD-SR algorithm proposed in [15] passes the soft information μ_{i,j}^{r,(ℓ)} to the j-th column decoder, which entails a significantly higher decoder data flow compared to BMP-GMDD.
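   As a rough illustration of the update rules (3) and (4) and of the list construction, the following sketch performs one row half-iteration. The routine gmdd_hamming(r, positions) is an assumed GMDD decoder that erases bits from the supplied ordered list of least reliable positions and makes its final decision with the Hamming distance (2); it is not implemented here. The common scaling factor w and all function names are our own simplifications.

```python
import numpy as np

def bmp_gmdd_row_update(Psi_c_prev, L_chan, lists_prev, w, dmin, gmdd_hamming):
    """One half-iteration (all row codes) of BMP-GMDD, following (3)-(4).

    Psi_c_prev: n x n hard decisions from the column decoders at iteration l-1.
    L_chan:     n x n channel LLRs.
    lists_prev: per-row ordered lists of the dmin-1 least reliable positions.
    w:          scaling factor w^(l) > 0 (assumed shared by all rows).
    Returns the hard-decision matrix Psi_r and the new per-column lists.
    """
    n = Psi_c_prev.shape[0]
    Mu = np.zeros((n, n))
    for i in range(n):
        c_hat = gmdd_hamming(Psi_c_prev[i, :], lists_prev[i])
        if c_hat is None:
            mu_bar = np.zeros(n)                  # decoding failure -> mapped to 0
        else:
            mu_bar = (-1.0) ** c_hat              # 0 -> +1, 1 -> -1
        Mu[i, :] = w * mu_bar + L_chan[i, :]      # reliability update (3)
    Psi_r = (Mu < 0).astype(int)                  # hard decision (4): B(mu)
    # For each column codeword, pass the dmin-1 least reliable positions,
    # ordered by |mu|, to the corresponding column decoder.
    lists_next = [list(np.argsort(np.abs(Mu[:, j]))[:dmin - 1]) for j in range(n)]
    return Psi_r, lists_next
```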
                     IV. DECODING COMPLEXITY DISCUSSION

   A thorough complexity analysis of BMP-GMDD should include, besides pure algorithmic aspects, implementation implications in terms of memory requirements, wiring, and transistor switching activity [14], and is beyond the scope of this paper. We however provide a high-level discussion of the complexity and data flow of BMP-GMDD compared to that of conventional iBDD, AD [10], iBDD-SR [12], and iGMDD-SR [15].

   Conventional iBDD, iBDD-SR, and AD are based on BDD of the component codes and are characterized by a similar complexity and data flow. In particular, it was shown in [14] that for the same data throughput (up to 1 Tbps), iBDD-SR provides a 0.2–0.25 dB gain with respect to iBDD with only slightly higher energy consumption.

   Both iGMDD-SR and the proposed BMP-GMDD are based on GMDD of the component codes. In this case, t error-erasure decoding attempts and one BDD attempt are required. Each error-erasure decoding attempt has a cost close to a run of BDD. Each decoding attempt may result in a candidate codeword that is used to form a list of size up to t + 1, as explained in Section II-A. The minimization of the generalized distance in (1) for iGMDD-SR and of the Hamming distance in (2) for BMP-GMDD has a negligible cost with respect to the t + 1 decoding attempts. On the other hand, both BMP-GMDD and iGMDD-SR require finding the dmin − 1 least reliable bits and sorting them according to their reliabilities, which adds some further complexity.

   Note that iGMDD-SR requires the component decoders to be provided with soft information by the previous decoding iteration. Therefore, its data flow is significantly higher than that of iBDD, iBDD-SR, and AD, and is the same as that of soft decision TPD. For an a-bit representation of the soft information, the data flow is roughly a times that of iBDD, iBDD-SR, and AD. In contrast, BMP-GMDD requires only the exchange of the hard decisions and of the ordered list of the dmin − 1 least reliable bits for each row and column codeword. For a component code of length n, the index of each code bit can be represented with ⌈log2(n)⌉ bits. Furthermore, for each of the dmin − 1 least reliable bits we need to provide their ordering in terms
of reliabilities. Thus, each ordered list L_i^{r,(ℓ)}, i = 1, ..., n, and L_j^{c,(ℓ)}, j = 1, ..., n, can be represented with

                              (⌈log2(n)⌉ + ⌈log2(dmin − 1)⌉)(dmin − 1)

bits each. This is the additional data flow (per row and column code decoding) compared to conventional iBDD. For instance, for a component code of length n = 256 bits, the data flow of BMP-GMDD is 15.625% and 34.375% higher than that of iBDD for dmin = 5 (t = 2) and dmin = 9 (t = 4), respectively. For a component code of length n = 512 bits, the increase in data flow is reduced to 8.593% and 18.75%, respectively. Thus, the increase in data flow of BMP-GMDD compared to iBDD is very limited and is much lower than the data flow of iGMDD-SR and conventional TPD.
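   The percentages quoted above follow directly from this expression; a short Python check (the helper name is ours):

```python
from math import ceil, log2

def bmp_gmdd_list_overhead(n, dmin):
    """Extra data flow of BMP-GMDD over iBDD, per component codeword, in percent.

    Each of the dmin-1 least reliable bits needs ceil(log2(n)) bits for its index
    and ceil(log2(dmin-1)) bits for its rank in the ordered list; the overhead is
    relative to the n hard-decision bits exchanged by iBDD.
    """
    list_bits = (ceil(log2(n)) + ceil(log2(dmin - 1))) * (dmin - 1)
    return 100.0 * list_bits / n

# Reproduces the figures quoted in the text:
# n=256: dmin=5 -> 15.625 %, dmin=9 -> 34.375 %, dmin=6 -> 21.48 % (cf. Section V)
# n=512: dmin=5 -> ~8.6 %,   dmin=9 -> 18.75 %
print(bmp_gmdd_list_overhead(256, 5), bmp_gmdd_list_overhead(256, 9), bmp_gmdd_list_overhead(256, 6))
```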

                                        TABLE I
   COMPARISON OF DIFFERENT PRODUCT DECODING ALGORITHMS. CODING GAINS AND CAPACITY GAPS ARE MEASURED AT A BER OF 10^−6

   acronym      | decoding algorithm                                                           | channel reliabilities | exchanged messages | gain over iBDD (dB) | gap from capacity (dB)
   iBDD         | iterative bounded distance decoding                                          | no                    | hard               | -                   | 1.03 (HD)
   iBDD (ideal) | iterative bounded distance decoding without miscorrections                   | no                    | hard               | 0.28                | 0.75 (HD)
   iBDD-SR      | iterative bounded distance decoding with scaled reliability [12]             | yes                   | hard               | 0.27                | 2.3 (SD)
   AD           | anchor decoding [10]                                                         | no                    | hard               | 0.18                | 0.85 (HD)
   BMP-GMDD     | binary message passing decoding based on GMD decoding                        | yes                   | hard               | 0.51                | 1.79 (SD)
   iGMDD-SR     | iterative generalized minimum distance decoding with scaled reliability [15] | yes                   | soft               | 0.58                | 1.72 (SD)
   TPD          | turbo product decoding (Chase–Pyndiah) [16]                                  | yes                   | soft               | 1.08                | 1.22 (SD)

Fig. 3. BER performance of different decoding algorithms for a PC with (256, 239, 6) eBCH component codes and transmission over the AWGN channel. The PC rate is 0.8716 and the maximum number of decoding iterations is 10. (The figure plots BER versus Eb/N0 in dB for iBDD, ideal iBDD, AD, iBDD-SR, iGMDD-SR, BMP-GMDD, and TPD, together with the HD and SD capacity limits.)

                              V. NUMERICAL RESULTS

   In Fig. 3, we give the bit error rate (BER) performance of BMP-GMDD for a PC with double-error-correcting extended BCH (eBCH) codes with parameters (256, 239, 6) as component codes and transmission over the AWGN channel. The resulting PC has rate R = 239²/256² ≈ 0.8716. For comparison purposes, we also plot the performance of conventional iBDD, AD [10], iBDD-SR [12], iGMDD-SR [15], and TPD based on the Chase–Pyndiah decoding [16]. For all algorithms, a maximum of ℓmax = 10 decoding iterations is performed. As a reference, the Shannon limit for SDD and HDD is also plotted in the figure.

   Both BMP-GMDD and iGMDD-SR require a proper choice of the scaling factors w_i^{(ℓ)}. For simplicity, we consider the same scaling factor for all row and column codes, i.e., w_i^{r,(ℓ)} = w_j^{c,(ℓ)} = w^{(ℓ)} for all i, j = 1, ..., n, and jointly optimize the vector w = (w^{(1)}, ..., w^{(ℓmax)}) by using Monte Carlo estimates of the BER for a fixed Eb/N0 as the optimization criterion. Intuitively, one would expect that the decisions of the component decoders become more reliable with an increasing number of iterations, whereas the channel observations become less informative. Therefore, in order to reduce the optimization search space, we only consider vectors w with monotonically increasing entries. iBDD-SR also requires scaling factors (see [12], [19]). In this case, the scaling factors can be derived using density evolution [13], [19].
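   The paper does not detail the search procedure itself. The following greedy sketch is one plausible way to explore only monotonically non-decreasing vectors w given a Monte Carlo BER estimator; estimate_ber is an assumed, user-supplied simulation routine, and the greedy per-iteration search is our simplification, not necessarily the authors' method.

```python
import numpy as np

def optimize_scaling_factors(estimate_ber, l_max=10, grid=np.arange(0.5, 3.01, 0.25)):
    """Greedy search over monotonically non-decreasing scaling-factor vectors.

    estimate_ber(w) is an assumed Monte Carlo BER estimator at a fixed Eb/N0
    for the vector w = (w^(1), ..., w^(l_max)).
    """
    w = np.full(l_max, float(grid[0]))
    for l in range(l_max):
        best_val, best_ber = w[l], np.inf
        for v in grid:
            if l > 0 and v < w[l - 1]:
                continue                       # enforce non-decreasing entries
            w_try = w.copy()
            w_try[l:] = v                      # keep the tail non-decreasing too
            ber = estimate_ber(w_try)
            if ber < best_ber:
                best_val, best_ber = v, ber
        w[l:] = best_val
    return w
```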
   The two reference curves are conventional iBDD (red curve with empty triangle markers) and TPD (purple curve with pentagon markers), with the latter performing 1.1 dB better at a BER of 10^−7. AD (dark blue curve with filled circle markers) and iBDD-SR (pink curve with filled triangle markers) outperform conventional iBDD by 0.18 dB and 0.27 dB, respectively, at the same BER. As a reference, we also show the performance of ideal iBDD (brown curve with empty circle markers), where a genie prevents miscorrections. Interestingly, at a BER of 10^−7 iBDD-SR yields the same performance as ideal iBDD.¹ iGMDD-SR (green curve with diamond markers) outperforms iBDD, iBDD-SR, and AD and closes ≈ 54% of the gap between iBDD and TPD, at the expense of an increased complexity and data flow.

   ¹ We remark that the performance of iBDD-SR in Fig. 3 is improved compared to [15], since in this paper we use the optimized scaling factors based on the density evolution derived in [19], rather than based on Monte Carlo simulations as in [15].

   The performance of the proposed BMP-GMDD is given by the blue curve with square markers. The proposed decoding algorithm yields performance very close to that of iGMDD-SR (a performance degradation compared to iGMDD-SR of only 0.074 dB is observed at a BER of 10^−7), while achieving a significantly lower data flow. BMP-GMDD closes around
50% of the performance gap between iBDD and TPD, while requiring only a 21.48% higher data flow than iBDD.

   The coding gain improvements of all considered decoding algorithms over iBDD are summarized in Table I (fifth column). In the table we also indicate whether the algorithms exploit the channel reliabilities or not, the nature of the messages exchanged in the iterative decoding (hard or soft), as well as the gap to capacity for all schemes (sixth column). Note that the performance of iBDD and AD should be compared to the hard decision (HD) capacity, while the performance of iBDD-SR, iGMDD-SR, BMP-GMDD, and TPD should be compared to the soft decision (SD) capacity, since the channel LLRs are exploited in the decoding. Overall, one can see a clear trade-off between BER performance and decoding complexity for the different algorithms.

   We remark that if the channel LLRs are highly reliable but with the wrong sign, one can expect that the decoding rule in (3) will be unable to recover from these errors. In this situation, although μ̄_{i,j}^{r,(ℓ)} may correspond to a correct decision, it is overridden by the channel LLR, i.e., the hard decision on code bit ci,j made by the i-th row decoder, ψ_{i,j}^{r,(ℓ)}, becomes ψ_{i,j}^{r,(ℓ)} = B(w_i^{r,(ℓ)} · μ̄_{i,j}^{r,(ℓ)} + Li,j) = B(Li,j) (cf. (3) and (4)), leading to an erroneously decoded bit. Therefore, one needs to be careful when applying BMP-GMDD to avoid the appearance of an error floor. In particular, to avoid such errors and a high error floor, we run BMP-GMDD for some iterations and then append a few conventional iBDD iterations, where the channel reliabilities are disregarded when making the decision on a given code bit. The appended iBDD iterations increase the chance to correct transmission errors with high channel reliability. By doing so, an error floor is avoided. The same discussion applies to iBDD-SR [12], [19] and iGMDD-SR [15]. For the simulation of BMP-GMDD, iBDD-SR, and iGMDD-SR in Fig. 3 we considered 8 decoding iterations of the algorithms appended with 2 iBDD iterations.
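   A toy numerical example of the override effect described above, with purely illustrative values:

```python
# Override effect of rule (3)-(4): a correct GMDD decision mu_bar = +1 (bit 0)
# is outweighed by a strong wrong-sign channel LLR, so the final hard decision
# equals B(L_ij) and the bit stays in error. Values are purely illustrative.
w, mu_bar, L_ij = 2.0, +1, -5.0
mu = w * mu_bar + L_ij                   # (3): mu = 2.0*(+1) + (-5.0) = -3.0
psi = 0 if mu > 0 else 1                 # (4): B(mu) = 1 -> erroneous bit
# An appended iBDD-like iteration that disregards the channel LLR instead
# follows the component-decoder output alone and recovers the bit:
psi_ibdd = 0 if mu_bar > 0 else 1        # = 0, i.e., the correct decision
print(psi, psi_ibdd)                     # -> 1 0
```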
                              VI. CONCLUSION

   We proposed a new message passing decoding algorithm for product codes based on generalized minimum distance decoding, i.e., error and erasure decoding, of the component codes, where the last stage of GMDD is based on the Hamming distance metric. The proposed algorithm, dubbed BMP-GMDD, exploits soft information but requires the exchange of only hard decisions and a short ordered list of the least reliable bits between component decoders, hence introducing a limited increase in data flow compared to conventional iterative bounded distance decoding. For the considered scenario based on (256, 239, 6) double-error-correcting eBCH component codes, the proposed algorithm closes about 50% of the performance gap between iBDD and turbo product decoding and yields performance very close to that of the algorithm iGMDD-SR introduced in [15], with a much lower data flow, only 21.48% higher than that of iBDD. The increase in data flow is even lower for longer component codes. While in this paper we considered PCs, the proposed algorithm can be extended to other classes of product-like codes such as staircase codes. Overall, the proposed BMP-GMDD algorithm provides a very good performance-complexity tradeoff and is appealing for very high-throughput applications such as fiber-optic communications.

                              ACKNOWLEDGMENT

   The authors would like to thank Dr. Christian Häger for providing the simulation results of anchor decoding in Fig. 3.

                              REFERENCES

 [1] A. Darabiha, A. C. Carusone, and F. R. Kschischang, "Power reduction techniques for LDPC decoders," IEEE J. Solid-State Circ., vol. 43, no. 8, pp. 1835–1845, Aug. 2008.
 [2] T. Mohsenin, D. N. Truong, and B. M. Baas, "A low-complexity message-passing algorithm for reduced routing congestion in LDPC decoders," IEEE Trans. Circ. and Sys. I: Regular Papers, vol. 57, no. 5, pp. 1048–1061, May 2010.
 [3] F. Angarita, J. Valls, V. Almenar, and V. Torres, "Reduced-complexity min-sum algorithm for decoding LDPC codes with low error-floor," IEEE Trans. Circ. and Sys. I: Regular Papers, vol. 61, no. 7, pp. 2150–2158, Jul. 2014.
 [4] K. Cushon, P. Larsson-Edefors, and P. Andrekson, "Low-power 400-Gbps soft-decision LDPC FEC for optical transport networks," IEEE/OSA J. Lightw. Technol., vol. 34, no. 18, pp. 4304–4311, Sep. 2016.
 [5] P. Elias, "Error-free coding," Trans. IRE Professional Group on Inf. Theory, vol. 4, no. 4, pp. 29–37, Sep. 1954.
 [6] J. Justesen, "Performance of product codes and related structures with iterated decoding," IEEE Trans. Commun., vol. 59, no. 2, pp. 407–415, Feb. 2011.
 [7] B. P. Smith, A. Farhood, A. Hunt, F. R. Kschischang, and J. Lodge, "Staircase codes: FEC for 100 Gb/s OTN," IEEE/OSA J. Lightw. Technol., vol. 30, no. 1, pp. 110–117, Jan. 2012.
 [8] Y. Jian, H. D. Pfister, and K. R. Narayanan, "Approaching capacity at high rates with iterative hard-decision decoding," IEEE Trans. Inf. Theory, vol. 63, no. 9, pp. 5752–5773, Sep. 2017.
 [9] C. Häger, H. D. Pfister, A. Graell i Amat, and F. Brännström, "Density evolution for deterministic generalized product codes on the binary erasure channel at high rates," IEEE Trans. Inf. Theory, vol. 63, no. 7, pp. 4357–4378, Jul. 2017.
[10] C. Häger and H. D. Pfister, "Approaching miscorrection-free performance of product codes with anchor decoding," IEEE Trans. Commun., vol. 66, no. 7, pp. 2797–2808, Jul. 2018.
[11] Y. Lei, A. Alvarado, B. Chen, X. Deng, Z. Cao, J. Li, and K. Xu, "Decoding staircase codes with marked bits," in Proc. IEEE Int. Symp. Turbo Codes & Iterative Information Processing (ISTC), Hong Kong, Dec. 2018.
[12] A. Sheikh, A. Graell i Amat, and G. Liva, "Iterative bounded distance decoding of product codes with scaled reliability," in Proc. Eur. Conf. Opt. Commun. (ECOC), Rome, Italy, Sep. 2018.
[13] G. Lechner, T. Pedersen, and G. Kramer, "Analysis and design of binary message passing decoders," IEEE Trans. Commun., vol. 60, no. 3, pp. 601–607, Mar. 2012.
[14] C. Fougstedt, A. Sheikh, A. Graell i Amat, G. Liva, and P. Larsson-Edefors, "Energy-efficient soft-assisted product decoders," in Proc. OSA Optical Fiber Commun. Conf. (OFC), San Diego, CA, Mar. 2019.
[15] A. Sheikh, A. Graell i Amat, G. Liva, C. Häger, and H. D. Pfister, "On low-complexity decoding of product codes for high-throughput fiber-optic systems," in Proc. IEEE Int. Symp. Turbo Codes & Iterative Information Processing (ISTC), Hong Kong, Dec. 2018.
[16] R. M. Pyndiah, "Near-optimum decoding of product codes: block turbo codes," IEEE Trans. Commun., vol. 46, no. 8, pp. 1003–1010, Aug. 1998.
[17] G. Forney, "Generalized minimum distance decoding," IEEE Trans. Inf. Theory, vol. 12, no. 2, pp. 125–131, Apr. 1966.
[18] S. Lin and D. J. Costello Jr., Error Control Coding, 2nd ed. Upper Saddle River, NJ, USA: Prentice-Hall, 2004.
[19] A. Sheikh, A. Graell i Amat, and G. Liva, "Binary message passing decoding of product-like codes," IEEE Trans. Commun. (submitted), 2019.