Decoding of Non-Binary LDPC Codes Using the Information Bottleneck Method

Maximilian Stark, Gerhard Bauch
Hamburg University of Technology
Institute of Communications
21073 Hamburg, Germany
{maximilian.stark,bauch}@tuhh.de

Jan Lewandowsky, Souradip Saha
Fraunhofer Institute for Communication, Information Processing and Ergonomics (FKIE)
53343 Wachtberg, Germany
{jan.lewandowsky,souradip.saha}@fkie.fraunhofer.de

arXiv:1810.08921v2 [cs.IT] 23 Feb 2019

This work has been accepted for publication at the IEEE International Conference on Communications (ICC'19). Copyright may be transferred without notice, after which this version may no longer be accessible.

Abstract—Recently, a novel lookup table based decoding method for binary low-density parity-check codes has attracted considerable attention. In this approach, mutual-information-maximizing lookup tables replace the conventional operations of the variable nodes and the check nodes in message passing decoding. Moreover, the exchanged messages are represented by integers with very small bit width. A machine learning framework termed the information bottleneck method is used to design the corresponding lookup tables. In this paper, we extend this decoding principle from binary to non-binary codes. This is not a straightforward extension but requires a more sophisticated lookup table design to cope with the arithmetic in higher order Galois fields. The provided bit error rate simulations show that our proposed scheme outperforms the log-max decoding algorithm and operates close to sum-product decoding.

I. INTRODUCTION

Shortly after their rediscovery by MacKay [1], binary low-density parity-check (LDPC) codes were generalized to non-binary symbol alphabets over higher order Galois fields with field order q. However, the decoding of these codes using sum-product decoding is computationally much more expensive than the decoding of their binary counterparts. The main computational bottleneck for higher order Galois field LDPC codes is the required convolution of probability distributions at the check nodes. Moreover, the number of bits required to represent the processed probability vectors in hardware is large. Approaches to reduce the implementation complexity of the check node operation range from fast convolution using the fast Walsh-Hadamard transform (FHT) to log-domain decoding with an approximated check node operation [2]–[5]. Despite these very important works, the development of efficient decoding methods for non-binary LDPC codes continues to be an interesting subject of current research for practical purposes, as non-binary LDPC codes have better error correction properties for short block lengths than binary LDPC codes. The latter unfold their capacity-approaching behavior only for very large codeword lengths [6]. Therefore, especially in 5G related scenarios such as massive machine-type communications and ultra-reliable low latency communication (uRLLC), non-binary LDPC codes could be promising candidates if decoders with affordable complexity were available.

Recently, several authors considered a novel approach for decoding of binary LDPC codes [7]–[12]. In these works, a principle was applied which is fundamentally different from state-of-the-art signal processing approaches. Instead of implementing the sum-product algorithm to decode an LDPC code, mutual-information-maximizing lookup tables were used to replace all conventional signal processing steps in an LDPC decoder. These lookup tables process only quantization indices which can be stored using just a few bits in hardware. Moreover, all expensive operations were replaced by simple lookup operations in the designed mutual-information-maximizing lookup tables. In [13] and [14] it was shown that this approach is, in fact, beneficial in comparison to state-of-the-art LDPC decoders in practical decoder implementations. The applied mutual-information-maximizing lookup tables can be constructed using the information bottleneck method [15].

The existing works [7]–[12] only describe the decoding of binary LDPC codes with the proposed method. In this paper, our aim is to extend the fundamental principle also to non-binary LDPC codes. This extension is not straightforward as it requires sophisticated lookup table design approaches. From the results obtained, we observe that the proposed algorithm performs very close to the sum-product algorithm.

The paper contains the following main contributions:
• We devise relevant-information-preserving variable and check node operations using the information bottleneck method, resulting in a novel decoder for non-binary LDPC codes.
• In the resulting decoder, all arithmetic operations are replaced by simple lookups.
• This novel decoder can be applied for arbitrary regular non-binary LDPC codes.
• Inherently, we devise a discrete density evolution scheme for non-binary LDPC codes which can be used to study the performance of non-binary code ensembles under the considered lookup table decoding.
• Despite all operations being simple lookups and all messages passed during decoding being represented with only a few bits, our proposed decoder shows only 0.15 dB performance degradation over E_b/N_0 compared to double-precision sum-product decoding and outperforms double-precision log-max decoding by 0.4 dB for an exemplary code over GF(4).
The rest of this paper is structured as follows. The following section introduces the prerequisites on non-binary LDPC codes, the considered transmission system and the mutual-information-maximizing lookup table design with the information bottleneck method. Section III compares the conventional sum-product algorithm used for the decoding of non-binary LDPC codes with the proposed lookup table based approach in detail. In Section IV we investigate the performance of the proposed approach for an exemplary code. Finally, we conclude the paper in Section V.

Notation: The realizations y ∈ Y from the event space Y of a discrete random variable Y occur with probability Pr(Y = y), and p(y) is the corresponding probability distribution. The cardinality or alphabet size of a random variable is denoted by |Y|. Joint distributions and conditional distributions are denoted p(x, y) and p(x|y); δ(x) denotes the Kronecker delta function.

II. PREREQUISITES

This section briefly reviews non-binary LDPC codes and the information bottleneck method. Then, decoding of binary LDPC codes using the information bottleneck is summarized.

A. Non-binary LDPC Codes

LDPC codes are typically defined using a sparse parity-check matrix H with dimension N_c × N_v such that a parity-check equation for a codeword c fulfills H · c = 0. Each row of H represents a parity-check equation. Such an equation has the form

  \sum_{k=0}^{d_c - 1} \underbrace{h_k c_k}_{c'_k} = 0,    (1)

where the h_k correspond to the non-zero entries of the respective row, the c_k are the corresponding codeword symbols and d_c denotes the check node degree. The arithmetic that has to be applied in (1) depends on the field order of the considered Galois field. For binary codes, all h_k = 1, all c_k ∈ GF(2) = {0, 1} and the sum is a modulo 2 sum. In contrast, in the non-binary case all h_k and c_k, and hence their products c'_k = h_k c_k, are field elements from GF(q). Therefore, the arithmetical rules for multiplication and addition in the respective finite field have to be taken into account. We consider extension fields GF(2^m) = {0, 1, α, α^2, ..., α^(2^m − 2)}, where α is the so-called primitive element of the field. Such a field is generated by a primitive polynomial. The primitive polynomial can be used to derive multiplication and addition rules for two given elements c_i, c_j ∈ GF(2^m). These rules are exemplarily shown for GF(2^2) in Table I. For their exact derivation and more details on the corresponding field theory, we refer the reader to [16].

TABLE I: Arithmetic in GF(2^2)

  Addition c_i + c_j:
             c_j = 0    1     α     α^2
  c_i = 0          0    1     α     α^2
  c_i = 1          1    0     α^2   α
  c_i = α          α    α^2   0     1
  c_i = α^2        α^2  α     1     0

  Multiplication c_i c_j:
             c_j = 0    1     α     α^2
  c_i = 0          0    0     0     0
  c_i = 1          0    1     α     α^2
  c_i = α          0    α     α^2   1
  c_i = α^2        0    α^2   1     α
{0, 1} and the sum is a modulo 2 sum. In contrast, in the non-        bottleneck graph notation introduced in [19]. The inputs
binary case all hk and ck and hence their products c′k = hk ck        (y0 , y1 ) of the shown lookup table are compressed by the
too are field elements from GF(q). Therefore, the arithmetical        mutual-information-maximizing lookup table such that the
rules for multiplication and addition for the respective finite       output t is highly informative about the relevant variable
field have to be taken into account. We consider extension            X. The processed inputs (y0 , y1 ) and the output t of the
                                          m
fields GF(2m ) = {0, 1, α, α2 , . . . , α2 −2 }, where α is the so-   system are considered to be quantization indices from the
called primitive element of the field. Such a field is generated      set T = {0, 1, . . . , |T | − 1}. In contrast, state-of-the-art
by a primitive polynomial. The primitive polynomial can be            signal processing algorithms process quantized samples with
used to derive multiplication and addition rules for two given        a certain precision. However, in the information bottleneck
elements ci , cj ∈ GF(2m ). These rules are exemplarily shown         approach this is not required as the mutual information of the
for GF(22 ) in Table I. For their exact derivation and more           involved variables does not depend on the representation val-
details on the corresponding field theory, we refer the reader        ues of the quantized signal, but only on their joint probability
to [16].                                                              distributions.
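To make the last point concrete, the following small sketch (our illustration, not code from the paper) computes I(X; T) for a deterministic compression t = f(y): the joint p(x, t) is obtained by summing p(x, y) over all y mapped to the same index t, and the resulting mutual information depends only on this joint distribution, not on which integer labels are used for t.

```python
# Small sketch (ours): mutual information of a deterministic compression t = f(y).
import numpy as np

def mutual_information(p_xt):
    """I(X;T) in bits for a joint distribution array p_xt[x, t]."""
    px = p_xt.sum(axis=1, keepdims=True)
    pt = p_xt.sum(axis=0, keepdims=True)
    mask = p_xt > 0
    return float((p_xt[mask] * np.log2(p_xt[mask] / (px @ pt)[mask])).sum())

def compress(p_xy, f, num_t):
    """Joint p(x, t) induced by the deterministic lookup table t = f[y]."""
    p_xt = np.zeros((p_xy.shape[0], num_t))
    for y, t in enumerate(f):
        p_xt[:, t] += p_xy[:, y]
    return p_xt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p_xy = rng.random((4, 16)); p_xy /= p_xy.sum()   # some joint distribution p(x, y)
    f = rng.integers(0, 4, size=16)                  # an arbitrary mapping y -> t, |T| = 4
    print("I(X;Y) =", mutual_information(p_xy))
    print("I(X;T) =", mutual_information(compress(p_xy, f, 4)))   # <= I(X;Y)
```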
C. Information Bottleneck Decoding of Binary LDPC Codes

In prior works on binary LDPC codes, the relevant variable X for a check node is the modulo 2 sum of the bits connected to the check node. Thus, the mutual-information-maximizing lookup table serves as an integer-based replacement for the well-known box-plus operation for log-likelihood ratios. The approach in prior works was to construct lookup tables for each node type and every iteration using a framework which pairs a density evolution technique with an information bottleneck algorithm [7]–[12]. This complex lookup table construction step is performed offline. The lookup tables are pre-generated for a fixed design-E_b/N_0, but used for all E_b/N_0 in practice. Once constructed, all decoding operations become lookup operations in the pre-generated tables. This approach achieves considerable gains in decoding throughput [13] and performance extremely close to double-precision sum-product decoding for binary codes. In the following section, we present all the required steps to generalize the construction framework [10] from binary to non-binary LDPC codes. This generalization is not straightforward and the challenges are manifold. The main reason is the much more sophisticated arithmetic in higher order Galois fields.

III. DECODING NON-BINARY LDPC CODES USING THE INFORMATION BOTTLENECK METHOD

In this section, we describe how a lookup table based decoder for non-binary LDPC codes is built. In each step we start from the conventional sum-product decoding and compare it with the lookup table approach. We first describe the transmission scheme and the channel output quantizer. Then, we explain how check and variable node operations in the sum-product algorithm can be replaced by lookup tables.

A. Transmission Scheme and Channel Output Quantization

We consider a non-binary LDPC encoded transmission over a quantized output, symmetric additive white Gaussian noise (AWGN) channel with binary phase shift keying modulation (BPSK). In the applied scheme, m BPSK symbols are transmitted for each codeword symbol c_k. At the receiver, the received signal is first quantized. The quantizer delivers m outputs y_k = [y_k,0, y_k,1, ..., y_k,m−1]^T for each codeword symbol c_k. The bit width of the applied quantizer is denoted w, such that the outputs y_k,j are from the alphabet {0, 1, ..., 2^w − 1}.

The first step in conventional sum-product decoding of non-binary LDPC codes is the calculation of the symbol probabilities

  p(c_k | y_k) = \frac{p(c_k)}{p(y_k)} \prod_{j=0}^{m−1} p(y_{k,j} | b_{k,j}),    (2)

where the b_k,j denote the bits in the binary representation of c_k. For each symbol, this corresponds to a probability vector which is used as channel knowledge for sum-product decoding.

In contrast, the proposed information bottleneck decoder does not use any probability vector, but processes a single quantization index t_k ∈ {0, 1, ..., |T_chan| − 1} instead. Intuitively, this quantization index should be highly informative about c_k. Such an index t_k can be obtained from y_k using a mutual-information-maximizing lookup table p(t_k|y_k) which is constructed with the information bottleneck method. The required joint distribution p(c_k, y_k) to construct the table follows directly from (2). As a by-product we obtain p(c_k, t_k), which will be used for the construction of subsequent lookup tables. The size of the lookup table can be reduced by using a decomposition into two-input lookup tables, as exemplified in Figure 2 for m = 4 inputs.

Fig. 2: Information bottleneck graph of the lookup table p(t_k|y_k), decomposed into two-input tables for m = 4 quantizer outputs; the mapping is designed such that I(C_k; T_k) is maximized.
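For intuition, a minimal sketch (ours, using a hypothetical per-bit channel table and uniform symbol priors) of how the symbol probabilities in (2) are assembled from the m per-bit quantizer outputs:

```python
# Sketch (ours) of eq. (2) with uniform p(c_k); the per-bit channel law is hypothetical.
import numpy as np

M = 2                # m bits / BPSK symbols per GF(2^m) codeword symbol
W = 3                # quantizer bit width; outputs lie in {0, ..., 2^W - 1}

def symbol_posteriors(y_k, p_y_given_b):
    """p(c_k | y_k) for all q = 2^M field elements, eq. (2) with uniform p(c_k)."""
    q = 1 << M
    post = np.ones(q)
    for c in range(q):
        for j in range(M):
            b = (c >> j) & 1                    # j-th bit b_{k,j} of the symbol c
            post[c] *= p_y_given_b[b, y_k[j]]   # factor p(y_{k,j} | b_{k,j})
    return post / post.sum()                    # normalization takes care of p(y_k)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    p_y_given_b = rng.random((2, 1 << W))                   # hypothetical p(y | b), b in {0, 1}
    p_y_given_b /= p_y_given_b.sum(axis=1, keepdims=True)
    print(symbol_posteriors([5, 2], p_y_given_b))           # one symbol, two quantizer outputs
```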
B. Non-Binary Check Node Operations in Sum-Product Decoding

In sum-product decoding of non-binary LDPC codes, the symbol probabilities (2) are passed to the check nodes. Each check node performs three tasks according to its parity-check equation (1).

1) Multiplication by Edge Weights c'_k = h_k c_k: First, the incoming probability vectors for the incoming symbols c_k are transformed into the probability vectors for the products c'_k incorporating the appropriate edge weight h_k. According to the multiplication rules described in Section II, this corresponds to a cyclic shift of the last 2^m − 1 entries in the probability vectors [16].

2) Summation: Once all p(c'_k) are obtained, the check node computes the convolution of d_c − 1 probability vectors p(c'_k) to account for the summation of the involved c'_k in c'_j = \sum_{k ≠ j} c'_k, which follows from (1). This convolution is usually implemented as a fast convolution using the FHT, resulting in the complexity O(d_c 2^m log_2 2^m).

3) Multiplication by Inverse Edge Weights c_j = h_j^{-1} c'_j: In the last step of the check node update, the outgoing message which is passed to a connected variable node is again found by a cyclic shift of the last 2^m − 1 entries of p(c'_j) according to the inverse edge weight h_j^{-1}.
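The three tasks can be summarized compactly in code. The following sketch (our illustration, not the authors' implementation) performs them for GF(4) in the polynomial basis {0, 1, α = 2, α^2 = 3}; the convolution is written directly over the additive group, where GF(2^m) addition is an XOR of the vector indices, which is exactly the operation the FHT would accelerate.

```python
# Sketch (ours) of the three sum-product check node tasks for GF(4).
import numpy as np

# GF(4) multiplication table matching Table I (row: first factor, column: second factor).
GF4_MUL = [[0, 0, 0, 0],
           [0, 1, 2, 3],
           [0, 2, 3, 1],
           [0, 3, 1, 2]]

def permute_by_weight(p, h):
    """Tasks 1 and 3: probability vector of c' = h * c given the vector of c."""
    out = np.zeros_like(p)
    for c in range(len(p)):
        out[GF4_MUL[h][c]] = p[c]
    return out

def gf_convolve(p1, p2):
    """Task 2: distribution of the GF(2^m) sum of two independent symbols."""
    out = np.zeros_like(p1)
    for a in range(len(p1)):
        for b in range(len(p2)):
            out[a ^ b] += p1[a] * p2[b]   # addition in GF(2^m) is an XOR of indices
    return out

def check_node_message(p_list, h_list, h_inv_out):
    """Extrinsic check node message from the d_c - 1 other incoming probability vectors."""
    shifted = [permute_by_weight(p, h) for p, h in zip(p_list, h_list)]
    conv = shifted[0]
    for msg in shifted[1:]:
        conv = gf_convolve(conv, msg)
    return permute_by_weight(conv, h_inv_out)   # apply the inverse weight of the target edge
```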
C. Non-Binary Check Node Operations from the Information Bottleneck Method

Here, we propose to replace all of the aforementioned operations with mutual-information-maximizing lookup tables. The entire workflow of the check node design with the information bottleneck method is exemplified in Figure 3 for a degree d_c = 5 check node. This check node processes d_c − 1 = 4 incoming quantization indices y_k^in to determine one outgoing quantization index t_0^out which is passed back to the variable node, replacing the probability vector p(c_0) in the sum-product algorithm. Please note that the message y_0^in for c_0 is excluded since extrinsic information on c_0 shall be generated. Message generation has to be carried out using an equivalent structure for all other c_j.

Fig. 3: Information bottleneck graph of the check node lookup table p(t_0^out | y_1^in, h_1, ..., y_4^in, h_4, h_0^{-1}) for d_c = 5 (multiplication equivalents in box 1, convolution equivalent in box 2, inverse edge weight in box 3).

We provide a step-by-step derivation of the joint distributions required as inputs for the information bottleneck algorithms to generate the respective mutual-information-maximizing lookup tables.

1) Multiplication by Edge Weights c'_k = h_k c_k: In Figure 3 the multiplication equivalent lookup table is depicted in the box labeled 1. Obviously, since all incoming quantization indices y_k^in are just unsigned integers, no shift of any probability vector is possible. However, this is not required, since we are only interested in preserving the information on the relevant random variable C'_k given the input tuple (h_k, y_k^in). Therefore, we need to determine the joint distribution p(c'_k, h_k, y_k^in) to design a mutual-information-maximizing lookup table with the information bottleneck method. According to the general chain rule of probabilities and given the independence of y_k^in and h_k,

  p(c'_k, h_k, y_k^in) = \sum_{c_k ∈ GF(2^m)} p(c'_k | h_k, c_k) p(c_k, y_k^in) p(h_k).    (3)

In (3), p(c'_k | h_k, c_k) reflects the multiplication arithmetic c'_k = h_k c_k in GF(2^m). Mathematically, p(c'_k | h_k, c_k) = δ(c'_k + h_k c_k), i.e., it is 1 if c'_k = h_k c_k and 0 otherwise. In the first decoding iteration, p(c_k, y_k^in) is given by p(c_k, t_k) with t_k = y_k^in, since all incoming y_k^in are obtained directly from the quantizer (cf. Section III-A). Feeding the joint distribution (3) to an information bottleneck algorithm with output cardinality |T_mult| delivers the lookup table p(t'_k | h_k, y_k^in), where t'_k ∈ {0, 1, ..., |T_mult| − 1} and I(C'_k; T'_k) → max for the given cardinality |T_mult|.
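A small sketch (ours) of how the joint distribution (3) can be assembled before it is handed to an information bottleneck algorithm; p_ck_y is the joint p(c_k, y_k^in) available from the previous stage, p_h an assumed distribution of the non-zero edge weights h_k, and mul_table a q × q GF multiplication table such as GF4_MUL from the earlier sketch.

```python
# Sketch (ours) of assembling the joint distribution (3).
import numpy as np

def joint_mult_equivalent(p_ck_y, p_h, mul_table):
    """p(c'_k, h_k, y_k^in) according to eq. (3); p(c'|h, c) is 1 only for c' = h * c."""
    q, n_y = p_ck_y.shape
    joint = np.zeros((q, q, n_y))
    for h in range(1, q):                 # edge weights are non-zero field elements
        for c in range(q):
            joint[mul_table[h][c], h, :] += p_ck_y[c, :] * p_h[h]
    return joint

if __name__ == "__main__":
    GF4_MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
    rng = np.random.default_rng(2)
    p_ck_y = rng.random((4, 8)); p_ck_y /= p_ck_y.sum()   # toy joint p(c_k, y_k^in)
    p_h = np.array([0.0, 1 / 3, 1 / 3, 1 / 3])            # uniform non-zero edge weights
    print(joint_mult_equivalent(p_ck_y, p_h, GF4_MUL).sum())   # sums to 1
```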
2) Summation: To account for the summation c'_0 = \sum_{k ≠ 0} c'_k, the convolution equivalent lookup table is depicted in Figure 3 in the box labeled 2. Again, since only unsigned integers t'_k are processed instead of probability vectors, a new t'_0 given (t'_1, t'_2, ..., t'_{d_c−1}) has to be generated which is highly informative about c'_0 = \sum_{k ≠ 0} c'_k. Therefore, we need the joint distribution p(c'_0, t'_1, t'_2, ..., t'_{d_c−1}). Similarly to (3), one finds

  p(c'_0, t'_1, t'_2, ..., t'_{d_c−1}) = \sum_{c'_1, c'_2, ..., c'_{d_c−1}} p(c'_0 | c'_1, c'_2, ..., c'_{d_c−1}) \prod_{k=1}^{d_c−1} p(t'_k, c'_k).    (4)

In (4), p(c'_0 | c'_1, c'_2, ..., c'_{d_c−1}) reflects the sum arithmetic c'_0 = \sum_{k ≠ 0} c'_k in GF(2^m). Mathematically, p(c'_0 | c'_1, c'_2, ..., c'_{d_c−1}) = δ(c'_0 + \sum_{k ≠ 0} c'_k). Feeding the joint distribution (4) to an information bottleneck algorithm with output cardinality |T_conv| delivers a lookup table p(t'_0 | t'_1, t'_2, ..., t'_{d_c−1}), where t'_0 ∈ {0, 1, ..., |T_conv| − 1} and I(C'_0; T'_0) → max for the given cardinality |T_conv|.

As in Section III-A, we note that a two-input decomposition of lookup tables can be applied to reduce the size of the lookup table p(t'_0 | t'_1, t'_2, ..., t'_{d_c−1}).
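A sketch (ours) of the elementary two-input building block behind (4): from the joints p(c'_a, t'_a) and p(c'_b, t'_b) of two partial sums, the joint of their GF(2^m) sum and both indices is formed. An information bottleneck algorithm then compresses the index pair (t'_a, t'_b) into one new index that stays informative about the sum; chaining this block d_c − 2 times realizes the convolution equivalent table.

```python
# Sketch (ours) of the two-input building block used in the decomposition of (4).
import numpy as np

def pairwise_sum_joint(p_a, p_b):
    """p(c'_a + c'_b, t'_a, t'_b) for independent inputs; GF(2^m) addition is XOR."""
    q, n_ta = p_a.shape
    _, n_tb = p_b.shape
    joint = np.zeros((q, n_ta, n_tb))
    for ca in range(q):
        for cb in range(q):
            joint[ca ^ cb] += np.outer(p_a[ca], p_b[cb])   # product of the two joints
    return joint

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    p_a = rng.random((4, 16)); p_a /= p_a.sum()   # toy joint p(c'_a, t'_a)
    p_b = rng.random((4, 16)); p_b /= p_b.sum()   # toy joint p(c'_b, t'_b)
    joint = pairwise_sum_joint(p_a, p_b)
    print(joint.shape, joint.sum())               # (4, 16, 16) and 1.0
```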
3) Multiplication by Inverse Edge Weights c_0 = h_0^{-1} c'_0: The multiplication equivalent by the inverse edge weight h_0^{-1} is also implemented as a mutual-information-maximizing lookup table p(t_0^out | h_0^{-1}, t'_0) and depicted in the box labeled 3 in Figure 3. The joint distribution p(c_0, h_0^{-1}, t'_0) for designing the involved lookup table can be obtained equivalently as explained for the multiplication equivalent by h_0 using (3). The final output t_0^out ∈ {0, 1, ..., |T_prod| − 1} is passed to a connected variable node.
D. Non-Binary Variable Node Operations in Sum-Product Decoding

In sum-product decoding of non-binary LDPC codes, each variable node receives d_v probability vectors from its connected check nodes, where d_v is the degree of the variable node. To generate extrinsic information which is passed back to the check nodes during decoding, d_v − 1 messages from the check nodes and the channel message (2) are multiplied. This results from the equality constraint of a variable node, i.e., all incoming messages are probability vectors for the same codeword symbol.

E. Non-Binary Variable Node Operations from the Information Bottleneck Method

In the following, we consider an arbitrary node which belongs to a codeword symbol c. Here, we propose to replace the described variable node operation with a mutual-information-maximizing lookup table. This lookup table is depicted in Figure 4 and it processes d_v − 1 incoming quantization indices y_k^in received from the check nodes and a channel index y_ch^in from the channel output quantizer to determine one outgoing quantization index t_0^out which is passed back to a check node. Please note that the message y_0^in is excluded at the input since extrinsic information shall be generated. Message generation has to be carried out with the same structure for all other connected edges.

Fig. 4: Information bottleneck graph of lookup table p(t_0^out | y_ch^in, y_1^in, y_2^in, ..., y_{d_v−1}^in) for d_v = 3.

The joint input distribution to design the depicted lookup table in Figure 4 is given by

  p(c, y_ch^in, y_1^in, y_2^in, ..., y_{d_v−1}^in) = p(c) p(y_ch^in | c) \prod_{l=1}^{d_v−1} p(y_l^in | c).    (5)

This joint distribution reflects the aforementioned equality constraint of the variable node. Feeding the joint distribution (5) to an information bottleneck algorithm with output cardinality |T_var| delivers a lookup table p(t_0^out | y_ch^in, y_1^in, y_2^in, ..., y_{d_v−1}^in), where t_0^out ∈ {0, 1, ..., |T_var| − 1} and I(C; T_0^out) → max. The unsigned integer t_0^out is passed back to the connected check node on the target edge in the next decoding iteration.

Finally, we note that a two-input decomposition of lookup tables can be applied to reduce the size of the variable node lookup table.
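A sketch (ours) of the joint distribution (5) for d_v = 3: the product of the channel conditional and the d_v − 1 = 2 check-to-variable conditionals, all conditioned on the same codeword symbol c. The conditionals p(y|c) would be obtained from the by-product joints p(c, t) of the previously constructed lookup tables.

```python
# Sketch (ours) of the variable node joint distribution (5) for d_v = 3.
import numpy as np

def variable_node_joint(p_c, p_ych_c, p_y1_c, p_y2_c):
    """p(c, y_ch, y_1, y_2) = p(c) p(y_ch|c) p(y_1|c) p(y_2|c), eq. (5)."""
    # all conditionals are arrays indexed as p[c, y]
    return np.einsum('c,ca,cb,cd->cabd', p_c, p_ych_c, p_y1_c, p_y2_c)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    q, n = 4, 8
    p_c = np.full(q, 1 / q)                                     # uniform symbol prior
    conds = [rng.random((q, n)) for _ in range(3)]
    conds = [p / p.sum(axis=1, keepdims=True) for p in conds]   # normalize to p(y|c)
    joint = variable_node_joint(p_c, *conds)
    print(joint.shape, joint.sum())                             # (4, 8, 8, 8) and 1.0
```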
F. Discrete Density Evolution for Non-Binary Codes and Fixed Lookup Tables

It is important to note that the distributions of the exchanged messages evolve over the iterations. To cope with this evolution, it is therefore appropriate to design updated lookup tables for each decoding iteration using the appropriate distributions. These joint distributions correspond to the by-products p(x, t) of the applied information bottleneck algorithm. By using these output distributions as inputs of the next applied information bottleneck to construct lookup tables, we inherently track the evolution of these joint input distributions. This is completely analogous to the discrete density evolution scheme for binary LDPC codes described in [8]–[11]. As an interesting consequence, the decoding performance for a considered regular ensemble under the proposed lookup table based decoding scheme can be investigated. We note that performing efficient density evolution for non-binary LDPC codes is an open problem which is inherently tackled by the proposed lookup table construction scheme. However, since this is not the main topic of this paper, we defer further investigation of this interesting finding to a subsequent work.

Finally, we propose to construct all involved lookup tables just once for a fixed design-E_b/N_0. The constructed lookup tables are then stored and applied for all E_b/N_0. Hence, the lookup table construction has to be done only once and offline.
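The offline construction can be pictured as the following loop (a structural sketch of ours; information_bottleneck, build_check_joint and build_variable_joint are placeholders for an information bottleneck algorithm, e.g. from [17], [18], and for routines assembling the joints (3)-(5) from the current message statistics): the by-product joint of each stage becomes the input joint of the next, which is exactly the discrete density evolution described above.

```python
# Structural sketch (ours) of the offline, per-iteration lookup table construction.
def offline_table_construction(p_ck_tk, num_iterations, t_conv, t_var,
                               information_bottleneck, build_check_joint,
                               build_variable_joint):
    """One set of lookup tables per iteration; the evolving message joints are tracked."""
    tables = []
    p_msg = p_ck_tk                      # channel joint p(c_k, t_k) from Section III-A
    for _ in range(num_iterations):
        # Check node side: assemble joints (3)/(4) from the current message joint, compress.
        chk_table, p_chk = information_bottleneck(build_check_joint(p_msg), t_conv)
        # Variable node side: assemble joint (5) from channel and check joints, compress.
        var_table, p_msg = information_bottleneck(build_variable_joint(p_ck_tk, p_chk), t_var)
        tables.append((chk_table, var_table))
    return tables                        # stored once and reused for all Eb/N0
```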
IV. RESULTS AND DISCUSSION

In this section, we present and discuss results from a bit error rate simulation of an exemplary non-binary LDPC code over the Galois field GF(4). The code was taken from [20] and has length N_v = 816, code rate R_c = 0.5, variable node degree d_v = 3 and check node degree d_c = 6.

The obtained bit error rates for sum-product decoding using the FHT [3], log-max decoding [5] and the proposed information bottleneck based decoding are depicted in Figure 5. The channel quantizer described in Section III-A was used with output cardinality |T_chan| = 128, corresponding to 7 bit quantization. For the sum-product and the log-max decoder, the symbol probabilities p(c_k|t_k) were used for decoding. In contrast, the information bottleneck decoder worked directly on the quantization indices t_k.

Fig. 5: Bit error rate performance (over E_b/N_0 in dB) of our proposed decoder and reference systems with properties summarized in Table II and i_max = 40.

All decoders performed a maximum of i_max = 40 iterations. The information bottleneck decoder was constructed for a design-E_b/N_0 of 1.5 dB. The most important parameters of the applied decoders are summarized in Table II for a quick overview.

In the information bottleneck decoder, only integer-valued indices from the sets T_mult and T_var are used as messages instead of probability vectors. In the literature, the precision of the exchanged probability vectors is often provided in bits per field element [21]. Thus, for a fair comparison, the cardinalities summarized in Table III correspond to a maximum of 3 bits per field element (cf. Table II, column 4).
TABLE II: Simulation parameters

  decoder       check node operation   variable node operation   bits per field element of exchanged messages   check node computational complexity
  sum-product   FHT                    multiplication            64 bit                                         O(d_c q (log_2 q + d_c))
  log-max       max*() [5]             addition                  64 bit                                         O(d_c q^2)
  proposed      lookup table           lookup table              3 bit                                          O(d_c)

In Figure 5 the sum-product algorithm serves as a benchmark with the best bit error rate performance, but at the same time it has the highest computational complexity (cf. Table II). Although all applied operations in the information bottleneck decoder are simple lookups, the decoder performs only 0.15 dB worse than the benchmark. Despite the fact that the log-max decoder uses conventional arithmetic and double-precision message representation, it is clearly outperformed by the proposed information bottleneck decoder. We emphasize that for the non-binary case the applied lookup tables completely replace all arithmetical operations such as the convolution of probability vectors and multiplication. The processing of probability vectors simplifies to lookups of scalar integers in pre-generated tables. During our work, we noticed that the cardinalities in the information bottleneck algorithms required to obtain performance close to sum-product decoding grow with increasing Galois field order. For the considered GF(4) LDPC code, the amount of memory that is required to store the lookup tables is provided in Table III. It can be seen that for the considered decoder, 215.00 kilobytes (kB) are needed per iteration. This amount of memory can be justified by the huge savings in computational complexity.

TABLE III: Total memory amount of lookup tables in the information bottleneck decoder per iteration.

  Lookup table                  cardinality       table size
  Check node (boxes 1 and 3)    |T_mult| = 256    3.04 kB
  Check node (box 2)            |T_conv| = 512    129.02 kB
  Variable node                 |T_var| = 512     82.94 kB
  Total                                           215.00 kB
V. CONCLUSION

In this paper, we leveraged the information bottleneck's ability to preserve relevant information to overcome the main computational burdens of non-binary LDPC decoding. Motivated by the preliminary results for binary LDPC codes, we have presented a complete framework to design check node and variable node operations which replace all arithmetic operations in non-binary LDPC decoders using only lookups. We provided a step-by-step conversion of the conventional sum-product algorithm, resulting in an information bottleneck decoder which performs only 0.15 dB worse than the sum-product algorithm and outperforms the log-max algorithm. In addition, a discrete density evolution scheme for the proposed decoding method was sketched. Future work should investigate possibilities of efficient lookup table implementation for the proposed decoding scheme. In our opinion, the combination of promising bit error rate curves and simple lookup operations already motivates further work on the information bottleneck principle in decoding of non-binary LDPC codes. Such a simple decoding scheme could drastically increase the applicability of non-binary LDPC codes in many practical scenarios.

REFERENCES

[1] M. C. Davey and D. MacKay, "Low-density parity check codes over GF(q)," IEEE Communications Letters, vol. 2, no. 6, pp. 165–167, June 1998.
[2] V. Savin, "Min-max decoding for non binary LDPC codes," in Proc. IEEE International Symposium on Information Theory (ISIT), July 2008, pp. 960–964.
[3] D. Declercq and M. Fossorier, "Decoding algorithms for nonbinary LDPC codes over GF(q)," IEEE Transactions on Communications, vol. 55, no. 4, pp. 633–643, 2007.
[4] D. Declercq and M. Fossorier, "Extended minsum algorithm for decoding LDPC codes over GF(q)," in Proc. IEEE International Symposium on Information Theory (ISIT), 2005, pp. 464–468.
[5] H. Wymeersch, H. Steendam, and M. Moeneclaey, "Log-domain decoding of LDPC codes over GF(q)," in Proc. IEEE International Conference on Communications (ICC), vol. 2, June 2004, pp. 772–776.
[6] S.-Y. Chung et al., "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Communications Letters, vol. 5, no. 2, pp. 58–60, 2001.
[7] J. Lewandowsky and G. Bauch, "Trellis based node operations for LDPC decoders from the information bottleneck method," in Proc. 9th International Conference on Signal Processing and Communication Systems (ICSPCS'15), Cairns, Australia, 2015, pp. 1–10.
[8] F. J. C. Romero and B. M. Kurkoski, "LDPC decoding mappings that maximize mutual information," IEEE Journal on Selected Areas in Communications, vol. 34, no. 9, pp. 2391–2401, Sep. 2016.
[9] M. Meidlinger et al., "Quantized message passing for LDPC codes," in Proc. 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, Nov. 2015, pp. 1606–1610.
[10] J. Lewandowsky and G. Bauch, "Information-optimum LDPC decoders based on the information bottleneck method," IEEE Access, vol. 6, pp. 4054–4071, 2018.
[11] M. Stark, J. Lewandowsky, and G. Bauch, "Information-optimum LDPC decoders with message alignment for irregular codes," in Proc. IEEE Global Communications Conference (GLOBECOM 2018), accepted for publication, Abu Dhabi, United Arab Emirates, Dec. 2018.
[12] M. Stark, J. Lewandowsky, and G. Bauch, "Information-bottleneck decoding of high-rate irregular LDPC codes for optical communication using message alignment," Applied Sciences, vol. 8, no. 10, 2018.
[13] R. Ghanaatian et al., "A 588-Gb/s LDPC decoder based on finite-alphabet message passing," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 26, no. 2, pp. 329–340, Feb. 2018.
[14] J. Lewandowsky et al., "Design and evaluation of information bottleneck LDPC decoders for software defined radios," in Proc. 12th International Conference on Signal Processing and Communication Systems (ICSPCS), accepted for publication, Cairns, Australia, Dec. 2018.
[15] N. Tishby, F. C. Pereira, and W. Bialek, "The information bottleneck method," in Proc. 37th Allerton Conference on Communication and Computation, 1999.
[16] R. A. Carrasco, Non-Binary Error Control Coding for Wireless Communication and Data Storage, M. Johnston, Ed. Chichester, U.K.: Wiley, 2008.
[17] N. Slonim, "The information bottleneck: Theory and applications," Ph.D. dissertation, Hebrew University of Jerusalem, 2002.
[18] S. Hassanpour et al., "On the relation between the asymptotic performance of different algorithms for information bottleneck framework," in Proc. IEEE International Conference on Communications (ICC), Paris, France, Jul. 2017, pp. 1–6.
[19] J. Lewandowsky, M. Stark, and G. Bauch, "Information bottleneck graphs for receiver design," in Proc. IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, Jul. 2016, pp. 2888–2892.
[20] D. J. C. MacKay, "Encyclopedia of sparse graph codes." [Online]. Available: http://www.inference.phy.cam.ac.uk/mackay/codes/data.html
[21] H. Wymeersch, H. Steendam, and M. Moeneclaey, "Computational complexity and quantization effects of decoding algorithms for non-binary LDPC codes," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 4, May 2004, pp. iv–iv.