Managing Noise and Interference Separately - Multiple Access Channel Decoding using Soft GRAND

Amit Solomon, RLE, MIT, Cambridge, MA 02139, USA. Email: amitsol@mit.edu
Ken R. Duffy, Hamilton Institute, Maynooth University, Ireland. Email: ken.duffy@mu.ie
Muriel Médard, RLE, MIT, Cambridge, MA 02139, USA. Email: medard@mit.edu
2021 IEEE International Symposium on Information Theory (ISIT) | 978-1-5386-8209-8/21/$31.00 ©2021 IEEE | DOI: 10.1109/ISIT45174.2021.9517890

Abstract—Two main problems arise in the Multiple Access Channel (MAC): interference from different users, and additive channel noise. Maximum A-Posteriori (MAP) joint decoding or successive interference cancellation are known to be capacity-achieving for the MAC when paired with appropriate codes. We extend the recently proposed Soft Guessing Random Additive Noise Decoder (SGRAND) to guess, using soft information, the effect of noise on the sum of users' transmitted codewords. Next, we manage interference by applying ZigZag decoding over the resulting putative noiseless MAC to obtain candidate codewords. Guessing continues until the candidate codewords thus obtained pertain to the corresponding users' codebooks. This MAC SGRAND decoder is a MAP decoder that requires no coordination between users, who can use arbitrary moderate-redundancy short-length codes of different types and rates.

Index Terms—MAC, MAP decoding, GRAND, ZigZag decoding.

I. INTRODUCTION

The non-orthogonal Multiple Access Channel (MAC) is a commonly used communications model [1]. In the MAC, users transmit simultaneously over a shared channel. The users' modulated codewords are added to each other and to channel noise. The receiver must thus resolve two effects: interference and noise. The approach for doing so that was first proposed by Ahlswede [2] and by Liao [3], which attempts to solve the problems of interference and noise concurrently, is the following: when decoding one user's message, the receiver treats undecoded modulated codewords as noise, with attendant effects on that user's rate [1]. It can achieve capacity, i.e. reach the boundary of the Cover-Wyner region [1], with some level of coordination in users' transmission. Types of coordination include separating different users into orthogonal channels in time through Time Division Multiple Access (TDMA), in frequency through Frequency Division Multiple Access (FDMA), or through code selection in Code Division Multiple Access (CDMA) [4], possibly with rate splitting in Rate Splitting Multiple Access (RSMA) [5].

The approach we propose requires no coordination whatsoever from different users. Different users can transmit using different codes, lengths, rates, etc. In effect, the method lets users operate as if they use orthogonal channels. Its core is to deal separately with additive interference and with noise.

Let us consider the former first. If we have interference in the absence of noise, the resulting noiseless MAC, sometimes termed additive MAC, has a capacity region, for any given input discrete signal alphabets, that matches that of a TDMA channel [6]–[10]. When noise is absent or negligible, one may instantiate the noiseless MAC by the use of ZigZag decoding [11]. In this case, users send packets, possibly uncoded. Each packet corresponds to a codeword. We can expect distinct packet transmissions to experience slightly different delays. Such differences will tend to happen naturally in the MAC without further coordination, or we may introduce them. When packets are transmitted multiple times with different delays over a noiseless channel, the receiver observes different collision events, leading to different summations of packets in the channel, with different time offsets. Using ZigZag decoding, the receiver can then perform interference cancellation by considering the offsets as providing different equations. An example of ZigZag decoding is shown in Figure 1. This decoding method has been shown to, in effect, resemble orthogonal channels for different users in the noiseless regime [12], [13].

Let us now envisage the issue of noise in the channel. A natural candidate for dealing with noise is the recently proposed Guessing Random Additive Noise Decoder (GRAND) [14]. GRAND-based decoders attempt to identify the additive noise that corrupted the channel input, rather than identifying the channel input directly. From most likely to least likely, based on a channel model and soft information if it is available, GRAND schemes sequentially invert putative noise effect sequences from the received signal and query whether the remainder is a member of the codebook. The first time a member of the codebook is identified, GRAND terminates and returns that word as its output. GRAND-based decoders are code-agnostic and noise-centric. If noise is additive and GRAND's noise effect query order is consistent with the channel, it provably identifies a Maximum Likelihood (ML) decoding.

When paired with Random Linear Codes (RLCs), GRAND has been shown to be capacity-achieving in the hard-detection Single-Input Single-Output (SISO) setting [14]. Several GRAND variants for SISO channels have been proposed that use soft information: 1) Symbol Reliability GRAND (SRGRAND), which uses one bit of soft information per channel output [15], [16]; 2) Soft GRAND (SGRAND), a soft detection variant of GRAND that uses (potentially) real-valued soft information per channel symbol, and is an ML decoder in the soft detection SISO scenario [17]; and 3) Ordered Reliability Bits GRAND (ORBGRAND), which serves as a middle ground between
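To make the GRAND principle described above concrete, here is a minimal hard-detection sketch for a single-user binary symmetric channel. The parity-check matrix of a small (7,4) Hamming code is used purely for illustration; this is the basic query loop that GRAND variants build on, not the paper's soft-detection MAC decoder.

```python
import itertools
import numpy as np

# Illustrative parity-check matrix of a (7,4) Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def in_codebook(word):
    """Codebook membership test for a linear code: zero syndrome."""
    return not np.any((H @ word) % 2)

def grand_hard(y, max_queries=2**7):
    """Hard-detection GRAND over a BSC: invert putative noise sequences
    from most likely (fewest bit flips) to least likely, and stop at the
    first codebook member found."""
    n = len(y)
    queries = 0
    for weight in range(n + 1):              # fewer flips = more likely
        for flips in itertools.combinations(range(n), weight):
            e = np.zeros(n, dtype=np.uint8)
            e[list(flips)] = 1
            candidate = y ^ e                # remove the putative noise effect
            queries += 1
            if in_codebook(candidate):
                return candidate, queries
            if queries >= max_queries:
                return None, queries         # abandonment
    return None, queries

# A codeword of the code above, corrupted by a single bit flip:
c = np.array([1, 0, 1, 1, 0, 1, 0], dtype=np.uint8)
assert in_codebook(c)
y = c.copy()
y[2] ^= 1
decoded, q = grand_hard(y)
```

Because the code has minimum distance 3, the single flipped bit is the unique lowest-weight noise sequence whose removal yields a codeword, so the query loop recovers `c`.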
the two, as it uses ⌈log₂ n⌉ soft information bits per channel symbol [18]. Abbas et al. demonstrated hardware architectures for both GRAND [19] and ORBGRAND [20].

In the spirit of GRAND decoders, we wish to guess the effect of the noise added by the channel, and remove that effect to obtain an equivalent noise-free MAC channel. GRAND-based decoders exploit the fact that the entropy of the noise is substantially lower than the entropy of the inputs, hence guessing the noise effect is easier than guessing the codeword. Treating signals from the modulated codewords of undecoded users as noise, as is done in the Ahlswede and Liao frameworks, is undesirable for GRAND, since those signals would generally have high entropy. We instead tailor SGRAND to fit the MAC in order to remove the effects of the noise, followed by ZigZag decoding, described earlier. We prove that the suggested algorithm is a Maximum A-Posteriori (MAP) decoder, which is known to be optimal.

II. MODEL AND BACKGROUND

A. Definitions and Notation

Let x, ~x, X, and ~X denote a scalar, a vector, a matrix, and a random vector, respectively. All vectors are row vectors unless stated otherwise, and x_i denotes the i-th element of ~x. Composition of functions is denoted by ∘, i.e. f ∘ g(x) = f(g(x)). A non-subscript/superscript i denotes √−1.

B. System Model

We consider a standard MAC model where s transmitters attempt to send modulated codewords of length n_i, i.e. the i-th transmitter transmits ~x_i ∈ C^{n_i}, to be sent over an analog channel. To focus on the novel aspects of the work, we assume that n_i = n for all i, but this assumption can be dropped in practice. A codebook is the set of all possible codewords, and is denoted by X_i ⊆ C^n. The j-th element of ~x_i may take values from X_i^j, where X_i ⊆ X_i^1 × … × X_i^n. All messages are added together in the channel into ~x = Σ_{i=1}^{s} ~x_i. The codebook of the sum is denoted by X ⊆ C^n. Similarly, the j-th element of X is an element of X^j, where X ⊆ X^1 × … × X^n. Note that X^j is different from the individual codebooks X_i^j, and specifically is of a higher constellation. For example, when s = 2 transmitters use Binary Phase Shift Keying (BPSK) modulation with a phase shift of π/2 radians between them, ~x has 4 constellation points, which constitute a Quadrature Phase Shift Keying (QPSK) modulation [21], as shown in Figure 1. In addition, the memoryless channel adds random additive noise ~Z to the channel input, resulting in channel output ~Y = ~x + ~Z. The noise is independent of the transmitted information.

A soft decoder is a mapping C^n → X_1 × … × X_s. A soft detection MAP decoder outputs

    arg max_{~x_1,…,~x_s ∈ X_1 × … × X_s} p(~x_1, …, ~x_s | ~y^(1), …, ~y^(s)),

where p denotes the likelihood function. A demodulator is a function θ : C^n → X^1 × … × X^n that returns the MAP modulated version of the channel input, namely

    arg max_{x_1 ∈ X^1, …, x_n ∈ X^n} ∏_{i=1}^{n} p(x_i | y_i),

where the maximum is taken on each element individually, and not on X. Note that an ML demodulation does not suffice, as ~x may not be uniformly distributed, even if the {~x_i}_i are, depending on the modulation.

C. Transmission delays

In ZigZag decoding, transmitters send the same codewords, or, equivalently, packets, s times. As different users are not coordinated and transmit codewords at uncoordinated times, there is a delay between transmissions of different messages. We denote the delay of the i-th message in the j-th transmission, relative to the first transmission, by d_i^(j). A negative delay means that the first codeword is delayed with respect to the j-th message. A delay operator, denoted by D(~x_i, d_i^(j)), delays ~x_i by d_i^(j) units and pads it with zeros so that all messages in the same transmission are of the same length. By definition, d_1^(j) = 0 for all j. The first input ~x_i of D(~x_i, d_i^(j)) may be omitted if it is clear from context which codeword is delayed. Elements of the j-th transmission are denoted by a (j) superscript. The channel input of the j-th transmission is denoted by ~x^(j) = Σ_{i=1}^{s} D(~x_i, d_i^(j)). The channel output in the j-th transmission is ~Y^(j) = ~x^(j) + ~Z^(j), where the {~Z^(j)}_j are mutually independent, as the channel is memoryless. Note that different elements of ~x^(j) may have different modulations, as a result of edge effects. This phenomenon is illustrated in Figure 1, where both transmitters use BPSK modulation with a phase shift of π/2 radians between them. The second message is delayed by one/two time units in the first/second transmission, respectively. Observing ~x^(1), we notice that x_{1,1} ∈ {i, −i}, while x_{1,2} + x_{2,1} ∈ {1+i, 1−i, −1+i, −1−i}. As is common in the MAC literature, we assume that the channel responses, including delays, are perfectly known to the receiver. Kazemi et al. [22] have studied the effect of imperfectly known channel responses on ZigZag decoding.

D. SGRAND

In the soft detection setting, SGRAND provides a means to generate error effect sequences in order, on the fly, from most likely to least likely, for any arbitrary memoryless channel [17]. In Section III-B, we modify SGRAND to rank-order error sequences on the fly for the MAC. To check for codebook membership, we use ZigZag decoding to cancel interference, as we explain in Section III-A. We then check whether each of the individual putative codewords is a member of its codebook. An efficient way of doing so for linear codes is to test whether the syndrome of said remainder is zero [23, Chapter 2].

III. MAC SGRAND

A. Noise free transmission

In order to better understand the decoding strategy proposed in this paper, we first discuss the case where no noise is added to the channel input, i.e. ~Z^(j) = ~0, so ~Y^(j) = ~x^(j). The goal is to recover the transmitted channel inputs ~x_1, …, ~x_s. Decodability is not guaranteed in this case: we need to ensure that the mapping from ~x_1, …, ~x_s to ~x^(1), …, ~x^(s) is invertible.
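The delay-and-pad operator D(~x_i, d_i^(j)) and the channel input ~x^(j) of Section II-C can be sketched as follows. Function names are hypothetical, only non-negative delays are handled for brevity, and the symbols mirror the Figure 1 setup (user 1 on {i, −i}, user 2 on {1, −1}, delay 1).

```python
import numpy as np

def delay(x, d, total_len):
    """Sketch of the delay operator D(x, d): shift the modulated codeword
    x by d >= 0 time units and zero-pad so every message in the
    transmission has length total_len."""
    out = np.zeros(total_len, dtype=complex)
    out[d:d + len(x)] = x
    return out

def channel_input(msgs, delays):
    """x^(j) = sum_i D(x_i, d_i^(j)): the delayed messages add in the channel."""
    total_len = max(d + len(x) for x, d in zip(msgs, delays))
    return sum(delay(x, d, total_len) for x, d in zip(msgs, delays))

# Illustrative symbols: BPSK with a pi/2 phase shift for user 1,
# plain BPSK for user 2, second message delayed by one time unit.
x1 = np.array([1j, -1j, 1j])
x2 = np.array([1, 1, -1], dtype=complex)
xj = channel_input([x1, x2], [0, 1])
# The edge symbols stay on a single user's constellation, while the
# middle symbols land on the effective QPSK constellation of the sum.
```

Here `xj` equals `[1j, 1-1j, 1+1j, -1]`: the first sample is user 1 alone, the last is user 2 alone, and the collided middle samples take QPSK values, exactly the edge-effect phenomenon described above.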

 2603
 Authorized licensed use limited to: Maynooth University Library. Downloaded on January 19,2022 at 11:29:40 UTC from IEEE Xplore. Restrictions apply.
Fig. 1: Multiple Access Channel example. There are s = 2 transmitters, which send their messages twice. The delays of the second transmitter in the first/second transmissions are 1 and 2, respectively. The modulated messages are ~x_1 = (x_{1,1}, x_{1,2}, x_{1,3}) and ~x_2 = (x_{2,1}, x_{2,2}, x_{2,3}). Both messages are delayed and summed in the channel into ~x^(1) = (x_{1,1}, x_{1,2} + x_{2,1}, x_{1,3} + x_{2,2}, x_{2,3}) and ~x^(2) = (x_{1,1}, x_{1,2}, x_{1,3} + x_{2,1}, x_{2,2}, x_{2,3}). Upon receiving a noiseless version of ~x^(1), ~x^(2), the decoder, which knows the delays, can recover ~x_1, ~x_2, as suggested by the blue arrows. In the example, the first/second transmitters use BPSK modulations with a phase shift of π/2 radians, shown as the red/blue points, respectively. The effective constellation of the sum is a QPSK modulation, illustrated by the black points. The column stack vectors and the participation matrix A are also shown in the figure.
For example, consider a case with s = 2 transmitters and two transmissions. Suppose that in both transmissions the delay between the two transmitters is the same. In this case, the receiver gets in the second transmission a message that is linearly dependent on the first message, hence it cannot recover ~x_1, ~x_2. To define the problem formally, we define ~x^col, ~y^col as column stacks of ~x_1, …, ~x_s and of all the transmissions of ~y, respectively, i.e. ~x^col = (~x_1, …, ~x_s)^T, ~y^col = (~y^(1), …, ~y^(s))^T. The col superscript indicates a column stack vector. The length of ~y^col, denoted by n', satisfies

    n' = sn + \sum_{j=1}^{s} \Big( \max_{i : d_i^{(j)} \ge 0} d_i^{(j)} - \min_{i : d_i^{(j)} < 0} d_i^{(j)} \Big).

B. MAC SGRAND

As explained in Section II, channel inputs are not necessarily distributed uniformly. We first prove that determining the MAP noise sequence ~z^col is equivalent to finding the MAP channel input ~x^col; then we present MAC SGRAND, which finds the MAP error sequence, hence it is an optimal decoder.

Theorem III.1. Let ~x^col ∈ X^col be a transmitted message on an additive memoryless MAC, such that ~Y = A · ~x^col + ~Z^col, and A, ~x^col are decodable (as defined in Section III-A). Then finding

    arg max_{~x^col ∈ X^col} p(~x^col | ~y^col)

is equivalent to finding

    arg max_{~z^col : ~y^col − ~z^col ∈ Y^col} p(~z^col | ~y^col).
We first define for each symbol i of ~y^col the Ordered Symbol Indices (OSI) as a vector that rank-orders, from most likely to least likely, the error elements that may have affected the i-th demodulated symbol. Formally speaking, ~s^i ∈ C^{|Y^{col,i}|} is an OSI vector, where the j-th element satisfies θ(y_i^col) − s_j^i ∈ Y^{col,i}. A vector ~s^i is an OSI of the i-th symbol if it satisfies the following: 1) s_1^i = 0; 2) p(θ(y_i^col) − s_j^i | y_i^col) ≥ p(θ(y_i^col) − s_l^i | y_i^col) for all j < l; and 3) ~s^i does not contain duplicate elements. The OSI of each symbol is a function of y_i^col, the modulation, and the channel statistics, but this dependency is left implicit for notational simplicity. An OSI is unique almost surely for continuous memoryless channels. To demonstrate the definition of the OSI, let us consider the example of Figure 1, assuming the transmission power is 1 and the channel is an Additive White Gaussian Noise (AWGN) channel. Suppose that y_i^col = √(3/2) + √(1/2)i, and y_i^col does not suffer from an edge effect, i.e. Y^{col,i} = {1+i, 1−i, −1−i, −1+i}. Then θ(y_i^col) = 1+i, and ~s^i = (0, 2i, 2, 2+2i), so θ(y_i^col) − ~s^i = (1+i, 1−i, −1+i, −1−i), which lists all possible channel inputs, from most likely to least likely. We define the previous function, denoted by φ(·), as a function that returns the previous element in the OSI vector, namely

    φ(s_j^i) = s_{j−1}^i if j > 1, and φ(s_j^i) = ⊥ if j = 1,

and φ^{−1}(·) as the inverse of φ(·), where φ^{−1}(s^i_{|Y^{col,i}|}) = ⊥. For example, φ(θ(y_i^col)) = ⊥, as θ(y_i^col) is the MAP demodulated symbol.

Using the definitions of the OSI, φ(·) and φ^{−1}(·), pseudocode for MAC SGRAND with ABandonment (SGRANDAB) is given in Algorithm 1. We prove that Algorithm 1 is a MAP decoder for a memoryless MAC.

Algorithm 1: MAC SGRANDAB
Input: ~y^col, A, b, X^col  ▷ max. number of queries b; SGRAND: b = ∞
Output: ~x^col
1: g ← 0  ▷ g counts queries performed
2: S ← {~0^T}  ▷ S contains candidate error vectors
3: while g ≤ b do
4:    ~z^col ← arg max_{~v^col ∈ S} p(θ(~y^col − ~v^col) | ~y^col)
5:    ~ŷ^col ← θ(~y^col − ~z^col)
6:    S ← S \ {~z^col}
7:    g ← g + 1
8:    ~x^col ← ZigZagDecoder(~ŷ^col, A, X^col)
9:    if ~x^col ≠ ⊥ then
10:      return  ▷ MAP codeword found
11:   else  ▷ update S for the next query
12:      if ~z^col = ~0^T then
13:         j* ← 0
14:      else
15:         j* ← max{j : z_j^col ≠ 0}  ▷ j* > 0
16:      end if
17:      j ← j*
18:      while j ≤ n' do
19:         if j > 0 then
20:            if φ^{−1}(z_j^col) ≠ ⊥ then
21:               z_j^col ← φ^{−1}(z_j^col)
22:               S ← S ∪ {~z^col}
23:               z_j^col ← φ(z_j^col)
24:            end if
25:         end if
26:         j ← j + 1
27:      end while
28:   end if
29: end while
30: ~x^col ← ⊥  ▷ codeword not found within b queries; failure
31: return

procedure ZigZagDecoder(~ŷ^col, A, X^col)
   if ∃ ~x^col s.t. A · ~x^col = ~ŷ^col and ~x^col ∈ X^col then
      return ~x^col
   else
      return ⊥
   end if
end procedure

C. Main Theorem

Theorem III.2. Algorithm 1 satisfies the following, for an additive memoryless MAC:
1) Correctness: Error vectors are queried in non-increasing order of likelihood, hence a returned codeword is a MAP codeword.
2) Progress: Each error vector is queried at most once.

Proof. The proof carries a similar flavor to the SISO SGRAND proof [17]. Let the parent of ~z^col ≠ ~0^T, denoted by π(~z^col), be defined as follows:

    [π(~z^col)]_j = φ(z_j^col) if j = j*, and [π(~z^col)]_j = z_j^col otherwise,

where j* is defined as in Algorithm 1. A vector ~z^col is called a child of π(~z^col). Note that a child is not unique, while a parent is. Moreover, while every non-zero error vector has a unique parent, not all vectors have children: an error vector where j* = n' and φ^{−1}(z_{j*}) = ⊥ has no children. Any error vector has up to n' children. Observe that Algorithm 1 adds to S all the children of ~z^col, after removing it from S. To prove the theorem, we first prove the following lemma:

Lemma III.3. Let ~z^col ≠ ~0^T be an error vector. Define m ∈ {1, …, n'} to be the index of the codebook with the most elements, i.e. m = arg max_{i ∈ {1,…,n'}} |Y_i^col|. The following hold for a memoryless MAC:
1) π(~z^col) ≠ ~z^col.
2) p(θ(~y^col − ~z^col) | ~y^col) ≤ p(θ(~y^col − π(~z^col)) | ~y^col).
3) π ∘ … ∘ π(~z^col) = ~0^T after at most |Y_m^col| · j* compositions.

Proof. Let ~z^col ≠ ~0^T, and let j* (with respect to ~z^col) be defined as in Algorithm 1. Note that j* > 0. We prove each of the
properties:
1) Let ~z^col ≠ ~0^T. By definition, [π(~z^col)]_{j*} = φ(z_{j*}^col), and z_{j*}^col ≠ 0. By property 3 of the OSI, φ(z_{j*}^col) ≠ z_{j*}^col.
2) Let q_j(w) = p(θ(y_j^col − w) | y_j^col) and q(~w) = ∏_{j=1}^{n'} q_j(w_j). The claim is established if there exists j such that q_j(z_j^col) = 0. Otherwise, assume q(~z^col) > 0. Then q(~z^col) = q_{j*}(z_{j*}^col) ∏_{j≠j*} q_j(z_j^col), and q(π(~z^col)) = q_{j*}(φ(z_{j*}^col)) ∏_{j≠j*} q_j(z_j^col). Observe that q(π(~z^col)) / q(~z^col) = q_{j*}(φ(z_{j*}^col)) / q_{j*}(z_{j*}^col) ≥ 1, by the definition of the OSI, which completes the proof.
3) Let ~z^col ≠ ~0^T, and let j* be defined as in Algorithm 1. Let r_{j*} denote the position of z_{j*}^col in the OSI. After applying π at most r_{j*} ≤ |Y_m^col| times, j* decreases by at least 1. By iterating this argument at most j* times, we get the zero vector.

We now prove Theorem III.2. Note that in order to prove Property 2, it is enough to show that an error vector can be added to S at most once. We prove the theorem by induction on the number of queries performed g, which is evaluated at the end of the while loop. To prove the base case of g = 1, we notice that S is initialized to contain only ~0^T, which is the most likely error vector, so Property 1 is satisfied. We also notice that in the while loop, ~0^T is removed from S, and all of its children, which are different from ~0^T (by Lemma III.3.1) and from one another, are added, so Property 2 is satisfied. Assume now that the theorem holds after g queries; we establish that it holds after g + 1 queries. Suppose the algorithm queries the error vector ~z^col in query number g. To prove Property 2, suppose by contradiction that ~v^col, an error vector that was previously added to S, is added to S for the second time. An error vector is added to S only when its unique parent (by Lemma III.3.1) is queried, so ~z^col = π(~v^col). As a result, we conclude that ~z^col has been queried more than once within g queries, which contradicts the induction assumption. To prove Property 1, suppose by contradiction that it is not satisfied, and there exists another error vector ~v^col that has not been queried before and is more likely than ~z^col, namely p(θ(~y^col − ~z^col) | ~y^col) < p(θ(~y^col − ~v^col) | ~y^col). If there exists more than one such vector, let us pick the most likely one. Notice that ~v^col ∉ S must hold for each of the first g + 1 queries; otherwise, Algorithm 1 would pick ~v^col as its queried error vector. If π(~v^col) ∈ S ever held, then ~v^col would be contained in S at some point, since Algorithm 1 adds the children of the queried noise vector to S, which contradicts our assumption. Therefore, assume that π(~v^col) ∉ S at any point. By Lemma III.3.2, we know that p(θ(~y^col − π(~v^col)) | ~y^col) ≥ p(θ(~y^col − ~v^col) | ~y^col). If p(θ(~y^col − π(~v^col)) | ~y^col) > p(θ(~y^col − ~v^col) | ~y^col), we contradict the assumption that ~v^col is the most likely noise sequence that should have been queried in query number g + 1. Otherwise, assume that p(θ(~y^col − π(~v^col)) | ~y^col) = p(θ(~y^col − ~v^col) | ~y^col). By repeating this argument at most |Y_m^col| · j* times, we conclude that the zero vector was never added to S (by Lemma III.3.3), which is false, since S is initialized to contain only the zero vector.

D. Complexity

At each iteration of the algorithm, we remove the most likely vector from S, perform one membership test, and add at most n' new vectors to S, hence |S| = O(n'g) after g queries. An efficient way of implementing S for these operations is via a Max-Heap [25]. The complexity of the membership test, denoted by f(n', s, X^col), is twofold: first, Gaussian elimination [24] has to be performed, and then a codebook membership test. For linear codes, a syndrome check is a simple mechanism for checking codebook membership [23], which boils down to matrix multiplication, whose complexity depends on the implementation, e.g. [26], [27]. When S is implemented as a Max-Heap, the complexity after g queries is O(n' g f(n', s, X^col) log(n'g)). Hence, the worst-case complexity is O(n' b f(n', s, X^col) log(n'b)), where b is the querying threshold. It is worth noting that in practice we expect the average case to be significantly lower, as we can anticipate that the total number of queries until a codeword is found is significantly lower than b. Also worth mentioning is the fact that as the Signal to Noise Ratio (SNR) improves, the expected number of queries until a codeword is found decreases, as is the case with all GRAND-based decoders. MAC SGRAND is therefore suitable for high-rate short-length codes in the high-SNR regime.

IV. CONCLUSION AND DISCUSSION

In this paper, we presented MAC SGRAND, a MAP algorithm for noisy MAC channels, which is known to be optimally accurate. It requires no coordination between users. This algorithm opens a new route for handling the MAC: one that addresses interference cancellation and noise separately, where a dedicated algorithm that specializes in each of the problems is applied (SGRAND for the noise, ZigZag decoding for interference cancellation), without user coordination.

For future work, we plan to show a simple capacity-achieving scheme which involves MAC SGRAND. In particular, non-capacity-achieving approaches for transmitting over the MAC are based on ALOHA and related protocols [28], where transmitters send their packets sporadically, without any coordination. Future directions would consider an ALOHA-based protocol, where users transmit packets whenever they are available. When one user is idle, a single user may transmit without interference. Unlike traditional ALOHA, which relies on backoff, a MAC SGRAND based scheme may request that colliding users collide anew.

REFERENCES

[1] T. M. Cover, Elements of Information Theory. John Wiley & Sons, 1999.
[2] R. Ahlswede, "Multi-way communication channels," in Second International Symposium on Information Theory: Tsahkadsor, Armenia, USSR, Sept. 2-8, 1971, 1973.
[3] H. H.-J. Liao, "Multiple access channels," Hawaii Univ. Honolulu, Tech. Rep., 1972.
[4] R. Gallager, "A perspective on multiaccess channels," IEEE Transactions on Information Theory, vol. 31, no. 2, pp. 124–142, 1985.
[5] A. J. Grant, B. Rimoldi, R. L. Urbanke, and P. A. Whiting, "Rate-splitting multiple access for discrete memoryless channels," IEEE Transactions on Information Theory, vol. 47, no. 3, pp. 873–890, 2001.

[6] S. Ray, M. Medard, and J. Abounadi, “Random coding in noise-free
 multiple access networks over finite fields,” in IEEE Global Telecom-
 munications Conference (IEEE Cat. No.03CH37489), vol. 4, 2003, pp.
 1898–1902 vol.4.
 [7] P. Mathys, “A class of codes for a t active users out of n multiple-access
 communication system,” IEEE Transactions on Information Theory,
 vol. 36, no. 6, pp. 1206–1219, 1990.
 [8] B. L. Hughes and A. B. Cooper, “Nearly optimal multiuser codes for
 the binary adder channel,” IEEE Transactions on Information Theory,
 vol. 42, no. 2, pp. 387–398, 1996.
 [9] G. H. Khachatrian and S. S. Martirossian, “Code construction for
 the t-user noiseless adder channel,” IEEE Transactions on Information
 Theory, vol. 44, no. 5, pp. 1953–1957, 1998.
[10] S.-C. Chang and E. Weldon, "Coding for t-user multiple-access channels," IEEE Transactions on Information Theory, vol. 25, no. 6, pp. 684–691, 1979.
[11] S. Gollakota and D. Katabi, “Zigzag decoding: combating hidden
 terminals in wireless networks,” in Proceedings of the ACM SIGCOMM
 2008 conference on Data communication, 2008, pp. 159–170.
[12] A. ParandehGheibi, J. K. Sundararajan, and M. Médard, “Collision
 helps-algebraic collision recovery for wireless erasure networks,” in
 2010 Third IEEE International Workshop on Wireless Network Coding.
 IEEE, 2010, pp. 1–6.
[13] ——, “Acknowledgement design for collision-recovery-enabled wireless
 erasure networks,” in 48th Annual Allerton Conference on Communica-
 tion, Control, and Computing (Allerton). IEEE, 2010, pp. 435–442.
[14] K. R. Duffy, J. Li, and M. Médard, “Capacity-achieving Guessing
 Random Additive Noise Decoding,” IEEE Transactions on Information
 Theory, vol. 65, no. 7, pp. 4023–4040, 2019.
[15] K. R. Duffy and M. Médard, “Guessing random additive noise decoding
 with soft detection symbol reliability information,” in International
 Symposium on Information Theory, 2019, pp. 480–484.
[16] K. R. Duffy, A. Solomon, K. M. Konwar, and M. Médard, “5G NR CA-
 Polar maximum likelihood decoding by GRAND,” in 2020 54th Annual
 Conference on Information Sciences and Systems (CISS). IEEE, 2020,
 pp. 1–5.
[17] A. Solomon, K. R. Duffy, and M. Médard, “Soft Maximum Likelihood
 Decoding using GRAND,” in IEEE International Conference on Com-
 munications, 2020.
[18] K. R. Duffy, “Ordered reliability bits guessing random additive noise
 decoding,” in IEEE ICASSP, 2021.
[19] S. M. Abbas, T. Tonnellier, F. Ercan, and W. J. Gross, “High-Throughput
 VLSI Architecture for GRAND,” in IEEE International Workshop on
 Signal Processing Systems (SiPS). IEEE, 2020, pp. 1–6.
[20] S. M. Abbas, T. Tonnellier, F. Ercan, M. Jalaleddine, and W. J. Gross, "High-throughput VLSI architecture for soft-decision decoding with ORBGRAND," 2021, accepted to the International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[21] J. Proakis, Digital Communications. McGraw-Hill, 2001, pp. 777–778.
[22] M. Kazemi, T. M. Duman, and M. Médard, “Double-zipper: Multiple
 access with zigzag decoding,” in IEEE International Conference on
 Communications. IEEE, 2020, pp. 1–6.
[23] R. Roth, Introduction to Coding Theory. Cambridge University Press, 2006.
[24] K. E. Atkinson, An Introduction to Numerical Analysis. John Wiley & Sons, 2008.
[25] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction
 to algorithms. MIT press, 2009.
[26] P. Bürgisser, M. Clausen, and M. A. Shokrollahi, Algebraic complexity
 theory. Springer Science & Business Media, 2013, vol. 315.
[27] M. Frigo, C. E. Leiserson, H. Prokop, and S. Ramachandran, “Cache-
 oblivious algorithms,” in 40th Annual Symposium on Foundations of
 Computer Science. IEEE, 1999, pp. 285–297.
[28] N. Abramson, “The ALOHA system: another alternative for computer
 communications,” in Proceedings of the Fall Joint Computer Conference,
 1970, pp. 281–285.
