International Symposium on Information Theory and Its Applications, Honolulu, Hawaii, U.S.A., November 5–8, 2000, pp. 327–330

“Look-Ahead” Soft-Decision Decoding of Binary Reed-Muller Codes

Norbert Stolte and Ulrich Sorger
Institute for Communications Technology, Darmstadt University of Technology (TUD), D-64283 Darmstadt, Germany
Email: {stolte,ulrich}@nesi.tu-darmstadt.de

Abstract

We investigate the performance of a stack algorithm to decode binary Reed-Muller codes. As metric to evaluate the different elements in the stack we use a recently derived lower bound on the squared Euclidean distance. Similar to the A*-algorithm, the new metric comprises a lower bound on the increase of the squared Euclidean distance in future decoding steps. Simulation results show that for the codes RM(3, 8) and RM(4, 8), even at low SNR, close to soft-decision ML decoding performance can be achieved with a reasonable stack size.

1. Introduction

Binary Reed-Muller (RM) codes belong to the oldest known codes in coding theory. They are very easy to construct, and very simple bounded-minimum-distance decoding algorithms exist [1]. But decoding of RM codes remains a difficult problem if it is desired to decode significantly beyond half the minimum distance of the code and, in particular, to achieve close to soft-decision maximum-likelihood (SDML) decoding performance. In [1] it was shown that binary RM codes of length $2^m$ can be obtained from RM codes of length $2^{m-1}$ and the $(u, u \oplus v)$-construction. Also, for the RM(1, m) codes an algorithm to perform hard-decision maximum-likelihood decoding is given. General soft-decision decoding of RM codes as generalized concatenated codes was introduced by Kabatyansky [2]. Schnabel and Bossert [3] showed that the $(u, u \oplus v)$-construction can be viewed as generalized multiple concatenation (GMC) of binary block codes of length 2. They also defined the soft values for the case of transmission over the additive white Gaussian noise (AWGN) channel. In [4] this algorithm was improved and extended to list decoding, so that SDML decoding can be done for the RM(1, m) codes, and quasi-SDML performance was achieved for RM codes of length ≤ 64. In this article we show the performance of a stack algorithm in which a recently derived metric is used to evaluate the different estimates.

[Figure 1: Decomposition of the code RM(r, m) into two outer codes $C^{(1)} = \mathrm{RM}(r-1, m-1)$ and $C^{(2)} = \mathrm{RM}(r, m-1)$.]

The paper is organized as follows: first we repeat the construction of RM codes. Then the main idea of the “look-ahead” technique is explained and an example is given. In the following section the simulation results for the codes RM(3, 8) and RM(4, 8) are presented, and at the end we give some concluding remarks.

2. Definitions

We assume BPSK transmission over an additive white Gaussian noise (AWGN) channel. The transmitted codeword $c = (c_1, c_2, \ldots, c_n) \in \mathrm{RM}(r, m)$, $c_i \in \{+1, -1\}$, $n = 2^m$, is corrupted by noise, and at the receiver the real-valued sequence $y = (y_1, y_2, \ldots, y_n)$ is observed. It is known [1][3] that if the codes $C^{(1)} = \mathrm{RM}(r-1, m-1)$ and $C^{(2)} = \mathrm{RM}(r, m-1)$ are concatenated through

$$b_i = \left(c_i^{(2)},\; c_i^{(1)} \cdot c_i^{(2)}\right), \quad i = 1, 2, \ldots, 2^{m-1},$$

the obtained generalized concatenated (GC) code is the code RM(r, m) (see Fig. 1). Since $C^{(1)}$ and $C^{(2)}$ are RM codes, they again can be viewed as GC codes and the same construction rule can be applied. This can be repeated several times, and in this way the code RM(r, m) is decomposed into $K$ simple repetition codes $R^{(k)}$, $k = 1, 2, \ldots, K$, where $K$ is the dimension of the code RM(r, m). The codeword $c = (b_1, b_2, \ldots, b_{n/2})$ completely determines the codewords $r^{(k)} = (r_1^{(k)}, r_2^{(k)}, \ldots) \in R^{(k)}$ and vice versa.
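To make the pairing concrete, here is a minimal sketch (ours, not from the paper) of one level of this construction in the $\{+1, -1\}$ domain, where the XOR of the binary construction becomes a componentwise product; the function name `combine` is our own:

```python
import numpy as np

def combine(c1, c2):
    """One construction level: interleave the pairs b_i = (c2_i, c1_i * c2_i).

    c1 : codeword of C(1) = RM(r-1, m-1) in {+1,-1}, length 2^(m-1)
    c2 : codeword of C(2) = RM(r,   m-1) in {+1,-1}, length 2^(m-1)
    Returns a codeword of RM(r, m) of length 2^m.
    """
    c1, c2 = np.asarray(c1), np.asarray(c2)
    b = np.empty(2 * len(c2), dtype=int)
    b[0::2] = c2          # first symbol of each inner block of length 2
    b[1::2] = c1 * c2     # in the {+1,-1} domain, XOR becomes a product
    return b
```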

Soft-decision bounded-minimum-distance (SD-BMD) decoding can be done by decoding only the outer codes $C^{(1)}$ and $C^{(2)}$. We define the components of the soft-input vectors $y^{(1)}$ and $y^{(2)}$ of the codes $C^{(1)}$ and $C^{(2)}$, with $a_i = \mathrm{sign}(y_i)$ and $w_i = |y_i|$, as

$$y_i^{(1)} = a_{2i-1} \cdot a_{2i} \cdot \min(w_{2i-1}, w_{2i}) \qquad (1)$$

and

$$y_i^{(2)} = y_{2i-1} + c_i^{(1)} \cdot y_{2i}. \qquad (2)$$
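As an illustration of (1) and (2), the following sketch (our addition, assuming NumPy; the function name `split_soft_inputs` is hypothetical) computes both soft-input vectors from a received sequence y:

```python
import numpy as np

def split_soft_inputs(y, c1_hat=None):
    """Soft inputs for the outer codes: y1 by eq. (1), y2 by eq. (2).

    y      : real-valued received sequence of even length
    c1_hat : estimate of the C(1) codeword in {+1,-1}; required for eq. (2)
    """
    y = np.asarray(y, dtype=float)
    a = np.where(y >= 0, 1.0, -1.0)   # a_i = sign(y_i), hard decisions
    w = np.abs(y)                     # w_i = |y_i|, reliabilities
    # Equation (1): y1_i = a_{2i-1} * a_{2i} * min(w_{2i-1}, w_{2i})
    y1 = a[0::2] * a[1::2] * np.minimum(w[0::2], w[1::2])
    if c1_hat is None:
        return y1
    # Equation (2): y2_i = y_{2i-1} + c1_i * y_{2i}
    y2 = y[0::2] + np.asarray(c1_hat, dtype=float) * y[1::2]
    return y1, y2
```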

3. Metric Values

In [5] it was shown that if (1) and (2) are applied successively to calculate the soft-input values $y_{R_i}^{(k)}$ of the codes $R^{(k)}$, the maximum-likelihood (ML) decoder decides for the codeword $c_{ML}$ which minimizes the following sum:

$$\lambda(c) = \sum_{k=1}^{K} \; \sum_{\{i \,\mid\, r_i^{(k)} \neq a_{R_i}^{(k)}\}} w_{R_i}^{(k)}. \qquad (3)$$

Here $a_{R_i}^{(k)} = \mathrm{sign}(y_{R_i}^{(k)})$ and $w_{R_i}^{(k)} = |y_{R_i}^{(k)}|$.
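This sum could be evaluated along the following lines; the sketch is ours (the paper specifies no implementation), and the layout of the argument y_R is our assumption. Applied to the soft inputs of only the first codes, the same routine also yields the partial sums used below:

```python
import numpy as np

def cost_sum(r_hat, y_R):
    """Evaluate the sum (3) over the first len(r_hat) repetition codes.

    r_hat : sequence of estimates in {+1,-1}, one per code R(1), R(2), ...
    y_R   : sequence of arrays; y_R[k] holds the soft inputs of code R(k+1)
    """
    total = 0.0
    for r_k, y_k in zip(r_hat, y_R):
        y_k = np.asarray(y_k, dtype=float)
        a_k = np.where(y_k >= 0, 1, -1)   # signs a_Ri^(k)
        w_k = np.abs(y_k)                 # weights w_Ri^(k)
        total += w_k[a_k != r_k].sum()    # weights where estimate disagrees
    return total
```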

Given a vector of estimates $\hat{r} = (\hat{r}^{(1)}, \hat{r}^{(2)}, \ldots, \hat{r}^{(\kappa)})$, $\hat{r}^{(k)} \in \{+1, -1\}$, $1 \leq \kappa \leq K$, the sum

$$\lambda_\kappa(\hat{r}) = \sum_{k=1}^{\kappa} \; \sum_{\{i \,\mid\, \hat{r}^{(k)} \neq a_{R_i}^{(k)}\}} w_{R_i}^{(k)} \qquad (4)$$

is a simple lower bound on the least value of (3) in the subcode $S_{\hat{r}} \subseteq \mathrm{RM}(r, m)$ which is defined by

$$S_{\hat{r}} = \left\{\, c \mid r^{(k)} = \hat{r}^{(k)},\; 1 \leq k \leq \kappa \,\right\}.$$

Equation (3) can also be viewed as a cost function, and the ML decoder has to find a path through the code tree with minimal cost. In this case $\lambda_\kappa(\hat{r})$ is the cost from the start node to the node given by the estimate $\hat{r}$. The total cost of a path through the node $\hat{r}$ is the value of $\lambda_\kappa(\hat{r})$ plus the cost from the node $\hat{r}$ to the final node. The search strategy to find the path with minimal cost can be improved if for each node under consideration a lower bound on the minimal cost to the final node is available. A search algorithm of this type is called an A*-algorithm [6][7]. The minimal cost from node $\hat{r}$ to the final node is given by

$$\lambda(S_{\hat{r}}) = \min_{c \in S_{\hat{r}}} \; \sum_{k=\kappa+1}^{K} \; \sum_{\{i \,\mid\, r_i^{(k)} \neq a_{R_i}^{(k)}\}} w_{R_i}^{(k)}. \qquad (5)$$

In [5] it was shown that if $\kappa$ is equal to the dimension of the code $\mathrm{RM}(r-s, m-s)$ for some $s$, then $\lambda(S_{\hat{r}})$ can be lower bounded by independently decoding short RM codes.

[Figure 2: Sketch of the decomposition of the code RM(r, m) into four outer codes $P^{(1)}$, $P^{(2)}$, $P^{(3)}$, $P^{(4)}$.]

As an example, for $s = 2$ the code RM(r, m) can be decomposed into $2^s = 4$ outer codes $P^{(k)}$, $k = 1, 2, 3, 4$, with $P^{(1)} = \mathrm{RM}(r-2, m-2)$, $P^{(2)} = P^{(3)} = \mathrm{RM}(r-1, m-2)$, $P^{(4)} = \mathrm{RM}(r, m-2)$ (see Fig. 2), and an inner block code of length 4. Since the inner code has length 4, four consecutive values $y_{4i-3}, \ldots, y_{4i}$ have to be combined to calculate one soft-input value $y_i'^{(1)}$ for the code $P^{(1)}$. Depending on the estimate $\hat{p}^{(1)}$, which is determined by $\hat{r}$, the components of three different soft-input vectors are given by

$$y_i'^{(2)} = a_{4i-3} \cdot a_{4i-2} \cdot \min(w_{4i-3}, w_{4i-2}) + \hat{p}_i^{(1)} \cdot a_{4i-1} \cdot a_{4i} \cdot \min(w_{4i-1}, w_{4i}),$$

$$y_i'^{(3)} = a_{4i-3} \cdot a_{4i-1} \cdot \min(w_{4i-3}, w_{4i-1}) + \hat{p}_i^{(1)} \cdot a_{4i-2} \cdot a_{4i} \cdot \min(w_{4i-2}, w_{4i}),$$

and

$$y_i'^{(2 \cdot 3)} = a_{4i-2} \cdot a_{4i-1} \cdot \min(w_{4i-2}, w_{4i-1}) + \hat{p}_i^{(1)} \cdot a_{4i-3} \cdot a_{4i} \cdot \min(w_{4i-3}, w_{4i}).$$

If we choose $p_{ML}^{(k)} \in \mathrm{RM}(r-1, m-2)$, $k \in \{2, 3, 2 \cdot 3\}$, to be the codeword with smallest Euclidean distance to the soft-input vector $y'^{(k)}$, then the following inequality holds:

$$\lambda(S_{\hat{r}}) \geq \frac{1}{2} \sum_{k \in \{2, 3, 2 \cdot 3\}} \lambda\left(p_{ML}^{(k)}\right). \qquad (6)$$

In general it can be shown [5] that

$$\lambda(S_{\hat{r}}) \geq \frac{1}{2^{s-1}} \sum_{k=1}^{2^s - 1} \lambda\left(p_{ML}^{(k)}\right) \qquad (7)$$

for $p_{ML}^{(k)} \in \mathrm{RM}(r-s+1, m-s)$. In that way, similar to the A*-algorithm, the increase of $\lambda_\kappa(\hat{r})$ in future decoding steps can be lower bounded.
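The $s = 2$ bound (6) could be computed along the following lines; this is our illustrative sketch, with `decode_cost` standing in for a decoder of RM(r-1, m-2) that returns the cost $\lambda(p_{ML})$ of the closest codeword:

```python
import numpy as np

def lookahead_bound_s2(y, p1_hat, decode_cost):
    """Lower bound (6) on the future cost lambda(S_r), case s = 2 (sketch).

    y           : received sequence, length divisible by 4
    p1_hat      : per-block estimates p^(1)_i in {+1,-1} implied by r_hat
    decode_cost : callable returning lambda(p_ML) for a soft-input vector
    """
    y = np.asarray(y, dtype=float).reshape(-1, 4)   # groups y_{4i-3}..y_{4i}
    p1_hat = np.asarray(p1_hat, dtype=float)
    a = np.where(y >= 0, 1.0, -1.0)
    w = np.abs(y)

    def pair(j, k):                 # a_j * a_k * min(w_j, w_k) per group
        return a[:, j] * a[:, k] * np.minimum(w[:, j], w[:, k])

    y2 = pair(0, 1) + p1_hat * pair(2, 3)     # y'^(2)
    y3 = pair(0, 2) + p1_hat * pair(1, 3)     # y'^(3)
    y23 = pair(1, 2) + p1_hat * pair(0, 3)    # y'^(2*3)
    # Inequality (6): half the sum of the three independent decoding costs
    return 0.5 * (decode_cost(y2) + decode_cost(y3) + decode_cost(y23))
```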


[Figure 3: Soft-decision decoding WER of the code RM(3, 8) over the noise energy E, decoders with different stack sizes.]


4. Simulation Results

For decoding we use a stack algorithm in which the tree is searched in a breadth-first manner, that is, all estimate vectors $\hat{r}$ have the same length. Instead of performing full maximum-likelihood decoding to find the $p_{ML}^{(k)}$ in (7), a lower bound $\lambda_{LB}(p_{ML}^{(k)})$ on $\lambda(p_{ML}^{(k)})$ is calculated. This is done with the help of a second stack decoder with a small stack which uses the metric given in (4). Whenever a stack overflow in the second stack occurs, the respective value of (4) of the element which is not kept in the stack gives a maximum value of $\lambda_{LB}(p_{ML}^{(k)})$ to be used in (7). Hence the metric used to order the estimates in the main stack is given by

$$\lambda_\kappa(\hat{r}) + \frac{1}{2^{s-1}} \sum_{k=1}^{2^s - 1} \lambda_{LB}\left(p_{ML}^{(k)}\right). \qquad (8)$$
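A schematic sketch of such a breadth-first stack search follows; this is our reading of the description above, not the authors' code, and `root`, `expand`, `lookahead_lb`, `L_main`, and `K` are assumed names:

```python
import heapq
from itertools import count

def stack_decode(root, expand, lookahead_lb, L_main, K):
    """Breadth-first stack search ordered by metric (8) (schematic sketch).

    root         : empty estimate vector
    expand       : yields (child, partial_cost) pairs, partial_cost as in (4)
    lookahead_lb : lower bound on the future cost, cf. (7)
    L_main       : main stack size; the worst estimates beyond it are dropped
    K            : code dimension = number of extension steps
    """
    tiebreak = count()                      # avoids comparing estimate vectors
    level = [(0.0, next(tiebreak), root)]   # all estimates have equal length
    for _ in range(K):                      # extend every estimate by one bit
        children = []
        for _, _, r_hat in level:
            for child, partial_cost in expand(r_hat):
                metric = partial_cost + lookahead_lb(child)   # metric (8)
                heapq.heappush(children, (metric, next(tiebreak), child))
        # on stack overflow only the L_main best estimates survive
        level = [heapq.heappop(children)
                 for _ in range(min(L_main, len(children)))]
    return level[0][2]                      # estimate with smallest metric
```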

[Figure 4: Soft-decision decoding WER of the code RM(3, 8) over the signal-to-noise ratio $E_b/N_0$, decoders with different stack sizes.]

[Figure 5: Soft-decision decoding WER of the code RM(4, 8) over the noise energy E, decoders with different stack sizes.]

[Figure 6: Soft-decision decoding WER of the code RM(4, 8) over the signal-to-noise ratio $E_b/N_0$, decoders with different stack sizes.]

At first, some simulation results for the code RM(3, 8) with code parameters $(n, k, d) = (256, 93, 32)$ are presented. The influence of the stack sizes on the word error rate (WER) is investigated, and we use the notation $\{L_{main}, L_{2nd}\}$ to denote a decoder with main stack size $L_{main}$ and second stack size $L_{2nd}$. A decoder with $L_{2nd} = 0$ uses the metric given in (4). In Fig. 3 the WER is shown in dependence of the energy of the noise vector, $E = \sum_i n_i^2$. Furthermore, a lower bound (LB) on the WER of the SDML decoder is given. The decoder with stack sizes $\{1, 0\}$ is equivalent to the decoder given in [3]. At a WER of $10^{-4}$ the performance of the decoder with stack sizes $\{256, 32\}$ is very close to SDML. Note that since the code has minimum Hamming distance 32, WER = 0 is only guaranteed for $E < 32$. In Fig. 4 the WER in dependence of the signal-to-noise ratio $E_b/N_0$ is shown. At a WER of $10^{-3}$ the performance degradation of the decoders with stack sizes $\{256, 32\}$ and $\{1024, 64\}$ compared to SDML decoding is less than 0.4 dB and 0.2 dB, respectively. In Fig. 5, for the code RM(4, 8) with parameters $(n, k, d) = (256, 163, 16)$, the WER in dependence of $E$ is shown. Here the improvement using the “look-ahead” technique is not as large as for the code RM(3, 8). In the case of

the decoder with stack size $\{256, 16\}$, increasing the size of the second stack to 32 gives almost no further improvement. Nevertheless, at a WER of $10^{-4}$ the performance is also very close to SDML. Finally, Fig. 6 shows the dependence of the WER on the signal-to-noise ratio.

5. Conclusion

The performance of a stack algorithm was investigated in which a recently derived lower bound on the squared Euclidean distance to the received sequence is used to evaluate the different estimates. For the codes RM(3, 8) and RM(4, 8) the performance of the presented decoder on the AWGN channel at WER = $10^{-3}$ is close to SDML. Hence it is clear that the whole family of RM codes of length 256 can be decoded close to SDML.

References

[1] F. J. MacWilliams and N. J. A. Sloane, "The Theory of Error-Correcting Codes", New York, North-Holland, 2nd reprint, p. 374, 1983.

[2] G. A. Kabatyansky, "On decoding Reed-Muller codes in semicontinuous channel", Proc. Second Int. Workshop on Algebraic and Combinatorial Coding Theory, Leningrad, USSR, pp. 87-91, 1990.

[3] G. Schnabel and M. Bossert, "Soft-Decision Decoding of Reed-Muller Codes as Generalized Multiple Concatenated Codes", IEEE Trans. Inform. Theory, Vol. IT-41, pp. 304-308, 1995.

[4] R. Lucas, M. Bossert, and A. Dammann, "Improved Soft-Decision Decoding of Reed-Muller Codes as Generalized Multiple Concatenated Codes", ITG Fachbericht, Codierung für Quelle, Kanal und Übertragung, No. 146, pp. 137-141, 1998.

[5] N. Stolte, U. Sorger, and G. Sessler, "Sequential Stack Decoding of Binary Reed-Muller Codes", ITG Fachbericht, 3rd ITG Conference Source and Channel Coding, No. 159, pp. 63-69, 2000.

[6] Y. Han, C. Hartmann, and C. C. Chen, "Efficient Priority-First Search Maximum-Likelihood Soft-Decision Decoding of Linear Block Codes", IEEE Trans. Inform. Theory, Vol. IT-39, No. 5, pp. 1514-1523, 1993.

[7] N. Nilsson, "Principles of Artificial Intelligence", Berlin, Springer-Verlag, 1982.