Parallel Concatenated Gallager Codes for CDMA Applications

Hatim Behairy and Shih-Chun Chang
George Mason University, MS 1G5, 4400 University Drive, Fairfax, VA 22030, U.S.A.

Abstract— We present a class of concatenated codes called Parallel Concatenated Gallager Codes (PCGC) using Low-Density Parity-Check (LDPC) component codes. We report simulation results for PCGC that outperform those for comparable LDPC codes in both Additive White Gaussian Noise (AWGN) and flat Rayleigh fading channels. The good performance of PCGC increases the capacity of CDMA systems, while the reduced encoding and decoding complexity of such codes keeps the signal processing delays within a reasonable limit.

I. INTRODUCTION

Data delays on the order of tens of milliseconds are deemed acceptable in digital speech transmission, while larger delays may degrade the Quality of Service (QoS). Because of the data processing it requires, error-control coding adds delay to the received information stream before it is forwarded to the receiver's output. With the revolutionary advances in coding theory, it has become practically possible to use powerful error-control codes in Code Division Multiple Access (CDMA) applications, since very efficient decoding algorithms are available for these codes [1]-[2]. To keep signal processing delays acceptable, short-frame error-control codes must be used. However, short-frame codes do not perform as well as long-frame codes [3]-[5]. One of the key features of our proposed coding scheme is that it combines the strength of long-frame codes with the low decoding complexity and signal processing delays of short-frame codes.

Low-density parity-check (LDPC) codes were first introduced by Gallager [6] as a family of block codes characterized by a sparse parity-check matrix, with each row of the matrix representing the coefficients of a parity-check equation. Gallager showed that random LDPC codes are asymptotically good for binary symmetric channels (BSC) and that they achieve small error probabilities for rates bounded away from zero as the block length increases. Recent results by MacKay [7]-[8] proved that LDPC codes are very good, i.e., families of LDPC codes exist whose rates are arbitrarily close to channel capacity for a variety of communication channels. Turbo codes were introduced in 1993 by Berrou et al. [9] as a class of concatenated codes. Recursive encoders, large pseudorandom interleavers, and iterative decoding are the key ingredients behind the exceptional performance of turbo codes. A property of these codes is that they take advantage of the frame length as block codes do, while decoding proceeds as for convolutional codes.

We present a new class of concatenated codes built from the parallel concatenation of LDPC codes, called Parallel Concatenated Gallager Codes (PCGC) [10]-[11]. The motivation is to

use good LDPC codes in the well-known turbo code structure, breaking the fairly complex encoding and decoding of a long code into steps, while maintaining the information flow between the component decoders and minimizing any information loss between the decoding steps. In this scheme, the component LDPC codes interact in parallel concatenation directly, without interleavers [12]-[13]. We intend to show that short-frame PCGC have good error-correcting capabilities in both Additive White Gaussian Noise (AWGN) and Rayleigh fading channels. The good performance of PCGC increases the capacity of CDMA systems while the reduced encoding and decoding complexity of such codes keeps the signal processing delays within a reasonable limit.

The remainder of this paper is organized as follows: in section II, the proposed coding scheme is described. In section III, we compare PCGC to irregular LDPC codes. In section IV, we present simulation results for the proposed coding scheme in both AWGN and Rayleigh fading channels. The application of PCGC to CDMA systems is addressed in section V. Finally, in section VI we summarize our findings.

II. PARALLEL CONCATENATED GALLAGER CODES

A. PCGC Construction

Without loss of generality, in this paper we restrict our description of PCGC to rate 1/3 codes constructed by combining two rate 1/2 LDPC codes. Two distinct component codes, each of length L, are used in parallel concatenation to build a PCGC of length N (as shown in Fig. 1), where x denotes the systematic information bits, while v1 and v2 are the parity bits generated by the first and the second encoders, respectively. Note how the two component encoders are connected directly, without the use of an interleaver. An LDPC parity-check matrix of dimensions M × L contains L columns, each of weight C_l ∈ Z^+, l ∈ {1, ..., L}. We define the Mean Column Weight (MCW) of a parity-check matrix to be the average weight over all L columns of the matrix. When constructing the parity-check matrix of each component code, we require M ≥ C_l ≥ 1 and try to keep the row weight as uniform as possible throughout the matrix. Engineering of the optimum PCGC relies mainly on choosing the best parameters for the component codes.
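As an illustration of the quantities defined above, the following minimal sketch (ours, not code from the paper; the matrix generator and its parameters are only illustrative) builds a small random parity-check matrix with a prescribed column-weight profile and reports its Mean Column Weight:

```python
import numpy as np

def random_parity_check(M, L, col_weights, seed=0):
    """Build an M x L binary parity-check matrix whose l-th column has
    weight col_weights[l], placing the ones in randomly chosen rows."""
    rng = np.random.default_rng(seed)
    H = np.zeros((M, L), dtype=np.uint8)
    for l, w in enumerate(col_weights):
        rows = rng.choice(M, size=w, replace=False)
        H[rows, l] = 1
    return H

def mean_column_weight(H):
    """MCW = average number of ones per column of H."""
    return H.sum(axis=0).mean()

# Example: a 5 x 10 component matrix with mostly weight-2 columns
weights = [2] * 9 + [3]          # one heavier column, purely for illustration
H1 = random_parity_check(5, 10, weights)
print(mean_column_weight(H1))    # -> 2.1
```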



Fig. 1. PCGC encoder (the systematic bits x are fed directly to LDPC encoder 1 and LDPC encoder 2, which produce the parity bits v1 and v2, respectively; no interleaver is used).

Fig. 2. PCGC decoder (LDPC decoder 1 operates on y0 and y1, LDPC decoder 2 on y0 and y2; the two decoders exchange the extrinsic probabilities p1e(u) and p2e(u)).

B. Concatenated Codes: Why Do They Work?

At low Signal-to-Noise Ratios (SNRs), LDPC codes constructed with low MCW outperform codes with higher MCW. However, such codes tend to show some undetected errors at the decoder's output at moderate to high SNRs due to the existence of low-weight codewords. On the other hand, codes with higher MCW show strong performance at moderate to high SNRs with no trace of undetected errors. One explanation is that at low SNRs (when the noise level is high), naively constraining the code by raising the MCW of the matrix often results in a message bit receiving more unreliable and possibly contradictory opinions from the check equations. Such information may confuse the decoder and results in a loss of performance at low SNRs. At moderate to high SNRs (when the noise level is low), more reliable opinions from the check equations often result in more checks agreeing on the corresponding bit value, so the message bits converge to their correct values faster. For LDPC codes of rate 1/2, we have observed that codes with MCW < 2.5 outperform codes with MCW > 2.5 at low SNRs. However, at moderate to high SNRs the opposite becomes true and codes with high MCW perform best. Based on this observation, our intuition was to build a code that combines the strength of low-MCW codes at low SNRs with that of high-MCW codes at high SNRs. A very good configuration, found by extensive computer search, is to use an LDPC code with low MCW as the first component code and an LDPC code with higher MCW as the second component code. The mixture of different column weights obtained by combining the component codes, together with the iterative decoding algorithm, is the key to the performance improvement achieved by this class of codes.

C. The Decoder

Decoding PCGC follows the scenario of turbo decoding, with the exception that no interleaver is present between the component decoders. The component LDPC decoders compute the a posteriori probability as described in [8], using the sum-product algorithm with modifications to accommodate the a priori information. Let u_l ∈ {+1, −1}, and let y_0 denote the received sequence corresponding to the systematic information bits, while y_1 and y_2 denote the received sequences corresponding to the parity bits of the first and second component codes, respectively. We define a "super iteration" as an iteration in which the two component decoders exchange information. The PCGC decoder (one super iteration step) is illustrated in Fig. 2. Two Soft-Input Soft-Output (SISO) decoders are involved in each super iteration. In the first super iteration, the first decoder computes the a posteriori probabilities of the L coded bits, p_1(u), using the received sequences y_0 and y_1 with no a priori information, since the information bits are equally likely to be −1 or +1. The second decoder then computes the a posteriori probabilities p_2(u) using the received sequences y_0 and y_2 together with the extrinsic information p_1e(u) available from the first decoder as a priori information. On subsequent iterations, the first decoder uses the extrinsic information generated by the second decoder, p_2e(u), as a priori information to compute its a posteriori probabilities. The exchange of information between the component decoders continues until both decoders converge to valid codewords, or until a maximum number of super iterations is reached. In the latter case, the output of the second component decoder is declared as the best estimate of the transmitted sequence.
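The super-iteration schedule described above can be summarized in a structural sketch (ours, not code from the paper). The two component decoders are passed in as callables standing in for the modified sum-product SISO decoders; their interface is an assumption made only for this illustration:

```python
import numpy as np

def pcgc_decode(y0, y1, y2, decode1, decode2, max_super_iters=10):
    """Structural sketch of PCGC decoding (Fig. 2).

    decode1 / decode2 are assumed SISO LDPC sum-product decoders, each
    mapping (received vector z, a priori P(u_l = +1) on the K systematic
    bits) to (a posteriori probs on the systematic bits, extrinsic probs
    on the systematic bits, flag: all parity checks satisfied).
    """
    K = len(y0)
    prior = np.full(K, 0.5)          # first pass: bits equally likely

    for _ in range(max_super_iters):
        # Decoder 1 uses z1 = [y1 y0] plus the current a priori information
        app1, ext1, ok1 = decode1(np.concatenate([y1, y0]), prior)
        # Decoder 2 uses z2 = [y2 y0] plus decoder 1's extrinsic information
        app2, ext2, ok2 = decode2(np.concatenate([y2, y0]), ext1)
        prior = ext2                 # fed back to decoder 1 next super iteration
        if ok1 and ok2:              # both decoders converged to valid codewords
            break

    # If the iteration limit is reached, decoder 2's output is the final estimate
    return np.where(app2 > 0.5, +1, -1)
```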


D. How Does the "a priori" Probability Fit in?

Let r = (r_1, ..., r_N) denote the received sequence at the channel output. The PCGC decoder starts by demultiplexing the vector r into two vectors z^1 = (z^1_1, ..., z^1_L) = [y_1 y_0] and z^2 = (z^2_1, ..., z^2_L) = [y_2 y_0] that are used as the input vectors to the first and second component decoders, respectively. For convenience, we drop the superscript denoting the component decoder's number and continue the derivation for a generic decoder. If the Gaussian probability density function centered at +1 is

P(z_l \mid u_l = +1) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(z_l - 1)^2 / 2\sigma^2},    (1)

and the probability that the message vector u is +1 at site l is

P(u_l = +1 \mid z_l) = \frac{P(u_l = +1)\, P(z_l \mid u_l = +1)}{P(u_l = +1)\, P(z_l \mid u_l = +1) + P(u_l = -1)\, P(z_l \mid u_l = -1)},    (2)

then (2) can be written as

f_l^1 = P(u_l = +1 \mid z_l) = \frac{1}{1 + \exp(-2 z_l / \sigma^2)}    (3)

when the source information bits are equally likely (P(u_l = -1) = P(u_l = +1) = 1/2). By a similar argument, the probability that the message vector has a -1 at site l is

f_l^0 = P(u_l = -1 \mid z_l) = 1 - f_l^1 = \frac{1}{1 + \exp(2 z_l / \sigma^2)}.    (4)

In order to accommodate the a priori information available from the other component decoder, (3) and (4) must be modified. Equations (5) and (6) are generalizations of (3) and (4) that use the extrinsic information available from the other decoder as a priori information:

f_l^1 = \frac{1}{1 + \frac{P(u_l = -1)}{P(u_l = +1)} \exp(-2 z_l / \sigma^2)},    (5)

f_l^0 = \frac{1}{1 + \frac{P(u_l = +1)}{P(u_l = -1)} \exp(2 z_l / \sigma^2)}.    (6)

Fig. 3. Bipartite graph of a 5 × 15, R = 1/3 PCGC (MCW1 = 2, MCW2 = 2.667).
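Equations (3)-(6) translate directly into a few lines of code. The following numerical sketch is ours, not code from the paper; the function name and interface are illustrative:

```python
import numpy as np

def bit_posteriors(z, sigma2, p_plus=None):
    """Per-bit probabilities P(u_l = +1 | z_l) and P(u_l = -1 | z_l).

    z       : received samples (transmitted levels +1 / -1 in AWGN, variance sigma2)
    p_plus  : a priori P(u_l = +1) from the other component decoder;
              None means equally likely bits, i.e. (3)-(4) instead of (5)-(6).
    """
    z = np.asarray(z, dtype=float)
    if p_plus is None:                      # equations (3) and (4)
        f1 = 1.0 / (1.0 + np.exp(-2.0 * z / sigma2))
    else:                                   # equations (5) and (6)
        ratio = (1.0 - p_plus) / p_plus     # P(u_l = -1) / P(u_l = +1)
        f1 = 1.0 / (1.0 + ratio * np.exp(-2.0 * z / sigma2))
    f0 = 1.0 - f1
    return f1, f0

# Example: a strongly positive sample with no a priori information
f1, f0 = bit_posteriors([1.2], sigma2=0.5)
print(f1, f0)   # f1 close to 1, f0 close to 0
```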

In the first super iteration, the first component decoder assumes equally likely source information bits and computes f_l^1 and f_l^0 from the received sequence using (3) and (4), respectively. The first component decoder delivers two quantities: soft outputs p_1(u) on all information bits and the extrinsic information p_1e(u). This extrinsic information is used as a priori information by the second component decoder, which uses (5) and (6) to compute f_l^1 and f_l^0, respectively.

III. COMPARISON TO IRREGULAR LDPC CODES

The combination of columns of different weights C_n, n ∈ {1, ..., N}, that results from the concatenation of the two component LDPC codes amounts to an irregular LDPC code that can be represented in the classical bipartite Tanner-graph form; an example of a 5 × 15, R = 1/3 PCGC is shown in Fig. 3. In the proposed PCGC, N = 3K bits, the first K of which are information bits, the second K are redundancy bits produced by the first LDPC code, and the third K are redundancy bits produced by the second LDPC code. On the right, we have nodes representing the parity-check equations of both component codes. Since the row weights in the component codes are held (approximately) constant, the right degree of the graph, i.e., the number of bits participating in each parity-check equation, is almost regular.

In contrast, because of the parallel concatenation, the first K information bits participate in both the first set of parity-check equations (defining the first LDPC code) and the second set (defining the second LDPC code), while the second K bits participate only in the first set and the third K bits only in the second set. Therefore, the degree of the first K bits is larger than the degree of the second and third K bits. The final result is an irregular LDPC code. For example, take two R = 1/2 LDPC codes, the first with MCW = 2 and the second with MCW = 2.667. For simplicity and without loss of generality, if we use only weight-2 and weight-3 columns in the construction of the codes, then almost all the columns of the first code are of weight 2 (we use one weight-3 column to ensure that the matrix has full rank), while in the second code 1/3 of the columns are of weight 2 and 2/3 are of weight 3. Now, since the information block is of length K, the systematic part of the PCGC codeword is connected to both sets of parity-check equations (set 1 from the first code and set 2 from the second code), and we therefore obtain a number of weight-5 columns. The parity bits from the first and second codes are connected only to their corresponding sets of parity-check equations, and we therefore obtain a number of weight-2 and weight-3 columns, as shown in Fig. 4. Irregular LDPC codes are known to outperform regular LDPC codes [15]. PCGC form a class of irregular LDPC codes constructed by the parallel concatenation of different component codes.


Fig. 4. Matrix representation of PCGC (the numbers in the circles indicate the column weight in that particular part of the matrix; the columns are split into K systematic information bits, K parity bits from LDPC 1, and K parity bits from LDPC 2, and the rows into the LDPC1 and LDPC2 parity equations).

Fig. 5. Performance in AWGN channels (BER versus SNR [dB] for PCGC-576, LDPC-576, LDPC-1920, PCGC-1920, PCGC-1920*, and PCGC-6000).

The performance of an irregular LDPC code constructed by this parallel-concatenation-to-irregular-matrix transformation is similar to that of the corresponding PCGC. While their performance is similar, breaking the decoding of PCGC into two less complex decoding operations (which can be carried out completely in parallel with only minor performance loss) reduces both the time and the complexity needed to decode the longer and more complex equivalent irregular LDPC code.
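As a concrete illustration of this equivalence, the overall parity-check matrix of Fig. 4 can be assembled from the two component matrices as a block matrix. This is our own construction sketch; it assumes each component matrix is ordered as H_i = [A_i | B_i], with the first K columns acting on the systematic bits, which is a convention chosen here for illustration only:

```python
import numpy as np

def pcgc_equivalent_H(H1, H2, K):
    """Stack two component parity-check matrices into the equivalent
    irregular LDPC matrix of Fig. 4 (assumed column order:
    K systematic bits, K parity bits of code 1, K parity bits of code 2)."""
    A1, B1 = H1[:, :K], H1[:, K:]
    A2, B2 = H2[:, :K], H2[:, K:]
    top = np.hstack([A1, B1, np.zeros((H1.shape[0], B2.shape[1]), dtype=H1.dtype)])
    bottom = np.hstack([A2, np.zeros((H2.shape[0], B1.shape[1]), dtype=H2.dtype), B2])
    return np.vstack([top, bottom])

# Column weights of the result: systematic columns get the sum of their
# weights in H1 and H2 (e.g. 2 + 3 = 5), parity columns keep their
# component-code weights (e.g. 2 and 3), matching the mix described above.
# H = pcgc_equivalent_H(H1, H2, K); print(H.sum(axis=0))
```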


IV. SIMULATION RESULTS


We present simulation results for R = 1/3 PCGC of different frame lengths. In Fig. 5, the performance of PCGC is compared to LDPC codes of the same rate and length in AWGN channels [14]. In the figure, the number associated with each code is its frame length. To study the effect of using an interleaver between the two component LDPC codes, a PCGC (PCGC-1920*) was modified to permute the systematic information bits as in the original turbo code scenario. Our results show no significant difference compared to the interleaver-free scenario. This is expected, since permutation of the data is embedded naturally in the random structure of the code. We also simulated the performance of a short-frame PCGC decoder in fully interleaved flat Rayleigh-fading channels, i.e., we assumed that the fading amplitudes at different time instants are independent Rayleigh random variables. In the simulation, we assumed that no channel state information is available, i.e., the decoder has no knowledge of the fading amplitudes. In Fig. 6, the performance of the PCGC decoder in flat Rayleigh-fading channels is compared to that of the sum-product iterative decoder for LDPC codes.
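For reference, the fully interleaved flat Rayleigh fading channel assumed in these simulations can be modeled as below. This is a generic sketch under the stated assumptions (unit mean-square Rayleigh amplitudes, BPSK, noise scaled from Eb/N0 and the code rate), not the authors' simulation code:

```python
import numpy as np

def rayleigh_awgn_channel(bits, ebn0_db, rate, rng=None):
    """Fully interleaved flat Rayleigh fading + AWGN for BPSK symbols.

    bits : array of 0/1 code bits.  Returns only the faded, noisy samples
    (no channel state information, as assumed in the simulations).
    """
    rng = rng or np.random.default_rng()
    x = 1.0 - 2.0 * np.asarray(bits, dtype=float)   # BPSK: 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * rate * ebn0))      # noise std per dimension
    # Independent Rayleigh amplitudes (E[a^2] = 1) at every symbol time
    a = np.sqrt(rng.exponential(scale=1.0, size=x.shape))
    noise = sigma * rng.standard_normal(x.shape)
    return a * x + noise
```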

Fig. 6. Performance in flat Rayleigh fading channels (BER versus SNR [dB] for PCGC-576, LDPC-576, PCGC-1920, and LDPC-1920).

V. PERFORMANCE OF PCGC IN CDMA SYSTEMS

In practical CDMA schemes, a fraction of the processing gain is used for error-correcting coding, where low-rate codes can be used for better protection of the user data in noisy communication channels. A user data sequence is then spread partially by the error-correcting code and partially by an orthogonal sequence to maintain the same processing gain [1]. The interference density at the user demodulator is modeled as the sum of the multiple-access interference from other users and the channel noise density. In [3], it was shown that when the number of users s is large enough, the noise due to multiple-access interference can be approximated by a Gaussian distribution. Powerful error-correcting codes are then used to correct errors caused by the interference from other users in the channel in the same manner as errors caused by the channel noise. The capacity of a CDMA cell can then be approximated by

s \approx \frac{\eta}{E_b / I_0},    (7)

where η is the processing gain [3]. If a code can operate at a lower signal-to-noise ratio E_b/I_0 for a given bit error rate, then the number of users supported in a CDMA cell at a given QoS increases.
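A small worked example of (7) follows (our illustration; the two Eb/I0 operating points are placeholders, not values read off Fig. 7):

```python
def cdma_capacity(eta, ebio_db):
    """Approximate number of users per cell from (7): s = eta / (Eb/I0)."""
    return eta / (10.0 ** (ebio_db / 10.0))

# With processing gain eta = 128, every dB shaved off the required Eb/I0
# at the target BER translates directly into more supported users:
print(round(cdma_capacity(128, 2.5)))   # ~72 users at Eb/I0 = 2.5 dB
print(round(cdma_capacity(128, 1.5)))   # ~91 users at Eb/I0 = 1.5 dB
```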


Fig. 7. Number of users vs. bit error rate in a CDMA cell (η = 128, frame length = 1920; BER versus the number of active users for the uncoded system and for convolutional, LDPC, and PCGC coding).

We use (7) to evaluate the capacity of a CDMA cell. We assume coherent detection on both the forward and reverse links. The capacity can then be computed from the bit error rates of PCGC in AWGN environments, as shown in Fig. 7.

VI. SUMMARY

The parallel concatenation of LDPC codes in an interleaver-free fashion results in a good class of concatenated codes (PCGC). However, in order to achieve such good performance, a specific technique must be followed in choosing the component LDPC codes. In this paper, we described a procedure for building and decoding PCGC with minimum complexity, based on the mean column weight of the component codes' parity-check matrices. The described class can be viewed as a sub-class of irregular LDPC codes, but with lower encoding and decoding complexity. The good performance shown by PCGC, together with the reduced encoding and decoding time and complexity, makes them good candidates for CDMA applications. We showed how the capacity of a CDMA cell can be increased by using such error-control coding techniques while maintaining a reasonable delay at the decoder.

REFERENCES

[1] V. Sorokine, F. Kschischang, and S. Pasupathy, "Gallager Codes for CDMA Applications-Part I: Generalizations, Constructions, and Performance Bounds," IEEE Trans. Commun., vol. 48, no. 10, pp. 1660-1668, Oct. 2000.
[2] V. Sorokine, F. Kschischang, and S. Pasupathy, "Gallager Codes for CDMA Applications-Part II: Implementations, Complexity, and System Capacity," IEEE Trans. Commun., vol. 48, no. 11, pp. 1818-1828, Nov. 2000.
[3] A. J. Viterbi, CDMA: Principles of Spread Spectrum Communication. Addison-Wesley, 1995.
[4] S. G. Glisic and P. A. Leppanen, Code Division Multiple Access Communications. Kluwer Academic Publishers, 1995.


[5] D. P. Whipple, "Cellular CDMA," Applied Microwave & Wireless, vol. 6, no. 2, pp. 24-37, Spring 1994.
[6] R. G. Gallager, Low-Density Parity-Check Codes. M.I.T. Press, 1963.
[7] D. J. C. MacKay and R. M. Neal, "Good error-correcting codes based on very sparse matrices," in Proc. IEEE Int. Symp. Inform. Theory, Ulm, Germany, p. 113, 1997.
[8] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, no. 2, pp. 399-431, Mar. 1999.
[9] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," IEEE Trans. Commun., vol. 44, pp. 1261-1271, Oct. 1996.
[10] H. Behairy and S.-C. Chang, "Parallel Concatenated Gallager Codes," Electronics Letters, vol. 36, no. 24, pp. 2025-2026, Nov. 2000.
[11] H. Behairy and S.-C. Chang, "Parallel Concatenated Gallager Codes," in Proc. CDMA International Conference (CIC2000), Seoul, R. O. Korea, pp. 123-127, Nov. 2000.
[12] J. Hagenauer, E. Offer, and L. Papke, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Trans. Inform. Theory, vol. 42, pp. 429-445, Mar. 1996.
[13] O. Pothier, L. Brunel, and J. Boutros, "A Low Complexity FEC Scheme Based on the Intersection of Interleaved Block Codes," in Proc. IEEE 49th Vehicular Technology Conference, pp. 274-278, 1999.
[14] D. MacKay and M. Davey, Two Small Gallager Codes. [Online]. Available: http://www.cs.toronto.edu/~mackay/abstracts/R3.html
[15] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599-618, Feb. 2001.