Systematic Convolutional Codes with Good Distance ...

5 downloads 15682 Views 500KB Size Report
Abstiacr -New systematic convolutional codes of rates R = k/n, 4 5 n I 7, 2 5 k ... coding were published by Lin and Lyne [l] (rates R = l/n,. 2 I n I 9 ..... pp. ios-ioS. -.
IEEE TRANSACTIONS

ON INFORMATION

THEORY, VOL. 37, NO. 3, MAY 1991

Thus, we find

SomeNewRate R=k/n(2sksn-2) Systematic Convolutional Codes with Good Distance Profiles

*exp i =

649

r

lu12+X*+2xIuIcosy 2a

H. C. Ferreira, D. A. Wright, A. S. J. Helberg, I. S. Shaw, and C. R. Wyman

dl”,(X,Y) 1 V,(vK~)

Abstiacr -New systematic convolutional codes of rates R = k/n, 4 5 n I 7, 2 5 k 5 n - 2, are presented together with their free distances and first few values of the distance spectra. These codes have rapidly growing column distance functions, and are intended for sequential decoding. The new codes were found by a nested step-by-step computer search, similar to the procedure introduced by Lin and Lyne.

fi(N-l)T(N/Z+l)(~siny)~-*

//0 0 +exp i

J;Tr[(iV+1)/2](27rNac)N’2 [*I2 + x2 +2xlulcosy 2a

xdrdy.

(58)

We now obtain an expression for Q(e) by following Shannon’s [6] approach on the Gaussian channel. S@ce the density of n + s, is spherically symmetric, P,+,,(U) = P,+,$lul). Q(O) represents the probability that e(i)+ n + s, lies outside a right circular cone of half-angle 0 with vertex at the origin and axis passing through e(i). Stated another way: Q(t?>is the integral of p”-, (lz - e(Z)/) over all points z that lie outside the cone. As in (58) “we can use (56) to decompose this integral over rings of points equidistant from the e(i) and the origin, as in [6]. Thus

y2 +

Q(e) =~~~mdKCyNp.+s..(J (a) T m (N - 1)&-“12 = lJ-9 0 T[@v+1)/2] (b)

Index Terms -Systematic computer searches.

= 7r[(N+1)/2]2(2Nac)N’2

codes, sequential decoding,

I. INTR~Du~I~N Sequential decoding is essentially a tree search procedure where codes with long constraint lengths and large free distances can be used. An important factor determining the decoder’s performance, is the reduction of the computational effort, which can be achieved when rejecting wrong paths in the code tree quickly. Consequently, a code’s column distance func-

le(i)l* -2yle(i)lcos$)dydt,h

(ysin51,)N-2Jj~+,u(

(i+l)*r(N/2+1)

convolutional

iTdmiT/,y”(

y*+Nb-2ycos$&)ydyd+

xy sin y sin +)“-”

Nb+y*-2ycos~~+x2f2xcosy\lNb+y2-2ycos+& 2a

where C,, EL(r, y, t, +), A and B are as defined in (47) and (48). The justification of these steps is as follows: Step (a) follows from (56) and the restriction that le(i)l* = I%%;Step (b) results by substituting (58) into (a); Step (c) follows from the change of variables t = y/m, r = .x/m. REFERENCES

whdy

&de

I

tion [2] or distance profile [4] may be more important in determining its performance than its free distance. Chevillat and

Costello [5], as well as Hagenauer [3], verified this by simulations. Previously, systematic convolutional codes for sequential decoding were published by Lin and Lyne [l] (rates R = l/n, 2 I n I 9 and R = 2/5), by Costello [2] (rates R = l/n, 2 I n I

PI B. Hughesand P. Narayan,“Gaussianarbitrarily varying channels,” Manuscript received October 7, 1988; revised September 20, 1990. IEEE Trans. Inform. Theory, vol IT-33, no. 2, pp. 267-284, Mar. This work was supported in part by the Department of Posts and 1987. Telecommunications and the Foundation for Research Development. Dl I. G. Stiglitz, “Coding for a class of unknown channels,”IEEE

Trans. Inform-. 7heory, vol. IT-12, no. 2, pp. 189-195, 1966. 131 R. G. Gallager, Information Theory and Reliable Communication.

This work was presented at the IEEE International Symposium on Information Theory, Kobe, Japan, June 19-24, 1988. H. C. Ferreira, I. S. Shaw, A. S. J. Helberg, and C. R. Wyman are New York: Wiley, 1968. with the Laboratory for Cybernetics, Rand Afrikaans University, P.O. A. N. Shiryayev,Probability. New York: Springer,1984. tz; B. Hughes and T. G. Thomas, “Exponential error bounds for ran- Box 524, Johannesburg, 2000 South Africa. D. A. Wright was with the Department of Electrical and Computer domly modulatedcodeson Gaussianchannelswith arbitrarily varying interference,”technical report no. 88-06, Dept. of Elect. and Engineering, San Diego State University, San Diego, CA 92182-0190. He is currently with Hysignal Incorporated, 5962 Laplace Court, Suite Comput.Eng., JohnsHopkins Univ., Baltimore, MD. 161C. E. Shannon,“Probability of error for optimal codesin a Gaussian 220, Carlsbad, CA 92008. IEEE Log Number 9041813. channel,” Bell Sysr. Tech. J., vol. 38, pp. 611-656, May 1959.

0018-9448/91/0500-0649$01.00 01991 IEEE

IEEE TRANSACTIONS ON INFORMATION

650

THEORY, VOL. 37, NO. 3, MAY 1991

4), and by Hagenauer [3] (rates R=(n -1)/n, 3s n I 8). dm,n Sun et al. [6] searched for similar codes suitable for sequential 1 W) decoding (rates R = (n - 1)/n, 4 I n 5 7 and R = 3/5), followi=l ing the criterion that after obtaining an optimum distance proA file, the free distance should also be maximized. Nonsystematic, IO4 short constraint length R = l/2 codes suitable for either se- 10s quential decoding or Viterbi decoding were published by Johannesson [4]. We now present new systematic, good distance IO2 profile convolutional codes of rates R = k/n, 4 I n I 7, 2 I 10’ k I n -2. These codes may be of interest on communications 100 > channels where a careful tradeoff between bandwidth expansion 9 10 11 12 j 3 4 5 6 7 8 0 12 and coding gain has to be made. The codes may furthermore be used in the construction of quasi-cyclic block codes [8], [9]. Fig. 1. Computational load and computer file size for R = 3/5 systematic code. Nonsystematic, short constraint length codes of rates similar to the rates of our codes, were presented in [7]. These maximum free distance codes were however intended for Viterbi decoding. We furthermore introduce n( j, d,(j) + r) to denote the number of information sequences [i]j+l, with i, # 0, whose code II. DEFINITIONS AND NOTATION sequences have weight d,( j> + r, r = 0, 1,2. In Hagenauer’s noWe prefer the basic notation of Hagenauer [3], although we tation, n,(j) = n( j, d,(j)). Similarly, when truncating G at j I M need to slightly expand it in order to describe our codes. Let we let n(dJ j)+ r) indicate the number of information sequences i with i, # 0, whose code sequenceseventually achieve i=(i,,i,,i,;..,ij...) (1) weight d,(j) + r, r = 0, 1,2 before remerging with the all zeros be an information sequence where each subblock ij contains k code sequence x = 0. binary digits. Similarly, let x=(xO,x~,xz;~~,xj;~‘)

(2)

be the encoded sequence with each subblock xi containing n code bits. The encoding operation is denoted as

III. CODE SEARCH

The nested step-by-step procedure that we used, was first introduced by Lin and Lyne [l] for codes of rate R = l/n. In x=iG, comparison to an exhaustive search, this procedure greatly re(3) duces the computational load during the code search. Costello where G is the generator matrix: [2] and Hagenauer [3] used similar methods. Briefly, the search G,G,G, .*. G, procedure attempts to optimize the weight spectrum of the code G,G, ... ..+ G, at each step j, i.e., it is attempted to maximize d,(j) and G= (4) Go ... ... ... G,’ furthermore to minimize n( j, d,(j)). If two different sets of .1 &(j) tie with respect to d,( j> and n( j, d,(j)), preference is :1 L : given to a set if this set minimizes n(j, d,(j)+ r-1, r 2 1, for the and the entries of the submatrices G4, q = 0,l; . ., M, specify smallest r where the two sets have different n( j, d,( j> + r). Our the code of constraint length M + 1 subblocks. For our system- computer implementation of the search procedure, can be outlined as follows. atic codes, G, =[Zk,GT(0)] and G, = [Ok,GT(q)], q = 1,2;. *, M, where Ik is the k X k identity matrix and Ok is the k X k Set-Up Phase: Select rate R = k/n and the desired minimum all-zero matrix. We let distance of the code, dmin = d,(M). Set j = 0. Choose a set of the initial connection vectors, dh(O>, k + 1 I h I n that optiS+l(q) mizes the initial weight spectrum n(O,s), s = 1,2,. . . , d,i,. Open G(q)= ; (5) a file containing the first diverging information bits i,, 1 I i, I 2k - 1, and their weights when encoded with Gh( j). w?> Recursive Phase: Increment j. Develop the weight spectra with dh(q), k + 1 I h I n, the (n - k) k-bit connection vectors. n(j,s), s=l,2;.*, dmin corresponding to all possible sets of The column distance d,(j) [2] of a convolutional code is the Gh( j), extending i = (i,, i,, * . . , ij- i) with all possible ii. Choose minimum Hamming distance between any two diverging code a set of Gh( j> that optimizes the weight spectrum. Erase the old sequencesof length j + 1 steps, or equivalently, file and open a new file containing all i = (i,, i,; . ., ii1 and d,(j)=Ir;linw~([iG]j+l), (6) their weights when encoded with the selected Gh(j); however, delete all i of which the encoded weights exceed dmin. Repeat ’,+1 io#O this procedure until the weight of all encoded sequencesexceed where [*Ii+ i denotes truncation after j + 1 subblocks and w,(.) dmindenotes Hamming weight. The vector [d,(O), d,(l), . * ., d,(M)] is called the distance profile [4]. The free distance d,(j), obtained IV. RESULTS when truncating G at j I M, is defined to be The computational load of the search is an important factor which determined how far we could extend the constraint length of our codes. This load grows exponentially during the first The minimum distance of the untruncated code is phase of the search. It reaches a maximum about halfway dmin = d,(M)* (8) through the search, as shown for example in Fig. 1. The maxi-

I.

[ 1

IEEE TRANSACTIONS

ON INFORMATION

651

THEORY, VOL. 37, NO. 3, MAY 1991

TABLE I RATE 2/4 SYSTEMATICCODE

j

G3(.i) G4(j) d,, 01 10 10 10 11 10 01 01 00 10 01 11 11

0

1 2 3 4 5 6 7 8 9 10 11 12

11 01 01 01 11 00 10 10 01 01 00 01 00

d,(j)

n(j,d,(jN

n(j,d,(j)+l)

2 3 4 5 6 7 7 8 8 9 10 10 11

1 1 1 3 9 21 3 12 1 6 30

2 5 11 18 31 65 44 87 35 79 203 83 217

2 2 3 3 4 4 5 5 6 6 7 7 7

5 28

n(j,d,(j)+2)

d,(j)

1 1 1 1 1 8 2 6 1 3 2 1 1

2 1 1 3 1 0 10

d,(j)

n(dljN

n(d,(j)+l)

3

2 2 1 4 2 1 1 1 1 1 1 1 1

2 3 4

0

4 13 35 73 152 129 295 183 430 866 469 -

n(dAj)+l) n(dAi)+2)

n(dAj)

5 6 8 9 10 9 11 11 12 13

0

4 4 3 9 34 26 SO 9 18 10 9 13

TABLE II RATE 2/S SYSTEMATICCODE j

g3(j)

e4(j)

01 11 10 00 10 00 10 00 00 10 11 10 01

10 11 10 11 00 11 01 11 00 11 00 10 01

0

1 2 3 4 5 6 7 8 9 10 11 12

8(j)

d,(j)

d,,

11 01 00 01 01 11 11 01 00 11 11 11 00

2 3 4 4 5 6 7 8 8 9 10 11 11

n(j,d,(jN

n(j,d,(j)+l)

2

3 5 6 7 8 9 11 12 12 13 14 15 16

n(j,d,(j)+2)

1 6 13

5 3 3 2 1 38 44 1 5 9 13 18

1s

20 28 116 160 SO 72 92 132 -

0 0

5

16 33 54 83 235 394 212 290 405 -

6 8 9 10 12 13 13 14 14 16 18

n(dAj)+2)

0 0 0 0

TABLE III RATE 3/S SYSTEMATICCODE j

G4(j) 0

1 2 3 4 5 6 7 8 9 10 11 12

d LL

g5(j)

101 110 110 100 010 000 111 100 100 010 011 011 101

d,(j)

n(j,d,(j))

011 111 000 001 111

2 2 2 3 3

2 3 4 5 5

011 lo1 000 110 100 010 111 111

43 ~ ; 4 7 5 8 5 8 6 9 6 9 6 10

2 2

5 19 1 10 54 7 47 5 47 8 60

n(j,d,(j)+l)

n(j,d,(j)+2)

d,(j)

n(d,(j))

n(d,(j)+l)

4 17 40 78 53 139 329 170 475 192 615 218 -

1 12 54 194 198 504 1384 907 2541 1378 -

2 4 4

2 11 3

5 5

4 0 12 12 12

1 7 2 1 4 1 2 6

:: 14 4 7 1 10 0

-

5 6 6 8 8 8 9 9 11 12

n(d,(j)+2) 1

52 16 27 33 24 38 37 21 27

5 28 60

TABLE IV RATE 3/6 SYSTEMATICCODE j

0 1 2 3 4 5 6

e4(j)

110 001 111 110 001 010 110

f?(j)

101 010 111 101 110 000 001

b6(j)

011 100 111 011 110 010 000

d,,

2 2 3 4 4 5 6

d,(j) 3 4 6 7 8 9 10

n(j,d,(j)> 4 3 24 27 38 54 89

n(j,d,(j)+l)

3 12 51 102 155 268 420

n(j,d,(j)+2) 0 16 60 208 471 847 1526

d,(j)

3 4 6 8 8 9 10

n(dAj))

4 3 3 6 1 3 2

n(dmCj)+l) 3 0 12 13 4 7 6

n(d,(j)+2)

0 16 12 9 12 17 6

652

IEEE TRANSACTIONS

ON INFORMATION

THEORY, VOL. 37, NO. 3, MAY 1991

TABLE V RATE 2/7 SYSTEMATICCODE j

e3(j>

d4(j)

d’(j)

f?(j)

6’(j)

dLL

d,(j)

01

01

10 10 11 11 00 11 01 00 01 00 11

11 01 01 01 10 00 00 01 01 10 11

11 11 10 11 10 01 00 00 10 00 11

3 4 6 7 9 10 12 13 14 16 17

4 7 9 11 13 1s 17 18 20 22 23

0

: 3 4 5 6 7 8 9 10

ii 01 10 10 01 11 10 11 10

ii 10 01 10 00 10 10 11 01

n( j, d,(j))

n(j, d,(j) + 1)

n(j, d,(j) + 2)

2 6 11 15 30 42 60 28 68 124 56

1

3 3 7 18 1 4 17 1

d,(j)

n(dAiN

4 8 10 13 16 18 19 21 24 26 30

0

2 18 25 43 87 167 111 184 363 -

n(dAi)

1 3 1 1 6 2 4 1 2 1 1

+ 1) n(dAi)

2 0 2 2 0 3 3 2 0 1 2

+ 2)

0

3 1 2 9 1 4 3 2 1 5

TABLE VI RATE 3/7 SYSTEMATICCODE j

G4( j)

0 1 2 3 4 5 6 7 8 9 10

011 001 110 101 010 111 011 011 011 111 100

f?(j)

101 010 011 000 001 110 110 110 100 011 111

k?j)

110 100 101 001 111 000 110 101 011 101 000

6’(j)

111 111 000 110 001 101 101 111 010 000 010

d,,

2 3 4 5 6 7 8 9 10 11 12

d,(j) 4 6 8 8 10 11 12 13 1s 16 17

n(j,d,(j))

n(j,dc(j)+l)

7 21 70 1 18 9 4 2 56 42 36

n(j,dcW+2)

0 0 0

28 147 99 306 412 425 448 1825 -

34 132 114 102 74 462 428 -

d,(j)

4 6 8 9 10 11 13

0

15

17 18 20

n(dlj)+l)

n(dAj))

7 6 7 3 1 1 1 2 2 1 1

n(dAj>+2)

0 0 0

0

16 24 7 3 3 3 1 4 4 5

3 3 0 1 0 1 1 4

TABLE VII RATE 5/7 SYSTEMATICCODE j

$(j)

6’(j)

0 1 2 3

00001 11010 10011 01110

11111 11101 00100 00011

d,-,

d,(j)

2 2 2 2

2 2 3 4

n(j,d,.(j))

10 1 4 17

n(j,d,.(j)+l) 5 22 62 211

mum is a function of the specified value of dmi,. During the second phase of the search, the load decreases as the encoded weight of more and more information sequences exceed d,i,, and these sequences are deleted. The new codes that we found are presented in Tables I-VII. It should be noted that for the R = 2/S code, we extended the R = 2/5 constraint length 8 code in [l] to constraint length 13. Where available, we also present the weight spectrum up to n( j, d,(j)+2). Using the procedures in [lo], we furthermore computed for each rate the d,(j) and n(d,( j)+ r), r = 0,1,2, obtained when truncating the code at step j. These results are also presented in Tables I-‘VII. We also compare the distances of our rate R = k/n codes to the Lin-Lyne Gilbert-type lower bound from [l]. According to this bound, d,(j) 2 d,,, where d,, is the largest integer satisfying (9)

V. CONCLUSION Seven new systematic convolutional codes with good distance profiles, mostly of rates not previously investigated, have been found. These codes complement the codes published before, which usually had rate R = k/n where k = 1 or k = n - 1. Although it is not claimed that our codes are optimum with

n(j,d,(j)+2)

d,(j)

5 53 255 991

2 2 3 4

n(dAj)) &L,(i)+l) 10 1 2 3

5 7 13 21

&Ai)+2) 5 22 55 100

respect to distance profile, we have not found any other better codes, by resolving some of the ties in the sets of dh(j) in different ways. It was observed that at many values of j, notably the small values of j, several sets of the eh(j) tied. Finding optimum distance profile codes using the Lin-Lyne method, thus implies an exhaustive search through a large tree of codes, which was computationally infeasible. The column distance functions of all our codes was never lower than the Lin-Lyne lower bound and eventually exceeded it with an increasing difference. It is also interesting to compare our constraint length 13, R = 3/5 code to the constraint length 9, R = 3/5 code in [lo]. Our code has a better distance profile while the code in [lo] achieves a larger d,(j) when truncated at large j. However, when comparing these two codes, neither d,(j) nor d,(j) differ with more than two units of distance for any j.

REFERENCES

[l] S. Lin and H. Lyne, “Some results on binary convolutional code generators,” IEEE Trans. Inform. Theory, vol. IT-13, no. 1, pp. 134-139, Jan. 1967. [2] D. J. Costello, “A construction technique for random-error-correcting convolutional codes,” IEEE Trans. Inform. Theory, vol. IT-15, no. 5, pp. 631-636, Sept. 1969. [3] J. Hagenauer, “High rate convolutional codes with good distance profiles,” IEEE Trans. Inform. Theory, vol. IT-23, no. 5, pp. 615-618, Sept. 1977.

IEEE TRANSACTIONS ON INFORMATiON

653

THEORY, VOL. 37, NO. 3, M A Y 1991

[41 R. Johannesson, “Robustly optimal rate one-half binary convolutional codes,” IEEE Trans. Znforni. Theory. DD. _ vol. IT-21. no. 4. __ 446-468, Juiy 1975. [51 P. R. Chevillat and D. J. Costello, “Distance and computation in sequential decoding,” IEEE Trans. Commun., vol. COM-24, pp. 440-447, Apr. 1976. 161J. Sun, I. S. Reed, H. E. Huey, and T. K. Truong, “Pruned-trellis search techniaue for high-rate convolutional codes.” ZEE Proc., vol. 136, pt. E, no: 5, pp. 405-414, Sept. 1989. 171 D. G. Daut, J. W. Modestino, and L. D. Wismer, “New short constraint length convolutional code constructions for selected rational rates.” IEEE Trans. Inform. Theorv. DD. , I vol. IT-28. no. 5. _I 794-800, Sept. 1982. 181R. M. Tanner, “Convolutional codes from quasi-cyclic codes: A link between the theories of block and convolutional codes,” presented at the IEEE Znt. Symp. Znform. Theory, Kobe, Japan, June 19-24, 1988. 191Q. Wang, V. K. Bhargava, and A. T. Gulliver, “Construction of optimal quasi-cyclic codes based on optimum distance profile convolutional codes,” in Proc. IEEE Pacific Rim Conf. Commun., Cornout., Sianal Processina, -. Victoria, BC. Canada. June 4-5. 1987, pp. ios-ioS.

1101M. Cedervall and R. Johannesson, “A fast algorithm for computing

binary digits as the punctured encoder. In [4], a punctured encoder was defined as a minimal punctured encoder if the corresponding ordinary encoder G is minimal and the punctured encoder and ordinary encoder have the same constraint length. In this correspondence we show that any rate k /(k + 11, noncatastrophic, antipodal (~6 = pj = 1, j = 1,2; * . , k + 11, punctured encoder of constraint length v is a minimal punctured encoder. In Section II, we establish the notation and state the definitions needed to prove our result. Also we summarize how to obtain the generator matrix from a punctured convdlutional encoder. We then, in Section III, prove that any rate kJ(k + l), noncatastrophic, antipodal punctured encoder is a minimal encoder. Finally, we conclude with a short discussion in Section IV. II. PRELIMINARIES The input sequences to a rate k /(k + 1) convolutional encoder are represented by a k-tuple

distance spectrum of convolutional codes,” IEEE Trans. Inform. Tlieory, vol. 35, no. 6, pp. 1146-1159, Nov. 1989.

v= [U’(D),U2(D);-,Uk(D)], U’(D)

Rate k/t k + 1) M inimal Punctured Convolutional Encoders Kjell Jorgen Hole

= I& + u’,D + u;D2

The output

+ a. * ,

Index Terms -Punctured

Isilk. by a (k + l)-

tuple. The encoder is defined by a k by (k + 1) matrix, G = [ Gj(D)I, of generator polynomials Gi’( D) = g& + g{,D + giiD2 + * * * , l