

Multidimensional Binary Repetition Codes

Pavel Loskot and Norman C. Beaulieu
iCORE Wireless Communications Laboratory, Department of Electrical and Computer Engineering,
University of Alberta, Edmonton, Alberta, Canada, T6G 2V4
E-mail: {loskot,beaulieu}@ece.ualberta.ca

Abstract— Cyclic shifts of the binary codewords in multiple dimensions are used to construct a novel class of multidimensional binary block repetition codes. This construction is well suited as an inner encoding scheme for concatenated one-dimensional outer codewords to increase their minimum Hamming distance without increasing the transmission bandwidth. Two design criteria for multidimensional binary repetition codes are given, and an efficient algorithm to search for good codes is presented. The construction is illustrated using examples of the proposed codes in two and three dimensions. The union bound on the bit-error rate is used to optimize the transmission energy distribution over the codewords to improve the transmission reliability.

I. INTRODUCTION

Forward error correction codes based on binary cyclic matrices have been considered in the literature since the 1960s. These coding schemes can be formally described using a regular trellis or a quasi-cyclic parity check matrix; however, the design criteria are selected for the specific application at hand. Tail-biting convolutional codes (CC's) avoid the rate loss of trellis termination by forcing the initial and ending states to be equal [1]. Quasi-cyclic block codes maximize the minimum Hamming distance for a given block length [2]. The generator polynomials of CC's and computer search are usually employed to design quasi-cyclic and tail-biting codes exploiting their trellis structure [3], [4]. Tail-biting codes for product codes are considered in [5]. Note that, in general, product codes increase the transmission bandwidth. The transmission bandwidth is not increased if one exploits multidimensional set partitioning and multilevel modulation schemes [6]. Cyclic matrices are also used to design iteratively decodable LDPC codes [7], [8].

In this paper, we consider binary repetition codes (BRC's) that trade off the complexity of encoding and decoding against the desired minimum Hamming distance using sequences of cyclic shifts of the input information vectors [9]. In Section II, we discuss the properties of binary cyclic matrices and define the multidimensional BRC's. In Section III, optimization criteria to design the multidimensional BRC's are given. We show that describing the code by cyclic shifts of the input information vectors leads to a simpler search for good codes. We design one-dimensional (1D), two-dimensional (2D) and three-dimensional (3D) BRC's. We apply the 2D and 3D BRC's as inner codes to concatenate the 1D codewords, improving the overall minimum Hamming distance and, importantly, without increasing the transmission bandwidth. In Section IV, we obtain the union bound (UB) of the bit-error rate (BER), and we optimize the distribution of the transmission energy over the codewords to improve the BER.

II. MULTIDIMENSIONAL BINARY REPETITION CODES

Let Z_B = {0, 1, ..., B−1}. Let u ∈ Z_2^k and v ∈ Z_2^k be binary row vectors. Let A_1 and A_2 be binary cyclic matrices. Denote the i-th component of u as u_i, and assume operations over the binary Galois field, GF(2). We have the following properties.

Property 1: The binary cyclic matrix A = Σ_{i=0}^{K−1} J^{a_i}, where a = (a_0, ..., a_{K−1}) is the generating vector of A, and J ∈ Z_2^{k×k} is the cyclic matrix having the generating vector j = (0, 1, 0, ..., 0). Note that J^T J = J J^T = I, where I ∈ Z_2^{k×k} is the identity matrix.

Property 2: The product v = uA corresponds to the cyclic convolution v_i = Σ_{j=0}^{k−1} u_j [A]_{0, mod_k(i−j)} of the polynomials u(Z) = Σ_{i=0}^{k−1} u_i Z^i and A(Z) = Σ_{i=0}^{K−1} Z^{a_i}, where Z is a dummy variable. Hence, v is a sum of K cyclically shifted vectors u, i.e., v = Σ_{i=0}^{K−1} u J^{a_i}.

Property 3: The sum A = A_1 ⊕ A_2 and the product A = A_1 A_2 are cyclic matrices. Also, if A = A_1 ⊕ A_2 or A = A_1 A_2, and A and A_1 are cyclic, then A_2 is cyclic.

Property 4: If there exists a matrix A† such that A† A = A A† = I, and A is cyclic, then A† is cyclic. Note that A_2 = A_1† A is a cyclic deconvolution of A = A_1 A_2.

Property 5: If the determinant det A ≠ 0, and thus the (algebraic) matrix inverse A^{−1} exists, then, in some cases, A† can be efficiently computed as A† = mod_2(⌊A^{−1} det A⌉), where ⌊·⌉ denotes rounding to the nearest integer and mod_a(·) is the modulo-a operation [9]. Note that mod_2(⌊A^{−1} det A⌉) = 1 (the all-ones matrix) indicates that the inversion failed. It is not known whether the inversion mod_2(⌊A^{−1} det A⌉) fails for every matrix A that is not invertible over Z_2^{k×k}.

Property 6: If A† exists, then v = uA is a one-to-one mapping referred to as a permutation.

We define the BRC's and their design criteria next. Let C = (n, k, d_min) be a binary block code of rate R = k/n with minimum Hamming distance d_min, having the codewords c = C(u) ∈ Z_2^n, where u ∈ Z_2^k is the input information vector. Let the cyclic matrix A ∈ Z_2^{k×k} be generated by a vector a = (a_0, a_1, ..., a_{K−1}) of length K, where 0 ≤ a_0 < ... < a_{K−1} < k and a_i = mod_k(a_i). The parameter K_a = K will be referred to as the constraint weight of A. Define the constraint length of A, ν_a = a_{K−1} − a_0, and the span of A, µ_a = ν_a + 1. Note that these parameters determine the complexity of encoding and decoding if A (or, equivalently, a) is used to generate the codewords of C.

Consider the D-dimensional BRC's of rate R = L/(L + 1) and R = 1/(L + 1), respectively.




The codewords, c = (c_1, c_2, ..., c_{L+1}), of the BRC, C = (n, k, d_min), are defined using a set of mappings C_i: u → c_i of the input information vector u, for i = 1, 2, ..., (L + 1); thus, C = C_1 ∘ ... ∘ C_{L+1}. Note that the information matrix u ∈ Z_2^{k_1×k_2×...×k_D} and the c_i ∈ Z_2^{k_1×k_2×...×k_D} are defined over a D-dimensional binary field, GF(2). If C_i is linear, then C_i can be described using 2D matrix multiplications and additions, and using a set of binary cyclic matrices {A_(j)i}_j (or, equivalently, generating vectors a_(j)i) corresponding to mappings C_(j)i, where j = 1, 2, ..., D. In order to simplify the following discussion, we assume that C_(j)i = C_(j) and A_(j)i = A_(j) for all j; i.e., only one mapping C_(j) and one cyclic matrix A_(j) are used for each dimension j. Hence, C is a D-dimensional block code that is specified by a set of D (not necessarily linear) 1D codes C_(j) = (n_(j), k_(j), d_min(j)), having the block length n_(j), dimension k_(j), rate R_(j) = k_(j)/n_(j), and 2D generator matrix G_(j). Then, the code C = (n, k, d_min) has rate R = Π_{j=1}^{D} R_(j), input dimension (i.e., the number of information bits) k = Π_{j=1}^{D} k_(j), and output dimension (block length) n = Π_{j=1}^{D} n_(j). Furthermore and importantly, the following constraints on the codes C_(j) distinguish the D-dimensional BRC's from the D-dimensional single parity check (SPC) product codes (described, e.g., in [10]). In particular, for BRC's, let R_(j) = 1, for j ≥ 2, and R = R_(1), while for SPC product codes, R_(j) < 1, for j ≥ 1. Hence, in the case of linear BRC's, the generator matrices G_(j) = A_(j) ∈ Z_2^{k_(j)×k_(j)}, n_(j) = k_(j), and d_min(j) = 1, for j ≥ 2, and typically, d_min ≫ d_min(1). Note that, for SPC product codes, we always have that d_min = Π_{j=1}^{D} d_min(j) and R = Π_{j=1}^{D} R_(j) ≪ 1. We have the following definition.

Definition 1: The D-dimensional BRC, C = (n, k, d_min), is specified by a set of (L + 1) Π_{j=2}^{D} k_(j) cyclic matrices A_(j)i, with R_(j) = 1 (n_(j) = k_(j)), for j ≥ 2, and R_(j) < 1, for j = 1. The minimum Hamming distance d_min of C is bounded as

    d_min(1) ≤ d_min ≤ 1 + Σ_{j=1}^{D} min_i K_(j)i

where K_(j)i are the constraint weights of the matrices A_(j)i. The constraint weight of C is K = Σ_{j=1}^{D} max_i K_(j)i, the constraint length ν = Σ_{j=1}^{D} max_i ν_(j)i, and the span µ = Σ_{j=1}^{D} max_i µ_(j)i.

More generally, note that the code C_(1) can be any 1D binary block code. Thus, we have the following important application of Definition 1.

Definition 2: The set of Π_{j=2}^{D} k_(j) 1D codewords of equal block length n_(1) can be compounded using a D-dimensional BRC. This construction increases the overall minimum Hamming distance without increasing the transmission bandwidth.

For the same d_min, the SPC product codes are easier to design than the BRC's. However, the multidimensional BRC's avoid a potentially large rate loss of SPC product codes, and thus, the BRC's require significantly less transmission bandwidth. Note also that (D = 1)-dimensional BRC's correspond to a general class of quasi-cyclic codes and tail-biting CC's.
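To make Properties 1, 2 and 5 concrete, the following minimal Python/NumPy sketch builds a binary cyclic matrix from a generating vector and checks the cyclic-shift and inversion properties numerically. The dimension k = 8 and the generating vector a = (0, 1, 3) are illustrative choices only, not taken from the paper.

    import numpy as np

    k = 8                 # illustrative dimension (not from the paper)
    a = (0, 1, 3)         # illustrative generating vector, K = 3

    def cyclic_matrix(a, k):
        # Property 1: A = sum_i J^{a_i} over GF(2), where J is the k x k cyclic shift matrix
        A = np.zeros((k, k), dtype=int)
        for ai in a:
            A = (A + np.roll(np.eye(k, dtype=int), ai, axis=1)) % 2
        return A

    A = cyclic_matrix(a, k)

    # Property 2: v = uA equals the GF(2) sum of cyclic shifts of u by a_0, ..., a_{K-1}
    u = np.random.default_rng(0).integers(0, 2, size=k)
    v = (u @ A) % 2
    assert np.array_equal(v, sum(np.roll(u, ai) for ai in a) % 2)

    # Property 5: try to recover the GF(2) inverse from the real-valued inverse
    M = np.rint(np.linalg.inv(A) * np.linalg.det(A)).astype(int) % 2
    if np.array_equal((M @ A) % 2, np.eye(k, dtype=int)):
        print("A-dagger found via Property 5")
    else:
        print("inversion failed; A is likely singular over GF(2)")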

Fig. 1. Structure of a cyclic matrix.

III. DESIGN CRITERIA FOR BRC'S

Consider a cyclic matrix A in Fig. 1 having the constraint length ν = a_{K−1}, to be used for generating a BRC. Since such a matrix corresponds to a cyclic convolution, we can use a trellis having 2^ν states to describe the encoding and decoding. Note that the last a_{K−1} rows of A correspond to the input information bits that return the convolutional encoder either to its initial state (cf. tail-biting codes [1]), or to the all-zero state (cf. trellis termination). Hence, for D-dimensional BRC's, the complexity of the encoding and optimum (maximum-likelihood sequence) decoding using the trellis representation is given by the total constraint length ν = Σ_{j=1}^{D} ν_(j). In this case, the design criterion for BRC's is

    max_{ {A_(j)}_j, k }  d_min    s.t.  d_min ≥ d_0,  ν ≤ ν_max,  k ∈ Ω_k        (1)

where A_(j) ∈ Z_2^{k_(j)×k_(j)}, d_0 is the design minimum Hamming distance, k = (k_(1), ..., k_(D)) is the vector of input dimensions, and Ω_k is the set of feasible dimensions such that d_min ≥ d_0. We can approximate Ω_k using the componentwise inequalities k ≥ k_0 ≡ (k_(1) ≥ k_(1)0, ..., k_(D) ≥ k_(D)0). Note that for a suboptimum multistage decoding (i.e., the independent decoding of all dimensions), the optimization (1) is subject to max_j ν_(j) ≤ ν_max.

We can also consider the BRC's to be modulo-2 sums of the input information vectors. In this case, the complexity of the encoding and decoding is given by the constraint weight K = Σ_{j=1}^{D} K_(j), and the design criterion is

    max_{ {A_(j)}_j }  d_min    s.t.  d_min ≥ d_0,  K ≤ K_max,  k ∈ Ω_k.        (2)

For suboptimum multistage decoding, the optimization (2) is subject to max_j K_(j) ≤ K_max. Importantly, since typically K_(j) ≪ ν_(j), the optimization (2) and the cyclic-shift based encoding and decoding appear to be less complex than the optimization (1) and the trellis based encoding and decoding; hence, in this paper, we consider the design (2). Note also that the optimization problems (1) and (2) trade off d_min against the complexity of encoding and decoding. It is useful to consider a universal design such that we can achieve the desired d_min by lengthening (i.e., appending a_(j)K > a_(j)(K−1) to a_(j)) and shortening (i.e., removing a_(j)(K−1) from a_(j)) the generating vectors {a_(j)}_j.



In general, lengthening of the vectors a_(j) increases d_min, while shortening of the vectors a_(j) decreases d_min and also the complexity of the encoding and decoding. Furthermore, lengthening and shortening of the vectors a_(j) should be mutually independent between dimensions. We can use the following procedure to find the generating vectors a_(j).

1) Initialize a_(j) = (a_(j)0), K_(j) = 1, for j = 1, 2, ..., D, and d_min = 1.
2) For ∆ > 0, select the dimension ĵ and find a_(ĵ)K so that (a_(ĵ)K − a_(ĵ)K−1) ≤ ∆ and d_min is increased; then, a_(ĵ) := (a_(ĵ), a_(ĵ)K) and K_(ĵ) := K_(ĵ) + 1.
3) When no such a_(ĵ)K exists for a given ∆, either use another a_(ĵ)K−1 of a_(ĵ), or select another dimension ĵ, or increase the dimensions k, or increase ∆.
4) Repeat the search in step 2 until a code with d_min ≥ d_0 is found.

Note that for every candidate set {a_(j)}_j, we have to evaluate the codeword weights to determine d_min. The efficient search for d_min enumerates the input information vectors of small weight [2], [9]. There are C(k, o) codewords of input information weight o, where C(·, ·) denotes the binomial coefficient. Let the l-th input information vector of weight o, corresponding to the l-th ordered combination of o out of k elements, l = 1, ..., C(k, o), have the "1" bits in the positions b = (b_0, b_1, ..., b_{o−1}). Then,

    l = Σ_{i=1}^{b_0} C(k−i, o−1) + Σ_{i=1}^{b_1−b_0−1} C(k−1−b_0−i, o−2) + ... + Σ_{i=1}^{b_{o−2}−b_{o−3}−1} C(k−1−b_{o−3}−i, 1) + b_{o−1} − b_{o−2}

and we can use the following algorithm to find b.

Algorithm 1:
Input: k, o, l
Output: b
! find the l-th combination of o out of k
s := 0
for i = 0 : (o−2)
    b_i := s
    for j = 0 : (k−o+i−s−1)
        if l > C(k−s−1−j, o−i−1) then
            b_i := b_i + 1
            l := l − C(k−s−1−j, o−i−1)
        else
            break
        end if
    end for
    s := b_i + 1
end for
b_{o−1} := b_{o−2} + l
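For reference, the following Python sketch transcribes Algorithm 1 (positions are 0-indexed, l is 1-indexed). The check against itertools.combinations and the separate handling of o = 1 are our additions; they assume the lexicographic ordering implied by the formula for l above.

    from itertools import combinations
    from math import comb

    def lth_combination(k, o, l):
        # Algorithm 1: positions b = (b_0, ..., b_{o-1}) of the l-th combination of o out of k
        if o == 1:                        # trivial case, not covered by Algorithm 1
            return (l - 1,)
        b = [0] * o
        s = 0
        for i in range(o - 1):            # i = 0 : (o-2)
            b[i] = s
            for j in range(k - o + i - s):    # j = 0 : (k-o+i-s-1)
                c = comb(k - s - 1 - j, o - i - 1)
                if l > c:
                    b[i] += 1
                    l -= c
                else:
                    break
            s = b[i] + 1
        b[o - 1] = b[o - 2] + l
        return tuple(b)

    # sanity check against the lexicographic ordering of itertools.combinations
    k, o = 7, 3
    assert [lth_combination(k, o, l) for l in range(1, comb(k, o) + 1)] == list(combinations(range(k), o))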

A. Examples of BRC's

1) 1D BRC's: Assume the generator matrix

        [ A_11                  A_2^T ]
    G = [        ...             ...  ]
        [               A_1L    A_2^T ]

where A_2, A_11, ..., A_1L are cyclic matrices having the generating vectors a_2, a_11, ..., a_1L. Hence, the codewords c = (uA, uB), where u ∈ Z_2^{1×Lk_1}, A ∈ Z_2^{Lk_1×Lk_1}, and B ∈ Z_2^{Lk_1×k_1}, for the rate R = L/(L + 1) codes, and u ∈ Z_2^{1×k_1}, A ∈ Z_2^{k_1×k_1}, and B ∈ Z_2^{k_1×Lk_1}, for the rate R = 1/(L + 1) codes. In general, a search for the low rate BRC's is easier than for the high rate BRC's; thus, the rate R = L/(L + 1) codes are rarely reported in the literature for L > 2 [4]. Assume the following generating sequences for a BRC of rate R = 4/5, i.e., L = 4,

a_11 = (0,1,2,4,5,7,9,12,15,21,24,25,29,32,38,41,46,49,50,51,55,57,62,63)
a_12 = (0,1,5,8,10,14,16,18,20,25,27,31,32,38,42,43,50,51,53,56,58,61,64,67)
a_13 = (0,2,3,11,13,18,22,23,28,31,33,34,35,39,44,47,48,52,54,59,60,64,68,69)
a_14 = (0,3,5,12,15,19,24,25,29,33,34,36,37,42,45,49,51,54,55,61,63,66,71,72)
a_2  = (0,2,5,7,8,9,10,12,14,15,18,19,22,23,27,29,30,31,34,35,36,37,39,41).

Using these sequences shortened to K_1i = K_1 components, i = 1, 2, 3, 4, and K_2 components, Table I gives d_min for the dimensions k_1 ≥ k_0, and the maximum achievable d_min for k_1 ≫ k_0. Note that non-systematic codes typically have much larger d_min than systematic codes, and we can obtain other BRC's for L = 1, 2 and 3 using an arbitrary subset of the sequences a_11, a_12, a_13, and a_14. The generator matrix of the rate R = 1/(L + 1) BRC's is G = [A_2 | A_11 ... A_1L]. Assume A_2 = I and, for L = 4, the generating sequences

a_11 = (0,1,2,4,5,7,9,12,15,21,24,25,29,32,38,41,46,49,50,51,55,57,62,63)
a_12 = (0,1,3,5,7,10,11,14,16,19,22,27,31,33,34,37,39,40,42,44,46,48,52,53)
a_13 = (0,1,2,5,6,9,11,12,13,14,16,18,19,22,23,26,27,28,30,32,33,34,35,37)
a_14 = (0,1,3,5,7,11,13,14,15,16,17,19,20,21,22,25,26,29,31,33,34,35,37,38).
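A minimal sketch of this rate R = 1/(L + 1) construction follows, assuming A_2 = I, L = 2, a short block length k_1 = 7, and toy generating vectors chosen only for illustration (the sequences above target much larger k_1). The brute-force d_min routine simply enumerates all nonzero inputs and is meant for small toy parameters only.

    import numpy as np
    from itertools import combinations

    def cyclic_matrix(a, k):
        # k x k binary cyclic matrix generated by the shift positions in a (Property 1)
        A = np.zeros((k, k), dtype=int)
        for ai in a:
            A = (A + np.roll(np.eye(k, dtype=int), ai, axis=1)) % 2
        return A

    def dmin_bruteforce(G):
        # exact minimum Hamming distance of the linear code with generator G (small k only)
        k = G.shape[0]
        best = G.shape[1]
        for o in range(1, k + 1):
            for pos in combinations(range(k), o):
                u = np.zeros(k, dtype=int)
                u[list(pos)] = 1
                best = min(best, int(((u @ G) % 2).sum()))
        return best

    k1, L = 7, 2
    seqs = [(0, 1, 3), (0, 2, 3)]                     # toy a_11, a_12 (not from the paper)
    G = np.hstack([np.eye(k1, dtype=int)] + [cyclic_matrix(a, k1) for a in seqs])
    print(G.shape, dmin_bruteforce(G))                # (7, 21) and the resulting d_min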

For L < 4, we can again use an arbitrary subset of a_11, a_12, a_13, and a_14.

2) 2D BRC's: For R = L/(1 + L), the information matrix u ∈ Z_2^{k_2×Lk_1}, and, for R = 1/(1 + L), u ∈ Z_2^{k_2×k_1}, and the codewords c = (uA_0, A_2^T uA_1). Note that A_2^T u and uA_1 correspond to the vertical and horizontal parity bits of an SPC product code, respectively. For R = 1/2 (L = 1), we propose a class of 2D BRC's having the codewords c = (uA_0, A_2^T u ⊕ uA_1); thus, the vertical encoding for BRC's does not increase the transmission bandwidth. Assume the generating sequences a_0 = (0, 2, 3, 6, 8), a_1 = (0, 1, 2, 4, 5, 7, 9, 12, 13, 15, 17, 20, 22, 23, 25), and a_2 = (1, 2, 4, 6, 7), shortened to K_0, K_1, and K_2 components, respectively. Table II shows d_min and the minimum input dimensions k_0 for these codes. Finally, parallel encoder structures for the 2D BRC's are shown in Fig. 2 and Fig. 3.

3) 3D BRC's: Given a 2D BRC having the codewords c = (u, A_2^T uA_1), or c = (u, A_2^T u ⊕ uA_1), we search for a cyclic matrix A_3 to perform encoding across the 2D codewords. Examples of 3D BRC's are given in Table III assuming the generating sequences a_1 = (0, 1, 2, 4, 5, 7), a_2 = (1, 2, 4, 6, 7), and a_3 = (0, 1, 3, 5, 6). Consider the BRC's for concatenation of 1D codewords.



TABLE I E XAMPLES OF BRC’ S OF RATE R = 4/5 dmin(max) k 1 ≥ k0 1 K2 = 1 2(2) ≥1 2 X 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24

K1 = 4 8 12 16 20 24 5(5) 9(9) 13(13) 17(17) 21(21) 25(25) ≥ 18 ≥ 32 ≥ 46 ≥ 56 ≥ 75 ≥ 91 8(8) 10(12) 16(16) X X X ≥ 17 ≥ 25 ≥ 45 6(6) 9(9) 14(16) 19(19) 26(26) 34(34) X ≥ 11 ≥ 15 ≥ 32 ≥ 43 ≥ 63 ≥ 80 8(8) 14(14) 16(18) X 28(30) X X ≥ 11 ≥ 25 ≥ 32 ≥ 65 10(10) X 16(16) 24(24) 32(32) X 42(42) ≥ 15 ≥ 31 ≥ 53 ≥ 76 ≥ 95 12(12) 18(18) 18(18) 26(26) 32(34) 36(36) 42(42) ≥ 20 ≥ 31 ≥ 31 ≥ 46 ≥ 65 ≥ 82 ≥ 87 14(14) 20(20) 20(21) 27(27) 33(33) 36(40) 44(46) ≥ 19 ≥ 32 ≥ 34 ≥ 53 ≥ 70 ≥ 71 ≥ 90 16(16) 20(20) 22(24) 30(32) 34(38) 40(42) X ≥ 20 ≥ 37 ≥ 37 ≥ 53 ≥ 65 ≥ 75 18(18) 24(24) 22(22) 32(33) 37(37) 42(43) 48(51) ≥ 34 ≥ 41 ≥ 43 ≥ 54 ≥ 74 ≥ 81 ≥ 91 20(20) 26(26) 24(28) 34(34) 38(38) X X ≥ 25 ≥ 41 ≥ 42 ≥ 65 ≥ 79 22(22) 28(30) 26(35) 36(40) 40(49) 44(44) 50(54) ≥ 40 ≥ 41 ≥ 39 ≥ 60 ≥ 72 ≥ 100 ≥ 97 24(24) 30(32) 28(36) 36(38) 42(48) X 52(58) ≥ 31 ≥ 45 ≥ 39 ≥ 74 ≥ 75 ≥ 93 26(26) 32(37) 30(40) 38(45) 44(52) 56(57) 54(60) ≥ 33 ≥ 47 ≥ 52 ≥ 65 ≥ 73 ≥ 100 ≥ 105 28(28) 34(38) 32(40) 38(44) 44(48) X 56(64) ≥ 44 ≥ 57 ≥ 56 ≥ 68 ≥ 92 ≥ 101 30(30) 36(40) 34(44) 40(54) 48(58) 58(66) 58(72) ≥ 50 ≥ 57 ≥ 56 ≥ 67 ≥ 88 ≥ 100 ≥ 104 32(32) 38(44) 36(52) 44(56) 50(62) 60(68) 66(68) ≥ 41 ≥ 55 ≥ 55 ≥ 69 ≥ 83 ≥ 105 ≥ 128 34(34) 40(47) 40(50) 50(57) 54(66) 62(69) 66(68) ≥ 44 ≥ 60 ≥ 67 ≥ 80 ≥ 92 ≥ 101 ≥ 115 36(36) 42(50) 42(52) 50(60) 54(66) 62(68) 66(76) ≥ 56 ≥ 56 ≥ 73 ≥ 79 ≥ 87 ≥ 109 ≥ 116 38(38) 44(48) 44(55) 52(64) 58(62) 64(73) 68(81) ≥ 49 ≥ 72 ≥ 67 ≥ 83 ≥ 99 ≥ 107 ≥ 119 40(40) 46(50) 46(56) 54(62) 60(70) 66(70) 68(80) ≥ 53 ≥ 68 ≥ 75 ≥ 90 ≥ 96 ≥ 117 ≥ 128 42(42) 48(56) 48(58) 58(64) 64(68) 66(74) 71(81) ≥ 56 ≥ 67 ≥ 72 ≥ 93 ≥ 102 ≥ 112 ≥ 118 44(44) 50(60) 50(60) 60(64) 66(74) 70(74) 68(80) ≥ 67 ≥ 74 ≥ 72 ≥ 92 ≥ 113 ≥ 118 ≥ 115 46(46) 52(57) 52(62) 64(71) 66(77) 70(72) 70(87) ≥ 56 ≥ 75 ≥ 79 ≥ 102 ≥ 105 ≥ 135 ≥ 125 48(48) 54(60) 54(68) 64(76) 68(74) 72(78) 80(90) ≥ 71 ≥ 80 ≥ 76 ≥ 100 ≥ 125 ≥ 130 ≥ 134

Fig. 2. A parallel encoder for the 2D BRC's to generate the A_2^T uA_1 parity bits.

TABLE II. EXAMPLES OF 2D BRC'S

K1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

(uA0 , AT (uA0 , AT 2 u ⊕ uA1 ) 2 uA1 ) K2 = 1, K0 = 5 K2 = 5, K0 = 1 K2 = 1, K0 = 5 K2 = 5, K0 = 1 dmin k ≥ k0 dmin k ≥ k0 dmin k ≥ k0 dmin k ≥ k0 7 (2, 10, 10) 7 (15, 1, 1) 6 (1, 10, 10) 6 (13, 1, 1) 8 (2, 9, 9) 8 (9, 2, 2) 7 (2, 13, 13) 9 (12, 13, 13) 9 (2, 9, 9) 9 (9, 3, 3) 8 (2, 12, 12) 10 (9, 6, 6) 10 (2, 9, 9) 10 (8, 5, 5) 9 (3, 15, 15) 12 (9, 8, 8) 11 (2, 13, 13) 11 (9, 6, 6) 10 (2, 17, 17) 14 (9, 10, 10) 12 (2, 13, 13) 12 (8, 8, 8) 11 (4, 22, 22) 12 (9, 8, 8) 13 (2, 15, 15) 13 (8, 10, 10) 12 (4, 25, 25) 18 (9, 14, 14) 14 (2, 16, 16) 14 (8, 13, 13) 13 (5, 19, 19) 14 (9, 14, 14) 15 (2, 20, 20) 15 (8, 14, 14) 14 (5, 24, 24) 18 (9, 14, 14) 16 (2, 22, 22) 16 (8, 16, 16) 15 (5, 25, 25) 22 (9, 16, 16) 17 (3, 25, 25) 17 (8, 18, 18) 16 (6, 25, 25) 22 (9, 16, 16) 18 (3, 25, 25) 18 (8, 22, 22) 17 (6, 28, 28) 22 (9, 16, 16) 19 (3, 28, 28) 19 (8, 24, 24) 18 (6, 29, 29) 20 (9, 17, 17) 20 (3, 30, 30) 20 (8, 24, 24) 19 (6, 32, 32) 21 (9, 17, 17) 21 (3, 32, 32) 21 (8, 27, 27) 20 (6, 33, 33) 22 (9, 17, 17)

Fig. 3. A parallel encoder for the 2D BRC's to generate the A_2^T u ⊕ uA_1 parity bits.
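The 2D encoding of Figs. 2 and 3 amounts to a few matrix products over GF(2). A minimal sketch of the proposed R = 1/2 construction c = (uA_0, A_2^T u ⊕ uA_1) follows; the dimensions k_1 = 9 and k_2 = 7 are arbitrary illustrative choices, and the generating vectors are the sequences from the text shortened to K_0 = K_1 = K_2 = 3.

    import numpy as np

    def cyclic_matrix(a, k):
        # k x k binary cyclic matrix generated by the shift positions in a
        A = np.zeros((k, k), dtype=int)
        for ai in a:
            A = (A + np.roll(np.eye(k, dtype=int), ai, axis=1)) % 2
        return A

    k1, k2 = 9, 7                              # illustrative block dimensions
    A0 = cyclic_matrix((0, 2, 3), k1)          # a_0 shortened to K_0 = 3
    A1 = cyclic_matrix((0, 1, 2), k1)          # a_1 shortened to K_1 = 3 (horizontal parity)
    A2 = cyclic_matrix((1, 2, 4), k2)          # a_2 shortened to K_2 = 3 (vertical parity)

    u = np.random.default_rng(1).integers(0, 2, size=(k2, k1))   # k_2 x k_1 information matrix
    c_first  = (u @ A0) % 2                    # u A_0
    c_parity = (A2.T @ u + u @ A1) % 2         # A_2^T u  XOR  u A_1 (same size as u)
    c = np.hstack([c_first, c_parity])         # overall rate 1/2: k = k_1 k_2, n = 2 k_1 k_2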

TABLE III E XAMPLES OF 3D BRC’ S FROM 2D CODEWORDS , c = (u, AT 2 uA1 ) a2 = (1, 2, 4, 6, 7) a3 = (0, 1, 3, 5, 6) a1 = (0) a1 = (0, 1, 2) a1 = (0, 1, 2, 4, 5, 7) K1,2,3 dmin k ≥ k0 K1,2,3 dmin k ≥ k0 K1,2,3 dmin k ≥ k0 111 2 (1, 1, 1) 3 1 1 4 (4, 1, 5) 6 1 1 7 (11, 1, 1) 121 3 (1, 3, 1) 3 2 1 6 (4, 4, 1) 6 2 1 13 (11, 5, 7) 131 4 (1, 4, 1) 3 3 1 8 (4, 5, 10) 6 3 1 19 (11, 7, 6) 141 5 (1, 8, 1) 3 4 1 10 (4, 6, 7) 6 4 1 25 (11, 8, 7) 151 6 (1, 9, 1) 3 5 1 12 (4, 6, 6) 6 5 1 31 (11, 12, 5) 112 4 (1, 1, 2) 3 1 2 8 (3, 1, 2) 6 1 2 14 (11, 7, 2) 122 6 (1, 3, 2) 3 2 2 12 (4, 3, 2) 6 2 2 26 (11, 8, 2) 132 8 (1, 4, 2) 3 3 2 16 (4, 4, 2) 6 3 2 38 (11, 8, 7) 1 4 2 10 (1, 8, 2) 3 4 2 20 (4, 7, 2) 6 4 2 50 (11, 10, 6) 1 5 2 12 (1, 7, 2) 3 5 2 24 (4, 9, 2) 6 5 2 62 (11, 12, 4) 113 6 (1, 1, 7) 3 1 3 12 (4, 1, 7) 6 1 3 21 (11, 7, 7) 123 9 (1, 3, 7) 3 2 3 18 (4, 5, 7) 6 2 3 39 (11, 9, 7) 1 3 3 12 (1, 4, 7) 3 3 3 24 (4, 7, 7) 6 3 3 57 (11, 7, 10) 1 4 3 15 (1, 8, 7) 3 4 3 30 (4, 10, 8) 6 4 3 75 (11, 8, 7) 1 5 3 18 (1, 9, 7) 3 5 3 36 (4, 9, 9) 6 5 3 93 (12, 7, 6) 114 8 (1, 1, 12) 3 1 4 16 (4, 6, 9) 6 1 4 28 (11, 7, 8) 1 2 4 12 (1, 3, 12) 3 2 4 24 (5, 5, 10) 6 2 4 52 (11, 8, 8) 1 3 4 16 (1, 4, 12) 3 3 4 32 (4, 8, 8) 6 3 4 76 (11, 9, 8) 1 4 4 20 (1, 8, 11) 3 4 4 40 (5, 9, 9) 6 4 4 100 (11, 9, 11) 1 5 4 24 (1, 9, 11) 3 5 4 48 (6, 7, 3) 6 5 4 124 (11, 10, 8)

In particular, Table IV shows d_min and k_0 of the 3D BRC's assuming the generating sequences a_2 = (1, 2, 4, 6, 7) and a_3 = (0, 1, 3, 5, 6), shortened to K_2 and K_3 components, respectively, for the Hamming (15, 11, 3) and (7, 4, 3) codes, the Golay perfect code (23, 12, 7), and the SPC code (12, 11, 2). A parallel encoder structure to generate codewords of the 3D BRC's from 2D codewords is shown in Fig. 4.



TABLE IV E XAMPLES OF 3D BRC’ S USING CONCATENATED CODES a2 = (1, 2, 4, 6, 7) a3 = (0, 1, 3, 5, 6) C = (15, 11, 3) C = (7, 4, 3) C = (23, 12, 7) C= (12, 11, 2) K2,3 dmin k0 K2,3 dmin k0 K2,3 dmin k0 K2,3 dmin k0 11 3 (1, 1) 1 1 3 (1, 1) 1 1 7 (1, 1) 1 1 2 (1, 1) 12 6 (1, 4) 1 2 6 (1, 5) 1 2 14 (1, 4) 1 2 4 (1, 4) 13 9 (1, 7) 1 3 9 (1, 8) 1 3 21 (1, 7) 1 3 6 (1, 7) 1 4 12 (1, 8) 1 4 12 (1, 11) 1 4 28 (1, 8) 1 4 8 (1, 9) 21 4 (4, 3) 2 1 4 (4, 2) 2 1 12 (5, 5) 2 1 2 (2, 1) 22 8 (4, 3) 2 2 8 (4, 4) 2 2 24 (5, 5) 2 2 4 (2, 3) 2 3 12 (4, 4) 2 3 12 (4, 7) 2 3 36 (5, 5) 2 3 6 (2, 4) 2 4 16 (4, 6) 2 4 16 (4, 8) 2 4 48 (5, 8) 2 4 8 (2, 8) 31 5 (4, 3) 3 1 5 (4, 2) 3 1 13 (5, 1) 3 1 2 (4, 1) 3 2 10 (4, 3) 3 2 10 (4, 4) 3 2 26 (5, 3) 3 2 4 (4, 3) 3 3 15 (4, 4) 3 3 15 (4, 7) 3 3 39 (5, 4) 3 3 6 (4, 4) 3 4 20 (4, 6) 3 4 20 (4, 8) 3 4 52 (5, 6) 3 4 8 (4, 6) 41 6 (6, 2) 4 1 6 (6, 1) 4 1 22 (6, 3) 4 1 2 (6, 1) 4 2 12 (6, 3) 4 2 12 (6, 3) 4 2 44 (6, 3) 4 2 4 (6, 3) 4 3 18 (6, 4) 4 3 18 (6, 4) 4 3 66 (6, 4) 4 3 6 (6, 4) 4 4 24 (6, 6) 4 4 24 (6, 7) 4 4 88 (6, 8) 4 4 8 (6, 6)

Fig. 4. A parallel encoder to generate 3D parity bits from the 2D codewords.

IV. BER UNION BOUND

We evaluate the union bound (UB) of the BER. We also consider the problem of how to distribute the transmission energy over the codeword so that the average transmission energy per bit, E_b, is constant. In this paper, we assume ideal interleaving, and thus, the channel coefficients are independent for each transmitted symbol (i.e., a fast fading assumption).

A. System Model

Consider a system with one transmit and one receive antenna. The codewords c of rate R = k/n and block length n bits are interleaved and mapped to binary phase shift keying (BPSK) symbols x ∈ {−1, +1}^n. The symbols x are transmitted over a Rayleigh fading channel and coherently detected at the receiver. Hence, the received signal after coherent demodulation is

    y_i = g_i β_i x_i + w_i

where i = 0, 1, ..., n−1, β_i² is the transmission energy for the i-th BPSK symbol, and the channel coefficients g_i are assumed to be Rayleigh distributed and mutually uncorrelated; thus, E[g_i g_j] = 1, if i = j, and E[g_i g_j] = 0, if i ≠ j. Define the average signal-to-noise ratio (SNR) per bit at the receiver to be γ_b = E_b/N_0, where N_0 is the double-sided noise power spectral density. Thus, N_0 = 2σ_w², where the noise variance σ_w² = E[w_i²]. Assume the energies β_i² are normalized so that the average energy per bit over a transmitted block x is unity, i.e., E_b = (1/n) Σ_{i=0}^{n−1} β_i² = 1. Then, σ_w² = 1/(2Rγ_b). Assume a systematic code having the codewords c = (c_u, c_p), where the k_u = k information bits are c_u = u, and the corresponding k_p = (n − k_u) parity bits are c_p = uA, where A is the parity check matrix, and thus, the code rate R = k_u/n. For simplicity, denote by β_u² the transmission energy for the information bits, and by β_p² the transmission energy for the parity bits. Hence, Rβ_u² + (1 − R)β_p² = E_b, with 0 < β_u² < E_b/R and 0 < β_p² < E_b/(1 − R). Note that β_u² = β_p² = E_b corresponds to the case of a uniform energy distribution over a transmitted codeword. Note also that β_u² < E_b if β_p² > E_b, and vice versa.

B. Pairwise Error Probability

Conditioned on perfect knowledge of the channel coefficients {g_i}_i at the receiver, the probability of the pairwise error event (PEP) that the codeword c is transmitted and c′ is decoded is [11], [12]

    PEP(c → c′ | {g_i}_i) = Q( √( (2α²/N_0) ( β_u² Σ_{i=0}^{k_u−1} g_i² c_i′ + β_p² Σ_{i=k_u}^{n−1} g_i² c_i′ ) ) )        (3)

where the Q-function Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt [11]. We can use the Prony approximation of the Q-function to efficiently evaluate the average PEP, i.e., let

    Q(√x) ≈ Σ_{i=1}^{2} Ã_i exp(−ã_i x)        (4)

where Ã_1 = 0.295848095, Ã_2 = 0.131073880, ã_1 = 1.042977585, and ã_2 = 0.516883300 [13]. Then, assuming that the all-zero codeword is transmitted, the PEP is

    PEP(0 → c′ | {g_i}_i) = Σ_{j=1}^{2} Ã_j Π_{i=0}^{n−1} e^{−ã_j f_i g_i²}

where f_i = 2α² β_{u(p)}² c_i′ / N_0, and the g_i² are exponentially distributed with E[g_i²] = 1. Since ∫_0^∞ e^{−(1+ã_j f_i)t} dt = (1 + ã_j f_i)^{−1}, and the channel coefficients g_i are mutually independent, we obtain the average PEP

    PEP(0 → c′) = Σ_{j=1}^{2} Ã_j Π_{i=0, c_i′=1}^{n−1} 1/(1 + ã_j f_i).

Finally, the UB of the average BER is [11]

    BER ≤ BER^ub = (1/(nR)) Σ_{c′ ∈ L(o_max)} w_H(c_u′) PEP(0 → c′)        (5)

where w_H(c_u′) denotes the Hamming weight of the input information bits in the codeword c′, and the list of the codewords L(o_max) = {c′ : w_H(c_u′) = o, o = 1, ..., o_max}.
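A minimal numerical sketch of the average PEP and the UB (5) under the fast Rayleigh fading model above, assuming a systematic generator G = [I | A], α² = 1, and the unit average symbol energy normalization. The toy random code, the SNR value, and the β_u² sweep at the end are illustrative choices, not results from the paper.

    import numpy as np
    from itertools import combinations

    # Prony approximation constants for Q(sqrt(x)) in (4), from [13]
    A_TILDE = (0.295848095, 0.131073880)
    a_tilde = (1.042977585, 0.516883300)

    def avg_pep(cw, ku, beta_u2, beta_p2, gamma_b, R, alpha2=1.0):
        # average PEP(0 -> c') over fast Rayleigh fading, with f_i = 2*alpha^2*beta^2*c'_i/N0
        N0 = 1.0 / (R * gamma_b)                      # unit average symbol energy => N0 = 1/(R*gamma_b)
        beta2 = np.where(np.arange(len(cw)) < ku, beta_u2, beta_p2)
        f = 2.0 * alpha2 * beta2 * cw / N0
        return sum(At * np.prod(1.0 / (1.0 + at * f)) for At, at in zip(A_TILDE, a_tilde))

    def ber_union_bound(G, beta_u2, gamma_b, omax=3):
        # BER UB (5) for a systematic code with generator G = [I | A], using the list L(omax)
        k, n = G.shape
        R = k / n
        beta_p2 = (1.0 - R * beta_u2) / (1.0 - R)     # energy constraint R*bu2 + (1-R)*bp2 = 1
        ub = 0.0
        for o in range(1, omax + 1):
            for pos in combinations(range(k), o):
                u = np.zeros(k, dtype=int)
                u[list(pos)] = 1
                cw = (u @ G) % 2                      # codeword c' = (u, uA)
                ub += o * avg_pep(cw, k, beta_u2, beta_p2, gamma_b, R)
        return ub / (n * R)

    # example: optimize the energy split for a toy (12, 6) systematic code at Eb/N0 = 5 dB
    G = np.hstack([np.eye(6, dtype=int), np.random.default_rng(2).integers(0, 2, size=(6, 6))])
    gamma_b = 10 ** (5 / 10)
    best = min((ber_union_bound(G, bu2, gamma_b), bu2) for bu2 in np.linspace(0.1, 1.9, 19))
    print("minimum UB %.3e at beta_u^2 = %.2f" % best)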



Fig. 5. The BER UB versus the energy, β_u², of the information bits over an AWGN channel, for SNR E_b/N_0 = 2 dB.

Fig. 6. The BER UB versus the energy, β_u², of the information bits over a Rayleigh fading channel, for SNR E_b/N_0 = 5 dB.

For a large code dimension, the list L(o_max) can be approximated by a Monte Carlo method and by using Algorithm 1. Note that for an AWGN channel, g_i = 1. We illustrate the transmission energy distribution (β_u², β_p²) using the 1D systematic codes. The UB of the BER versus the transmission energy for the information bits, β_u², for three BRC's, (36, 18, 6), (36, 24, 5), and (36, 12, 7), the extended Hamming code, (16, 11, 4), and the extended Golay code, (24, 12, 8), over an AWGN channel, for SNR E_b/N_0 = 2 dB, is shown in Fig. 5. Recall also that β_u² = 1 corresponds to the case of a uniform energy distribution over a codeword. We observe from Fig. 5 that, for the higher code rates, R > 1/2, the BER exhibits a minimum for values of β_u² < 1. On the other hand, for the code rates R ≤ 1/2, the BER curves have local maxima. Thus, in general, less energy should be allocated to the information bits, and more energy to the parity check bits. Fig. 6 shows the UB of the BER for the codes from Fig. 5 over a Rayleigh fading channel (assuming coherent detection) for SNR E_b/N_0 = 5 dB. We can observe from Fig. 6 that the optimum energy β_u² < 1 for the code rates R > 1/3, while the optimum energy β_u² > 1 for the code rates R ≤ 1/3.

V. CONCLUSION

We proposed a novel class of multidimensional binary block codes. We discussed the properties of binary cyclic matrices to design the multidimensional BRC's. The construction of BRC's was shown to be well suited for concatenation of the 1D binary codewords since the overall minimum Hamming distance was increased without increasing the transmission bandwidth. We presented two optimization problems to design the multidimensional BRC's using either the constraint length or the constraint weight. We showed that the constraint on the weights makes the search for good codes significantly easier. We studied the 2D and 3D BRC's; many examples of these codes and parallel encoder structures were given.

Finally, we obtained the UB of the BER for an AWGN channel and for the fast Rayleigh fading channel. We found that the transmission energy over a codeword can be optimized to decrease the BER. Future work will consider optimization of the transmission energies without channel state information, and decoding methods to obtain the BER using simulation.

REFERENCES

[1] H. H. Ma and J. K. Wolf, "On tail biting convolutional codes," IEEE Trans. Commun., vol. COM-34, no. 2, pp. 104–111, Feb. 1986.
[2] M. Karlin, "New binary coding results by circulants," IEEE Trans. Inform. Theory, vol. IT-15, no. 1, pp. 81–92, Jan. 1969.
[3] I. E. Bocharova, M. Handlery, R. Johannesson, and B. D. Kudryashov, "Tailbiting codes obtained via convolutional codes with large active distance-slopes," IEEE Trans. Inform. Theory, vol. 48, no. 9, pp. 2577–2587, Sept. 2002.
[4] P. Ståhl, J. B. Anderson, and R. Johannesson, "Optimal and near-optimal encoders for short and moderate-length tail-biting trellises," IEEE Trans. Inform. Theory, vol. 45, no. 7, pp. 2562–2571, Nov. 1999.
[5] C. Weiß, C. Bettstetter, and S. Riedel, "Code construction and decoding of parallel concatenated tail-biting codes," IEEE Trans. Inform. Theory, vol. 47, no. 1, pp. 366–386, Jan. 2001.
[6] S. Rajpal, D. J. Rhee, and S. Lin, "Multidimensional trellis coded phase modulation using a multilevel concatenation approach part I: Code design," IEEE Trans. Commun., vol. 45, no. 1, pp. 64–72, Jan. 1997.
[7] J. Li, K. R. Narayanan, and C. N. Georghiades, "Product accumulate codes: A class of codes with near-capacity performance and low decoding complexity," IEEE Trans. Inform. Theory, vol. 50, no. 1, pp. 31–46, Jan. 2004.
[8] M. Yang, W. E. Ryan, and Y. Li, "Design of efficiently encodable moderate-length high-rate irregular LDPC codes," IEEE Trans. Commun., vol. 52, no. 4, pp. 564–571, Apr. 2004.
[9] P. Loskot and N. C. Beaulieu, "A family of low-complexity binary linear codes for Bluetooth and BLAST signaling applications," IEEE Commun. Lett., vol. 9, no. 12, pp. 1061–1063, Dec. 2005.
[10] D. M. Rankin and T. A. Gulliver, "Single parity check product codes," IEEE Trans. Commun., vol. 49, no. 8, pp. 1354–1362, Aug. 2001.
[11] J. G. Proakis, Digital Communications, 3rd ed. McGraw-Hill, 1995.
[12] G. Taricco and E. Biglieri, "Exact pairwise error probability of space-time codes," IEEE Trans. Inform. Theory, vol. 48, no. 2, pp. 510–513, Feb. 2002.
[13] P. Loskot and N. C. Beaulieu, "Average error rate evaluation of digital modulations in slow fading by Prony approximation," in Proc. ICC, Paris, France, June 20–24, 2004, vol. 6, pp. 3353–3357.
