International Symposium on Information Theory and its Applications, ISITA 2008, Auckland, New Zealand, 7-10 December 2008

Achievable Rates for Multiple Access Relay Channel with Generalized Feedback

Chin Keong Ho, Kiran T. Gowda and Sumei Sun
Institute for Infocomm Research, A*STAR
1 Fusionopolis Way, #21-01 Connexis, Singapore 138632
Email: {hock,kthimme,sunsm}@i2r.a-star.edu.sg

Abstract

We introduce the multiple access relay channel with generalized feedback (MARC-GF), which is more general than the known MARC (without feedback) and MAC-GF (without relay). In the MARC-GF, two sources, assisted by a common relay, send independent messages to a destination. The sources and relay each receive some feedback that may facilitate this communication. We propose and analyze a decode-and-forward scheme for the MARC-GF to obtain an achievable rate region. This scheme facilitates cooperation between the two sources, in addition to cooperation by the relay, via block Markov superposition encoding.

1. Multiple Access Relay Channel with Generalized Feedback (MARC-GF)

We introduce the multiple access relay channel with generalized feedback (MARC-GF), as shown in Fig. 1(a). Consider a four-node network, where two sources S1, S2 send independent messages W1, W2, respectively, to a destination D, while a relay R assists in the communication. The MARC-GF is time invariant and memoryless and is described by the conditional channel distribution p_{Yd,Yr,Y1,Y2|Xr,X1,X2}(yd, yr, y1, y2 | xr, x1, x2), where Xr, X1, X2 denote the transmitted signals of R, S1, S2, respectively, and Yd, Yr, Y1, Y2 denote the feedback received by D, R, S1, S2, respectively. For brevity, we drop the subscripts of the distribution p. In this paper, we consider a full-duplex channel, in which all nodes can transmit and listen concurrently. The scheme and analysis in this paper can be straightforwardly extended to a half-duplex channel.

Fig. 1(b) shows the MARC, which was introduced by Kramer and van Wijngaarden in [1]; see [2] for more results and discussion. The MARC consists of the same four nodes as the MARC-GF. However, in the MARC S1, S2 can only transmit, while R can transmit and listen concurrently. The MARC is thus described by the

[Figure 1: (a) MARC-GF, (b) MARC, (c) MAC-GF. Sources S1, S2 send independent messages W1, W2 to the destination D. A relay R assists in (a) MARC-GF and (b) MARC. The sources receive some feedback in (a) MARC-GF and (c) MAC-GF.]

distribution p(yd, yr | xr, x1, x2). The MARC-GF clearly generalizes the MARC, since in the MARC-GF the sources additionally receive some feedback from the channel. Fig. 1(c) shows the multiple access channel with generalized feedback (MAC-GF), which was introduced by Willems et al. in [3]; see also [4]. The MAC-GF is the same as the MARC-GF but without a relay, and is described by the distribution p(yd, y1, y2 | x1, x2). Clearly, the MARC-GF generalizes the MAC-GF. In practice, the MARC models, for instance, an uplink transmission where several mobile stations communicate with a base station in a cellular network and a dedicated relay is available to assist. The MARC-GF models the cellular network where the mobile stations are additionally able to actively listen to the ongoing transmissions. This setup allows the sources and the relay to cooperatively communicate to the destination. Even if the relay is absent, cooperation between the sources can be highly beneficial, as demonstrated by

Sendonaris et al. for the MAC-GF [5]. Hence, we expect that further gain can be obtained for the MARC-GF in the presence of the relay. In this paper, we exploit the enhanced capability of the MARC-GF, compared to the MARC or MAC-GF, to facilitate cooperation by the sources and the relay. Our objective is to characterize the achievable rate region from an information-theoretic perspective. To this end, we obtain an achievable rate region by introducing a new decode-and-forward (DF) scheme based on block Markov superposition encoding. In this scheme, the relay concurrently serves two purposes. First, the relay facilitates the transfer of information to the destination, in the same spirit as [2]. Second, the relay facilitates better user cooperation between the sources, in the same spirit as [5]. We first obtain the results for the discrete MARC-GF and then state the results for the Gaussian MARC-GF. Numerical results are given for the Gaussian case, where it is shown that the achievable rate region approaches the region achieved with total cooperation as the channels among the sources and the relay improve.
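To make the channel law p(yd, yr, y1, y2 | xr, x1, x2) concrete, the following is a minimal sketch of one memoryless use of a toy binary MARC-GF. The modulo-2 topology and crossover probabilities are hypothetical, chosen only to illustrate a four-node network in which every node receives channel feedback; they are not the paper's model.

```python
import random

def marc_gf_step(xr, x1, x2, rng):
    """One memoryless use of a toy binary MARC-GF.

    Every node hears a modulo-2 combination of the other nodes' inputs
    through a binary symmetric channel; the topology and crossover
    probabilities here are hypothetical illustrations only.
    """
    def bsc(bit, crossover):
        # Flip the bit with the given crossover probability.
        return bit ^ (rng.random() < crossover)

    yd = bsc(x1 ^ x2 ^ xr, 0.1)  # destination D hears all three inputs
    yr = bsc(x1 ^ x2, 0.2)       # relay R hears the two sources
    y1 = bsc(x2 ^ xr, 0.3)       # feedback at S1: S2 and the relay
    y2 = bsc(x1 ^ xr, 0.3)       # feedback at S2: S1 and the relay
    return yd, yr, y1, y2

# N independent uses give the product law Pr(y|x) = prod_n p(y_n|x_n).
rng = random.Random(7)
outputs = [marc_gf_step(1, 0, 1, rng) for _ in range(5)]
```

Because the per-letter law is applied independently at each channel use, the N-use law factorizes exactly as in the memoryless definition of Section 2.1.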

[Figure 2: The MARC-GF block diagram, corresponding to Fig. 1(a). Encoders at S1, S2, R map the messages and past feedback to X1, X2, Xr; the decoder at D forms (Ŵ1, Ŵ2) from Yd. The generalized feedback Yr, Y1, Y2 is received causally with a one-symbol delay D.]

2. An Achievable Rate Region: Discrete Case

2.1. Preliminaries

Consider the communication situation shown in Fig. 2. Denote a discrete memoryless MARC-GF by (Xr × X1 × X2, p(yd, yr, y1, y2 | xr, x1, x2), Yr × Yd × Y1 × Y2), where Xi denotes the input alphabet of node i ∈ {r, 1, 2} and Yj denotes the output alphabet of node j ∈ {d, r, 1, 2}. All alphabets are finite. For every N transmissions, the time-invariant and memoryless channel is described by

Pr(yd, yr, y1, y2 | xr, x1, x2) = ∏_{n=1}^{N} p(ydn, yrn, y1n, y2n | xrn, x1n, x2n),

where yd = [yd1, ..., ydN], and similarly for yr, y1, y2. In general, we denote the nth element of a vector x as xn.

The source Si, i = 1, 2, produces a random integer Wi ∈ {1, ..., Mi} for every N channel uses, generated i.i.d. using a uniform distribution; W1 and W2 are generated independently. The encoders for Si, i = 1, 2, and the relay R are each described by N causal encoding functions:

x_{in} = f_{in}(Wi, Y_{i1}, Y_{i2}, ..., Y_{i,n-1}),
x_{rn} = f_{rn}(Y_{r1}, Y_{r2}, ..., Y_{r,n-1}),   n = 1, ..., N.

The decoder for D is described by the decoding function (ŵ1, ŵ2) = g_d(Y_{d1}, ..., Y_{dN}). An (M1, M2, N, Pe)-code for the discrete memoryless MARC-GF consists of three sets of encoding functions {f_{1n}, f_{2n}, f_{rn}} and a decoding function g_d with error probability Pe = Pr((Ŵ1, Ŵ2) ≠ (W1, W2)). A rate pair (R1, R2) is said to be achievable if for any ε > 0, there exists for all N sufficiently large an (M1, M2, N, Pe)-code such that 0 ≤ Ri ≤ log2(Mi)/N, i = 1, 2, and Pe ≤ ε. A rate region R consists of the closure of a set of achievable rate pairs.

2.2. Coding Scheme

We employ block-Markov superposition encoding by transmitting over B + 2 blocks, each of length N. The last two blocks represent overhead. We assume B is large, so that the fraction B/(B + 2) of blocks carrying fresh messages approaches one. We use the index i ∈ {1, 2} to distinguish the sources and the index b to denote the block. Denote the message sets Wi^c ≜ {1, ..., 2^(N Ri^c)}, Wi^d ≜ {1, ..., 2^(N Ri^d)}, Wi ≜ {1, ..., 2^(N Ri)}, with Ri = Ri^c + Ri^d, i = 1, 2, and W̃ ≜ {1, ..., 2^(N(R1^c + R2^c))}. The superscript c denotes messages (and rates) meant for cooperation by both sources. The superscript d denotes messages (and rates) to be decoded directly by the destination, without cooperation from the other source.

We give an overview of the coding scheme; see Fig. 3. For source Si, i = 1, 2, and block b, the fresh message Wib ∈ Wi to be sent is split into two submessages Wib^c ∈ Wi^c and Wib^d ∈ Wi^d, i.e., Wib = (Wib^c, Wib^d). In our scheme, Wib^c, transmitted at rate Ri^c, is meant to be decoded by both R and Sj, j ≠ i, while Wib^d,

transmitted at rate Ri^d, is meant to be decoded directly by D without any cooperation from the other source. To set up cooperation, all nodes use the feedback to decode the message pair W̃b ≜ (W1b^c, W2b^c), W̃b ∈ W̃.

[Figure 3 (reconstructed table):]

        Block 1               Block 2               Block 3                   Block 4           Block 5
S1:     x1(1, w11^c, w11^d)   x1(1, w12^c, w12^d)   x1(w̃1^1, w13^c, w13^d)   x1(w̃2^1, 1, 1)   x1(w̃3^1, 1, 1)
        v1(1, w11^c)          v1(1, w12^c)          v1(w̃1^1, w13^c)          v1(w̃2^1, 1)      v1(w̃3^1, 1)
        u(1)                  u(1)                  u(w̃1^1)                  u(w̃2^1)          u(w̃3^1)
S2:     x2(1, w21^c, w21^d)   x2(1, w22^c, w22^d)   x2(w̃1^2, w23^c, w23^d)   x2(w̃2^2, 1, 1)   x2(w̃3^2, 1, 1)
        v2(1, w21^c)          v2(1, w22^c)          v2(w̃1^2, w23^c)          v2(w̃2^2, 1)      v2(w̃3^2, 1)
        u(1)                  u(1)                  u(w̃1^2)                  u(w̃2^2)          u(w̃3^2)
R:      xr(1, 1)              xr(1, w̃1^r)          xr(w̃1^r, w̃2^r)          xr(w̃2^r, w̃3^r)  xr(w̃3^r, 1)
        u(1)                  u(1)                  u(w̃1^r)                  u(w̃2^r)          u(w̃3^r)

Figure 3: Block-Markov superposition coding scheme over B + 2 blocks; here B = 3. In block b, source Si sends a fresh message wib^d to be decoded by the destination and another fresh message wib^c to be decoded by the relay R and the other source. Moreover, S1, S2, R cooperatively send the (delayed) message pair w̃_{b-2} = (w_{1,b-2}^c, w_{2,b-2}^c), which is estimated as w̃_{b-2}^i by Si and as w̃_{b-2}^r by R, to the destination.
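The transmission schedule of Fig. 3 follows a simple mechanical rule. The sketch below (a hypothetical helper, not part of the paper) generates which message indices enter each codeword per block; `wt` stands for the cooperative pair w̃ and `1` for a fixed dummy message.

```python
def schedule(B):
    """Which message indices enter each codeword per block, for the
    block-Markov scheme of Fig. 3 (B fresh blocks + 2 overhead blocks)."""
    rows = []
    for b in range(1, B + 3):
        fresh = b <= B                     # fresh submessages only in blocks 1..B
        wi_c = f"w{b}^c" if fresh else "1"
        wi_d = f"w{b}^d" if fresh else "1"
        coop = f"wt{b-2}" if b >= 3 else "1"           # two-block-delayed cooperative pair
        r_new = f"wt{b-1}" if 1 <= b - 1 <= B else "1"  # relay's one-block-delayed estimate
        rows.append({
            "block": b,
            "Si": f"xi({coop}, {wi_c}, {wi_d})",
            "R": f"xr({coop}, {r_new})",
        })
    return rows

# Example: reproduce the B = 3 schedule of Fig. 3.
for row in schedule(3):
    print(row)
```

Note how the relay's codeword carries both the cooperative pair being resolved (first argument) and its freshest estimate (second argument), which is exactly the one-block memory discussed in Section 2.2.4.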

In our scheme, the relay R uses the block of feedback yrb to decode W̃b; this estimate is denoted as W̃b^r. The source S1 uses two blocks of feedback y1b, y_{1,b+1} to decode W2b^c. Since S1 already knows W1b^c, it can then form the estimate for W̃b, denoted as W̃b^1. The decoding for source S2 is carried out similarly, and its estimate for W̃b is denoted as W̃b^2. If all nodes decode successfully, the relay and the sources then cooperatively send W̃b to the destination. Since the sources use two blocks of feedback, cooperation is carried out with a two-block delay. This contributes to an overhead of two blocks in our scheme.

2.2.1. Random Codebook Generation

Fix the distribution

p(u, v1, v2, xr, x1, x2) = p(u) p(xr|u) p(v1|u) p(v2|u) p(x1|u, v1) p(x2|u, v2),   (1)

where U, V1, V2 are auxiliary random variables. The codebooks for S1, S2, R are generated i.i.d. for each block b = 1, ..., B+2. For each block, a codebook consists of length-N codewords xi(w̃, wi^c, wi^d) for Si, i = 1, 2, and xr(w̃, w̃') for R, where w̃, w̃' ∈ W̃, wi^c ∈ Wi^c, wi^d ∈ Wi^d. Specifically, the codebooks for a given block are constructed as follows, similar to [2, 3].

(1) For every w̃ ∈ W̃, generate the corresponding sequence u(w̃) according to the distribution ∏_{n=1}^{N} p(un).

(2) For every u(w̃) and every w̃' ∈ W̃, generate the corresponding codeword xr(w̃, w̃') according to the distribution ∏_{n=1}^{N} p(xrn | un(w̃)).

(3) For every u(w̃) and every wi^c ∈ Wi^c, generate the corresponding sequence vi(w̃, wi^c) according to the distribution ∏_{n=1}^{N} p(vin | un(w̃)), i = 1, 2.

(4) For every pair u(w̃), vi(w̃, wi^c) and every wi^d ∈ Wi^d, generate the corresponding codeword xi(w̃, wi^c, wi^d) according to the distribution ∏_{n=1}^{N} p(xin | un(w̃), vin(w̃, wi^c)), i = 1, 2.

2.2.2. Encoding and Transmission

To initialize the encoding, let w̃b^1 = w̃b^2 = w̃b^r = 1 (or any fixed value) for b = -1, 0. To terminate, let w1b^c = w1b^d = w2b^c = w2b^d = 1 for b = B+1, B+2, and w̃_{B+2}^r = 1. To send the fresh message wib = (wib^c, wib^d) in block b = 1, ..., B+2, the source Si sends xi(w̃_{b-2}^i, wib^c, wib^d), i = 1, 2. Although the relay R does not have any message of its own to send, it assists in the communication by sending xr(w̃_{b-2}^r, w̃_{b-1}^r). The estimates w̃_{b-2}^1, w̃_{b-2}^2, w̃_{b-2}^r, w̃_{b-1}^r required for encoding are made available by decoding some (past) feedback, as explained in Section 2.2.3.

2.2.3. Decoding

Let S = (Z1, ..., Zk) denote a collection of discrete random variables with some fixed joint distribution p(s). Let (z1, ..., zk) denote N independent realizations of S. We denote the set of ε-typical sequences (z1, ..., zk) as Aε^(N)(S) [6]. All nodes employ typical-set decoding. The nodes R, S1, S2 decode so as to obtain the estimates

required for subsequent encoding as described in Section 2.2.2. These nodes R, S1, S2 decode in a time-forward direction according to b = 1, ..., B. Note that the known messages (fixed as 1) need not be estimated.

At R: After block b is sent, the relay R uses the feedback yrb to estimate w̃b = (w1b^c, w2b^c) as w̃b^r = (i, j) if

(u(w̃_{b-2}), xr(w̃_{b-2}, w̃_{b-1}), v1(w̃_{b-2}, i), v2(w̃_{b-2}, j), yrb) ∈ Aε^(N)(U, Xr, V1, V2, Yr),   (2)

where w̃_{b-2} and w̃_{b-1} are replaced by their past estimates w̃_{b-2}^r and w̃_{b-1}^r, respectively. Here and subsequently, a decoding error is declared if no estimate or more than one estimate is found.

At S1: After block b+1 is sent, the source S1 uses both y1b and y_{1,b+1} to estimate w2b^c as j if

(u(w̃_{b-2}), xr(w̃_{b-2}, w̃_{b-1}), v1(w̃_{b-2}, w1b^c), x1(w̃_{b-2}, w1b^c, w1b^d), v2(w̃_{b-2}, j), y1b) ∈ Aε^(N)(U, Xr, V1, X1, V2, Y1),   (3)

and

(u(w̃_{b-1}), xr(w̃_{b-1}, (w1b^c, j)), v1(w̃_{b-1}, w_{1,b+1}^c), x1(w̃_{b-1}, w_{1,b+1}^c, w_{1,b+1}^d), y_{1,b+1}) ∈ Aε^(N)(U, Xr, V1, X1, Y1),   (4)

where w̃_{b-2} and w̃_{b-1} are replaced by their past estimates w̃_{b-2}^1 and w̃_{b-1}^1, respectively. Since S1 already knows its message w1b^c, it can estimate w̃b as w̃b^1 = (w1b^c, j).

At S2: The decoding for S2 is similar to that for S1. After block b+1 is sent, S2 uses both y2b and y_{2,b+1} to estimate w1b^c. Since S2 already knows its message w2b^c, it can then estimate w̃b.

At D: The destination D uses a different strategy to decode its desired messages w1b = (w1b^c, w1b^d) and w2b = (w2b^c, w2b^d). It is sufficient for D to decode (w̃b, w_{1b}^d, w_{2b}^d), since w̃b = (w1b^c, w2b^c). To this end, D first buffers all its feedback ydb, b = 1, ..., B+2. Then D decodes w1b, w2b in a time-backward direction according to b = B, ..., 1, based on the backward decoding technique [4]. Specifically, D uses y_{d,b+2} to estimate (w̃b, w_{1,b+2}^d, w_{2,b+2}^d) as (k, i, j) if

(u(k), xr(k, w̃_{b+1}), v1(k, w_{1,b+2}^c), x1(k, w_{1,b+2}^c, i), v2(k, w_{2,b+2}^c), x2(k, w_{2,b+2}^c, j), y_{d,b+2}) ∈ Aε^(N)(U, Xr, V1, X1, V2, X2, Yd),   (5)

where w̃_{b+1} = (w_{1,b+1}^c, w_{2,b+1}^c) and w̃_{b+2} = (w_{1,b+2}^c, w_{2,b+2}^c) are replaced by their "future" (but already available) estimates from the preceding backward decoding steps.
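The time-backward pass at D can be sketched as a simple schedule. In the sketch below, `decode_block` is a hypothetical routine standing in for the typicality test (5) on one block; the point is only the order of operations and the dependence of block b on the already-decoded blocks b+1 and b+2.

```python
def backward_decode(B, decode_block):
    """Backward decoding schedule at D: block b is decoded from y_{d,b+2}
    using the already-decoded estimates for blocks b+1 and b+2
    (known to equal the fixed dummy message 1 for b > B)."""
    est = {B + 1: 1, B + 2: 1}      # terminating messages are fixed to 1
    for b in range(B, 0, -1):       # time-backward: b = B, ..., 1
        est[b] = decode_block(b, est[b + 1], est[b + 2])
    return [est[b] for b in range(1, B + 1)]

# Toy decoder that simply records the visiting order.
visited = []
def toy(b, nxt1, nxt2):
    visited.append(b)
    return b

decoded = backward_decode(3, toy)
```

With B = 3, the toy run visits blocks 3, 2, 1 in that order, mirroring the paper's b = B, ..., 1 schedule.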

2.2.4. Comparison with MARC and MAC-GF

Our scheme can improve cooperation, but at the expense of additional memory, delay and overhead.

Cooperation: In our scheme, a common auxiliary random variable U is used to obtain the codewords x1, x2, xr. This is the key to cooperation among S1, S2, R, made possible if the nodes successfully decode the messages intended for cooperation. In the scheme for the MARC in [2], two auxiliary random variables U1, U2 are used instead to obtain x1, x2, respectively; hence, the sources cannot cooperate. In the scheme for the MAC-GF in [3], a common random variable U is used to obtain x1, x2, but no relay is present for cooperation.

Memory: In our scheme, the relay introduces a memory of one block in its transmission: each message w̃b^r is used to construct the codeword xr in both block b+1 and block b+2. Assuming correct decoding so that w̃b^r = w̃b, this memory allows the relay to serve all nodes, namely the sources and the destination, as follows. In block b+1, xr is used to improve the information of the two sources about the message w̃b, e.g., via the decoding in (4). In block b+2, xr is used to cooperatively send the message to the destination. In comparison, the relay does not need to introduce memory into its codewords for the MARC in [2].

Overhead: In the scheme for the MAC-GF [3], the sources use one block of feedback. To exploit the information sent by the relay, in our scheme the sources use two blocks of feedback instead. Thus, an additional block of delay is incurred. Moreover, this results in an overhead of two blocks. Most backward decoding schemes, such as those for the MARC in [2] and the MAC-GF in [3], incur an overhead of only one block. The difference in fractional overhead is, however, negligible for large B.

2.3. Rate Region

Theorem 1.
An achievable rate region for the MARC-GF is given by

R ≜ {(R1, R2) : R1 = R1^c + R1^d, R2 = R2^c + R2^d,
R1^c ≥ 0, R1^d ≥ 0, R2^c ≥ 0, R2^d ≥ 0,
R1^c ≤ min{I(V1; Yr|U, V2, Xr), I(V1, Xr; Y2|U, X2)},   (6a)
R2^c ≤ min{I(V2; Yr|U, V1, Xr), I(V2, Xr; Y1|U, X1)},   (6b)
R1^c + R2^c ≤ I(V1, V2; Yr|U, Xr),   (6c)
R1^d ≤ I(X1; Yd|U, V1, X2, Xr),   (6d)
R2^d ≤ I(X2; Yd|U, V2, X1, Xr),   (6e)
R1^d + R2^d ≤ I(X1, X2; Yd|U, V1, V2, Xr),   (6f)
R1^c + R2^c + R1^d + R2^d ≤ I(X1, X2, Xr; Yd)   (6g)
for distribution (1)}.

Proof. Fix distribution (1). We show that the error probability P̄e = E[Pe], averaged over all random codes, can be made arbitrarily small for all N sufficiently large if the inequalities (6) are satisfied. We skip the detailed analysis of the error probabilities, which follows from the standard application of the properties of ε-typical sequences [6]; see, e.g., [2, 4].

First, consider the nodes R, S1, S2. Assume w̃_{b-2}, w̃_{b-1} are known by R, and w̃_{b-2} is known by both S1, S2. By using the typical-set decoding given by (2), R can decode w̃b correctly with high probability if

R1^c ≤ I(V1; Yr|U, V2, Xr)   (7a)
R2^c ≤ I(V2; Yr|U, V1, Xr)   (7b)
R1^c + R2^c ≤ I(V1, V2; Yr|U, Xr)   (7c)

for large N. Note that (7) is the achievable rate region for a MAC in which the sources transmit V1, V2 while the destination has knowledge of U, Xr. Moreover, by using the typical-set decoding given by (3), (4), S1 can decode w2b^c correctly with high probability if

R2^c ≤ I(V2; Y1|U, V1, X1, Xr) + I(Xr; Y1|U, V1, X1)
     = I(V2, Xr; Y1|U, V1, X1)
     = I(V2, Xr; Y1|U, X1)   (8a)

for large N. Here, the mutual informations add because the codewords are generated independently for every block. By symmetry, S2 can decode w1b^c correctly with high probability if

R1^c ≤ I(V1, Xr; Y2|U, X2)   (8b)

for large N. The messages w̃_{-1}, w̃_0 are known. Since decoding progresses according to b = 1, ..., B, by induction our initial assumption is valid if (7), (8) hold.

Next, consider the destination D. Suppose that (7), (8) are satisfied. Since no decoding error occurs at R, S1, S2, we have w̃b^r = w̃b^1 = w̃b^2 = w̃b, b = 1, ..., B. Assume that both w̃_{b+1}, w̃_{b+2} are known by D. By using the typical-set decoding given by (5), D can decode (w̃b, w_{1b}^d, w_{2b}^d) correctly with high probability if

R1^d ≤ I(X1; Yd|U, V1, X2, Xr)   (9a)
R2^d ≤ I(X2; Yd|U, V2, X1, Xr)   (9b)
R1^d + R2^d ≤ I(X1, X2; Yd|U, V1, V2, Xr)   (9c)
R1^c + R2^c + R1^d + R2^d ≤ I(X1, X2, Xr; Yd)   (9d)

for large N. The messages w̃_{B+1}, w̃_{B+2} are known. Since decoding progresses according to b = B, ..., 1, by induction our initial assumption is valid if (9) holds.

The inequalities (7), (8), (9) can be expressed equivalently as (6). If (6) is satisfied, then there exists at least one code in the ensemble of random codes for which the error probability Pe ≤ P̄e → 0 for all N sufficiently large. This holds for arbitrary distribution (1).

[Figure 4: Rate regions for the Gaussian MARC-GF; R1 versus R2 in bit/symbol. As the inter-channel SNRs among the sources and relay improve, the region approaches the upper bound for colocated sources and relay.]

Corollary 1. The rate region R can be expressed alternatively (and more compactly) as

R' ≜ {(R1, R2) : R1 ≥ 0, R2 ≥ 0,
R1 ≤ min{I(V1; Yr|U, V2, Xr), I(V1, Xr; Y2|U, X2)} + I(X1; Yd|U, V1, X2, Xr),   (10a)
R2 ≤ min{I(V2; Yr|U, V1, Xr), I(V2, Xr; Y1|U, X1)} + I(X2; Yd|U, V2, X1, Xr),   (10b)
R1 + R2 ≤ min{I(V1, V2; Yr|U, Xr) + I(X1, X2; Yd|U, V1, V2, Xr), I(X1, X2, Xr; Yd)}   (10c)
for distribution (1)}.
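As a numerical sanity check of this reduction, the sketch below compares membership in a region shaped like (6) (searching over the split of each Ri into cooperative and direct parts on a grid) against the compact form (10), for one set of hypothetical mutual-information values. The dictionary keys are ad hoc labels keyed to the proof's equation numbers, not the paper's notation.

```python
def in_region_6(R1, R2, I, steps=200):
    """Grid search for a split R_i = R_i^c + R_i^d satisfying the
    Theorem 1-style constraints (hypothetical values in I)."""
    for k1 in range(steps + 1):
        R1d = R1 * k1 / steps
        R1c = R1 - R1d
        for k2 in range(steps + 1):
            R2d = R2 * k2 / steps
            R2c = R2 - R2d
            if (R1c <= I['7a'] and R1c <= I['8b']          # (6a)
                    and R2c <= I['7b'] and R2c <= I['8a']  # (6b)
                    and R1c + R2c <= I['7c']               # (6c)
                    and R1d <= I['9a'] and R2d <= I['9b']  # (6d), (6e)
                    and R1d + R2d <= I['9c']               # (6f)
                    and R1 + R2 <= I['9d']):               # (6g)
                return True
    return False

def in_region_10(R1, R2, I):
    """The compact Corollary 1-style form of the same region."""
    return (R1 <= min(I['7a'], I['8b']) + I['9a']
            and R2 <= min(I['7b'], I['8a']) + I['9b']
            and R1 + R2 <= min(I['7c'] + I['9c'], I['9d']))

# Hypothetical mutual-information values (bits/symbol).
I = {'7a': 1.0, '7b': 1.0, '7c': 1.5, '8a': 1.0, '8b': 1.0,
     '9a': 1.0, '9b': 1.0, '9c': 1.5, '9d': 3.4}
```

For these values, the point (2.0, 1.0) lies in both forms of the region (via the split R1^c = R1^d = 1.0, R2^c = R2^d = 0.5), while (2.2, 1.0) lies in neither.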

Proof. To prove Corollary 1, we eliminate the variables R1^c, R2^c, R1^d, R2^d in Theorem 1 for a fixed arbitrary distribution (1). Clearly, R1 ≥ 0, R2 ≥ 0. We substitute R1^c = R1 - R1^d ≥ 0 and R2^c = R2 - R2^d ≥ 0 into (6), then use Fourier-Motzkin elimination to eliminate R1^d, R2^d. After some algebraic manipulation, we obtain (10), completing the proof.

The rate regions R and R' are convex, so taking a convex hull does not enlarge them. To prove convexity, an argument such as [4, Appendix A] is applicable.

3. Gaussian MARC-GF

Consider the Gaussian MARC-GF where the feedback at node j is given by

Yj = √(P1j) X1 + √(P2j) X2 + √(Prj) Xr + Zj,   j = 1, 2, r, d.

Here, Pij describes the SNR of the channel from node i to node j, and Zj is i.i.d. zero-mean unit-variance Gaussian noise. Let the channel inputs be given by

Xi = √(αi^u) U + √(αi^c) Xi^c + √(αi^d) Xi^d,   i = 1, 2,
Xr = √(αr^u) U + √(1 - αr^u) U',

where αi^u + αi^c + αi^d = 1, i = 1, 2, and U, U', X1^c, X1^d, X2^c, X2^d are mutually independent i.i.d. zero-mean unit-variance Gaussian random variables. Here, α generally denotes a power allocation variable; Xi^c, Xi^d represent the respective messages Wi^c, Wi^d; U represents the message meant for cooperation; U' represents the message relayed to the sources, in preparation for cooperation.

Let C(x) = log2(1 + x)/2. For the Gaussian MARC-GF, the rate inequalities (10a), (10b), (10c) simplify to

R1 ≤ min{ C(α1^c P1r / β), C((α1^c P12 + (1 - αr^u) Pr2) / (1 + α1^d P12)) } + C(α1^d P1d),
R2 ≤ min{ C(α2^c P2r / β), C((α2^c P21 + (1 - αr^u) Pr1) / (1 + α2^d P21)) } + C(α2^d P2d),
R1 + R2 ≤ min{ C((α1^c P1r + α2^c P2r) / β) + C(α1^d P1d + α2^d P2d), C(ψ) },

where β ≜ 1 + α1^d P1r + α2^d P2r and ψ ≜ P1d + P2d + Prd + 2(√(α1^u P1d α2^u P2d) + √(α1^u P1d αr^u Prd) + √(α2^u P2d αr^u Prd)).

To obtain an achievable rate region, we vary the power allocation variables subject to αi^u + αi^c + αi^d = 1, i = 1, 2. Fig. 4 shows a typical plot of the rate region as the SNRs among the sources and the relay improve, i.e., as Pij increases for all i, j ∈ {1, 2, r}. As these SNRs improve, the sum rate R1 + R2 approaches the total cooperation bound C(ψ) with α1^u = α2^u = αr^u = 1.
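A minimal numerical sketch of these simplified bounds follows; the exact grouping of terms in β and ψ follows our reading of the expressions above, and the SNR values and power splits in the example are hypothetical.

```python
import math

def C(x):
    """Gaussian capacity function C(x) = log2(1 + x) / 2."""
    return 0.5 * math.log2(1 + x)

def rate_bounds(a1u, a1c, a1d, a2u, a2c, a2d, aru, P):
    """Evaluate the simplified bounds for one power allocation.
    P maps node pairs ('1','r'), ('r','d'), ... to link SNRs."""
    beta = 1 + a1d * P[('1', 'r')] + a2d * P[('2', 'r')]
    psi = (P[('1', 'd')] + P[('2', 'd')] + P[('r', 'd')]
           + 2 * (math.sqrt(a1u * P[('1', 'd')] * a2u * P[('2', 'd')])
                  + math.sqrt(a1u * P[('1', 'd')] * aru * P[('r', 'd')])
                  + math.sqrt(a2u * P[('2', 'd')] * aru * P[('r', 'd')])))
    R1 = min(C(a1c * P[('1', 'r')] / beta),
             C((a1c * P[('1', '2')] + (1 - aru) * P[('r', '2')])
               / (1 + a1d * P[('1', '2')]))) + C(a1d * P[('1', 'd')])
    R2 = min(C(a2c * P[('2', 'r')] / beta),
             C((a2c * P[('2', '1')] + (1 - aru) * P[('r', '1')])
               / (1 + a2d * P[('2', '1')]))) + C(a2d * P[('2', 'd')])
    Rsum = min(C((a1c * P[('1', 'r')] + a2c * P[('2', 'r')]) / beta)
               + C(a1d * P[('1', 'd')] + a2d * P[('2', 'd')]),
               C(psi))
    return R1, R2, Rsum

# Example: symmetric 10-dB links; equal split between cooperative and direct parts.
P = {(i, j): 10.0 for i in '12r' for j in '12rd' if i != j}
r1, r2, rsum = rate_bounds(0.2, 0.4, 0.4, 0.2, 0.4, 0.4, 0.5, P)
```

Sweeping the allocation variables (subject to αi^u + αi^c + αi^d = 1) and collecting the resulting (R1, R2) pairs traces out the achievable region, as done for Fig. 4.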

4. Conclusion and Further Discussion

We have introduced the MARC-GF and proposed a coding scheme where the relay facilitates communication by concurrently relaying to the destination and to the sources. This allows us to obtain an achievable rate region and quantify the effectiveness of cooperation among the sources and relay. To achieve cooperation by all nodes, in our scheme the relay helps to exchange information between the two sources, much like a two-way relay [7]. Our scheme can therefore be viewed alternatively as an amalgam of the schemes for the MARC and for the two-way relay.

Recently, the MAC-GF with three sources was considered in [8]. If one of the sources transmits at zero rate,

it becomes a relay. The three-source MAC-GF is thus more general than the two-source MARC-GF. Hence, the results given here can be treated as a step towards the MAC-GF with more than two users. In particular, in our scheme, the two sources and the relay effectively form a two-way relay, so information can flow from one source to the other and vice versa. This will be especially useful if the relay is closer to the sources than to the destination. In the scheme proposed in [8], the emphasis is on the information flow from the sources to the relay (when a source transmits at zero rate) and eventually to the destination, while information exchange between the sources is not considered.

References

[1] G. Kramer and A. van Wijngaarden, "On the white Gaussian multiple-access relay channel," in Proc. IEEE Int. Symposium on Information Theory, Sorrento, Italy, Jun. 2000, p. 40.
[2] G. Kramer, M. Gastpar, and P. Gupta, "Cooperative strategies and capacity theorems for relay networks," IEEE Trans. Inf. Theory, vol. 51, no. 9, pp. 3037-3063, Sep. 2005.
[3] F. M. J. Willems, E. van der Meulen, and J. Schalkwijk, "Achievable rate region for the multiple access channel with generalized feedback," in Proc. 21st Annual Allerton Conference on Commun., Contr. and Computing, Monticello, IL, USA, Jul. 1983, pp. 284-292.
[4] F. M. J. Willems, "Informationtheoretical results for the discrete memoryless multiple access channel," Ph.D. dissertation, Katholieke Universiteit Leuven, Leuven, Belgium, Oct. 1982.
[5] A. Sendonaris, E. Erkip, and B. Aazhang, "User cooperation diversity, part I and part II," IEEE Trans. Commun., vol. 51, no. 11, pp. 1927-1948, Nov. 2003.
[6] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. John Wiley & Sons, Inc., 2006.
[7] B. Rankov and A. Wittneben, "Achievable rate regions for the two-way relay channel," in Proc. IEEE Int. Symposium on Information Theory, Jul. 2006, pp. 1668-1672.
[8] C. Edemen and O. Kaya, "Achievable rates for the three user cooperative multiple access channel," in Proc. IEEE Wireless Communications and Networking Conference, Mar. 2008, pp. 1507-1512.