
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 12, NO. 10, OCTOBER 2013

Compressive Coded Modulation for Seamless Rate Adaptation

Hao Cui, Student Member, IEEE, Chong Luo, Member, IEEE, Jun Wu, Member, IEEE, Chang Wen Chen, Fellow, IEEE, and Feng Wu, Fellow, IEEE

Abstract—This paper presents a novel compressive coded modulation (CCM) scheme which simultaneously achieves joint source-channel coding and seamless rate adaptation. Embedding source compression into modulation brings significant throughput gains when the physical layer data contain non-negligible redundancy. The kernel of CCM is a new random projection (RP) code inspired by compressive sensing (CS) theory. The RP code generates multilevel symbols from source binaries through weighted sum operations. The generated RP symbols are then mapped into a dense constellation for transmission, and the receiver performs joint decoding based on the received symbols. As the number of RP symbols can be adjusted at fine granularity, rate adaptation becomes seamless. Two key design issues in the proposed CCM are addressed in this paper. First, we consider RP code design for sources with different redundancies. Three principles are established and a concrete implementation is given. Second, we devise a linear-time decoding algorithm for the proposed RP code. In this belief propagation (BP) algorithm, we find that computing convolutions in the time domain is more efficient than in the frequency domain for binary variable nodes. Moreover, we introduce a ZigZag deconvolution to further reduce the complexity. Analysis shows that the proposed decoding algorithm is nearly 20 times faster than the state-of-the-art BP algorithm for CS, called CS-BP. Emulations on traced data show that CCM achieves significant throughput gains, up to 33% and 70%, respectively, over hybrid ARQ with compression and BICM with compression, under practical time-varying wireless channels.

Index Terms—Modulation, JSCC, rate adaptation.

I. INTRODUCTION

THE end-to-end performance of wireless communications depends on effective processing at both the source and the channel. In practical communication systems, source coding is handled at the application layer, mostly for multimedia content, and channel coding is most commonly performed at the physical layer (PHY). Guided by Shannon's separation

Manuscript received August 29, 2012; revised February 8 and June 3, 2013; accepted August 7, 2013. The associate editor coordinating the review of this paper and approving it for publication was A. Vosoughi. This work was supported in part by the National Science Foundation China under Grant 61173041. H. Cui is with the Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, P.R. China (e-mail: [email protected]). C. Luo and F. Wu are with the Internet Media Group, Microsoft Research Asia, Beijing, P.R. China (e-mail: {cluo, fengwu}@microsoft.com). J. Wu is with the School of Electronics and Information Engineering, Tongji University, Shanghai 201804, P.R. China (e-mail: [email protected]). C. W. Chen is with the Department of Computer Science and Engineering, University at Buffalo, State University of New York, Buffalo, USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TWC.2013.090413.121308

principle, source coding and channel coding are usually accomplished independently. Hence, an implicit assumption at the PHY is that the data to be transmitted are compressed source without redundancy. Contrary to this assumption, numerous applications inject uncompressed data into the network (e.g., emails, web pages, and uncompressed files). Measurements on web pages [1] show that a substantial amount of compressible data exists in practical network traffic. Undoubtedly, intelligently utilizing this data redundancy in transmission can greatly boost communication efficiency.

Wireless channels are time-varying and subject to fading, additive noise, and interference. In order to increase spectrum efficiency, the PHY has to adapt the transmission rate to varying channel conditions. In conventional adaptive modulation and coding (AMC), the sender selects the combination of channel coding and modulation that achieves the highest transmission rate under the estimated channel condition. However, the success of AMC depends on channel estimation that is both instant and accurate, which cannot be obtained in practice. Besides, AMC only achieves stair-shaped rates due to the limited number of rate combinations. To overcome these two drawbacks, three pieces of research on smooth and blind rate adaptation have recently been published [2], [3], [4]. The adaptation is blind in the sense that channel information is not required at the sender, and it is smooth in that it provides fine-grained rate adjustment. Such smooth and blind rate adaptation is very valuable to wireless communications, especially over fast fading channels. Unfortunately, none of these schemes takes into account possible redundancy in the source data.

As it is not practical to require all applications to compress their data before transmission, a natural question to ask is: is it possible to achieve high-efficiency compression at the PHY?
1536-1276/13$31.00 © 2013 IEEE

A straightforward answer would be no, because conventional data compression techniques, such as Huffman coding and arithmetic coding, need sufficient statistics for a specific data type. This requirement usually cannot be fulfilled because the PHY handles small sets of hybrid data (the maximum IP packet size is 1.5 KB). Besides, at the PHY, application data have already been represented as streams of bits instead of logical data structures. It would require dramatic changes to the entire network stack if the PHY needed high-level information about application data.

In this paper, we propose a novel PHY-layer scheme, called compressive coded modulation (CCM), which simultaneously achieves source compression, channel protection, and seamless rate adaptation over the existing network stack. CCM operates on binary sources, and is widely applicable to time-varying wireless channels. The core of the proposed CCM is a novel random projection (RP) code inspired by compressive sensing (CS) theory [5], [6]. The proposed RP code differs significantly from the random measurement matrices used in CS, because the source is binary and the generated RP symbols pass through the existing DAC (digital-to-analog converter) and are transmitted with existing digital communication hardware. The RP code is also fundamentally different from contemporary channel codes because it generates multilevel symbols, instead of binary symbols, through weighted sum operations.

According to CS theory, the required number of symbols for successful decoding decreases as the source redundancy increases, and also decreases as the channel quality increases. Therefore, a source with higher redundancy can be decoded from fewer RP symbols and achieves a higher transmission rate. In contrast, more RP symbols need to be transmitted when the channel condition worsens, leading to a lower transmission rate. Because the rate can be adjusted on a per-symbol basis, the adaptation becomes seamless.

It is worth noting that the proposed CCM design can be easily integrated into the existing PHY layer. In particular, we have implemented CCM in the 802.11a PHY. It simply bypasses the FEC and modulation in the existing design and directly generates the modulation symbols on the data subcarriers. Experiments on a software radio platform have proved the feasibility of our design.

The proposed CCM is a major advance over our previous work called rate compatible modulation (RCM) [2]. This advance encompasses novel solutions to two crucial problems in RCM. First, we shall design codes for binary sources with different levels of redundancy, or sparsity. Three principles are established based on both source and channel considerations.
In particular, we shall derive the upper bound of the achievable rate for any given RP code under source sparsity p and a given channel error distribution. Based on these principles, we design weight multisets of the RP code for typical source sparsities. Although the RP code has a saturation effect at high SNRs, our evaluations show that the same weight multiset can be used over the typical SNR range (from 5 dB to 25 dB) for moderate source sparsity.

Second, we shall design an algorithm that reduces the RP decoding complexity so that the decoding speed matches contemporary wireless communication rates. This is a linear-time belief propagation (BP) algorithm for RP decoding. Harnessing the binary-input property of the variable nodes, we propose to compute the convolutions (at constraint nodes) in the time domain, and we shall show that this is more efficient than the conventional computation in the frequency domain. Observing that the probability distribution functions (PDFs) of the constraint nodes have limited support, we devise a novel ZigZag deconvolution to further reduce the decoding complexity. Careful analysis shows that this decoding algorithm is nearly 20 times faster than the CS-BP algorithm for a typical code.

The rest of this paper is organized as follows. Section II discusses related work and provides background on CS theory. Section III presents a review of RCM and an overview of


CCM for sparse binary sources. Section IV highlights the RP code design and explains the modified BP algorithm for practical RP decoding at reduced complexity. Section VI-A illustrates a typical implementation of CCM, and Section VI evaluates the proposed scheme through both simulations and emulations. Finally, Section VII concludes this paper.

II. RELATED WORK AND BACKGROUND

In this section, we first discuss related work on joint source-channel coding and rate adaptation, and then provide background information on compressive sensing theory.

A. Joint Source-Channel Coding

Joint source-channel coding (JSCC), in its broad sense, covers all schemes that jointly consider source coding and channel coding under a unified resource constraint and/or toward a common optimization objective. Existing work on JSCC can be classified into three categories.

The first category usually applies to image/video transmission. Some works have exploited the rate-distortion behavior of the source and studied the problem of allocating the transmission rate between the source coder and the channel coder to minimize the expected distortion [7], [8], [9] or to optimize the power consumption [10]. Others have designed unequal error protection (UEP) for progressively coded sources; they are called JSCC because more important contents (source) are encoded with stronger channel codes [11]. Both variations of LDPC (low-density parity-check) codes, such as the rate-compatible LDPC codes in [12], and rateless codes, such as the Raptor codes over GF(2) in [13] and over GF(4) in [14], have been studied in the literature.

Works in the second category jointly consider the compression of correlated sources and their channel protection. Liveris et al. [15] show that LDPC codes are able to compress correlated binary sources close to the Slepian-Wolf limit. Sartipi et al. [16] apply a similar idea to data collection in wireless sensor networks (WSNs). Zhong et al. [17] and Xu et al.
[18] use low-density generator matrix (LDGM) codes and Raptor codes for distributed JSCC.

More related to the proposed scheme are the works in the third category, which apply the same code for both compression and protection of a single source. Research has shown that conventional channel codes can be used for source compression. In particular, LDPC codes, Raptor codes, and Turbo codes [19], [20], [21] can be concatenated with a Burrows-Wheeler block sorting transform (BWT) to serve this purpose. Furthermore, Fresia et al. [22] use the concatenation of two independent LDPC codes at the transmitter and implement a joint BP decoder at the receiver. Zhu et al. and Cabarcas et al. [23], [24] design Turbo codes for transmitting non-uniform binary memoryless or i.i.d. sources over noisy channels. However, all these codes are designed for a fixed rate; none of them is able to achieve smooth rate adaptation.

B. Coded Modulation

The proposed CCM scheme is essentially an enhanced coded modulation with embedded compression, so we also


briefly review related work on coded modulation. Trellis-coded modulation (TCM), proposed by Ungerboeck, is the first coded modulation scheme [25], [26]. By introducing trellis coding and set partitioning in the PHY layer, the minimum free Euclidean distance is maximized. Multilevel coded (MLC) modulation [27], [28] is another elegant approach to coded modulation. For an M-bit modulation, the channel coding is divided into M levels, and each level is designed independently. The advantage of MLC is that there is no need to design the coding based on Euclidean distance; any binary code designed on Hamming distance can be used. To address the issue of robustness under a Rayleigh channel, Zehavi proposed bit-interleaved coded modulation (BICM) [29], and Caire et al. [30] present a detailed analysis of BICM from the information-theoretic perspective. On one hand, bit interleaving increases the channel diversity, which equivalently increases the Hamming distance of the coding. On the other hand, infinite-depth interleaving tackles the mismatched decoding issue [31].

The discovery of Turbo codes [32] and the rediscovery of LDPC codes [33], [34] opened up new research avenues for contemporary coded modulation. With these capacity-achieving codes, most existing coded modulation schemes can approach the Shannon limit at the designed SNR. However, they lack rate adaptation capability and have to rely on separate adaptive modulation to change the rate when the channel varies dramatically. In addition, capacity-achieving performance usually requires infinite coding block length and high complexity.

C. Rate Adaptation

Rate adaptation refers to techniques which adjust the modulation, coding, power, and other protocol parameters based on varying channel conditions. The most well-known rate adaptation technique is adaptive modulation and coding (AMC), which has been extensively studied [35], [36], [37], [38] and practically applied in wireless systems.
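The AMC selection just described amounts to a threshold lookup over a discrete set of modulation-and-coding pairs, which is why the achieved rate is stair-shaped. The sketch below uses the 802.11a rate set, but the SNR thresholds are illustrative assumptions, not values from this paper:

```python
# Illustrative AMC rate selection. The (modulation, code rate, data rate)
# entries follow 802.11a; the SNR thresholds are hypothetical.
MCS_TABLE = [
    (2.0,  "BPSK",  1/2,  6.0),   # (min SNR in dB, modulation, code rate, Mb/s)
    (5.0,  "QPSK",  1/2, 12.0),
    (9.0,  "QPSK",  3/4, 18.0),
    (12.0, "16QAM", 1/2, 24.0),
    (16.0, "16QAM", 3/4, 36.0),
    (20.0, "64QAM", 2/3, 48.0),
    (24.0, "64QAM", 3/4, 54.0),
]

def amc_select(snr_db):
    """Pick the highest-rate entry whose SNR threshold is met (None if below all)."""
    best = None
    for threshold, modulation, code_rate, rate in MCS_TABLE:
        if snr_db >= threshold:
            best = (modulation, code_rate, rate)
    return best
```

Because the table is finite, the selected rate jumps in steps as the SNR grows, and a mis-estimated SNR selects a suboptimal or undecodable entry, which is precisely the behavior the schemes below try to avoid.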
However, AMC has two major drawbacks. First, rate selection relies heavily on accurate and instant channel estimation, which cannot be obtained in practice. Second, AMC is only able to achieve stair-shaped rates due to the limited number of rate combinations. Hybrid automatic repeat request (HARQ) [39] is a supplemental technique to AMC. The incremental redundancy scheme, a.k.a. type-II HARQ, can provide smoother rates by employing either a rate-compatible code, e.g., punctured Turbo codes [40], or rateless codes, e.g., Raptor codes [41]. However, both kinds of codes have a limited rate adaptation range.

A more recent thread of research achieves smooth and blind rate adaptation over a wide range of channel conditions. Three independent works on this topic have been published in recent years. Gudipati and Katti [3] propose a scheme named Strider which appends a minimum distance transformer (MDT) after existing coding and modulation; the free distance of the constellation can be transformed to fit the channel condition. Perry et al. [4] invent spinal codes, which replace the state transitions of the convolutional code in TCM with an infinite-state hash function. The rateless property is

achieved by refining the state transitions. Our previous work, RCM [2], proposes a novel bit-to-symbol mapping; the average bit energy can be smoothly refined by accumulating symbols at the receiver, so the rate adaptation is seamless.

Though not explicitly stated, all these rate adaptation works assume that PHY data are incompressible. When this is not the case, i.e., when the data contain non-negligible redundancy, the end-to-end performance of these schemes becomes suboptimal. Through extensive analysis, we discovered that the RCM we developed earlier can be naturally extended to opportunistically exploit data compressibility. In other words, the bit-to-symbol mapping can be used for JSCC. When properly designed, RCM has the potential to significantly improve the system throughput.

D. Compressive Sensing

The bit-to-symbol mapping in RCM converts source bits into multilevel symbols through weighted sum operations. It is essentially a CS [5], [6] scheme with binary input. Therefore, we first review this emerging theory with special emphasis on its connection with channel codes.

CS theory states that an n-dimensional signal having a sparse or compressible representation can be reconstructed from m linear measurements even if m < n [42], [6]. Traditionally, the signals under consideration are sparse or compressible signals in R^n, the projection matrix Φ is a dense matrix in R^{m×n}, and CS reconstruction can be achieved by ℓ1-minimization [43], matching pursuit [44], iterative thresholding methods [45], and subspace pursuit [46]. Although these algorithms can be extended to a sparse Φ or binary signals, the decoding complexity is high due to the iterative greedy nature of the algorithms.

It has been observed that CS can be used as a joint source-channel coding scheme [47], [48], [49], [50], [51]. Both [48] and [51] examine the energy-performance design space of CS, and show that CS is robust to channel fading and noise. Duarte et al. [47] and Feizi et al.
[49], [50] apply CS to compress correlated sources and protect the measurements over noisy channels. Their results show that a CS decoder can provide trade-offs between rate and decoding complexity. In addition, the CS-based joint source-channel coding scheme has continuous rate-distortion performance over a noisy Gaussian channel. This research was done in the context of wireless sensor systems, and the application of CS theory is rather "traditional", i.e., the sources are highly sparse real-valued sensor readings and the CS decoding is based on computation-intensive convex optimization. These schemes do not readily extend to the transmission of the generic binary data seen in the current physical layer of communication systems.

A less traditional application of CS theory is to use it purely as a channel code and to apply various restrictions to reduce its complexity. Sarvotham et al. [52] design Sudocodes, which can be used as erasure codes for real-valued data; both the encoding and decoding complexity can be reduced by limiting Φ to sparse binary matrices. Wu et al. [53] and Liu et al. [54] consider binary sources in CS, and derive closed-form formulations for its decoding. However, these works only

consider the noiseless case. CS-BP [55] provides a solution for decoding noisy CS under certain constraints. It considers a sparse Φ whose non-zero entries are drawn from a Rademacher distribution, and adopts BP algorithms for decoding. The complexity of CS-BP decoding is much higher than that of LDPC decoding because the constraint nodes have to compute convolutions of several PDFs, since CS computes measurements via weighted sum operations instead of logical exclusive OR (XOR). A standard solution would be to perform the processing in the frequency domain via the fast Fourier transform (FFT) and inverse FFT (IFFT), in order to convert convolution into multiplication and deconvolution into division. However, such processing is not efficient for binary-input PDFs. A better solution to this complexity problem is critical for the practical deployment of CCM, and would facilitate broader application of CS.

III. COMPRESSIVE CODED MODULATION

A. RCM Overview

Fig. 1 presents the schematic diagram of RCM. As shown, the two key steps in RCM are bit-to-symbol mapping and modulation of multilevel symbols.

Fig. 1. Illustration of CCM: source bits are mapped to RP symbols through weighted sum operations; every two RP symbols then constitute one wireless symbol.

Bit-to-symbol mapping operates on length-N bit blocks. Let b = (b_1, b_2, ..., b_N) ∈ {0,1}^N be an N-dimensional bit vector. Bit-to-symbol mapping generates a series of symbols s_1, s_2, ..., s_M by weighted sum operations. Denote W = {w_l | w_l ∈ Z, l = 1...L} as the weight multiset; the m-th symbol is generated by:

    s_m = Σ_{l=1}^{L} w_l · b_{n_{ml}}    (1)

where n_{ml} is the index of the bit weighted by w_l to generate symbol s_m. Each generated symbol takes multilevel values, and the symbol alphabet is a finite set Ψ = {Σ_{l∈Λ} w_l | Λ ∈ P(ℒ)} = {ψ_1, ψ_2, ..., ψ_K}, where ℒ = {1, 2, ..., L} and P(ℒ) is the power set of ℒ.

To generate the waveform for radio frequency (RF) devices, the multilevel symbols are sequentially and evenly mapped to the amplitude of sinusoid signals. Let ψ_min and ψ_max be the minimum and maximum values in Ψ, respectively. The mapping is a linear projection from [ψ_min, ψ_max] to [−A, A], where A represents the maximum amplitude that normalizes the signal power. Therefore, the actual signal amplitude used to transmit s_m is:

    x_m = −A + (2A / (ψ_max − ψ_min)) · (s_m − ψ_min)    (2)

In quadrature amplitude modulation (QAM), two consecutive RP symbols are transmitted as the I and Q components of a wireless symbol, i.e., x_m + j · x_{m+1}, where j is the imaginary unit. Such bit-to-symbol mapping and modulation can be repeated an arbitrary number of times. The sender may keep generating and transmitting symbols until the receiver successfully decodes the source bits and returns an acknowledgement. If this happens after M RP symbols (M/2 wireless symbols) are transmitted, then the transmission rate is:

    R = 2N / M    (3)

Obviously, different M corresponds to different transmission rates. As M can be adjusted at very fine granularity, RCM achieves smooth rate adaptation.

If we stack the M transmitted symbols to form a symbol vector s = (s_1, s_2, ..., s_M), the bit-to-symbol mapping process can be concisely described as s = G · b, where G is an M × N low-density matrix. In particular, only L entries in each row g_m (m = 1, 2, ..., M) are non-zero, and they take values from the weight multiset W.

The receiver obtains noisy versions of the transmitted symbols. Let ŝ denote the vector of received symbols; then ŝ = G · b + e, where e is the error vector. In an AWGN (additive white Gaussian noise) channel, e is comprised of Gaussian noise. In a fading channel with fading parameter h, each element of e is n_i/h_i, where n_i and h_i are the noise level and fading parameter for the i-th symbol. In the latter case, we can also model the error as additive, white, and Gaussian, which, though not exact, works well in practice [56].

Decoding is equivalent to finding the bit vector with the maximum a posteriori (MAP) probability. It is an inference problem which can be formalized as:

    b̂ = arg max_{b ∈ {0,1}^N} P(b | ŝ)    (4)
    s.t. ŝ = G · b + e

B. CCM for Sparse Binary Sources

Notice that the physical layer is responsible for transmitting raw bits, rather than logical data packets, over a physical link. The redundancy of physical layer data can be easily modeled by unequal probabilities of the appearance of 0's and 1's [57]. In particular, we model the input bits as i.i.d. Bernoulli random variables with probability P(b = 1) = p. Without loss of generality, we consider only the cases where p ≤ 0.5, as a bit stream with p > 0.5 can be converted into a stream with P(b = 1) = 1 − p through simple bit flipping. Borrowing terminology from CS theory, we call p the source sparsity, or we call the length-N bit vectors p-sparse.

For a sparse binary source, the bit-to-symbol mapping serves the dual purposes of data compression and channel coding. We name this process random projection coding. According to CS theory, the number of symbols (M) required to


decode the source is proportional to the source sparsity p. Therefore, the modulation rate, as defined in (3), for a sparse source (p < 0.5) will be higher than that for a non-sparse source (p = 0.5). The compression gain is thus achieved.

In our previous work [2], we selected the weight multiset W = {±1, ±2, ±4, ±4} for encoding non-sparse sources. However, for sparse sources, this weight selection may not be optimal. The reason is that the distribution of the symbol alphabet Ψ, denoted by P(Ψ), plays a key role in achieving communication efficiency. As P(Ψ) is jointly decided by the weight multiset and the source sparsity, it is crucial to study how these two factors affect each other and how to select a proper weight multiset for each source sparsity.

IV. RP CODE DESIGN

A. Design Principles

This section outlines three design principles which should be enforced in RP code design. The first two principles concern individual RP symbols, and the third takes the dependency among RP symbols into consideration. Following these principles, which are necessary conditions for finding an optimal code, we derive several guidelines to facilitate code design.

Principle 1. The entropy of RP symbols should exceed half of the desired maximum transmission rate.

One major objective of any physical layer design is to achieve a high transmission rate. The achievable rate is bounded by the entropy of the wireless symbols. As every two RP symbols constitute one wireless symbol, the entropy of RP symbols should exceed half of the desired maximum transmission rate. We have defined Ψ = {ψ_1, ψ_2, ..., ψ_K} as the symbol alphabet and P(Ψ) as the symbol distribution. In particular, we denote by P(ψ_k) the probability that symbol s is equal to ψ_k. By definition, the entropy of RP symbols, denoted by H(s), is:

    H(s) = −Σ_{k=1}^{K} P(ψ_k) · log P(ψ_k)    (5)

It should be noted that both Ψ and P(Ψ) are functions of the weight multiset W and the source sparsity p, so H(s) is a function of W and p too. We consider two anchors on symbol entropy. First, the highest modulation rate used in WLAN is 6 bits/s/Hz, achieved with 64QAM. Therefore, each RP symbol should carry at least 3 bits of information (if the source has been optimally compressed) to avoid early rate saturation. Second, the typical SNR range of WLAN is 5 dB to 25 dB, and the Shannon capacity of a 25 dB AWGN channel is around 8.3 bits. Therefore, it is not necessary to consider weight multisets whose RP symbol entropy is larger than 4.15 bits.

Principle 2. The generated RP symbols should have a fixed mean regardless of source sparsity.

We consider a communication system with an average (i.e., per wireless symbol) power constraint of 2E, or equivalently a constraint of E per RP symbol. Let E_RP denote the average symbol energy before power scaling. We shall minimize E_RP so that each RP symbol can be scaled by a larger factor in transmission and thus becomes more robust to channel noise. Given a symbol alphabet Ψ and its distribution P(Ψ), the mean and variance of the RP symbols can be calculated as:

    E[ψ_k] = Σ_{k=1}^{K} ψ_k · P(ψ_k)    (6)

    Var[ψ_k] = Σ_{k=1}^{K} |ψ_k − E[ψ_k]|² · P(ψ_k)    (7)

It is known that E_RP is minimized when each RP symbol is shifted by E[ψ_k] in the modulation constellation, and the minimum achieved is exactly the symbol variance. As it is not practical to shift symbols differently when the source sparsity p varies, the RP symbols should have a fixed mean regardless of source sparsity.

Lemma 1. A zero-mean weight multiset is the sufficient and necessary condition for the generated RP symbols to have a fixed mean regardless of the source sparsity p.

Proof: The mean of the RP symbols can be written as:

    E[ψ_k] = E[Σ_{l=1}^{L} w_l · b_l] = Σ_{l=1}^{L} w_l · E[b_l] = Σ_{l=1}^{L} w_l · p,    (8)

which is a function of p. It does not change with p iff Σ_{l=1}^{L} w_l = 0, i.e., the weights are zero-mean.

Principle 3. Let G_M be the sub-matrix composed of the first M rows of the encoding matrix G, and denote by G_M[i] the i-th column of G_M. Given a weight multiset, G should be organized such that min_i ||G_M[i]||² is maximized for all possible M.

CCM can be viewed as a coded modulation scheme. Therefore, it is important to measure the free distance d_free of the code, which is defined as the minimum distance between any two codewords (vectors of RP symbols). A large free distance indicates that a sequence of wireless symbols is robust against channel noise. Denote by s(b) the codeword generated from bit block b by an RP code; then the free distance of the code is:

    d²_free = min_{b_1 ≠ b_2 ∈ {0,1}^N} ||s(b_1) − s(b_2)||²    (9)

Usually, the free distance is evaluated by taking an all-zero vector as reference. Since the RP encoding of an all-zero bit sequence generates an all-zero codeword, the free distance reduces to the simpler form d²_free = min ||s(b)||². The minimum is usually attained when there is only one bit 1 in the source vector b, and the minimum value is min_i ||G[i]||².

However, the proposed CCM is a rate adaptation scheme: the transmission could stop at any time before all the symbols generated by G are transmitted. Suppose the decoding is successful after the first M symbols are transmitted; the encoding matrix actually used is then G_M. Therefore, we shall require that G_M creates a large free distance for all possible M.
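Principle 3 is easy to check numerically. The sketch below builds a toy low-density encoding matrix and evaluates min_i ||G_M[i]||² for each prefix G_M; the round-robin column placement is an illustrative assumption for the sketch, not the construction used in the paper:

```python
def build_G(weights, N, M):
    """Toy low-density G: row m places the L weights on columns
    (m*L + l) mod N, so per-column energy grows evenly as rows are added."""
    L = len(weights)
    G = []
    for m in range(M):
        row = [0] * N
        for l, w in enumerate(weights):
            row[(m * L + l) % N] += w
        G.append(row)
    return G

def min_col_energy(G_M):
    """min_i ||G_M[i]||^2 over the columns of a partial encoding matrix."""
    n = len(G_M[0])
    return min(sum(row[i] ** 2 for row in G_M) for i in range(n))

# Energy of the weakest bit after each prefix of transmitted RP symbols.
G = build_G([1, -1, 2, -2], N=8, M=4)
prefix_energies = [min_col_energy(G[:m]) for m in range(1, 5)]
```

Here `prefix_energies` is non-decreasing in M: every additional RP symbol can only add energy to the weakest column, which is exactly why a well-organized G keeps the free distance large at every truncation point.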


TABLE I
CANDIDATE WEIGHT MULTISETS

  W1   {±1, ±1, ±1, ±1, ±1, ±1, ±1, ±1, ±1, ±1}   6.42
  W2   {±1, ±1, ±1, ±1, ±1, ±1, ±1, ±1, ±2, ±2}   7.09
  W3   {±1, ±1, ±1, ±1, ±1, ±1, ±2, ±2}           6.90
  W4   {±1, ±1, ±1, ±2, ±2, ±2}                   7.00
  W5   {±1, ±1, ±1, ±2, ±4, ±4}                   8.37
  W6   {±1, ±2, ±4, ±4}                           8.28
  W7   {±1, ±1, ±4, ±4}                           7.91
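Under the i.i.d. Bernoulli(p) source model, the exact alphabet distribution of a candidate multiset can be enumerated directly, which yields the symbol mean checked by Lemma 1 and a per-RP-symbol entropy in the spirit of Principle 1. This is an illustrative sketch; it does not claim to reproduce the entropy convention behind the last column of Table I:

```python
import math

def rp_symbol_stats(weights, p):
    """Exact distribution of one RP symbol s = sum_l w_l * b_l,
    with i.i.d. bits b_l ~ Bernoulli(p)."""
    dist = {0: 1.0}
    for w in weights:
        nxt = {}
        for s, q in dist.items():
            nxt[s] = nxt.get(s, 0.0) + q * (1.0 - p)    # bit takes value 0
            nxt[s + w] = nxt.get(s + w, 0.0) + q * p    # bit takes value 1
        dist = nxt
    mean = sum(s * q for s, q in dist.items())
    entropy = -sum(q * math.log2(q) for q in dist.values() if q > 0.0)
    return dist, mean, entropy

W6 = [1, -1, 2, -2, 4, -4, 4, -4]   # the multiset {±1, ±2, ±4, ±4}
dist, mean, H = rp_symbol_stats(W6, p=0.5)
```

Because W6 has zero mean, `mean` is 0 for every p, as Lemma 1 predicts; a non-symmetric multiset such as [1, 2] would instead give a p-dependent mean of 3p.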

B. Weight Selection The achievable transmission rate of CCM is a function of source sparsity, channel condition, and RP code design. Ideally, there is an optimal code for each source sparsity and channel condition. However, CCM is designed for ”blind” rate adaptation, i.e. the channel condition is not known to the sender. Therefore, the objective of RP code design is to find a set of weights which achieve an overall high throughput for the primary SNR range of wireless channels. We consider source sparsity between 0.1 and 0.5, and pick four representative values of source sparsity including 0.1, 0.15, 0.25 and 0.5 for the performance study. These values are selected such that their entropies are evenly spaced. For the sake of simplicity, we focus on the integer weights {1, 2, 4 . . .} which are powers of 2. According to Principle 2, the weights should have zero-mean. A simple choice is to have positive-negative symmetric weights. Besides, the size of weight multiset does not need to exceed 20. According to the numerical results by Barron et al. [55], for weights {−1, 1} the optimal check node degree is Lopt ≈ 2/p beyond which the performance gains will become marginal. Therefore, Lmax = 20 will be sufficient for source sparsity 0.1 and above. Table I lists several candidate weight multisets that satisfy the above requirements and Principle 1. The entropies of the generated RP symbols are listed in the last column. Next, we could select among these weight multisets by two ways. The first way is through Matlab simulations. For each weight multiset and source sparsity combination, 106 bits are transmitted over AWGN channel with SNR ranging from 5 dB to 25 dB. These throughputs are shown in Fig. 2. We observe that no single weight multiset can excel for all cases of source sparsity over the entire channel SNR range. 
However, if the primary SNR range is 5 dB to 20 dB, as in a typical WLAN, we can make the following choices: W0.5 = W6, W0.25 = W5, W0.15 = W4, and W0.1 = W3. The selected weight multisets are highlighted with red curves in Fig. 2. For completeness, the black curve in Fig. 2 shows the performance of using random weights drawn from the standard Gaussian distribution, decoded with the algorithm proposed by Rangan [58]. The comparison shows that random weights do not perform well, especially when the source is not sparse or has only moderate sparsity. Note that when the primary SNR range changes, different choices can be made. For example, if the channel varies between 18 dB and 25 dB, W5 (instead of W4) should be selected for source sparsity p = 0.15 according to Fig. 2(c). If the channel is always below 12 dB and the source sparsity is 0.5, then according to Fig. 2(a), all the listed weight multisets have


similar performance. Besides, the saturation effect that has been observed in linear encoding of real symbols also appears here. Different weight multisets have different saturation SNRs and rates, and the saturation point is also affected by the source sparsity.

The second way to select weights is through EXIT (extrinsic information transfer) chart analysis [59]. The EXIT chart has been widely used for the design and analysis of LDPC codes [60]. RP codes resemble LDPC codes in that they, too, can be represented by bipartite graphs, where variable nodes are source bits and check nodes are RP symbols. We refer to [59], [60] for further details on extrinsic information processing. As it is difficult to obtain a closed-form expression for the extrinsic information transfer in RP decoding, we measure the mutual information after each iteration of decoder processing. We use the notation of [59] and write I_A for the average mutual information between the bits and the a priori probabilities. Similarly, we write I_E for the average mutual information between the bits and the extrinsic probabilities. However, since the source is sparse, eq. (12) in [59] should be modified to:

    I_A = (1 − p) ∫_{−∞}^{+∞} p_A(ξ|X = 0) log2 [ p_A(ξ|X = 0) / p_A(ξ) ] dξ
        + p ∫_{−∞}^{+∞} p_A(ξ|X = 1) log2 [ p_A(ξ|X = 1) / p_A(ξ) ] dξ,        (10)

where p_A(ξ) = p · p_A(ξ|X = 1) + (1 − p) · p_A(ξ|X = 0). The extrinsic information I_E can be calculated similarly. Due to the space limit, we only show the EXIT charts for source sparsity p = 0.5 and p = 0.1, comparing weight multisets W3 through W6 in Fig. 3 and Fig. 4. For successful decoding, the two curves in a chart should cross at the source entropy H(p). Fig. 3 is obtained when the SNR is 20 dB and 190 RP symbols are transmitted for a length-480 bit block (a spectral efficiency of 5.05 bit/s/Hz). It is clear from the figure that weight multisets W3 and W4 cannot decode successfully under this setting. W5 is very close to successful decoding, but its two curves do not reach 1 on the Y-axis.
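Eq. (10) can be evaluated numerically. The sketch below makes the common modeling assumption (not stated in the paper) that the a priori values are consistent Gaussians, ξ | X=0 ~ N(+σ²/2, σ²) and ξ | X=1 ~ N(−σ²/2, σ²); the parameter σ controls the quality of the a priori information.

```python
import numpy as np

def binary_entropy(p):
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def I_A(p, sigma, n=20001):
    """Eq. (10) by numerical integration, under the Gaussian a priori model
    (an illustrative assumption): xi|X=0 ~ N(+sigma^2/2, sigma^2),
    xi|X=1 ~ N(-sigma^2/2, sigma^2)."""
    lim = 8 * sigma + sigma ** 2          # grid wide enough for both modes
    xi = np.linspace(-lim, lim, n)
    gauss = lambda mu: np.exp(-(xi - mu) ** 2 / (2 * sigma ** 2)) \
        / np.sqrt(2 * np.pi * sigma ** 2)
    p0, p1 = gauss(+sigma ** 2 / 2), gauss(-sigma ** 2 / 2)
    mix = (1 - p) * p0 + p * p1           # p_A(xi) in eq. (10)
    eps = 1e-300                          # guard the log against underflow
    f = (1 - p) * p0 * np.log2((p0 + eps) / (mix + eps)) \
        + p * p1 * np.log2((p1 + eps) / (mix + eps))
    return float(f.sum() * (xi[1] - xi[0]))
```

As expected, I_A ranges from 0 (uninformative prior, small σ) up to the source entropy H(p) (perfect prior, large σ), which is why the EXIT curves for a sparse source cross at H(p) rather than at 1.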
W6 is the weight multiset we chose for p = 0.5, and it ensures successful decoding at this high spectral efficiency. Moreover, the tunnel between its two curves is still quite wide, indicating that an even higher spectral efficiency could be achieved. Fig. 4 can be interpreted similarly. Since the source sparsity is 0.1, the two curves should cross at H(0.1) = 0.469. Under this test setting, W5 and W6 fail to decode, while W3 and W4 succeed. We further decrease the value of M to 120, and find that only W3 decodes successfully. This is consistent with our previous selection.

C. Encoding Matrix Construction

According to Principle 3, the construction of the encoding matrix G affects CCM performance, especially when the channel condition is good and the required number of RP symbols for decoding is small. Next, we present three steps to construct a G that satisfies Principle 3, taking W6 as an example. First, we construct three elementary matrices A1, A2 and A4. The structure of A1 is shown as follows. Matrices A2

[Fig. 2. Weight selection according to throughput in a simulated AWGN channel. Four panels, (a) p = 0.50, (b) p = 0.25, (c) p = 0.15 and (d) p = 0.10, plot throughput (bit/s/Hz) versus Es/N0 (dB) for weight multisets W1 through W7.]

[Fig. 3. EXIT chart for different weight multisets when SNR = 20 dB, p = 0.5, M = 190 for N = 480. Panels (a) W3, (b) W4, (c) W5 and (d) W6 plot the CND and VND curves, I_E,VND / I_A,CND versus I_A,VND / I_E,CND.]

[Fig. 4. EXIT chart for different weight multisets when SNR = 15 dB, p = 0.1, M = 120 for N = 480. Panels (a) W3 through (d) W6; the target crossing point is (0.469, 0.469).]

and A4 have the same structure, but with the non-zero values replaced by +2/−2 or +4/−4:

    A1 = [ +1 −1
                 +1 −1
                       . . .
                             +1 −1 ]

Second, we form the matrix G0 by stacking randomly column-permuted copies of A1, A2 and A4 as follows:

    G0 = [ π(A4)  π(A4)  π(A2)  π(A1)
           π(A2)  π(A1)  π(A4)  π(A4)
           π(A4)  π(A4)  π(A1)  π(A2)
           π(A1)  π(A2)  π(A4)  π(A4) ]

where π(·) denotes a random column permutation of a matrix. G0 has dimension N/2 × N, and it ensures that min_i ||G_{N/2}[i]||² = 1² + 2² + 4² + 4² = 37 and, further, that min_i ||G_{N/4}[i]||² = 1² + 4² = 17. By using different permutations, we can construct a virtually unlimited number of matrices G0. The third and final step in constructing G is to stack randomly generated G0's. In practice, we stack only two of them to form an N × N matrix, and reuse it repeatedly when the channel condition is poor.
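The construction above can be sketched as follows (an illustrative sketch: each π(·) is drawn as an independent column permutation, and the column-norm properties are asserted rather than proved):

```python
import numpy as np

def banded(k, m, rng):
    """A_k with randomly permuted columns: an m x 2m matrix whose i-th row
    holds +k and -k in columns 2i and 2i+1 (before permutation)."""
    A = np.zeros((m, 2 * m), dtype=int)
    for i in range(m):
        A[i, 2 * i] = +k
        A[i, 2 * i + 1] = -k
    return A[:, rng.permutation(2 * m)]

def build_G0(N, rng):
    """Stack permuted A1/A2/A4 blocks in the 4 x 4 arrangement of Section IV-C,
    yielding an (N/2) x N matrix."""
    m = N // 8
    layout = [[4, 4, 2, 1],
              [2, 1, 4, 4],
              [4, 4, 1, 2],
              [1, 2, 4, 4]]
    return np.block([[banded(k, m, rng) for k in row] for row in layout])

rng = np.random.default_rng(0)
G0 = build_G0(480, rng)
col_norms = (G0 ** 2).sum(axis=0)                # every column: 1+4+16+16 = 37
half_norms = (G0[: 480 // 4] ** 2).sum(axis=0)   # first N/4 rows: minimum 1+16 = 17
```

Each block-column contains A1, A2 and two copies of A4 exactly once, which is what makes every full column norm equal to 37 and the worst-case half-matrix column norm equal to 17.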

V. BELIEF PROPAGATION DECODING

The RP code (RPC) can be represented by a bipartite graph, like an LDPC code, where the variable nodes are source bits and the constraint nodes are RP symbols. Hence, an RPC can be decoded through belief propagation in a similar fashion. However, since RP symbols are generated by arithmetic addition rather than logical XOR, the belief computation at the constraint nodes is much more complex than in LDPC decoding. Reducing the computational complexity of the decoding algorithm is therefore key to the success of the entire CCM scheme. We discover that, for binary variable nodes, the belief computation at the constraint nodes can be accomplished more efficiently by direct convolution than by the fast Fourier transform (FFT) used in CS-BP. For each constraint node, we compute the convolution of the distributions of all its neighboring variable nodes, and then reuse this convolution for all outgoing messages by deconvolving the distribution of the node being processed. To save computational cost, multiplications by zero in the convolution are avoided. In addition, we propose a ZigZag iteration that guarantees a unique solution to the deconvolution. Finally, we show through analysis that the proposed decoding algorithm is computationally more efficient than the CS-BP algorithm. Let v and c denote the variable node and the constraint node, respectively. Throughout the algorithm description, we

introduce the notation listed in Table II.

TABLE II
DEFINITIONS FOR BELIEF PROPAGATION

w(v, c)    The weight on the edge between v and c.
μv→c       The message sent from v to c.
μc→v       The message sent from c to v.
p          The a priori probability of bit 1 for all variable nodes.
pv(·)      The probability density function (PDF) of variable node v.
pc(·)      The PDF of constraint node c, calculated based on the PDFs of all its neighboring variable nodes.
pc\v(·)    The PDF of constraint node c, calculated based on the PDFs of all its neighboring variable nodes except variable node v.
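The convolution/deconvolution kernel just described, and specified by eqs. (19)-(21) below, can be sketched numerically. This is an illustrative sketch assuming a positive integer weight w and PDFs stored as arrays starting at index 0; a negative weight such as the −4 of Fig. 5 is handled symmetrically.

```python
import numpy as np

def spike_convolve(pcv, w, p0, p1):
    """Eq. (19): p_c(n) = p_v(0)*p_{c\\v}(n) + p_v(1)*p_{c\\v}(n - w), w > 0.
    The weighted bit contributes a two-spike PDF (mass p0 at 0, p1 at w)."""
    L = len(pcv)
    pc = np.zeros(L + w)
    pc[:L] += p0 * pcv        # spike at 0
    pc[w:] += p1 * pcv        # spike at w (shift addition of Fig. 5)
    return pc

def zigzag_deconvolve(pc, w, p0, p1):
    """Recover p_{c\\v} from p_c by recursion; the direction is chosen so that
    the division uses the larger of p_v(0), p_v(1), for numerical stability."""
    L = len(pc) - w
    pcv = np.zeros(L)
    if p0 >= p1:
        for n in range(L):                    # forward recursion, eq. (20)
            prev = pcv[n - w] if n >= w else 0.0
            pcv[n] = (pc[n] - p1 * prev) / p0
    else:
        for n in range(L - 1, -1, -1):        # backward recursion, eq. (21)
            nxt = pcv[n + w] if n + w < L else 0.0
            pcv[n] = (pc[n + w] - p0 * nxt) / p1
    return pcv
```

Because the PDF is zero outside its finite support, each recursion starts from a known zero value, so the deconvolution has a unique solution in both directions, exactly as the ZigZag process of Fig. 5 suggests.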

RPC-BP Decoding Algorithm

1) Initialization: Initialize the messages from variable nodes to constraint nodes with the a priori probability:

    μv→c = pv(1) = p.

2) Computation at constraint nodes: For each constraint node c, compute the probability density function pc(·) via convolution (11). For each neighboring variable node v ∈ n(c), compute pc\v(·) via deconvolution (12):

    pc = ⊛_{v∈n(c)} ( w(c, v) · pv ),                                   (11)
    pc\v = pc ∗̃ ( w(c, v) · pv ).                                       (12)

Then, compute pv(0) and pv(1) based on the noise PDF pe and the received symbol sc:

    pv(0) = Σ_i pc\v(i) · pe(sc − i),                                   (13)
    pv(1) = Σ_i pc\v(i) · pe(sc − i − w(c, v)).                         (14)

Finally, compute the message μc→v via normalization:

    μc→v = pv(1) / ( pv(0) + pv(1) ).                                   (15)

3) Computation at variable nodes: For each variable node v, compute pv(0) and pv(1) via multiplication:

    pv(0) = (1 − p) Π_{u∈n(v)} (1 − μu→v),                              (16)
    pv(1) = p Π_{u∈n(v)} μu→v.                                          (17)

Then, for each neighboring constraint node c ∈ n(v), compute μv→c via division and normalization:

    μv→c = [ pv(1)/μc→v ] / [ pv(0)/(1 − μc→v) + pv(1)/μc→v ].          (18)

Repeat steps 2 and 3 until the maximum number of iterations is reached. We find that this maximum can be set to 15, beyond which any performance gain is marginal.

4) Output: For each variable node v, compute pv(0) and pv(1) according to (16) and (17), and output the estimated bit value via hard decision.

The main difference between this RPC-BP decoding algorithm and the CS-BP decoding algorithm lies in the computation at the constraint nodes (step 2). First, we process the variable nodes and the noise node in separate steps, because the former are binary and the latter is continuously valued. Second, we propose a ZigZag deconvolution to compute pc\v in (12). Fig. 5 depicts the convolution by shift addition and the ZigZag deconvolution.

[Fig. 5. Convolution by shift addition (from top to bottom) and ZigZag deconvolution (from bottom to top).]

The convolution flow is shown from top to bottom. In this example, the weight between variable node v and constraint node c is w(v, c) = −4. Hence, the PDF of w(v, c) · pv has only two spikes, at −4 and 0. The convolution of w(v, c) · pv and any PDF pc (including but not limited to pc\v) can be computed by

    ( pc ∗ w(v, c) · pv )(n) = pv(0) · pc(n) + pv(1) · pc(n − w(v, c)).   (19)

The addition in this equation is shown in the middle of the figure. The deconvolution flow is shown from bottom to top in Fig. 5, in which the rectangles highlight the ZigZag process. The figure demonstrates the ZigZag deconvolution from right to left; in theory, it can be performed in either direction. In practice, however, to maintain numerical accuracy the direction should be determined by the values of pv(0) and pv(1). When pv(0) > pv(1), the deconvolution should be computed by

    pc\v(n) = [ pc(n) − pc\v(n − w(v, c)) · pv(1) ] / pv(0).              (20)

When pv(0) ≤ pv(1), it should be computed by

    pc\v(n) = [ pc(n + w(v, c)) − pc\v(n + w(v, c)) · pv(0) ] / pv(1).    (21)

Let nmin and nmax be the minimum and the maximum value of constraint node c. For n ∉ [nmin, nmax], pc\v(n) = 0. Thus, we obtain a unique solution for the deconvolution by recursion.

Similar to CS-BP, the complexity of the RPC-BP algorithm is O(N). However, the constant factor in front of N is much smaller in RPC-BP than in CS-BP. Since most of the computation is taken by the constraint-node messages and the other computation


TABLE III
COMPLEXITY COMPARISON BETWEEN RPC-BP AND CS-BP

                             RPC-BP            CS-BP
W                            ×      +          ×       +
±(1, 2, 4, 4)                492    246        8192    9216
±(1, 1, 1, 2, 4, 4)          856    428        12288   13824
±(1, 1, 1, 2, 2, 2)          640    320        12288   13824
±(1, 1, 1, 1, 1, 1, 2, 2)    954    477        16384   18432

cost of both algorithms is the same, we compare the per-iteration computation cost at the constraint nodes, as shown in Table III. We can see that the computation cost of CS-BP is around 20 times that of RPC-BP.

VI. PERFORMANCE EVALUATION

In this section, we report the performance evaluation of the proposed CCM through simulations in MATLAB 2011b and emulations based on traced channel state information. Four representative values of source sparsity, namely 0.1, 0.15, 0.25 and 0.5, are considered. The primary evaluation metric is throughput (in bit/s/Hz).

A. Implementation

We implement CCM and three reference schemes as follows.

CCM: For each block of information bits, the sender progressively transmits RP symbols with a given step size until an acknowledgement is received or the maximum number of transmissions is reached. At the receiver end, the RP symbols are accumulated for RPC-BP decoding. If the demodulated bits pass the CRC check, an acknowledgement is delivered to the sender; otherwise, more RP symbols are needed for the next round of decoding. The bit block length for the RP code is set to N = 480, which is a multiple of all the possible weight multiset sizes. The progressive transmission step is set to ΔM = 24, and the maximum number of symbols for transmission is set to Mmax = 1920, which is equivalent to the rate of PLCP header transmission in the 802.11 PHY. In the PLCP header, we introduce 3 bits: one is the bit-flipping indicator and the remaining two are the source sparsity indicator. These three bits can be placed in the reserved positions of the PLCP header, so the overhead is negligible.

BICM with ideal source compression (denoted by BICM for brevity): We implement BICM [30], which is the state-of-the-art coded modulation scheme.
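The rate sets of the two baselines, as detailed in the following paragraphs, can be enumerated with exact arithmetic; this is an illustrative consistency check, taking spectral efficiency as code rate times bits per constellation symbol.

```python
from fractions import Fraction

# BICM: coding rates {1/2, 2/3, 3/4, 5/6} x {QPSK, 16QAM, 64QAM, 256QAM}.
bicm_rates = {Fraction(num, den) * bps
              for num, den in ((1, 2), (2, 3), (3, 4), (5, 6))
              for bps in (2, 4, 6, 8)}

# HARQ: punctured turbo rates 8/(8+l), l = 0, 2, ..., 16,
# combined with {QPSK, 16QAM, 64QAM}.
harq_rates = {Fraction(8, 8 + l) * bps
              for l in range(0, 18, 2)
              for bps in (2, 4, 6)}

# len(bicm_rates) -> 14 distinct rates out of 4*4 = 16 combinations
# len(harq_rates) -> 21 distinct rates out of 9*3 = 27 combinations
```

The duplicates arise where different modulation/coding pairs yield the same spectral efficiency (e.g., 16QAM at rate 3/4 and 64QAM at rate 1/2 both give 3 bit/s/Hz).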
In order to avoid the performance loss due to short block lengths, we implement BICM with a block length of 2304 and an interleaver length of 23040, which is the longest block length setting defined in the WiMax standard [61]. The coding rates are 1/2, 2/3, 3/4 and 5/6, and the modulation schemes are QPSK, 16QAM, 64QAM, and 256QAM. Altogether, they create 16 combinations and 14 distinct rates. As BICM does not have compression capability, we assume an ideal source compressor for BICM, capable of representing a length-N source block with N·H(p) bits, where H(p) is the source entropy. For simplicity, we just generate source blocks

of length N·H(p) from unbiased Bernoulli trials, but count N bits for each block when computing the throughput.

HARQ with ideal source compression (denoted by HARQ for brevity): We implement type-II HARQ (a.k.a. incremental redundancy), which is the state-of-the-art rate adaptation scheme. A rate-1/3 Turbo code is used as the mother code. As in WCDMA and LTE, the code length is 1024 bits, and the component encoder is based on a recursive convolutional code with polynomials (13, 15) in octal. The puncturing period is set to 8 for smooth rate adaptation, and the puncturing pattern is the same as that employed by Rowitch and Milstein [40]. Rates of 8/(8+l) (l = 0, 2, ..., 16) are thus created. Due to the limited adaptation range of HARQ, three modulation schemes are adopted: QPSK, 16QAM, and 64QAM. The different modulation and coding schemes create 27 combinations and 21 distinct rates. The decoder used at the receiver is a soft-input Viterbi decoder with 8 iterations. Similarly, we assume an ideal source compressor concatenated in front of HARQ.

Note that the ideal rates are not practically achievable for either BICM or HARQ, because an ideal source compressor does not exist for short block lengths on the order of one thousand bits. However, we consider these two reference schemes as upper bounds on the rates achievable by these two categories of rate adaptation.

JSCC: We also implement a practical joint source-channel coding scheme as described in [23]. It is essentially an HARQ scheme, so the same punctured Turbo code as in HARQ is used. The difference is that, instead of using a separate source encoder, JSCC utilizes the prior information at the decoder. Therefore, JSCC and HARQ become exactly the same when p = 0.5.

B. Simulations over AWGN Channel

Simulations are carried out under static AWGN channels at integer channel SNRs from 5 dB to 25 dB, which covers the main SNR range of most 802.11 systems. Fig.
6 shows the results in terms of throughput versus the required channel SNR for a BER less than 10^−6. For each scheme, we plot one curve for each sender setting (a modulation and coding combination for BICM, a modulation scheme for HARQ, and a weight multiset for CCM); curves beyond the effective range may be omitted for clarity. The Shannon limits, calculated by dividing the Shannon capacity by the source entropy, are also plotted. Notice that CCM is not expected to achieve a higher throughput than the two reference schemes with ideal source coding. Rather, through these simulations, we would like to show that CCM has a much wider adaptation range than the two reference schemes while achieving a throughput similar to the ideal reference schemes. Any practical implementation of separate source coding combined with BICM or HARQ would not achieve the throughput of the reference schemes, due to the inevitable loss in source coding. Fig. 6(a) and Fig. 6(b) show that, when the source is not sparse or has moderate sparsity, CCM can use the same weight multiset for channels from 5 dB to 25 dB, an adaptation range spanning over 20 dB. In contrast, for the

[Fig. 6. Comparing the throughput of CCM and two reference schemes for different source sparsities. Panels (a) p = 0.50, (b) p = 0.25, (c) p = 0.15 and (d) p = 0.10 plot throughput (bit/s/Hz) versus Es/N0 (dB) for CCM, HARQ, JSCC, BICM, and the Shannon limit (JSCC coincides with HARQ in panel (a)).]

same SNR range, BICM switches among 12 modulation and coding schemes, and HARQ switches among three modulation schemes. Such a wide adaptation range brings CCM significant benefits: the sender does not require channel state feedback, and it avoids the potential performance loss due to mismatch between the estimated and the actual channel states. The other reference scheme, HARQ+JSCC, has a similar adaptation range to the proposed CCM, but its achievable rate is much lower. Fig. 6(c) and Fig. 6(d) show the results when the source sparsity is more prominent. We observe that the rate of CCM saturates at a lower SNR as p decreases. The saturation point is around 19 dB when p = 0.15 and around 16 dB when p = 0.1. This suggests that, for very sparse data, CCM may need to change the weight multiset when the channel becomes good. These two figures show that two weight multisets are sufficient for CCM to cover the main channel SNR range when p = 0.15 and p = 0.1.

C. Emulation in Real Channel Environment

We also implement the CCM scheme on a software radio platform called SORA [62]. The evaluation scenarios include mobile Line-of-Sight (LOS), stationary Non-Line-of-Sight (NLOS), and mobile NLOS. For each scenario, the channel state is traced and a fair comparison between CCM

TABLE IV
AVERAGE RATE IN EACH SCENARIO (BITS/S/HZ)

Scenario   Scheme   p = 0.5   p = 0.25   p = 0.15   p = 0.1
LOS M      CCM      4.74      5.83       7.23       8.50
           HARQ     4.02      4.95       6.58       8.56
           BICM     3.64      4.49       5.97       7.76
NLOS S     CCM      3.74      4.79       6.18       7.42
           HARQ     2.93      3.62       4.81       6.25
           BICM     2.28      2.81       3.74       4.87
NLOS M     CCM      2.83      3.73       5.30       6.61
           HARQ     2.43      2.99       3.98       5.18
           BICM     2.16      2.66       3.54       4.60
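From the averages in Table IV, the headline gains quoted in the abstract can be reproduced with simple arithmetic:

```python
# Average rates from Table IV (bits/s/Hz).
ccm_over_bicm = 4.79 / 2.81 - 1   # stationary NLOS, p = 0.25: ~70% gain over BICM
ccm_over_harq = 5.30 / 3.98 - 1   # mobile NLOS, p = 0.15: ~33% gain over HARQ
```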

and two reference schemes, HARQ and BICM, is carried out over the traced CSI. We do not include JSCC in the emulation because the earlier simulations have shown that its achievable rate is much lower than that of the other schemes. In this evaluation, CCM selects a fixed weight multiset for the given source sparsity and does not adapt it to the channel condition (even when p = 0.15 and p = 0.1). Thus, CCM completely avoids channel estimation and CSI feedback. The two reference schemes, as they need to switch among several modulation and coding settings, require channel feedback. We assume that the receiver can precisely measure the actual SNR from each received packet and provide immediate feedback, and the sender then uses the algorithm in [63] for rate adaptation. Table IV lists the average throughput of the three schemes

[Fig. 7. CDF of throughput in the mobile LOS case. Panels (a) p = 0.50, (b) p = 0.25, (c) p = 0.15 and (d) p = 0.10 plot the CDF versus throughput (bits/s/Hz) for CCM, HARQ and BICM.]

[Fig. 8. CDF of throughput in the stationary NLOS case.]

[Fig. 9. CDF of throughput in the mobile NLOS case.]

under different source and channel settings. To provide more detail on how each scheme behaves, we show the CDFs (cumulative distribution functions) of the instantaneous rates in Fig. 7, Fig. 8 and Fig. 9, respectively, for the three evaluated scenarios. Each point (d, T) on a curve can be interpreted as "the rate is below T for a fraction d of the time"; therefore, the rightmost curve achieves the highest throughput. Overall, CCM performs the best and BICM the worst in the trace-driven emulation, even though the simulation results suggest that BICM could achieve the highest rate in static channel settings. This clearly demonstrates the advantage CCM gains by using a fixed modulation. In the two NLOS scenarios, the performance gap between the reference schemes and CCM is even greater than in the LOS scenario, because the channel varies more dramatically and the two reference schemes make more improper rate-selection decisions. In the NLOS stationary scenario, CCM achieves a 70% throughput gain over BICM when p = 0.25. In the NLOS mobile scenario, CCM achieves a 33% throughput gain over HARQ when p = 0.15. These results amply confirm that CCM has achieved the design goal of embedding data compression into seamless rate adaptation.

VII. CONCLUSION

We have described in this paper a novel compressive coded modulation scheme capable of simultaneously achieving both source compression and seamless rate adaptation. In this novel scheme, data compression is elegantly embedded into coded modulation through a virtually rateless multilevel code named the RP code. Through careful design, the RP encoder generates multilevel symbols which are sequentially and evenly mapped to a dense and universal constellation. This innovative mapping preserves the statistics of the channel noise so that they can be used effectively during RPC-BP decoding. We have also developed a practical implementation of the proposed CCM scheme, using EXIT chart analysis for the weight selection of the RP codes. Extensive simulations and emulations have been carried out to evaluate the performance of CCM. The comparisons against conventional schemes show that CCM has a much wider adaptation range than existing approaches, and consequently achieves a higher throughput under varying channel conditions. These promising results validate the practical feasibility of CCM. We are currently working on extending the proposed CCM to Multi-Input Multi-Output (MIMO) systems, where we aim to achieve an approximately universal tradeoff between diversity gain and multiplexing gain, resulting in the desired seamless receiver rate adaptation.

REFERENCES

[1] P. Destounis, J. D. Garofalakis, P. Kappos, and J. Tzimas, “Measuring the mean web page size and its compression to limit latency and improve download time,” Internet Research, vol. 11, no. 1, pp. 10–17, 2001.


[2] H. Cui, C. Luo, K. Tan, F. Wu, and C. W. Chen, “Seamless rate adaptation for wireless networking,” in Proc. 2011 ACM MSWiM, pp. 437–446. [3] A. Gudipati and S. Katti, “Strider: automatic rate adaptation and collision handling,” in Proc. 2011 ACM SIGCOMM, pp. 158–169. [4] J. Perry, P. A. Iannucci, K. E. Fleming, H. Balakrishnan, and D. Shah, “Spinal codes,” in Proc. 2012 ACM SIGCOMM, pp. 49–60. [5] E. Cand`es, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006. [6] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006. [7] R. Hamzaoui, V. Stankovic, and Z. Xiong, “Optimized error protection of scalable image bit streams,” IEEE Signal Process. Mag., vol. 22, no. 6, pp. 91–107, 2005. [8] L. Kondi, F. Ishtiaq, and A. Katsaggelos, “Joint source-channel coding for motion-compensated DCT-based SNR scalable video,” IEEE Trans. Image Process., vol. 11, no. 9, pp. 1043–1052, 2002. [9] Z. He, J. Cai, and C. W. Chen, “Joint source channel rate-distortion analysis for adaptive mode selection and rate control in wireless video coding,” IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 6, pp. 511–523, 2002. [10] Q. Zhang, W. Zhu, Z. Ji, and Y.-Q. Zhang, “A power-optimized joint source channel coding for scalable video streaming over wireless channel,” in Proc. 2001 IEEE ISCAS, vol. 5, pp. 137–140. [11] S. Arslan, P. Cosman, and L. Milstein, “Generalized unequal error protection LT codes for progressive data transmission,” IEEE Trans. Image Process., vol. 21, no. 8, pp. 3586–3597, 2012. [12] X. Pan, A. Cuhadar, and A. Banihashemi, “Combined source and channel coding with JPEG2000 and rate-compatible low-density paritycheck codes,” IEEE Trans. Signal Process., vol. 54, no. 3, pp. 1160– 1164, 2006. [13] S. Aditya and S. Katti, “FlexCast: graceful wireless video streaming,” in Proc. 
2011 ACM Mobicom, pp. 277–288. [14] O. Bursalioglu, G. Caire, and D. Divsalar, “Joint source-channel coding for deep-space image transmission using rateless codes,” IEEE Trans. Commun., vol. PP, no. 99, pp. 1–14, 2013. [15] A. Liveris, Z. Xiong, and C. Georghiades, “Compression of binary sources with side information at the decoder using LDPC codes,” IEEE Commun. Lett., vol. 6, no. 10, pp. 440–442, 2002. [16] M. Sartipi and F. Fekri, “Source and channel coding in wireless sensor networks using LDPC codes,” in Proc. 2004 IEEE SECON, pp. 309– 316. [17] W. Zhong and J. Garcia-Frias, “LDGM codes for channel coding and joint source-channel coding of correlated sources,” EURASIP J. on Advances in Signal Process., vol. 2005, no. 6, pp. 178–231, 2005. [18] Q. Xu, V. Stankovic, and Z. Xiong, “Distributed joint source-channel coding of video using raptor codes,” IEEE J. Sel. Areas Commun., vol. 25, no. 4, pp. 851–861, 2007. [19] G. Caire, S. Shamai, and S. Verdu, “A new data compression algorithm for sources with memory based on error correcting codes,” in Proc. 2003 IEEE ITW, pp. 291–295. [20] G. Caire, S. Shamai, A. Shokrollahi, and S. Verdu, “Universal variablelength data compression of binary sources using fountain codes,” in Proc. 2004 IEEE ITW, pp. 123–128. [21] J. Del Ser, P. M. Crespo, I. Esnaola, and J. Garcia-Frias, “Joint sourcechannel coding of sources with memory using turbo codes and the burrows-wheeler transform,” IEEE Trans. Commun., vol. 58, no. 7, pp. 1984–1992, 2010. [22] M. Fresia, F. Perez-Cruz, and H. Poor, “Optimized concatenated LDPC codes for joint source-channel coding,” in Proc. 2009 IEEE ISIT, pp. 2131–2135. [23] G.-C. Zhu and F. Alajaji, “Turbo codes for nonuniform memoryless sources over noisy channels,” IEEE Commun. Lett., vol. 6, no. 2, pp. 64–66, 2002. [24] F. Cabarcas, R. Demo Souza, and J. Garcia-Frias, “Turbo coding of strongly nonuniform memoryless sources with unequal energy allocation and PAM signaling,” IEEE Trans. 
Signal Process., vol. 54, no. 5, pp. 1942–1946, 2006. [25] G. Ungerboeck and I. Csajka, “On improving data-link performance by increasing the channel alphabet and introducing sequence coding,” in Proc. 1976 IEEE ISIT, p. 53. [26] G. Ungerboeck, “Channel coding with multilevel/phase signals,” IEEE Trans. Inf. Theory, vol. 28, no. 1, pp. 55–67, 1982. [27] H. Imai and S. Hirakawa, “A new multilevel coding method using errorcorrecting codes,” IEEE Trans. Inf. Theory, vol. 23, no. 3, pp. 371–377, 1977.


[28] U. Wachsmann, R. F. H. Fischer, and J. Huber, “Multilevel codes: theoretical concepts and practical design rules,” IEEE Trans. Inf. Theory, vol. 45, no. 5, pp. 1361–1391, 1999.
[29] E. Zehavi, “8-PSK trellis codes for a Rayleigh channel,” IEEE Trans. Commun., vol. 40, no. 5, pp. 873–884, 1992.
[30] G. Caire, G. Taricco, and E. Biglieri, “Bit-interleaved coded modulation,” IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 927–946, 1998.
[31] A. Martinez, A. Guillén i Fàbregas, G. Caire, and F. M. J. Willems, “Bit-interleaved coded modulation revisited: a mismatched decoding perspective,” IEEE Trans. Inf. Theory, vol. 55, no. 6, pp. 2756–2765, 2009.
[32] C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: turbo-codes. 1,” in Proc. 1993 IEEE ICC, vol. 2, pp. 1064–1070.
[33] R. Gallager, “Low Density Parity Check Codes,” MIT, Cambridge, 1963.
[34] D. J. C. MacKay and R. M. Neal, “Near Shannon limit performance of low density parity check codes,” Electronics Letters, vol. 33, no. 6, pp. 457–458, 1997.
[35] S. Nanda, K. Balachandran, and S. Kumar, “Adaptation techniques in wireless packet data services,” IEEE Commun. Mag., vol. 38, no. 1, pp. 54–64, 2000.
[36] S. H. Y. Wong, H. Yang, S. Lu, and V. Bharghavan, “Robust rate adaptation for 802.11 wireless networks,” in Proc. 2006 ACM MobiCom, pp. 146–157.
[37] Q. Xia and M. Hamdi, “Smart sender: a practical rate adaptation algorithm for multirate IEEE 802.11 WLANs,” IEEE Trans. Wireless Commun., vol. 7, no. 5, pp. 1764–1775, 2008.
[38] Y. Song, X. Zhu, Y. Fang, and H. Zhang, “Threshold optimization for rate adaptation algorithms in IEEE 802.11 WLANs,” IEEE Trans. Wireless Commun., vol. 9, no. 1, pp. 318–327, 2010.
[39] S. M. Kim, W. Choi, T.-W. Ban, and D. K. Sung, “Optimal rate adaptation for hybrid ARQ in time-correlated Rayleigh fading channels,” IEEE Trans. Wireless Commun., vol. 10, no. 3, pp. 968–979, 2011.
[40] D. Rowitch and L. Milstein, “On the performance of hybrid FEC/ARQ systems using rate compatible punctured turbo (RCPT) codes,” IEEE Trans. Commun., vol. 48, no. 6, pp. 948–959, 2000.
[41] A. Shokrollahi, “Raptor codes,” IEEE Trans. Inf. Theory, vol. 52, no. 6, pp. 2551–2567, 2006.
[42] E. J. Candès, “Compressive sampling,” in Proc. 2006 Intl. Congress of Mathematicians, pp. 1433–1452.
[43] D. Donoho and Y. Tsaig, “Fast solution of l1-norm minimization problems when the solution may be sparse,” IEEE Trans. Inf. Theory, vol. 54, no. 11, pp. 4789–4812, 2008.
[44] J. Tropp and A. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[45] T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 265–274, 2009.
[46] W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
[47] M. Duarte, M. Wakin, D. Baron, and R. Baraniuk, “Universal distributed sensing via random projections,” in Proc. 2006 IEEE IPSN, pp. 177–185.
[48] W. Bajwa, J. Haupt, A. Sayeed, and R. Nowak, “Joint source-channel communication for distributed estimation in sensor networks,” IEEE Trans. Inf. Theory, vol. 53, no. 10, pp. 3629–3653, 2007.
[49] S. Feizi, M. Medard, and M. Effros, “Compressive sensing over networks,” in Proc. 2010 Allerton Conference on Communication, Control, and Computing, pp. 1129–1136.
[50] S. Feizi and M. Medard, “A power efficient sensing/communication scheme: joint source-channel-network coding by using compressive sensing,” in Proc. 2011 Allerton Conference on Communication, Control, and Computing, pp. 1048–1054.
[51] F. Chen, F. Lim, O. Abari, A. Chandrakasan, and V. Stojanovic, “Energy-aware design of compressed sensing systems for wireless sensors under performance and reliability constraints,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 60, no. 3, pp. 650–661, 2013.
[52] S. Sarvotham, D. Baron, and R. Baraniuk, “Sudocodes - fast measurement and reconstruction of sparse signals,” in Proc. 2006 IEEE ISIT, pp. 2804–2808.
[53] F. Wu, J. Fu, Z. Lin, and B. Zeng, “Analysis on rate-distortion performance of compressive sensing for binary sparse source,” in Proc. 2009 IEEE DCC, pp. 113–122.
[54] X. L. Liu, C. Luo, and F. Wu, “Formulating binary compressive sensing decoding with asymmetrical property,” in Proc. 2011 IEEE DCC, pp. 213–222.


[55] D. Baron, S. Sarvotham, and R. Baraniuk, “Bayesian compressive sensing via belief propagation,” IEEE Trans. Signal Process., vol. 58, no. 1, pp. 269–280, 2010.
[56] S. Jakubczak and D. Katabi, “A cross-layer design for scalable mobile video,” in Proc. 2011 ACM MobiCom, pp. 289–300.
[57] G. Caire, S. Shamai, A. Shokrollahi, and S. Verdú, “Fountain codes for lossless data compression,” DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 68, pp. 1–18, 2005.
[58] S. Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in Proc. 2011 IEEE ISIT, pp. 2168–2172.
[59] S. ten Brink, “Convergence behavior of iteratively decoded parallel concatenated codes,” IEEE Trans. Commun., vol. 49, no. 10, pp. 1727–1737, 2001.
[60] S. ten Brink, G. Kramer, and A. Ashikhmin, “Design of low-density parity-check codes for modulation and detection,” IEEE Trans. Commun., vol. 52, no. 4, pp. 670–678, 2004.
[61] “IEEE standard for local and metropolitan area networks - part 16: air interface for broadband wireless access systems,” IEEE Std 802.16-2009, May 2009.
[62] K. Tan et al., “Sora: high performance software radio using general purpose multi-core processors,” in Proc. 2009 USENIX NSDI, pp. 75–90.
[63] X. Chen, P. Gangwal, and D. Qiao, “Practical rate adaptation in mobile environments,” in Proc. 2009 IEEE PerCom, pp. 1–10.

Hao Cui received his B.S. degree in Electronic Engineering from the University of Science and Technology of China in 2008. He is currently pursuing the Ph.D. degree in the Department of Electronic Engineering and Information Science, University of Science and Technology of China. His research interests include source and channel coding and signal processing for wireless multimedia communication and networking.

Chong Luo received her B.S. degree from Fudan University in Shanghai, China, in 2000 and her M.S. degree from the National University of Singapore in 2002. She has been with Microsoft Research Asia since 2003, where she is a researcher in the Internet Media group. She is a Member of the IEEE. Her research interests include wireless networking, wireless sensor networks and multimedia communications.

Jun Wu received his B.S. degree in Information Engineering and M.S. degree in Communication and Electronic System from XIDIAN University in 1993 and 1996, respectively, and his Ph.D. degree in Signal and Information Processing from Beijing University of Posts and Telecommunications in 1999. He joined Tongji University as a professor in 2010; before joining Tongji, he was a Principal Scientist at Huawei and Broadcom. His research interests include wireless communication, information theory, and signal processing.

Chang Wen Chen (F’04) received the B.S. degree from the University of Science and Technology of China in 1983, the M.S.E.E. degree from the University of Southern California, Los Angeles, in 1986, and the Ph.D. degree from the University of Illinois at Urbana-Champaign in 1992. He is a Professor of Computer Science and Engineering at the State University of New York at Buffalo. Prof. Chen was the Editor-in-Chief of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY for two terms, from January 2006 to December 2009. He has served as an Editor for the PROCEEDINGS OF THE IEEE, the IEEE TRANSACTIONS ON MULTIMEDIA, the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, IEEE Multimedia Magazine, the Journal of Wireless Communication and Mobile Computing, the EURASIP Journal of Signal Processing: Image Communications, and the Journal of Visual Communication and Image Representation. He has also chaired and served on numerous technical program committees for IEEE, ACM, and other international conferences. He was elected an IEEE Fellow for his contributions to digital image and video processing, analysis, and communications, and an SPIE Fellow for his contributions to electronic imaging and visual communications.

Feng Wu (F’13) received the B.S. degree in electrical engineering from XIDIAN University in 1992, and the M.S. and Ph.D. degrees in computer science from the Harbin Institute of Technology, Harbin, China, in 1996 and 1999, respectively. In 1999, he joined Microsoft Research Asia, Beijing, China, where he is currently a Senior Researcher/Research Manager. He has authored or co-authored over 200 publications, including 50 journal papers, and 13 of his techniques have been adopted into international video coding standards. His current research interests include image and video compression, media communication, and media analysis and synthesis. Dr. Wu serves as an Associate Editor for various publications, such as the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY and the IEEE TRANSACTIONS ON MULTIMEDIA. He served as the Technical Program Committee (TPC) Chair of MMSP 2011, VCIP 2010, and PCM 2009, the TPC Track Chair of ICME 2013, ICIP 2012, ICME 2012, ICME 2011, and ICME 2009, and the Special Sessions Chair of ISCAS 2013 and ICME 2010. He received the Best Paper Award at IEEE T-CSVT 2009, PCM 2008, and SPIE VCIP 2007. He was elected an IEEE Fellow for his contributions to visual data compression and communication.