Neural Processing Letters (2006) 24:179–192 DOI 10.1007/s11063-006-9020-y

© Springer 2006

Soft-Decoding SOM for VQ Over Wireless Channels

CHI-SING LEUNG1,*, HERBERT CHAN1,2 and WAI HO MOW2

1 Department of Electronic Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong. e-mail: [email protected]
2 Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology, Hong Kong, Hong Kong

Abstract. A self-organizing map (SOM) approach for vector quantization (VQ) over wireless channels is presented. We introduce a soft decoding SOM-based robust VQ (RVQ) approach with performance comparable to that of the conventional channel-optimized VQ (COVQ) approach. In particular, our SOM approach avoids the time-consuming index assignment process of traditional RVQs and does not require a reliable feedback channel for COVQ-like training. Simulation results show that our approach can offer a performance gain over the conventional COVQ approach. For data sources with Gaussian distribution, the gain of our approach is demonstrated to be in the range of 1–4 dB. For image data, our approach gives a performance comparable to a sufficiently trained COVQ, and is superior when a similar number of training epochs is used. To further improve the performance, a SOM-based COVQ approach is also discussed.

Key words. soft-decoding, self-organizing map, vector quantization, wireless channels

1. Introduction

Vector quantization (VQ) is a robust and efficient source coding method for data compression [1, 2]. In the past, the vector quantizer and the channel signalling scheme were often designed and implemented separately; that is, the VQ codebook was usually designed for a noiseless channel, the so-called source-optimized VQ (SOVQ) approach. The traditional hard decoding algorithm for VQ over a noisy channel produces an estimate of the transmitted codevector based on the received signal. As the estimate is constrained to be one of the codevectors, the hard decoding output [3, 4] is suboptimal with respect to the minimum mean square error (MMSE) criterion. As a result, when the communication channel is noisy, the received data suffer from intense impulsive noise. To reduce the effect of channel noise on the received data, a soft decoding scheme [3, 4] can be used. In soft decoding, the estimate is a weighted sum of all codevectors, where the weighting factors are derived from the unquantized matched filter outputs of the receiver. Even with soft decoding, the resultant distortion may still be large if the codebook and the channel signalling scheme are designed separately [5, 6]. To further reduce the distortion, we should properly design the mapping from the VQ codebook to the signal constellation [5–10]. This problem can

* Corresponding author.


be formulated as an assignment problem. Many heuristic algorithms have been proposed to address this so-called index assignment problem [5–10]. These algorithms can be classified into two categories, namely, the robust VQ (RVQ) [5–7] and the channel-optimized VQ (COVQ) [8–10]. However, most of the existing algorithms require intensive computation, tremendous storage overhead, and channel state information. These requirements incur various practical difficulties in applying the two aforementioned approaches to wireless applications. Specifically, the characteristics of a wireless mobile channel [11] are often fast time-varying due to unpredictable movement, obstacles, and weather. Also, channel state information of sufficient accuracy is typically unavailable for VQ training.

In the RVQ approach, a codebook is first trained from the data source. Afterwards, the issue of VQ sensitivity to channel noise is formulated as an assignment problem from the codebook to the signal constellation [5–7]. In these approaches, the objective function is a function of the system symbol error rate. Zeger and Gersho [5] introduced an algorithm to construct the assignment of codevectors to codewords of a binary error-correcting code. The algorithm iteratively exchanges the positions of two codevectors so as to successively reduce the cost function. This technique was later applied to encode speech data in [15]. Instead of using a deterministic rule to exchange the positions of two codevectors, Farvardin [6] used a simulated annealing (SA) method to reduce the cost function. In addition to using the average distortion as the cost criterion, Potter and Chiang [7] used a minimax distortion criterion. In all these algorithms, we need to calculate the distances between all pairs of distinct codevectors and assume knowledge of the conditional symbol error probabilities.
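As an illustration of the pair-swap idea of [5], the following sketch (hypothetical code, not from the paper) repeatedly swaps the signal positions of two codevectors whenever the swap lowers the expected channel distortion; `assignment_cost` is a simplified cost that weights squared codevector distances by the symbol transition probabilities:

```python
def assignment_cost(perm, codebook, p_err):
    """Expected channel distortion of mapping codevector k to signal perm[k]:
    sum over pairs of P(receive s_perm[j] | send s_perm[i]) * ||c_i - c_j||^2."""
    M = len(codebook)
    return sum(p_err[perm[i]][perm[j]] *
               sum((a - b) ** 2 for a, b in zip(codebook[i], codebook[j]))
               for i in range(M) for j in range(M))

def pair_swap_assignment(codebook, p_err):
    """Greedy index assignment: keep swapping the signals assigned to two
    codevectors as long as some swap reduces the expected distortion."""
    M = len(codebook)
    perm = list(range(M))          # perm[k] = signal index of codevector k
    best = assignment_cost(perm, codebook, p_err)
    improved = True
    while improved:
        improved = False
        for i in range(M):
            for j in range(i + 1, M):
                perm[i], perm[j] = perm[j], perm[i]
                cost = assignment_cost(perm, codebook, p_err)
                if cost < best - 1e-12:
                    best, improved = cost, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]   # undo the swap
    return perm, best
```

For a toy 1-D codebook and a channel that only confuses adjacent signals, this search assigns nearby signals to nearby codevectors. Note that it needs the full transition matrix, which is exactly the channel knowledge criticized below.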
Hence, this approach is not suitable for fast time-varying situations, for example, a wireless channel or a time-varying data source.

In the COVQ approach [8–14], the codebook is typically trained by a Linde-Buzo-Gray (LBG)-like algorithm over the noisy channel [8]. Hence, after training, the codebook is optimized for that noisy channel. In [9, 10], COVQ was further investigated under various models of noise and channel impairments. In [12, 13], the performance of COVQ with a turbo code was investigated.1 Also, the use of COVQ for transmission of speech data was investigated in [14]. In the COVQ approach, we assume that a reliable feedback channel for training is available and that the channel variation is slow enough that the channel signal-to-noise ratio (SNR) is almost the same in both the VQ training and operating modes. In addition, the number of training epochs required by COVQ is very large. These limitations incur practical difficulties in applying COVQ to wireless applications. Moreover, although the objective function of COVQ is well defined,

1 The channel gain range of the turbo-COVQ is very narrow. This is because the bit error rate curve of the turbo code is very sharp. That is, the turbo code can correct most bit errors when the channel noise is less than a certain value. When the channel noise is greater than this value, the turbo code cannot correct bit errors, and hence there is a sudden drop in the signal-to-reconstruction error ratio (SRER) curve in Figure 2 of [12].


the trained codebook is a sub-optimal one. This is because the training of the codebook is a highly nonlinear optimization problem. Hence, it is of practical interest to find a VQ method that can achieve a performance comparable with COVQ, but without requiring so much knowledge of the wireless channel.

An alternative form of RVQ is the self-organizing map (SOM) approach [16], originally proposed in [17] for hard decoding over additive white Gaussian noise channels. This approach provides channel robustness by preserving a neighborhood structure, and it avoids the undesirable, time-consuming index assignment process. However, a problem of the SOM approach as presented in [17] is that the VQ decoder uses a hard decision method and is therefore not an MMSE decoder.

This paper proposes a soft decoding SOM-based robust VQ approach. We investigate the performance of soft decoding SOM approaches over a fast time-varying wireless channel, which is modelled as an independent Rayleigh fading channel [11]. A key advantage of the soft decoding SOM is that, although it does not need any channel state information during training, its performance is comparable to that of COVQ, which takes full advantage of channel state information. Under the assumption that accurate channel state information is available, we shall show how a hybrid of the SOM and COVQ approaches, namely, the soft decoding SOM-COVQ approach, can further improve the performance.

Simulation results show that our approaches are generally superior (or at least comparable) to the conventional COVQ approach. For data sources with Gaussian distribution, our approaches can provide a channel gain in the range of 1–4 dB compared with the conventional COVQ. For image data, our approaches are better than the conventional COVQ approach with a similar number of training epochs, and are comparable to a sufficiently trained COVQ.

The rest of this paper is organized as follows. In Section 2, some background information is presented. Section 3 describes our SOM-based approaches. Simulation results are presented in Section 4. Section 5 concludes the paper.

2. Background

A VQ model over a wireless channel is shown in Figure 1. We use the common independent Rayleigh fading channel to model the wireless channel [11]. Given a codebook $Y = \{c_1, \ldots, c_M\}$ in $\mathbb{R}^k$, for an input $x$, the output of the VQ process is an index $i^*$, where $c_{i^*} \in Y$ is the closest codevector to the input $x$. In other words, the codebook $Y$ partitions the space $\mathbb{R}^k$ into $M$ regions $\Omega = \{\Omega_1, \ldots, \Omega_M\}$. The mapper interfaces the source coder to the channel. It takes the source coder's output $i^*$ and produces the channel signal $s_{i^*} \in S = \{s_1, \ldots, s_M\}$. The set of

Figure 1. The VQ system over a wireless channel.


Figure 2. The signal sets of 16 QPSK and 16 QAM.

channel signals $S = \{s_1, \ldots, s_M\}$ is called the signal constellation. Two common modulation methods (Figure 2), namely quadrature phase shift keying (QPSK) and quadrature amplitude modulation (QAM), are considered here.

The channel is assumed to be an independent Rayleigh fading channel. The received signal is given by

$$ r = a\, s_{i^*} + n, \qquad (1) $$

where $a$ is the Rayleigh-distributed fading factor that randomly scales the signal, and the noise $n$ is a two-dimensional Gaussian random vector with covariance matrix

$$ C_n = \sigma_n^2 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. $$

Denoting the variance of $a$ by $\sigma_a^2$, the SNR of the channel can be expressed as

$$ \mathrm{SNR} = \frac{\sigma_a^2}{2\sigma_n^2}. $$

The symbol detector makes symbol decisions based on the conditional probability density values (likelihood values) $p(r \mid s_i)$, $i = 1, \ldots, M$. In the case of hard decoding, the output $\hat{x}$ is given by $\hat{x} = c_{i'}$, where $p(r \mid s_{i'}) > p(r \mid s_i)$ for all $i \neq i'$. When the estimated codevector $c_{i'}$ is not equal to the transmitted codevector, a symbol error occurs. In such a case, the estimated signal $s_{i'}$ is usually close to the transmitted signal $s_{i^*}$, but the estimated codevector $c_{i'}$ may not be close to the transmitted codevector $c_{i^*}$. The distortion from $c_{i'}$ to $c_{i^*}$ depends on the association between the codebook $Y = \{c_1, \ldots, c_M\}$ and $S = \{s_1, \ldots, s_M\}$. If this association is not created in a proper manner, the noise in the received data is impulsive. Such impulsive noise cannot easily be removed by linear or nonlinear filters.
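To make the channel model concrete, the following sketch (hypothetical code, not from the paper) simulates one transmission through the model of Eq. (1): the encoder picks the nearest codevector, the fading factor $a$ scales the signal, Gaussian noise is added, and the hard decoder picks the codevector whose signal maximizes $p(r \mid s_i)$. As in the paper's simulations, the fading factor is assumed to be perfectly known at the receiver:

```python
import math
import random

def nearest_index(codebook, x):
    """VQ encoder: index i* of the codevector closest to input x."""
    return min(range(len(codebook)),
               key=lambda i: sum((c - v) ** 2 for c, v in zip(codebook[i], x)))

def rayleigh_channel(s, sigma_n, rng):
    """r = a*s + n for a 2-D signal point s: a Rayleigh fading factor a
    (magnitude of a complex Gaussian) plus i.i.d. Gaussian noise."""
    a = math.hypot(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) / math.sqrt(2.0)
    r = (a * s[0] + rng.gauss(0.0, sigma_n), a * s[1] + rng.gauss(0.0, sigma_n))
    return r, a

def likelihood(r, s, a, sigma_n):
    """Gaussian likelihood p(r|s), with the fading factor a known."""
    d2 = (r[0] - a * s[0]) ** 2 + (r[1] - a * s[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma_n ** 2)) / (2.0 * math.pi * sigma_n ** 2)

def hard_decode(codebook, signals, r, a, sigma_n):
    """Hard decoding: output the codevector whose signal maximizes p(r|s_i)."""
    i = max(range(len(signals)),
            key=lambda j: likelihood(r, signals[j], a, sigma_n))
    return codebook[i]
```

At high SNR the hard decoder almost always recovers the transmitted codevector; its weakness, as discussed above, is that an occasional symbol error can land on a codevector far from the transmitted one.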


In the case of hard decoding, the objective function that measures the sensitivity to channel noise is given by

$$ D(Y, \Omega) = \sum_{i=1}^{M} \sum_{j=1}^{M} P(s_j \mid s_i) \int_{\Omega_i} p(x)\, \|x - c_j\|^2 \, dx, \qquad (2) $$

where $p(x)$ is the density function of $x$ and $P(s_j \mid s_i)$ is the conditional error probability. In [5–7], in order to reduce the effect of channel noise, an index assignment procedure is carried out. It assigns each codevector to a signal of the signal constellation. During training, we need to know the symbol error probabilities $P(s_j \mid s_i)$ of the channel. The main drawback of this approach is that the assignment procedure must be carried out again whenever a new codebook is used or the characteristics of the channel (the symbol error probabilities) vary over time.

In the COVQ approach [8], a codebook is trained to minimize the objective function given by

$$ D(Y, \Omega) = \sum_{i=1}^{M} \int_{\Omega_i} E_{p(r \mid s_i)}\!\left[ \|x - \hat{x}\|^2 \right] p(x)\, dx, \qquad (3) $$

where the expectation is taken over the channel noise and the transmitted signals (codevectors). In this approach, the reconstruction vector $\hat{x}$ at the receiver is a weighted sum of all current codevectors, given by $\sum_{i=1}^{M} P(s_i \mid r)\, c_i$. If the channel noise level or the data source characteristics change, we need to retrain the codebook. Hence, the approach is not suitable for adaptive environments. Another drawback is that an additional reliable feedback channel is needed during training so that the reconstruction information $\hat{x}$ can be fed back to the transmitter for updating the codebook.
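A minimal sketch of this weighted-sum reconstruction (hypothetical code, not from the paper; the likelihood values $p(r \mid s_i)$ are assumed to be supplied by the symbol detector):

```python
def soft_decode(codebook, likelihoods, priors=None):
    """MMSE soft decoding: x_hat = sum_i P(s_i|r) c_i, where
    P(s_i|r) = p(r|s_i) P(s_i) / sum_j p(r|s_j) P(s_j)."""
    M = len(codebook)
    if priors is None:
        priors = [1.0 / M] * M                   # equiprobable signals
    weights = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(weights)
    weights = [w / total for w in weights]       # posterior P(s_i|r)
    dim = len(codebook[0])
    return tuple(sum(weights[i] * codebook[i][d] for i in range(M))
                 for d in range(dim))
```

When one likelihood dominates, the output approaches a single codevector (the hard decision); when several are comparable, the output is a compromise between them, which is what suppresses the impulsive reconstruction noise.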

3. The SOM approach

3.1. hard som approach

In the SOM learning scheme [16, 17], before training, a neighborhood structure, represented by a graph $G = \{V, E\}$, is imposed on the codebook, where $V = \{v_1, \ldots, v_M\}$ is a set of vertices and $E$ is the set of edges. Each vertex is associated with a codevector.

DEFINITION 1. If $c_i$ is defined to be a neighbor of $c_j$, the two corresponding vertices $v_i$ and $v_j$ are joined by an edge with weight equal to 1.

DEFINITION 2. The neighborhood distance between $c_i$ and $c_j$ is the length of the shortest path between $v_i$ and $v_j$ in the graph $G$. A codevector $c_i$ is a level-$u$ neighbor of $c_j$ if the neighborhood distance between the two codevectors is


Figure 3. Neighborhood structure. (a) A 1-D circular structure. (b) A 2-D grid structure.

less than or equal to $u$. The collection of level-$u$ neighbors of a codevector $c_i$ is denoted by $N_i(u)$. The order $u_G$ of a topological structure is the longest neighborhood distance in $G$. Figure 3(a) shows a 1-D circular structure and Figure 3(b) shows a 2-D grid structure.

Given a neighborhood structure, the learning algorithm is summarized as follows.

1. Given the $t$th training vector $x(t)$, calculate the distances $d_i = \|x(t) - c_i(t)\|$ from $x(t)$ to all codevectors.
2. Find the closest codevector $c_{i^*}(t)$, where $d_{i^*} < d_i$ for all $i \neq i^*$.
3. Update the codebook as follows:

$$ c_i(t+1) = c_i(t) \quad \forall\, c_i(t) \notin N_{i^*}(u_t), \qquad (4) $$
$$ c_i(t+1) = c_i(t) + \alpha_t \left( x(t) - c_i(t) \right) \quad \forall\, c_i(t) \in N_{i^*}(u_t). \qquad (5) $$
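As a concrete illustration, one update step of Eqs. (4)–(5), with the 1-D circular neighborhood of Figure 3(a), might look like the following sketch (hypothetical code, not from the paper):

```python
def ring_distance(i, j, M):
    """Neighborhood distance on a 1-D circular structure with M vertices."""
    d = abs(i - j)
    return min(d, M - d)

def som_step(codebook, x, alpha, u):
    """One SOM iteration: find the winner i*, then move every codevector
    within neighborhood distance u of i* toward the input x (Eqs. (4)-(5));
    codevectors outside the neighborhood are left unchanged."""
    M = len(codebook)
    i_star = min(range(M),
                 key=lambda i: sum((c - v) ** 2 for c, v in zip(codebook[i], x)))
    for i in range(M):
        if ring_distance(i, i_star, M) <= u:
            codebook[i] = tuple(c + alpha * (v - c)
                                for c, v in zip(codebook[i], x))
    return i_star
```

In a full training run, the neighborhood size u and the learning rate alpha would shrink over the iterations, as described next.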

The parameter $u_t$ controls which codevectors are updated at training iteration $t$. In practice, it decreases to zero during the learning process [19, 20]. To ensure that the trained SOM has the ordering property, the initial value $u_0$ should be large enough that, at the beginning of the training process, most codevectors can be updated [17]. In our experiments, $u_0 = u_G/4$ gives a satisfactory result [17]. The learning rate $\alpha_t$ is a gain that controls the amount by which the codevectors are updated. In theoretical proofs of the convergence and ordering property of a one-dimensional SOM, the analysis [21] usually assumes that the gain $\alpha_t$ satisfies the Robbins–Monro conditions [18], i.e., $\sum_t \alpha_t$ should be infinite while $\sum_t \alpha_t^2$ should be finite. However, with this setting, the convergence is very slow. In practice [19, 20], the learning rate $\alpha_t$ is either a small constant or a sequence decreasing to zero. In this paper, the learning rate $\alpha_t$ decreases linearly from $\alpha_0$.

3.2. soft decoding som approach

In hard decoding, the output is $\hat{x} = c_{i'}$, where $p(r \mid s_{i'}) > p(r \mid s_i)$ for all $i \neq i'$ and the $p(r \mid s_i)$'s are conditional likelihood values. The output of hard decoding is taken from a finite codebook. Such a decision rule causes an irreversible loss of likelihood information; that is, the hard decoding rule does not utilize all the conditional likelihood values provided by the channel. Hence, it is not an MMSE estimator. To further improve the SOM approach, we should utilize the detector's outputs (the likelihood values $p(r \mid s_j)$). In this case, the output is given by

$$ \hat{x} = \sum_{i=1}^{M} P(s_i \mid r)\, c_i \qquad (7) $$
$$ \phantom{\hat{x}} = \frac{\sum_{i=1}^{M} p(r \mid s_i)\, P(s_i)\, c_i}{\sum_{i=1}^{M} p(r \mid s_i)\, P(s_i)}, \qquad (8) $$

where $P(s_i \mid r)$ is the conditional probability that the transmitted signal (codevector) is $s_i$ ($c_i$) given the received signal, and $P(s_i)$ is the a priori probability that $s_i$ is transmitted. In soft decoding (7), the output is a weighted sum of the codevectors. The decision rule utilizes all the likelihood values provided by the channel. Hence, the decision is optimal in the statistical sense.

3.3. som-covq approach

As the COVQ approach is a highly nonlinear optimization algorithm, the initial codebook affects the performance. It is therefore interesting to investigate whether a good initial codebook for COVQ can produce a better COVQ codebook. Since the SOM training method produces a codebook with a good neighborhood structure, it is also interesting to investigate a hybrid of the SOM and COVQ approaches. That is, we use the SOM training method to train a codebook $Y_{\mathrm{SOM}}$ which has good resistance to channel noise. Afterwards, we use the trained codebook $Y_{\mathrm{SOM}}$ as the initial codebook of the COVQ algorithm to obtain a new, better codebook. To distinguish it from the conventional COVQ, we call the hybrid approach SOM-COVQ.

3.4. complexities of som and covq training

For SOM, we sequentially and repeatedly present the samples to the SOM encoder. Equation (5) is used to update the codevectors. Note that SOM does not need to utilize any channel information during training. Hence, the training complexity for each training example in a training epoch is equal to $O(kM)$.


In the cases of COVQ and SOM-COVQ, we need to train the codebooks over the noisy channel. In COVQ and SOM-COVQ, we need to present all samples to the transmitter and then collect all the soft decoding outputs at the receiver. Afterwards, we update the codebook. It should be noted that the per-sample complexities of COVQ and SOM in a training epoch are the same, even though COVQ updates the codebook only once per epoch. This is because, for COVQ, in each training epoch we need to reconstruct all the estimated output vectors based on the soft decoding rule (7). Hence, the complexity for each training example in a training epoch is still equal to $O(kM)$. In summary, the per-sample complexities of SOM and COVQ are the same; hence, the number of training epochs required for convergence determines the training efficiency.
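To make the complexity argument concrete, a schematic COVQ training epoch might look as follows (hypothetical, heavily simplified code, not from the paper: the channel and soft decoder are abstracted into a callback, and the codebook update is a plain LBG-like centroid step, whereas the actual COVQ update of [8] also weights by the channel transition probabilities). The point is that each sample costs $O(kM)$ for encoding plus $O(kM)$ for soft decoding, while the codebook is updated only once per epoch:

```python
def covq_epoch(codebook, samples, soft_decode_through_channel):
    """One schematic training epoch: transmit and soft-decode every sample
    (O(kM) work each), accumulate region statistics, then update the whole
    codebook once at the end of the epoch."""
    M, k = len(codebook), len(codebook[0])
    sums = [[0.0] * k for _ in range(M)]
    counts = [0] * M
    total_err = 0.0
    for x in samples:
        # encoder: nearest codevector, O(kM)
        i = min(range(M),
                key=lambda m: sum((c - v) ** 2 for c, v in zip(codebook[m], x)))
        # receiver: soft reconstruction of the sample sent as index i, O(kM)
        x_hat = soft_decode_through_channel(codebook, i)
        total_err += sum((v - w) ** 2 for v, w in zip(x, x_hat))
        for d in range(k):
            sums[i][d] += x[d]
        counts[i] += 1
    new_codebook = [tuple(s / counts[i] for s in sums[i]) if counts[i]
                    else codebook[i] for i in range(M)]
    return new_codebook, total_err / len(samples)
```

With a noiseless channel (the soft decoder simply returning `codebook[i]`), one epoch reduces to one LBG iteration.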

4. Simulation

The performance of three data protection schemes, COVQ, SOM, and SOM-COVQ, is investigated. Also, the performance without any data protection (an LBG-trained codebook) is presented. It is well known that soft decoding VQ is better than hard decoding VQ [3, 4], and COVQ is itself a soft decoding method. Hence, to have a fair comparison, the soft decoding technique is applied to all the schemes. Two analog sources, a Gaussian source and image data, are used. The fast time-varying wireless channel is modelled as an independent Rayleigh fading channel, and the fading factor is assumed to be perfectly estimated at the receiver.

4.1. gaussian source and sufficiently trained covq

In this section, we investigate the SRER performance of the different algorithms when the codebooks are well trained. Three Gaussian sources with different dimensions are considered. Each source contains 1024 $k$-dimensional samples, where $k = 2, 3, 4$. The signal constellation is 16 QAM. Since there are 16 signals in the constellation, we set the number of codevectors equal to 16. That means the code rates are 2, 4/3, and 1 bits per dimension.

For SOM, we sequentially and repeatedly present the samples to the SOM encoder. Equation (5) is used to update the codevectors. The number of training epochs for SOM is 10, because we find that the SRER does not change significantly (less than 2%) after 10 training epochs. In the cases of COVQ and SOM-COVQ, we need to train the codebooks over the noisy channel. For COVQ and SOM-COVQ, when we set the number of training epochs to 10, the codebooks do not converge well. Hence, we increase the number of training epochs to 40 so that the codebooks of COVQ and SOM-COVQ are well trained.

The performance, in terms of the SRER of the reconstructed data versus the SNR in the channel, is summarized in Figure 5. From the figure, the performance of the three data protection schemes is better than that of the simple LBG algorithm.

Figure 5. The signal-to-reconstruction error ratio (SRER), in dB, versus the SNR in the channel (dB), for the SOM, SOM-COVQ, COVQ, and LBG schemes: (a) 2-D Gaussian source; (b) 3-D Gaussian source; (c) 4-D Gaussian source.

The performance of the two SOM approaches (SOM and SOM-COVQ) is better than that of the COVQ approach. Compared with plain SOM and COVQ, SOM-COVQ further improves the SRER performance. This implies that further performance improvement can be achieved by using a better initial codebook for COVQ. For a fixed SRER in the reconstructed data, the two SOM approaches achieve about 1–4 dB of channel gain. For example, in the 2-D Gaussian case, to achieve an SRER of 7 dB, the channel SNR for the two SOM approaches needs to be around 10 dB, whereas for COVQ it needs to be around 14 dB. For a fixed SNR in the channel, the two SOM approaches achieve about 1–2 dB of SRER gain. For example, in the 2-D Gaussian case, when the channel SNR is equal to 10 dB, the SRERs of the two SOM approaches are around 7 dB while the SRER of the COVQ approach is only around 5.5 dB.

4.2. image data and insufficiently trained covq

In this section, we consider the performance of the different schemes on real data. We use three images (256 × 256), Lena, Pepper, and Baboon, as the data source to compare the performance of LBG, SOM, COVQ, and SOM-COVQ. Each image is divided into a number of 4 × 2 blocks. Each block is regarded as an 8-D input vector, and there are 24576 samples. The codebook size is equal to 256, and hence the compression rate is 1 bit per dimension. With 256 codevectors, we can choose some common modulation schemes used in wireless communications, such as 16 QPSK or 8 QPSK; for instance, we can use two 16 QPSK symbols to represent a codevector. For the SOM codebook, the Cartesian product of two 1-D circular graphs is used as the neighborhood structure. To demonstrate the training efficiency of SOM, the number of training epochs for SOM is equal to 10 only, and this single codebook is used for all channel SNR values.

In COVQ and SOM-COVQ with a large codebook, we find that if we present each sample only once in a training epoch, the convergence, in terms of training epochs and reconstruction error, is very poor. This is because the COVQ training may not be able to capture enough noise statistics of the channel. So, we consider


the re-transmission of the training samples in each epoch. Therefore, in COVQ and SOM-COVQ, there are two training parameters: one is the number N of training epochs, and the other is the number T of re-transmissions of the training examples in each epoch. In terms of distortion, it is of course desirable to have N and T sufficiently large so that the distortion is small and does not decrease significantly further. Note that the additional bandwidth/power/delay requirement associated with COVQ is proportional to the value of N × T. In practice, the minimum sufficient values of N and T should be used such that the distortion does not decrease significantly further. In the LBG and SOM approaches, there is no re-transmission in any training epoch.

Figure 6 shows the RMSE performance at two channel SNRs (15 and 29 dB); other SNR values give similar behaviour. From the figure, we can easily observe that the RMSE performance of SOM (10 training epochs) is comparable to that of the sufficiently trained COVQ and SOM-COVQ. For COVQ, with a small value of T (Figure 6(a) and (b), T = 2), the convergence is very poor. In particular, for a noisy channel (Figure 6(a), T = 2 at SNR = 15 dB), the COVQ training process may not converge even after a large number of training epochs. When a larger value of T is used, the convergence becomes better (Figure 6(a), T = 32, and Figure 6(b), T = 8 and 32). To sum up, COVQ can achieve a performance similar to that of SOM only when many training epochs are employed. For example, at SNR = 29 dB, the required values of T and N for good convergence are 2-to-8 and 16, respectively. That means COVQ needs about 32–96 training epochs, whereas SOM needs only 10 epochs. Clearly, the computational load of SOM is much lower than that of COVQ.

From the figure, SOM-COVQ can further improve the RMSE and the convergence, at the cost of some additional computation. For example, at

Figure 6. The effect of insufficient training: RMSE versus N, the number of training epochs in COVQ, at (a) SNR = 15 dB and (b) SNR = 29 dB. Curves: SOM N = 10; COVQ T = 2, 8, 32; SOM-COVQ T = 2, 8, 32; LBG N = 20. In COVQ and SOM-COVQ, the effective number of training epochs is equal to N × T, where T is the number of re-transmissions in each epoch.

Figure 7. The reconstructed images at SNR = 29 dB with T = 32 and N = 32: (a) LBG, RMSE = 15.24; (b) SOM, RMSE = 9.93; (c) COVQ, RMSE = 9.13; (d) SOM-COVQ, RMSE = 8.85.

SNR = 29 dB, in SOM-COVQ the required values of T and N for a very good RMSE are about 8 and 2, respectively. That means SOM-COVQ needs only about 16 training epochs. In summary, these observations suggest that the COVQ approach is beneficial only if many training epochs are affordable; otherwise, the performance may actually degrade. This justifies the SOM approach as an excellent RVQ. Note that in COVQ and SOM-COVQ, a different codebook is required for each channel SNR.

Finally, the reconstructed "Lena" images at SNR = 29 dB with T = 32 and N = 32 are shown in Figure 7. Compared with the conventional LBG codebook without enhanced channel robustness, the SOM approach significantly improves the reconstruction quality of the images. Also, the visual performance of SOM is comparable to that of the sufficiently trained COVQ and SOM-COVQ.

5. Concluding remarks

In this paper, we study the performance of soft-decoding SOM and COVQ over wireless channels. From our simulations, although the COVQ training has a well-defined objective function to be minimized and takes full advantage of the perfect channel state information available during training, its performance is only comparable to that of our SOM approach, which does not assume any channel state information. This is because the COVQ training is a nonlinear optimization algorithm: even though full channel information is utilized, the trained codebook is not the globally optimal solution.

It should be noted that in wireless mobile communications the channel SNR is typically fast time-varying. Hence it is often unrealistic to assume that the channel SNR during operation is the same as the SNR during training, and the resultant SNR mismatch will degrade the performance of COVQ. On the other hand, the SOM approach does not require any channel information during training and does not need a reliable feedback channel for training. These implementation advantages make SOM a practically attractive VQ solution for wireless communications. In addition, our simulation results show that the performance of SOM is comparable to that of COVQ. Moreover, the convergence of the COVQ training is very slow; that is, COVQ needs more training epochs, a requirement that may not be affordable in a time-varying environment.

Our current study is based on some simple modulation schemes. It would be interesting to investigate our SOM approach with more sophisticated coding schemes, such as trellis coded modulation and space-time coding [22, 23].

Acknowledgement

This research was supported by an RGC Competitive Earmarked Research Grant, Hong Kong (Project No. CityU 1122/01E).

References

1. Nasrabadi, N. M. and King, R. A.: Image coding using vector quantization: A review, IEEE Transactions on Communications, 36 (1988), 957–971.
2. Gray, R. M.: Vector quantization, IEEE Acoustics, Speech, and Signal Processing Magazine, (1984), pp. 4–29.
3. Skoglund, M.: Soft decoding for vector quantization over noisy channels with memory, IEEE Transactions on Information Theory, 45 (1999), 1293–1307.
4. Skoglund, M. and Ottosson, T.: Soft multiuser decoding for vector quantization over a CDMA channel, IEEE Transactions on Communications, 46 (1998), 327–337.
5. Zeger, K. A. and Gersho, A.: Pseudo-Gray coding, IEEE Transactions on Communications, 38 (1990), 2147–2158.
6. Farvardin, N.: A study of vector quantization for noisy channels, IEEE Transactions on Information Theory, 36 (1990), 799–809.
7. Potter, L. C. and Chiang, D. M.: Minimax nonredundant channel coding, IEEE Transactions on Communications, March 1995, pp. 804–811.
8. Farvardin, N. and Vaishampayan, V.: On the performance and complexity of channel-optimized vector quantizers, IEEE Transactions on Information Theory, 37 (1991), 155–160.
9. Alajaji, F. and Phamdo, N.: Soft-decision COVQ for Rayleigh-fading channels, IEEE Communications Letters, 2 (1998), 162–164.
10. Phamdo, N. and Alajaji, F.: Soft-decision demodulation design for COVQ over white, colored, and ISI Gaussian channels, IEEE Transactions on Communications, 48 (2000), 1499–1506.
11. Pätzold, M.: Mobile Fading Channels, Wiley, Chichester, 2002.
12. Ho, K. P.: Soft-decoding vector quantizer using reliability information from turbo-codes, IEEE Communications Letters, 3(7) (1999), 208–210.
13. Zhu, G. C. and Alajaji, F.: Soft-decision COVQ for turbo-coded AWGN and Rayleigh fading channels, IEEE Communications Letters, 5 (2001), 257–259.
14. Xiao, H. and Vucetic, B.: Combined low bit rate speech coding and channel coding over a Rayleigh fading channel, In: Proc. Global Telecommunications Conference, Vol. 3, 1997, pp. 1524–1528.
15. Ruoppila, V. T. and Ragot, S.: Index assignment for predictive wideband LSF quantization, In: Proc. IEEE Workshop on Speech Coding, 2000, pp. 108–110.
16. Kohonen, T.: Self-Organization and Associative Memory, third edition, Springer, Berlin, 1993.
17. Leung, C. S. and Chan, L. W.: Transmission of vector quantized data over a noisy channel, IEEE Transactions on Neural Networks, 8 (1997), 582–589.
18. Robbins, H. and Monro, S.: A stochastic approximation method, Annals of Mathematical Statistics, 22 (1951), 400–407.
19. Flanagan, J. A.: Self-organisation in Kohonen's SOM, Neural Networks, 9(7) (1996), 1185–1197.
20. Liou, C. Y. and Tai, W. P.: Conformal self-organization for continuity on a feature map, Neural Networks, 12 (1999), 893–905.
21. Sum, J. and Chan, L. W.: Convergence of one-dimensional self-organizing map, In: Proceedings, ISSIPNN '94, pp. 81–84.
22. Tarokh, V., Naguib, A., Seshadri, N. and Calderbank, A. R.: Space-time codes for high data rate wireless communication: performance criteria in the presence of channel estimation errors, mobility, and multiple paths, IEEE Transactions on Communications, 47 (1999), 199–207.
23. Liu, Y. J., Oka, I. and Biglieri, E.: Error probability for digital transmission over nonlinear channels with application to TCM, IEEE Transactions on Information Theory, 36 (1990), 1101–1110.