IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011


Recurrent Multiple-Repetition Coding for Channels With Feedback

Thijs Veugen

Abstract—We consider multiple repetition strategies with fixed delay decoding for discrete memoryless channels with noiseless feedback. Existing binary schemes by Schalkwijk and Zigangirov are analyzed and their results are extended. The general error exponents are computed and presented by elegant expressions in the strictly symmetric case. An important class of precoded sequences, so-called flip sequences, is found and their degrading effect on the error exponent is investigated. This effect is shown to be negligible when the repetition parameters are chosen such that the transmission rate is maximized. Even when signalling at channel capacity, the error exponent is shown to be strictly positive.

Index Terms—Error exponent, feedback, multiple repetition, recurrent coding.

I. INTRODUCTION

In 1971 Schalkwijk [11] presented the class of repetition strategies for the binary symmetric channel with feedback. These block-coding schemes achieve capacity for several values of the channel error probability and are easy to implement. Schalkwijk's construction is derived from a special case of Horstein's scheme [4], in which the medians exhibit a regular behaviour. Later, Schalkwijk and Post [13], [14] showed that similar recursive coding schemes exist with fixed coding delay as well as variable coding delay. Their variable coding delay error exponent is the largest possible error exponent for the binary symmetric channel with noiseless feedback [10]. In 1996 Veugen [19] extended these results to strategies for arbitrary memoryless channels, showing that for each multiple-repetition strategy a channel exists for which this strategy achieves capacity. In 2007 Veugen [21] showed how to choose the repetition parameters for an arbitrary memoryless channel with feedback in order to maximize the transmission rate. Related work has been done by Ooi and Wornell [7] in 1998, who developed efficient variable-rate coding schemes for communicating over discrete memoryless channels with noiseless feedback. Tchamkerten and Telatar [17] also worked on variable length coding schemes for discrete memoryless channels in 2002, considering universal schemes for compound channels. Shayevitz and Feder [16] more recently provided a sequential scheme that attains capacity over any memoryless channel, thereby proving that Horstein's scheme [4] for the BSC indeed achieves capacity.

Manuscript received April 02, 2009; revised August 19, 2010; accepted April 19, 2011. Date of current version August 31, 2011. The author is with the Multimedia Signal Processing Group, Delft University of Technology, Delft, The Netherlands, and also with TNO Technical Sciences, Delft, The Netherlands (e-mail: [email protected]). Communicated by B. S. Rajan, Associate Editor for Coding Theory. Digital Object Identifier 10.1109/TIT.2011.2161922

Sahai [9] explains in 2008 why

coding schemes with delay can do much better than block coding when a feedback link is present for discrete memoryless channels. Together with Draper [10], he discovers the "hallucination" bound, deriving an upper bound on the exponent of undetected error for a large family of erasure decoders, under an erasure probability constraint. This exponent turns out to coincide with that of Horstein.

Although multiple-repetition strategies are designed for communicating over a discrete memoryless channel with noiseless feedback, several other applications are possible [20]. When coding memory cells with known or unknown defects, multiple-repetition strategies can be used to efficiently cope with these defects [5], [12]. Another area is the two-person game of searching with lies [8]. They could even be used for reaching the economic equilibrium in economic markets [6]. And finally, they can lead to efficient estimation methods for measuring a statistical parameter [1], [2].

A. Binary Scheme of Schalkwijk and Post

The general principle of repetition strategies is that when a 0 is erroneously received as a 1, then this 0 has to be retransmitted a fixed number of times, say k, in order to "correct" this error. The receiver will scan the received sequence from right to left and substitute each subsequence 10^k by 0. For this purpose, each transmitted sequence will have to be precoded such that it no longer contains subsequences of type 10^k (and 01^k). Consequently, this precoding step will have to be reversed by the receiver when the received sequence has been "corrected". Schalkwijk [11] showed that in the binary symmetric case, this strategy will achieve channel capacity whenever the channel error probability equals a specific value that is the solution of an equation determined by the repetition parameter k. Schalkwijk and Post [13], [14] introduced a left-to-right decoding algorithm instead of the right-to-left substitution algorithm, which enabled them to find efficient recursive strategies for the binary symmetric channel with feedback.
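The binary strategy just described can be sketched in code. The sketch below is illustrative and not the authors' implementation: `transmit` models the binary symmetric channel with noiseless feedback and k corrective repetitions per observed error, and `decode_rtl` is the right-to-left substitution decoder that replaces each subsequence 1 0^k by 0 (and, symmetrically, 0 1^k by 1); the function names and parameters are mine.

```python
import random

def transmit(x, k, p, rng):
    """Send the precoded bits x over a BSC(p). Thanks to the noiseless
    feedback link the transmitter sees every channel error and follows it
    with k repetitions of the intended bit; the repetitions themselves may
    again be hit by (nested) errors. Returns the received sequence."""
    received, pending = [], list(x)
    while pending:
        s = pending.pop(0)
        r = s ^ (rng.random() < p)       # BSC flip with probability p
        received.append(r)
        if r != s:
            pending = [s] * k + pending  # k corrective repetitions
    return received

def decode_rtl(received, k):
    """Right-to-left substitution decoding: repeatedly replace the
    rightmost subsequence 1 0^k by 0 (or 0 1^k by 1) until none is left.
    For a fully transmitted sequence this undoes the corrections and
    recovers the precoded sequence."""
    seq = list(received)
    reduced = True
    while reduced:
        reduced = False
        for i in range(len(seq) - 1, -1, -1):        # scan right to left
            if seq[i + 1:i + 1 + k] == [1 - seq[i]] * k:
                seq[i:i + 1 + k] = [1 - seq[i]]      # undo one correction
                reduced = True
                break
    return seq
```

For example, with k = 2 the received sequence [1, 0, 0] collapses to [0]: the leading 1 was an erroneously received 0, corrected by two repetitions. A precoded input must avoid the forbidden subsequences 1 0^k and 0 1^k; an alternating sequence is safe for k = 2.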
Although they also suggest recursive schemes with variable coding delay, we focus here on schemes with fixed delay. Such a scheme contains the following steps:
1) The transmitter wants to transmit a (long) message from an arbitrary alphabet.
2) The message is precoded to a (long) binary sequence that does not contain the forbidden subsequences 10^k and 01^k.
3) The precoded sequence is transmitted over a binary symmetric channel with feedback, while transmission errors are "corrected" by k repetitions of the intended symbol, leading to the transmitted sequence.
4) The receiver computes, with some delay D, an estimate of each transmitted symbol from the received sequence


, by using a codeword estimator. The estimate of the transmitted symbol is computed by using the received subsequence of the following D symbols as input for the codeword estimator (see Fig. 1 for an example when k = 3), starting in state 0. The output will be 1 when the codeword estimator ends in a positive state, and 0 otherwise (when the codeword estimator ends in state 0, a fair coin is tossed).
5) The transmission errors in the estimated transmitted sequence are "corrected" by deleting the k repetition symbols for every (estimated) transmission error. This results in an estimated precoded sequence.
6) The estimated precoded sequence is decoded to an estimated message.

When all transmission errors are estimated correctly, the estimated message equals the transmitted one. So the error probability of the recursive scheme depends mainly on the error probability of the codeword estimator. In the codeword estimator, as depicted in Fig. 1, the state goes up after receiving a 1 (1 step up from a negative state, k steps up otherwise), thereby increasing the likelihood that a 1 was transmitted, and goes down otherwise. In Section II we prove the correctness of such a codeword estimator. Schalkwijk and Post define the probability of receiving a 1, and thereby going up in the codeword estimator, by b, and the probability of receiving a 0 by 1 - b. They compute the exponent of the probability of ever returning to state 0 after an initial step, and show that

(1)

The first exponent in (1) relates to an initially received 1 in the delay frame, corresponding with an initial step going up in Fig. 1. The second exponent similarly relates to an initially received 0.

They conclude that this equals the exponent of the error probability of their recursive coding scheme when b reflects the output distribution of the binary symmetric channel. Note that they only consider the case when channel capacity is achieved, so the channel error probability equals the capacity-achieving value for some repetition parameter k. The reason that this quantity represents the decoding error exponent can be explained by the underlying ideas of Horstein. Each state of the codeword estimator represents the relative position of the median of the expected message. When the state goes up, it becomes more likely that the message resides in the upper half, and when a 0 is received, the median goes down. Due to the feedback link, the transmitter can also compute the current position of the median and adapt the next transmitted symbol, so eventually the median will end up at the correct side (either a positive or a negative state) of the codeword estimator. In case of a decoding error, the transmission errors caused the median to end (after D steps) in the wrong side of the codeword estimator, which means that eventually (after more than D steps) the random walk will cross the zero state and end up in the other side of the codeword estimator.

Fig. 1. Codeword estimator for k = 3, Pr(r = 1) = b.

B. Scheme of Zigangirov

In 1977 Zigangirov [23] considers a variation with a bounded constraint length of the fixed coding delay scheme of Schalkwijk and Post. Zigangirov uses the right-to-left substitution decoding method and an encoding buffer. The length of the encoding buffer is equal to the coding delay. Zigangirov claims that an error exponent can be achieved whose exponent versus rate curve hits the recurrent sphere packing bound. The recurrent sphere packing bound is the inverse concatenation construction of the sphere packing bound, and is shown by Viterbi [22] to determine the optimal exponent for convolutional codes (without feedback). However, Zigangirov's proof does not hold because a mistake was made. More precisely, Zigangirov erroneously assumes that an information symbol is decoded correctly when the number of channel errors that occur during transmission of the first symbols does not exceed a certain threshold. A simple counterexample is a transmitted sequence in which a 0 is erroneously received as a 1, and which cannot be distinguished by the receiver from another, correctly transmitted sequence. Both sequences are decoded to 1, but the first one should be decoded to 0. This problem of so-called flip sequences is explained in the next subsection.

C. Our Goal

A multiple repetition system for a discrete memoryless channel with arbitrary alphabets will consist of the same building blocks as described in Section I-A for the binary case. The main goal of this paper is to determine the symbol error exponent for multiple repetition strategies with fixed decoding delay, especially the generalization of the binary case to multiple input symbols. More precisely, we are interested in the symbol error probability of the decoder that takes the received subsequence as an input and outputs, after a delay of D symbols, the estimate of the transmitted symbol. We assume a proper precoder is used for avoiding forbidden subsequences in the original message,


which eliminates the possible effect of error propagation in the inverse precoder. Thus the error probability of the scheme is determined by the probability that the estimate of a transmitted symbol is wrong, and the symbol error exponent we would like to determine is the exponential rate at which this symbol error probability decays in the decoding delay D.

We consider two types of decoding, as described in Section I-A, namely left-to-right decoding using a codeword estimator, and right-to-left decoding using a substitution algorithm. Since a generalization to multiple input symbols has not been found yet, left-to-right decoding is only possible in the binary case, whereas right-to-left decoding is available in any setting. An important class of precoded sequences will be the flip sequences. These are the precoded sequences which cause the codeword estimator to end in state 0, even when all transmission errors have been corrected. In the binary case there are two flip sequences, namely the 1-flip sequence, consisting of a concatenation of (an arbitrary number of) subsequences of one type, and the 0-flip sequence, consisting of (an arbitrary number of) the complementary subsequences. A flip sequence can easily lead to a decoding error, because only one transmission error at the end can "flip" the entire sequence: one transmission error at the end of such a precoded sequence leads to a received sequence which will be decoded to 0. The right-to-left substitution algorithm will decode to 0 through two substitutions, and the left-to-right codeword estimator will end in a negative state and therefore also decode to 0. In the general case, with multiple input symbols (see Section III), there will be exponentially many x-flip sequences, each a concatenation of subsequences of a fixed form. One erroneously received symbol at the end of such a sequence will "flip" the entire sequence. When determining the symbol error exponent for channels with multiple input symbols, the right-to-left substitution algorithm is used. The corresponding error exponent is analysed through a state diagram of the error correction process. Since (generalized) flip sequences might disturb the correctness of this exponent, the error exponent of their error events is also computed and compared to it.

II. BINARY SCHEME

The recursive scheme of Schalkwijk and Post is easily generalized to arbitrary values of the channel error probability. Although channel capacity is only achieved for specific values, in which case the median paths [11] on which Schalkwijk based his idea are regular, the same coding and decoding schemes can be used for arbitrary values. However, it is not clear from Schalkwijk's work how the error exponent from (1) will look for arbitrary values. The following observation gives some insight.

Theorem 1: The error exponent from (1) equals an information divergence.

Proof: The error exponent can be rewritten in a suitable form, and the information divergence

Fig. 2. Error correction state diagram for k = 3, Pr(r = 1) = p.

TABLE I

can, therefore, be further rewritten accordingly. The equality easily follows. Also, since the capacity-achieving channel error probability is the solution of its defining equation, it is easy to show that both exponents in (1) will yield the same value! When looking at the top half of Fig. 1, what does it mean when the probability of the estimator state going up equals the channel error probability p? What we then have is not an estimator based on the received sequence, where 0 and 1 are equally likely, but actually a state diagram of the error correction process, as depicted in Fig. 2, that goes on until the zero state has been reached again! Fig. 2 presents the state diagram for correcting an initial error. A received 0, which occurs with probability 1 - p, will be one of the k repetitions induced by the last error, while a received 1, which occurs with probability p, corresponds with another (nested) error. The error correction process is initiated by an erroneous transmission, starts in state k, and ends in state zero when this initial erroneous transmission has been fully corrected. A small example with k = 3 is depicted in Table I.

From a decoding point of view, the codeword estimator relates to a decoding scheme where the received sequence is scanned from left to right, and the state diagram of the error correction process relates to a decoding scheme where the received sequence is scanned from right to left:
1) Scan the received sequence from right to left and substitute the subsequences 01^k and 10^k by 1 and 0


respectively. A small scanning window should be used because there may be nested errors (of repetition symbols) like in the previous example.
2) When all such subsequences have been substituted, the estimated transmitted symbol equals the leftmost symbol of the remaining sequence.

This decoding method is clearly the inverse of the encoding method where each transmission error is "corrected" by k repetitions. We show that in the binary case both types of decoding are equivalent [20], except when the precoded sequence happened to be a flip sequence.

Theorem 2: Suppose we are given a received subsequence. If the left-to-right estimation algorithm does not end in state zero after having processed the received symbols, then its estimate equals the estimate produced by the right-to-left substitution algorithm.

Proof: The proof is by natural induction on the delay. For delay zero the claim is trivial, so suppose the delay is positive. If no forbidden subsequence occurs in the received sequence, assume w.l.o.g. that it starts with a 1. If the sequence is a concatenation of the corresponding flip subsequences, i.e. a 0-flip sequence, then the left-to-right algorithm ends in state 0. Otherwise, the left-to-right algorithm will end in a positive state and both algorithms estimate 1. In each case the assertion holds.

Suppose the received sequence has a forbidden subsequence, and consider the rightmost one. After substituting this forbidden subsequence, a shorter sequence results. By induction, the estimates of the left-to-right estimation algorithm and the right-to-left substitution algorithm resulting from this shorter sequence either coincide, or the left-to-right estimation algorithm ends in state zero. Since this substitution is the natural first step of the right-to-left substitution algorithm, the estimate of the right-to-left algorithm is unchanged. It remains to show that the substitution does not alter the estimate of the left-to-right estimation algorithm. Compare the states of the left-to-right algorithm just before and just after processing the substituted subsequence. In the first case the end-state of the left-to-right algorithm is not affected by the substitution, and when the end-state was zero before the substitution, it will remain zero after the substitution. In the second case both states are positive, and since the remaining part of the sequence does not contain a forbidden subsequence, the end-state of the left-to-right algorithm will be positive in either case, so the estimates coincide. This formally explains our observation and proves the correctness of the codeword estimator of Schalkwijk and Post.

The proof of Theorem 2 distinguishes a

typical form of sequences which we called flip sequences. These are the only sequences for which the codeword estimator will end in state 0, so the decoding result will be unclear. This type of sequences will be important when generalizing our results. In general, a decoding error can be caused by two different events:
1) A channel error occurred while transmitting the first symbol of the delay frame, but the error correction process could not be finished within the delay of D steps.
2) The precoded sequence is a flip sequence, and only one transmission error at the last step caused the received sequence to flip.
The first type of decoding error leads to the following exponent of the general binary recursive scheme:

(2)

At least for the capacity-achieving channel error probability, we know that the exponent from (2) equals the true error exponent from (1), as derived by Schalkwijk and Post. In the next section it is shown that for other channel error probabilities our exponent is not degraded by flip sequences as long as the repetition parameter is chosen suitably.

III. MULTIPLE REPETITION SCHEMES

The class of repetition strategies is easily generalized to arbitrary discrete memoryless channels with noiseless feedback. Let the input and output alphabet be given, together with the channel probabilities. W.l.o.g. the input alphabet is at most as large as the output alphabet, because otherwise it is possible to eliminate a suitably chosen input symbol without affecting the capacity of the channel [15]. Suppose a multiple-repetition feedback strategy is used with repetition parameters k_y, one positive integer for each output symbol y. The repetition parameter k_y equals the number of repetitions introduced by the encoder following an error that was received as y. Since a received symbol is always erroneous when it does not occur in the input alphabet, the corresponding repetition parameters are set to 1, and for reasons of consistency the corresponding auxiliary numbers are defined as 0. All logarithms are to the base 2.

For generalizing the recurrent coding scheme with fixed delay, we have two options. The first one is looking for a generalized codeword estimator to be used for left-to-right decoding. The second one is using the straightforward right-to-left substitution algorithm that substitutes, for all x and y, the subsequences y x^{k_y} by an x while scanning the received sequence from right to left. When all such subsequences have been substituted, the estimated transmitted symbol equals the leftmost symbol of the remaining sequence. Up to now it has been impossible to find a suitable generalization of the codeword estimator. But since we showed in the binary case that both decoding methods result in the same estimate (except when the precoded sequence is one of the two flip sequences), we could just as well choose the second option. In order to compute the generalized error exponent, we consider the error correction process and the corresponding error correction state diagram. In Fig. 3 an example is depicted of an error correction state diagram resulting from an erroneously transmitted symbol 0 and repetition parameters k_1 = 3 and k_2 = 4. Depending on whether a 0-to-1 or a 0-to-2 transmission error occurred, the error correction process starts in respectively


state 3 or 4. The state goes up a fixed number of steps after receiving a 1 or a 2, and goes down (one step) after receiving a 0. The erroneously transmitted symbol 0 can be considered corrected when state 0 is reached before the end of the delay.

Fig. 3. Error correction state diagram for k_1 = 3 and k_2 = 4, Pr(r = x) = p_x.

For each y, the error correction state diagram is represented by an infinite Markov chain with states 0, 1, 2, ... and the corresponding state transition probabilities: the state goes one step down after receiving an x, otherwise it goes up. This Markov chain represents the error correction process initiated by an incorrectly received y-symbol. The process starts in state k_y in case an error occurred and stops when arriving at state 0: "the error is corrected". When the process does not arrive at state 0 before the end of the delay, the error is not corrected and leads to a decoding error. In Appendix A the error exponent of this error correction process, initiated from an erroneously transmitted symbol x, is derived; it is expressed through the solution of an associated equation. Note that in case of constant repetition parameters the asymptotic exponent equals an information divergence (or relative entropy), so it is a natural extension of (2) to the general case. The error correction process will eventually end when the expected value of the next step in the state diagram is negative. This condition is also reflected in the exponent: when it holds, the exponent is strictly positive; otherwise the error correction process is not expected to end.

In [21] Veugen showed how the repetition parameters should be chosen in order to maximize the transmission rate; more precisely, each k_y should equal the integer closest to a target value determined by the channel probabilities. In that case, the difference between the transmission rate and the channel capacity will be of the same order as the channel error probabilities [21], which means that especially for small channel error probabilities the transmission rate of multiple repetition strategies will be close to channel capacity. This is explicable since for decreasing channel error probabilities the density of suitable integers increases. Theorem 3 shows that this choice also guarantees that the error correction process will eventually (not necessarily within the decoding delay) end.

Theorem 3: Assume the repetition parameters are suitably chosen for each y; then the error correction process will eventually end.

Proof: Assume the repetition parameters are suitably chosen for each y. Since each k_y equals the integer closest to its target value, upper and lower bounds on k_y follow, and since the relevant function is increasing on the relevant interval, these translate into upper bounds on the expected upward steps. Using these upper bounds, we derive that the expected value of the next step in the state diagram is negative,

and thus, the error correction process will eventually end.

Just like in the binary case, a decoding error is not always the consequence of an unfinished correction process caused by an erroneous transmission of the first symbol in the delay window. This is due to the aforementioned class of so-called flip sequences. For each x, an x-flip sequence is defined as a concatenation of flip subsequences of a fixed form. When an x-flip sequence is correctly transmitted, right-to-left substitution decoding will decode the first symbol to x. However, only one erroneously received symbol at the end would lead to a collapse of the (received) x-flip sequence and consequently a wrong estimate of the first symbol. The problem of flip sequences could be solved by strengthening the precoding conditions such that flip subsequences of certain


Fig. 4. The error exponents for k = 3.

lengths are excluded, but this would lead to a serious reduction of the transmission rate. The error exponent due to x-flip sequences is computed in Appendix B; it is expressed through the solution of an associated equation and the amount of information per precoded symbol. In [19] is shown how this amount can be computed. In Fig. 4 is shown how, in the binary symmetric case, both exponents vary for different values of the channel error probability when the repetition parameter equals 3. The capacity-achieving value of the channel error probability equals 0.19 here. The figure shows that only for small channel error probabilities the flip-sequence exponent lies below the other one, which means that in this case flip sequences cause a serious problem. For other channel error probabilities, especially around the capacity-achieving value, the effect of flip sequences fades away and the exponent of the correction process remains the true error exponent. In the binary symmetric case, this condition can be achieved by properly choosing the repetition parameter k, which maximizes the transmission rate [21].

Also in the general case, flip sequences do not decrease the error exponent as long as the repetition parameters are suitably chosen such that the transmission rate is maximized. This is shown in Theorem 4 for the strictly symmetric case, when all channel error probabilities and repetition parameters are equal.

Theorem 4: Let the channel be strictly symmetric. When the repetition parameters are such that channel capacity is achieved, then

(3)

Proof: Assume the channel is strictly symmetric; then the channel error probabilities and the repetition parameters are identical for all symbols, so both exponents are independent of x. Let the number of input (and output) symbols be given. Both error exponents can be expressed through the solutions of the associated equations [19]. Channel capacity is achieved when the repetition parameters satisfy a particular relation for all x and y [19], which comes down to a simple identity. From these relations, we derive that

The result of Theorem 4 is reflected in Fig. 4, where for the capacity-achieving channel error probability the difference between the two exponents is indeed as given by (3). The error introduced by flip sequences can be seen as the price to pay for coding the random message with a slightly larger factor than strictly necessary. Theorem 4 shows that when capacity is achieved, there is a clear distance between the two exponents. Considering arbitrary channels, it can be argued that when the repetition parameters are suitably chosen, and thus channel capacity is approximated [21], the correction-process exponent will still be smaller than the flip-sequence exponent, and thus it will be the true error exponent of multiple repetition strategies with fixed decoding delay.

IV. CONCLUSION

When determining the error exponent for multiple repetition strategies with fixed decoding delay, we discover the class of flip sequences. Such precoded sequences are more likely to lead to decoding errors. However, when analyzing the effect of flip sequences on the error exponent, we find that it fades away as long as the repetition parameters are suitably chosen, i.e. such that the transmission rate is maximized. Although only formally proven in the strictly symmetric case, we conjecture that the error exponent for multiple repetition strategies with fixed decoding delay and right-to-left decoding equals the exponent of the error correction process derived above.

This error exponent is shown to be strictly positive, even when signalling at capacity, which is the case for specific channel error probabilities [19]. An interesting direction for further research remains a generalization of the binary left-to-right codeword estimator to multiple symbols. This might e.g. lead to a generalization of the binary coding scheme with variable delay [13], [14] to multiple symbols. In the binary case, the use of variable delay increases the error exponent even further [20], to the best possible error exponent for this channel [10].
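The strict positivity claim can be illustrated numerically in the binary symmetric case. The sketch below makes two assumptions explicit; they are my reading of Sections II and III, not formulas quoted verbatim from the paper: the correction process is modelled as a random walk that starts in state k, steps down 1 with probability 1 - p and up k with probability p, and the candidate exponent is the binary divergence D(p* || p) evaluated at the zero-drift point p* = 1/(k + 1) suggested by the drift condition p k - (1 - p) < 0.

```python
import math
import random

def survival_probs(delays, k, p, trials=20000, seed=7):
    """Estimate P(correction not finished within D steps) for each D,
    for the assumed walk: start in state k, step -1 w.p. 1 - p and
    +k w.p. p, absorbed at state 0. One common set of walks is used,
    so the estimates are monotone in D by construction."""
    rng = random.Random(seed)
    horizon = max(delays)
    hit_times = []
    for _ in range(trials):
        state, hit = k, horizon + 1       # horizon + 1 means "never hit"
        for step in range(1, horizon + 1):
            state += k if rng.random() < p else -1
            if state <= 0:
                hit = step
                break
        hit_times.append(hit)
    return {d: sum(t > d for t in hit_times) / trials for d in delays}

def candidate_exponent(k, p):
    """Hypothesized exponent D(p* || p) in bits, with the zero-drift
    point p* = 1/(k + 1); an assumption consistent with the drift
    condition p* k - (1 - p*) = 0, not a formula taken from the paper."""
    q = 1.0 / (k + 1)
    return q * math.log2(q / p) + (1 - q) * math.log2((1 - q) / (1 - p))

k, p = 3, 0.1        # p < 1/(k + 1): negative drift, correction ends a.s.
probs = survival_probs([10, 40], k, p)
exponent = candidate_exponent(k, p)
```

Because every walk that survives 40 steps also survives 10, probs[40] <= probs[10] holds by construction, and candidate_exponent(3, 0.1) is strictly positive since 0.1 differs from the zero-drift point 0.25, in line with the strict positivity discussed above.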


APPENDIX A
PROOF OF THE ERROR EXPONENT

Consider the infinite Markov chain with states 0, 1, 2, ... and the state transition probabilities of the error correction process. This Markov chain represents the error correction process initiated by an incorrectly received y-symbol. The process starts in state k_y in case an error occurred and stops when arriving at state 0: "the error is corrected". When the expected step is negative, the process always will eventually end up in state 0; otherwise the process does not necessarily end. Consider the probability of going from state k_y to state 0 in exactly a given number of steps; this presents the probability that the error correction process ends in exactly that number of steps. We will compute the error exponent of this probability.

Although the probability depends on the value of the initial error, the asymptotic behaviour of the random walk will rule out the effect of the initial state. Sanov's theorem [3] in large deviation theory can be used to effectively compute the exponent, since the steps of the random walk can be considered as independent random variables with the same probability distribution. Consider the set of probability distributions on the possible steps that corresponds to the type of random walks that return to the same state. Since the asymptotic behaviour of these random walks is the same as of the ones that correspond to a corrected error, we immediately derive the exponent as the minimal information divergence over this set with respect to the step distribution. By using Lagrange multipliers, it can be shown [3] that the minimizing distribution is a tilted version of the step distribution, where the tilting parameter is chosen such that the expected step is zero. It follows that the exponent can be expressed through the unique real (positive) solution of an associated equation. The same result was derived by Veugen [20] through a more complicated function theoretic analysis.

APPENDIX B
PROOF OF THE FLIP-SEQUENCE EXPONENT

Fix x. An x-flip sequence is defined as a concatenation of flip subsequences of the fixed form introduced in Section III. We compute the asymptotic exponent of the probability that the precoded sequence starts with an x-flip sequence which is flipped by a transmission error at the end. Note that the extra condition does not influence the probability in an asymptotic sense. Let the amount of information per precoded symbol be given; in [19] is shown how it can be computed. Since there are approximately exponentially many precoded sequences of a given length, the probability that the appropriate subsequence of the precoded sequence (i.e. the subsequence that is transmitted first) equals a particular flip sequence is close to the corresponding exponential. Counting the numbers of subsequences of each type in the flip sequence, we obtain

(4)

where each count is a non-negative integer for each y and the multinomial coefficient counts their arrangements. As for the asymptotic exponent, the two sides of (4) are equal. Considering the dominant term, we obtain

(5)

When the right-hand side of (5) is maximized under the appropriate constraint using Lagrange multipliers, we obtain a maximum expressed through the solution of an associated equation. Since the size of the constraint set is upper bounded polynomially, we obtain the stated exponent.

REFERENCES
[1] M. V. Burnašev and K. Š. Zigangirov, "An interval estimation problem for controlled observations," Problemy Peredachi Informatsii, vol. 10, no. 3, pp. 15–61, 1974.
[2] M. V. Burnašev and K. Š. Zigangirov, "One problem of observation control," Problemy Peredachi Informatsii, vol. 11, no. 3, pp. 44–52, 1975.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Hoboken, NJ: Wiley, 2006.
[4] M. Horstein, "Sequential transmission using noiseless feedback," IEEE Trans. Inf. Theory, vol. 9, pp. 136–143, 1963.
[5] A. V. Kusnetsov and B. S. Tsybakov, "Coding in a memory with defective cells," Problemy Peredachi Informatsii, vol. 10, no. 2, pp. 52–60, 1974.
[6] W. D. O'Neill, "An application of Shannon's coding theorem to information transmission in economic markets," Inf. Sci., vol. 41, pp. 171–185, 1987.
[7] J. M. Ooi and G. W. Wornell, "Fast iterative coding techniques for feedback channels," IEEE Trans. Inf. Theory, vol. 44, no. 7, pp. 2960–2976, Nov. 1998.
[8] A. Pelc, "Searching games with errors—fifty years of coping with liars," Theoret. Comput. Sci., vol. 270, pp. 71–109, 2002.


[9] A. Sahai, "Why do block length and delay behave differently if feedback is present?," IEEE Trans. Inf. Theory, vol. 54, no. 5, pp. 1860–1886, May 2008.
[10] A. Sahai and S. C. Draper, "The hallucination bound for the BSC," in Proc. Int. Symp. Information Theory, Jul. 2008, pp. 717–721.
[11] J. P. M. Schalkwijk, "A class of simple and optimal strategies for block coding on the binary symmetric channel with noiseless feedback," IEEE Trans. Inf. Theory, vol. 17, no. 3, pp. 283–287, May 1971.
[12] J. P. M. Schalkwijk, "On powers of the defect channel and their equivalence to noisy channels with feedback," in Proc. 7th Symp. Information Theory in the Benelux, 1986, pp. 41–48.
[13] J. P. M. Schalkwijk and K. A. Post, "On the error probability for a class of binary recursive feedback strategies," IEEE Trans. Inf. Theory, vol. 19, no. 4, pp. 498–511, Jul. 1973.
[14] J. P. M. Schalkwijk and K. A. Post, "Correction to 'On the error probability for a class of binary recursive feedback strategies'," IEEE Trans. Inf. Theory, vol. 20, no. 2, p. 284, Mar. 1974.
[15] C. E. Shannon, "Some geometrical results in channel capacity," Nachrichtentechnische Zeitschrift, vol. 10, 1957.
[16] O. Shayevitz and M. Feder, "Optimal feedback communication via posterior matching," IEEE Trans. Inf. Theory, vol. 57, no. 3, pp. 1186–1222, Mar. 2011.
[17] A. Tchamkerten and E. Telatar, "Variable length coding over an unknown channel," IEEE Trans. Inf. Theory, vol. 52, no. 5, pp. 2126–2145, May 2006.
[18] T. Veugen, "Error probabilities of repetition feedback strategies with fixed delay for discrete memoryless channels," in Proc. 15th Symp. Information Theory in the Benelux, May 1994, pp. 188–191.
[19] T. Veugen, "A simple class of capacity achieving strategies for discrete memoryless channels with feedback," IEEE Trans. Inf. Theory, vol. 42, no. 6, pp. 2221–2228, Nov. 1996.
[20] T. Veugen, "Multiple-Repetition Coding for Channels With Feedback," Ph.D. dissertation, Eindhoven Univ. Technology, Eindhoven, The Netherlands, 1997.
[21] T. Veugen, "Choosing the parameters of multiple-repetition strategies," Eur. Trans. Telecommun., vol. 18, no. 3, pp. 245–252, 2007.
[22] A. J. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inf. Theory, vol. 13, no. 2, pp. 260–269, Apr. 1967.
[23] K. S. Zigangirov, "Recurrent transmission through the binary symmetric channel with feedback," Probl. Control Inf. Theory, vol. 6, no. 4, pp. 189–205, 1977.

Thijs Veugen received two Master of Science degrees from the Eindhoven University of Technology, The Netherlands, in 1991, passing both Mathematics and Computer Science with distinction. He also obtained a Ph.D. degree in Information Theory from the same institute in 1997. After three years at Statistics Netherlands, he started working for TNO, a Dutch organization for applied scientific research, where he is still working in the field of Information Security. Since 2008, he has also held a position with the Multimedia Signal Processing Group of the Delft University of Technology as a researcher in Applied Cryptography. His research interests include information theory, cryptography, and information security.