Improving TCP Performance over Last-hop Wireless Networks for Live Video Delivery

Igor Radovanović, Richard Verhoeven, Johan Lukkien

Abstract — This paper presents a link layer protocol that enhances the performance of the Transmission Control Protocol (TCP) over a last-hop wireless network (LHWN), making it more suitable for the delivery of live video with a short playout deadline. The protocol provides preference control over the transmission of video data within the network, and it maximizes network goodput (the amount of useful video data delivered successfully) by maximizing the reliability of transmission for a given playout deadline and a varying digital bandwidth. The protocol makes use of the skip function in the TCP receiver [1], [2] to maximize the end-to-end throughput of the connection at the expense of full reliability. It can be implemented in a wireless access point to improve the quality of live video received over an Internet connection to an office or home. The main ideas of the protocol are: hide packet losses from the sender by performing local retransmissions based on information from the video application and according to a certain scheduling policy, drop stale segments before they arrive at the receiver, and prevent end-to-end retransmissions of segments already received by the access point. The performance of the TCP Reno protocol [3] in combination with both the presented protocol and the skip function in the receiver is compared against the performance of TCP Reno [3], TCP-RTM [2], TCP Reno in combination with Snoop [4], and TCP-RTM in combination with Snoop. Simulation results show an improvement in network goodput across a wide range of packet-error rates.

Index Terms — TCP/IP, last-hop wireless, live video.

I. INTRODUCTION

Live digital video transmission is usually done over dedicated broadcast networks (cable, terrestrial, satellite) with mostly deterministic performance. Recent technological improvements in the form of broadband computer access networks, operating systems with real-time support, and more intelligent, resource-enriched terminals have enabled streaming of live video over non-dedicated computer networks, viz., the Internet. If those networks include wireless access parts, they also have nondeterministic performance [5]. Resources in those networks (e.g. bandwidth and buffer space)

Authors are with the Eindhoven University of Technology, 5600MB, Eindhoven, The Netherlands, (e-mails: [email protected], [email protected], [email protected]). This work was partly funded by the Freeband project I-SHARE [http://www.freeband.nl/project.cfm?id=520].

are shared among different types of traffic, viz., data, voice, and video. Allocating resources to specific traffic or types of traffic is difficult or even impossible in the current Internet [6]. But even if resource reservation could be guaranteed, no performance guarantees can be given in view of wireless access: it is not possible to predict the actual wireless bandwidth availability due to interference and terminal movements.

Generally speaking, in order to design a transport system that enables live video streaming, it is necessary to identify system design requirements based on live video properties. The most important live video properties are delay sensitivity and loss tolerance [7]. The former includes sensitivity to jitter and the fact that the validity period of the video data is determined by the playout deadline. The playout deadline specifies the time instant at which the video data has to be displayed or otherwise must be skipped. It determines the maximum playout delay of the video data, which is the time elapsed between the generation and the display of the video data [7]. The loss-tolerance property, on the other hand, gives freedom to the design of the transport system: certain types of losses causing glitches in the video playback can often be partially or fully concealed [7].

The design requirements posed on the transport system include functional and extra-functional parts. A functional requirement is simply the transport of the video data from the sending to the receiving application. An extra-functional requirement is that transmitted video data arrive at the receiver before the given playout deadline. In order to quantify this, we define the normalized goodput as the ratio of the amount of video data received before their playout deadline to the amount of video data transmitted. Multiplying the normalized goodput by the video bit rate yields the network goodput. In principle, this normalized goodput must be equal to one.
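These two definitions translate directly into a short computational sketch. This is a hedged illustration; the function and variable names are ours, not from the paper or any standard API:

```python
def normalized_goodput(bits_in_time, bits_transmitted):
    """Fraction of the transmitted video data that arrived before its playout deadline."""
    if bits_transmitted == 0:
        return 0.0
    return bits_in_time / bits_transmitted

def network_goodput(bits_in_time, bits_transmitted, video_bit_rate_bps):
    """Network goodput in bits per second: normalized goodput times the video bit rate."""
    return normalized_goodput(bits_in_time, bits_transmitted) * video_bit_rate_bps

# For a 3 Mbps stream where 2.7 of every 3.0 Mbit arrive in time,
# the normalized goodput is 0.9 and the network goodput is 2.7 Mbps.
```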
However, due to the loss-tolerance property of video, this ratio can be less than one while still achieving a quality similar to that of the original video. This is used here to improve the design of the transport system for the case of a last-hop wireless network (LHWN). There are several reasons for video data not making their playout deadlines, viz., queuing delays, errors in the transmission, and the delay added for recovery of those errors. Ideally, both the playout delay and the amount of missing video data should be minimal. The amount of missing video data drops with an increased playout delay, as there is more time available to recover it. Therefore, to recover more video data, the playout delay should be as large as possible, which contradicts the previous requirement. An optimal solution is found in the tradeoff between the two. This tradeoff is studied in this paper. More precisely, the functionality of the presented protocol is to minimize the amount of lost video data for a given playout deadline.

If the network goodput of the LHWN is lower than the live video bit rate, losses occur. To adapt to this network goodput variation (i.e. losses), adaptable video encoders are mostly used. These, however, have some disadvantages explained later. In addition, techniques are used in the application layer to indicate the relative importance of video data [8] and, by using feedback from the transport system, to decide which data are actually handed over for transmission. The reason for this is that the degradation of received video quality for the same amount of losses in the transport system may differ due to inter-frame error propagation in predictive video coding [9], [10]. As a result, video data containing information that is more critical for the received video quality should have preference over data containing less critical information. If the variation in network goodput happens on a small, millisecond time scale, using adaptable video encoders is not enough to maximize the quality of the received video. What needs to be done in addition is to indicate the relative importance of video data within the transport system and to control the transmission of those data based on this indication. This paper focuses on the latter, within the transport system.

In this paper, we use a last-hop wireless network with the TCP/IP protocol suite [3] as the transport system, and the Transmission Control Protocol (TCP) as the transport protocol. The main reason for choosing TCP rather than the User Datagram Protocol (UDP) [3], which is currently the most widely used, is that in case of losses, a better quality of the received video can be achieved thanks to the error recovery mechanism in TCP, assuming that the playout deadline is large enough to allow this.
Other reasons are that the majority of Internet traffic today is TCP based [14] and that many firewalls block UDP traffic. Hence, rather than extending UDP with recovery mechanisms, we adapt TCP towards dealing with partial reliability [2].

We assume in this work that application layer framing is used, such that each TCP segment contains an integral number of application layer frames (typically one), and that each application level frame is sent to the transport layer immediately after the video data are generated (i.e. the Nagle algorithm is turned off) [2]. We also assume that the information about the importance of video data for the received video quality is conveyed in the TCP segments in the form of the playout deadline. The bits of a video frame that are put in a single TCP segment for transmission do not have different playout deadlines. Further, we assume that the quality of the displayed video increases with the amount of video data received and processed before their playout deadline [13]. Other assumptions are that the delay in the wireless part is much smaller than the delay in the wired part, that the (erroneous) wireless link is the connection bottleneck, and that the clocks in the sender and the access point are synchronized.

The rest of the paper is organized as follows. In Section II, a short description of the last-hop wireless network is given, as well as the limitations of such a network for live video streaming when the TCP/IP protocol suite is used. Section III describes the conceptual problems of streaming live video over an LHWN and provides conceptual solutions. Existing protocols that could be used for live video streaming, and their limitations, are presented in Section IV. Section V describes the SPLADE protocol, which enhances the performance of the TCP connection over an LHWN to make it more suitable for live video streaming. The simulation setup and results are presented in Section VI. Finally, Section VII rounds up the paper with conclusions.

II. LAST-HOP WIRELESS NETWORKS

We define the last-hop wireless network (LHWN) as a concatenation of a wired network and a single-hop wireless network, with the sender placed at the ingress of the wired part and the receiver placed at the egress of the wireless part. To avoid confusion, we define a wired network as a network not containing a wireless part. An example of an LHWN is shown in Fig 1.

Fig 1. An example of a last-hop wireless network: an Internet connection that terminates in a wireless access point and a laptop equipped with a wireless card.

To analyze the suitability of an LHWN for live video streaming, it is important to examine its extra-functional properties and to determine whether those match the extra-functional properties of a live video transport system in general. The main focus is on reliability and delay, as those two are related to the live video properties: loss tolerance and delay sensitivity. Reliability refers to data being delivered correctly and in order [7], [15]. As video data can be lost in both the wired and the wireless parts of the network, the end-to-end connection is unreliable. The main causes of losses are buffer overflows in the former, and interference and fading in the latter, assuming that all hardware components are fully reliable. The delay incurred in the wired part of the LHWN is more than an order of magnitude larger than the delay in the wireless part [18]. The video data in the wired part have to pass through interconnecting devices, which increases both the processing and

2. Live video streaming is video that is constantly received by, and normally displayed to, the end user while it is being delivered by the sender.
3. Processing delay is the time required to examine the packet's header and determine where to direct the packet [23].

queuing delay. In addition, the physical distance is most often larger in the wired part, leading to an increase in the propagation delay. The processing delay is assumed to be constant and known in advance. The transmission delay depends on the digital bandwidth in both the wired and the wireless part.

Besides the physical properties of the network, the extra-functional properties of the used protocols have to be analyzed. This pertains to the TCP protocol, the Internet Protocol (IP), and the Medium Access Control (MAC) protocols. TCP is by definition fully reliable, and its reliability cannot be changed if the protocol is used in its original form. It trades delay for reliability; in case of packet losses it also decreases the bandwidth of the sent data stream through its congestion control mechanism. These are drawbacks, since we need only limited reliability and short, controlled delays, while losses in the wireless part should not be interpreted as congestion. Hence, we need changes in the TCP protocol; but, as a system requirement, we want to keep these changes minimal and confined to one device, without affecting the others.

Analyzing the TCP/IP protocol suite further, we observe that the IP protocol is unreliable. Popular MAC protocols like the Ethernet protocols are also unreliable. On the other hand, wireless protocols using a retransmission mechanism provide reliability, but again at the expense of increased delay. Generally, it is the task of the TCP protocol to improve the reliability of the end-to-end connection on top of the unreliable MAC and IP layers. However, reliability can also be provided and controlled in the lower layers, especially in the wireless part, which is more prone to errors [17]. This is explored in the work presented here, including the lower-layer interaction with the TCP protocol. We want to use reliability control in the lower layers to maximize network goodput for a given playout deadline.
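As a hedged illustration, the four per-link delay components discussed above (processing, queuing, transmission, propagation) can be combined as follows; the parameter names and example values are ours, not from the paper:

```python
def one_way_link_delay(packet_bits, bandwidth_bps, distance_m,
                       propagation_speed_mps=2.0e8,
                       processing_s=0.0, queuing_s=0.0):
    """Total one-way delay over a single link: processing + queuing
    + transmission (time to push all bits onto the link)
    + propagation (time for a bit to travel along the link)."""
    transmission_s = packet_bits / bandwidth_bps
    propagation_s = distance_m / propagation_speed_mps
    return processing_s + queuing_s + transmission_s + propagation_s

# A 1500-byte (12000-bit) packet on an 11 Mbps 802.11b link over 30 m:
# transmission (~1.09 ms) dominates propagation (~0.15 microseconds).
```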
The challenge is to have this network goodput value approach the network throughput value.

III. PROBLEM ANALYSIS AND CONCEPTUAL SOLUTION

We proceed by analyzing the problem of maximizing network goodput for TCP over an LHWN and by proposing a conceptual solution.

4. Queuing delay is the time that a packet waits in the queue to be transmitted across a link [23].
5. Propagation delay is the time required for a bit to propagate from the beginning to the end of the link [23].
6. Transmission delay is the amount of time required to transmit all of the packet's bits into the link [23].
7. A Medium Access Control (MAC) layer provides, amongst other things, addressing and channel access control mechanisms that make it possible for several end nodes to communicate within a multipoint network.
8. A congestion control mechanism is used to control traffic entry into the network in order to avoid situations in which little or no useful communication happens in the network.
9. The Ethernet protocol is standardized as IEEE 802.3. It is the most widely used protocol in wired Local Area Networks (LANs).
10. Network throughput is defined as the average rate of successful message delivery over the network.

A. Maximizing goodput

In order to maximize network goodput, it is necessary to analyze how it depends on the extra-functional properties of the transport system. Network goodput is determined by:
1. LHWN network resource availability,
2. the reliability of the LHWN, determined by physical properties and protocol interactions, and
3. the playout deadline.
The network resources are buffer space and digital bandwidth. Our assumption is that there is enough buffer space available in the access point to avoid uncontrolled losses of the video data, i.e., the access point can store sufficient data to admit a choice of which data to forward. The maximum value of the digital bandwidth is determined by the MAC layer protocol specification. It is known that reliability affects the delay in the network [19] and is usually determined from the reliability-delay tradeoff. The delay, in turn, is determined from the delay-throughput tradeoff [7]. For live video streaming, the upper bound of the delay equals the playout deadline. Therefore, there is a limit to the reliability, given by the playout deadline. Within the boundaries of this deadline we therefore:
1. maximize network throughput using the reliability-throughput tradeoff, typically by obtaining a higher network throughput while accepting a lower reliability, and then
2. make network goodput as close as possible to network throughput by explicitly selecting the data that are lost as a result of the decreased reliability.
In order to do this, we need control over the reliability of the used protocols. As a first problem, for TCP no tradeoff can be made with respect to either network throughput or goodput, i.e., TCP supports neither increasing network throughput by decreasing reliability nor indicating which data can be dropped for such decreased reliability.
As a result, the throughput of a TCP connection is already low (hence, the delay is high) for small error rates (see Section VI), as a consequence of the TCP congestion control mechanism. In addition, network goodput can be much smaller than network throughput. This is the result of handling data after their deadlines, and of the end-to-end nature of the retransmissions: data that have already arrived at the access point are retransmitted across the wired part as well.

The second problem concerns the control of the reliability of the wireless link. Most of the protocols from the 802.11 family include a retransmission mechanism for error recovery, provided they do not operate in broadcast mode. However, this mechanism does not provide control of retransmissions on a per-packet basis, which is needed if network goodput is to be made as close as possible to network throughput. In most contemporary wireless cards, the number of retransmissions can only be set prior to their operation. The consequence is that network goodput can be much lower than network throughput, as stale segments may be (re)transmitted before valid ones, introducing extra delay for the valid segments, which in turn have less time to arrive at the receiver before their playout deadline. Concluding, the problem we address appears in two layers of the TCP/IP protocol suite: it is the lack of transport reliability control, which boils down to a lack of control of retransmissions on a per-packet basis in the wireless part and on a per-segment basis in the overall transport part.

B. Conceptual solution

The conceptual solutions to the two problems are to:
1. modify TCP in order to introduce control of end-to-end transport reliability at runtime, and
2. modify the protocols in the wireless part to introduce control of point-to-point transport reliability at runtime.
The implementation of the first conceptual solution, which also maximizes the throughput of the TCP connection, is to avoid invoking the TCP congestion control mechanism in those cases where it is pointless. This concerns the following:
a. Segment losses in the wireless part. As the delay in the wireless part is an order of magnitude smaller than the delay in the wired part, a large number of local retransmissions can be done in the time of just one end-to-end retransmission.
b. Losses of stale segments. Retransmission of stale segments is a waste of network resources.
To address point a., any duplicated acknowledgement coming from the receiver (which is an implicit segment retransmission request) concerning segments that have already arrived at the access point should be caught by the access point, to avoid an end-to-end segment retransmission. To address point b., stale segments must be acknowledged by the TCP receiver.
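The intuition behind point a. can be made concrete: the slack left by the playout deadline after the wired delay bounds the number of local retransmission rounds. The sketch below is our own back-of-the-envelope model (all arguments in the same time unit), not part of any protocol specification:

```python
import math

def local_retx_budget(playout_deadline, wired_delay, wireless_rtt):
    """Number of local (access-point) retransmission rounds that fit before
    the playout deadline: the wired one-way delay is paid once to reach the
    access point, and each local round costs roughly one wireless RTT."""
    slack = playout_deadline - wired_delay
    if slack <= 0:
        return 0
    return math.floor(slack / wireless_rtt)

# With a 150 ms deadline, 40 ms wired delay and a 2 ms wireless RTT,
# 55 local rounds fit in the slack, whereas a single end-to-end
# retransmission costs at least one wired round trip on its own.
```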
In all other cases of segment loss, the TCP source must indeed adapt the bandwidth of the stream to mitigate congestion problems in the wired part, sacrificing throughput. We believe that this should be done to facilitate wide acceptance of the proposed protocol. Note that we assume that the wireless link is mostly the bottleneck of the resource provisioning in the whole connection [4]. The protocol across the wireless link should perform the following tasks:
1. Control retransmissions on a per-packet basis,
2. Drop stale packets before they arrive at the receiver, and

11. An access point is, in this context, a device that connects wireless communication devices to the wired part of the network.
12. A segment is a transport-layer packet formed by breaking video frames into smaller chunks and attaching headers to those chunks [23].
13. A duplicated acknowledgement is a repeated acknowledgement of the same segment. It is used in TCP protocol implementations as an implicit request for retransmission of the missing segment.

3. Hide duplicated acknowledgements from the TCP sender in order to hide losses in the wireless part.

IV. RELATED WORK

Several protocols have already been developed to solve some of the problems presented so far. To provide partial reliability in TCP, the TCP-RTM protocol can be used [2]. This protocol is a real-time extension of TCP based on a skip function implemented in the receiver. This function is used to acknowledge missing stale segments. When an acknowledgement for a stale segment is received, the sender removes the stale segment from its retransmission buffer and instead starts sending fresh ones. The congestion window size increases, inherently increasing network goodput, especially for small values of the playout deadline. The disadvantage of this protocol is that stale segments may still be transported across the wireless part, delaying the transmission of valid segments, which in turn might become stale due to this delay. Other disadvantages are the waste of bandwidth and the additional delay due to end-to-end retransmissions, even though the losses take place in the last hop.

The Snoop protocol [4] can be used to perform retransmissions across the wireless part, in order to improve the throughput and goodput of TCP-based protocols. It is a link layer protocol that uses knowledge of the higher layer transport protocol (TCP) to perform retransmissions. Snoop hides losses in the wireless part from the sender by blocking duplicated acknowledgements. In this way it tries to substitute local retransmissions for end-to-end ones, such that data lost in the wireless part do not need to be retransmitted both end-to-end and locally. Nevertheless, this cannot be avoided if the timer at the TCP sender expires due to a large delay caused by the local retransmissions. The drawback of this protocol is that it has no notion of timeliness, which makes it not directly suitable for live video streaming. To the best of our knowledge, no one has published the performance of a combination of the Snoop and TCP-RTM protocols over an LHWN.
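For clarity, the receiver-side skip function can be sketched as follows. This is our own simplified model (segments as consecutive integers, absolute deadlines), not the actual TCP-RTM implementation:

```python
def skip_ack(next_expected, buffered, deadlines, now):
    """Compute the cumulative ACK with the skip function: if the missing
    segment at next_expected is already stale (its playout deadline has
    passed), acknowledge past it, so that the sender drops it from its
    retransmission buffer and sends fresh segments instead.

    buffered: set of sequence numbers already received.
    deadlines: dict mapping sequence number -> absolute playout deadline.
    """
    ack = next_expected
    # Skip over stale holes.
    while ack not in buffered and deadlines.get(ack, float("inf")) <= now:
        ack += 1
    # Then acknowledge any contiguous run of buffered segments.
    while ack in buffered:
        ack += 1
    return ack
```

With segments 1, 2, 4, 5 buffered and segment 3 missing, the receiver keeps requesting 3 while its deadline lies in the future; once the deadline passes, the cumulative ACK jumps to 6 and the sender moves on.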
In this paper we study the performance of these combined protocols through simulations, comparing it with the performance of TCP Reno in combination with the SPLADE protocol (presented below) and the skip function in the receiver. SPLADE stands for Short PLAy-out DEadline. The obtained simulation results show a substantial goodput improvement for the latter combination of protocols.

V. SPLADE PROTOCOL DESCRIPTION

SPLADE is a link layer protocol that uses knowledge of the higher layer transport protocol (TCP) to provide control over video data transmission across the wireless part. Moreover, it maximizes the goodput of the end-to-end connection in an LHWN, making it more suitable for live video streaming.

14. The TCP Reno protocol is an improvement of the original TCP protocol. It uses the Fast Recovery mechanism [24] to improve network throughput.

One piece of knowledge from the transport layer is conveyed in the form of the playout deadlines stored in the TCP options field of the TCP header. The other piece of knowledge is the sequence numbering in both TCP data segments and acknowledgements. The SPLADE protocol can be implemented entirely in the wireless access point. To maximize the goodput of the TCP connection, SPLADE relies on TCP's cumulative acknowledgements and on the skip function in the receiver [1], [2]. Since no skip function exists in the standard TCP protocol, modifications are also required in the receiver. No modifications are required in the wired part of the LHWN or in the sender. The main functionalities of the protocol are:
• caching data packets to enable local retransmissions,
• retransmitting data packets over the wireless part according to a certain scheduling policy,
• determining the validity of data packets that are to be transmitted across the wireless part,
• estimating the RTT of the wireless part,
• dropping stale segments, to transmit useful data only,
• handling duplicate acknowledgements (explained below).
Once a packet is added to the cache, it is immediately forwarded to the receiver. If the acknowledgement for this packet does not arrive within 2 RTT seconds, retransmissions take place according to a certain policy: first-in-first-out (FIFO), earliest-deadline-first (EDF), or some other policy, chosen based on the type of application that has to be supported. EDF is used in the work presented here. Packets with a playout delay value close to the playout deadline have priority over packets with a playout delay much smaller than the playout deadline; we say that the validity of the latter packets is longer than the validity of the former. This validity is calculated from the timestamps in the TCP segments minus the local time at the access point, and includes the propagation, transmission, and processing delays.
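A hedged sketch of the validity computation and the EDF selection at the access point follows; the names and structure are ours, and the actual SPLADE implementation may differ:

```python
def packet_validity(playout_deadline, now, wireless_rtt_est, rx_processing):
    """Remaining validity of a cached packet: time left until its playout
    deadline, minus the expected one-way wireless delay (half the estimated
    wireless RTT) and the fixed processing delay at the receiver.
    A non-positive value means the packet is stale."""
    return (playout_deadline - now) - (wireless_rtt_est / 2 + rx_processing)

def edf_pick(cache, now, wireless_rtt_est, rx_processing):
    """Earliest-deadline-first: among the still-valid cached packets,
    (re)transmit the one with the smallest remaining validity.
    cache maps sequence number -> playout deadline; returns None when
    every cached packet is stale."""
    validities = {seq: packet_validity(dl, now, wireless_rtt_est, rx_processing)
                  for seq, dl in cache.items()}
    valid = {seq: v for seq, v in validities.items() if v > 0}
    return min(valid, key=valid.get) if valid else None
```

Stale packets (validity at or below zero) are simply dropped from consideration, which matches the "drop stale segments" functionality listed above.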
It is assumed here that the clocks in the sender and the access point are synchronized. If this is not the case, the validity of packets in the access point can only be estimated.

In the access point, the SPLADE protocol estimates the delay in the wireless part by estimating the RTT value across the wireless part. Half of this estimated value, plus the fixed processing delay in the receiver, is used to determine the validity of a packet. RTT estimation is usually based on an infinite impulse response filter, in which the old RTT estimate and a new RTT measurement are combined linearly. However, when the skip function is used in the TCP receiver, under high error rates the acknowledgements will be sent based on the playout deadline rather than on the actual receipt of segments, and the RTT of the wireless link will be overestimated. Therefore, another RTT estimation algorithm is used to ensure that local retransmissions remain possible. This algorithm updates the RTT estimate based on the minimum RTT measured during a short period. If this minimum is significantly larger than the current RTT estimate, the RTT estimate is multiplied by a factor; otherwise, this minimum is used as the new RTT estimate.

The SPLADE protocol distinguishes between TCP segments lost in the wired part and those lost in the wireless part of the connection. Duplicate acknowledgements (DupACKs) from the receiver requesting retransmission of cached segments are not propagated to the TCP sender, while DupACKs requesting retransmission of segments not present in the cache are forwarded to the TCP sender.

VI. SIMULATION RESULTS

We performed extensive simulations using the ns-2 simulator [21]. The goal is to compare the goodput of an end-to-end connection over the LHWN when TCP Reno is used in combination with the SPLADE protocol and the skip function in the receiver against the goodput of the same connection when the benchmark protocols are used. The benchmark protocols are: a) TCP Reno in combination with IEEE 802.11b, b) TCP-RTM in combination with IEEE 802.11b, c) TCP Reno in combination with Snoop, and d) TCP-RTM in combination with Snoop. Throughput figures are analyzed as well, in order to check the fulfillment of the requirement that goodput values must be close to throughput values (see Subsection III-A). The network topology used in the simulation is shown in Fig 2. The application generates a 3 Mbps constant bit rate stream from a sender to a receiver.

Fig 2. Network topology of the simulation: sender, 100 Mbps 802.3 wired links (with cross traffic), access point, and an 11 Mbps 802.11b wireless link to the receiver.

The information about the importance of video data for the received video quality is conveyed in the TCP segments in the form of the playout deadline. Mapping this importance to playout deadlines is outside the scope of this paper. Application layer framing is used, such that the bits of a video frame that are put in a single TCP segment for transmission do not have different playout deadlines. Without loss of generality, the playout deadline of each consecutive TCP segment is chosen to be a fixed time interval larger than the playout deadline of the previous segment. This choice reflects the specific nature of video traffic. The playout deadline of the first segment in the stream is chosen to be equal to this fixed time interval. The consequence of this choice is discussed later in this section.

The digital bandwidth of the wired part is 100 Mbps (Fast Ethernet), and that of the wireless part is 11 Mbps (IEEE 802.11b). Errors in the network occur only in the wireless part. They are modeled as independent Bernoulli trials, which gives a uniform distribution of errors over time, on both the forward and backward routes. The number of retransmissions performed by the IEEE 802.11b protocol is set to 0 in the cases where end-to-end throughput and goodput are determined with the TCP protocol in combination with the Snoop or SPLADE protocols, as those protocols are in control of retransmissions across the wireless part. In all other cases, the number of retransmissions is set to 4. RTS and CTS packets are not used in any of the cases. To allow the TCP congestion window to grow, errors start occurring after one second of simulation time.

The simulation is divided into a startup phase and a stable phase. The stable phase starts from the moment the goodput value of the end-to-end connection reaches the video bit rate (3 Mbps). All goodput and throughput figures of the end-to-end connection are presented for the stable phase only. This is because during the startup phase, the goodput value is very low, as no segment (after the first one) reaches the receiver before its playout deadline. This effect is most apparent when the playout deadline is close to the total network delay. Nevertheless, some analysis of the startup phase is also done.

Two cases are simulated. In the first case, a constant bit rate stream represents live video conference traffic over the LHWN, whereas in the second case it represents live video broadcast traffic over the LHWN. The playout deadline is chosen to be 0.15 s [7] and 0.8 s for the first and second case, respectively. Within the two cases, two subcases are considered, concerning the value of the total (i.e. end-to-end) delay in the network. In the first subcase, this value is chosen to be 40 ms [22], which is typical for the case where the video source is within the same Internet Service Provider's network, or within the same country, as the video sink.
In the second subcase, this value is chosen to be 120 ms, which is typical for the case where the video source is on the other side of the continent, or on a different continent, with respect to the video sink. In both subcases it is assumed that the delay in the wireless part is fixed (around 2 ms) and much smaller than the delay in the wired part. The results of the simulations are shown in Figs. 3 to 10.
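The error model used in the simulations, an independent Bernoulli trial per packet, can be sketched as follows; this is our own illustration, and the actual ns-2 error module differs:

```python
import random

def apply_bernoulli_errors(packets, packet_error_rate, rng=None):
    """Drop each packet independently with probability packet_error_rate,
    i.e. one Bernoulli trial per packet, giving errors spread uniformly
    over time; the same model applies on the forward and backward routes."""
    rng = rng or random.Random()
    return [p for p in packets if rng.random() >= packet_error_rate]

# With packet_error_rate = 0.1, roughly 10% of packets are lost on average.
```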

Fig 3. Throughput of the end-to-end connection for the playout deadline of 0.15 sec and the total network delay of 0.12 sec.

Fig 4. Goodput of the end-to-end connection for the playout deadline of 0.15 sec and the total network delay of 0.12 sec.

Fig 5. Throughput of the end-to-end connection for the playout deadline of 0.8 sec and the total network delay of 0.12 sec.

Fig 6. Goodput of the end-to-end connection for the playout deadline of 0.8 sec and the total network delay of 0.12 sec.

Fig 7. Throughput of the end-to-end connection for the playout deadline of 0.15 sec and the total network delay of 0.04 sec.

Fig 8. Goodput of the end-to-end connection for the playout deadline of 0.15 sec and the total network delay of 0.04 sec.

Fig 9. Throughput of the end-to-end connection for the playout deadline of 0.8 sec and the total network delay of 0.04 sec.

Fig 10. Goodput of the end-to-end connection for the playout deadline of 0.8 sec and the total network delay of 0.04 sec.
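Before discussing these results, the access-point behaviour under test (caching segments for local retransmission and dropping stale ones before they cross the wireless link) can be illustrated with a minimal sketch. The class, its fields, and the deadline check are hypothetical simplifications for illustration, not SPLADE's actual implementation.

```python
from collections import deque

class SpladeQueueSketch:
    """Simplified sketch of the SPLADE buffering idea: the access point
    caches segments, retransmits them locally on loss, and drops any
    segment whose playout deadline has already passed.  All names here
    are hypothetical; this is an illustration, not the real protocol."""

    def __init__(self, playout_deadline):
        self.playout_deadline = playout_deadline
        self.queue = deque()          # (seq, enqueue_time, data)

    def enqueue(self, seq, data, now):
        """Cache a segment arriving from the wired part."""
        self.queue.append((seq, now, data))

    def next_transmission(self, now):
        """Return the next segment worth (re)transmitting, silently
        discarding stale segments so that they never consume wireless
        bandwidth or trigger end-to-end retransmissions of useless data."""
        while self.queue:
            seq, t0, data = self.queue[0]
            if now - t0 > self.playout_deadline:   # stale: drop, do not send
                self.queue.popleft()
                continue
            return seq, data
        return None

    def on_link_ack(self, seq):
        """Segment delivered over the wireless hop: remove it from the
        cache so it is not retransmitted again, locally or end-to-end."""
        self.queue = deque(item for item in self.queue if item[0] != seq)
```

A segment is thus retransmitted locally until either the wireless link acknowledges it or its deadline expires, whichever comes first.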

As the figures show, TCP Reno in combination with both SPLADE and the skip function in the receiver provides much more stable values of both throughput and goodput of the end-to-end connection in all cases. Its goodput is higher than in all other cases used for benchmarking. Moreover, it is close to throughput, which was one of the design requirements. Throughput is smaller only in the case where TCP-RTM is used in combination with IEEE 802.11b with the number of retransmissions set to 4. From Figs. 7 and 9 it can be concluded that TCP Reno performs well when end-to-end retransmissions are possible within the valid time. The difference between the playout deadline and the delay in the wired part determines how often SPLADE is able to retransmit a packet, which directly relates to how well it handles different error rates. For high error rates, the retransmissions are limited by the digital bandwidth of the wireless part, and the skip function limits the time available to perform them.

The drop in the throughput and goodput of the end-to-end connection when the TCP-RTM protocol is used in combination with Snoop is due to a bad estimate of the RTT value across the wireless part. The Snoop protocol was not designed to take into account acknowledgments, sent as a consequence of the skip function in the receiver, for packets that have not actually been received. The consequence of the bad RTT estimate is that many stale packets are buffered in the access point and (re)transmitted over the wireless part. There are additional reasons for this drop. At a low error rate, the Snoop protocol forwards duplicate acknowledgments to the sender when the missing segment is not buffered in the access point. As a result, the sender reduces the TCP congestion window and introduces additional buffering delay. When the error rate becomes larger, the TCP-RTM protocol becomes more dominant at the receiver and reduces the number of duplicate acknowledgments. As a result, there are fewer retransmissions of stale segments (packets) across the wireless part, and the behavior of the combined protocol approaches that of UDP, where goodput decreases linearly with the error rate.

The playout deadline directly determines the amount of buffering that has to be provided at the access point for the cache on the one side, and at the receiver for the playout on the other. When the playout deadline is larger than 3RTT/2, the amount of buffering may become too large, especially at the access point. This amount can be limited by sending selective acknowledgments from the receiver, so that the access point can drop delivered segments and selectively retransmit only the missing ones. As this would further improve the retransmission algorithm, it would probably improve the SPLADE protocol.

Note that the previous graphs show the performance of the protocols during the stable phase, but ignore possible issues during the startup phase. The slow start of TCP might delay the initial part of the data stream due to buffering at the sender caused by the limited size of the TCP congestion window. For the protocols and network conditions simulated, the duration of the startup phase is determined. Since goodput is the essential measure for comparing the different protocol combinations, the end of the startup phase is defined as the first time the goodput value reaches the transmitted video bit rate. These startup times, obtained from the simulations, are shown in Table I and Table II.

TABLE I
DURATION OF STARTUP PHASE IN SECONDS FOR 0.0 PER

                                 Playout deadl. 0.15 s   Playout deadl. 0.8 s
Protocol                            40 ms     120 ms       40 ms     120 ms
TCP Reno                             0.6       14.8         0.0       12.9
TCP-RTM                              0.6        8.5         0.0        1.6
TCP Reno + Snoop                     6.6        n.a.        0.6        n.a.
TCP-RTM + Snoop                      0.6        9.7         0.4        4.4
TCP Reno + SPLADE + skip funct.      0.6        2.7         0.0        1.6

TABLE II
DURATION OF STARTUP PHASE IN SECONDS FOR 0.04 PER

                                 Playout deadl. 0.15 s   Playout deadl. 0.8 s
Protocol                            40 ms     120 ms       40 ms     120 ms
TCP Reno                             0.6       15.7         0.0       13.7
TCP-RTM                              0.6       13.4         0.0        1.6
TCP Reno + Snoop                    45.6        n.a.       80.4        n.a.
TCP-RTM + Snoop                      0.6        8.5        20.7        6.1
TCP Reno + SPLADE + skip funct.      0.6        2.7         0.0        1.7

From these tables, it is clear that the fully reliable transport protocols have a very long startup time, caused by slow window-size growth and a delay due to buffering. When the playout deadline is larger than 3RTT/2, the TCP congestion window grows fast enough to deliver almost all the data before the deadline.

VII. CONCLUSION

The SPLADE protocol, which enables live video streaming over a last-hop wireless network using the TCP/IP protocol suite, is presented. The protocol provides control over preferences of video data transmission across the wireless part of the LHWN and maximizes goodput of the TCP Reno protocol. It is used in combination with the skip function implemented in the TCP receiver. Simulations show that the TCP Reno protocol in combination with both SPLADE and the skip function provides higher goodput than the other protocols used for benchmarking. Moreover, goodput of the end-to-end connection is maximized and is close to throughput. The values of both goodput and throughput are stable across a wide range of error rates. By putting this protocol into an access point, much more stable and reliable video streaming can be obtained than with any of the currently used protocols. To enable live video streaming, modifications are required in the wireless access point (SPLADE) and in the TCP receiver (skip function). No modifications are required in the wired part.

ACKNOWLEDGMENT

We thank Dr. Tanir Ozcelebi for his comments and suggestions regarding the video coding and adaptation part.

REFERENCES

[1] T. Hasegawa, T. Kato, K. Suzuki, "A video retrieval protocol with video data prefetch and packet retransmission considering play-out deadline," in Proc. IEEE Int. Conf. on Network Protocols, pp. 32-39, Oct.-Nov. 1996.
[2] S. Liang, D. Cheriton, "TCP-SMO: extending TCP to support medium-scale multicast applications," in Proc. IEEE INFOCOM 2002, vol. 3, pp. 1356-1365, June 2002.
[3] W. R. Stevens, TCP/IP Illustrated, Volume 1, Addison-Wesley, Reading, MA, Nov. 1994.
[4] H. Balakrishnan, S. Seshan, E. Amir, R. H. Katz, "Improving TCP/IP performance over wireless networks," in Proc. 1st ACM Int. Conf. on Mobile Computing and Networking (MobiCom), Nov. 1995.
[5] S. Liese, D. Wu, P. Mohapatra, "Experimental characterization of an 802.11b wireless mesh network," in Proc. 2006 Int. Conf. on Communications and Mobile Computing, pp. 587-592, July 2006.
[6] B. Teitelbaum, S. Shalunov, "What QoS research hasn't understood about risk," in Proc. ACM SIGCOMM Workshop on Revisiting IP QoS: What have we learned, why do we care?, pp. 148-150, Aug. 2003.
[7] J. F. Kurose, K. W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, 4th edition, Addison-Wesley, 2005, pp. 611-612.
[8] H.-C. Wei, Y.-C. Tsai, C.-W. Lin, "Prioritized retransmission for error protection of video streaming over WLANs," in Proc. 2004 Int. Symposium on Circuits and Systems (ISCAS '04), vol. 2, pp. 65-68, May 2004.
[9] D. Jarnikov, J. Lukkien, P. van der Stok, "Adaptable video streaming over wireless networks," in Proc. 14th Int. Conf. on Computer Communications and Networks (ICCCN 2005), Oct. 17-19, 2005.
[10] K. Stuhlmüller, N. Färber, M. Link, B. Girod, "Analysis of video transmission over lossy channels," IEEE Journal on Selected Areas in Communications, vol. 18, no. 6, pp. 1012-1032, June 2000.
[11] L. Zhao, C.-C. J. Kuo, "Buffer-constrained R-D optimized rate control for video coding," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Hong Kong, pp. III-89-III-92, Apr. 2003.
[12] S. Ma, W. Gao, F. Wu, Y. Lu, "Rate control for JVT video coding scheme with HRD considerations," in Proc. IEEE ICIP 2003, vol. 3, pp. III-793-III-796, Sep. 2003.
[13] G. J. Sullivan, T. Wiegand, "Rate-distortion optimization for video compression," IEEE Signal Processing Magazine, vol. 15, no. 6, pp. 74-90, Nov. 1998.

[14] A. Foronda, L. C. Pykosz, W. G. Junior, "A new schema congestion control to promote fairness in the Internet traffic," in Proc. AICT/ICIW 2006, pp. 22-27, Feb. 2006.
[15] R. Caceres, L. Iftode, "Improving the performance of reliable transport protocols in mobile computing," IEEE Journal on Selected Areas in Communications, vol. 13, no. 5, June 1994.
[16] V. Jacobson, "Congestion avoidance and control," in Proc. SIGCOMM '88, Aug. 1988.
[17] J. H. Saltzer, D. P. Reed, D. D. Clark, "End-to-end arguments in system design," ACM Trans. on Computer Systems, vol. 2, no. 4, pp. 277-288, Nov. 1984.
[18] http://www.internettrafficreport.com/faq.htm#response, 2007.
[19] V. Tsaoussidis, S. Wei, "Reliability/throughput/jitter tradeoffs for real-time transport protocols," in Proc. 20th IEEE Real-Time Systems Symposium (RTSS '99), Phoenix, Arizona, Dec. 1999.
[20] R. Ludwig, R. H. Katz, "The Eifel algorithm: making TCP robust against spurious retransmissions," SIGCOMM Comput. Commun. Rev., vol. 30, no. 1, Aug. 2000.
[21] The Network Simulator (ns-2). http://www.isi.edu/nsnam/ns/.
[22] G. Armitage, M. Claypool, P. Branch, Networking and Online Games, John Wiley & Sons, Ltd, 2006, Chapter 5.
[23] J. F. Kurose, K. W. Ross, Computer Networking: A Top-Down Approach, 4th edition, Addison-Wesley, 2008, pp. 34-35, 196.
[24] RFC 2001, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms."

Igor Radovanović was born in Peć, Serbia, in 1974. He received the B.Sc. and M.Sc. degrees in electrical engineering from the University of Niš, Serbia, in 1997 and 2000, respectively, and the PhD degree in Telecommunications from the University of Twente, The Netherlands, in 2003. After finishing his PhD, he joined the System Architecture and Networking group at the Eindhoven University of Technology, The Netherlands, as an Assistant Professor, a position he still holds. His research interests include computer and IP-based telecom networks, service-oriented architectures, and service quality management.

Richard Verhoeven was born in Veghel, the Netherlands, in 1969. He received the M.Sc. degree in Mathematics and Computer Science from the Eindhoven University of Technology in 1992, and the PhD degree in Computer Science from the same institute in 2000. After working for two years at the Eindhoven Embedded Systems Institute, he joined the SAN group as a researcher and systems engineer. His research interests include computer networks, video, service-oriented architectures, and wireless sensor networks.

Johan Lukkien has been head of the System Architecture and Networking research group at Eindhoven University of Technology since 2002. He received his M.Sc. and Ph.D. degrees from Groningen University in the Netherlands. In 1991 he joined Eindhoven University after a two-year stay at the California Institute of Technology. His research interests include the design and performance analysis of parallel and distributed systems. Until 2000 he was involved in large-scale simulations in physics and chemistry. Since 2000, his research focus has shifted to the application domain of networked resource-constrained embedded systems. Contributions of the SAN group are in the area of component-based middleware for resource-constrained devices, distributed coordination, Quality of Service in networked systems, and schedulability analysis in real-time systems.