Estimating Arrival Rates from the RED Packet Drop History

Sally Floyd, Kevin Fall, and Kinh Tieu
Network Research Group, Lawrence Berkeley National Laboratory, Berkeley CA

*** DRAFT *** April 6, 1998

This work was supported by the Director, Office of Energy Research, Scientific Computing Staff, of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098, and by ARPA grant DABT63-96-C-0105.

Abstract

This paper outlines efficient mechanisms to estimate the arrival rate of high-bandwidth flows for a router implementing RED active queue management. For such a router, the RED packet drop history constitutes a random sampling of the arriving packets; a flow with a significant fraction of the dropped packets is likely to have a correspondingly significant fraction of the arriving packets. In this paper we quantify this statement. We distinguish between two types of RED packet drops, random and forced drops, and show how the two types of drops should be used differently in estimating the arrival rate of a high-bandwidth flow.

1 Introduction

This paper describes an efficient mechanism for a router to identify and estimate the arrival rate of high-bandwidth flows in times of congestion, using the RED packet drop history. This work is in the context of the design of mechanisms to identify and restrict the bandwidth of high-bandwidth flows that are not using end-to-end congestion control in a time of high congestion [FF98]. In this context, there is no need to estimate the arrival rate of any flows in the absence of congestion, or in times of acceptably-low congestion. Similarly, in this context there is no need to estimate the arrival rate of any of the lower-bandwidth flows even in times of high congestion. All that is required is some mechanism to estimate the arrival rate of high-bandwidth flows in times of high congestion. RED queue management gives an efficient sampling mechanism and provides exactly the information needed for identifying high-bandwidth flows in times of congestion. This mechanism does not require the router to keep per-flow state for each active flow.

Keeping per-flow counters for packet arrivals for all active flows could be an unnecessary overhead for a router handling packets from a large number of very low-bandwidth flows. Our identification mechanism uses a periodic pass in the background over information about the packets dropped at the router by the RED queue management. The mechanism is independent of the granularity used to define a flow. One possibility would be for a router to define a flow by source and destination IP addresses. This would have the advantage of not being "fooled" by an application that breaks a single TCP connection into multiple connections to increase throughput. [FF98] discusses the negative impact of "breaking up" TCP connections on the general Internet. Another possibility for defining the granularity of a flow would be to use source and destination IP addresses and port numbers to distinguish flows. For IPv6 flows that do not use the IPv6 Encapsulating Security Header, routers could use the flow ID field to define some flows. Routers attached to high-speed links in the interior of the Internet might use a coarser granularity to define a flow, rather than have each TCP connection belong to a separate flow.

The identification mechanism in this paper assumes a router with RED queue management, and draws on the discussion in [Nai96] for identifying high-bandwidth flows from the RED packet drop history. Section 2 distinguishes between forced and random packet drops for RED queue management. Section 3 considers the number of packet drops that should be included in a single sample of packet drops to give a reasonable estimate of the arrival rate of the high-bandwidth flow in that sample. Section 4 defines both a packet and a byte drop metric, and shows that a mechanism for identifying high-bandwidth flows from RED packet drops should use the packet drop metric for random packet drops and the byte drop metric for forced packet drops. Section 4 also shows simulations illustrating the identification mechanism. Appendix B shows that for queues with Drop-Tail queue management, the history of packet drops does not give sufficiently reliable information for identifying high-bandwidth flows.


2 Forced and Random packet drops

This section distinguishes between forced and random packet drops. Definitions: forced and random packet drops. We say a packet drop is forced if a packet is dropped because either the FIFO buffer overflowed, or the average queue size estimated by RED exceeded the RED maximum threshold parameter maxthresh. Otherwise a packet drop is called random. Random packet drops are expected to represent the majority of all packet drops for a properly-configured RED gateway, and result from RED's probabilistic sampling of the arriving packet stream. When the average queue size exceeds some minimum threshold, indicating incipient congestion, RED queue management uses a random sampling method to choose which arriving packets to drop.

[FJ93] describes two variants of the RED algorithm. In packet mode, for a given average queue size, each arriving packet has the same probability of being dropped, regardless of the packet size in bytes. In byte mode, a packet's probability of being dropped is a function of its size in bytes. The simulations later in this paper use RED queue management in byte mode. RED in packet mode is preferable for those routers limited by the number of packets arriving from each flow, rather than the number of bytes. RED in packet mode would give flows an incentive to use larger packets. RED in byte mode is designed so that a flow's fraction of the aggregate random packet drops roughly equals its fraction of the aggregate arrival rate in bytes per second.[1] One motivation for the design of byte-mode RED comes from the operation of TCP congestion avoidance. TCP assumes that a single packet drop indicates congestion to the end nodes, regardless of the number of bytes lost in any dropped packet. Thus the goal of RED queue management in byte mode is to have each flow's fraction of the random congestion indications correspond to its fraction of the arriving traffic in bytes per second, regardless of how those bytes are grouped into packets. In contrast, the goal of RED queue management in packet mode is to have each flow's fraction of the random congestion indications correspond to its fraction of the arriving traffic in packets per second.
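For concreteness, the following minimal sketch contrasts the drop decision in the two modes (the parameter names, the fixed reference packet size, and the omission of RED's count-since-last-drop adjustment are simplifications of ours, not the algorithm as specified in [FJ93]):

import random

def red_drop_decision(avg_q, pkt_size, min_th=5, max_th=20, max_p=0.10,
                      byte_mode=False, ref_pkt_size=500):
    # Classify one arriving packet: 'forced' when the average queue size is at
    # or above max_th (buffer overflow is not modeled here), 'random' when the
    # packet loses the probabilistic coin flip, None when it is accepted.
    if avg_q >= max_th:
        return 'forced'
    if avg_q <= min_th:
        return None
    p = max_p * (avg_q - min_th) / (max_th - min_th)
    if byte_mode:
        # Byte mode: scale by packet size so that a flow's share of the random
        # drops tracks its share of the arriving bytes rather than packets.
        p *= pkt_size / ref_pkt_size
    return 'random' if random.random() < min(p, 1.0) else None

In packet mode a 190-byte and a 2000-byte packet face the same drop probability; in byte mode the 2000-byte packet is proportionally more likely to be chosen.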

3 The number of packet drops for a single sample

This section considers the number of packet drops that should be included in a single sample or reporting interval of packet drops to allow a reasonable estimate of the arrival rate of the high-bandwidth flow in that sample. For the purposes of this section, we assume a RED queue in packet mode with a fixed average queue size, where each arriving packet has the same fixed probability p of being dropped. Section X of [FJ93] gives a statistical result showing that, given a fixed average queue size, n packet drops in the sample (including S_{i,n} packet drops from flow i), and flow i with a fraction p_i of the arriving bandwidth in packets per second, the probability that a flow receives more than c times its "share" p_i n of packet drops in that sample is as follows:

\[ \mathrm{Prob}(S_{i,n} \ge c\,p_i\,n) \;\le\; e^{-2 n (c-1)^2 p_i^2} . \]
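For instance, with illustrative values of our choosing (p_i = 0.1, c = 2, and n = 100 drops in the sample), the bound evaluates to

\[ \mathrm{Prob}(S_{i,n} \ge 20) \;\le\; e^{-2 \cdot 100 \cdot (2-1)^2 \cdot (0.1)^2} = e^{-2} \approx 0.135 , \]

and quadrupling the sample to n = 400 tightens the bound to e^{-8}, about 3.4 x 10^{-4}.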

This is illustrated quantitatively in [FJ93] for n = 100. For RED in byte mode, the same result applies, with p the probability that a fixed-size packet is dropped and p_i defined as flow i's fraction of the arriving bandwidth in bytes per second. The result above quantifies the statement that, with sufficiently many packet drops, a flow is unlikely to receive more than c times its "share" of packet drops. Thus, the RED packet drop history can be an effective aid in identifying high-bandwidth flows. From Appendix A, we get a more precise estimate of the probability that a flow receives more than c times its "share" p_i n of packet drops, for c > 1:

\[ \mathrm{Prob}(S_{i,n} \ge c\,p_i\,n) \;\le\; \left[ \left(\frac{1}{c}\right)^{c p_i} \left(\frac{1-p_i}{1-c\,p_i}\right)^{1-c\,p_i} \right]^{n} . \qquad (1) \]

Figure 1: Sample size as a function of the high-bandwidth flow's fraction of the arrival rate, for P = 0.01, c = 2.

[1] For RED in byte mode, arriving bytes are marked, and packets containing those bytes are dropped. We assume that the packet drop rate is sufficiently low, relative to the packet size, that in byte mode it is unlikely that two "bytes" in the same packet will be marked to be dropped. That is, we assume that when the packet drop rate is high, RED will no longer be probabilistically dropping packets as random packet drops.

Let P be the desired upper bound for Prob(S_{i,n} ≥ c p_i n). How large must n be (i.e., how many drops must be included in the drop history) in order to achieve the assurance that Prob(S_{i,n} ≥ c p_i n) ≤ P for a given c and p_i? Solving for n in equation (1):

\[ n \;\ge\; \frac{\ln P}{\ln\!\left[ \left(\frac{1}{c}\right)^{c p_i} \left(\frac{1-p_i}{1-c\,p_i}\right)^{1-c\,p_i} \right]} . \qquad (2) \]
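The following minimal Python sketch evaluates equation (2); the function name and the example values of p_i are ours, not the paper's:

import math

def min_sample_size(p_i, P=0.01, c=2.0):
    # Lower bound on the sample size n from equation (2): the smallest number of
    # drops for which a flow with fraction p_i of the arrivals has probability at
    # most P of receiving c or more times its share of the drops.
    # Requires 0 < p_i and c * p_i < 1.
    base = (1.0 / c) ** (c * p_i) * ((1.0 - p_i) / (1.0 - c * p_i)) ** (1.0 - c * p_i)
    return math.log(P) / math.log(base)

# Rough reading of Figure 1 (P = 0.01, c = 2): the required sample shrinks
# quickly as the flow's share p_i of the arrival rate grows.
for p_i in (0.01, 0.05, 0.10, 0.20):
    print(p_i, math.ceil(min_sample_size(p_i)))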

Figure 1 shows this equation for P = 0.01 and c = 2; the x-axis shows p_i, and the y-axis shows the lower bound for n needed to satisfy equation (2). For example, if we want to know how many total packet drops RED should include in its packet history to have probability at most 1% that a flow with a fraction p_i of the arriving bandwidth receives more than twice its share of packet drops, then we substitute P = 0.01 and c = 2 into equation (2). The approach in this paper uses equation (2) to determine how many drops n to examine to get a reasonable estimate of the bandwidth of the highest-bandwidth flow. At one-second intervals, starting from a minimum interval of three seconds, the router evaluates the sample size n (i.e., the cumulative number of packet drops in the packet drop history) and the fraction d_1 of the sample drops that belong to the flow with the highest number of drops. For given parameters P and c, the sample should be sufficiently large that there is probability at most P that the drop history overestimates the high-bandwidth flow's arrival rate by a factor of c or more. If n and p_1 satisfy the relationship in equation (2) for parameters P and c and for p_1 = d_1/c, then we can use this packet drop history to estimate the bandwidth of the high-bandwidth flow. If, on the other hand, n and p_1 do not satisfy the relationship in equation (2), then we continue to add to the packet drop history, and recheck one second later. This does not give a rigorous guarantee that the probability is at most P that the flow with the most drops in the sample received more than c times its share of packet drops. However, simulations later in the paper show that in this case the packet drop history gives a good estimate of the bandwidth of the high-bandwidth flow.
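A sketch of that per-interval test, under our reading of the text (the paper does not give pseudocode, and the function and variable names are illustrative):

import math

def sample_ready(n_drops, d_1, P=0.01, c=1.5):
    # d_1 is the fraction of the current sample's drops belonging to the flow
    # with the most drops; the sample is considered usable once n_drops
    # satisfies equation (2) with p_1 = d_1 / c.
    p_1 = d_1 / c
    if p_1 <= 0.0 or c * p_1 >= 1.0:
        return False  # outside the range where equations (1) and (2) apply
    base = (1.0 / c) ** (c * p_1) * ((1.0 - p_1) / (1.0 - c * p_1)) ** (1.0 - c * p_1)
    return n_drops >= math.log(P) / math.log(base)

# The router would call this once per second (after the three-second minimum
# interval); while it returns False, the drop history keeps accumulating.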

Figure 2: Simulation network. (Sources S1-S4 connect to routers R1 and R2 by 10 Mbps links with 2-5 ms delays; the congested link between R1 and R2 is 1.5 Mbps with a 20 ms delay.)

4 Packet, byte, and combined drop metrics

This section defines the packet, byte, and combined drop metrics, and uses simulations to show that the combined metric gives a good estimate of the arrival rate of the high-bandwidth flow. Definition: the packet drop metric. We define the packet drop metric for a flow over some time interval as the ratio of the number of packets dropped from that flow to the total number of packets dropped in that time interval. For RED in byte mode, the packet drop metric for the random packet drops estimates a flow's fraction of the aggregate arrival rate in bytes per second (Bps). For RED in packet mode, the packet drop metric for the random packet drops instead estimates a flow's fraction of the aggregate arrival rate in packets per second.

Figure 3 shows the results of a simple simulation for the topology in Figure 2. For all simulations in this paper, the RED queue management is configured with a minimum threshold of five packets, a maximum threshold of 20 packets, and a packet drop rate approaching 10% as the average queue size approaches the maximum threshold.[2] The buffer size in router R1 for the queue for the congested link R1-R2 is set to 100 packets; packets are rarely dropped due to buffer overflow. For these simulations we use probability P set to 0.01, and share c set to 1.5 for determining the size of the drop sample. Thus, each drop sample contains enough drops that the probability that the high-bandwidth flow receives more than 1.5 times its "share" of the packet drops is at most 1%. The simulation includes a range of two-way traffic, including bulk-data TCP and constant-bit-rate (CBR) UDP flows. The TCP connections have a range of start times, packet sizes (from 512 to 2000 bytes), receiver's advertised windows, and round-trip times. Of particular interest are the high-bandwidth flows. Flow 3 is a CBR flow with 190-byte packets and an arrival rate of 64 KBps, about one-third of the link bandwidth. Flow 4 is a TCP flow whose high bandwidth is due to its larger packet size of 2000 bytes; most of the TCP flows in the simulation use 512-byte packets. More details of the simulation scenario are available in the simulation scripts [FF98].

The upper left graph in Figure 3 shows the packet drop metric for the random packet drops in the simulation. For every drop sample, there is a mark in the graph for every flow experiencing at least one packet drop. For each flow i, the number i is plotted on the graph, with the x-axis giving i's fraction of the aggregate arrival rate in Bps over the reporting interval, and the y-axis giving i's fraction of the packet drops in that reporting interval. If each flow's packet drop metric for random packet drops were an exact indication of that flow's arrival rate in Bps, then all marks in the upper left graph would lie on the diagonal line. A mark in the upper left quadrant of the graph indicates a flow with a larger fraction of dropped packets than of arriving packets. As the graph shows, the packet drop metric for the random packet drops gives a reliable identification of the high-bandwidth flows.

[2] This is a change from the upper bound on the packet drop rate used for simulations in [FJ93]. This change is better suited for routers that typically have high levels of congestion.


Figure 3: Comparing drop metrics for forced and random packet drops. (Four panels: each flow's percent of drops under the packet and byte drop metrics versus its per-flow arrival rate, for random packet drops and for forced packet drops.)


Unlike random packet drops, with forced packet drops the RED algorithm does not get to "choose" whether or not to drop a packet. When the buffer is full, or when the average queue size exceeds the maximum threshold, RED drops all arriving packets until conditions change (until the buffer is no longer full, or the average queue size no longer exceeds the threshold). Thus, a flow with one large packet arriving during a forced-drop time interval will have its packet dropped, and a flow with several small packets arriving during this interval will have all of its small packets dropped. The lower left graph in Figure 3 shows the packet drop metric for forced packet drops. As Figure 3 shows, the packet drop metric with forced packet drops has a systematic bias, overestimating the arrival rate for flows with small packets such as Flow 3 and underestimating the arrival rate for flows with larger packets such as Flow 4.

Definition: the byte drop metric. The byte drop metric is defined as the ratio of the number of bytes dropped from a flow to the total number of bytes dropped. For forced packet drops, this metric gives the best estimate of a flow's arrival rate, as shown in the lower right graph of Figure 3. The upper right graph of Figure 3 shows that the byte drop metric is not adequate for random packet drops, because it overestimates the arrival rate of flows with larger packets and underestimates the arrival rate of flows with smaller packets.


Definition: the combined drop metric. By weighting a flow's byte and packet drop metrics by the fractions of forced and random packet drops, we can better estimate a flow's behavior than by using either metric alone. We define the combined drop metric for forced and random packet drops as follows:

\[ M_{\mathrm{Forced}} \cdot f_{\mathrm{Forced}} \;+\; M_{\mathrm{Random}} \cdot f_{\mathrm{Random}} \, , \]

where M_Forced is the flow's byte drop metric for the forced packet drops, M_Random is the flow's packet drop metric for the random packet drops, and f_Forced and f_Random are the fractions of the total packet drops from that sample that are forced and random, respectively. For the simulations in this section, the buffer is sufficiently large that packets are rarely dropped due to buffer overflow; the forced packet drops in these simulations result from the average queue size exceeding the upper threshold maxthresh. For a queue in units of packets, where the buffer is able to accommodate a fixed number of packets regardless of packet size, the appendix shows that packets dropped because of buffer overflow should be counted with the random packet drops, using the packet drop metric. In contrast, for a queue in units of bytes, packets dropped because of buffer overflow should be counted with the forced packet drops, using the byte drop metric. The appendix shows that routers with Drop-Tail queue management cannot use the packet drop history to reliably identify high-bandwidth flows.
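A minimal sketch of computing the combined drop metric from one reporting interval's drop records (the record format, names, and example values are illustrative, not taken from the paper or its simulation scripts):

from collections import defaultdict

def combined_drop_metric(drops):
    # drops: list of (flow_id, size_in_bytes, is_forced) records for one interval.
    # Returns flow_id -> M_forced * f_forced + M_random * f_random, where
    # M_forced is the byte drop metric over the forced drops and M_random is
    # the packet drop metric over the random drops.
    total = len(drops)
    forced = [(flow, size) for flow, size, is_forced in drops if is_forced]
    random_ = [(flow, size) for flow, size, is_forced in drops if not is_forced]
    f_forced, f_random = len(forced) / total, len(random_) / total

    m_forced = defaultdict(float)  # per-flow fraction of the forced bytes
    forced_bytes = sum(size for _, size in forced)
    for flow, size in forced:
        m_forced[flow] += size / forced_bytes

    m_random = defaultdict(float)  # per-flow fraction of the random drops
    for flow, _ in random_:
        m_random[flow] += 1.0 / len(random_)

    return {flow: m_forced[flow] * f_forced + m_random[flow] * f_random
            for flow in set(m_forced) | set(m_random)}

# Example: one 2000-byte forced drop from flow 4, three random drops from flow 3.
print(combined_drop_metric([(4, 2000, True), (3, 190, False),
                            (3, 190, False), (3, 190, False)]))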


Figure 4 shows the combined drop metric for each flow for the simulation in Figure 3, calculated each reporting interval. As Figure 4 shows, the combined drop metric is a reasonably accurate indicator of the arrival rate for high-bandwidth flows.

Figure 4: The combined drop metric for all packet drops, for simulation 1.

The graphs in Figure 5 show the percent of arriving packets dropped, the number of packet drops in a sample, and the length of a reporting interval in seconds.

Figure 5: The percent of packets dropped, number of drops in a sample (dotted line: number of forced drops), and length of each sample, for simulation 1.


Figure 6: Statistics for the high-bandwidth flow from each sample, for simulation 1.

Figure 7: Statistics for the high-bandwidth flow from each sample, for 100 runs of simulation 1.

The two graphs in Figure 6 show the statistics for the high-bandwidth flow for each sample in the simulation; in this scenario the high-bandwidth flow is usually the CBR UDP flow. The top graph shows the arrival rate of the high-bandwidth flow as a fraction of the overall arrival rate in Bps. The second graph shows, for the high-bandwidth flow in each sample, the ratio between that flow's combined metric and that flow's fraction of the arrival rate in bytes. If the combined metric were a perfect estimate of the flow's arrival rate in Bps, then this ratio would always be one. Figure 7 shows the density function for this ratio over 100 runs of the simulation, from more than 4,900 samples. This graph is consistent with the parameters that we used in the simulations for determining the sample size, which specify that the probability that a flow receives more than 1.5 times its "share" of the packet drops should be at most 0.01. Figure 7 shows that our algorithm for choosing the sample size is conservative; in 99% of these 4,900 samples, the high-bandwidth flow received at most 1.3 times its "share" of the packet drops.

Figures 8-10 show a simulation that differs from the simulation in Figure 4 only in that it has none of the UDP flows, and fewer TCP flows. As Figure 8 shows, the high-bandwidth flow still receives a significant fraction of the link bandwidth, but the percent of arriving packets dropped is very low throughout this simulation, and as a result the reporting intervals are up to 30 seconds long. Figure 10 shows that for this simulation scenario, the combined drop metric is a good estimate of the arrival rate of the high-bandwidth flow.

Figures 11-13 show a simulation that differs from Figure 4 in that the bandwidth of the congested link is 45 Mbps, and there are more active flows. The level of congestion varies during the simulation, increasing up to time 350, and then decreasing again. The bottom graph of Figure 13 shows that the combined drop metric generally underestimates the arrival rate of the high-bandwidth flow. The flows that are overrepresented in terms of packet drops are the flows with smaller packet sizes. This simulation has two-way traffic, with many flows with 512-byte packets, and many other flows with 4000-byte packets. The simulation shows that in byte mode, RED has a slight bias in favor of flows with larger packets, in that flows with larger packets are somewhat less likely to have packets dropped than flows with smaller packets but the same arrival rate in Bps.

Figure 8: The combined drop metric, percent of packets dropped, number of drops in a sample, and length of each sample, for simulation 2.

Figure 9: Statistics for the high-bandwidth flow from each sample, for simulation 2.

Figure 10: Statistics for the high-bandwidth flow from each sample, for 100 runs of simulation 2.

Figure 11: The combined drop metric, percent of packets dropped, number of drops in a sample, and length of each sample, for simulation 3.

Figure 12: Statistics for the high-bandwidth flow from each sample, for simulation 3.

Figure 13: Statistics for the high-bandwidth flow from each sample, for 100 runs of simulation 3.

5 Conclusions

The paper has presented a mechanism for estimating the arrival rate of high-bandwidth flows based on the RED packet drop history. This low-overhead mechanism would not be needed in a router with sufficient resources to measure directly the arrival rate of high-bandwidth flows. In an environment with Explicit Congestion Notification [Flo94], it would be straightforward to extend this mechanism to take into account packets with the Explicit Congestion Notification bit set in the packet header. Modifications would also be needed to deal with the realities of packet fragmentation.

6 Acknowledgments

This paper is part of a series of papers on router mechanisms in support of end-to-end congestion control. This work results in part from a long collaboration with Van Jacobson. It also results from a history of discussions in the Internet End-to-End Research Group and elsewhere. We are indebted to Van Jacobson and members of the Internet End-to-End Research Group for discussions on these matters. We are also indebted to Hari Balakrishnan, Greg Minshall, Lixia Zhang, and the anonymous reviewers from SIGCOMM 97 for feedback on an earlier draft of this paper, and to Jean Bolot, Bob Braden, Jamshid Mahdavi, Matt Mathis, and Scott Shenker for discussions of related matters.

References

[FF98] S. Floyd and K. Fall. "Promoting the Use of End-to-End Congestion Control in the Internet". Submitted to IEEE/ACM Transactions on Networking, Feb. 1998. URL http://www-nrg.ee.lbl.gov/floyd/end2end-paper.html.

[FJ93] S. Floyd and V. Jacobson. "Random Early Detection Gateways for Congestion Avoidance". IEEE/ACM Transactions on Networking, 1(4):397-413, Aug. 1993. URL http://www-nrg.ee.lbl.gov/nrg-papers.html.

[Flo94] S. Floyd. "TCP and Explicit Congestion Notification". ACM Computer Communication Review, 24(5):10-23, Oct. 1994.

[HR95] T. Hagerup and C. Rub. "A Guided Tour of Chernoff Bounds". Information Processing Letters, 33:305-308, 1995.

[Nai96] T. Nairne. "Identifying High-Bandwidth Users in RED Gateways". Technical report, UCLA Computer Science Department, Oct. 1996.

A Chernoff bounds

In Section 3, we needed an upper bound on the probability that a flow receives more than c times its expected number p_i n of packet drops, for c > 1. This appendix shows the calculation of this bound. Let S_{i,n} be the number of the n drops in a sample that are from flow i, and let p_i be flow i's fixed fraction of the arrival rate during that period. The expected value of S_{i,n} is p_i n. We require a bound for Prob(S_{i,n} ≥ c p_i n). Using Chernoff-type bounds [HR95], for 0 < u < 1 and u ≥ p:

\[ \mathrm{Prob}(S_{i,n} \ge u\,n) \;\le\; \left[ \left(\frac{p}{u}\right)^{u} \left(\frac{1-p}{1-u}\right)^{1-u} \right]^{n} . \qquad (3) \]

Letting p = p_i and letting u = c p_i in equation (3), for c ≥ 1, we get the following:

\[ \mathrm{Prob}(S_{i,n} \ge c\,p_i\,n) \;\le\; \left[ \left(\frac{1}{c}\right)^{c p_i} \left(\frac{1-p_i}{1-c\,p_i}\right)^{1-c\,p_i} \right]^{n} . \]

It is also possible to bound the probability that a flow receives less than its share of packet drops. From [HR95], the bound in equation (3) holds for 0 < u < 1 and u < p:

\[ \mathrm{Prob}(S_{i,n} \le u\,n) \;\le\; \left[ \left(\frac{p}{u}\right)^{u} \left(\frac{1-p}{1-u}\right)^{1-u} \right]^{n} . \]
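As a sanity check on these bounds, the following sketch compares the Chernoff-type bound of equation (3) with the exact binomial upper tail (the values are illustrative choices of ours):

import math

def chernoff_upper(n, p, u):
    # Upper bound from equation (3) on Prob(S >= u*n) for S ~ Binomial(n, p),
    # valid for p <= u < 1.
    return ((p / u) ** u * ((1.0 - p) / (1.0 - u)) ** (1.0 - u)) ** n

def exact_upper_tail(n, p, k):
    # Exact Prob(S >= k) for S ~ Binomial(n, p).
    return sum(math.comb(n, j) * p ** j * (1.0 - p) ** (n - j)
               for j in range(k, n + 1))

n, p_i, c = 100, 0.1, 2.0
print(chernoff_upper(n, p_i, c * p_i))               # bound on Prob(S >= 20)
print(exact_upper_tail(n, p_i, round(c * p_i * n)))  # exact tail, never above the bound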

B Identifying high-bandwidth flows for queues with drop-tail queue management


Figure 14 shows the results from a simulation that differs from the simulation in Figure 3 largely in that the router uses Drop-Tail rather than RED queue management. With Drop-Tail queue management, the router only drops arriving packets when the buffer overflows. For the simulation in Figure 14, the buffer is measured in packets, with a buffer size of 25 packets. That is, the buffer can store exactly 25 packets, regardless of the size of each packet in bytes. Because this simulation was done from an older script with an older version of the simulator, it also differs from the simulation in Figure 3 in that the traffic is slightly different, and the drop samples are taken after every 100 packet drops. The graphs in Figure 14 compare the packet and the byte drop metrics. Figure 14 shows that for a Drop-Tail queue measured in packets, the byte drop metric is better than the packet drop metric in indicating a flow's arrival rate in Bps. This is because a Drop-Tail queue measured in packets drops proportionately more packets from small-packet flows than from large-packet flows with the same arrival rate in Bps.



Figure 14: Comparing drop metrics for packet drops for a Drop-Tail queue measured in packets.


Figure 15: Comparing drop metrics for packet drops for a Drop-Tail queue measured in bytes.

However, Figure 14 also shows that for a Drop-Tail queue measured in packets, neither drop metric gives a very reliable indication of a flow's arrival rate in bytes per second. Figure 15 shows that when the Drop-Tail buffer is measured in bytes rather than packets, both drop metrics give a plausible indication of a flow's arrival rate in Bps, with the packet drop metric doing better than the byte drop metric. These graphs show a simulation where the Drop-Tail buffer is measured in bytes, with a buffer size of 12.5 KB. In this case, an almost-full buffer might have room for a small packet but not for a larger one.

We note that with very heavy congestion and unresponsive flows, even a RED queue will no longer be dropping all packets probabilistically, but will be forced to drop many arriving packets, either because of buffer overflow (for a queue with a small buffer relative to the maximum threshold for the average queue size), or because the average queue size is too high (for queues with larger buffers). As the packet drop rate increases, the computational overhead of monitoring dropped packets approaches the computational overhead of monitoring packet arrivals directly. Our hope is that the deployment of mechanisms for the identification and regulation of high-bandwidth unresponsive flows, coupled with sensible network provisioning, will in many cases be sufficient to prevent these high packet drop rates.
