TCP/IP OVER ATM - PERFORMANCE EVALUATION AND OPTIMISATION
José Ruela, Nelson Silva
INESC Porto, Largo Mompilher 22, Porto, Portugal ([email protected], [email protected])

ATM Technology Users Symposium, Košice, Slovak Republic, 17-19 February 1999

Abstract. The integration of TCP/IP and ATM is a challenging architectural issue in today’s global networks, where the unique properties of ATM technology can be combined with the mature and well-proven TCP/IP protocol suite. A rigorous characterisation of the performance of TCP/IP over ATM can be a powerful tool for the efficient use of both technologies. TCP error and flow control mechanisms were designed for best-effort networks and may therefore cause performance degradation in ATM networks if incorrectly used. On the other hand, standard overlay techniques, such as LAN Emulation and Classical IP over ATM (CLIP), hide Quality of Service (QoS) from applications. This paper presents a detailed and systematic analysis and evaluation of TCP/IP over ATM, identifies the factors that affect performance, derives rules for the tuning of protocol and network parameters and proposes an extension of CLIP that provides QoS to TCP applications. The main conclusion is that, even for real time applications, TCP/IP is still a good choice, provided that critical parameters are correctly tuned and there is a means to deliver QoS to TCP applications. Keywords. ATM, TCP/IP, CLIP, Quality of Service, Protocol.

INTRODUCTION Asynchronous Transfer Mode (ATM) [1] is a fast packet switching technology designed to support a wide variety of services. In ATM, data is carried in small fixed size packets (cells) over virtual channel connections, either permanent (PVC) or switched (SVC). ATM provides bandwidth on demand and offers a variety of service categories [2], with distinct Quality of Service (QoS) guarantees that can be negotiated per connection. On the other hand, TCP was designed and optimised for best-effort networks and is typically used with IP over different subnetwork technologies. Classical IP over ATM (CLIP) [3] specifies an overlay solution that treats ATM as any other subnetwork technology. With CLIP, legacy applications that run over TCP/IP can be reused without modifications, thus not profiting from ATM QoS. Although in the future QoS will be available to applications that run natively over ATM (or in IP networks extended with guaranteed services), it may be useful in the short term to provide QoS to applications with minor modifications to current protocol implementations. This paper addresses two related topics: performance evaluation of TCP/IP over ATM and a solution to the QoS problem. In order to identify the various factors that may affect TCP/IP performance, a systematic approach was adopted. In the first place, theoretical performance limits at various protocol levels were calculated. Then experimental tests were run, varying traffic conditions, protocol and network parameters, and emulating different types of services. This allowed a complete characterisation of TCP/IP performance and the derivation of rules for its optimisation.

Related work may be found in [4], [5], [6], [7]. To overcome the limitations of CLIP, a new solution that allows delivering QoS to TCP applications was specified, implemented and evaluated. The tests described in this paper were performed in an ATM testbed composed of a FORE ASX-200WG ATM switch and four hosts – two Pentium PCs at 133 MHz and two Pentium II PCs at 233 MHz, running Linux 2.0.27. All hosts were equipped with Fore PCA200E ATM adapter cards and a Linux device driver developed according to [8].

ANALYSIS OF PROTOCOL OVERHEAD To fully evaluate the performance of TCP/IP over ATM it is useful to start by quantifying the overhead at each protocol layer and then derive the corresponding throughput limits that will be used as a reference for the experimental tests. The overheads are mainly due to the lower protocol layers and are easily calculated considering the format of Protocol Data Units (PDUs) at each layer. The protocol stack and relevant interfaces are shown in Fig. 1.

Fig. 1. Protocol Stack (AAL5, ATM socket and TCP/IP test software interfaces over TCP/IP, EPFL ATM for Linux, the ATM Adaptation Layer (AAL5), the ATM Layer and the Physical Layer).

ATM stack A 155.52 Mbit/s SONET STS-3c/OC-3c Physical Layer interface has been used in the testbed. In SONET a frame is sent every 125 µs, for a total of 8000 frames per second. The OC-3c frame has a total of 2430 bytes, of which 90 bytes are overhead. The ratio between the payload and the frame size is 2340/2430 = 0.963 and thus the maximum bandwidth available to the ATM layer is 149.760 Mbit/s. Each ATM cell has 5 bytes of overhead in a total cell size of 53 bytes. Therefore, the maximum bandwidth available to the ATM Adaptation Layer (AAL) is further reduced to 135.632 Mbit/s. AAL5 has been adopted in the testbed since it is the industry preferred solution for data communications. The AAL5 PDU payload can be up to 65535 bytes long and is followed by a PAD field (0 to 47 bytes) and an 8-byte trailer; the PAD field is included so that the total frame is a multiple of 48 bytes. The maximum theoretical throughput therefore depends on the payload size and, due to the PAD field, exhibits a ripple effect, as seen in Fig. 2.

Fig. 2. Maximum theoretical throughputs vs. user data size (Physical 155.520 Mbit/s, SONET/SDH 149.760 Mbit/s, ATM 135.632 Mbit/s and AAL5).

For payloads greater than 1000 bytes, the target throughput lies between 128.4 and 135.6 Mbit/s. These values will be used to determine whether the experimental results are influenced by hardware, software or network limitations.

TCP/IP In TCP/IP, data segments need not all be the same size; however, both hosts must agree on a maximum segment size. IP defines a default Maximum Transmission Unit (MTU) for each network: 1500 bytes for Ethernet and 9180 bytes for CLIP. The default socket buffer size also depends on the network: 8192 bytes on Ethernet and 65536 bytes on ATM. These parameters are very important, since assumptions made for Ethernet are no longer valid in ATM networks. In fact, the relation between the MTU and the socket buffer size is critical: the window size depends on the buffer available at the receiver and the throughput is related to the ratio of the window and packet sizes. If these values are close, deadlocks or highly degraded performance may occur. Therefore, a socket buffer size adequate for Ethernet is not suitable on ATM, where a larger MTU is used. The TCP and IP headers are each 20 bytes long. CLIP uses IEEE 802.2 LLC/SNAP encapsulation, which introduces 8 bytes of overhead. The PDUs of each protocol layer are shown in Fig. 3.

Fig. 3. TCP, IP and AAL5 PDUs (TCP packet: 20-byte header and a payload of up to MTU - 40 bytes; IP packet: 20-byte header and a payload of MTU - 20 bytes; AAL5 frame: 8-byte LLC/SNAP header, MTU-byte payload, 0-47 byte PAD and 8-byte trailer; AAL5 payload 1-65535 bytes).

The maximum bandwidth available to TCP applications can be easily calculated and depends on the MTU size, as shown in Tab. 1. For a given MTU (IP packet), the table shows the size of the TCP payload and the lengths of the TCP packet, IP packet and AAL5 frame (the number of padding bytes is the same in all cases). The last column shows the maximum rate available to TCP applications, which should be compared with the bandwidth left to the AAL (135.632 Mbit/s).

MTU (bytes)  TCP payload  TCP packet  IP packet  AAL5 frame  AAL5 PAD  Rate (Mbit/s)
1500         1460         1480        1500       1536        20        128.921
3036         2996         3016        3036       3072        20        132.277
4572         4532         4552        4572       4608        20        133.395
6108         6068         6088        6108       6144        20        133.954
7644         7604         7624        7644       7680        20        134.290
9180         9140         9160        9180       9216        20        134.514

Tab. 1. Maximum rate available to TCP connections.
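As an illustration of how the figures in Tab. 1 are obtained, the short C sketch below computes the maximum TCP-level rate for a given MTU, assuming the framing described above (20-byte TCP and IP headers, 8-byte LLC/SNAP header, 8-byte AAL5 trailer, padding to a 48-byte multiple, 53-byte cells and 149.760 Mbit/s available to the ATM layer). The function and constant names are ours, introduced only for this example.

#include <stdio.h>

/* Maximum TCP user rate over CLIP/AAL5/ATM/SONET for a given MTU. */
static double max_tcp_rate(int mtu)
{
    const double atm_layer_rate = 149.760e6;   /* bit/s left by SONET to the ATM layer */
    int tcp_payload = mtu - 40;                /* user data per segment (TCP+IP headers) */
    int aal5_frame  = 8 + mtu + 8;             /* LLC/SNAP + IP packet + AAL5 trailer    */
    int cells       = (aal5_frame + 47) / 48;  /* pad the frame to a 48-byte multiple    */
    double cell_bits = cells * 53 * 8.0;       /* bits actually sent on the wire         */
    return atm_layer_rate * (tcp_payload * 8.0) / cell_bits;
}

int main(void)
{
    int mtus[] = { 1500, 3036, 4572, 6108, 7644, 9180 };
    for (int i = 0; i < 6; i++)
        printf("MTU %5d -> %.3f Mbit/s\n", mtus[i], max_tcp_rate(mtus[i]) / 1e6);
    return 0;
}

Running this reproduces the last column of Tab. 1 (128.921 Mbit/s for an MTU of 1500 bytes up to 134.514 Mbit/s for 9180 bytes).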

ATM PERFORMANCE TESTS The tests described in this section are divided into two groups, which correspond to the AAL5 and ATM socket interfaces.

AAL5 In these tests a raw AAL5 test interface in the Linux ATM device driver was used. This makes it possible to obtain the maximum sending and receiving rates and to identify possible bottlenecks in the hosts. Two Pentium PCs at 133 MHz were used and the network was loaded only with the flow under test.

Sender throughput. The test software generates sequences of PDUs (buffers) and sends them to the network. The buffers of each sequence are the same size, but different sizes are used to study the influence of the buffer size on the rate. Buffer sizes between 128 and 63488 bytes were used and the results are plotted in Fig. 4 and compared with the theoretical limit previously calculated.

Fig. 4. AAL5 throughput vs. PDU size (measured AAL5 throughput compared with the theoretical limit, for PDU sizes up to 64000 bytes).

The ratio of the measured to the maximum theoretical rate is shown in Tab. 2 for various PDU sizes; steady performance degradation is noticeable between 20 and 32 Kbytes. The sending rate is close to the theoretical maximum and the degradation is due to CPU speed and/or the ATM card.

Range (bytes)    Sender Rate (%)
1280 - 18176     98%
18432 - 19200    97%
19456 - 32256    97% - 91%
32512 - 37376    90%
37632 - 42496    89%
43008 - 44800    88%

Tab. 2. AAL5 relative sending rate.

Receiver rate. A second test was performed to evaluate the maximum data rate that can be sustained by the receiver without losses, thus showing the influence of CPU processing on the performance of the receiver. Data must be sent over the network at different rates, but since no flow control was possible at this level, the variation of the buffer size was used to modify the sender data rate. The processing of the received cells by the ATM board and of the received PDUs (buffers) by the driver, before delivery to the test software, may cause losses. Since the receiver needs more processing than the sender, the data rate at which losses begin to occur is lower than the maximum rate achieved by the sender. Losses occurred at about 127 Mbit/s, which is therefore the rate that can be successfully used both on transmission and reception at the AAL5 Linux device driver interface.

ATM socket interface To evaluate the performance of the ATM socket interface, CBR (Constant Bit Rate) and UBR (Unspecified Bit Rate) traffic was generated. In Fore Systems cards, a rate control mechanism may be applied to CBR traffic. The goal of this set of tests is to evaluate the maximum throughput at the ATM socket interface and its dependence on the sizes of the application buffers, socket buffers and MTU. For CBR traffic a PCR (Peak Cell Rate) was specified for the test. Tests with multiple virtual channel connections (PVCs) were performed as well. The test software generates sequences of PDUs (buffers) and sends them to the network, as in the AAL5 test. A new test program was written, which allows the selection of an ATM connection, application and socket buffer sizes, the number of buffers to send, the PCR value and the traffic mode (CBR or UBR, with or without traffic shaping by the application). The network was loaded only with the traffic under test.

One PVC without rate control. This test allows a simple comparison with the AAL5 test previously described. The same application buffer sizes were used and both results are plotted in Fig. 5; the socket size has the default value (65536 bytes).

Fig. 5. PVC throughput vs. buffer size (ATM socket throughput compared with the AAL5 results and the theoretical limit, as a function of the data/socket buffer size).

The ATM socket interface introduces a small overhead when compared with raw AAL5. For small buffers this can be up to about 9%, but for buffer sizes over 8 Kbytes the overhead is below 2%.

One PVC with rate control. An ATM network is expected to maintain several simultaneous connections, possibly with different categories of service and QoS parameters. In particular, it is necessary to guarantee the PCR to CBR traffic. From the hosts' point of view, the ideal solution would be to rely on the adapter card to provide the traffic shaping of CBR traffic, thus spacing and interleaving cells of the various multiplexed CBR connections. UBR traffic would be scheduled for transmission only when resources were not used by CBR traffic. The built-in rate control mechanism has been successfully tested to control a single PVC.

Multiple PVCs without rate control. Several PVCs were established without specifying the rate control option, thus requesting a UBR service. The results of these tests confirm that the FORE board and the driver make use of the total bandwidth. Moreover, the sharing of bandwidth by the PVCs is fair, as shown in Tab. 3.

Nº PVC   Average    Max        Min        Total      Std Dev
1        132.533    -          -          132.533    -
2        66.523     66.613     66.433     133.046    0.127
3        44.435     44.525     44.358     133.306    0.084
4        33.385     33.446     33.319     133.540    0.058
5        26.702     26.763     26.663     133.508    0.041
6        22.288     22.390     22.257     133.730    0.050
7        19.084     19.129     19.056     133.585    0.026
8        16.708     16.789     16.640     133.665    0.041
9        14.828     14.839     14.812     133.453    0.007
10       13.359     13.409     13.338     133.588    0.020
20       6.691      6.747      6.673      133.827    0.026

Tab. 3. Sharing of bandwidth among PVCs (rates in Mbit/s).

Multiple PVCs with rate control. To test the rate control mechanism with multiple PVCs, each one requests a CBR service and specifies a PCR, while the application tries to send data at the maximum rate. Under these conditions, the rate control on the FORE board was unable to carry more than two connections efficiently, due to a queue management problem on the board. However, if some form of elementary shaping is performed by the application (to emulate a CBR source) and rate control is disabled, the results are close to the target. Although it does not dispense with rate control and shaping at the ATM layer, shaping by the application may be useful to control the overall performance of the system. To overcome the limitations of application level shaping, a new software module has been added to the driver. This module emulates CBR services, controlling the burstiness of AAL5 frames and managing the bandwidth allocated to CBR and UBR traffic. The results proved the feasibility of the solution but are not discussed in this paper.
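For readers unfamiliar with the ATM socket interface used in these tests, the fragment below sketches how a CBR (or UBR) PVC might be opened and written to with the PVC socket API of ATM on Linux [8] (PF_ATMPVC sockets, an atm_qos structure selected with setsockopt, and a sockaddr_atmpvc address). It is a minimal illustration, not the authors' test program; the interface/VPI/VCI numbers, the PCR and SDU values and the function name are example assumptions and should be checked against the installed version of the API.

#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/atm.h>   /* PF_ATMPVC, struct sockaddr_atmpvc, struct atm_qos */

/* Open a PVC on interface 0, VPI 0, VCI 100 (example values) and send
 * one buffer as a single AAL5 PDU.  Traffic class and PCR are illustrative. */
int send_on_pvc(const char *buf, size_t len)
{
    struct sockaddr_atmpvc addr;
    struct atm_qos qos;
    ssize_t sent;
    int fd = socket(PF_ATMPVC, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    memset(&qos, 0, sizeof(qos));
    qos.aal = ATM_AAL5;
    qos.txtp.traffic_class = ATM_CBR;   /* or ATM_UBR for best effort          */
    qos.txtp.max_pcr = 100000;          /* peak cell rate in cells/s (example) */
    qos.txtp.max_sdu = 65535;           /* largest AAL5 PDU we intend to send  */
    qos.rxtp = qos.txtp;                /* symmetric connection                */
    if (setsockopt(fd, SOL_ATM, SO_ATMQOS, &qos, sizeof(qos)) < 0) {
        close(fd);
        return -1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sap_family = AF_ATMPVC;
    addr.sap_addr.itf = 0;              /* ATM interface number */
    addr.sap_addr.vpi = 0;
    addr.sap_addr.vci = 100;
    if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    sent = write(fd, buf, len);         /* one AAL5 PDU per write() */
    close(fd);
    return sent < 0 ? -1 : 0;
}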

TCP/IP PERFORMANCE TESTS All TCP/IP tests were performed using an application, tcperf, developed especially for this purpose. To evaluate the performance of TCP/IP over ATM, SVCs were established (by means of signalling) to carry TCP/IP traffic. Each host on the testbed had its own IP address on the ATM network. The influence of the various parameters on the throughput is analysed separately.

Influence of socket buffer size The size of the application buffers was set to 65536 bytes, while the socket buffer size was modified in the sending and receiving hosts. A maximum rate of about 70 Mbit/s was achieved, as shown in Fig. 6, but throughputs of only a few hundred kbit/s were observed as well.

Fig. 6. Throughput vs. socket buffer size (transmission and reception socket sizes between 16384 and 65536 bytes).

The major factors that contribute to this degradation are large segment sizes, unequal sizes of the socket buffers in the sender and the receiver, the use of Nagle's algorithm and the delayed acknowledgement strategy, as discussed in [4]. Since the default socket size in ATM is 65536 bytes, degradation will only occur if an application selects a small value compared with the default one.

To cover a wider range of socket buffer sizes, the tests were repeated for socket buffer sizes ranging from 65536 to 131072 bytes. The use of these values requires configuring ATM on Linux to use the TCP scaled windows mechanism. The socket buffers on the transmission and reception sides were varied independently. The application buffers were set to 65536 bytes and the MTU size to 9180 bytes.

Fig. 7. Throughput vs. socket buffer size with scaled windows (transmission and reception socket sizes between 32768 and 131071 bytes).

The throughput gain with the larger socket buffers, and the major impact of the reception socket size, are evident. To obtain more detailed information, new tests were run. The application buffers were set to 65536 bytes and the socket buffer size was varied in steps of 1024 bytes, with the send and receive socket buffers always of the same size. The results are shown in Fig. 8. A maximum rate of 101.8 Mbit/s, which is about 76% of the theoretical maximum (134.5 Mbit/s), was obtained.

Fig. 8. Throughput vs. socket buffer size (average, minimum and maximum throughput for socket sizes between 65536 and 131072 bytes).
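The buffer settings that drive these results are the ones an application selects through setsockopt before connecting. The sketch below shows one plausible way of doing so on the sender or receiver side; the 128 Kbyte value is only an example, and, as noted above, buffers above 64 Kbytes are only effective when the TCP scaled windows mechanism is enabled on both hosts.

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Create a TCP socket with explicit send/receive buffer sizes.
 * The options must be set before connect()/listen() so that the
 * window advertised at connection set-up reflects the request. */
int tcp_socket_with_buffers(int snd_bytes, int rcv_bytes)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &snd_bytes, sizeof(snd_bytes)) < 0 ||
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv_bytes, sizeof(rcv_bytes)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Example: a socket tuned for CLIP (MTU 9180) rather than Ethernet defaults. */
int main(void)
{
    int fd = tcp_socket_with_buffers(128 * 1024, 128 * 1024);
    if (fd < 0) {
        perror("tcp_socket_with_buffers");
        return 1;
    }
    printf("socket %d created with 128 Kbyte buffers\n", fd);
    close(fd);
    return 0;
}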

Influence of application buffer size Although only a minor influence of the application buffer size was expected, this had to be verified. Using a socket buffer size of 131071 bytes and the default MTU size, tests were run with application buffer sizes ranging from 8192 to 65536 bytes. The average, maximum and minimum values shown in Fig. 9 confirm the initial expectations.

Fig. 9. Throughput vs. application buffer size (average, minimum and maximum throughput for application buffer sizes between 8192 and 65536 bytes).

Multiple TCP connections By setting up multiple TCP connections, it was intended to identify the causes of possible bottlenecks, which could be due either to protocol mechanisms, bad parameter tuning or the processing capacity of the hosts. The MTU, application and socket buffer sizes were set to 9180, 65535 and 131071 bytes, respectively. The results are shown in Tab. 4.

Number of    Hosts       Rate (Mbit/s)
connections  TX   RX     Rate 1   Rate 2   Rate 3   Total
2            1    2      63.49    62.38    -        125.87
2            2    1      54.08    54.32    -        108.40
2            1    1      59.05    53.76    -        112.81
3            1    1      39.67    37.03    36.33    113.03

Tab. 4. Throughput with multiple TCP connections.

As expected, transmission is more efficient than reception. When one host sends data to two different hosts (thus splitting the receiver overhead), the total average rate is 125.87 Mbit/s, which is near the target TCP rate (94% efficiency). With a single connection there is some CPU idle time due to protocol behaviour; with two connections this idle time is reduced, since when one process is idle the other may be executing the protocol for its associated connection. In the opposite case (two hosts sending to a single host), the overall rate is reduced to 108.40 Mbit/s, due to the extra CPU processing required by the receiver, which becomes a possible bottleneck. When only a pair of machines is used, with two or three connections established between them, a total rate of about 113 Mbit/s is achieved. Once again, this confirms the major influence of the receiver process on the overall rate.

Influence of the MTU Some problems were observed when using the default value of the MTU in ATM networks (9180 bytes) with socket buffers smaller than 16 Kbytes. To avoid this problem, the application must define socket sizes larger than this value, preferably at least 64 Kbytes. Applications developed for Ethernet use smaller socket buffers, and therefore a problem will arise if such values are used over ATM. The alternative in this case is to reduce the MTU size, by system configuration, although this increases the overhead due to protocol headers. To assess the influence of the MTU size and compare it with previous results, tests were run with MTU sizes between 1500 and 9180 bytes, as shown in Tab. 5 and Fig. 10.

             MTU size (bytes)
Socket size  1500     3036     4572     6108     7644     9180
8192         35.23    37.08    0.11     0.15     0.28     0.81
65536        77.98    80.43    75.65    69.41    71.30    72.90
131071       80.04    94.84    91.71    89.80    88.05    92.03
(sizes in bytes and throughput in Mbit/s)

Tab. 5. Throughput vs. MTU and socket size.

Fig. 10. Throughput vs. MTU and socket size.

Using a larger socket size increases the rate of TCP/IP connections more than reducing the MTU size does. However, the latter avoids deadlocks when applications are configured to use small socket buffers.
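For completeness, the kind of measurement behind Tabs. 4 and 5 can be reproduced with a very simple timed send loop. The sketch below is not the tcperf tool used in the paper, only a hypothetical illustration of the sender side of such a test; the peer address, port, buffer size and byte count are arbitrary example values.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/time.h>

/* Send up to 'total' bytes in 'bufsize'-byte application buffers to host:port
 * and return the achieved TCP-level throughput in Mbit/s (or -1 on error). */
static double tcp_send_test(const char *host, int port, size_t bufsize, size_t total)
{
    struct sockaddr_in peer;
    struct timeval t0, t1;
    size_t sent = 0;
    double secs;
    char *buf;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1.0;

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(port);
    inet_pton(AF_INET, host, &peer.sin_addr);
    if (connect(fd, (struct sockaddr *) &peer, sizeof(peer)) < 0) {
        close(fd);
        return -1.0;
    }

    buf = calloc(1, bufsize);              /* application buffer of the chosen size */
    gettimeofday(&t0, NULL);
    while (sent < total) {
        ssize_t n = write(fd, buf, bufsize);
        if (n <= 0)
            break;                         /* stop on error or connection close */
        sent += (size_t) n;
    }
    gettimeofday(&t1, NULL);
    close(fd);
    free(buf);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    return (sent * 8.0) / secs / 1e6;      /* Mbit/s of application data sent */
}

int main(void)
{
    /* Example values only: a hypothetical receiver at 192.168.1.2:5001,
     * 64 Kbyte application buffers, 256 Mbytes of data. */
    double rate = tcp_send_test("192.168.1.2", 5001, 65536, 256UL * 1024 * 1024);
    printf("throughput: %.2f Mbit/s\n", rate);
    return rate > 0 ? 0 : 1;
}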

QUALITY OF SERVICE To provide QoS to a TCP application, an ATM connection must not be shared by multiple TCP connections established between a pair of hosts; moreover, a mechanism is needed that allows applications to specify the QoS of the connection. AREQUIPA [9] has been proposed to meet this goal. It allows TCP applications to use dedicated ATM connections when needed and to negotiate the QoS. However, both hosts must implement AREQUIPA and applications have to be modified, since they have to call new primitives. In heterogeneous environments where different types of hosts and operating systems coexist it is not possible to use AREQUIPA, since it is only available for Linux. To overcome these limitations a new solution was investigated. It addresses two problems: the management of multiple ATM connections between two TCP/IP hosts and the QoS negotiation at the TCP level.

CLIP uses the ATMARP protocol to resolve IP and ATM addresses and specifies that a single SVC should be established between two hosts; thus TCP connections are multiplexed on a single ATM connection. If for some reason several ATM connections exist between two hosts, a host must accept the packets received on all of them. When a data packet is sent to the network, CLIP chooses the ATM connection to use based on an ATMARP table that maps IP addresses to ATM connections. The first TCP connection to be established between two hosts requires setting up an ATM connection, which is registered in the ATMARP table. All subsequent TCP connections between the same pair of hosts will use this ATM connection after a lookup of the ATMARP table. To offer QoS guarantees to TCP users, an ATM connection must carry a single TCP stream. One way of achieving this is to associate an ATM connection with a TCP connection using not only the IP addresses (as the original ATMARP protocol does) but the IP addresses and both TCP port identifiers. This solution was implemented by extending the ATMARP tables and introducing minor modifications to the ATMARP protocol.

If this solution is implemented in one of the hosts and the other is a standard CLIP host, it is possible to split the traffic in one direction (from the modified host), while the standard host processes the packets arriving on all ATM connections. However, all the traffic in the opposite direction (including acknowledgements) will be carried in a single ATM connection.

Another issue is the mechanism that allows an application to specify the QoS parameters for the connection. When an application calls the connect primitive on the socket interface, this causes CLIP with the proposed extensions to establish a new ATM connection; at this point the QoS parameters must be specified. The connect primitive takes three parameters: the socket identifier, a pointer to an address structure that defines the socket to connect to, and the length of that structure. This means that the address structure may have a variable size. The address structure contains the socket family and the socket address specific to the protocol; for CLIP this structure has 16 bytes, and many applications define it with a fixed size. The proposal is to extend this structure by appending a structure that defines the QoS of the connection, thus avoiding the need to modify the TCP or IP implementations. The modified version of CLIP must collect the extra structure and use it to establish the ATM connection. This structure can be identical to the one used by the ATM socket interface, called atm_qos in Linux-ATM, which defines the transmission and reception parameters relevant for each traffic class (maximum and minimum PCR, maximum CDV and maximum SDU size).

The proposed solution has been implemented in Linux 2.0.27. Tests were performed between a modified (Linux) host and a standard (Windows) host, in order to confirm the feasibility of the solution. It was possible to specify the QoS for each connection initiated in the modified host and to use it for transmission. It was verified that the data flow in the opposite direction always used the last ATM connection established and that the modified host was able to process all the packets received on this connection.
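As an illustration of the proposed interface, the fragment below shows how an application might hand the extra QoS structure to the modified CLIP simply by enlarging the address buffer passed to connect. This is a sketch of the idea only: the wrapper structure, the function name and the atm_qos values are hypothetical, and unmodified applications keep calling connect with a plain 16-byte sockaddr_in as before.

#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/atm.h>        /* struct atm_qos, ATM_CBR, ATM_AAL5 */

/* Hypothetical extended address block: the usual 16-byte sockaddr_in
 * followed by the atm_qos structure of the ATM socket interface.
 * The modified CLIP reads the trailing atm_qos when the address length
 * passed to connect() is larger than sizeof(struct sockaddr_in). */
struct sockaddr_in_qos {
    struct sockaddr_in sin;   /* standard INET address (16 bytes) */
    struct atm_qos     qos;   /* requested QoS for the ATM SVC    */
};

int connect_with_qos(int fd, const char *ip, int port, int pcr_cells_per_s)
{
    struct sockaddr_in_qos a;
    memset(&a, 0, sizeof(a));

    a.sin.sin_family = AF_INET;
    a.sin.sin_port = htons(port);
    inet_pton(AF_INET, ip, &a.sin.sin_addr);

    a.qos.aal = ATM_AAL5;
    a.qos.txtp.traffic_class = ATM_CBR;    /* guaranteed rate for this TCP stream */
    a.qos.txtp.max_pcr = pcr_cells_per_s;  /* peak cell rate (example parameter)  */
    a.qos.rxtp = a.qos.txtp;

    /* The larger address length signals the extension to the modified CLIP. */
    return connect(fd, (struct sockaddr *) &a, sizeof(a));
}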

CONCLUSIONS The results presented and discussed in this paper show that it is possible to obtain very high throughputs when running TCP/IP over ATM, with a proper tuning of critical protocol and network parameters. However, the selection of the ATM category of service required by an application may have a strong impact on performance. For traditional bursty data traffic, a best-effort service, like UBR or ABR (Available Bit Rate), can be selected. Since congestion is not prevented and cell (packet) losses may occur, TCP error and flow/congestion control mechanisms are still needed and intelligent cell discard mechanisms [5] may be required.

For real time applications (such as video streaming), a service that offers guaranteed QoS, like CBR or rt-VBR (real time Variable Bit Rate), is recommended. Conventional slow start and congestion avoidance mechanisms would be harmful in this case. However, since the probability of congestion can be kept very low, the fast retransmit and fast recovery mechanisms supported in recent versions of TCP avoid performance degradation even if occasional losses occur. Moreover, the TCP window flow control may be useful on top of ATM traffic control.

REFERENCES
[1] J.-Y. Le Boudec, "The Asynchronous Transfer Mode: A Tutorial", Computer Networks and ISDN Systems, vol. 24 (4), May 1992, pp. 279-309.
[2] Mark Garrett, "A Service Architecture for ATM: From Application to Scheduling", IEEE Network, May/June 1996, pp. 6-14.
[3] Mark Laubach, "Classical IP and ARP over ATM", RFC 1577, January 1994.
[4] Douglas E. Comer and John C. Lin, "TCP Buffering and Performance over an ATM Network", Purdue Technical Report CSD-TR 94-026, March 16, 1994.
[5] Allyn Romanow and Sally Floyd, "Dynamics of TCP Traffic over ATM Networks", IEEE JSAC, vol. 13, no. 4, May 1995, pp. 633-641.
[6] B. J. Ewy et al., "TCP/ATM Experiences in the MAGIC Testbed", Proc. 4th IEEE Int. Symp. on High Performance Distributed Computing, Aug. 1995, pp. 87-93.
[7] Brian L. Tierney et al., "Performance Analysis in High Speed Wide Area IP-over-ATM Networks: Top-to-Bottom End-to-End Monitoring", IEEE Network, May/June 1996, pp. 26-39.
[8] Werner Almesberger, "Linux ATM device driver interface", Draft Version 0.1, February 1996.
[9] W. Almesberger et al., "Application REQuested IP over ATM (AREQUIPA)", RFC 2170, July 1997.

BIOGRAPHIES José Ruela received a Ph.D. degree from the University of Sussex, UK, in 1982. He is currently an Associate Professor at the University of Porto, where he gives courses in Computer Networks. He is leader of the Communications and Multimedia group at INESC Porto. His main research interests are resource management and performance evaluation in broadband networks. Nelson Silva received the M.Sc. degree from the University of Porto in 1998. He is currently a researcher at INESC Porto, in the Communications and Multimedia group.