Validation of Router Models in OPNET

B. Van den Broeck, P. Leys, J. Potemans¹, J. Theunis, E. Van Lil, A. Van de Capelle
Katholieke Universiteit Leuven (K.U.Leuven)
Department of Electrical Engineering, ESAT-TELEMIC division²
Kasteelpark Arenberg 10, B-3001 Heverlee (Leuven), Belgium
E-mail: [email protected]

Abstract

OPNET contains a vast amount of node models that represent specific types of networking gear available in the marketplace. These models closely resemble their real-life counterparts in supported functionality, but in order for OPNET to generate representative simulation results they also have to accurately simulate the performance of the real devices.

In this paper a method is presented to obtain the values necessary to configure the performance parameters of the router node model. First of all a traffic generator was used to measure the actual behavior of the router. The results of these measurements were processed statistically. From these statistics the necessary parameters were calculated. The obtained parameters were set in the router model and several simulations were performed to verify the match with the measurements.

In this paper, the actual behavior of a router is compared to that of the corresponding OPNET node model as a function of parameters such as the inter-arrival time and size of the packets on the network, using a traffic generator. A method is presented to obtain the values necessary to match the simulation to the measurements. Finally, some remarks are made about the particular configuration we measured and simulated, and suggestions are given for more accurate models.

This paper is organized as follows. In section 2 the measurement setup is described. Section 3 describes the traffic which the router was loaded with during the measurements. In section 4 the measurement results are processed. Section 5 describes the calculation of the performance parameters. The simulation scenario we used is given in section 6. In section 7 the simulation results are compared with the measurements. Some remarks are made and some suggestions given in section 8. Finally, section 9 sums up the conclusions of this paper.

1. Introduction

A major advantage of the OPNET simulation tool is the vast number of network elements the user can choose from to build a simulation topology. The model library included with OPNET Modeler 8.1A, which was used for this paper, is very large and contains node models for a broad range of networking gear. There is even a collection of so-called 'Vendor Models' which model many specific, real-life devices from several vendors available in the marketplace.

However, as is clearly stated in the online documentation, these models are based solely on information gathered from product catalogs and the manufacturers' websites. No other device-specific information is represented in these models. This means the models closely resemble the real devices in the interfaces they have and the protocols they support. Their performance parameters, however, have not been tuned to approximate the performance characteristics of the real devices. In fact, when examining the settings of some router models, it is clear that very often the 'IP Processing Information' attribute, which contains the parameters determining the routing performance, is set to 'default'. Some simulation results, like the processing delay in the router, will therefore not match the actual delay induced by a real router.

¹ Research Assistant of the Fund of Scientific Research – Flanders (FWO-Vlaanderen)
² ESAT-Telemic participates in the OPNET University Program for research [7, 8, 9] and educational projects [10, 11, 12].

2. Measurement setup

For the purpose of validating the router models in OPNET, we chose a Cisco 2621 router available in our networking lab. This router has, among others, two autosensing 10/100 Mbps Ethernet ports, which we used to perform our tests. Other details of this router can be found in table 1.

Cisco IOS version: IOS IS 12.1(6)
Amount of flash memory: 16 MB
Amount of RAM: 48 MB
Amount of NVRAM: 32 KB

Table 1: Router characteristics

The router contains one central processor that is responsible for performing all packet processing functions of the router. For the measurements we disabled the router cache. In order to measure the performance of the router, we wanted to load it with well-defined traffic patterns and check how the router handled this traffic. Since we wanted to control the characteristics of the imposed traffic completely, we needed a utility to generate this artificial traffic. A lot of free traffic generators are available today. After browsing the documentation of several of them, as well as some reports of research in which these utilities are used, we decided to use the "RUDE & CRUDE" 0.61 utilities [1], which were created during the project Faster 2000 at the Tampere University of Technology. In a technical report titled "Low-Cost Precise QoS Measurement Tool" [2], these utilities were used to measure Quality of Service characteristics of a network such as packet loss, throughput, one-way delay and delay variation, as well as the distributions of delay and delay variation. The RUDE part (which stands for Real-Time UDP Data Emitter) of this tool can dispatch UDP packets according to a predefined pattern with a precision of 1 microsecond. This accuracy is high enough for our application. To make it possible to process the dispatched UDP packets when they are received, they contain information like the destination address, a stream ID, a sequence number and a time stamp with a resolution of 1 microsecond indicating when they were sent. The UDP packets therefore have a payload of at least 20 bytes.

Summing up the above, we end up with two measurement configurations, which allowed us to measure one-way delay with an accuracy of at least the order of a few tens of microseconds: one for the reference measurement and one for the actual measurement of the router, as can be seen in figure 1.

Figure 1: Reference and router measurement setup (reference: PC 1 connected directly to PC 2; router measurement: PC 1 connected to PC 2 through the router)

PC 1 is a portable PC with a Pentium III 750 MHz processor, running Red Hat Linux 7.1 2.96-79 with a Linux 2.4.2-2 kernel. PC 2 is a desktop PC with a Pentium III 1 GHz processor, running Mandrake Linux 8.1 2.96-0.62mdk with a Linux 2.4.8-34.1mdk kernel. Both PCs have an autosensing 10/100 Mbps Ethernet NIC, which we used to make the network connection. The NICs were in 100 Mbps full-duplex mode during the measurements.

At the other end, we needed a utility to receive the packets, to enable us to check how the traffic pattern was modified by the router. The CRUDE part (which stands for Collector for RUDE) has exactly that purpose. However, it did not give very accurate packet reception time stamps (there was an error of up to 200 microseconds). This is probably because the packets have to be processed by several protocol layers before being available to the CRUDE utility, and because the CRUDE utility has to wait to be scheduled some processor time before it can actually time stamp the packet. So we chose a different approach and received the packets with the tcpdump [3] packet sniffer. This led to very accurate receive time stamps (of the order of microseconds); however, we had to write our own utility to process the output of tcpdump.

On the receive side we executed the command:

tcpdump -i eth0 -s 80 -w filename

This tells tcpdump to put the first 80 bytes of the packets arriving on interface eth0 (the 10/100 Mbps Ethernet NIC) and their receive time stamps into the dumpfile filename. On the send side we then executed the command:

rude -P 90 -s filename

This tells the RUDE utility to send traffic according to the configuration in the file filename. The "-P 90" option tells RUDE to execute with scheduling priority 90 (one has to start the program as the root user to be able to do this). The processor will therefore schedule the execution of the RUDE process before those of lower-priority (almost all other) processes, so the RUDE process won't have to wait for other processes to finish executing before it can send its UDP packets.
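The utility we wrote to process the tcpdump output is not reproduced here, but its core can be sketched as follows. This is our own illustration, not the actual script: it assumes the binary dumpfile is first converted to text (e.g. with "tcpdump -tt -r filename", which prints epoch time stamps with microsecond resolution), and the line format shown in the comment is an assumption about that text output.

```python
import re

# Hypothetical sketch of a tcpdump post-processor (the actual utility used
# for the measurements is not published). Assumes text lines of the form:
#   1030000000.000200 IP 10.0.0.1.10001 > 10.0.0.2.10001: UDP, length 200
LINE = re.compile(r"^(\d+\.\d+) IP (\S+) > (\S+): UDP, length (\d+)")

def parse_dump(lines):
    """Return a list of (receive_time_s, source, udp_payload_bytes)."""
    records = []
    for line in lines:
        m = LINE.match(line)
        if m:
            recv_time = float(m.group(1))       # epoch seconds, microsecond resolution
            records.append((recv_time, m.group(2), int(m.group(4))))
    return records

# Two sample lines 200 microseconds apart (fabricated for illustration).
sample = [
    "1030000000.000200 IP 10.0.0.1.10001 > 10.0.0.2.10001: UDP, length 200",
    "1030000000.000400 IP 10.0.0.1.10001 > 10.0.0.2.10001: UDP, length 200",
]
records = parse_dump(sample)
```

From such records, the receive times can then be matched against the send times carried in the RUDE payload.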

Because we wanted to measure one-way delay, the clocks of the sending and the receiving PC had to be synchronized. There are a few ways to synchronize the clocks of PCs. One way is to use the Network Time Protocol (NTP) [4]. The accuracy that can be achieved, however, is of the order of tens of milliseconds over the Internet and about 1 millisecond in a LAN. This is not enough to reliably measure the delay introduced by a router, which can be of the order of milliseconds or even smaller. A better way would be to synchronize the clocks of the PCs to the signal of a GPS receiver. However, since such a device was not at our disposal, we used a different technique to compensate for the mismatch between the two clocks. By sending a reference stream directly from sender to receiver (so without the router in between) before and after each measurement, it was possible to measure the mismatch between the two clocks (and any other constant delay that might be present). From these reference measurements we could conclude that the mismatch between the two clocks was actually not entirely constant: it decreased by a constant amount of 0.9 milliseconds every 10 seconds (the length of the reference measurement). With the values obtained by a linear regression of the receive time as a function of the send time of a reference measurement, we were able to predict the receive times of the next reference measurement with an accuracy of the order of microseconds. So we compensated for the clock mismatch by subtracting the calculated clock mismatch from the receive times of the packets.
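The compensation step above can be sketched in a few lines. This is a minimal illustration of the technique, not the authors' actual processing script; the function names and the example numbers (a 2-second offset with the 0.9 ms per 10 s drift mentioned above) are our own.

```python
# Sketch of the clock-mismatch compensation: a reference stream sent
# directly between the two PCs gives (send_time, receive_time) pairs.
# Since the clock offset drifts linearly, a least-squares fit of receive
# time on send time captures both the constant offset and the drift.

def fit_clock_mismatch(send_times, recv_times):
    """Least-squares fit recv = a * send + b over a reference stream."""
    n = len(send_times)
    mean_s = sum(send_times) / n
    mean_r = sum(recv_times) / n
    cov = sum((s - mean_s) * (r - mean_r)
              for s, r in zip(send_times, recv_times))
    var = sum((s - mean_s) ** 2 for s in send_times)
    a = cov / var            # slope: 1 + drift rate
    b = mean_r - a * mean_s  # intercept: constant clock offset
    return a, b

def corrected_delay(send_time, recv_time, a, b):
    """One-way delay after subtracting the predicted clock mismatch."""
    return recv_time - (a * send_time + b)

# Illustrative data: receiver clock 2 s ahead, drifting -0.9 ms per 10 s,
# with zero true network delay, one packet every 200 microseconds.
sends = [i * 0.0002 for i in range(10000)]
drift = -0.0009 / 10.0
recvs = [s * (1 + drift) + 2.0 for s in sends]
a, b = fit_clock_mismatch(sends, recvs)
residual = corrected_delay(sends[5000], recvs[5000], a, b)
```

After fitting the reference stream, the same (a, b) pair is applied to the receive times of the router measurement that follows it.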

The two Ethernet interfaces of the router were configured for the router measurement setup. During the measurement session the topology was switched from the reference measurement setup to the router measurement setup and back again. After every switch, a small script reconfigured the PCs for the new setup and sent ping packets from the send PC to the receive PC until a ping reply arrived, to make sure the setup was operational, the ARP caches were filled etc.

3. The applied traffic

With the RUDE utility two types of traffic can be generated. The CONSTANT stream consists of UDP packets of constant size sent out at a constant rate. The TRACE stream consists of UDP packets whose size and time until the next packet are specified individually for each packet in a trace file. Several streams can be generated simultaneously. To avoid transitory effects, we decided to load the router with traffic of the CONSTANT stream type. A stream of 10000 packets was sent each time, so we could be sure to be in a steady state long enough to make statistically relevant averages. In order to examine the dependence of the performance characteristics of the router on the packet size and the packet rate, we consecutively loaded the router with packets of 20 (the smallest possible packet), 200, 600, 1000 and 1472 (the largest possible packet on Ethernet) bytes of UDP payload, which were sent every 200, 300, 400, 500, 600, 800, 1000, 1200, 1400 or 1600 microseconds.
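The offered load corresponding to each combination in this measurement grid can be computed directly. The sketch below is our own illustration; it counts the IP packet as UDP payload plus 29 bytes of headers, following the OPNET convention described in section 6, and ignores Ethernet framing overhead.

```python
# Offered load for each packet-size / interpacket-time combination used
# in the measurements. IP packet size is taken as UDP payload + 29 bytes
# (IP and UDP headers, per the OPNET convention in section 6).
payload_sizes = [20, 200, 600, 1000, 1472]    # UDP payload, bytes
intervals_us = [200, 300, 400, 500, 600, 800, 1000, 1200, 1400, 1600]

def offered_load(payload_bytes, interval_us):
    """Return (packets per second, bits per second) for one CONSTANT stream."""
    pps = 1e6 / interval_us
    bps = pps * (payload_bytes + 29) * 8
    return pps, bps

# The full measurement grid: 5 sizes x 10 intervals = 50 combinations.
grid = {(p, t): offered_load(p, t)
        for p in payload_sizes for t in intervals_us}
```

For example, the smallest packets at the shortest interval give 5000 packets/s, while the largest packets at that interval already offer about 60 Mbit/s at the IP level.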

By examining the number of received packets as a function of the size of the UDP payload and of the interpacket time under heavy load in figure 3, one can see that the performance characteristics of the particular configuration we tested are indeed dependent on the packet size. This contradicts the common assumption that the rate at which a router routes packets is independent of the packet size and should therefore be expressed as a number of packets per second.

4. The received traffic

After the measurements, we converted the dumpfiles created by tcpdump into lists with columns containing the sender, the send time, the receive time corrected for clock mismatch, the packet size and the sequence number. Using processing scripts developed by two Master's students during their thesis [5] in our research group, the lists were processed further: the lost packets were detected, the one-way delay was calculated and the results were processed statistically. This resulted in the following statistics: throughput (bits per second), throughput (packets per second), average delay (seconds) and average packet loss (%). The averages were taken over groups of 100 sent packets. An example of these statistics is shown in figure 2. As expected, the various statistics are nearly constant after transitory effects at the beginning.
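The per-group statistics described above can be sketched as follows. This is our own reconstruction, not the students' scripts [5]; in particular, computing the per-group throughput from the span of receive times within the group is an assumption about how the averaging was done.

```python
# Sketch of the statistical post-processing: packets are grouped per 100
# sent packets; throughput, average delay and packet loss are computed per
# group. 'received' maps sequence number -> (send_time_s, recv_time_s);
# a lost packet simply has no entry.
def group_statistics(received, packets_sent, payload_bytes, group=100):
    """Return a list of (throughput_pps, throughput_bps,
    avg_delay_s, loss_percent), one tuple per group of sent packets."""
    stats = []
    for start in range(0, packets_sent, group):
        got = [received[s] for s in range(start, start + group)
               if s in received]
        if got:
            span = max(r for _, r in got) - min(r for _, r in got)
            pps = len(got) / span if span > 0 else 0.0
            delay = sum(r - s for s, r in got) / len(got)
        else:
            pps, delay = 0.0, 0.0
        bps = pps * payload_bytes * 8
        loss = 100.0 * (group - len(got)) / group
        stats.append((pps, bps, delay, loss))
    return stats

# Illustrative data: 200 packets sent every 200 us, constant 1 ms delay,
# the first ten packets lost.
received = {s: (s * 0.0002, s * 0.0002 + 0.001) for s in range(10, 200)}
stats = group_statistics(received, 200, payload_bytes=200)
```

The fabricated example yields two groups: the first with 10% loss, the second lossless, both with a 1 ms average delay.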

5. Calculating the necessary parameters

The fact that we could overload the router with our measurement traffic allowed for easy determination of the parameters of the router. Since our goal was to model the router in OPNET, we looked at the 'IP Processing Information' table of a router node in OPNET. The attributes in this table indicated which parameters we had to determine. Since the measured router uses central processing, the second attribute ('Backplane Transfer Rate') was irrelevant. Since we didn't use MPLS, neither was the third attribute ('Datagram Switching Rate'). That left us with the datagram forwarding rate, the forwarding rate units and the memory size. As mentioned before, these parameters can easily be determined for an overloaded router. Since the router receives traffic with a constant packet size at a constant rate and can't handle it, we can assume that the buffer of the router is practically always full, so there will always be a packet available for the router to forward. Therefore the datagram forwarding rate, the rate at which the router processes packets, will be equal to the throughput in packets per second (or bits per second, depending on how the 'Forwarding Rate Units' attribute is set) during the steady state of the measurement. This only holds for an overloaded router; a router far from overload has a throughput equal to the arrival rate of traffic at the router. Note that to convert the datagram forwarding rate from packets per second to bits per second for use in OPNET, one has to multiply by 8 times the sum of the UDP payload size and 29 bytes, as will be explained in section 6.

Figure 2: Statistics of a few streams of 2500 packets/s

Because we can assume the buffer of the router to be practically always full, an arriving packet that is not dropped will always be inserted at the end of the buffer. So the delay that packet incurs at the router is approximately the time it takes for the router to process an entire buffer of packets. The relation between the delay, the datagram forwarding rate and the buffer size is given by:

D = B / R

with
D = delay (s)
B = buffer size (packets or bits)
R = datagram forwarding rate (packets per second or bits per second)

The buffer size that follows from this equation should be filled in as the value of the 'Memory Size' attribute. Since the unit of this attribute is bytes, however, one should multiply the buffer size in packets by the UDP payload size plus 29 bytes, as will be explained in section 6.
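The parameter calculation then reduces to two one-line formulas. The sketch below is a minimal illustration; the example numbers (3000 packets/s, 25 ms delay) are invented for the usage example and are not measurement results from this paper, although they reproduce a buffer of about 75 packets, the order of magnitude reported in section 8.

```python
# Under overload, the measured average delay D and datagram forwarding
# rate R give the buffer size B = D * R (rearranging D = B / R). The
# OPNET 'Memory Size' attribute is in bytes, so the buffer size in
# packets is multiplied by the IP packet size (UDP payload + 29 bytes,
# see section 6).
def buffer_size_packets(avg_delay_s, fwd_rate_pps):
    return avg_delay_s * fwd_rate_pps

def memory_size_bytes(avg_delay_s, fwd_rate_pps, payload_bytes):
    return buffer_size_packets(avg_delay_s, fwd_rate_pps) * (payload_bytes + 29)

# Illustrative numbers (not from the paper): a router forwarding 3000
# packets/s with a 25 ms average overload delay buffers about 75 packets.
b = buffer_size_packets(0.025, 3000)      # about 75 packets
m = memory_size_bytes(0.025, 3000, 200)   # about 75 * 229 bytes
```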

Figure 3: Number of received packets as a function of packet size and interpacket time

6. The OPNET simulation scenario

Next, we created an OPNET scenario that represents the router measurement setup, in order to compare the simulation results with our tests. The scenario topology is depicted in figure 4. It contains an advanced Ethernet workstation (the sending PC), an advanced Ethernet server (the receiving PC) and a node representing the Cisco 2621 router. The two computers are connected to the router with Ethernet 100BaseT links. A model of the Cisco 2621 router is not included with the vendor models, but since the router vendor models are all based upon the same model with the right interfaces and protocols attached, we constructed a model for the Cisco 2621 router with the 'Device Creator'.

Figure 4: The scenario topology

The traffic is defined as a custom application. In the task definition the number of packets per request was set to 1, and we filled in the size of the UDP payload of the packets for 'Request Packet Size'. The actual size of the resulting IP packets will be 29 bytes larger in OPNET, to take into account the size of the IP and UDP headers. All distributions were of course set to constant. The repetition in the traffic pattern could be set in the task definition or in the profile definition. Setting it in the task definition resulted in an interpacket time that was actually 10 microseconds larger than the one specified, but simulation times were very small (about 14 seconds for a stream of 10000 packets). Specifying the repetition in the application repeatability attribute of the profile definition resulted in an exact interpacket time, but simulation times increased by a factor of 2 to 6, depending on how fast the application had to be repeated. An advantage of putting the repeatability into the profile definition is that one can measure the one-way end-to-end delay by choosing the task response time statistic.

7. Comparison of measurement and simulation

We calculated the values of the 'Datagram Forwarding Rate' and 'Memory Size' attributes as explained in section 5 for all measurements during which the router was overloaded. With these values every measurement was simulated. The match between the measured and the simulated throughput and delay was very good, as can be seen in figure 5; the error on both statistics was always smaller than 5%. This validates that the behavior of the router model included in OPNET closely resembles the performance characteristics of an actual router when the right values are filled in for the various 'IP Processing Information' attributes.

Figure 5: Comparison of measurement and simulation

8. Remarks and suggestions

Calculation of the buffer size for the various measurements revealed that the buffer size is approximately constant (between 75 and 76 packets, with an error of less than 3%) over all measurements when expressed as a number of packets. By querying the router we could confirm that the input queue of the interface connected to the sender does indeed have a size of 75 packets. So, in order to get a value that is realistic for all packet sizes for this particular router in this configuration, we would suggest multiplying the buffer size in packets by the average packet size of the traffic flowing through the router. An adaptation of the OPNET software package to accommodate this router would be to allow entering the memory size in packets. We therefore suggest adding a 'Memory Size Units' attribute to the 'IP Processing Information' table. This attribute should have 'bytes' and 'packets' as allowed values.

Figure 6: The dependence of the datagram forwarding rate on the arrival rate

Figure 7: The dependence of the datagram forwarding rate on the packet size

Examination of the datagram forwarding rate across all measurements revealed a dependence on both the arrival rate in packets and the size of the packets for this particular router in this configuration. Preliminary results indicate that both dependencies are approximately linear, as shown in figures 6 and 7, but further research is needed to determine the exact relation between those parameters.

From this we can conclude that, at least for some routers, the datagram forwarding rate is a parameter that varies significantly with the load applied to the router. For these routers it might be very hard to find a suitable value for the datagram forwarding rate, so it might not be possible to accurately model the delay they introduce. This makes it impossible to determine by simulation, with the present routing model of OPNET, if and when this kind of router is overloaded and starts dropping packets. A better understanding of the dependence of the datagram forwarding rate on the arrival rate and the packet size, and its subsequent implementation in OPNET, might alleviate this problem in the future.

9. Conclusion

In this paper a measurement setup and a measurement method were described with which it is possible to calculate the parameters needed to model the performance characteristics of a router in OPNET. Simulation results were presented that show the close match between measurement and simulation. Finally, some remarks were given on the particular router we used, as well as some suggestions for more accurate models.

References

[1] J. Laine, S. Saaristo and R. Prior, "RUDE and CRUDE: Real-time UDP Data Emitter and Collector", May 2001. http://cvs.atm.tut.fi/rude/
[2] S. Ubik, V. Smotlacha and S. Saaristo, "Low-Cost Precise QoS Measurement Tool", June 2001. http://www.cesnet.cz/doc/techzpravy/2001/07/
[3] Tcpdump: traffic dump utility. http://www.tcpdump.org
[4] D.L. Mills, "Simple Network Time Protocol (SNTP) Version 4 for IPv4, IPv6 and OSI", Request for Comments 2030, Internet Engineering Task Force, October 1996. http://www.ietf.org/rfc/rfc2030.txt
[5] T. Brans and T. Dekeyser, "The ultimate network crash test" (in Dutch), thesis to obtain the degree of Master in Electronics Engineering, K.U.Leuven, Belgium, June 2002.
[6] P. Leys, J. Potemans, B. Van den Broeck, J. Theunis, E. Van Lil and A. Van de Capelle, "Use of the Raw Packet Generator in OPNET", OPNETWORK 2002, Washington D.C., USA, August 2002.
[7] J. Potemans, J. Theunis, B. Rodiers, B. Van den Broeck, P. Leys, E. Van Lil and A. Van de Capelle, "Simulation of a Campus Backbone Network, a case-study", OPNETWORK 2002, Washington D.C., USA, August 2002.
[8] M. Teughels, E. Van Lil and A. Van de Capelle, "Backbone Network Simulation: a Self-Similar Perpetuum Mobile", OPNETWORK 1999, Washington D.C., USA, August 1999.
[9] J. Theunis, B. Van den Broeck, P. Leys, J. Potemans, E. Van Lil and A. Van de Capelle, "OPNET in Advanced Networking Education", OPNETWORK 2002, Washington D.C., USA, August 2002.
[10] J. Theunis, J. Potemans, M. Teughels, A. Van de Capelle and E. Van Lil, "Project Driven Graduate Network Education", Proc. of the International Conference on Networking ICN'01, Colmar, France, pp. 790-802, 2001.
[11] J. Potemans, J. Theunis, M. Teughels, E. Van Lil and A. Van de Capelle, "Student Network Design Projects using OPNET", OPNETWORK 2001, Washington D.C., USA, August 2001.