Simulator Tests

Sally Floyd
Lawrence Berkeley Laboratory
One Cyclotron Road, Berkeley, CA 94704
floyd@ee.lbl.gov

May 7, 1997

1 Introduction

This note shows some of the tests that I use to verify that our simulator is performing the way that we intend it to perform. I have nearly a hundred short tests that I use to verify the simulator; I run these tests after every major change to the simulator. This note shows a selection of these tests. Some of the tests would be of little interest to others, because they check features of the simulator such as the implementation of RED gateways, class-based queueing, variants of TCP with modifications to the window increase algorithms, or non-standard sources.

I first used these tests with the old version of our simulator (tcpsim), and am now (July 1995) using them to validate our new implementation of the simulator (implemented in C++ and in Tcl). (We are gradually making various components from the old simulator available on the new one.) However, I am leaving the input files in this document in the format used by the old simulator.

On each page, the graph shows the results of the simulation. For each graph, the x-axis shows the time in seconds. The y-axis shows the packet number mod 90. There is a mark on the graph for each packet as it arrives at and departs from the congested gateway, and an “x” for each packet dropped by the gateway. Some of the graphs show more than one active connection. In this case, packets numbered 1 to 90 on the y-axis belong to the first connection, packets numbered 101 to 190 on the y-axis belong to the second connection, and so on.

Below the graph is the input file for the simulator. The first part of the input file gives the simulator parameters that differ from the default parameters given in the standard input file. The second part of the input file defines the simulation network. The input file shows the edges in the network, with the queue parameters for the output buffers for the forward and/or backward direction for each link (if the output buffer does not use the default queue, which is an unbounded queue). Following the convention in our new simulator, the output buffer size includes a buffer for the packet currently being transmitted on the output link.

The third part of the file contains a line for each active connection, specifying the application (e.g., ftp, telnet), the transport protocol (“tcp” for Tahoe-style TCP, “renotcp” for Reno-style TCP, and “satcp” for Reno-style TCP with Selective Acknowledgements), and the variant of the transport protocol used by the receiver (“sink” for a TCP receiver that sends an ACK packet for every data packet, “dasink” for a TCP receiver that sends delayed ACKs, and “sasink” for a TCP receiver that sends Selective ACKs). The delayed-ACK receiver delays sending an ACK until a second data packet arrives, or until 100 msec have elapsed.

Below the input file is a brief description of the TCP behavior demonstrated in the test. Included are the commands for running this test on our old simulator tcpsim (which is not publicly available), and on our new network simulator ns.

2 Simulator defaults, disclaimers, and assumptions

This simulator is not intended to reproduce the behavior of specific implementations of TCP, and we do not use production TCP code in our simulator. The simulator is intended to explore the behavior inherent to the underlying congestion control algorithms, including the Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery algorithms. Even with the extensive testing that we have done, I would be surprised if the simulator was without bugs. Nevertheless, I have reasonable confidence that the simulations with this simulator display the essential dynamics of TCP's congestion control algorithms. Several aspects of the simulator don't match the behavior of actual implementations at all.
For example, in the simulator each TCP connection deals with packets, not segments. For each connection, it is possible to specify the data and acknowledgement packet sizes in bytes, but the simulator does not provide for two-way data within a single connection. Thus, there is no provision for acknowledgements piggy-backed on data packets.

Several parts of the behavior of the simulations in this note do not match current TCP implementations. For example, except for the few tests where a delayed-ACK receiver policy is specified, the TCP receiver acks every packet. For these simulations, the granularity of the TCP clock is set to 100 msec. This means that roundtrip times are measured only to the nearest 100 msec.

[This work was supported by the Director, Office of Energy Research, Scientific Computing Staff, of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098. This is an expanded version of a note that was first made available in October 1994.]

3 General warnings

The two simulations with phase effects are intended partly to emphasize the dangers of interpreting an individual simulation. The results of individual simulations might be sensitive to the exact parameters used in the simulation, such as propagation delays, TCP window sizes, etc. My experience is that simulations are more reliably used to show how performance varies as a function of a particular parameter (such as propagation delay, TCP window size, number of connections, number of congested gateways, etc.).

4 Acknowledgements

Our old simulator tcpsim is a version of the REAL simulator [K88a] built on Columbia's Nest simulation package [BDSY88a], with extensive modifications and bug fixes made by Steven McCanne and by Sugih Jamin. For the new simulator ns [Ns], this has been rewritten and embedded in Tcl, with the simulation engine implemented in C++.

References

[BDSY88a] Bacon, D., Dupuy, A., Schwartz, J., and Yemini, Y., “Nest: a Network Simulation and Prototyping Tool”, Proceedings of Winter 1988 USENIX Conference, 1988, pp. 17-78.

[F94] Floyd, S., “TCP and Successive Fast Retransmits”, technical report, October 1994. URL ftp://ftp.ee.lbl.gov/papers/fastretrans.ps.

[F94a] Floyd, S., “TCP and Explicit Congestion Notification”, ACM Computer Communication Review, V.24 N.5, October 1994, pp. 10-23. Available via http://www-nrg.ee.lbl.gov/nrg/.

[F96] Floyd, S., “Simulator Tests for Random Early Detection (RED) Gateways”, URL ftp://ftp.ee.lbl.gov/papers/redsims.ps.Z, October 1996.

[FJ92] Floyd, S., and Jacobson, V., “On Traffic Phase Effects in Packet-Switched Gateways”, Internetworking: Research and Experience, V.3 N.3, September 1992, pp. 115-156. Available via http://www-nrg.ee.lbl.gov/nrg/.

[FJ93] Floyd, S., and Jacobson, V., “Random Early Detection Gateways for Congestion Avoidance”, IEEE/ACM Transactions on Networking, V.1 N.4, August 1993, pp. 397-413. Available via http://www-nrg.ee.lbl.gov/nrg/.

[K88a] Keshav, S., “REAL: a Network Simulator”, Report 88/472, Computer Science Department, University of California at Berkeley, Berkeley, California, 1988.

[Ns] Ns. Available via http://www-nrg.ee.lbl.gov/ns/.

5 Tahoe TCP: Slow Start, Congestion Avoidance, Fast Retransmit

[graph: packet number (mod 90) vs. time]
Figure 1: Fast Retransmit, Slow Start, and Congestion Avoidance algorithms, with a single packet drop.

    renotcp, tcp [ window=14 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=6 ]
    ftp conv from tcp [start-at=1.0] at s1 to sink at k1

This test shows the Fast Retransmit, Slow Start, and Congestion Avoidance algorithms of Tahoe TCP. Initially, the connection increases its window using Slow-Start.
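The interaction of these algorithms can be sketched in a few lines of Python. This is a simplified illustration of the algorithms named above, not the simulator's C++/Tcl code; the drop condition (a window of 16 packets) is an arbitrary stand-in for queue overflow at the congested gateway.

```python
# Simplified sketch of Tahoe-style window dynamics, one step per roundtrip
# time. Slow Start doubles the congestion window each roundtrip time;
# Congestion Avoidance opens it by roughly one packet per roundtrip time;
# after a loss, ssthresh is set to half the window and Slow Start restarts.

def tahoe_rtt_step(cwnd, ssthresh, loss):
    if loss:                      # drop detected (e.g., three duplicate ACKs)
        ssthresh = max(cwnd // 2, 2)
        cwnd = 1                  # Tahoe falls back to Slow Start
    elif cwnd < ssthresh:
        cwnd *= 2                 # Slow Start: double per roundtrip time
    else:
        cwnd += 1                 # Congestion Avoidance: +1 per roundtrip time
    return cwnd, ssthresh

cwnd, ssthresh, trace = 1, 64, []
for _ in range(10):
    cwnd, ssthresh = tahoe_rtt_step(cwnd, ssthresh, loss=(cwnd >= 16))
    trace.append(cwnd)
print(trace)  # [2, 4, 8, 16, 1, 2, 4, 8, 9, 10]
```

After the loss the window restarts from one, slow-starts up to half the pre-loss window, and then grows linearly, which is the pattern visible in Figure 1.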
The sender detects a dropped packet after receiving three duplicate ACKs; this is the Fast Retransmit algorithm. The sender invokes Slow-Start, and later increases the window using the Congestion Avoidance algorithm. In this simulation, the congestion window is 14 packets when the first packet is dropped. After the Fast Retransmit, the source slow-starts up to a congestion window of 7 packets, and then opens the congestion window further by roughly one packet per roundtrip time. This test is run on ns with “ns test-suite.tcl tahoe2”, and on tcpsim with “csh test14C.com”. Figure 14 shows the Fast Recovery algorithm for this scenario.

[graph: packet number (mod 90) vs. time]
Figure 2: Fast Retransmit, Slow Start, and Congestion Avoidance algorithms, with multiple packet drops.

    tcp [ window=50 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=6 ]
    ftp conv from tcp at s1 to sink at k1

This test shows the Fast Retransmit, Slow Start, and Congestion Avoidance algorithms of Tahoe TCP with multiple packet drops for one window of data. This test is run on ns with “ns test-suite.tcl tahoe1”, and on tcpsim with “csh test1.com”.

[graph: packet number (mod 90) vs. time]
Figure 3: Retransmit timers.

    renotcp, tcp [ window=4 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge s2 to r1 bandwidth 8Mb delay 5ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=2 ]
    ftp conv from tcp [start-at=1.0] at s1 to dasink at k1
    ftp conv from tcp [start-at=1.3225] at s2 to dasink at k1

This test shows two connections, each with a maximum window of four packets.
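Both connections here use the delayed-ACK (“dasink”) receiver, which, as described in the introduction, holds an ACK until a second data packet arrives or until 100 msec have elapsed. A minimal sketch of that policy (an illustration only; the function name and event handling are mine, not the simulator's):

```python
# Sketch of the delayed-ACK receiver policy: hold the ACK for a data
# packet until a second packet arrives, or until 100 msec have elapsed.

ACK_DELAY = 0.100  # seconds

def delayed_ack_times(arrivals):
    """Map sorted data-packet arrival times to the times ACKs are sent."""
    acks, held_since = [], None
    for t in arrivals:
        if held_since is None:
            held_since = t                       # start holding an ACK
        elif t - held_since < ACK_DELAY:
            acks.append(t)                       # second packet: ACK both now
            held_since = None
        else:
            acks.append(held_since + ACK_DELAY)  # delay timer fired first
            held_since = t                       # hold for the new packet
    if held_since is not None:
        acks.append(held_since + ACK_DELAY)      # final delayed ACK
    return acks

print(delayed_ack_times([1.00, 1.02, 1.30]))
```

With arrivals at 1.00, 1.02, and 1.30 seconds, the first ACK goes out at 1.02 (on the second packet) and the second at roughly 1.40 (when the delay timer fires).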
Because of the small window coupled with the use of delayed acks, the source will never receive three duplicate ACKs after a packet drop, and will always have to recover from packet drops by waiting for a retransmit timer. For the top connection, the first packet of the connection is dropped. This shows the default value for the retransmit timer, which in this simulation is three seconds. For the bottom connection, a packet is dropped only after several successful measurements of the roundtrip time have been made. The tcp implementation in this simulation uses the timestamp option, where the roundtrip time can be measured for every new ACK packet. This test is run on ns with “ns test-suite.tcl timers”, and on tcpsim with “csh test3.com”.

[graph: packet number (mod 90) vs. time]
Figure 4: Fast Retransmit, Slow Start, and Congestion Avoidance algorithms, with multiple packet drops.

    renotcp [ window=100 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge s2 to r1 bandwidth 8Mb delay 5ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=8 ]
    ftp conv from tcp [start-at=1.0] at s1 to sink at k1
    ftp conv from tcp [start-at=0.5 window=16] at s2 to sink at k1

This test shows Tahoe TCP with two packet drops from one window of packets. Figure 18 shows this scenario with Reno TCP, and Figure 21 shows Reno TCP with Selective Acknowledgements. This test is run on ns with “ns test-suite.tcl tahoe3”, and on tcpsim with “csh test15C.com”.

[graph: packet number (mod 90) vs. time]
Figure 5: Delayed-ACK sink.
    renotcp, tcp [ window=50 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=6 ]
    ftp conv from tcp [start-at=1.0] at s1 to dasink at k1

This test shows the delayed-ACK sink. The ACK for the first packet is delayed by 100 msec. The second and third packets are acknowledged by a single ACK, and therefore the window is increased by only one packet, from two to three. When the single ACK arrives for packets 4 and 5, the window is again increased, to four. The source gets to send three packets. When the delayed ACK arrives for packet 6, the window is increased again, to five, allowing the source to send two more packets. This test is run on ns with “ns test-suite.tcl delayed”, and on tcpsim with “csh test7.com”.

[graph: packet number (mod 90) vs. time]
Figure 6: Telnet connections.

    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge s2 to r1 bandwidth 8Mb delay 5ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=6 ]
    telnet [interval=1100ms] conv from tcp at s1 to sink at k1
    telnet [interval=0] conv from tcp at s2 to sink at k1
    telnet [interval=0] conv from tcp at s2 to sink at k1

This test shows various telnet sources. The first telnet connection generates fixed-size packets with interpacket times from an exponential distribution with a mean of 1.1 seconds. The second and third telnet connections generate packets with interpacket times from the tcplib distribution. This test will give quite different results for different seeds for the pseudo-random number generator. This test is run on ns with “ns test-suite.tcl telnet”, and on tcpsim with “csh test10.com”.

[graph: packet number (mod 90) vs. time]
Figure 7: Phase effects.

    renotcp, tcp [ window=32 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge s2 to r1 bandwidth 8Mb delay 3ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=16 ]
    ftp conv from tcp [start-at=5.0] at s1 to sink at k1
    ftp conv from tcp [start-at=1.0] at s2 to sink at k1

This test shows a simulation of two TCP connections, with slightly different propagation delays on the two incoming edges. Note that the top connection, from source s2, receives a disproportionate share of the packet drops. This is due to phase effects [FJ92]. These phase effects remain with Reno TCP sources and with delayed-ACK sinks. This test is run on ns with “ns test-suite.tcl phase”, and on tcpsim with “csh test20.com”.

[graph: packet number (mod 90) vs. time]
Figure 8: Phase effects, with random overhead.

    renotcp, tcp [ window=32 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge s2 to r1 bandwidth 8Mb delay 3ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=16 ]
    ftp conv from tcp [start-at=5.0 overhead=10ms] at s1 to sink at k1
    ftp conv from tcp [start-at=1.0 overhead=10ms] at s2 to sink at k1

This test shows a simulation of two TCP connections identical to that in Figure 7, except that each TCP connection uses a random overhead of up to 10ms (the transmission delay at the bottleneck gateway) at the source to add a random delay to the processing time for each incoming ACK packet. The purpose of this random delay is to add a sufficient random factor to the simulations to prevent phase effects [FJ92]. This test is run on ns with “ns test-suite.tcl phase2”.

[graph: packet number (mod 90) vs. time]
Figure 9: Phase effects simulation, with changed propagation delays.
    renotcp, tcp [ window=32 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge s2 to r1 bandwidth 8Mb delay 9.5ms
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=16 ]
    ftp conv from tcp [start-at=5.0] at s1 to sink at k1
    ftp conv from tcp [start-at=1.0] at s2 to sink at k1

This test shows a simulation of two TCP connections, with a slightly different propagation delay on one of the links, compared to the test in Figure 7. Note that the bottom connection receives a disproportionate share of the packet drops. This test is run on ns with “ns test-suite.tcl phase1”, and on tcpsim with “csh test24.com”.

[graph: packet number (mod 90) vs. time]
Figure 10: Connections with different roundtrip times.

    tcp [ window=30 ] bq [ dropmech=random-drop ]
    edge r1 to k1 bandwidth 800Kb delay 100ms forward [ queue-size=11 ]
    edge s1 to r1 bandwidth 8Mb delay 5ms
    edge s2 to r1 bandwidth 8Mb delay 200ms
    ftp conv from tcp at s1 to sink at k1
    ftp conv from tcp at s2 to sink at k1

This test shows two Tahoe TCP connections where the top connection (from source s2) has a roundtrip time roughly three times that of the bottom connection. The shared gateway uses Random Drop.
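A Random Drop gateway can be sketched as follows. This is one common formulation shown for illustration, not the simulator's queue code, and the function name is mine: on overflow, the packet to discard is chosen uniformly at random from the buffered packets.

```python
import random

# Sketch of a Random Drop gateway: when an arrival overflows the buffer,
# a victim is chosen uniformly at random from the queued packets
# (including the new arrival) and discarded.

def random_drop_enqueue(queue, packet, limit, rng=random):
    """Enqueue packet; on overflow drop and return a random victim."""
    queue.append(packet)
    if len(queue) > limit:
        return queue.pop(rng.randrange(len(queue)))
    return None

rng = random.Random(7)   # fixed seed so the example is repeatable
queue, drops = [], []
for pkt in range(6):
    victim = random_drop_enqueue(queue, pkt, limit=3, rng=rng)
    if victim is not None:
        drops.append(victim)
print(len(queue), len(drops))  # 3 3
```

Because the victim is uniform over the queue, a connection's share of the drops tends to follow its share of the buffer occupancy.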
Because of TCP's window increase algorithms, the connection with the shorter roundtrip time increases its window faster, and receives a disproportionate share of the link bandwidth. This test is run on ns with “ns test-suite.tcl tahoe4”, and on tcpsim with “csh test32.com”.
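The roundtrip-time bias just described can be illustrated with a back-of-the-envelope calculation. The RTT values below are my estimates from the edge delays in the input file (twice the sum of the one-way propagation delays, ignoring queueing delay), not measurements from the simulation.

```python
# Back-of-the-envelope sketch of the roundtrip-time bias: in Congestion
# Avoidance each connection opens its window by roughly one packet per
# roundtrip time, so over the same wall-clock interval the short-RTT
# connection opens its window roughly three times as far.

def window_after(seconds, rtt, cwnd=1.0):
    """Window after `seconds` of Congestion Avoidance (+1 packet per RTT)."""
    return cwnd + seconds / rtt

rtt_short = 0.21  # s1 path: 2 * (5ms + 100ms), ignoring queueing delay
rtt_long = 0.60   # s2 path: 2 * (200ms + 100ms), ignoring queueing delay

print(round(window_after(6.0, rtt_short), 1))  # 29.6
print(round(window_after(6.0, rtt_long), 1))   # 11.0
```

A window roughly three times larger over the same interval translates into roughly three times the throughput, which is the bias visible in Figure 10.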