jefferson - Distributed Simulation and the Time Warp...


Distributed Simulation and the Time Warp Operating System

David Jefferson (UCLA) and Brian Beckman, Fred Wieland, Leo Blume, Mike DiLoreto, Phil Hontalas, Pierre Laroche, Kathy Sturdevant, Jack Tupman, Van Warren, John Wedel, Herb Younger (Jet Propulsion Laboratory), and Steve Bellenot (The Florida State University)

Abstract

This paper describes the Time Warp Operating System, under development for three years at the Jet Propulsion Laboratory for the Caltech Mark III Hypercube multiprocessor. Its primary goal is concurrent execution of large, irregular discrete event simulations at maximum speed. It also supports any other distributed applications that are synchronized by virtual time. The Time Warp Operating System includes a complete implementation of the Time Warp mechanism, and is a substantial departure from conventional operating systems in that it performs synchronization by a general distributed process rollback mechanism. The use of general rollback forces a rethinking of many aspects of operating system design, including programming interface, scheduling, message routing and queueing, storage management, flow control, and commitment. In this paper we review the mechanics of Time Warp, describe the TWOS operating system, show how to construct simulations in object-oriented form to run under TWOS, and offer a qualitative comparison of Time Warp to the Chandy-Misra method of distributed simulation. We also include details of two benchmark simulations and preliminary measurements of time-to-completion, speedup, rollback rate, and antimessage rate, all as functions of the number of processors used.

1. Introduction

Discrete event simulations are among the most expensive of all computational tasks. One sequential execution of a large simulation may take hours or days of processor time, and if the model is probabilistic, many executions will be necessary to determine the output distributions.
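The abstract's central idea is synchronization by process rollback rather than by blocking. As an illustrative sketch only (not the TWOS implementation described in the paper), the toy Python class below shows the optimistic pattern in miniature for a single logical process: execute events as they arrive, checkpoint state after each one, and when a straggler event with an earlier timestamp turns up, restore an older checkpoint and re-execute in correct order. Antimessage cancellation, inter-process messaging, and global virtual time are all omitted; every name here is invented for illustration.

```python
class LogicalProcess:
    """Toy Time Warp-style logical process (illustrative sketch only).

    Executes events optimistically, checkpoints state after each event,
    and rolls back and re-executes when a straggler (an event with a
    timestamp earlier than local virtual time) arrives.
    """

    def __init__(self):
        self.lvt = 0             # local virtual time
        self.state = 0           # toy state: running sum of event payloads
        self.processed = []      # (timestamp, payload) in execution order
        self.saved = [(0, 0)]    # checkpoints: (lvt, state) after each event
        self.rollbacks = 0

    def receive(self, ts, payload):
        if ts < self.lvt:        # straggler: roll back, then re-execute
            self.rollbacks += 1
            # discard checkpoints invalidated by the straggler
            while len(self.saved) > 1 and self.saved[-1][0] >= ts:
                self.saved.pop()
            self.lvt, self.state = self.saved[-1]
            # undo events at or after ts, then replay them with the straggler
            redo = [e for e in self.processed if e[0] >= ts]
            self.processed = [e for e in self.processed if e[0] < ts]
            for event in sorted(redo + [(ts, payload)]):
                self._execute(*event)
        else:
            self._execute(ts, payload)

    def _execute(self, ts, payload):
        self.state += payload
        self.lvt = ts
        self.processed.append((ts, payload))
        self.saved.append((ts, self.state))


lp = LogicalProcess()
for ts, payload in [(10, 1), (20, 2), (30, 4)]:
    lp.receive(ts, payload)
lp.receive(15, 8)    # straggler: forces a rollback to virtual time 10
assert lp.state == 15 and lp.lvt == 30 and lp.rollbacks == 1
```

In the full Time Warp mechanism this rollback would also send antimessages to cancel any messages the undone events had emitted to other processes, which is where the paper's antimessage-rate measurements come from.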
Nevertheless, many scientific, engineering, and military projects depend heavily on simulation because it is too expensive or too unsafe to experiment on real systems. Any technique for speeding up simulations is therefore of great economic importance. One obvious approach is to execute different parts of the same simulation in parallel. Most large systems that people want to simulate are composed of many interacting subsystems, and the physical concurrency in these systems translates into computational concurrency in the simulation. When the system to be simulated is extremely regular in its causal/temporal behavior, i.e. at each simulation time most objects in the

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1987 ACM 089791-242-X/87/0011/0077 $1.50