…ty in nuclear reactors. In the 1940s, a formal foundation for the Monte Carlo method was developed by von Neumann, who established the mathematical basis for probability density functions (PDFs), inverse cumulative distribution functions (CDFs), and pseudorandom number generators. The work was done in collaboration with Stanislaw Ulam, who realized the importance of the digital computer in the implementation of the approach. The collaboration resulted from work on the Manhattan Project, where the ENIAC was employed in the calculation of yield [Ulam et al., 1947; Ulam and Metropolis, 1949; Eckhard, 1987; Metropolis, 1987]. Individuals in the IBM Corporation were pioneers in the field of random number generation, perhaps because they were first engaged in it through their participation in the Manhattan Project, where Richard Feynman then directed their computing operations (a fascinating exposition of their approach to performing large-scale computing with a parallel approach appears in Richard Feynman's Surely You're Joking, Mr. Feynman). It is interesting to note the extremely primitive computing environments in existence at that time, and the challenges this presented to researchers (see Hurd, 1949).

Uses of Monte Carlo methods have been many and varied since that time. However, due to computer limitations, the method has not yet fully lived up to its potential, as discussed by Metropolis [Metropolis, 1985]. Indeed, this is reflected in the stages the method has undergone in the fields of engineering. In the late 1950s and 1960s, the method was tested in a variety of engineering fields [Meuller, 1956; Ehrlich, 1959; Polgar and Howell, 1965; Haji-Sheikh, 1968; Chandler et al., 1968]. At that time, even simple problems were compute-bound. The method has since been extended to more complex problems [Howell, 1968; Modest, 1978; Maltby, 1987a and 1987b; Maltby and Burns, 1988 and 1989; Crockett et al., 1989].
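The inverse-CDF machinery mentioned above is the workhorse of Monte Carlo sampling: a uniform pseudorandom number is mapped through the inverse of the target distribution's CDF to produce a sample from that distribution. The following is a minimal sketch of the idea for an exponential distribution (the distribution, the function name, and the parameters are illustrative choices, not anything specified in the text):

```python
import math
import random

def sample_exponential(rate, u=None):
    """Draw from an exponential PDF via the inverse-CDF method.

    The CDF is F(x) = 1 - exp(-rate * x); inverting it gives
    x = -ln(1 - u) / rate for a uniform pseudorandom u in [0, 1).
    """
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / rate

# Sketch of use: the sample mean should approach 1/rate.
random.seed(42)
samples = [sample_exponential(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
```

Exponential sampling of this kind is exactly what a particle-transport code needs to choose a distance to the next collision, which is why the inverse-CDF method appears so early in the Monte Carlo literature.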
Since then, attention has focused on much-needed convergence enhancement procedures [Kahn and Marshall, 1953; Emery and Carson, 1968; Burghart and Stevens, 1971; Lanore, 1971; Shamsunder et al., 1973; Zinsmeister and Sawyer, 1976; Larsen and Howell, 1986]. Many complex problems remained intractable through the seventies. With the advent of high-speed supercomputers, the field has received increased attention, particularly with parallel algorithms, which have much higher execution rates. In his Ph.D. dissertation, Brown introduced the concept of the "event step" [Brown, 1981], enabling efficient vectorization of Monte Carlo algorithms where the particles do not interact. This approach was later successfully exploited by several investigators. Martin et al. [Martin et al., 1986] reported speedups of a factor of five on an IBM 3090 with vector units. Nearly linear speedup was reported [Sequent Computer Systems, 1985] on a parallel architecture for photon tracing. Bobrowicz et al. [Bobrowicz et al., 1984a and 1984b] obtained speedup factors of five to eight with an algorithm in which particles are accumulated in queues until efficient vector lengths are obtained, allowing physics algorithms such as the Los Alamos benchmark GAMTEB to be effectively vectorized [Burns et al., 1988]. Such advanced coding techniques have enabled much bigger problems to be attacked, with improved accuracy. However, there is still a host of problems which remain intractable, even with an effectively vectorized algorithm. Moreover, some impediments to effective vectorization have been identified and analyzed. Zhong and Kalos [Zhong and Kalos, 1983] analyzed the "straggler" problem, where a few particles persist for many event steps, inhibiting performance due to the overhead incurred where vectors are "short."
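The event-step organisation described above can be sketched as follows: rather than following one particle from birth to absorption, all live particles are advanced together one event step at a time, so the inner loop is long and uniform (the shape that vectorizes well), and the "straggler" effect appears as the live list shrinking over successive steps. This is a toy illustration under assumed physics (a fixed per-step absorption probability), not any of the cited algorithms:

```python
import random

def track_batch(n_particles, absorb_prob=0.2, max_steps=100):
    """Advance a whole batch of non-interacting particles by event steps.

    Each event step processes every live particle once; a particle is
    absorbed with probability absorb_prob per step. Returns the number
    of steps each particle survived.
    """
    live = list(range(n_particles))       # indices of particles still in flight
    steps_survived = [0] * n_particles
    for _ in range(max_steps):
        if not live:
            break
        survivors = []
        for p in live:                    # one event step over the whole batch
            if random.random() >= absorb_prob:
                steps_survived[p] += 1
                survivors.append(p)
        live = survivors                  # stragglers keep this list nonempty
    return steps_survived

random.seed(1)
results = track_batch(10_000)
mean_steps = sum(results) / len(results)
```

With absorb_prob = 0.2, the expected number of survived steps per particle is (1 - p)/p = 4; the late event steps, in which only a handful of stragglers remain, are exactly where short vectors erode the performance gain.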
Pryor and Burns [Pryor and Burns, 1988; Burns and Pryor, 1989] reported, for one Monte Carlo problem where particles interact, speedups on the order of half of those observed where the particles do not interact, albeit with a vectorized algorithm of greatly increased complexity. The same problem has been attacked with different physics [McDonald and Baganoff, 1988; Dagum, 1989] in a more efficient algorithm. Good general references on Monte Carlo abound in the literature [Beckenback, 1956; Hammersly and Handscomb, 1964; Schreider, 1964; Kleinjnen, 1974; Rubenstein, 1981; Binder, 1984; Haji-Sheikh, 1988]. Most have a distinct bent, usually either statistics or physics. Unfortunately, there is a dearth involving large-scale engineering applications.

No tutorial on large-scale Monte Carlo simulation can be complete without a discussion of pseudorandom number generators. Where billions of random numbers are required, it is essential that the generator be of long period, have good statistical properties, and be vectorizable. On 64-bit machines, these criteria are usually satisfied with a multiplicative generator if the constants are carefully chosen [Kalos and Whitlock, 1986]. Indeed, we have come a long way from the early days of random number generators [Knuth, 1981]. For example, Cray's Fortran-callable generator ranf has a cycle length of 2^44, and is very efficient.
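The multiplicative generator referred to above has the form x_{n+1} = a * x_n mod m, and its period and statistical quality depend entirely on the choice of a and m. The sketch below uses the well-known Park-Miller constants (a = 16807, m = 2^31 - 1) purely for illustration; it is not the Cray ranf generator, and a production 64-bit generator of the kind the text describes would use a much larger modulus:

```python
class LehmerGenerator:
    """Multiplicative congruential generator x_{n+1} = a * x_n mod m.

    Illustrative Park-Miller constants: a = 16807, m = 2**31 - 1.
    Every call advances the state; the state must never be zero, or
    the sequence collapses to all zeros.
    """
    def __init__(self, seed=1):
        self.m = 2**31 - 1
        self.a = 16807
        self.state = seed % self.m or 1   # force a nonzero starting state

    def next_uniform(self):
        """Return the next pseudorandom number in the open interval (0, 1)."""
        self.state = (self.a * self.state) % self.m
        return self.state / self.m

gen = LehmerGenerator(seed=1)
first = gen.next_uniform()
second = gen.next_uniform()
```

Because each draw is one multiply and one remainder on independent state, long runs of such generators are easy to vectorize by keeping one state per vector lane, which is the property the text singles out as essential for billion-sample runs.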