Contents of Lecture IV

1. Statistical Errors of Markov Chain MC Data
2. Autocorrelations
3. Integrated Autocorrelation Time and Binning
4. Illustration: Metropolis generation of normally distributed data
5. Self-consistent versus reasonable error analysis
6. Comparison of Markov chain MC algorithms

1 Statistical Errors of Markov Chain MC Data

In large-scale MC simulations it may take months, possibly years, of computer time to collect the necessary statistics. For such data a thorough error analysis is a must. A typical MC simulation falls into two parts:

1. Equilibration: Initial sweeps are performed to reach the equilibrium distribution. During these sweeps measurements are either not taken at all or are discarded when calculating equilibrium expectation values.
2. Production: Sweeps with measurements are performed, and equilibrium expectation values are calculated from these statistics.

A rule of thumb: do not spend more than 50% of your CPU time on measurements! The reason for this rule is that one then cannot be off by a factor worse than two (√2 in the statistical error).

How many sweeps should be discarded to reach equilibrium? In a few situations this question can be rigorously answered with the Coupling from the Past method (Propp and Wilson). The next best thing is to measure the integrated autocorrelation time self-consistently and, after reaching a visually satisfactory situation, to discard a number of sweeps larger than the integrated autocorrelation time. In practice even this often cannot be achieved. It is therefore reassuring that it suffices to pick the number of discarded sweeps approximately right: with increasing statistics the contribution of the non-equilibrium data dies out like 1/N, where N is the number of measurements. For large N the effect is eventually swallowed by the statistical error, which declines only like 1/√N.
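The equilibration-then-production procedure above can be sketched with a minimal Metropolis chain targeting a standard normal distribution (anticipating the illustration in section 4). All function names, the step size, and the starting point are my own illustrative choices, not from the lecture:

```python
import math
import random

def metropolis_normal(n_sweeps, step=1.0, x0=10.0, seed=0):
    """Metropolis chain targeting the standard normal density exp(-x^2/2).

    The chain is deliberately started far from equilibrium (x0 = 10)
    so that the need for equilibration sweeps is visible.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_sweeps):
        x_new = x + rng.uniform(-step, step)
        # Accept with probability min(1, p(x_new)/p(x)); the exponent is
        # bounded because proposals move at most `step` away from x.
        if rng.random() < math.exp(0.5 * (x * x - x_new * x_new)):
            x = x_new
        samples.append(x)
    return samples

chain = metropolis_normal(20000)
n_equil = 1000                     # discarded equilibration sweeps
production = chain[n_equil:]       # only these enter expectation values
mean = sum(production) / len(production)
```

Running the sketch with and without discarding the first 1000 sweeps shows the point made above: keeping the non-equilibrium data biases the estimate by a contribution that shrinks like 1/N, while the statistical error shrinks only like 1/√N.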
The point of discarding configurations to reach equilibrium is that the factor in front of 1/N can be large. There are also far more involved situations: the Markov chain may end up in metastable configurations, which may even stay unnoticed (e.g., in complex systems like spin glasses or proteins).

2 Autocorrelations

We would like to estimate the expectation value f̂ of some physical observable. We assume that the system has reached equilibrium. How many MC sweeps are needed to estimate f̂ with some desired accuracy? To answer this question, one has to understand the autocorrelations within the Markov chain.

Given is a time series of N measurements

f_i = f(x_i), i = 1, ..., N

from a Markov process, where the x_i are the configurations generated. The label i = 1, ..., N runs in the temporal order of the Markov chain, and the elapsed time, measured in updates or sweeps, between subsequent measurements f_i, f_{i+1} is always the same, independent of i. The estimator of the expectation value f̂ is

f̄ = (1/N) Σ_{i=1}^{N} f_i.
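The estimator f̄ and its autocorrelations can be computed from a time series with a short sketch. This previews the integrated autocorrelation time and binning of section 3; the function names are my own, and I use one common convention, τ_int = 1 + 2 Σ_{t≥1} ρ(t) (so τ_int = 1 for uncorrelated data) with the sum truncated at some t_max:

```python
import math
import random

def autocorrelation(f, t_max):
    """Normalized autocorrelation function rho(t) = C(t)/C(0), t = 0..t_max."""
    n = len(f)
    mean = sum(f) / n
    var = sum((x - mean) ** 2 for x in f) / n
    rho = []
    for t in range(t_max + 1):
        c = sum((f[i] - mean) * (f[i + t] - mean) for i in range(n - t)) / (n - t)
        rho.append(c / var)
    return rho

def tau_int(rho):
    """Integrated autocorrelation time, truncated at len(rho) - 1."""
    return 1.0 + 2.0 * sum(rho[1:])

def binning_error(f, bin_size):
    """Error bar of the mean from the scatter of bin averages."""
    n_bins = len(f) // bin_size
    means = [sum(f[k * bin_size:(k + 1) * bin_size]) / bin_size
             for k in range(n_bins)]
    m = sum(means) / n_bins
    var = sum((b - m) ** 2 for b in means) / (n_bins - 1)
    return math.sqrt(var / n_bins)

# Uncorrelated test data: rho(t) should vanish for t >= 1 up to noise,
# so tau_int comes out close to 1.
rng = random.Random(1)
data = [rng.random() for _ in range(10000)]
rho = autocorrelation(data, 20)
tau = tau_int(rho)
err = binning_error(data, 10)
```

For a correlated chain (e.g., the Metropolis samples above), τ_int comes out larger than 1, and the binning error only stabilizes once the bin size exceeds the autocorrelation time.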
Fall '08
Berg