autocorrelation function. [Footnote 4: This is a rather poor name for this quantity, since it is not the correlation time that is integrated but the autocorrelation function. However, it is the name in common use, so we use it here too.] Second, the
first few sections of Chapter 6.) To avoid this potential
pitfall, we commonly adopt a different strategy for
determining the equilibration time, in which we perform
two different simulations of the same system, starting
them in different initial states.
function of time (measured in Monte Carlo steps per
lattice site) for two different simulations using the
Metropolis algorithm. The two simulations were started
off in two different T = ∞ (random-spin) states. By about
time t = 6000 the two simulations have
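In code, this strategy amounts to running the same model twice from very different starts and declaring equilibration only when the two traces agree. The sketch below is a toy stand-in: the relaxing noisy "magnetization" replaces a real simulation, and the tolerance and window parameters are illustrative choices, not prescriptions.

```python
import random

def relax(m0, steps, tau=50.0, noise=0.005, seed=0):
    """Toy 'magnetization' trace relaxing toward m = 0.5 (a stand-in for a
    real Monte Carlo run; tau, noise and the target value are illustrative)."""
    rng = random.Random(seed)
    m, trace = m0, []
    for _ in range(steps):
        m += (0.5 - m) / tau + rng.gauss(0.0, noise)
        trace.append(m)
    return trace

def equilibration_time(a, b, tol=0.1, window=20):
    """First time at which the two traces have agreed to within tol for
    `window` consecutive steps -- a crude stand-in for judging by eye."""
    run = 0
    for t, (x, y) in enumerate(zip(a, b)):
        run = run + 1 if abs(x - y) < tol else 0
        if run >= window:
            return t - window + 1
    return None

hot = relax(0.0, 1000, seed=1)   # disordered start: magnetization near zero
cold = relax(1.0, 1000, seed=2)  # ordered start: magnetization near one
teq = equilibration_time(hot, cold)
```

Only when the traces have stayed together for a sustained stretch do we trust that both runs have reached the same equilibrium.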
As with experiments, the errors on Monte Carlo results
divide into two classes: statistical errors and systematic
errors.[9] Statistical errors are errors which arise as a result of random changes in the simulated system from measurement to measurement
fluctuate around a steady average value. The horizontal
axis in Figure 3.3 measures time in Monte Carlo steps per
lattice site, which is the normal practice for simulations of
this kind. The reason is that if time is measured in this
way, then the average
there is no τ_0 corresponding to the highest eigenvalue. There are 2^N − 1 correlation times in the case of the Ising model, for example. However, the rank of the matrix is usually very large, so let's not quibble over one
correlation time.) The longest of thes
(3.37), except that n is now replaced by the number nb of
blocks, which would be 10 in our example. This method is
intuitive, and will give a reasonable estimate of the order
of magnitude of the error in a quantity such as c.
However, the estimates it giv
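In outline, the blocking estimate takes only a few lines of code. The sketch below is a minimal illustration in plain Python: the uncorrelated Gaussian "measurements" stand in for real simulation data (blocking becomes genuinely useful when the data are correlated), and the run is split into nb blocks whose means feed the standard-error formula with n replaced by nb.

```python
import math
import random

def blocking_error(data, nb=10):
    """Split the run into nb blocks, average each block, and apply the
    standard-error formula with n replaced by the number of blocks nb."""
    n = len(data) // nb
    means = [sum(data[i * n:(i + 1) * n]) / n for i in range(nb)]
    grand = sum(means) / nb
    var = sum((m - grand) ** 2 for m in means) / (nb - 1)
    return grand, math.sqrt(var / nb)

# synthetic, uncorrelated "measurements" standing in for simulation output
rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(10000)]
mean, err = blocking_error(data)
```

For blocks much longer than the correlation time, the block means are nearly independent and the error bar comes out right without any separate estimate of τ.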
a long enough time after equilibration to make good
independent measurements of the quantities of interest.
When we discussed methods for estimating the
correlation time τ, we were dealing with this problem. In
the later sections of this chapter, and indee
about measuring our quantity of interest, and how long
do we have to average over to get a result of a desired
degree of accuracy? These are very general questions
which we need to consider every time we do a Monte
Carlo calculation. Although we will be d
one an interval Δt later than the other. If we measure the difference between the magnetization m(t′) at time t′ and its mean value, and then we do the same thing at time t′ + Δt, and we multiply them together, we will get a positive value if they were fluctuating i
but this is obviously impractical in a simulation, so we do
the best we can and just sum over all the measurements
of m that we have, from beginning to end of our run.
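The sum over all available measurements can be written down directly. The sketch below is an illustrative plain-Python version: for each displacement t it sums over every pair of measurements a distance t apart, subtracting the overall mean, and finally normalizes so the t = 0 value is one (the AR(1) test series is a stand-in for real magnetization data).

```python
import random

def autocorrelation(m, tmax):
    """chi(t) estimated by summing over every available pair of measurements
    a distance t apart, from the beginning to the end of the run."""
    n = len(m)
    mean = sum(m) / n
    chi = []
    for t in range(tmax):
        s = sum((m[i] - mean) * (m[i + t] - mean) for i in range(n - t))
        chi.append(s / (n - t))
    return [c / chi[0] for c in chi]   # normalize so chi(0) = 1 (optional)

# correlated toy data: an AR(1) series whose true autocorrelation is 0.9^t
rng = random.Random(7)
x, series = 0.0, []
for _ in range(5000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    series.append(x)
chi = autocorrelation(series, 50)
```

The direct double loop costs O(n × tmax) operations, which is the motivation for the FFT-based shortcut discussed later in the chapter.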
Figure 3.5 shows the magnetization autocorrelation of
our 100 × 100 Ising model at tempera
unlikely event that the two systems coincidentally
become trapped in the same metastable region (for
example, if we choose two initial states that are too
similar to one another) will we be misled into thinking
they have reached equilibrium when they have
which is independent of the sampling interval. [Footnote 12: For a more detailed discussion of these two methods, we refer the interested reader to the review article by Efron (1979).]

3.4.5 Systematic errors

Just as in experiments, system
really need is a measure of the correlation time of the
simulation. The correlation time is a measure of how long
it takes the system to get from one state to another one
which is significantly different from the first, i.e., a state
in which the number o
interested in such accurate estimates, but only in getting
a rougher measure of τ, could reasonably skip this
section. In Section 2.2.3 we showed that the probabilities
w(t) and w(t + 1) of being in a particular state at
consecutive Monte Carlo steps are r
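The relation between the probabilities at consecutive steps can be made concrete with a toy two-state chain (an illustrative example, not the Ising model's full 2^N-state matrix): repeated application of the transition matrix P drives any starting distribution to equilibrium, with deviations decaying at a rate set by the second-largest eigenvalue.

```python
import math

# Illustrative two-state Markov chain: P[i][j] is the probability of moving
# to state i from state j in one step (columns sum to one).
P = [[0.9, 0.2],
     [0.1, 0.8]]

def step(w):
    """One Monte Carlo step: w(t + 1) = P . w(t)."""
    return [P[0][0] * w[0] + P[0][1] * w[1],
            P[1][0] * w[0] + P[1][1] * w[1]]

# The eigenvalues of this P are 1 (equilibrium) and 0.7, so deviations from
# equilibrium decay as 0.7^t, i.e. with correlation time tau = -1/log(0.7).
tau = -1.0 / math.log(0.7)

w = [1.0, 0.0]               # start far from equilibrium
for _ in range(100):
    w = step(w)
# the equilibrium distribution of this particular chain is (2/3, 1/3)
```

After a hundred steps the distribution is indistinguishable from equilibrium, because 0.7^100 is vanishingly small; the same mechanism, with many more eigenvalues, underlies the decomposition into correlation times later in the section.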
algorithm we might make one measurement every sweep
of the lattice. Thus the total number of measurements
we make of magnetization or energy (or whatever)
during the run is usually greater than the number of
independent measurements. There are a number of
subsets of the data that we used in the first term. This is
not strictly speaking necessary, but it makes χ(t) a little better behaved. In Figure 3.5 we have also normalized χ(t) by dividing throughout by χ(0), but this is optional. We've just done it for nea
measurement of the energy also.

3.3 Measurement

Given the energy and the magnetization of our Ising
Given the energy and the magnetization of our Ising
system at a selection of times during the simulation, we
can average them to find the estimators of the internal
energy and average magnetization. Then
in Section 1.2.1, a system at equilibrium spends the
overwhelming majority of its time in a small subset of
states in which its internal energy and other properties
take a narrow range of values. In order to get a good
estimate of the equilibrium value of
estimate of the specific heat using all the data.[11] Both
the jackknife and the bootstrap give good estimates of
errors for large data sets, and as the size of the data set
becomes infinite they give exact estimates. Which one
we choose in a particular cas
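As an illustration of the first of these, a jackknife error estimate for the specific heat can be sketched as follows. This is a toy version in plain Python: the Gaussian "energies" stand in for real simulation output, c is taken to be simply the variance of the energy (constants such as β² and the system size omitted), and the leave-one-out values are computed from running totals for speed.

```python
import math
import random

def jackknife_specific_heat(energies):
    """Leave-one-out error estimate: recompute c (here just the variance of
    the energy, constants omitted) n times, each time dropping one
    measurement, and take the spread of the results as the error bar."""
    n = len(energies)
    total = sum(energies)
    total2 = sum(e * e for e in energies)
    c_full = total2 / n - (total / n) ** 2
    c_i = []
    for e in energies:
        m1 = (total - e) / (n - 1)        # mean without measurement e
        m2 = (total2 - e * e) / (n - 1)   # mean square without e
        c_i.append(m2 - m1 * m1)
    c_bar = sum(c_i) / n
    var = (n - 1) / n * sum((c - c_bar) ** 2 for c in c_i)
    return c_full, math.sqrt(var)

# synthetic "energy" measurements standing in for simulation output
rng = random.Random(3)
energies = [rng.gauss(-1.5, 0.3) for _ in range(2000)]
c, dc = jackknife_specific_heat(energies)
```

The (n − 1)/n prefactor is what inflates the tiny leave-one-out fluctuations back up to a sensible error bar on the full estimate.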
also suffers from this problem, unless we only perform
our fit over the exact range of times for which true
exponential behaviour is present. Normally, we don't
know what this range is, so the fitting method is no more
accurate than calculating the integra
process takes on the order of 10^7 steps in this case. However, looking at pictures of the lattice is not a reliable
way of gauging when the system has come to
equilibrium. A better way, which takes very little extra
effort, is to plot a graph of some quant
these bootstrap calculations of the specific heat, our estimate of the error is given by

    σ = √(⟨c²⟩ − ⟨c⟩²).    (3.41)

Notice that there is no extra factor of 1/(n − 1) here as there was in Equation (3.37). (It is clear that the latter would not give a correct resul
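A bootstrap calculation of this kind can be sketched in a few lines. The version below is illustrative: the Gaussian "energies" stand in for simulation data, c is just the variance of the energy with constants omitted, and the number of resamples (200) is an arbitrary but typical choice.

```python
import math
import random

def specific_heat(energies):
    """c up to constants: the variance of the energy."""
    n = len(energies)
    mean = sum(energies) / n
    return sum(e * e for e in energies) / n - mean * mean

def bootstrap_error(energies, nboot=200, seed=0):
    """Resample the run with replacement nboot times, recompute c for each
    resample, and take sigma = sqrt(<c^2> - <c>^2) as in Equation (3.41)."""
    rng = random.Random(seed)
    n = len(energies)
    cs = []
    for _ in range(nboot):
        resample = [energies[rng.randrange(n)] for _ in range(n)]
        cs.append(specific_heat(resample))
    m1 = sum(cs) / nboot
    m2 = sum(c * c for c in cs) / nboot
    return math.sqrt(m2 - m1 * m1)

rng = random.Random(3)
energies = [rng.gauss(-1.5, 0.3) for _ in range(1000)]
sigma = bootstrap_error(energies)
```

Because each resample is a full-size, independent draw from the data, the raw spread of the resampled estimates is itself the error bar, which is why no extra 1/(n − 1) factor appears.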
independent samples was twice the correlation time.
Note that the value of σ in Equation (3.40) is independent of the value of Δt, which means we are free to choose Δt in whatever way is most convenient.

3.4.2 The blocking method

There are some cases where it
should be the number of independent measurements. In
practice the measurements made are usually not all
independent, but luckily it transpires that the bootstrap
method is not much affected by this difference, a point
which is discussed further below.) We
one for each spin in the system, on average, we say we
have completed one sweep of the lattice. We could
therefore also say that the time axis of Figure 3.3 was
calibrated in sweeps. Judging the equilibration of a
system by eye from a plot such as Figure 3.
notice that only one spin k flips at a time in the Metropolis algorithm, so the change of magnetization from state μ to state ν is

    ΔM = M_ν − M_μ = Σ_i s_i^ν − Σ_i s_i^μ = s_k^ν − s_k^μ = 2 s_k^ν,    (3.13)

where the last equality follows from Equation (3.9). Thus, the clever way t
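The point of Equation (3.13) is that the magnetization never needs to be re-summed over the whole lattice; a running total updated by 2 s_k^ν at each accepted flip suffices. The sketch below is a toy illustration on a 1D Ising ring with J = 1 (a hypothetical stand-in for the book's 2D program), in which the final assertion-worthy invariant is that the running total always equals the full sum.

```python
import math
import random

def metropolis_sweep(spins, beta, M, rng):
    """One sweep of single-spin-flip Metropolis on a 1D Ising ring, keeping
    a running magnetization M: when spin k flips, M changes by twice the
    new spin value, so no re-sum over the lattice is needed."""
    N = len(spins)
    for _ in range(N):
        k = rng.randrange(N)
        s = spins[k]
        # energy cost of flipping spin k (J = 1, nearest neighbours on a ring)
        dE = 2 * s * (spins[k - 1] + spins[(k + 1) % N])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[k] = -s
            M += 2 * spins[k]        # Delta M = 2 s_k (new value), Eq. (3.13)
    return M

rng = random.Random(5)
spins = [1] * 100
M = sum(spins)                       # sum once, at the start only
for _ in range(50):
    M = metropolis_sweep(spins, beta=0.5, M=M, rng=rng)
```

The same trick works for the energy: the local ΔE computed for the acceptance test is exactly the amount by which a running energy total should be updated.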
thus:

    τ_i = −1 / log λ_i    (3.33)

for all i ≠ 0, then Equation (3.32) can be written

    Q(t) = Q(∞) + Σ_{i≠0} a_i q_i e^(−t/τ_i).    (3.34)

[Footnote 8: P is in general not symmetric, so its right and left eigenvectors are not the same.] The quantities τ_i are the correlation
note, if we simply apply the FFT algorithm directly to our
magnetization data, the result produced is the Fourier
transform of an infinite periodic repetition of the data
set, which is not quite what we want in Equation (3.22). A
simple way of getting aro
If you need to calculate an autocorrelation a thousand
times, and each time takes a few seconds on the
computer, then the seconds start to add up. In this case,
at the expense of rather greater programming effort, we
can often speed up the process by the
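One way to realize this speed-up is the standard FFT route sketched below: subtract the mean, zero-pad the data to at least twice its length so the implicit periodic repetition cannot wrap around, transform, take squared magnitudes, and transform back. The radix-2 FFT here is a minimal textbook implementation kept only to make the sketch self-contained; in practice one would call a library routine. The final lines check the result against the direct sum on a short deterministic series.

```python
import cmath

def fft(a):
    """Minimal recursive radix-2 FFT (length must be a power of two)."""
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    y = fft([x.conjugate() for x in a])
    return [x.conjugate() / len(a) for x in y]

def autocorr_fft(m):
    """Autocorrelation via the FFT.  Zero-padding to at least twice the data
    length turns the circular correlation into the linear one we want,
    avoiding the infinite-periodic-repetition problem mentioned above."""
    n = len(m)
    mean = sum(m) / n
    dev = [x - mean for x in m]
    size = 1
    while size < 2 * n:
        size *= 2
    f = fft([complex(d) for d in dev] + [0j] * (size - n))
    acf = ifft([abs(x) ** 2 for x in f])
    return [acf[t].real / (n - t) for t in range(n)]

# check against the direct O(n^2) sum on a short deterministic series
series = [((i * 7919) % 13) - 6 for i in range(200)]
chi_fft = autocorr_fft(series)
mean = sum(series) / len(series)
dev = [x - mean for x in series]
chi_direct = [sum(dev[i] * dev[i + t] for i in range(200 - t)) / (200 - t)
              for t in range(200)]
maxdiff = max(abs(a - b) for a, b in zip(chi_fft, chi_direct))
```

The cost drops from O(n²) for the direct sum to O(n log n), which is what makes computing the autocorrelation thousands of times affordable.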