Chapter 2
Simulation
Frank Porter
January 13, 2011
The technology of “Monte Carlo simulation” is an important tool for understanding and working with complicated probability distributions. This technique is widely used in both experiment design and analysis of results.
We introduce the Monte Carlo method as a means of evaluating integrals numerically. Suppose we wish to evaluate the $k$-dimensional integral:
$$I = \int_0^1 \cdots \int_0^1 f(x)\, d^k x, \qquad (2.1)$$
where $x$ is a real vector in $k$ dimensions. A numerical estimate of this integral may be formed according to:
$$I_N = \frac{1}{N^k} \sum_{\nu_1=1}^{N} \cdots \sum_{\nu_k=1}^{N} f(x_\nu), \qquad (2.2)$$
where $\nu$ labels a vector $x_\nu = (x_{\nu_1}, \ldots, x_{\nu_k})$ with components given by:
$$x_{\nu_i} = \nu_i / N. \qquad (2.3)$$
That is, we divide our $k$-dimensional unit hypercube integration region into $N^k$ equal pieces, evaluate $f(x)$ at a point in each of these pieces, and take the average over all the pieces. As long as things are reasonably well-behaved, this will converge to the true value of the integral as we take more pieces:
$$I = \lim_{N \to \infty} I_N. \qquad (2.4)$$
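As a concrete illustration of the equal-spacing estimate (2.2)–(2.3), here is a minimal Python sketch; the helper name `grid_estimate` and the test integrand $f(x,y) = xy$ (whose integral over the unit square is $1/4$) are my own illustrative choices, not from the text:

```python
import itertools

def grid_estimate(f, k, N):
    """Equal-spacing estimate I_N of Eq. (2.2): average f over the
    N^k grid points with components x_{nu_i} = nu_i / N."""
    total = 0.0
    for nu in itertools.product(range(1, N + 1), repeat=k):
        total += f([nu_i / N for nu_i in nu])
    return total / N**k

# Hypothetical test integrand: f(x, y) = x*y integrates to 1/4
# over the unit square.
print(grid_estimate(lambda x: x[0] * x[1], k=2, N=100))  # ~0.255, near 0.25
```

Note that the work grows as $N^k$: doubling the resolution in each dimension multiplies the number of function evaluations by $2^k$.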
A variation on this is to randomly select $N$ points in the hypercube, and average the values of $f(x)$ over all these points. In this method, we draw $N$ random vectors $r^{(1)}, r^{(2)}, \ldots, r^{(N)}$ from a distribution:
$$p(r) = \begin{cases} 1 & r \in \{0 \le r_i \le 1,\ i = 1, 2, \ldots, k\}, \\ 0 & \text{otherwise.} \end{cases} \qquad (2.5)$$
We estimate our integral with
$$I'_N = \frac{1}{N} \sum_{\nu=1}^{N} f\left(r^{(\nu)}\right). \qquad (2.6)$$
Again, if the integral is well-behaved, $I'_N$ will converge on the true value of $I$ in the limit $N \to \infty$.

There is no reason to expect that the evaluation with randomly-selected points will provide a better estimate in general for the same number of function evaluations. However, it does provide a benefit: each of the $N$ samplings is independent of the others, so we don't need to plan very hard what to choose for $N$, since taking additional samples is straightforward.
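A matching sketch of the random-sampling estimate (2.6), drawing points from the uniform density of Eq. (2.5); the helper name, seed, and test integrand $f(x,y) = xy$ (true integral $1/4$) are illustrative choices of mine:

```python
import random

def mc_estimate(f, k, N, seed=0):
    """Monte Carlo estimate I'_N of Eq. (2.6): average f over N vectors
    drawn uniformly from the unit k-cube, i.e., from p(r) of Eq. (2.5)."""
    rng = random.Random(seed)
    return sum(f([rng.random() for _ in range(k)]) for _ in range(N)) / N

# Example: f(x, y) = x*y, true integral 1/4.
print(mc_estimate(lambda x: x[0] * x[1], k=2, N=100_000))  # near 0.25
```

Because the draws are independent, taking more samples just means extending the loop; there is no grid to re-plan, in contrast to the equal-spacing estimate.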
Let us re-express the integral in the form of an expectation value:
$$I'' = I / V_R = \int_R f(x)\, p(x)\, d^k x, \qquad (2.7)$$
$$= \langle f \rangle, \qquad (2.8)$$
where $R$ is the desired region of integration, $V_R = \int_R d^k x$, and $p(x)$ is a uniform sampling distribution over $x \in R$. Our approximation is thus obtained by sampling $N$ times from $p$, obtaining $x^{(1)}, \ldots, x^{(N)}$, and forming the sample average
$$I'_N = \frac{1}{N} \sum_{i=1}^{N} f\left(x^{(i)}\right). \qquad (2.9)$$
Our estimate is unbiased:
$$I'' = \langle I'_N \rangle, \qquad (2.10)$$
and consistent:
$$I'' = \lim_{N \to \infty} I'_N. \qquad (2.11)$$
The variance of our estimator is
$$\mathrm{Var}(I'_N) = \frac{1}{N} \mathrm{Var}(f). \qquad (2.12)$$
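The $1/N$ scaling in (2.12) can be checked empirically by repeating the estimate many times and comparing the spread of the results with $\mathrm{Var}(f)/N$. The sketch below uses my own helper names and the test integrand $f(x,y) = xy$, for which uniform sampling gives $\mathrm{Var}(f) = \frac{1}{9} - \frac{1}{16} = \frac{7}{144}$:

```python
import random
import statistics

def mc_estimate(f, k, N, rng):
    # Eq. (2.9): sample mean of f over N uniform draws from the unit k-cube.
    return sum(f([rng.random() for _ in range(k)]) for _ in range(N)) / N

rng = random.Random(1)
f = lambda x: x[0] * x[1]  # Var(f) = 7/144 under uniform sampling
# 200 independent repetitions of the N = 1000 estimate.
estimates = [mc_estimate(f, k=2, N=1000, rng=rng) for _ in range(200)]
print(statistics.variance(estimates))  # expected near (7/144)/1000 ~ 4.9e-5
```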
It is interesting to notice that the equal-space estimate, $I_N$, is typically biased, although it is consistent.
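The bias of the equal-space estimate is easy to see in one dimension with $f(x) = x$: the grid points $\nu/N$, $\nu = 1, \ldots, N$, average to $(N+1)/(2N)$, which exceeds the true integral $1/2$ by $1/(2N)$ for every finite $N$, yet converges as $N \to \infty$. A quick numerical check (the helper name is my own):

```python
def grid_estimate_1d(f, N):
    # One-dimensional form of Eq. (2.2): average f at the points nu/N.
    return sum(f(nu / N) for nu in range(1, N + 1)) / N

for N in (10, 100, 1000):
    est = grid_estimate_1d(lambda x: x, N)
    # Bias is 1/(2N): 0.05, 0.005, 0.0005 (up to float rounding).
    print(N, est, est - 0.5)
```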
Often, we are interested in more than the evaluation of a simple integral. There may be several integrals of interest, and we may not even know at the outset which integrals will be of greatest interest. This leads to the “simulation” aspect of the Monte Carlo method. To be more concrete, let us think in the context of an “experiment”. We regard an experiment as a sampling of variables distributed according to some differential equations (or possibly discrete equations). These differential equations describe our sampling probability density function.