4 Monte Carlo approximation
In the last chapter we saw examples in which a conjugate prior distribution for an unknown parameter θ led to a posterior distribution for which there were simple formulae for posterior means and variances. However, often we will want to summarize other aspects of a posterior distribution. For example, we may want to calculate Pr(θ ∈ A | y1, . . . , yn) for arbitrary sets A. Alternatively, we may be interested in means and standard deviations of some function of θ, or the predictive distribution of missing or unobserved data. When comparing two or more populations we may be interested in the posterior distribution of θ1 − θ2, θ1/θ2, or max{θ1, . . . , θm}, all of which are functions of more than one parameter. Obtaining exact values for these posterior quantities can be difficult or impossible, but if we can generate random sample values of the parameters from their posterior distributions, then all of these posterior quantities of interest can be approximated to an arbitrary degree of precision using the Monte Carlo method.
4.1 The Monte Carlo method
In the last chapter we obtained the following posterior distributions for birthrates of women without and with bachelor's degrees, respectively:

p(θ1 | Σ_{i=1}^{111} Y_{i,1} = 217) = dgamma(θ1, 219, 112)
p(θ2 | Σ_{i=1}^{44} Y_{i,2} = 66) = dgamma(θ2, 68, 45)
Additionally, we modeled θ1 and θ2 as conditionally independent given the data. It was claimed that Pr(θ1 > θ2 | Σ Y_{i,1} = 217, Σ Y_{i,2} = 66) = 0.97. How was this probability calculated? From Chapter 2, we have
P.D. Hoff, A First Course in Bayesian Statistical Methods, Springer Texts in Statistics, DOI 10.1007/978-0-387-92407-6_4, © Springer Science+Business Media, LLC 2009
Pr(θ1 > θ2 | y_{1,1}, . . . , y_{n2,2})
  = ∫_0^∞ ∫_0^{θ1} p(θ1, θ2 | y_{1,1}, . . . , y_{n2,2}) dθ2 dθ1
  = ∫_0^∞ ∫_0^{θ1} dgamma(θ1, 219, 112) × dgamma(θ2, 68, 45) dθ2 dθ1
  = [112^219 · 45^68 / (Γ(219) Γ(68))] ∫_0^∞ ∫_0^{θ1} θ1^218 θ2^67 e^{−112θ1 − 45θ2} dθ2 dθ1.
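As a check on the claimed value, the inner integral over θ2 is just the gamma cumulative distribution function evaluated at θ1, so the double integral reduces to a one-dimensional integral. A minimal sketch in Python with scipy (the book's own computations are in R; note that scipy parameterizes the gamma distribution by a scale, the reciprocal of the rate used above):

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

# Pr(theta1 > theta2) = integral over theta1 of
#   p(theta1) * Pr(theta2 < theta1),
# with theta1 ~ gamma(219, rate=112) and theta2 ~ gamma(68, rate=45).
# scipy uses scale = 1/rate.
def integrand(t1):
    return gamma.pdf(t1, a=219, scale=1 / 112) * gamma.cdf(t1, a=68, scale=1 / 45)

p, _err = quad(integrand, 0, np.inf)
print(p)  # close to the claimed 0.97
```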
There are a variety of ways to calculate this integral. It can be done with pencil and paper using results from calculus, and it can be calculated numerically in many mathematical software packages. However, the feasibility of these integration methods depends heavily on the particular details of the model, the prior distribution and the probability statement that we are trying to calculate. As an alternative, in this text we will use an integration method for which the general principles and procedures remain relatively constant across a broad class of problems. The method, known as Monte Carlo approximation, is based on random sampling and its implementation does not require a deep knowledge of calculus or numerical analysis.
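As a concrete illustration of the idea, the probability above can be approximated by drawing independent samples from the two gamma posteriors and counting how often θ1 exceeds θ2. A minimal sketch in Python with numpy, standing in for the book's R (numpy's gamma generator also takes a scale parameter, the reciprocal of the rate):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
S = 100_000  # number of Monte Carlo samples

# Independent draws from the two posterior distributions:
# theta1 ~ gamma(219, rate=112), theta2 ~ gamma(68, rate=45).
theta1 = rng.gamma(shape=219, scale=1 / 112, size=S)
theta2 = rng.gamma(shape=68, scale=1 / 45, size=S)

# Fraction of samples with theta1 > theta2 approximates the probability.
p_hat = np.mean(theta1 > theta2)
print(p_hat)  # close to the claimed 0.97
```

Increasing S shrinks the Monte Carlo error at rate 1/√S, which is the sense in which the approximation can be made arbitrarily precise.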
Let θ be a parameter of interest and let y1, . . . , yn be the numerical values of a sample from a distribution p(y1, . . . , yn | θ). Suppose we could sample some number S of independent, random θ-values from the posterior distribution p(θ | y1, . . . , yn):

θ^(1), . . . , θ^(S) ~ i.i.d. p(θ | y1, . . . , yn).
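Given such a sample, posterior summaries are simply empirical summaries of the draws. A hedged sketch in Python, using the dgamma(θ1, 219, 112) posterior from above as the target distribution (the sample mean, variance, and quantiles of the draws approximate the corresponding posterior quantities):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
S = 50_000

# S i.i.d. draws theta^(1), ..., theta^(S) from p(theta | y1, ..., yn),
# here the gamma(219, rate=112) posterior for theta1.
theta = rng.gamma(shape=219, scale=1 / 112, size=S)

post_mean = theta.mean()                    # approximates E[theta | y] = 219/112
post_var = theta.var()                      # approximates Var[theta | y] = 219/112**2
ci_95 = np.quantile(theta, [0.025, 0.975])  # approximate 95% posterior interval
print(post_mean, post_var, ci_95)
```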