ISyE8843A, Brani Vidakovic
Handout 9
1 Bayesian Computation
If the selection of an adequate prior was the major conceptual and modeling challenge of Bayesian analysis, the major implementational challenge is computation. As soon as the model deviates from the conjugate structure, finding the posterior (first the marginal) distribution and the Bayes rule is far from simple. A closed-form solution is the exception rather than the rule, and even such closed-form solutions rely on lucky mathematical coincidences, convenient mixtures, and other "tricks." By this point you should have a sense of this computational challenge.
If classical statistics relies on optimization, Bayesian statistics relies on integration. The marginal needed for the posterior is an integral,
$$m(x) = \int_\Theta f(x \mid \theta)\,\pi(\theta)\, d\theta,$$
and the Bayes estimator of $h(\theta)$, with respect to the squared error loss, is a ratio of integrals,
$$\delta^\pi(x) = \int_\Theta h(\theta)\,\pi(\theta \mid x)\, d\theta = \frac{\int_\Theta h(\theta)\, f(x \mid \theta)\,\pi(\theta)\, d\theta}{\int_\Theta f(x \mid \theta)\,\pi(\theta)\, d\theta}.$$
The difficulties in calculating the above Bayes rule are that (i) the posterior cannot be represented in a finite form, and (ii) the integral of $h(\theta)$ does not have a closed form even under a possibly closed-form posterior distribution. Adopting a different loss function usually makes the calculation even more difficult. An exception is the zero-one loss, for which the Bayes rule is the mode of the posterior; the mode is not influenced by the (trouble-making) normalizing constant $m(x)$.
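One way around both difficulties is plain Monte Carlo: if $\theta_1, \ldots, \theta_N$ are drawn from the prior, the ratio of integrals above is estimated by $\sum_i h(\theta_i) f(x \mid \theta_i) / \sum_i f(x \mid \theta_i)$. A minimal sketch for a Bernoulli/Beta model, where the model, the numbers, and $h(\theta) = \theta$ are illustrative assumptions chosen so that the exact conjugate answer is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the handout): x successes in n Bernoulli
# trials, Beta(a, b) prior on theta, and h(theta) = theta.
n, x = 20, 6
a, b = 2.0, 2.0

def likelihood(theta):
    # f(x | theta), dropping the binomial coefficient (it cancels in the ratio)
    return theta**x * (1 - theta)**(n - x)

# Draw from the prior and estimate the ratio of integrals:
#   delta(x) = E_pi[h(theta) f(x|theta)] / E_pi[f(x|theta)]
theta = rng.beta(a, b, size=200_000)
w = likelihood(theta)
delta_mc = np.sum(theta * w) / np.sum(w)

# Conjugacy gives the exact posterior mean for comparison:
delta_exact = (a + x) / (a + b + n)
print(delta_mc, delta_exact)  # both close to 1/3
```

Note that only the unnormalized posterior $f(x \mid \theta)\pi(\theta)$ is evaluated; the normalizing constant $m(x)$ cancels in the ratio.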
The last two decades of research in Bayesian statistics have tremendously broadened the scope of Bayesian models. Models that could not be handled before are now routinely solved by Markov chain Monte Carlo (MCMC) methods, whose introduction to the field revolutionized Bayesian statistics.
This handout overviews pre-MCMC techniques: Monte Carlo integration, importance sampling, and analytic approximations (Riemann, Laplace, and saddlepoint).
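Of these, importance sampling is easy to preview: draw $\theta_i$ from a proposal density $g$ and average $f(x \mid \theta_i)\pi(\theta_i)/g(\theta_i)$ to estimate $m(x)$. A minimal sketch under assumed illustrative choices (a Bernoulli/Beta model and a Beta proposal matched to the likelihood; none of the specifics are from the handout):

```python
import numpy as np
from math import lgamma, exp

rng = np.random.default_rng(1)

# Illustrative setup: x successes in n Bernoulli trials, Beta(a, b) prior.
n, x, a, b = 20, 6, 2.0, 2.0

def beta_fn(p, q):
    # Beta function computed via log-gammas for numerical stability
    return exp(lgamma(p) + lgamma(q) - lgamma(p + q))

def integrand(theta):
    # f(x | theta) * pi(theta), with the binomial coefficient omitted
    lik = theta**x * (1 - theta)**(n - x)
    prior = theta**(a - 1) * (1 - theta)**(b - 1) / beta_fn(a, b)
    return lik * prior

# Importance sampling: propose from a Beta(x+1, n-x+1), which roughly
# matches the likelihood, and average integrand / proposal density.
p, q = x + 1.0, n - x + 1.0
theta = rng.beta(p, q, size=100_000)
g_pdf = theta**(p - 1) * (1 - theta)**(q - 1) / beta_fn(p, q)
m_is = np.mean(integrand(theta) / g_pdf)

# Conjugacy gives the exact marginal for comparison:
m_exact = beta_fn(a + x, b + n - x) / beta_fn(a, b)
print(m_is, m_exact)
```

The closer $g$ is to the shape of $f(x \mid \theta)\pi(\theta)$, the smaller the variance of the weights, which is the whole art of importance sampling.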
1.1 Bayesian CLT
Suppose that $X_1, X_2, \ldots, X_n \sim f(x \mid \theta)$, where $\theta$ is a $p$-dimensional parameter, and that the prior on $\theta$ is $\pi(\theta)$. The prior $\pi(\theta)$ could be improper, but we assume that the posterior is proper and that its mode exists. Then, as $n \to \infty$,
$$[\theta \mid x] \to \mathrm{MVN}_p\!\left(\theta_M,\; H^{-1}(\theta_M)\right),$$
where $\theta_M$ is the posterior mode, i.e., a solution of
$$\frac{\partial \pi^*(\theta \mid x)}{\partial \theta_i} = 0, \quad i = 1, \ldots, p,$$
where $\pi^*(\theta \mid x) = f(x \mid \theta)\,\pi(\theta)$ is the nonnormalized posterior. Let $H$ be the Hessian of the negative log of $\pi^*$,
$$H(\theta) = \left( -\frac{\partial^2 \log \pi^*(\theta \mid x)}{\partial \theta_i \, \partial \theta_j} \right).$$
The asymptotic covariance matrix is
$$H^{-1}(\theta_M) = \left. \left( H(\theta) \right)^{-1} \right|_{\theta = \theta_M}.$$
The proof can be found in standard texts on asymptotic theory.
Example: Bernoulli.
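A sketch of how such a Bernoulli check might look, assuming a flat Beta(1,1) prior so that $\pi^*(\theta \mid x) = \theta^x (1-\theta)^{n-x}$ (the prior choice and the numbers are illustrative assumptions, not taken from the handout):

```python
# Illustrative numerical check of the Bayesian CLT for Bernoulli data
# with an assumed flat Beta(1,1) prior: pi*(theta|x) = theta^x (1-theta)^(n-x).
n, x = 50, 18

# Posterior mode: solve d/dtheta log pi* = x/theta - (n-x)/(1-theta) = 0
theta_m = x / n

# H(theta) = -d^2/dtheta^2 log pi*(theta|x) = x/theta^2 + (n-x)/(1-theta)^2,
# which at the mode equals n / (theta_m (1 - theta_m))
H = x / theta_m**2 + (n - x) / (1 - theta_m)**2
approx_var = 1 / H

# The exact posterior is Beta(x+1, n-x+1); compare its variance:
p, q = x + 1, n - x + 1
exact_var = p * q / ((p + q)**2 * (p + q + 1))
print(theta_m, approx_var, exact_var)
```

Even at $n = 50$ the normal approximation's variance agrees with the exact Beta posterior variance to about one part in twenty, and the agreement improves as $n$ grows.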
Spring '11, Vidakovic