Unformatted text preview: (θ' , θ)p(θ' y ) (cancel terms)
=J (θ' , θ)α(θ' , θ)p(θ' y ) (1 in disguise)
=K (θ' , θ)p(θ' y ).
And that’s the detailed balance equation.
We have now proven that the Metropolis-Hastings algorithm's simulations will
eventually draw from the posterior distribution. However, there are a number
of important questions to be addressed. What proposal distribution should
we use? How many iterations will it take for the chain to be sufficiently close
to the stationary distribution? How will we know when the chain has reached
its stationary distribution? We will discuss these important issues after we
introduce the Gibbs sampler.
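Before moving on, the algorithm discussed above can be sketched in a few lines of code. This is a minimal random-walk Metropolis-Hastings sampler; the Gaussian proposal, the standard-normal target, and the step size are illustrative assumptions, not choices made in these notes.

```python
import math
import random

def metropolis_hastings(log_target, theta0, n_iter=10000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.

    Because the proposal J is symmetric, the acceptance ratio reduces to
    min(1, p(theta') / p(theta)), and the unknown normalizing constant of
    the posterior cancels out of that ratio.
    """
    rng = random.Random(seed)
    theta = theta0
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)   # draw theta' ~ J(theta' | theta)
        log_alpha = log_target(proposal) - log_target(theta)
        if math.log(rng.random()) < log_alpha:    # accept with prob min(1, ratio)
            theta = proposal
        samples.append(theta)                     # a rejected move repeats theta
    return samples

# Illustrative unnormalized target: log p(theta | y) = -theta^2/2 + const,
# i.e., a standard normal posterior.
samples = metropolis_hastings(lambda t: -0.5 * t * t, theta0=0.0)
```

Note that only the log of the unnormalized posterior is ever evaluated, which is exactly why the method works when the normalization integral is intractable.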
6.3 Gibbs Sampler

The Gibbs sampler is a very powerful MCMC sampling technique for the special
situation when we have access to conditional distributions. It is a special
case of the Metropolis-Hastings algorithm that is typically much faster, but
can only be used in special cases.

Let us express θ ∈ R^d as θ = [θ1 , . . . , θd ]. Suppose that although we are not
able to draw directly from p(θ | y) because of the normalization integral, w...
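As a sketch of this setup, here is a Gibbs sampler for an assumed target where the full conditionals are known in closed form: a zero-mean bivariate normal with correlation rho. The target, function name, and parameter values are illustrative, not from the notes.

```python
import random

def gibbs_bivariate_normal(rho=0.8, n_iter=20000, seed=0):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal.

    Each full conditional is itself a normal distribution:
        theta1 | theta2 ~ N(rho * theta2, 1 - rho^2)
        theta2 | theta1 ~ N(rho * theta1, 1 - rho^2)
    so every draw is accepted: this is the Gibbs special case of
    Metropolis-Hastings in which the acceptance probability is 1.
    """
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    theta1, theta2 = 0.0, 0.0
    draws = []
    for _ in range(n_iter):
        theta1 = rng.gauss(rho * theta2, sd)  # draw from p(theta1 | theta2)
        theta2 = rng.gauss(rho * theta1, sd)  # draw from p(theta2 | theta1)
        draws.append((theta1, theta2))
    return draws

draws = gibbs_bivariate_normal()
```

Each iteration cycles through the coordinates of θ, resampling one component from its conditional given the current values of all the others.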
This note was uploaded on 03/24/2014 for the course MIT 15.097, taught by Professor Cynthia Rudin during the Spring '12 term at MIT.