MIT15_097S12_lec15

6.2 Metropolis-Hastings algorithm

The goal in MCMC is to construct a Markov chain whose stationary distribution is the posterior p(θ|y). We now present the Metropolis-Hastings algorithm. In addition to the distributions we have already used (the likelihood and the prior), we will need a proposal distribution (or jumping distribution) J(θ, θ'), which proposes a new state θ' given the current state θ. There are many options when choosing a proposal distribution, which we will discuss later. The proposal distribution yields a random walk over the parameter space, proposing steps θ → θ'. We accept or reject each step depending on the relative posterior probabilities of θ and θ'. When we run the random walk for long enough, the accepted values simulate draws from the posterior.

6.2.1 Some intuition into the Metropolis-Hastings algorithm

Suppose we are considering the transition θ → θ'. If p(θ'|y) is larger than p(θ|y), then for every accepted draw of θ we should have at least as many accepted draws of θ', and so we should always accept the transition θ → θ'.
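For concreteness, here is a minimal Python sketch of a generic Metropolis-Hastings sampler along the lines described above. It is not part of the original notes: the names log_post, propose, log_j, and theta0 are placeholders, log_post is assumed to return log p(θ|y) up to an additive constant, and the acceptance step uses the standard Metropolis-Hastings ratio r = p(θ'|y) J(θ', θ) / (p(θ|y) J(θ, θ')), accepting the proposal with probability min(1, r).

    import numpy as np

    def metropolis_hastings(log_post, propose, log_j, theta0, n_iter=10000, rng=None):
        # log_post(theta): log p(theta | y), up to an additive constant
        # propose(theta, rng): draw a proposed state theta' from J(theta, .)
        # log_j(a, b): log J(a, b), the log proposal density of the move a -> b
        rng = np.random.default_rng() if rng is None else rng
        theta = theta0
        samples = []
        for _ in range(n_iter):
            theta_prop = propose(theta, rng)
            # log of the Metropolis-Hastings ratio r
            log_r = (log_post(theta_prop) + log_j(theta_prop, theta)
                     - log_post(theta) - log_j(theta, theta_prop))
            # accept the step theta -> theta' with probability min(1, r)
            if np.log(rng.uniform()) < log_r:
                theta = theta_prop
            samples.append(theta)
        return np.array(samples)

    # Toy usage (hypothetical target): a 1-D standard normal "posterior"
    # with a symmetric Gaussian random-walk proposal, so the J terms cancel.
    log_post = lambda th: -0.5 * th ** 2
    propose = lambda th, rng: th + rng.normal(scale=1.0)
    log_j = lambda a, b: 0.0
    draws = metropolis_hastings(log_post, propose, log_j, theta0=0.0)

Because the Gaussian random-walk proposal is symmetric, the ratio reduces to p(θ'|y)/p(θ|y), which matches the intuition above: proposed moves to states of higher posterior probability are always accepted.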