STAT 531: Metropolis-Hastings (MH) Algorithm
HM Kim
Department of Mathematics and Statistics, University of Calgary
Fall 2010
One problem with applying Monte Carlo integration is obtaining samples from a complex probability distribution p(x). Attempts to solve this problem are the roots of MCMC methods.
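As a reminder of what plain Monte Carlo integration does when iid samples *are* available, here is a minimal sketch. The target N(0,1) and the integrand h(x) = x^2 are illustrative choices, not taken from the slides:

```python
import random

def monte_carlo_mean(h, sampler, n=100_000):
    """Approximate E[h(X)] by averaging h over n iid draws from `sampler`."""
    return sum(h(sampler()) for _ in range(n)) / n

random.seed(0)
# For X ~ N(0,1), E[X^2] = Var(X) = 1; the iid average should land nearby.
est = monte_carlo_mean(lambda x: x * x, lambda: random.gauss(0.0, 1.0))
```

The SLLN justifies this average precisely because the draws are independent; the rest of the notes address what happens when they are not.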
Recall: Monte Carlo integration on a Markov chain

Subject to regularity conditions, the chain will gradually "forget" its initial state and eventually converge to a unique stationary distribution. We denote this stationary distribution by f; as t increases, the sampled points X^(t) look increasingly like dependent samples from f. The standard average

    (1/T) * sum_{t=1}^{T} h(X^(t))  ≈  E_f[h(X)],

which lies at the basis of the Monte Carlo method, can also be applied in the Markov chain Monte Carlo (MCMC) setting. One problem: our draws are not independent, which we required for Monte Carlo integration to work (recall the SLLN). Luckily, we have the Ergodic Theorem.
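The ergodic average above can be illustrated on a chain whose stationary distribution is known in closed form. A minimal sketch, using an AR(1) chain as an assumed example (not from the slides): with X_t = phi * X_{t-1} + eps_t and eps_t ~ N(0,1), the stationary distribution is N(0, 1/(1 - phi^2)), so the time average of h(x) = x^2 should converge to 1/(1 - phi^2) even though consecutive draws are dependent:

```python
import random

random.seed(1)
phi = 0.5                     # AR(1) coefficient; stationary dist is N(0, 1/(1 - phi^2))
T, burn_in = 200_000, 1_000   # discard early draws so the chain "forgets" its start
x, total = 0.0, 0.0
for t in range(T + burn_in):
    x = phi * x + random.gauss(0.0, 1.0)   # one Markov transition
    if t >= burn_in:
        total += x * x                      # accumulate h(X^(t)) with h(x) = x^2
ergodic_avg = total / T
# Stationary variance is 1/(1 - 0.25) = 4/3; the draws are dependent, yet the
# time average converges by the ergodic theorem.
```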
Suppose our goal is to draw samples from some distribution p(θ) = f(θ)/K, where the normalizing constant K may be unknown and very difficult to compute. The Metropolis algorithm generates a sequence of draws from this distribution as follows:

1. Start with any initial value θ_0 satisfying f(θ_0) > 0.
2. Using the current value of θ, sample a candidate point θ* from a jumping distribution q(θ_2 | θ_1) (the proposal distribution), which gives the probability of returning a value θ_2 given a previous value θ_1. (The restriction on the jumping density is that it be symmetric, i.e. q(θ_2 | θ_1) = q(θ_1 | θ_2).)
3. Given the candidate point θ*, calculate the ratio of the density at the candidate (θ*) and current (θ^(t-1)) points:

       α = p(θ*) / p(θ^(t-1)) = f(θ*) / f(θ^(t-1)).

   (Note that the normalizing constant K cancels.)
4. If the jump increases the density (α ≥ 1), accept the candidate point (set θ^(t) = θ*) and return to step 2. If the jump decreases the density (α < 1), accept the candidate point with probability α; otherwise reject it (set θ^(t) = θ^(t-1)) and return to step 2.

We can summarize Metropolis sampling (steps 3 and 4) as first computing

    ρ = min( f(θ*) / f(θ^(t-1)), 1 )

and then accepting the candidate point with probability ρ.
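The four steps above can be sketched directly in code. This is a minimal illustration, not the course's implementation; the Gaussian random-walk proposal and the unnormalized N(0,1) target are assumed choices, but the proposal is symmetric as the algorithm requires, and K is never needed because it cancels in the ratio:

```python
import math
import random

def metropolis(f, theta0, n_samples, scale=1.0, seed=0):
    """Metropolis sampler with a symmetric Gaussian random-walk proposal.

    `f` need only be proportional to the target density p = f/K; the
    normalizing constant K cancels in the acceptance ratio.
    """
    rng = random.Random(seed)
    theta = theta0                    # step 1: initial value with f(theta0) > 0
    draws = []
    for _ in range(n_samples):
        candidate = theta + rng.gauss(0.0, scale)   # step 2: symmetric q
        rho = min(f(candidate) / f(theta), 1.0)     # steps 3-4 summarized
        if rng.random() < rho:
            theta = candidate                       # accept: theta_t = theta*
        draws.append(theta)           # on rejection, theta_t = theta_{t-1}
    return draws

# Target: N(0,1) known only up to a constant (K = sqrt(2*pi) is never used).
f = lambda t: math.exp(-0.5 * t * t)
draws = metropolis(f, theta0=0.0, n_samples=50_000)
mean_est = sum(draws) / len(draws)   # ergodic average; should be near 0
```

Note that a rejected jump still produces a draw: the chain repeats its current state, which is what makes the stationary distribution come out right.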

This note was uploaded on 02/04/2011 for the course STAT 531 taught by Professor Gaborlukacs during the Spring '11 term at Manitoba.
