= \frac{\pi(i_{m+1}) \, p(i_{m+1}, i_m)}{\pi(i_m)}

This shows that Y_m is a Markov chain with the indicated transition probability. The formula for the transition probability in (1.13), which is called the dual transition probability,

\hat p(i, j) = \frac{\pi(j) \, p(j, i)}{\pi(i)},

may look a little strange, but it is easy to see that it works; i.e., \hat p(i, j) \ge 0, and we have

\sum_j \hat p(i, j) = \sum_j \frac{\pi(j) \, p(j, i)}{\pi(i)} = \frac{\pi(i)}{\pi(i)} = 1

since \pi p = \pi. When \pi satisfies the detailed balance condition

\pi(i) \, p(i, j) = \pi(j) \, p(j, i),

the transition probability for the reversed chain,

\hat p(i, j) = \frac{\pi(j) \, p(j, i)}{\pi(i)} = p(i, j),

is the same as that of the original chain. In words, if we make a movie of the Markov chain X_m, 0 \le m \le n, starting from an initial distribution that satisfies the detailed balance condition and watch it backwards (i.e., consider Y_m = X_{n-m} for 0 \le m \le n), then we see a random process with the same distribution.

1.6.4 The Metropolis-Hastings algorithm

Our next topic is a method for generating samples from a distribution \pi(x). It is named for two of the authors of the fundamental papers on the topic. One written by Nicholas Metropolis and two married...
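To make the connection between detailed balance and sampling concrete, here is a minimal sketch (not from the text) of a random-walk Metropolis sampler for a small discrete target \pi on the states {0, ..., n-1}. The nearest-neighbor proposal and the target weights (1, 2, 3, 4) are illustrative choices, not anything specified in the book; the point is that the acceptance rule min(1, \pi(j)/\pi(i)) with a symmetric proposal makes the resulting chain satisfy \pi(i)p(i,j) = \pi(j)p(j,i), so the chain has \pi as its stationary distribution.

```python
import random

def metropolis_sample(pi, n_steps, seed=0):
    """Random-walk Metropolis sketch targeting a discrete distribution.

    pi: positive (possibly unnormalized) weights on states 0..len(pi)-1.
    Proposes a symmetric nearest-neighbor move; proposals off the state
    space are always rejected (equivalent to weight 0 there), which keeps
    the proposal symmetric and preserves detailed balance.
    Returns the empirical visit frequencies of each state.
    """
    rng = random.Random(seed)
    n = len(pi)
    state = 0
    counts = [0] * n
    for _ in range(n_steps):
        # symmetric proposal: step one state left or right
        j = state + rng.choice([-1, 1])
        # accept with probability min(1, pi[j] / pi[state])
        if 0 <= j < n and rng.random() < min(1.0, pi[j] / pi[state]):
            state = j
        counts[state] += 1
    return [c / n_steps for c in counts]

# Target proportional to (1, 2, 3, 4); for a long run the empirical
# frequencies should approach (0.1, 0.2, 0.3, 0.4).
freqs = metropolis_sample([1, 2, 3, 4], n_steps=200_000)
print(freqs)
```

Note that only the ratio \pi(j)/\pi(i) enters the acceptance step, so the sampler works even when the normalizing constant of \pi is unknown, which is the main reason the algorithm is useful in practice.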

This document was uploaded on 03/06/2014 for the course MATH 4740 at Cornell University (Engineering School).
