$$
= \frac{\pi(i_{m+1})\, p(i_{m+1}, i_m)}{\pi(i_m)}.
$$
This shows $Y_m$ is a Markov chain with the indicated transition probability.
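As a quick numerical check on the definition (a minimal sketch, not from the text; the matrix entries below are made up), one can build the dual transition probabilities from any transition matrix and its stationary distribution and confirm that each row of $\hat p$ sums to 1:

```python
import numpy as np

# An arbitrary 3-state transition matrix p (each row sums to 1).
p = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# The stationary distribution pi solves pi p = pi: take the left
# eigenvector of p for eigenvalue 1 and normalize it to sum to 1.
vals, vecs = np.linalg.eig(p.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

# Dual transition probability: p_hat(i, j) = pi(j) p(j, i) / pi(i).
p_hat = (p.T * pi[np.newaxis, :]) / pi[:, np.newaxis]

# Each row of p_hat sums to 1 precisely because pi p = pi.
print(np.allclose(p_hat.sum(axis=1), 1.0))  # True
```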
The formula for the transition probability in (1.13), which is called the dual
transition probability, may look a little strange, but it is easy to see that it
works; i.e., $\hat p(i,j) \ge 0$, and we have
$$
\sum_j \hat p(i,j) = \sum_j \frac{\pi(j)\, p(j,i)}{\pi(i)} = \frac{\pi(i)}{\pi(i)} = 1
$$
since $\pi p = \pi$. When $\pi$ satisfies the detailed balance conditions:
$$
\pi(i)\, p(i,j) = \pi(j)\, p(j,i),
$$
the transition probability for the reversed chain,
$$
\hat p(i,j) = \frac{\pi(j)\, p(j,i)}{\pi(i)} = p(i,j),
$$
is the same as the original chain. In words, if we make a movie of the Markov
chain $X_m$, $0 \le m \le n$, starting from an initial distribution that satisfies the
detailed balance condition and watch it backwards (i.e., consider $Y_m = X_{n-m}$
for $0 \le m \le n$), then we see a random process with the same distribution.
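To make the "same movie backwards" statement concrete, here is a small numerical illustration (a sketch with made-up numbers, not an example from the text): a birth-and-death chain satisfies detailed balance, and its dual chain coincides with the original.

```python
import numpy as np

# A birth-and-death chain on {0, 1, 2}: only nearest-neighbor moves.
p = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# Solve the detailed balance equations directly:
# pi(1) = pi(0) p(0,1)/p(1,0),  pi(2) = pi(1) p(1,2)/p(2,1).
pi = np.array([1.0, 0.3 / 0.2, (0.3 / 0.2) * (0.3 / 0.4)])
pi = pi / pi.sum()

# Detailed balance says pi(i) p(i,j) = pi(j) p(j,i),
# i.e. the "probability flow" matrix pi_i p_ij is symmetric.
flow = pi[:, np.newaxis] * p
print(np.allclose(flow, flow.T))  # True

# Consequently the dual chain equals the original: p_hat = p.
p_hat = (p.T * pi[np.newaxis, :]) / pi[:, np.newaxis]
print(np.allclose(p_hat, p))  # True
```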
To help explain the concept, ...

1.6.4 The Metropolis-Hastings algorithm

Our next topic is a method for generating samples from a distribution $\pi(x)$. It
is named for two of the authors of the fundamental papers on the topic: one written by Nicholas Metropolis and two married...
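Although the algorithm itself is developed after this excerpt cuts off, a standard random-walk Metropolis sampler can be sketched as follows (a generic illustration, not the book's presentation; the target weights below are made up). It needs $\pi$ only up to a normalizing constant, since only ratios $\pi(y)/\pi(x)$ appear:

```python
import random

def metropolis_sample(pi, n_steps, x0=0):
    """Random-walk Metropolis on the states {0, ..., len(pi)-1}.

    pi holds unnormalized target weights; each step proposes a uniformly
    chosen neighbor y and accepts it with probability min(1, pi(y)/pi(x)).
    Out-of-range proposals have weight 0, so they are always rejected.
    """
    x, samples = x0, []
    for _ in range(n_steps):
        y = x + random.choice([-1, 1])          # symmetric proposal
        if 0 <= y < len(pi) and random.random() < min(1.0, pi[y] / pi[x]):
            x = y                               # accept; otherwise stay put
        samples.append(x)
    return samples

random.seed(1)
samples = metropolis_sample([1.0, 2.0, 3.0], n_steps=200_000)
# The long-run fraction of time in state 2 approaches 3/(1+2+3) = 0.5.
print(samples.count(2) / len(samples))
```

The accept/reject rule is built exactly so that the resulting chain satisfies detailed balance with respect to $\pi$, tying this algorithm back to the reversibility discussion above.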
This document was uploaded on 03/06/2014 for the course MATH 4740 at Cornell University (Engineering School).