is finite, using Lemma 1.12

\[
\infty > \sum_{y \in C} E_x N(y) = \sum_{y \in C} \sum_{n=1}^{\infty} p^n(x,y)
= \sum_{n=1}^{\infty} \sum_{y \in C} p^n(x,y) = \sum_{n=1}^{\infty} 1 = \infty
\]

where in the next-to-last equality we have used that C is closed. This contradiction proves the desired result.

1.4 Stationary Distributions

In the next section we will see that if we impose an additional assumption called
aperiodicity, an irreducible finite state Markov chain converges to a stationary distribution:

\[
p^n(x,y) \to \pi(y)
\]
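This convergence is easy to see numerically by raising a transition matrix to a high power: every row approaches the same limiting vector. A minimal sketch, assuming an illustrative two-state chain not taken from the text:

```python
import numpy as np

# Illustrative irreducible, aperiodic transition matrix (assumed example).
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Raise p to a high power; each row of p^n approaches the same limit.
pn = np.linalg.matrix_power(p, 50)

# For this 2x2 chain the limit is pi = (4/7, 3/7), found by solving pi p = pi
# with the entries of pi summing to 1.
pi = np.array([4.0, 3.0]) / 7.0

print(pn)
```

Both rows of `pn` agree with `pi` to machine precision, which is the statement p^n(x, y) → π(y): the limit does not depend on the starting state x.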
To prepare for that, this section introduces stationary distributions and shows how to compute them. Our first step is to consider: What happens in a Markov chain when the initial state is random?
Breaking things down according to the value of the initial state and using the definition of conditional probability,

\[
P(X_n = j) = \sum_i P(X_0 = i, X_n = j) = \sum_i P(X_0 = i)\, P(X_n = j \mid X_0 = i)
\]

If we introduce q(i) = P(X_0 = i), then the last equation can be written as

\[
P(X_n = j) = \sum_i q(i)\, p^n(i,j) \tag{1.7}
\]

In words, we multiply the transition matrix on the left by the row vector q of initial probabilities.
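Equation (1.7) is exactly a vector–matrix product: the distribution of X_n is q times the n-th power of the transition matrix. A quick sketch, where the transition matrix and initial distribution are illustrative assumptions chosen for demonstration:

```python
import numpy as np

# Illustrative transition matrix (assumed example, not from the text).
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Illustrative initial distribution q(i) = P(X0 = i).
q = np.array([0.5, 0.5])

n = 3
# Right-hand side of (1.7): sum_i q(i) p^n(i, j), i.e. q times p^n.
dist_n = q @ np.linalg.matrix_power(p, n)

print(dist_n)  # P(X_3 = j) for j = 0, 1
```

Because each row of p sums to 1, the entries of `dist_n` again sum to 1, so the result is itself a probability distribution.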
This document was uploaded on 03/06/2014 for the course MATH 4740 at Cornell University (Engineering School), Spring '10, taught by DURRETT.
