Lecture 9: 02/21/2007
Recall:
Limit theorems for Markov chains.
First two were about limiting behavior of N_ij(n)/n, where
N_ij(n) = #(visits to j starting from i in time <= n).
Introduced P(i,j) notation:
P(i,j) = Prob(1-step i-to-j transition) = the number next to the arrow (i)-->(j);
P(i,j) = 0 if there is no arrow.
P^(m)(i,j) = Prob{m-step transition (i)-->(j)}.
Had, for all n > 0:
P^(n+1)(i,j) = sum_k P^(n)(i,k) P(k,j),   all i,j.
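When the state space is finite (as in the special case below), this recursion is just repeated matrix multiplication. A minimal numpy sketch, using a hypothetical 2-state chain whose transition probabilities are chosen purely for illustration:

```python
import numpy as np

# Hypothetical 2-state chain (transition probabilities for illustration only).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def n_step(P, n):
    """Compute P^(n), the n-step transition probabilities, via the
    recursion P^(n+1)(i,j) = sum_k P^(n)(i,k) P(k,j)."""
    Pn = np.eye(P.shape[0])  # P^(0) is the identity matrix
    for _ in range(n):
        Pn = Pn @ P
    return Pn

P3 = n_step(P, 3)
# Each row of P^(n) is still a probability distribution over states:
assert np.allclose(P3.sum(axis=1), 1.0)
```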
Special Case: state space is finite, say M states.
Form a matrix P as follows:
P
<word crash, lost some notes…>
Terminology:
An (MxM) matrix P having the circled properties (entries >= 0, each row summing to 1) is a stochastic matrix.
Note: if 1 = M-vector of all 1's, then P1 = 1.
So, 1 is an eigenvalue of P and 1 is a corresponding (right) eigenvector.
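This fact is easy to check numerically. A sketch with a hypothetical 3-state stochastic matrix (entries chosen only so that each row sums to 1):

```python
import numpy as np

# Hypothetical 3-state stochastic matrix (values for illustration only).
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.8, 0.1],
              [0.4, 0.4, 0.2]])

ones = np.ones(3)                    # the all-ones vector "1"
assert np.allclose(P @ ones, ones)   # P1 = 1, so 1 is a right eigenvector

eigvals = np.linalg.eigvals(P)
# 1 appears among the eigenvalues of P:
assert np.any(np.isclose(eigvals, 1.0))
```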
Definition:
For any finite- or countable-state MC with transition probabilities P(i,j), π is called a stationary probability distribution for the Markov chain if:
π = (π_i), i in state space (think of it as an infinite vector),
π_i >= 0, i in state space,
sum_i π_i = 1, and, most importantly,
sum_i π_i P(i,j) = π_j, all j.
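For a finite chain the stationarity condition says π is a left eigenvector of P with eigenvalue 1, so it can be found numerically as a right eigenvector of the transpose. A sketch, reusing a hypothetical 2-state matrix chosen for illustration:

```python
import numpy as np

# Hypothetical 2-state chain (values for illustration only).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P^T.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                     # normalize so sum_i pi_i = 1

# Stationarity: sum_i pi_i P(i,j) = pi_j for all j.
assert np.allclose(pi @ P, pi)
assert np.all(pi >= 0) and np.isclose(pi.sum(), 1.0)
```

For this particular matrix, solving πP = π by hand gives π = (5/6, 1/6), which the code reproduces.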
NB: if M = size of state space (M < infinity), then the stationarity condition reads: