...the average
\[ \frac{1}{n}\sum_{t=1}^{n} X_t \]
converges to $\mu$ with probability 1 as $n \to \infty$.
Some fine print: It is possible to have $\mu = +\infty$, and the SLLN still holds. For example, supposing that
the random variables $X_t$ take their values in the set of nonnegative integers $\{0, 1, 2, \ldots\}$, the mean is
defined to be $\mu = \sum_{k=0}^{\infty} k\, P\{X_0 = k\}$. This sum could diverge, in which case we define $\mu$ to be $+\infty$,
and we have $\frac{1}{n}\sum_{t=1}^{n} X_t \to \infty$ with probability 1.
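To see concretely how the mean can be $+\infty$, here is a small numeric check under an assumed example distribution (my illustration, not one from the text): take $P\{X_0 = k\} = c/k^2$ for $k \ge 1$ with $c = 6/\pi^2$, so the probabilities sum to 1 but the mean series $\sum_k k\,P\{X_0 = k\}$ is the harmonic series and its partial sums grow without bound.

```python
import math

# Hypothetical example distribution (an assumption for illustration):
# P{X0 = k} = c / k^2 for k = 1, 2, ..., with c = 6/pi^2 so that the
# probabilities sum to 1.  The mean series sum_k k * P{X0 = k} reduces to
# c * (1 + 1/2 + 1/3 + ...), the harmonic series, which diverges; in the
# notes' convention we therefore set mu = +infinity.

c = 6 / math.pi**2

def partial_mean(N):
    """Partial sum of k * P{X0 = k} up to k = N, i.e. c * H_N."""
    return c * sum(1 / k for k in range(1, N + 1))

for N in (10**3, 10**6):
    print(N, partial_mean(N))  # grows roughly like c * log(N)
```

The printed partial sums keep increasing (logarithmically) as $N$ grows, which is exactly the divergence that forces $\mu = +\infty$.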
For example, if $X_0, X_1, \ldots$ are iid with values in the set $S$, then the SLLN tells us that
\[ \frac{1}{n}\sum_{t=1}^{n} I\{X_t = i\} \to P\{X_0 = i\} \quad \text{with probability 1 as } n \to \infty. \]
That is, the fraction of times that the iid process takes the
value $i$ in the first $n$ observations converges to $P\{X_0 = i\}$, the probability that any given
observation is $i$.
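As a quick sanity check (my own illustration, not from the text), we can simulate iid die rolls and watch the fraction of observations equal to $i = 3$ approach $P\{X_0 = 3\} = 1/6$:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Hypothetical iid process for illustration: fair die rolls, S = {1,...,6}.
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]

# Fraction of the first n observations equal to i = 3; by the SLLN this
# converges to P{X0 = 3} = 1/6 with probability 1 as n -> infinity.
frac = sum(1 for x in rolls if x == 3) / n
print(frac, 1 / 6)
```

With $n = 100{,}000$ the empirical fraction lands within a couple of standard errors (about $0.001$ here) of $1/6$.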
We will prove a generalization of this result for Markov chains. This law of large numbers
will tell us that the fraction of time that a Markov chain occupies state $i$ converges to a
limit.
It is possible to view this result as a consequence of a more general and rather advanced
ergodic theorem (see, for example, Durrett's Probability: Theory and Examples). However,
I do not want to assume prior knowledge of ergodic theory. Also, the result for Markov
chains is quite simple to derive as a consequence of the ordinary law of large numbers for iid
random variables. Although the successive states of a Markov chain are not independent, of
course, we have seen that certain features of a Markov chain are independent of each other.
Here we will use the idea that the path of the chain consists of a succession of independent
"cycles," the segments of the path between successive visits to a recurrent state. This
independence makes the treatment of Markov chains simpler than the general treatment of
stationary processes, and it allows us to apply the law of large numbers that we already
know.
Stochastic Processes J. Chang, March 30, 1999

1. MARKOV CHAINS Page 130

1.57 Theorem. Let $X_0, X_1, \ldots$ be a Markov chain starting in the state $X_0 = i$, and suppose that the state $i$ communicates with another state $j$. The limiting fraction of time
that the chain spends in state $j$ is $1/E_j T_j$. That is,
\[ P_i\left\{ \lim_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} I\{X_t = j\} = \frac{1}{E_j T_j} \right\} = 1. \]
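A simulation makes the theorem concrete. For a two-state chain the quantities are computable by hand: with transition probabilities $p(0,1) = 0.3$ and $p(1,0) = 0.1$, the stationary probability of state 1 is $0.3/0.4 = 3/4$, so $E_1 T_1 = 4/3$ and the theorem predicts the fraction of time in state 1 tends to $3/4$. (The specific chain and numbers are my illustration, not from the text.)

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Hypothetical two-state chain on {0, 1}, chosen for illustration:
# P(0 -> 1) = 0.3, P(1 -> 0) = 0.1.  Here E_1 T_1 = 4/3, so the theorem
# predicts the fraction of time spent in state 1 tends to 1 / E_1 T_1 = 3/4.
a, b = 0.3, 0.1
n = 200_000

state = 1  # start at j = 1, matching the case i = j treated first in the proof
visits = 0
for _ in range(n):
    if state == 0:
        state = 1 if random.random() < a else 0
    else:
        state = 0 if random.random() < b else 1
    visits += (state == 1)

print(visits / n)  # should be close to 3/4
```

Note that the averaged indicators here are not independent, which is exactly why the theorem needs the cycle argument developed below rather than a direct appeal to the iid SLLN.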
Proof: The result is easy if the state $j$ is transient, since in that case $E_j T_j = \infty$ and with probability 1 the chain visits $j$ only finitely many times, so that
\[ \lim_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} I\{X_t = j\} = 0 = \frac{1}{E_j T_j} \]
with probability 1. So we assume that $j$ is recurrent. We will also begin by proving the
result in the case $i = j$; the general case will be an easy consequence of this special case.
Again we will think of the Markov chain path as a succession of "cycles," where a cycle is a
segment of the path that lies between successive visits to $j$. The cycle lengths $C_1, C_2, \ldots$
are iid and distributed as $T_j$; here we have already made use of the assumption that we are
starting at the state $X_0 = j$. Define $S_k = C_1 + \cdots + C_k$ and let $V_n(j)$ denote the number
of visits to state $j$ made by $X_1, \ldots, X_n$, that is,
\[ V_n(j) = \sum_{t=1}^{n} I\{X_t = j\}. \]
A bit of thought (see also the picture below) shows that $V_n(j)$ is...
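The cycle bookkeeping can be checked numerically. A minimal sketch, using the same kind of hypothetical two-state chain as above (an assumption for illustration, not from the text): record the successive visit times to $j$, form the cycle lengths $C_k$ and their sums $S_k$, and observe that $S_k$ recovers the time of the $k$-th visit, so that $V_n(j)$ is the largest $k$ with $S_k \le n$.

```python
import itertools
import random

random.seed(2)  # fixed seed for reproducibility

# Hypothetical two-state chain for illustration (not from the text):
# P(0 -> 1) = 0.3, P(1 -> 0) = 0.1; take j = 1 and start at X0 = j.
a, b = 0.3, 0.1
j = 1
N = 20_000   # simulate past the horizon n so the next visit after n exists
n = 10_000

path = []
state = j
for _ in range(N):
    if state == 0:
        state = 1 if random.random() < a else 0
    else:
        state = 0 if random.random() < b else 1
    path.append(state)  # path[t-1] is X_t

# Times of successive visits to j, and the cycle lengths between them
# (the first cycle starts at time 0 since X0 = j).
visit_times = [t for t, x in enumerate(path, start=1) if x == j]
cycles = [visit_times[0]] + [u - t for t, u in zip(visit_times, visit_times[1:])]

# S_k = C_1 + ... + C_k recovers the time of the k-th visit to j.
S = list(itertools.accumulate(cycles))
assert S == visit_times

# V_n(j): number of visits to j among X_1, ..., X_n.  It is the largest k
# with S_k <= n, i.e.  S_{V_n(j)} <= n < S_{V_n(j)+1}.
V_n = sum(1 for t in visit_times if t <= n)
print(V_n, S[V_n - 1], S[V_n])
```

This sandwich $S_{V_n(j)} \le n < S_{V_n(j)+1}$ is the link that lets the iid law of large numbers for the cycle lengths be transferred to the fraction of time spent in $j$.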