also the number of cycles
completed up to time n, that is,
V_n^j = max{k : S_k ≤ n}.

To ease the notation, let V_n denote V_n^j. Notice that

    S_{V_n} ≤ n ≤ S_{V_n + 1},

and divide by V_n to obtain

    S_{V_n} / V_n ≤ n / V_n ≤ S_{V_n + 1} / V_n.

Stochastic Processes    J. Chang, March 30, 1999    1.9. A SLLN FOR MARKOV CHAINS    Page 1-31
Since j is recurrent, V_n → ∞ with probability one as n → ∞. Thus, by the ordinary
Strong Law of Large Numbers for iid random variables, we have both

    S_{V_n} / V_n → E_j T_j

and

    S_{V_n + 1} / V_n = (S_{V_n + 1} / (V_n + 1)) · ((V_n + 1) / V_n) → E_j T_j · 1 = E_j T_j

with probability one. Note that the last two displays hold whether E_j T_j is finite or infinite.
Thus, n / V_n → E_j T_j with probability one, so that

    V_n / n → 1 / E_j T_j

with probability one, which is what we wanted to show.
Next, to treat the general case where i may be different from j, note that P_i{T_j < ∞} =
1 by Theorem 1.35. Thus, with probability one, a path starting from i behaves as follows.
It starts by going from i to j in some finite number T_j of steps, and then proceeds on from
state j in such a way that the long run fraction of time that X_t = j for t ≥ T_j approaches
1/E_j T_j. But clearly the long run fraction of time the chain is at j is not affected by the
behavior of the chain on the finite segment X_0, ..., X_{T_j − 1}. So with probability one, the
long run fraction of time that X_n = j for n ≥ 0 must approach 1/E_j T_j.
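The renewal argument above is easy to check by simulation. The following sketch (my own illustration, not from the text) runs a hypothetical two-state chain with transition probabilities P(0 → 1) = a and P(1 → 0) = b, started from i = 0 ≠ j = 1. First-step analysis gives E_1 T_1 = 1 + b/a for this chain (with probability 1 − b the chain returns in one step; with probability b it drops to state 0, from which the time to hit 1 is geometric with mean 1/a), so the long-run fraction of time in state 1 should approach 1/E_1 T_1 = a/(a + b):

```python
import random

random.seed(1)

a, b = 0.3, 0.2          # transition probabilities 0 -> 1 and 1 -> 0
n = 200_000              # number of steps to simulate
x = 0                    # start from i = 0, which differs from j = 1
visits = 0               # number of times X_t = 1 for t = 1, ..., n
for _ in range(n):
    if x == 0:
        x = 1 if random.random() < a else 0
    else:
        x = 0 if random.random() < b else 1
    visits += (x == 1)

frac = visits / n
print(f"long-run fraction of time in state 1: {frac:.4f}")
print(f"1 / E_1 T_1 = a / (a + b) = {a / (a + b):.4f}")
```

With these (arbitrarily chosen) parameters the two printed numbers agree to a couple of decimal places, matching 1/E_1 T_1 = 0.6.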
The following result follows directly from Theorem 1.57 by the Bounded Convergence
Theorem from the Appendix. That is, we are using the following fact: if Z_n → c with
probability one as n → ∞ and the random variables Z_n all take values in the same bounded
interval, then we also have E(Z_n) → c. To apply this in our situation, note that we have

    Z_n := (1/n) Σ_{t=1}^n I{X_t = j} → 1/E_j T_j

with probability one as n → ∞, and also each Z_n lies in the interval [0, 1]. Finally, use
the fact that the expectation of an indicator random variable is just the probability of the
corresponding event.
1.58 Corollary. For an irreducible Markov chain, we have

    lim_{n→∞} (1/n) Σ_{t=1}^n P^t(i, j) = 1/E_j T_j

for all states i and j.
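The Corollary lends itself to a quick numerical check. The sketch below (using a hypothetical 3×3 transition matrix chosen only for illustration) forms the Cesàro average (1/n) Σ_{t=1}^n P^t by repeated matrix multiplication; since the limit 1/E_j T_j does not depend on the starting state i, all rows of the average should approximately agree for large n:

```python
import numpy as np

# Hypothetical irreducible 3-state transition matrix, for illustration only.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n = 5000
avg = np.zeros_like(P)    # will hold (1/n) * sum_{t=1}^{n} P^t
Pt = np.eye(3)
for _ in range(n):
    Pt = Pt @ P           # Pt is now P^t
    avg += Pt
avg /= n

# By the Corollary, row i of the Cesaro average converges to the vector
# (1/E_j T_j)_j regardless of i, so the rows should (nearly) coincide.
print(np.round(avg, 4))
```

Each row is a probability vector, so the rows also sum to 1; the common limiting row is exactly (1/E_j T_j)_j.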
There's something suggestive here. Consider for the moment an irreducible, aperiodic
Markov chain having a stationary distribution π. From the Basic Limit Theorem, we know
that P^n(i, j) → π_j as n → ∞. However, it is a simple fact that if a sequence of numbers
converges to a limit, then the sequence of "Cesàro averages" converges to the same limit;
that is, if a_t → a as t → ∞, then (1/n) Σ_{t=1}^n a_t → a as n → ∞. Thus, the Cesàro averages
of P^n(i, j) must converge to π_j. However, the previous Corollary shows that the Cesàro
averages converge to 1/E_j T_j. Thus, it follows that

    π_j = 1/E_j T_j.
It turns out that the aperiodicity assumption is not needed for this last conclusion; we'll
see this in the next result. Incidentally, we could have proved this result much earlier; for
example, we don't need the Basic Limit Theorem in the development.
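As a sanity check on the identity π_j = 1/E_j T_j, the sketch below (again with a hypothetical 3-state matrix of my own choosing) computes the stationary distribution by solving πP = π, Σ_j π_j = 1, and separately computes each mean return time E_j T_j by first-step analysis: with Q denoting P with row and column j deleted, the expected hitting times h satisfy (I − Q)h = 1, and E_j T_j = 1 + Σ_{k≠j} P(j, k) h_k:

```python
import numpy as np

# Hypothetical irreducible 3-state transition matrix, for illustration only.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
m = P.shape[0]

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(m), np.ones(m)])
rhs = np.append(np.zeros(m), 1.0)
pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Mean return times E_j T_j via first-step analysis.
inv_return = []
for j in range(m):
    idx = [i for i in range(m) if i != j]
    Q = P[np.ix_(idx, idx)]                           # P restricted to states != j
    h = np.linalg.solve(np.eye(m - 1) - Q, np.ones(m - 1))
    EjTj = 1.0 + P[j, idx] @ h                        # E_j T_j
    inv_return.append(1.0 / EjTj)
    print(f"pi_{j} = {pi[j]:.6f},  1/E_{j} T_{j} = {inv_return[-1]:.6f}")
```

For every state j the two printed numbers coincide, as the identity predicts; nothing about the computation uses aperiodicity, consistent with the remark above.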
1.59 Theorem. An irreducible, positive recurr...