shuffles are
required in order to mix the deck well?" Probabilistic reasoning can lead to insights and
results that would be hard to come by from thinking of these problems as just" matrix
theory problems. 1.4 The basic limit theorem of Markov chains
As indicated by its name, the theorem we will discuss in this section occupies a fundamental
and important role in Markov chain theory. What is it all about? Let's start with an
example in which we can all see intuitively what is going on.

1.15 Figure. A random walk on a clock.

For ease of writing and drawing,
consider a clock with 6 numbers on it: 0, 1, 2, 3, 4, 5. Suppose we perform a random walk
by moving clockwise, moving counterclockwise, and staying in place with probability 1/3
each at every time n. That is,

    P(i, j) = 1/3   if j = i - 1 mod 6,
              1/3   if j = i,
              1/3   if j = i + 1 mod 6.

Suppose we start out at X_0 = 2, say. That is,

    π_0 = (π_0(0), π_0(1), ..., π_0(5)) = (0, 0, 1, 0, 0, 0).
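The transition rule above is easy to check numerically. Here is a minimal sketch in Python (the matrix `P` and the variable names are illustrative, not from the text; exact arithmetic via the standard `fractions` module):

```python
from fractions import Fraction

# Transition matrix of the clock walk on states 0..5: from state i the walk
# moves to (i - 1) mod 6, stays at i, or moves to (i + 1) mod 6, each with
# probability 1/3.
third = Fraction(1, 3)
P = [[third if j in {(i - 1) % 6, i, (i + 1) % 6} else Fraction(0)
      for j in range(6)]
     for i in range(6)]

# P is a stochastic matrix: every row sums to 1.
assert all(sum(row) == 1 for row in P)
```

Note that the `mod 6` in the definition is what makes state 0 and state 5 neighbors, so the walk really does live on a circle.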
Then of course

    π_1 = (0, 1/3, 1/3, 1/3, 0, 0),
1.16 Example. Random walk on a clock.

Stochastic Processes, J. Chang, March 30, 1999
and it is easy to calculate

    π_2 = (1/9, 2/9, 1/3, 2/9, 1/9, 0)

and

    π_3 = (3/27, 6/27, 7/27, 6/27, 3/27, 2/27).

Notice how the probability is spreading out away from its initial concentration on the state
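The distributions π_1, π_2, π_3 can be reproduced by repeated matrix-vector multiplication, since π_{n+1}(j) = Σ_i π_n(i) P(i, j). A sketch (the helper `step` is mine, not from the text):

```python
from fractions import Fraction

third = Fraction(1, 3)
P = [[third if j in {(i - 1) % 6, i, (i + 1) % 6} else Fraction(0)
      for j in range(6)]
     for i in range(6)]

def step(pi):
    # One step of the chain: pi_{n+1}(j) = sum_i pi_n(i) * P(i, j).
    return [sum(pi[i] * P[i][j] for i in range(6)) for j in range(6)]

pi0 = [Fraction(0), Fraction(0), Fraction(1), Fraction(0), Fraction(0), Fraction(0)]
pi1 = step(pi0)   # (0, 1/3, 1/3, 1/3, 0, 0)
pi2 = step(pi1)   # (1/9, 2/9, 1/3, 2/9, 1/9, 0)
pi3 = step(pi2)   # (3/27, 6/27, 7/27, 6/27, 3/27, 2/27)
```

Using exact fractions rather than floats makes the agreement with the displayed values term by term, not just approximate.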
2. We could keep calculating π_n for more values of n, but it is intuitively clear what will
happen: the probability will continue to spread out, and π_n will approach the uniform
distribution:

    π_n → (1/6, 1/6, 1/6, 1/6, 1/6, 1/6)

as n → ∞. Just imagine: if the chain starts out in state 2 at time 0, then we close our
eyes while the random walk takes 10,000 steps, and then we are asked to guess what state
the random walk is in at time 10,000, what would we think the probabilities of the various
states are? I would say: "X_10000 is for all practical purposes uniformly distributed over
the 6 states." By time 10,000, the random walk has essentially "forgotten" that it started
out in state 2 at time 0, and it is nearly equally likely to be anywhere.
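This 10,000-step claim can be checked by pushing the distribution forward directly; floating point is plenty accurate here. A sketch under the same setup as above:

```python
# Clock-walk transition matrix, same rule as in the text, with floats.
P = [[1 / 3 if j in {(i - 1) % 6, i, (i + 1) % 6} else 0.0
      for j in range(6)]
     for i in range(6)]

pi = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]   # start in state 2 at time 0
for _ in range(10_000):                # push the distribution forward 10,000 steps
    pi = [sum(pi[i] * P[i][j] for i in range(6)) for j in range(6)]

# pi is now numerically indistinguishable from the uniform distribution.
deviation = max(abs(p - 1 / 6) for p in pi)
```

In fact convergence here is geometrically fast, so far fewer than 10,000 steps would already give the same picture.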
Now observe that the starting state 2 was not special; we could have started from
anywhere, and over time the probabilities would spread out away from the initial point
and approach the same limiting distribution. Thus, π_n approaches a limit that does not
depend upon the initial distribution π_0.
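This independence of the start can also be checked numerically. A sketch comparing two very different initial distributions (the particular starting vectors below are my own choices):

```python
# Clock-walk transition matrix, same rule as in the text.
P = [[1 / 3 if j in {(i - 1) % 6, i, (i + 1) % 6} else 0.0
      for j in range(6)]
     for i in range(6)]

def evolve(pi, n):
    # Push a distribution forward n steps of the chain.
    for _ in range(n):
        pi = [sum(pi[i] * P[i][j] for i in range(6)) for j in range(6)]
    return pi

a = evolve([1.0, 0.0, 0.0, 0.0, 0.0, 0.0], 200)   # start in state 0
b = evolve([0.5, 0.1, 0.1, 0.1, 0.1, 0.1], 200)   # a skewed starting distribution

# Both end up at (essentially) the same limit, the uniform distribution.
gap = max(abs(x - y) for x, y in zip(a, b))
```

After 200 steps the two trajectories agree to machine precision, which is exactly the phenomenon the Basic Limit Theorem below makes general.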
The following "Basic Limit Theorem" says that the phenomenon discussed in the previous
example happens quite generally. We will start with a statement and discussion of the
theorem, and then prove the theorem later. We'll use the notation P_{π_0} for probabilities
when the initial distribution is π_0.
1.17 Theorem (Basic Limit Theorem). Let X_0, X_1, ... be an irreducible, aperiodic
Markov chain having a stationary distribution π. Then for all initial distributions π_0,

    lim_{n→∞} P_{π_0}{X_n = i} = π(i)   for all i ∈ S.

We need to define the words "irreducible," "aperiodic," and "stationary distribution." Let's
start with s...