P^10 ≈
  [ …             0.375248      …            ]
  [ 0.249996      0.375095      0.374909     ]
  [ 0.249996      0.375078      0.374926     ]

P^20 ≈
  [ 0.2500000002   0.3749999913   0.3750000085 ]
  [ 0.2499999999   0.3750000030   0.3749999970 ]
  [ 0.2499999999   0.3750000028   0.3749999973 ]
So we don't really have to solve equations; in this example, any of the rows of the matrix
P^20 provides a very accurate approximation for π. No matter what state we start from, the
distribution after 20 steps of the chain is very close to (0.25, 0.375, 0.375). This is the Basic
Limit Theorem in action.
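This convergence is easy to check numerically. The transition matrix used in this example is defined earlier in the notes and is not reproduced in this excerpt, so the sketch below uses a hypothetical 3-state matrix of my own that happens to have the same stationary distribution (1/4, 3/8, 3/8); any irreducible, aperiodic chain would behave the same way.

```python
import numpy as np

# Hypothetical 3-state transition matrix (NOT the one from the notes);
# one can check that pi = (1/4, 3/8, 3/8) satisfies pi P = pi.
P = np.array([[0.0, 0.5, 0.5],
              [1/3, 1/3, 1/3],
              [1/3, 1/3, 1/3]])

P20 = np.linalg.matrix_power(P, 20)
print(P20)
# Every row of P^20 is (very nearly) the same vector: the stationary
# distribution (0.25, 0.375, 0.375), regardless of the starting state.
```

The rate of convergence is governed by the second-largest eigenvalue modulus; for this particular matrix that is 1/3, so by n = 20 the rows agree with π to about ten decimal places.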
1.21 Exercise [Stationary distribution of Ehrenfest chain]. The Ehrenfest
chain is a simple model of "mixing" processes. This chain can shed light on perplexing questions
like "Why aren't people dying all the time due to the air molecules bunching up in some odd
corner of their bedrooms while they sleep?" The model considers d balls distributed among
two urns, and results in a Markov chain {X_0, X_1, ...} having state space {0, 1, ..., d}, with the
state X_n of the chain at time n being the number of balls in urn 1 at time n. At each time,
we choose a ball uniformly at random from the d possibilities, take that ball out of its current
urn, and drop it into the other urn. Thus, P(i, i−1) = i/d and P(i, i+1) = (d−i)/d for all
i.
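One way to start on the exercise below is to compute π numerically for small d. The helper here (a sketch; the function name is mine) builds the transition matrix from the rule above and solves the stationarity equations πP = π together with the normalization Σ_i π_i = 1.

```python
import numpy as np

def ehrenfest_stationary(d):
    """Solve pi P = pi, sum(pi) = 1 for the Ehrenfest chain on {0, ..., d}."""
    P = np.zeros((d + 1, d + 1))
    for i in range(d + 1):
        if i > 0:
            P[i, i - 1] = i / d          # a ball leaves urn 1
        if i < d:
            P[i, i + 1] = (d - i) / d    # a ball enters urn 1
    # Stack the stationarity equations (P^T - I) pi = 0 with the
    # normalization sum(pi) = 1, and solve by least squares.
    A = np.vstack([P.T - np.eye(d + 1), np.ones(d + 1)])
    b = np.append(np.zeros(d + 1), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

for d in (2, 3, 4):
    print(d, np.round(ehrenfest_stationary(d), 4))
```

Running this for a few values of d should make the pattern the exercise asks about hard to miss.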
What is the stationary distribution of the Ehrenfest chain? You might want to solve the
problem for a few small values of d. You should notice a pattern, and come up with a familiar
answer. Can you explain without calculation why this distribution is stationary?

A Markov chain might have no stationary distribution, one stationary distribution,
or infinitely many stationary distributions. We just saw an example with one. A trivial
example with infinitely many is when P is the identity matrix, in which case all distributions
are stationary. To find an example without any stationary distribution, we need to consider
an infinite state space. We will see later that any finite-state Markov chain has at least one
stationary distribution. An easy example of this has S = {1, 2, ...} and P(i, i+1) = 1 for
all i, which corresponds to a Markov chain that moves deterministically "to the right." In
this case, the equation π_j = Σ_{i∈S} π_i P(i, j) reduces to π_j = π_{j−1}, which clearly has
no solution satisfying Σ_j π_j = 1. Another interesting example is the simple, symmetric
random walk on the integers: P(i, i−1) = 1/2 = P(i, i+1). Here the equations for
stationarity become

    π_j = (1/2) π_{j−1} + (1/2) π_{j+1}.
Stochastic Processes    J. Chang, March 30, 1999    1. MARKOV CHAINS    Page 112

Again it is easy to see that these equations have no solution that is a probability mass
function: they say that each π_j is the average of its two neighbors, so any solution must be
linear in j; a linear sequence that is nonnegative on all of the integers must be constant, and
no constant can sum to 1.
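The linearity claim can be checked numerically (a sketch; the function name is mine): rewriting the stationarity equation as π_{j+1} = 2π_j − π_{j−1} and iterating from any two starting values always produces a sequence with constant differences.

```python
# Iterate pi_{j+1} = 2*pi_j - pi_{j-1}, the stationarity recurrence for
# the symmetric random walk, starting from two arbitrary values.
def extend(pi0, pi1, n):
    seq = [pi0, pi1]
    for _ in range(n - 2):
        seq.append(2 * seq[-1] - seq[-2])
    return seq

seq = extend(1, 3, 10)
print(seq)                                        # 1, 3, 5, 7, ...
print([b - a for a, b in zip(seq, seq[1:])])      # constant differences
```

Constant differences mean the sequence is linear in j, which is exactly why no probability mass function can satisfy the equations.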
Intuitively, notice the qualitative difference: in the examples without a stationary distribution, the probability doesn't settle down to a limit probability distribution. In the
first example the probability moves off to infinity, and in the second example it spreads out
in both directions. In both cases, the probability on any fixed state converges to 0; one
might say the probability escapes to infinity.
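This spreading can be made concrete (a sketch; the function name is mine): propagating the exact distribution of the symmetric random walk started at 0 shows the mass on any fixed state, and even the largest single-state mass, shrinking toward 0 (for this walk it decays on the order of 1/√n).

```python
# Exact distribution of the symmetric random walk after n steps,
# started at 0, stored as a dict {state: probability}.
def step(dist):
    new = {}
    for j, p in dist.items():
        new[j - 1] = new.get(j - 1, 0.0) + p / 2
        new[j + 1] = new.get(j + 1, 0.0) + p / 2
    return new

dist = {0: 1.0}
for n in range(1, 101):
    dist = step(dist)
    if n % 20 == 0:
        # Mass at the fixed state 0, and the largest mass anywhere.
        print(n, dist.get(0, 0.0), max(dist.values()))
```

No single state retains any mass in the limit, which is the precise sense in which this chain has no limiting (or stationary) distribution.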