…probability escapes off to infinity or −∞. How can we keep the probability
from escaping? Here is an example.
1.22 Exercise. Consider a Markov chain on the integers with

P(i, i+1) = 0.4 and P(i, i−1) = 0.6 for i > 0;
P(i, i+1) = 0.6 and P(i, i−1) = 0.4 for i < 0;
P(0, 1) = P(0, −1) = 1/2.
This is a chain with infinitely many states, but it has a sort of "probabilistic restoring force" that always pushes back toward 0. Find the stationary distribution.

The next exercise may look a bit inscrutable at first, but it is well worth doing and it introduces an important idea.
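As a quick numerical check on intuition (not part of the text), the chain of Exercise 1.22 can be explored by computer. A minimal sketch, truncating the state space to {−N, ..., N} and solving πP = π as a linear system; the truncation level N and the choice to let boundary-leaving probability stay put are assumptions made here for illustration:

```python
import numpy as np

# Sketch: approximate the stationary distribution of the Exercise 1.22
# chain by truncating the state space to {-N, ..., N}.  Probability
# that would step outside the truncation stays put; the resulting
# error is tiny because the true tails decay geometrically.
N = 50
states = np.arange(-N, N + 1)
n = len(states)
P = np.zeros((n, n))
for idx, i in enumerate(states):
    up = 0.4 if i > 0 else (0.6 if i < 0 else 0.5)  # P(i, i+1)
    down = 1.0 - up                                 # P(i, i-1)
    if idx + 1 < n:
        P[idx, idx + 1] += up
    else:
        P[idx, idx] += up      # boundary: stay put
    if idx > 0:
        P[idx, idx - 1] += down
    else:
        P[idx, idx] += down    # boundary: stay put

# pi is the row vector solving pi P = pi with entries summing to 1
A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(round(pi[N], 4))  # mass at state 0
```

Comparing the printed mass at state 0 with the answer obtained by hand is a useful sanity check on the stationary equations.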
1.23 Exercise (Probability flux). Consider a partition of the state space S of a Markov chain into two subsets A and Ac. Suppose the Markov chain has stationary distribution π. Show that

(1.24)    Σ_{i∈A} Σ_{j∈Ac} π(i) P(i, j) = Σ_{i∈Ac} Σ_{j∈A} π(i) P(i, j).

1.25 Exercise. Use Exercise 1.23 to redo Exercise 1.21, by writing the equations produced by (1.24) with the choice A = {0, 1, ..., i} for various i. The calculation should be easier.

The left side of (1.24) may be thought of as the "probability flux" flowing out of A into Ac.
The equality says that this must be the same as the flux from Ac back into A. This has the suggestive interpretation that the stationary probabilities describe a stable system in which all the probability is happy where it is and does not want to flow anywhere else, so that the net flow from A to Ac must be zero. We can say this in a much less mysterious way as follows. Think of π(i) as the long-run fraction of time that the chain is in state i. (We will soon see a theorem, a "strong law of large numbers for Markov chains," that supports this interpretation.) Then π(i)P(i, j) is the long-run fraction of times that a transition from i to j takes place. But clearly the long-run fraction of times occupied by transitions going from a state in A to a state in Ac must equal the long-run fraction of times occupied by transitions going the opposite way. In fact, along any sample path, the numbers of transitions that have occurred in the two directions up to any time n may differ by at most 1!

Stochastic Processes — J. Chang, March 30, 1999 — 1.6. IRREDUCIBILITY, PERIODICITY, AND RECURRENCE — Page 113
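That last observation — along any sample path, the crossing counts in the two directions can differ by at most 1 — is easy to see in simulation. A minimal sketch using the chain of Exercise 1.22 and a cut at level `i_cut` (both choices made here purely for illustration):

```python
import random

# Sketch: simulate the Exercise 1.22 chain and count transitions across
# the cut between A = {..., i_cut - 1, i_cut} and its complement.
# Because the chain must cross back into A before it can cross out
# again, the two counts can never differ by more than 1.
random.seed(0)
i_cut = 2
x = 0
out_flux = in_flux = 0   # crossings A -> Ac and Ac -> A
for _ in range(100_000):
    if x > 0:
        step = 1 if random.random() < 0.4 else -1
    elif x < 0:
        step = 1 if random.random() < 0.6 else -1
    else:
        step = 1 if random.random() < 0.5 else -1
    y = x + step
    if x <= i_cut < y:       # left A on this transition
        out_flux += 1
    elif y <= i_cut < x:     # returned to A on this transition
        in_flux += 1
    x = y

print(abs(out_flux - in_flux))  # always 0 or 1
```

Dividing either count by the number of steps estimates the flux π(i_cut)P(i_cut, i_cut+1) appearing in (1.24).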
1.26 Exercise (Renewal theory, the residual, and length-biased sampling).
Let X1, X2, ... be iid taking values in {1, ..., d}. These are typically thought of as lifetimes of lightbulbs... Define Sk = X1 + ··· + Xk, τ(n) = inf{k : Sk > n}, and Rn = S_{τ(n)} − n. Then Rn is called the residual lifetime at time n. This is the amount of lifetime remaining in the bulb that is in operation at time n.

1. The sequence R0, R1, ... is a Markov chain. What is its transition matrix? What is the stationary distribution?

2. Define the total lifetime Ln at time n by Ln = X_{τ(n)}. This has an obvious interpretation as the total lifetime of the lightbulb in operation at time n. Show that L0, L1, ... is not a Markov chain. But Ln still has a limiting distribution, and we'd like to find it. We'll do this by constructing a Markov c...
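Part 1 of Exercise 1.26 can be previewed in simulation. A minimal sketch, with lifetimes taken uniform on {1, ..., d} purely for illustration (the exercise allows any distribution on {1, ..., d}); it does not answer the exercise, only exhibits the Markov dynamics of Rn:

```python
import random
from collections import Counter

# Sketch: simulate the residual-lifetime chain R_n of Exercise 1.26.
# R_n counts down by 1 each step while the current bulb lives; when it
# reaches 1, the bulb burns out and the next step installs a bulb with
# a freshly sampled lifetime.  The uniform lifetime law is an
# assumption made here for illustration.
random.seed(1)
d = 4
steps = 200_000
R = random.randint(1, d)        # R_0 = X_1, the first bulb's lifetime
counts = Counter()
for _ in range(steps):
    counts[R] += 1
    R = R - 1 if R > 1 else random.randint(1, d)

freq = {r: counts[r] / steps for r in sorted(counts)}
print(freq)  # empirical long-run occupation of each residual value
```

The empirical frequencies decrease in r, which hints at the form of the stationary distribution the exercise asks for.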