coupling. Consider chains $\{X_t\}$ and $\{Y_t\}$ having transition rule $P$, with initial states $X_0$ and $Y_0$. Look at $P\{X_1 = Y_1\}$, conditioning on whether or not $X_0 = Y_0$.
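As a concrete illustration (not from the text), here is a minimal Python sketch of this computation for a hypothetical two-state chain: the two copies move independently according to $P$, and we estimate $P\{X_1 = Y_1\}$ by conditioning on whether $X_0 = Y_0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state transition matrix (illustration only).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def step(x):
    """One transition of the chain from state x."""
    return rng.choice(2, p=P[x])

n = 100_000
meet_given_equal, meet_given_diff, n_equal = 0, 0, 0
for _ in range(n):
    x0 = rng.choice(2)            # X0 uniform on {0, 1}
    y0 = rng.choice(2)            # Y0 uniform, independent of X0
    x1, y1 = step(x0), step(y0)   # independent coupling: each copy moves by P
    if x0 == y0:
        n_equal += 1
        meet_given_equal += (x1 == y1)
    else:
        meet_given_diff += (x1 == y1)

p_meet = (meet_given_equal + meet_given_diff) / n
print("P{X1=Y1 | X0=Y0}  =", meet_given_equal / n_equal)
print("P{X1=Y1 | X0!=Y0} =", meet_given_diff / (n - n_equal))
print("P{X1=Y1}          =", p_meet)
```

For this $P$, conditioning gives $\sum_y P(x,y)^2$ on the event $X_0 = Y_0 = x$ and $\sum_y P(0,y)P(1,y)$ otherwise, so the three estimates should land near $0.55$, $0.46$, and $0.505$.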
*** ALSO apply this stuff back to a Gibbs sampling example.

1.10.4 More about stationary distributions

*** Omit or incorporate in earlier sections?
Suppose the chain has a positive recurrent atom $\alpha$, so that $E_\alpha T < \infty$, where $T$ denotes the time of the first return to $\alpha$. Define

$$\pi(A) = \frac{E_\alpha\left[\sum_{t=0}^{T-1} I\{X_t \in A\}\right]}{E_\alpha T}. \tag{1.86}$$

What is this? Remember that $I$ denotes an indicator random variable. The sum $\sum_{t=0}^{T-1} I\{X_t \in A\}$ accumulates 0's and 1's as $t$ ranges over the values $0, 1, \ldots, T-1$. So the sum is simply a count of the number of times that $X_t \in A$ holds for $t$ between $0$ and $T-1$. In other words, the sum is the number of visits made by $X_0, \ldots, X_{T-1}$ to the set $A$, and the numerator of $\pi(A)$ is the expected number of such visits. Think again of the "cycle" idea, where a cycle is now a portion of the Markov chain path between successive visits to the state $\alpha$. Then $\pi(A)$ is the expected number of times the chain visits the set $A$ during a cycle, divided by the expected length of a cycle.

Stochastic Processes    J. Chang, March 30, 1999    1.10. GENERAL STATE SPACE MARKOV CHAINS    Page 147
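The cycle description of (1.86) can be checked by simulation. A minimal sketch, assuming a hypothetical three-state chain in which state 0 plays the role of the atom $\alpha$: run many excursions from 0 back to 0, count the visits to $A$ in each cycle, and divide the total by the total cycle length.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical three-state chain; state 0 plays the role of the atom alpha.
P = np.array([[0.2, 0.5, 0.3],
              [0.6, 0.1, 0.3],
              [0.3, 0.3, 0.4]])
A = {1}          # the set whose stationary probability pi(A) we estimate

visits, lengths = 0, 0
for _ in range(20_000):
    x, t = 0, 0
    while True:                      # one cycle: from alpha until first return
        visits += (x in A)           # count visits among X_0, ..., X_{T-1}
        x = rng.choice(3, p=P[x])
        t += 1
        if x == 0:                   # returned to the atom: cycle is over
            break
    lengths += t                     # t is the cycle length T

pi_A = visits / lengths              # (E visits per cycle) / (E cycle length)
print("estimated pi(A) =", pi_A)
```

For this particular $P$ the exact stationary probability of state 1 is $13/42 \approx 0.3095$ (solving $\pi P = \pi$ by hand), so the estimate should land nearby.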
Now, $T$ is a random variable, so the sum in (1.86) runs over a random number of terms. That looks a bit hard to work with, but we can use the following standard and useful trick, which should be your first reaction when you see sums like this: we make the summation sign run over all possible $t$ values and introduce another indicator function to restrict the sum to the values of $t$ that we want. That is,

$$\sum_{t=0}^{T-1} I\{X_t \in A\} = \sum_{t=0}^{\infty} I\{X_t \in A\}\, I\{t < T\} = \sum_{t=0}^{\infty} I\{X_t \in A,\ T > t\}.$$

Taking the expected value, since the expected value of an indicator random variable is its probability, we can write (1.86) in the equivalent form

$$\pi(A) = \frac{\sum_{t=0}^{\infty} P_\alpha\{X_t \in A,\ T > t\}}{E_\alpha T}. \tag{1.87}$$

The manipulation from (1.86) to (1.87) is so fundamental and often used in probability that you will often see it used without any comment. It is a trick that is well worth mastering and remembering.
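Formula (1.87) also suggests a direct way to compute $\pi$ for a finite chain: propagate the "taboo" probabilities $v_t(j) = P_\alpha\{X_t = j,\ T > t\}$ by a one-step recursion and sum over $t$. A sketch, assuming the same kind of hypothetical three-state chain with atom 0:

```python
import numpy as np

# Hypothetical three-state chain; state 0 is the atom alpha.
P = np.array([[0.2, 0.5, 0.3],
              [0.6, 0.1, 0.3],
              [0.3, 0.3, 0.4]])

# v[j] = P_alpha{X_t = j, T > t}; at t = 0 all mass sits on alpha.
v = np.zeros(3)
v[0] = 1.0
S = np.zeros(3)                 # running sum over t of v_t, as in (1.87)
for _ in range(200):            # v_t decays geometrically; 200 terms is plenty
    S += v
    v = v @ P                   # one step of the chain
    v[0] = 0.0                  # hitting alpha at time t+1 means T <= t+1

# Denominator: sum_t P{T > t} = E_alpha T, which is exactly S.sum().
pi = S / S.sum()
print("pi =", pi)
print("E_alpha T =", S.sum())
```

For this $P$ the output should match the exact stationary distribution $(5/14,\ 13/42,\ 1/3)$, and $E_\alpha T = 1/\pi(\alpha) = 2.8$, in agreement with Kac's formula.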
(1.88) Proposition. Let $\{X_t\}$ be a Markov chain with a positive recurrent atom $\alpha$, and define

$$\pi(A) = \frac{E_\alpha\left[\sum_{t=0}^{T-1} I\{X_t \in A\}\right]}{E_\alpha T} = \frac{\sum_{t=0}^{\infty} P_\alpha\{X_t \in A,\ T > t\}}{E_\alpha T}.$$

Then $\pi$ is a stationary distribution for $\{X_t\}$.

Proof: Clearly $\pi$ is a probability distribution. We want to show that $\int \pi(dx)\,P(x,A) = \pi(A)$. Defining $\nu(A) = (E_\alpha T)\,\pi(A)$, it is equivalent to show that $\int \nu(dx)\,P(x,A) = \nu(A)$. We have

$$\int \nu(dx)\,P(x,A) = \sum_{t=0}^{\infty} \int P_\alpha\{X_t \in dx,\ T > t\}\,P(x,A).$$

But

$$P(x,A) = P\{X_{t+1} \in A \mid X_t = x\} = P\{X_{t+1} \in A \mid X_t = x,\ T > t\},$$

where the last equality holds by the Markov property, because the event $\{T > t\} = \{T \le t\}^c$ depends only on the random variables $X_0, \ldots, X_t$. That is, given the precise information about the state $X_t = x$, we can throw away the information $T > t$. So

$$\int \nu(dx)\,P(x,A) = \sum_{t=0}^{\infty} \int P_\alpha\{X_t \in dx,\ T > t\}\,P\{X_{t+1} \in A \mid X_t = x,\ T > t\} = \sum_{t=0}^{\infty} P_\alpha\{X_{t+1} \in A,\ T > t\}$$
$$= E_\alpha\left[\sum_{t=0}^{T-1} I\{X_{t+1} \in A\}\right] = E_\alpha\left[\sum_{t=1}^{T} I\{X_t \in A\}\right].$$

Since $X_0 = X_T = \alpha$ under $P_\alpha$, the terms $I\{X_0 \in A\}$ and $I\{X_T \in A\}$ are equal, so this last expectation equals $E_\alpha\left[\sum_{t=0}^{T-1} I\{X_t \in A\}\right] = (E_\alpha T)\,\pi(A)$ by (1.86), as desired.
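The conclusion of the proposition can be sanity-checked numerically: by positive recurrence, the occupation frequencies of a single long trajectory converge to the cycle average $\pi$, and the resulting vector should be approximately invariant under $P$. A sketch for a hypothetical three-state chain:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical three-state chain with atom alpha = 0.
P = np.array([[0.2, 0.5, 0.3],
              [0.6, 0.1, 0.3],
              [0.3, 0.3, 0.4]])

# Occupation frequencies of one long run estimate pi.
n_steps = 100_000
counts = np.zeros(3)
x = 0
for _ in range(n_steps):
    counts[x] += 1
    x = rng.choice(3, p=P[x])
pi_hat = counts / n_steps

print("empirical pi =", pi_hat)
print("pi_hat @ P   =", pi_hat @ P)   # stationarity: pi P = pi, approximately
```

The two printed vectors should agree with each other (and with the exact stationary distribution $(5/14,\ 13/42,\ 1/3)$ for this $P$) up to Monte Carlo error.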