1. Px{Xt ∈ R for some t ≥ 0} > 0 for all x ∈ S.
2. For all states x ∈ R and all subsets A ⊆ S, P(x, A) ≥ λρ(A).
Conditions 1 and 2 pull in opposite directions: roughly speaking, 1 wants the set R
to be large, while 2 wants R to be small. Condition 1 requires that R be accessible from
each state x ∈ S. For example, 1 is satisfied trivially by taking R to be the whole state
space S, but in that case 2 becomes a very demanding condition, asking for P(x, ·) ≥ λρ(·)
to hold for all states x ∈ S. On the other hand, 2 is satisfied trivially if we take R to be
any singleton {x1}: just take ρ to be P(x1, ·) and take λ = 0.9, for example. But in
many examples each singleton is hit with probability 0, so that no singleton choice for R will
satisfy condition 1. A Harris chain is one for which there is a set R that is simultaneously
large enough to satisfy 1 but small enough to satisfy 2.
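On a finite state space the trade-off in condition 2 can be made concrete: given a candidate set R, the largest workable λ and the corresponding ρ fall out of the transition matrix directly. The sketch below uses a made-up 3-state matrix P and set R (both hypothetical, purely for illustration): the columnwise minimum of P over the rows in R, normalized, yields a valid pair (λ, ρ).

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); a finite
# stand-in for the general-state-space setting.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
R = [0, 1]  # candidate "small" set

# Largest lambda with P(x, .) >= lambda * rho(.) for all x in R:
# take the columnwise minimum over the rows in R, then normalize.
m = P[R].min(axis=0)   # m(y) = min over x in R of P(x, y)
lam = m.sum()          # mixture weight lambda (here 0.7)
rho = m / lam          # minorizing probability distribution rho

# Condition 2 holds by construction:
assert np.all(P[R] >= lam * rho - 1e-12)
# Condition 1 (in this toy example): R is reachable in one step
# from every state.
assert np.all(P[:, R].sum(axis=1) > 0)
```

Shrinking R can only increase each columnwise minimum, hence λ, which is the "small R is easy" half of the tension described above.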
Let's think a bit about the interpretation of 2. What does this inequality tell us?
Writing

    P(x, A) = λρ(A) + [P(x, A) − λρ(A)]
            =: λρ(A) + (1 − λ)Q(x, A),

we have expressed the distribution P(x, ·) as a mixture of two probability distributions
ρ(·) and Q(x, ·), where Q(x, ·) is defined by Q(x, A) = [P(x, A) − λρ(A)]/(1 − λ). Note that
Q(x, ·) is indeed a probability measure; for example, Q(x, A) ≥ 0 by the assumption that
P(x, A) ≥ λρ(A), and Q(x, S) = 1 because we have divided by the appropriate quantity
1 − λ in defining Q(x, ·). Thus, we can simulate a draw from the distribution P(x, ·)
by the following procedure.
Flip a "coin" having P(heads) = λ and P(tails) = 1 − λ.
If the outcome is heads, take a random draw from the distribution ρ.
If the outcome is tails, take a draw from the distribution Q(x, ·).
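This two-stage procedure can be sketched in Python. The matrix P, set R, and pair (λ, ρ) below are hypothetical finite-state illustrations; draw_from_P simulates one step from P(x, ·) via the coin flip, valid for x ∈ R (outside R the residual Q(x, ·) need not be nonnegative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-state ingredients: transition matrix P, small
# set R, and (lam, rho) satisfying P(x, .) >= lam * rho(.) for x in R.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
R = {0, 1}
lam = 0.7
rho = np.array([2, 3, 2]) / 7

def Q(x):
    """Residual distribution Q(x, .) = (P(x, .) - lam*rho) / (1 - lam).
    Nonnegative precisely because the minorization holds for x in R."""
    return (P[x] - lam * rho) / (1.0 - lam)

def draw_from_P(x):
    """One draw from P(x, .) via the two-stage coin flip, for x in R."""
    if rng.random() < lam:            # heads, probability lam
        return rng.choice(3, p=rho)   # draw from rho
    return rng.choice(3, p=Q(x))      # tails: draw from Q(x, .)

# Sanity check: empirical frequencies from x = 0 approximate P(0, .).
draws = np.array([draw_from_P(0) for _ in range(20000)])
freq = np.bincount(draws, minlength=3) / len(draws)
```

The mixture identity guarantees that the two-stage draw has exactly the distribution P(x, ·), which is what makes this a faithful simulation rather than an approximation.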
Stochastic Processes    J. Chang, March 30, 1999
1.10. GENERAL STATE SPACE MARKOV CHAINS    Page 145

It is useful to imagine essentially the same process in another, slightly different way, on
a slightly different state space. Let us adjoin an additional state, α, to the given state space
S, obtaining the new state space S̃ = S ∪ {α}. This new state will be our accessible atom.
We will say that the new chain visits the state α whenever the old chain enters the set R
and the coin flip turns up heads. Thus, after the state α is entered, we know that the next
state will be distributed according to the distribution ρ; note that this distribution is the
same for all x ∈ R. When the chain enters the state x ∈ R and the coin flip turns up tails,
the next state is chosen according to the distribution Q(x, ·).
To put all of this together, consider a Markov chain X0, X̃0, X1, X̃1, ... generated
recursively as follows. Suppose we are at time t, and we have already generated the value of
Xt, and we are about to generate X̃t. If Xt ∈ Rᶜ = S − R, then X̃t = Xt. If Xt ∈ R, then
we toss a coin. If the toss comes up heads, which happens with probability λ, then X̃t = α.
If the toss comes up tails, then X̃t = Xt. Next we use the value of X̃t to generate Xt+1. If
X̃t = α then Xt+1 is chosen from the distribution ρ. If X̃t ∈ R then Xt...
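The recursion above can be sketched in code. The excerpt is cut off mid-sentence, so the last two branches of the second step below (tails while in R draws from Q(X̃t, ·); outside R draws from P(X̃t, ·)) are our assumption, following the standard split-chain construction rather than text shown here. The finite-state ingredients P, R, λ, ρ are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
ATOM = -1  # encodes the adjoined state alpha

# Hypothetical finite-state ingredients, as in the earlier sketches.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
R = {0, 1}
lam = 0.7
rho = np.array([2, 3, 2]) / 7

def Q(x):
    # Residual kernel; a probability vector only for x in R.
    return (P[x] - lam * rho) / (1.0 - lam)

def split_chain(x0, n):
    """Generate X_0, Xtilde_0, X_1, Xtilde_1, ..., X_n recursively."""
    xs, xts = [x0], []
    for _ in range(n):
        x = xs[-1]
        # Step 1: from X_t to Xtilde_t.
        if x in R and rng.random() < lam:
            xt = ATOM            # heads while in R: visit the atom
        else:
            xt = x               # otherwise Xtilde_t = X_t
        xts.append(xt)
        # Step 2: from Xtilde_t to X_{t+1}.  The last two branches are
        # the standard construction (assumed; the excerpt is truncated).
        if xt == ATOM:
            xs.append(int(rng.choice(3, p=rho)))    # regenerate from rho
        elif xt in R:
            xs.append(int(rng.choice(3, p=Q(xt))))  # tails in R: use Q
        else:
            xs.append(int(rng.choice(3, p=P[xt])))  # outside R: use P
    return xs, xts

xs, xts = split_chain(0, 500)
```

The point of the construction is visible in the output: the atom recurs along the X̃-sequence, and every visit wipes out the past, since the next X-value is drawn from the fixed distribution ρ regardless of where the chain was.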