1.10.2 Warm up for Harris chains

The purpose of this section is to warm up for the next section on Harris chains. If you are
already feeling warm, you might find all this a bit slow and repetitious, in which case you
might try skipping to the next section and see how it goes. If that section seems mysterious
to you, you can always come back here then.
To illustrate the method of thinking we will see how the ideas work in some simple
chains having finite state spaces. Of course, the ideas are not needed in order to obtain a
Basic Limit Theorem for countable-state Markov chains; we have already done that! But
we will use the ideas to extend the Basic Limit Theorem to more general state spaces.
1.80 Example. A lesson of Exercise 1.5 was that we can "lump" states if the transition probabilities out of those states
are the same. That is, what characterizes a state x is really its next-state transition
probabilities P(x, ·), and if P(x, ·) = P(y, ·), then we may combine the two states x and y
into one state and still have a Markov chain. In a sense, if we have just made a transition
and are told that the chain went to either x or y and we are wondering which, it really
doesn't matter, in the sense that it makes no difference to our probabilistic predictions of
the future path of the chain. In general, suppose there is a set R of states all having the
same next-state transition probabilities; that is, suppose P(x, ·) = P(y, ·) for all x, y ∈ R.
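To see the lumping condition in action, here is a small sketch in Python with NumPy (the helper name `lump` and the convention of placing the lumped state last are ours, not the text's): it checks that every state in R shares the same next-state row, then builds the transition matrix of the lumped chain.

```python
import numpy as np

def lump(P, R):
    """Lump the states with row indices in R into a single new state,
    assuming they all share the same next-state transition probabilities.
    The lumped state is placed last in the returned matrix."""
    P = np.asarray(P, dtype=float)
    R = sorted(R)
    # Lumping condition: P(x, .) must be identical for every x in R.
    for x in R[1:]:
        assert np.allclose(P[x], P[R[0]]), "states in R are not lumpable"
    keep = [i for i in range(P.shape[0]) if i not in R]
    n = len(keep)
    Q = np.empty((n + 1, n + 1))
    Q[:n, :n] = P[np.ix_(keep, keep)]          # kept-to-kept transitions unchanged
    Q[:n, n] = P[np.ix_(keep, R)].sum(axis=1)  # any move into R is a move into the lump
    Q[n, :n] = P[R[0], keep]                   # out of the lump: the common row of R
    Q[n, n] = P[R[0], R].sum()
    return Q

# A 3-state chain in the pattern of the example below: states 2 and 3
# (indices 1 and 2) share a row, so they can be lumped.
P = [[0.1, 0.2, 0.7],
     [0.3, 0.1, 0.6],
     [0.3, 0.1, 0.6]]
print(lump(P, [1, 2]))   # a 2x2 stochastic matrix with rows (.1, .9) and (.3, .7)
```

Note that the kept states' rows do not need to agree with each other; only the rows of the states inside R must coincide, since those are the rows that get merged.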
Stochastic Processes, J. Chang, March 30, 1999. 1.10. GENERAL STATE SPACE MARKOV CHAINS. Page 1-41

Then we may lump the states in R into a new state, α say. Whenever the X chain enters
the set R, that is, whenever it occupies a state in the set R, we will say that the chain
X enters the state α. For example, given a chain X0, X1, ... having transition matrix
        1    2    3
   1  [ .1   .2   .7 ]
P = 2  [ .3   .1   .6 ],
   3  [ .3   .1   .6 ]

states 2 and 3 may be lumped into one state α. That is, if we just
keep track of visits to state 1 and state α, defining X̃t by

  X̃t = 1   if Xt = 1,
  X̃t = α   if Xt ∈ {2, 3},

the process X̃0, X̃1, ... is a Markov chain in its own right, with transition matrix

  P̃ = [ .1  .9 ]
       [ .3  .7 ].

In fact, we can combine the processes together to form the interlaced sequence
X0, X̃0, X1, X̃1, ..., which is also a Markov chain, although it is time-inhomogeneous. The
transitions from Xt to X̃t use the matrix

      1  [ 1  0 ]
  U = 2  [ 0  1 ],
      3  [ 0  1 ]

and the transitions from X̃t to Xt+1 use the matrix

      1  [ .1  .2  .7 ]
  V = α  [ .3  .1  .6 ].

Note that UV = P and VU = P̃.
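These identities are easy to check numerically. A minimal NumPy sketch (the first row of P below is an assumption consistent with P̃'s first row (.1, .9); rows 2 and 3 are the common row (.3, .1, .6) of the example):

```python
import numpy as np

# Original chain X on states {1, 2, 3}; rows 2 and 3 agree, so 2 and 3 lump.
# The first row is our assumed reconstruction, chosen consistent with Ptilde.
P = np.array([[0.1, 0.2, 0.7],
              [0.3, 0.1, 0.6],
              [0.3, 0.1, 0.6]])

# U: deterministic transitions from X_t to the lumped chain
# (state 1 -> 1, and both states 2 and 3 -> alpha).
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])

# V: transitions from the lumped state back to X_{t+1};
# its rows are the distinct rows of P.
V = np.array([[0.1, 0.2, 0.7],
              [0.3, 0.1, 0.6]])

# Transition matrix of the lumped chain.
Ptilde = np.array([[0.1, 0.9],
                   [0.3, 0.7]])

# Composing a U-step with a V-step reproduces one step of the X chain,
# and composing V with U reproduces one step of the lumped chain.
print(np.allclose(U @ V, P))       # True
print(np.allclose(V @ U, Ptilde))  # True
```

The same check works for any chain with a lumpable set R: take U to map each state to its lump and V to hold the distinct rows of P.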
1.81 Figure. A tricky but useful way of thinking of running the chain.

This edifice we have erected on top of the given chain X0, X1, ... is an unnecessarily
complicated way of thinking about this particular chain, but this style of thinking will
be used for the general Basic Limit Theorem ...