…large j, then this sum of products will converge and the states will be positive-recurrent.
For the simple birth-death process of Figure 5.1, if we define ρ = q/p, then ρ_j = ρ for all j.
For ρ < 1, (5.30) simplifies to π_i = π_0 ρ^i for all i ≥ 0, π_0 = 1 − ρ, and thus π_i = (1 − ρ)ρ^i
for i ≥ 0. Exercise 5.2 shows how to find F_{ij}(1) for all i, j in the case where ρ ≥ 1. We
have seen that the simple birth-death chain of Figure 5.1 is transient if ρ > 1. This is not
necessarily so in the case where self-transitions exist, but the chain is still either transient
or null-recurrent. An important example of this arises in
Exercise 6.1.

5.4 Reversible Markov chains

Many important Markov chains have the property that, in steady-state, the sequence of
states looked at backwards in time, i.e., . . . , X_{n+1}, X_n, X_{n−1}, . . . , has the same probabilistic
structure as the sequence of states running forward in time. This equivalence between the
forward chain and backward chain leads to a number of results that are intuitively quite
surprising and that are quite difficult to derive without using this equivalence. We shall
study these results here and then extend them in Chapter 6 to Markov processes with a
discrete state space. This set of ideas, and its use in queues and queueing networks, has
been an active area of queueing research over many years. It leads to many simple results
for systems that initially look very complex. We only scratch the surface here and refer the
interested reader to [13] for a more comprehensive treatment. Before going into reversibility,
we describe the backward chain for an arbitrary Markov chain.
The defining characteristic of a Markov chain {X_n; n ≥ 0} is that for all n ≥ 0,

Pr{X_{n+1} | X_n, X_{n−1}, . . . , X_0} = Pr{X_{n+1} | X_n}.   (5.31)

For homogeneous chains, which we have been assuming throughout, Pr{X_{n+1} = j | X_n = i} =
P_{ij}, independent of n. For any k > 1, we can extend (5.31) to get

Pr{X_{n+k}, X_{n+k−1}, . . . , X_{n+1} | X_n, X_{n−1}, . . . , X_0}
   = Pr{X_{n+k} | X_{n+k−1}} Pr{X_{n+k−1} | X_{n+k−2}} · · · Pr{X_{n+1} | X_n}
   = Pr{X_{n+k}, X_{n+k−1}, . . . , X_{n+1} | X_n}.   (5.32)

By letting A+ be any event defined on the states X_{n+1} to X_{n+k} and letting A− be any
event defined on X_0 to X_{n−1}, this can be written more succinctly as
Pr{A+ | X_n, A−} = Pr{A+ | X_n}.   (5.33)

212 CHAPTER 5. COUNTABLE-STATE MARKOV CHAINS

This says that, given state X_n, any future event A+ is statistically independent of any past
event A−. This result, namely that past and future are independent given the present state,
is equivalent to (5.31) for defining a Markov chain, but it has the advantage of showing the
symmetry between past and future. This symmetry is best brought out by multiplying both
sides of (5.33) by Pr{A− | X_n}, obtaining
Pr{A+, A− | X_n} = Pr{A+ | X_n} Pr{A− | X_n}.   (5.34)
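As a sanity check, the factorization in (5.34) can be verified numerically by brute-force enumeration of sample paths. The two-state chain below, its stationary distribution, and the particular events A+ and A− are illustrative assumptions, not taken from the text:

```python
import itertools

# Hypothetical 2-state chain (states 0, 1); P is an assumption for illustration.
P = [[0.7, 0.3],
     [0.4, 0.6]]
pi = [4/7, 3/7]          # solves pi P = pi for this P (steady-state distribution)

def path_prob(path):
    """Probability of the path X0, ..., X4, started in steady state."""
    p = pi[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a][b]
    return p

def A_minus(path):       # past event, defined on (X0, X1)
    return path[0] == 1

def A_plus(path):        # future event, defined on (X3, X4)
    return path[3] == path[4]

# Accumulate probabilities over all paths with X2 = 0 (the conditioning state).
joint = cond = pA_plus = pA_minus = 0.0
for path in itertools.product([0, 1], repeat=5):
    if path[2] != 0:
        continue
    p = path_prob(path)
    cond += p                                # Pr{X2 = 0}
    if A_plus(path) and A_minus(path):
        joint += p
    if A_plus(path):
        pA_plus += p
    if A_minus(path):
        pA_minus += p

# (5.34): Pr{A+, A- | X2} = Pr{A+ | X2} Pr{A- | X2}
lhs = joint / cond
rhs = (pA_plus / cond) * (pA_minus / cond)
print(abs(lhs - rhs) < 1e-12)                # → True
```

In steady state Pr{X2 = 0} comes out to π_0 = 4/7, and the two sides of (5.34) agree to floating-point precision, as the Markov property guarantees.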
This symmetric form says that, conditional on the current state, past and future are statistically independent. Dividing both sides of (5.34) by Pr{A+ | X_n} then yields
Pr{A− | X_n, A+} = Pr{A− | X_n}.   (5.35)
By letting A− be X_{n−1} and A+ be X_{n+1}, X_{n+2}, . . . , X_{n+k}, this becomes

Pr{X_{n−1} | X_n, X_{n+1}, . . . , X_{n+k}} = Pr{X_{n−1} | X_n}.
This is the equivalent form to (5.31) for the backward chain, and says that the backward
chain is also a Markov chain. By Bayes' law, Pr{X_{n−1} | X_n} can be evaluated as

Pr{X_{n−1} | X_n} = Pr{X_n | X_{n−1}} Pr{X_{n−1}} / Pr{X_n}.   (5.36)

Since the distribution of X_n can vary with n, Pr{X_{n−1} | X_n} can also depend on n. Thus
the backward Markov chain is not necessarily homogeneous. This should not be surprising,
since the forward chain was defined with some arbitrary distribution for the initial state at
time 0. This initial distribution was not relevant for equations (5.31) to (5.33), but as soon
as Pr{A− | X_n} was introduced, the initial state implicitly became a part of each equation
and destroyed the symmetry between past and future. For a chain in steady-state, however,
Pr{X_n = j} = Pr{X_{n−1} = j} = π_j for all j, and we have

Pr{X_{n−1} = j | X_n = i} = P_{ji} π_j / π_i.   (5.37)

Thus the backward chain is homogeneous if the forward chain is in steady-state. For a chain
with steady-state probabilities {π_i; i ≥ 0}, we define the backward transition probabilities
P*_{ij} as

π_i P*_{ij} = π_j P_{ji}.   (5.38)

From (5.36), the backward transition probability P*_{ij}, for a Markov chain in steady-state,
is then equal to Pr{X_{n−1} = j | X_n = i}, the probability that the previous state is j given
that the current state is i.
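As a concrete illustration of (5.38), the following sketch computes the backward transition probabilities for a small chain. The three-state cyclic chain below is a hypothetical example, chosen so that the backward chain differs visibly from the forward one:

```python
# Hypothetical 3-state chain; P is an assumption for illustration.
# P is doubly stochastic, so the steady-state distribution is uniform.
P = [[0.0, 0.8, 0.2],
     [0.2, 0.0, 0.8],
     [0.8, 0.2, 0.0]]
pi = [1/3, 1/3, 1/3]
n = len(P)

# Backward transition probabilities from (5.38): P*_ij = pi_j P_ji / pi_i.
P_star = [[pi[j] * P[j][i] / pi[i] for j in range(n)] for i in range(n)]

# Each row of P* is a probability distribution over the previous state ...
for row in P_star:
    assert abs(sum(row) - 1.0) < 1e-12

# ... and pi is also the steady-state distribution of the backward chain.
for j in range(n):
    assert abs(sum(pi[i] * P_star[i][j] for i in range(n)) - pi[j]) < 1e-12

print([round(x, 12) for x in P_star[0]])   # → [0.0, 0.2, 0.8]
```

Since π is uniform here, P* is just the transpose of P: the forward chain tends to cycle 0 → 1 → 2 → 0, and the backward chain cycles in the opposite direction, as the printed first row shows.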
Now consider a new Markov chain with transition probabilities {P*_{ij}}. Over some segment
of time for which both this new chain and the old chain are in steady-state, the set of states
generated by the new chain is statistically indistinguishable from the backward running
sequence of states from the original chain. It is somewhat simpler, in talking about forward
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at University of Illinois, Urbana-Champaign.