Discrete-time stochastic processes

…large j, then this sum of products will converge and the states will be positive-recurrent. For the simple birth-death process of Figure 5.1, if we define ρ = q/p, then ρ_j = ρ for all j. For ρ < 1, (5.30) simplifies to π_i = π_0 ρ^i for all i ≥ 0, with π_0 = 1 − ρ, and thus π_i = (1 − ρ)ρ^i for i ≥ 0. Exercise 5.2 shows how to find F_ij(1) for all i, j in the case where ρ ≥ 1. For the simple birth-death chain of Figure 5.1, we have seen that the chain is transient if ρ > 1. This is not necessarily so when self-transitions exist, but the chain is still either transient or null-recurrent. An important example of this arises in Exercise 6.1.

5.4 Reversible Markov chains

Many important Markov chains have the property that, in steady state, the sequence of states looked at backward in time, i.e., . . . , X_{n+1}, X_n, X_{n−1}, . . . , has the same probabilistic structure as the sequence of states running forward in time. This equivalence between the forward chain and the backward chain leads to a number of results that are intuitively quite surprising and that are quite difficult to derive without using this equivalence. We study these results here and then extend them in Chapter 6 to Markov processes with a discrete state space. This set of ideas, and its use in queueing and queueing networks, has been an active area of queueing research for many years. It leads to many simple results for systems that initially look very complex. We only scratch the surface here and refer the interested reader to [13] for a more comprehensive treatment.

Before going into reversibility, we describe the backward chain for an arbitrary Markov chain. The defining characteristic of a Markov chain {X_n; n ≥ 0} is that for all n ≥ 0,

    Pr{X_{n+1} | X_n, X_{n−1}, . . . , X_0} = Pr{X_{n+1} | X_n}.        (5.31)

For homogeneous chains, which we have been assuming throughout, Pr{X_{n+1} = j | X_n = i} = P_ij, independent of n.
For any k > 1, we can extend (5.31) to get

    Pr{X_{n+k}, X_{n+k−1}, . . . , X_{n+1} | X_n, X_{n−1}, . . . , X_0}
        = Pr{X_{n+k} | X_{n+k−1}} Pr{X_{n+k−1} | X_{n+k−2}} · · · Pr{X_{n+1} | X_n}
        = Pr{X_{n+k}, X_{n+k−1}, . . . , X_{n+1} | X_n}.        (5.32)

By letting A+ be any event defined on the states X_{n+1} to X_{n+k} and letting A− be any event defined on X_0 to X_{n−1}, this can be written more succinctly as

    Pr{A+ | X_n, A−} = Pr{A+ | X_n}.        (5.33)

This says that, given the state X_n, any future event A+ is statistically independent of any past event A−. This result, namely that past and future are independent given the present state, is equivalent to (5.31) as a definition of a Markov chain, but it has the advantage of showing the symmetry between past and future. This symmetry is best brought out by multiplying both sides of (5.33) by Pr{A− | X_n}, obtaining

    Pr{A+, A− | X_n} = Pr{A+ | X_n} Pr{A− | X_n}.        (5.34)

This symmetric form says that, conditional on the current state, past and future are statistically independent. Dividing both sides by Pr{A+ | X_n} then yields

    Pr{A− | X_n, A+} = Pr{A− | X_n}.        (5.35)

By letting A− be X_{n−1} and A+ be X_{n+1}, X_{n+2}, . . . , X_{n+k}, this becomes

    Pr{X_{n−1} | X_n, X_{n+1}, . . . , X_{n+k}} = Pr{X_{n−1} | X_n}.

This is the form of (5.31) for the backward chain, and says that the backward chain is also a Markov chain. By Bayes' law, Pr{X_{n−1} | X_n} can be evaluated as

    Pr{X_{n−1} | X_n} = Pr{X_n | X_{n−1}} Pr{X_{n−1}} / Pr{X_n}.        (5.36)

Since the distribution of X_n can vary with n, Pr{X_{n−1} | X_n} can also depend on n. Thus the backward Markov chain is not necessarily homogeneous. This should not be surprising, since the forward chain was defined with some arbitrary distribution for the initial state at time 0.
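The dependence of the backward probabilities on n can be checked exactly with a little matrix arithmetic. The Python sketch below (my illustration, not from the text) evaluates Pr{X_{n−1} = j | X_n = i} via Bayes' law (5.36) for an arbitrary two-state example chain: with a non-stationary start the answer changes with n, while with a stationary start it does not:

```python
import numpy as np

# Arbitrary illustrative 2-state chain (not from the text).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def backward(mu0, n, i, j):
    """Pr{X_{n-1}=j | X_n=i} for initial distribution mu0, via Bayes' law (5.36)."""
    mu_prev = mu0 @ np.linalg.matrix_power(P, n - 1)   # distribution of X_{n-1}
    mu_now = mu_prev @ P                               # distribution of X_n
    return P[j, i] * mu_prev[j] / mu_now[i]

# Non-stationary start: the backward probability depends on n,
# so the backward chain is not homogeneous.
mu0 = np.array([1.0, 0.0])
assert backward(mu0, 1, 0, 1) != backward(mu0, 10, 0, 1)

# Stationary start (pi P = pi): homogeneous backward chain,
# equal to P_ji * pi_j / pi_i for every n, as in (5.37).
pi = np.array([0.8, 0.2])          # solves pi P = pi for this P
assert np.allclose(pi @ P, pi)
for n in (1, 5, 10):
    assert np.isclose(backward(pi, n, 0, 1), pi[1] * P[1, 0] / pi[0])
```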
This initial distribution was not relevant for equations (5.31) to (5.33), but as soon as Pr{A− | X_n} was introduced, the initial state implicitly became part of each equation and destroyed the symmetry between past and future. For a chain in steady state, however, Pr{X_n = j} = Pr{X_{n−1} = j} = π_j for all j, and we have

    Pr{X_{n−1} = j | X_n = i} = P_ji π_j / π_i.        (5.37)

Thus the backward chain is homogeneous if the forward chain is in steady state. For a chain with steady-state probabilities {π_i; i ≥ 0}, we define the backward transition probabilities P*_ij as

    π_i P*_ij = π_j P_ji.        (5.38)

From (5.36), the backward transition probability P*_ij, for a Markov chain in steady state, is then equal to Pr{X_{n−1} = j | X_n = i}, the probability that the previous state is j given that the current state is i.

Now consider a new Markov chain with transition probabilities {P*_ij}. Over some segment of time for which both this new chain and the old chain are in steady state, the set of states generated by the new chain is statistically indistinguishable from the backward-running sequence of states from the original chain. It is somewhat simpler, in talking about forward …
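Definition (5.38) can be exercised concretely: in matrix form it reads P* = D⁻¹ Pᵀ D with D = diag(π). The Python sketch below (an illustration under an arbitrary example chain, not from the text) constructs P* and verifies two facts implied by (5.38): that P* is a valid transition matrix, and that it has the same steady-state distribution π as the forward chain:

```python
import numpy as np

# Arbitrary illustrative 3-state chain (not from the text).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Steady-state probabilities: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Backward transition probabilities from (5.38): P*_ij = pi_j * P_ji / pi_i.
P_star = (P.T * pi) / pi[:, None]

# Rows of P* sum to 1 because sum_j pi_j P_ji = (pi P)_i = pi_i.
assert np.allclose(P_star.sum(axis=1), 1.0)
# pi is also stationary for the backward chain.
assert np.allclose(pi @ P_star, pi)
```

A chain is then called reversible exactly when P* = P, i.e. when π_i P_ij = π_j P_ji for all i, j; for a generic chain like the one above, P* differs from P even though both share the steady-state π.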

This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.
