e backward to a transition is exponential with parameter νi. This means that the process running backwards is again a Markov process with transition probabilities P∗ij and transition rates νi. Figure 6.8 helps to illustrate this.
[Figure 6.8: a timeline showing successive intervals in state i, state j (rate νj) from t1 to t2, and state k.]

Figure 6.8: The forward process enters state j at time t1 and departs at t2. The backward process enters state j at time t2 and departs at t1. In any sample function, as illustrated, the interval in a given state is the same in the forward and backward process.
Given X(t) = j, the time forward to the next transition and the time backward to the previous transition are each exponential with rate νj.

Since the steady state probabilities {pi; i ≥ 0} for the Markov process are determined by

    pi = (πi/νi) / Σk (πk/νk),    (6.52)

and since {πi; i ≥ 0} and {νi; i ≥ 0} are the same for the forward and backward processes,
we see that the steady state probabilities in the backward Markov process are the same as
the steady state probabilities in the forward process. This result can also be seen by the
correspondence between sample functions in the forward and backward processes.
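To make (6.52) concrete, here is a short Python sketch (the three-state process and its rates are hypothetical, chosen only for illustration). It computes the embedded-chain probabilities {πi}, forms pi = (πi/νi)/Σk(πk/νk), and checks that the result satisfies the steady state equations of the process:

```python
import numpy as np

# Hypothetical 3-state process; Q[i, j] = q_ij for i != j (rates invented).
Q = np.array([[0.0, 1.0, 0.0],
              [2.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])
nu = Q.sum(axis=1)            # nu_i: total transition rate out of state i
P = Q / nu[:, None]           # embedded-chain transition probabilities P_ij

# Steady state pmf pi of the embedded chain: solve pi = pi P with sum(pi) = 1.
n = len(nu)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Equation (6.52): weight each state by its mean holding time 1/nu_i.
p = (pi / nu) / (pi / nu).sum()
print(p)                      # ≈ [6/11, 3/11, 2/11] for these rates

# Sanity check: p solves the process steady state equations
# sum_i p_i q_ij = p_j nu_j for all j.
assert np.allclose(p @ Q, p * nu)
```

The weighting by 1/νi is the whole content of (6.52): the embedded chain gives the fraction of transitions into each state, and dividing by νi converts that into the fraction of real time spent there.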
The transition rates in the backward process are defined by q∗ij = νi P∗ij.

254 CHAPTER 6. MARKOV PROCESSES WITH COUNTABLE STATE SPACES

Using (6.51), we have

    q∗ij = νi P∗ij = νi πj Pji / πi = νi πj qji / (πi νj).    (6.53)
πi ∫j (6.53) From (6.52), we note that pj = απj /∫j and pi = απi /∫i for the same value of α. Thus the
∗
ratio of πj /∫j to πi /∫i is pj /pi . This simpliﬁes (6.53) to qij = pj qj i /pi , and
∗
pi qij = pj qj i . (6.54) This equation can be used as an alternate deﬁnition of the backward transition rates. To
interpret this, let δ be a vanishingly small increment of time and assume the process is in
steady state at time t. Then δ pj qj i ≈ Pr {X (t) = j } Pr {X (t + δ ) = i  X (t) = j } whereas
∗
δ pi qij ≈ Pr {X (t + δ ) = i} Pr {X (t) = j  X (t + δ ) = i}.
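A quick numerical illustration of (6.54), using a hypothetical three-state cycle 0 → 1 → 2 → 0 with unit rates (invented for this sketch): solving q∗ij = pj qji/pi yields the reversed cycle, so here the backward process differs from the forward one.

```python
import numpy as np

# Hypothetical 3-state cycle 0 -> 1 -> 2 -> 0, all rates equal to 1.
Q = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
p = np.full(3, 1/3)           # by symmetry, the uniform pmf is steady state

# Backward transition rates from (6.54): q*_ij = p_j q_ji / p_i.
Qstar = (Q.T * p) / p[:, None]

print(Qstar)                  # the reversed cycle 0 -> 2 -> 1 -> 0
print(np.allclose(Qstar, Q))  # False: this process is not reversible
```

With a uniform steady state, (6.54) reduces to q∗ij = qji, i.e., the backward rate matrix is just the transpose of the forward one.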
A Markov process is defined to be reversible if q∗ij = qij for all i, j. If the embedded Markov chain is reversible (i.e., P∗ij = Pij for all i, j), then one can repeat the above steps using P∗ij and q∗ij in place of Pij and qij to see that pi qij = pj qji for all i, j. Thus, if the embedded chain is reversible, the process is also. Similarly, if the Markov process is reversible, the above argument can be reversed to see that the embedded chain is reversible. Thus, we
have the following useful lemma.

Lemma 6.4. Assume that steady state probabilities {pi; i ≥ 0} exist in an irreducible Markov process (i.e., (6.20) has a solution and Σi pi νi < ∞). Then the Markov process is reversible if and only if the embedded chain is reversible.
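Lemma 6.4 can be sanity-checked numerically. For a hypothetical birth-death process (rates invented for illustration; every birth-death process is reversible), both the process condition pi qij = pj qji and the embedded-chain condition πi Pij = πj Pji hold:

```python
import numpy as np

# Hypothetical 3-state birth-death process (rates invented for illustration).
Q = np.array([[0.0, 1.0, 0.0],
              [2.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])
nu = Q.sum(axis=1)
P = Q / nu[:, None]            # embedded-chain transition probabilities

# Steady state pmfs, found by solving the respective balance equations.
p  = np.array([6.0, 3.0, 2.0]) / 11.0   # process
pi = np.array([1.0, 2.0, 1.0]) / 4.0    # embedded chain

# Reversibility of each: the matrix of flows must be symmetric.
process_rev  = np.allclose(p[:, None] * Q, (p[:, None] * Q).T)    # p_i q_ij = p_j q_ji
embedded_rev = np.allclose(pi[:, None] * P, (pi[:, None] * P).T)  # pi_i P_ij = pi_j P_ji
print(process_rev, embedded_rev)        # both True, as Lemma 6.4 requires
```

The lemma says these two booleans can never disagree for an irreducible process satisfying its hypotheses.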
One can find the steady state probabilities of a reversible Markov process and simultaneously show that it is reversible by the following useful theorem (which is directly analogous to Theorem 5.6 of Chapter 5).
Theorem 6.4. For an irreducible Markov process, assume that {pi; i ≥ 0} is a set of nonnegative numbers summing to 1, satisfying Σi pi νi < ∞, and satisfying

    pi qij = pj qji    for all i, j.    (6.55)

Then {pi; i ≥ 0} is the set of steady state probabilities for the process, pi > 0 for all i, the process is reversible, and the embedded chain is positive recurrent.
Proof: Summing (6.55) over i, we obtain

    Σi pi qij = pj νj    for all j.

These, along with Σi pi = 1, are the steady state equations for the process. These equations have a solution, and by Theorem 6.2, pi > 0 for all i, the embedded chain is positive recurrent, and pi = limt→∞ Pr{X(t) = i}. Comparing (6.55) with (6.54), we see that qij = q∗ij, so the process is reversible.
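The way Theorem 6.4 is used in practice can be sketched for a hypothetical truncated M/M/1 queue (parameters invented for illustration): guess pi proportional to (λ/μ)^i from detailed balance, then verify (6.55), which in turn implies the full steady state equations.

```python
import numpy as np

# Hypothetical truncated M/M/1 queue: arrivals at rate lam, services at rate mu.
lam, mu, n = 1.0, 2.0, 5
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = lam      # birth: i -> i+1
    Q[i + 1, i] = mu       # death: i+1 -> i
nu = Q.sum(axis=1)

# Guess from detailed balance: p_i proportional to (lam/mu)**i, normalized.
p = (lam / mu) ** np.arange(n)
p /= p.sum()

# Verify hypothesis (6.55) of Theorem 6.4: the flow matrix is symmetric ...
assert np.allclose(p[:, None] * Q, (p[:, None] * Q).T)
# ... which implies the steady state equations sum_i p_i q_ij = p_j nu_j.
assert np.allclose(p @ Q, p * nu)
print(p)
```

The guess costs one line; the theorem converts the cheap check of (6.55) into a proof that {pi} is the steady state and that the process is reversible.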
There are many irreducible Markov processes that are not reversible but for which the backward process has interesting properties that can be deduced, at least intuitively, from the forward process.

6.6. REVERSIBILITY FOR MARKOV PROCESSES 255

Jackson networks (to be studied shortly) and many more complex networks of queues fall into this category. The following simple theorem allows us to use whatever combination of intuitive reasoning and wishful thinking we desire to guess both the transition rates q∗ij in the backward process and the steady state probabilities, and to
then verify rigorously that the guess is correct. One might think that guessing is somehow unscientific, but in fact, the art of educated guessing and intuitive reasoning is at the heart of all good scientific work.
Theorem 6.5. For an irreducible Markov process, assume that a set of positive numbers {pi; i ≥ 0} satisfy Σi pi = 1 and Σi pi νi < ∞. Also assume that a set of nonnegative numbers {q∗ij} satisfy the two sets of equations

    Σj qij = Σj q∗ij    for all i,    (6.56)

    pi qij = pj q∗ji    for all i, j.    (6.57)

Then {pi} is the set of steady state probabilities for the process, pi > 0 for all i, the embedded chain is positive recurrent, and {q∗ij} is the set of transition rates in the backward process.
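A minimal sketch of applying Theorem 6.5, on a hypothetical three-state cycle 0 → 1 → 2 → 0 with unit rates (invented for illustration): guess a uniform {pi}, guess that the backward process is the reversed cycle, and verify (6.56) and (6.57) mechanically.

```python
import numpy as np

# Hypothetical 3-state cycle 0 -> 1 -> 2 -> 0, all rates equal to 1.
Q = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
p = np.full(3, 1/3)        # guessed steady state pmf (uniform)
Qstar = Q.T                # guessed backward rates: the reversed cycle

# (6.56): sum_j q_ij = sum_j q*_ij for all i (same departure rate nu_i).
check_656 = np.allclose(Q.sum(axis=1), Qstar.sum(axis=1))
# (6.57): p_i q_ij = p_j q*_ji for all i, j.
check_657 = np.allclose(p[:, None] * Q, (p[:, None] * Qstar).T)
print(check_656, check_657)   # True True: the guess is verified
```

Both checks pass, so by the theorem the uniform pmf is the steady state and the reversed cycle is the backward process, even though this process is not reversible.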
Spring '09, R. Srikant