Since Σ_{i≤k} π_i/ν_i is finite for any integer k, it means that the probability of states larger than k is increasing with time. In the limit t → ∞, the state disperses over an infinite set of possibilities with increasingly small probabilities for each. To look at this in another way, the expected time to the next transition, starting in steady state for the embedded chain, is ∞.

This is probably not a phenomenon that can be understood intuitively. Perhaps the best one can hope for is to see that it does not violate the things we know.

6.2.4 The equations for the steady state process probabilities
Let us now come back to the case where Σ_i π_i/ν_i < ∞, which is the case in virtually all applications. We have seen that a Markov process can be specified in terms of the transition rates q_ij = ν_i P_ij, and it is useful to express the steady state equations for the p_j directly in terms of the q_ij rather than indirectly in terms of the embedded chain. First note from (6.18) that if Σ_i π_i/ν_i < ∞, then Σ_j p_j = 1, so that the limiting time averages behave as we would expect in all but the peculiar case where Σ_i π_i/ν_i = ∞. We also note that (6.18) specifies the embedded steady state probabilities π_i in terms of the p_i. Since π_i = p_i ν_i α, where α is independent of i, we can use the normalization Σ_i π_i = 1 to obtain

    π_i = (p_i ν_i) / (Σ_k p_k ν_k).        (6.19)

We can substitute π_i as given by (6.19) into (6.6), obtaining p_j ν_j = Σ_i p_i ν_i P_ij for each state j. Since ν_i P_ij = q_ij,

    p_j ν_j = Σ_i p_i q_ij;        Σ_i p_i = 1.        (6.20)

This set of equations is known as the steady state equations for the Markov process. The condition Σ_i p_i = 1 has been added as a normalization condition. Equation (6.20) has a nice interpretation in that the term on the left is the steady state rate in time at which transitions occur out of state j and the term on the right is the rate in time at which transitions occur into state j. Since the total number of entries to j must differ by at most 1 from the exits from j for each sample path, this equation is not surprising.
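As a concrete sketch (the chain and the rates below are illustrative assumptions, not from the text), the steady state equations (6.20) can be set up and verified numerically for a small truncated birth-death process, for which p_{i+1} = p_i λ/μ solves (6.20) exactly:

```python
# Sketch: check the steady state process equations (6.20) on a small
# truncated birth-death chain (a hypothetical M/M/1-like queue with
# arrival rate lam, service rate mu, truncated at N states so that the
# state space is finite). All rates here are illustrative assumptions.

N = 6                 # states 0..N-1 (truncation level, assumed)
lam, mu = 1.0, 2.0    # hypothetical arrival and service rates

# Transition rates q[i][j]; nu[i] = sum_j q[i][j] is the total rate out of i.
q = [[0.0] * N for _ in range(N)]
for i in range(N - 1):
    q[i][i + 1] = lam     # arrival moves the queue up
    q[i + 1][i] = mu      # departure moves the queue down
nu = [sum(row) for row in q]

# For a birth-death chain, (6.20) is solved by p_{i+1} = p_i * lam/mu.
p = [(lam / mu) ** i for i in range(N)]
total = sum(p)
p = [x / total for x in p]            # normalization: sum_i p_i = 1

# Check (6.20): p_j * nu_j == sum_i p_i * q_ij for every state j.
for j in range(N):
    lhs = p[j] * nu[j]
    rhs = sum(p[i] * q[i][j] for i in range(N))
    assert abs(lhs - rhs) < 1e-12, (j, lhs, rhs)

print("sum p =", sum(p))   # 1 up to roundoff
```

For a birth-death chain, (6.20) reduces to a balance relation across each edge of the chain, which is why the geometric form solves it; a general chain would require solving the linear system in (6.20) directly.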
We know that (6.6) has a unique solution with all π_i > 0 if the embedded chain is positive recurrent, and thus (6.20) also has a unique solution with all p_i > 0 under the conditions that the embedded chain is positive recurrent and Σ_i π_i/ν_i < ∞. This final condition is not quite what we want, since we would like to solve (6.20) directly without worrying about the embedded chain. If we find a solution to (6.20), however, and if Σ_i p_i ν_i < ∞ in that solution, then the corresponding set of π_i from (6.19) must satisfy (6.6) and be the unique steady state solution for the embedded chain. Thus the solution for the p_i must be the corresponding steady state solution for the Markov process. This is summarized in the following theorem.
Theorem 6.2. Assume an irreducible Markov process and let {p_i; i ≥ 0} be a solution to (6.20). If Σ_i p_i ν_i < ∞, then, first, that solution is unique, second, each p_i is positive, and third, the embedded Markov chain is positive recurrent with the steady state probabilities π_i satisfying (6.19). Also, if the embedded chain is positive recurrent and Σ_i π_i/ν_i < ∞, then the set of p_i satisfying (6.18) is the unique solution to (6.20).

6.2.5 The sampled-time approximation again

For an alternative view of the probabilities {p_i}, consider the special case (but the typical case) where the transition rates {ν_i} are bounded. Consider the sampled-time approximation to the process for a given increment size δ ≤ [max_i ν_i]^{-1} (see Figure 6.3). Let
{w_i; i ≥ 0} be the set of steady state probabilities for the sampled-time chain, assuming that they exist. These steady state probabilities satisfy

    w_j = Σ_{i≠j} w_i q_ij δ + w_j (1 − ν_j δ);        w_j ≥ 0;        Σ_j w_j = 1.        (6.21)
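A minimal numerical sketch of (6.21), under the same kind of illustrative assumptions as before (a hypothetical bounded-rate birth-death chain): build the sampled-time transition matrix, iterate to its steady state, and check that the resulting w_j satisfy w_j ν_j = Σ_{i≠j} w_i q_ij:

```python
# Sketch: fixed point of the sampled-time equations (6.21) for a small
# hypothetical birth-death chain (rates lam, mu are illustrative).

N = 5
lam, mu = 1.0, 2.0
q = [[0.0] * N for _ in range(N)]
for i in range(N - 1):
    q[i][i + 1] = lam
    q[i + 1][i] = mu
nu = [sum(row) for row in q]

delta = 1.0 / max(nu)      # increment size delta <= [max_i nu_i]^{-1}

# Sampled-time transition matrix: P[i][j] = q_ij*delta for i != j,
# and P[j][j] = 1 - nu_j*delta, so each row sums to 1.
P = [[q[i][j] * delta if i != j else 1.0 - nu[i] * delta
     for j in range(N)] for i in range(N)]

# Iterate w <- wP toward the steady state of the sampled-time chain.
w = [1.0 / N] * N
for _ in range(20000):
    w = [sum(w[i] * P[i][j] for i in range(N)) for j in range(N)]

# Rearranging (6.21) gives w_j*nu_j = sum_{i != j} w_i*q_ij, i.e. (6.20).
for j in range(N):
    rhs = sum(w[i] * q[i][j] for i in range(N) if i != j)
    assert abs(w[j] * nu[j] - rhs) < 1e-9
```

Since each row of the sampled-time matrix sums to 1, the iteration preserves Σ_j w_j = 1, matching the normalization in (6.21).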
The first equation simplifies to w_j ν_j = Σ_{i≠j} w_i q_ij, which is the same as (6.20). It follows that the steady state probabilities {p_i; i ≥ 0} for the process are the same as the steady state probabilities {w_i; i ≥ 0} for the sampled-time approximation. Note that this is not an approximation; w_i is exactly equal to p_i for all values of δ ≤ 1/sup_i ν_i. We shall see later that the dynamics of a Markov process are not quite so well modeled by the sampled-time approximation except in the limit δ → 0.

6.2.6 Pathological cases

The example in Figure 6.5 gives some insight into the case of positive recurrent embedded chains with Σ_i π_i/ν_i = ∞. It models a variation of an M/M/1 queue in which the server
becomes increasingly rattled and slow as the queue builds up, and the customers become almost equally discouraged about entering. The downward drift in the transitions is more than overcome by the slowdown in higher states. Transitions continue to occur, but the number of transitions per unit time goes to 0 with increasing time. Exercise 6.1 gives some added insight into this type of situation.
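A small numerical sketch of this pathology (the rates below are hypothetical, chosen in the spirit of such an example rather than taken from the figure): with embedded-chain probabilities π_i ∝ (2/3)^i and holding rates ν_i = 2^{-i}, the ratio π_i/ν_i ∝ (4/3)^i, so Σ_i π_i/ν_i diverges and every candidate p_j from (6.18) is driven to 0:

```python
# Sketch: why sum_i pi_i/nu_i = infinity leaves no state with positive
# steady-state probability in time. Hypothetical rates in the spirit of
# the rattled-server example: the embedded chain drifts downward
# (pi_i ~ (2/3)^i), but holding rates nu_i = 2^{-i} shrink even faster.

K = 60  # truncation for the numerics (the real state space is infinite)
pi = [(1 / 3) * (2 / 3) ** i for i in range(K)]   # normalized over i >= 0
nu = [2.0 ** (-i) for i in range(K)]

# Partial sums of pi_i/nu_i ~ (1/3)*(4/3)^i grow without bound:
for k in (10, 20, 40):
    print(k, sum(pi[i] / nu[i] for i in range(k)))

# Each candidate p_j = (pi_j/nu_j) / sum_i pi_i/nu_i from (6.18) is
# squeezed toward 0 as the truncation grows.
p0_trunc = (pi[0] / nu[0]) / sum(pi[i] / nu[i] for i in range(K))
print(p0_trunc)   # tiny, and -> 0 as K -> infinity
```

The expected holding time in state i here is 1/ν_i = 2^i, which is why time accumulates far faster than transitions: the fraction of transitions spent in any fixed state stays positive, but the fraction of *time* does not.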
It is also possible for (6.20) to have...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.