recurrent chain; this ensures that the steady-state probabilities for the embedded chain exist and are positive. It is possible for the denominator in (5.62) to be infinite. This can happen either because U(i) is infinite for some i, or because the sum does not converge (for example, there could be a countably infinite number of states and U(i) could be proportional to 1/π_i). When the denominator is infinite, we say that the probabilities {p_i} do not exist; this is a bizarre special case, and it is discussed further in Section 6.1, but it does not have to be specifically excluded from the following analysis.
Lemma 5.6. Consider a semi-Markov process with an irreducible recurrent embedded chain {X_n; n ≥ 0}. Given X_0 = i, let M_ij(t) be the number of transitions into a given state j in the interval (0, t]. Then {M_ij(t); t ≥ 0} is a delayed renewal process (or, if j = i, an ordinary renewal process).
Proof: Let M(t) be the total number of state transitions (over all states) that occur in the interval (0, t]. From Lemma 5.5, lim_{t→∞} M(t) = ∞ with probability 1. Let N_ij(n) be the number of transitions into state j that occur in the embedded Markov chain by the nth transition of the embedded chain. From Lemma 5.4, {N_ij(n); n ≥ 0} is a delayed renewal process. It follows from Lemma 3.2 of Chapter 3 that lim_{n→∞} N_ij(n) = ∞ with probability 1. Since M_ij(t) is the number of transitions into j during the first M(t) transitions of the embedded chain, we have M_ij(t) = N_ij(M(t)). Thus

    lim_{t→∞} M_ij(t) = lim_{t→∞} N_ij(M(t)) = lim_{n→∞} N_ij(n) = ∞.

It follows that the time W_1 at which the first transition into state j occurs, and the subsequent interval W_2 until the next transition into j, are both finite with probability 1. Subsequent intervals have the same distribution as W_2, and all intervals are independent, so
{M_ij(t); t ≥ 0} is a delayed renewal process with inter-renewal intervals {W_k; k ≥ 1}. If i = j, then all W_k are identically distributed and we have an ordinary renewal process,
completing the proof.
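The bookkeeping in this proof is easy to mirror in a short simulation. The sketch below builds a hypothetical three-state semi-Markov process with exponential holding times (the chain, the holding-time means, and the helper `simulate_semi_markov` are all illustrative, not from the text) and checks the counting identity M_ij(t) = N_ij(M(t)) at t = t_max:

```python
import random

def simulate_semi_markov(P, mean_hold, x0, t_max, rng):
    """Simulate a semi-Markov process: an embedded chain with transition
    matrix P, and exponential holding times with state-dependent means.
    Returns the (time, next_state) pairs for transitions in (0, t_max]."""
    t, x, path = 0.0, x0, []
    while True:
        t += rng.expovariate(1.0 / mean_hold[x])   # sojourn in state x
        if t > t_max:
            return path
        x = rng.choices(range(len(P)), weights=P[x])[0]
        path.append((t, x))

rng = random.Random(1)
P = [[0.0, 0.7, 0.3],        # illustrative embedded transition matrix
     [0.5, 0.0, 0.5],
     [0.4, 0.6, 0.0]]
mean_hold = [1.0, 2.0, 0.5]  # U(i): mean holding time in state i
path = simulate_semi_markov(P, mean_hold, x0=0, t_max=1000.0, rng=rng)

j = 2
M_t = len(path)                             # M(t): all transitions in (0, t]
M_ij = sum(1 for _, x in path if x == j)    # M_ij(t): transitions into j
N_ij = [0]                                  # N_ij(n) for the embedded chain
for _, x in path:
    N_ij.append(N_ij[-1] + (x == j))
assert M_ij == N_ij[M_t]                    # M_ij(t) = N_ij(M(t))
```

The assertion holds by construction here; the content of the lemma is the further step that, because N_ij(n) is a (delayed) renewal counting process and M(t) → ∞, the inter-entry intervals W_k into j are finite with probability 1 and i.i.d. after the first.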
Let W(j) be the mean inter-renewal interval between successive transitions into state j (i.e., the mean of the inter-renewal intervals W_2, W_3, ... in {M_ij(t); t ≥ 0}). Consider a delayed renewal-reward process defined on {M_ij(t); t ≥ 0} for which R(t) = 1 whenever X(t) = j (see Figure 5.9). Define p_j as the time-average fraction of time spent in state j. Then, if W(j) < ∞, Theorems 3.8 and 3.12 of Chapter 3 state that
    p_j = lim_{t→∞} (1/t) ∫_0^t R(τ) dτ = U(j)/W(j)    with probability 1.    (5.63)
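As a numeric sanity check on (5.63), the sketch below simulates a two-state semi-Markov process whose embedded chain simply alternates, and compares the time-average fraction of time in state j against U(j)/W(j), with W(j) estimated as the empirical mean interval between successive entries into j. The chain and holding-time means are hypothetical:

```python
import random

rng = random.Random(3)
U = [1.5, 0.5]        # illustrative mean (exponential) holding times
j = 0                 # state whose time fraction we track

t, x = 0.0, 0
time_in_j, entry_times = 0.0, []
for _ in range(100_000):              # simulate 100,000 transitions
    hold = rng.expovariate(1.0 / U[x])
    if x == j:
        time_in_j += hold
    t += hold
    x = 1 - x                         # embedded chain alternates 0 <-> 1
    if x == j:
        entry_times.append(t)         # a renewal: transition into j

gaps = [b - a for a, b in zip(entry_times, entry_times[1:])]
W_j = sum(gaps) / len(gaps)           # empirical mean inter-renewal interval
p_j = time_in_j / t                   # time-average fraction in state j
assert abs(p_j - U[j] / W_j) < 0.02   # (5.63): p_j = U(j)/W(j)
```

For this alternating chain W(0) = U(0) + U(1) = 2, so both sides should come out near U(0)/2 = 0.75.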
We can also investigate the limit, as t → ∞, of the probability that X(t) = j. This is equal to E[R(t)] for the renewal-reward process above. From Equation (3.72) of Chapter 3, if the distribution of the inter-renewal time is nonarithmetic, then

    p_j = lim_{t→∞} E[R(t)] = U(j)/W(j).    (5.64)

Next we must express the mean inter-renewal time, W(j), in terms of more accessible quantities. From Theorem 3.9 of Chapter 3,
    lim_{t→∞} M_ij(t)/t = 1/W(j)    with probability 1.    (5.65)

[Figure 5.9: The delayed renewal-reward process for time in state j. The reward is one from an entry into state j, say at the nth transition of the embedded chain, until the next transition out of state j. The expected duration of such an interval U_n is U(j). The inter-renewal interval W_k, assuming the kth occurrence of state j happens on the nth transition of the embedded chain, lasts until the next entry into state j, with expected duration W(j).]

As before, M_ij(t) = N_ij(M(t)) where, given X_0 = i, M(t) is the total number of transitions in (0, t] and N_ij(n) is the number of transitions into state j in the embedded Markov chain by the nth transition. Lemma 5.5 shows that lim_{t→∞} M(t) = ∞ with probability 1, so
    lim_{t→∞} M_ij(t)/M(t) = lim_{t→∞} N_ij(M(t))/M(t) = lim_{n→∞} N_ij(n)/n = π_j.    (5.66)

Combining (5.65) and (5.66), we have
    1/W(j) = lim_{t→∞} M_ij(t)/t
           = lim_{t→∞} [M_ij(t)/M(t)] [M(t)/t]                        (5.67)
           = lim_{t→∞} [M_ij(t)/M(t)] lim_{t→∞} [M(t)/t]
           = π_j lim_{t→∞} M(t)/t.                                    (5.68)

Substituting this in (5.63), we see that p_j = π_j U(j) lim_{t→∞}{M(t)/t}. Since lim_{t→∞}{M(t)/t} is independent of j, and since Σ_j p_j = 1, we see that lim_{t→∞}{M(t)/t} must be equal to {Σ_j π_j U(j)}^{-1}, thus yielding (5.62). Summarizing, we have the following theorem:
Theorem 5.9. Assume that the embedded Markov chain of a semi-Markov process is irreducible and positive-recurrent. If Σ_i π_i U(i) < ∞, then, with probability 1, the limiting fraction of time spent in state j is p_j = π_j U(j) / Σ_i π_i U(i).
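Theorem 5.9 lends itself to a quick numerical check. The sketch below uses a hypothetical symmetric three-state embedded chain, so that π_i = 1/3 for every i and the theorem reduces to p_j = U(j)/(U(0)+U(1)+U(2)); the function name and all parameters are illustrative:

```python
import random

def time_fractions(P, U, x0, t_max, rng):
    """Estimate the long-run fraction of time a semi-Markov process
    spends in each state (exponential holding times with means U[i])."""
    n = len(P)
    time_in = [0.0] * n
    t, x = 0.0, x0
    while t < t_max:
        hold = rng.expovariate(1.0 / U[x])
        time_in[x] += min(hold, t_max - t)   # clip the final sojourn at t_max
        t += hold
        x = rng.choices(range(n), weights=P[x])[0]
    return [s / t_max for s in time_in]

# Symmetric embedded chain: doubly stochastic, so pi = (1/3, 1/3, 1/3).
P = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
U = [1.0, 2.0, 3.0]                          # U(i): mean holding times
p_hat = time_fractions(P, U, x0=0, t_max=50_000.0, rng=random.Random(7))
p_theory = [u / sum(U) for u in U]           # [1/6, 1/3, 1/2] by Theorem 5.9
assert all(abs(a - b) < 0.04 for a, b in zip(p_hat, p_theory))
```

Note that states with longer mean holding times collect proportionally more of the elapsed time even though the embedded chain visits all three states equally often, which is exactly the π_j U(j) weighting in the theorem.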
From (5.64), p_j is also equal to lim_{t→∞} Pr{X(t) = j} if the distribution of the inter-renewal interval between transitions into j is nonarithmetic. A sufficient condition for this (assuming that Σ_i π_i U(i) < ∞) is that G_kj(u) be a nonarithmetic distribution for at least one pair of states k,...
Spring '09, R. Srikant