bution function

    Pr{U_n ≤ u} = Σ_j P_{ij}^{n−1} (1 − exp(−ν_j u)).

The epoch of the nth transition is S_n = U_1 + · · · + U_n, which is also a rv (i.e., it is finite with probability 1, so lim_{t→∞} Pr{S_n ≤ t} = 1); i.e., for all n, the nth transition eventually occurs with probability 1.
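As a quick sanity check on this distribution, the sketch below simulates a small hypothetical two-state Markov process (the embedded matrix P and rates ν are illustrative choices, not from the text): it samples the nth holding interval U_n given X_0 = i and compares the empirical CDF at one point u with Σ_j P_{ij}^{n−1}(1 − exp(−ν_j u)).

```python
import math
import random

# Hypothetical example process (illustration only, not from the text):
# embedded-chain transition matrix P and exponential holding rates nu.
P = [[0.5, 0.5],
     [0.5, 0.5]]
nu = [1.0, 2.0]

def sample_Un(i, n, rng):
    """Sample U_n given X_0 = i: run the embedded chain to X_{n-1} = j,
    then draw U_n ~ Exponential(nu_j)."""
    state = i
    for _ in range(n - 1):
        state = 0 if rng.random() < P[state][0] else 1
    return rng.expovariate(nu[state])

rng = random.Random(1)
n, u, trials = 2, 0.7, 200_000
empirical = sum(sample_Un(0, n, rng) <= u for _ in range(trials)) / trials

# Analytic CDF: sum_j P_{ij}^{n-1} (1 - exp(-nu_j u)); for this P every
# (n-1)-step probability from state 0 equals 0.5.
analytic = 0.5 * (1 - math.exp(-nu[0] * u)) + 0.5 * (1 - math.exp(-nu[1] * u))
print(f"empirical {empirical:.4f} vs analytic {analytic:.4f}")
```

With the seed fixed, the two numbers agree to within Monte Carlo noise.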
Now define M_i(t) as the number of transitions (given X_0 = i) that occur by time t. Note that {M_i(t); t ≥ 0} is not a renewal process (neither delayed nor ordinary), but it still satisfies the set identity {M_i(t) ≥ n} = {S_n ≤ t}. This then implies that lim_{t→∞} Pr{M_i(t) ≥ n} = 1 for all n, or in other words,² lim_{t→∞} M_i(t) = ∞ with probability 1. Note that this result, which is stated in the following lemma, does not assume that the Markov process is irreducible or recurrent.
Lemma 6.1. Let M_i(t) be the number of transitions in the interval (0, t] of a Markov process starting with X_0 = i. Then

    lim_{t→∞} M_i(t) = ∞    with probability 1.    (6.10)

6.2.2 Renewals on successive entries to a state

For an irreducible Markov process with X_0 = i, let M_{ij}(t) be the number of transitions into state j over the interval (0, t]. We want to find when this is a delayed renewal counting process. It is clear that the sequence of epochs at which state j is entered form renewal points, since they form renewal points in the embedded Markov chain and the time intervals between transitions depend only on the current state. The questions are whether the first entry to state j must occur within some finite time, and then whether recurrences must occur within finite time. The following lemma answers these questions for the case where the embedded chain is recurrent (either positive recurrent or null recurrent).

² To spell this out, consider the sample function M_i(t, ω) for any ω ∈ Ω. This is nondecreasing in t and thus either has a finite limit or goes to ∞. The set of ω for which this limit is at most n has probability 0, since lim_{t→∞} Pr{M_i(t) ≥ n} = 1, and thus the limit is ∞ with probability 1.
Lemma 6.2. Consider a Markov process with an irreducible recurrent embedded chain {X_n; n ≥ 0}. Given X_0 = i, let {M_{ij}(t); t ≥ 0} be the number of transitions into a given state j in the interval (0, t]. Then {M_{ij}(t); t ≥ 0} is a delayed renewal counting process (or, if j = i, an ordinary renewal counting process).
Proof: Given X_0 = i, let N_{ij}(n) be the number of transitions into state j that occur in the embedded Markov chain by the nth transition of the embedded chain. From Lemma 5.4, {N_{ij}(n); n ≥ 0} is a delayed renewal process, and from Lemma 3.2, lim_{n→∞} N_{ij}(n) = ∞ with probability 1. Then M_{ij}(t) = N_{ij}(M_i(t)), where M_i(t) is the total number of state transitions (between all states) in the interval (0, t]. Thus, with probability 1,

    lim_{t→∞} M_{ij}(t) = lim_{t→∞} N_{ij}(M_i(t)) = lim_{n→∞} N_{ij}(n) = ∞,

where we have used Lemma 6.1, which asserts that lim_{t→∞} M_i(t) = ∞ with probability 1.
It follows that the time W_1 at which the first transition into state j occurs, and the subsequent interval W_2 until the next transition into state j, are both finite with probability 1. Subsequent intervals have the same distribution as W_2, and all intervals are independent, so {M_{ij}(t); t ≥ 0} is a delayed renewal process with inter-renewal intervals {W_k; k ≥ 1}. If i = j, then all W_k are identically distributed and we have an ordinary renewal process, completing the proof.
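The pivotal identity in the proof, M_ij(t) = N_ij(M_i(t)), holds sample path by sample path and is easy to check in simulation. The sketch below uses a hypothetical two-state process (P and ν are illustrative choices, not from the text), generates one path, and verifies the identity at a few times t.

```python
import random

# Hypothetical example process (illustration only): embedded matrix P, rates nu.
P = [[0.5, 0.5],
     [0.5, 0.5]]
nu = [1.0, 2.0]

def simulate_path(i, num_transitions, rng):
    """Return transition epochs S_1..S_n and embedded states X_1..X_n, X_0 = i."""
    epochs, states = [], []
    t, state = 0.0, i
    for _ in range(num_transitions):
        t += rng.expovariate(nu[state])   # exponential holding time in `state`
        state = 0 if rng.random() < P[state][0] else 1
        epochs.append(t)
        states.append(state)
    return epochs, states

rng = random.Random(7)
epochs, states = simulate_path(0, 1000, rng)
j = 1

def M_i(t):        # total transitions in (0, t]
    return sum(s <= t for s in epochs)

def N_ij(m):       # entries into j among the first m embedded transitions
    return sum(x == j for x in states[:m])

def M_ij(t):       # transitions into j in (0, t]
    return sum(s <= t and x == j for s, x in zip(epochs, states))

for t in (10.0, 50.0, 100.0):
    print(t, M_ij(t), N_ij(M_i(t)))   # the two counts coincide for every t
```

The identity is exact on every sample path because the epochs are sorted: the transitions into j by time t are exactly the entries to j among the first M_i(t) embedded transitions.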
The inter-renewal intervals W_2, W_3, . . . for {M_{ij}(t); t ≥ 0} above are well-defined non-negative iid rv's whose distribution depends on j but not i. They either have a finite expectation or can be regarded as having an infinite expectation. In either case, this expectation is denoted W̄(j) = E[W(j)]. This is the mean time between successive entries to state j, and we will see later that in some cases this mean time can be infinite.
In order to study the fraction of time spent in state j, we define a delayed renewal-reward process, based on {M_{ij}(t); t ≥ 0}, for which unit reward is accumulated whenever the process is in state j (see Figure 6.4). If transition n − 1 of the embedded chain enters state j, then the interval U_n until the nth transition is exponential with rate ν_j, so E[U_n | X_{n−1} = j] = 1/ν_j.
Define p_j as the limiting time-average fraction of time spent in state j (if such a limit exists). Then, since Ū(j) = 1/ν_j, Theorems 3.6 and 3.12, for ordinary and delayed renewal-reward processes respectively, state that
    p_j = lim_{t→∞} (1/t) ∫_0^t R_j(τ) dτ = Ū(j)/W̄(j) = 1/(ν_j W̄(j))    with probability 1.    (6.11)

We can also investigate the limit, as t → ∞, of the probability that X(t) = j. This is equal to lim_{t→∞} E[R_j(t)] for the renewal-reward process above. Because of the exponential holding

[Figure 6.4: The delayed renewal-reward process {R_j(t); t ≥ 0} for time spent in state j.]
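The relation p_j = Ū(j)/W̄(j) = 1/(ν_j W̄(j)) in (6.11) can be checked numerically. The sketch below simulates a hypothetical two-state process (P and ν are illustrative choices, not from the text), estimates p_j as the fraction of time spent in state j over a long run, estimates W̄(j) as the average interval between successive entries to j, and compares the two sides.

```python
import random

# Hypothetical example process (illustration only): embedded matrix P, rates nu.
P = [[0.5, 0.5],
     [0.5, 0.5]]
nu = [1.0, 2.0]
j = 1

rng = random.Random(3)
t, state = 0.0, 0
time_in_j = 0.0
entry_times = []
for _ in range(200_000):
    hold = rng.expovariate(nu[state])  # exponential holding time in `state`
    if state == j:
        time_in_j += hold              # accumulate unit reward while in j
    t += hold
    state = 0 if rng.random() < P[state][0] else 1
    if state == j:
        entry_times.append(t)          # epoch of an entry into state j

p_j = time_in_j / t                    # time-average fraction of time in j
W_bar = (entry_times[-1] - entry_times[0]) / (len(entry_times) - 1)
print(f"p_j ~ {p_j:.4f}, 1/(nu_j W_bar) ~ {1 / (nu[j] * W_bar):.4f}")
```

For this symmetric embedded chain the exact values work out to W̄(1) = 1.5 and p_1 = 1/3, which the estimates reproduce to within sampling noise.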
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.