{W > jδ}, and f(j) = F(j) − F(j + 1). In this case, δ ∑_{i≥1} F(i) lies between E[W] − δ and E[W]. As δ → 0, ρ = λE[W], and the distribution of time in the system becomes identical to that of the M/M/1 system.

5.7 Semi-Markov processes

Semi-Markov processes are generalizations of Markov chains in which the time intervals between transitions are random. To be specific, let X(t) be the state of the process at time t
and let {0, 1, 2, . . . } denote the set of possible states (which can be ﬁnite or countably inﬁnite). Let the random variables S1 < S2 < S3 < . . . denote the successive epochs at which
state transitions occur. Let Xn be the new state entered at time Sn (i.e., Xn = X(Sn), and
X(t) = Xn for Sn ≤ t < Sn+1). Let S0 = 0 and let X0 denote the starting state at time 0
(i.e., X0 = X(0) = X(S0)). As part of the definition of a semi-Markov process, the sequence
{Xn ; n ≥ 0} is required to be a Markov chain, and the transition probabilities of that chain
are denoted {Pij , i ≥ 0, j ≥ 0}. This Markov chain is called the embedded Markov chain of
the semi-Markov process. Thus, for n ≥ 1,

Pr{Xn = j | Xn−1 = i} = Pr{X(Sn) = j | X(Sn−1) = i} = Pij.    (5.58)

Conditional on X(Sn−1), the state entered at Sn is independent of X(t) for all t < Sn−1.
As the other part of the definition of a semi-Markov process, the intervals Un = Sn − Sn−1
between successive transitions for n ≥ 1 are random variables that depend only on the states
X (Sn−1 ) and X (Sn ). More precisely, given Xn−1 and Xn , the interval Un is independent
of the set of Um for m < n and independent of X (t) for all t < Sn−1 . The conditional
distribution function for the intervals Un is denoted by Gij (u), i.e.,
Pr{Un ≤ u | Xn−1 = i, Xn = j} = Gij(u).    (5.59)

The conditional mean of Un, conditional on Xn−1 = i, Xn = j, is denoted U(i, j), i.e.,

U(i, j) = E[Un | Xn−1 = i, Xn = j] = ∫_{u≥0} [1 − Gij(u)] du.    (5.60)

We can visualize a semi-Markov process evolving as follows: given an initial state, X0 = i
at time 0, a new state X1 = j is selected according to the embedded chain with probability
Pij . Then U1 = S1 is selected using the distribution Gij (u). Next a new state X2 = k is
chosen according to the probability Pj k ; then, given X1 = j and X2 = k, the interval U2 is
selected with distribution function Gj k (u). Successive state transitions and transition times
are chosen in the same way. Because of this evolution from X0 = i, we see that U1 = S1 is a random variable, so S1 is finite with probability 1. Also U2 is a random variable, so
that S2 = S1 + U2 is a random variable and thus is ﬁnite with probability 1. By induction,
Sn is a random variable and thus is ﬁnite with probability 1 for all n ≥ 1. This proves the
following simple lemma.
Lemma 5.5. Let M(t) be the number of transitions in a semi-Markov process in the interval
(0, t] (i.e., S_{M(t)} ≤ t < S_{M(t)+1}) for some given initial state X0. Then lim_{t→∞} M(t) = ∞
with probability 1.
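The generative description above (choose Xn from the embedded chain, then draw Un from the distribution attached to the pair (Xn−1, Xn)) can be sketched as a short simulation. The two-state chain and the exponential holding-time distributions below are hypothetical choices for illustration, not taken from the text:

```python
import random

# Hypothetical embedded chain: P[i][j] = Pij (each row sums to 1).
P = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}

# Hypothetical holding-time samplers: G[(i, j)] draws Un given
# Xn-1 = i and Xn = j (exponential, with a long mean for 0 -> 1).
G = {(0, 0): lambda: random.expovariate(1.0),
     (0, 1): lambda: random.expovariate(0.1),   # mean 10
     (1, 0): lambda: random.expovariate(1.0),
     (1, 1): lambda: random.expovariate(1.0)}

def simulate(x0, n_steps):
    """Return epochs S1..Sn and states X1..Xn of one semi-Markov path."""
    x, s = x0, 0.0
    epochs, states = [], []
    for _ in range(n_steps):
        # Select the next state Xn according to the embedded Markov chain.
        r, x_next = random.random(), x0
        for j, pij in P[x].items():
            r -= pij
            if r <= 0:
                x_next = j
                break
        # Given (Xn-1, Xn), draw the interval Un and set Sn = Sn-1 + Un.
        s += G[(x, x_next)]()
        epochs.append(s)
        states.append(x_next)
        x = x_next
    return epochs, states

epochs, states = simulate(x0=0, n_steps=10)
```

Note that each Sn is a finite sum of finite random variables, which is exactly the induction in the lemma.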
Figure 5.8 shows an example of a semi-Markov process in which the transition times are
deterministic but depend on the transitions. The important point that this example brings
out is that the embedded Markov chain has steady-state probabilities that are each 1/2. On
the other hand, the semi-Markov process spends most of its time making long transitions
from state 0 to state 1, and during these transitions the process is in state 0. This means
that one of our first objectives must be to understand what steady-state probabilities mean
for a semi-Markov process.
[Figure 5.8 here: a two-state transition diagram with each arc labeled by Pij followed by U(i, j) — the self-loop at 0 labeled 1/2; 1, the arc 0→1 labeled 1/2; 10, the arc 1→0 labeled 1/2; 1, and the self-loop at 1 labeled 1/2; 1 — above a sample path with epochs marked 0→0, 0→1, 1→0, 0→0.]

Figure 5.8: Example of a semi-Markov process with deterministic transition epochs. The label on each arc (i, j) in the graph gives Pij followed by U(i, j). The solid dots on the sample function below the graph show the state transition epochs and show the new states entered. Note that the state at Sn is the new state entered, i.e., Xn, and the state remains Xn in the interval [Sn, Sn+1).

In what follows, we assume that the embedded Markov chain is irreducible and positive-recurrent. Define U(i) as the expected time in state i before a transition, i.e.,
U(i) = E[Un | Xn−1 = i] = ∑_j Pij E[Un | Xn−1 = i, Xn = j] = ∑_j Pij U(i, j).    (5.61)

The steady-state probabilities {πi} for the embedded chain tell us the fraction of transitions
that enter any given state i. Since U (i) is the expected holding time in i per transition
into i, we would guess that the fraction of time spent in state i should be proportional to
πi U(i). Normalizing, we would guess that the time-average probability of being in state i
should be

pi = πi U(i) / ∑_j πj U(j).    (5.62)

By now, it should be no surprise that renewal theory is the appropriate tool to make this
precise. We continue to assume an irreducible, positive-recurrent embedded chain.
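As a concrete check of (5.61) and (5.62), here is the arithmetic for the Figure 5.8 example, reading the arc labels as Pij = 1/2 everywhere, U(0, 1) = 10, and U(0, 0) = U(1, 0) = U(1, 1) = 1 (my reading of the figure; treat the numbers as illustrative):

```python
# Embedded-chain steady-state probabilities and per-arc mean holding
# times for the (assumed) Figure 5.8 example.
pi_embedded = {0: 0.5, 1: 0.5}          # each state entered half the time
P = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.5}
U_ij = {(0, 0): 1.0, (0, 1): 10.0, (1, 0): 1.0, (1, 1): 1.0}

# (5.61): U(i) = sum_j Pij * U(i, j)
U = {i: sum(P[(i, j)] * U_ij[(i, j)] for j in (0, 1)) for i in (0, 1)}

# (5.62): p_i = pi_i U(i) / sum_j pi_j U(j)
norm = sum(pi_embedded[j] * U[j] for j in (0, 1))
p = {i: pi_embedded[i] * U[i] / norm for i in (0, 1)}

print(U)  # U(0) = 5.5, U(1) = 1.0
print(p)  # p0 = 11/13, about 0.846
```

This matches the point of the example: the embedded chain weights both states equally, yet the process spends roughly 85% of its time in state 0 because of the long 0→1 transitions.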
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.