j such that Pkj > 0. It is not hard to see that if the distribution of inter-renewal intervals for one value of j is arithmetic with span d, then the distribution of inter-renewal intervals for each i is arithmetic with the same span (see exercise 5.19).

For the example of Figure 5.8, we see by inspection that U(1) = 5.5 and U(2) = 1. Thus p1 = 11/13 and p2 = 2/13.
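These time-average probabilities weight each embedded-chain steady-state probability πi by the mean holding time U(i). Since Figure 5.8 itself is not reproduced in this excerpt, the sketch below simply assumes π1 = π2 = 1/2 (the choice consistent with p1 = 11/13) and redoes the arithmetic exactly:

```python
from fractions import Fraction

# Time-average state probabilities of a semi-Markov process:
# p_i is proportional to pi_i * U(i), where pi_i is the embedded-chain
# steady-state probability and U(i) is the mean holding time in state i.
# Assumption (Figure 5.8 is not shown here): pi_1 = pi_2 = 1/2.
pi = {1: Fraction(1, 2), 2: Fraction(1, 2)}
U = {1: Fraction(11, 2), 2: Fraction(1)}   # U(1) = 5.5, U(2) = 1

total = sum(pi[i] * U[i] for i in pi)
p = {i: pi[i] * U[i] / total for i in pi}

print(p[1], p[2])  # 11/13 2/13
```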
For a semi-Markov process, knowing the probability that X(t) = j for large t does not completely specify the steady-state behavior. Another important steady-state question is to determine the fraction of time involved in i-to-j transitions. To make this notion precise, define Y(t) as the residual time until the next transition after time t (i.e., t + Y(t) is the epoch of the next transition after time t). We want to determine the fraction of time t over which X(t) = i and X(t + Y(t)) = j. Equivalently, for a non-arithmetic process, we want to determine Pr{X(t) = i, X(t + Y(t)) = j} in the limit as t → ∞. Call this limit Q(i, j).
Consider a renewal process, starting in state i and with renewals on transitions to state i. Define a reward R(t) = 1 for X(t) = i, X(t + Y(t)) = j and R(t) = 0 otherwise (see Figure 5.10). That is, for each n such that X(Sn) = i and X(Sn+1) = j, R(t) = 1 for Sn ≤ t < Sn+1. The expected reward in an inter-renewal interval is then Pij U(i, j). It follows that Q(i, j) is given by
$$
Q(i,j) \;=\; \lim_{t\to\infty} \frac{\int_0^t R(\tau)\,d\tau}{t}
\;=\; \frac{P_{ij}\, U(i,j)}{W(i)}
\;=\; \frac{p_i\, P_{ij}\, U(i,j)}{U(i)}.
\tag{5.69}
$$
[Figure 5.10: time-axis diagram omitted. The recoverable labels show successive transition epochs with states Xn = i, Xn+1 = j, Xn+2 ≠ j, Xn+3 = i, Xn+4 ≠ j, Xn+5 = i, Xn+6 = j, with the intervals Un and Wk marked.]

Figure 5.10: The renewal-reward process for i to j transitions. The expected value of Un if Xn = i and Xn+1 = j is U(i, j) and the expected interval between entries to i is
W(i).

5.8 Example — the M/G/1 queue

As one example of a semi-Markov chain, consider an M/G/1 queue. Rather than the usual
interpretation in which the state of the system is the number of customers in the system,
we view the state of the system as changing only at departure times; the new state at a
departure time is the number of customers left behind by the departure. This state then
remains fixed until the next departure. New customers still enter the system according to
the Poisson arrival process, but these new customers are not considered as part of the state
until the next departure time. The number of customers in the system at arrival epochs
does not in general constitute a “state” for the system, since the age of the current service
is also necessary as part of the statistical characterization of the process.
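The departure-time state description above can be checked with a short simulation. The sketch below uses hypothetical parameters (arrival rate λ = 1 and uniform service on [0, 1.2], so ρ = 0.6 < 1); it verifies only the structural fact that each departure removes exactly one customer, so the state can never drop by more than 1 per transition:

```python
import bisect
import random

# A short simulation of the departure-time state description, with
# hypothetical parameters. For a FIFO single server, customer n departs at
# D_n = max(A_n, D_{n-1}) + S_n, and the semi-Markov state after that
# departure is the number of customers it leaves behind.
random.seed(1)
lam, n_customers = 1.0, 10_000

arrivals, t = [], 0.0
for _ in range(n_customers):
    t += random.expovariate(lam)                     # Poisson arrival epochs
    arrivals.append(t)

departures, prev = [], 0.0
for a in arrivals:
    prev = max(a, prev) + random.uniform(0.0, 1.2)   # service time drawn from "G"
    departures.append(prev)

# Customers left behind by departure n: arrivals by D_n minus the n+1 departures.
states = [bisect.bisect_right(arrivals, d) - (n + 1)
          for n, d in enumerate(departures)]

# Each departure removes exactly one customer, so the state drops by at
# most 1 per transition -- the j >= i-1 restriction appearing in (5.71)/(5.72).
print(min(states) >= 0 and
      all(b >= a - 1 for a, b in zip(states, states[1:])))  # True
```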
One purpose of this example is to illustrate that it is often more convenient to visualize
the transition interval Un = Sn − Sn−1 as being chosen first and the new state Xn as being chosen second, rather than choosing the state first and the transition time second. For the
M/G/1 queue, first suppose that the state is some i > 0. In this case, service begins on
the next customer immediately after the old customer departs. Thus, Un , conditional on
Xn = i for i > 0, has the distribution of the service time, say G(u). The mean interval until
a state transition occurs is
$$
U(i) = \int_0^\infty [1 - G(u)]\,du; \qquad i > 0.
\tag{5.70}
$$
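The tail integral in (5.70) is just the mean service time. A quick numeric check, under an assumed exponential service distribution with rate μ (so the integral should come out to 1/μ):

```python
import math

# Numeric check of (5.70): the integral of the service-time tail 1 - G(u)
# equals the mean service time. Assumed G (for illustration only):
# exponential with rate mu.
mu = 2.0

def G(u):
    return 1.0 - math.exp(-mu * u)

# Midpoint Riemann sum; the tail beyond u_max is negligible here.
du, u_max = 1e-4, 40.0
U_i = sum((1.0 - G((k + 0.5) * du)) * du for k in range(int(u_max / du)))

print(round(U_i, 6))   # ≈ 1/mu = 0.5
```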
Given the interval u for a transition from state i > 0, the number of arrivals in that period is a Poisson random variable with mean λu, where λ is the Poisson arrival rate. Since the next state j is the old state i, plus the number of new arrivals, minus the single departure,
$$
\Pr\{X_{n+1} = j \mid X_n = i,\; U_n = u\} = \frac{(\lambda u)^{j-i+1}\exp(-\lambda u)}{(j-i+1)!}
\tag{5.71}
$$

for j ≥ i − 1. For j < i − 1, the probability above is 0. The unconditional probability Pij
of a transition from i to j can then be found by multiplying the right side of (5.71) by the
probability density g (u) of the service time and integrating over u.
$$
P_{ij} = \int_0^\infty \frac{g(u)\,(\lambda u)^{j-i+1}\exp(-\lambda u)}{(j-i+1)!}\,du; \qquad j \ge i-1,\; i > 0.
\tag{5.72}
$$
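For one service distribution the integral in (5.72) can be checked against a closed form. With an assumed exponential service density g(u) = μ exp(−μu) (the M/M/1 special case, used here purely as an illustration), the integral evaluates to (μ/(λ+μ))(λ/(λ+μ))^(j−i+1):

```python
import math

# Sanity check of (5.72) for an assumed exponential service density
# g(u) = mu * exp(-mu * u): the integral then has the closed form
# P_ij = (mu/(lam+mu)) * (lam/(lam+mu))**(j-i+1).
lam, mu = 1.0, 3.0

def P(i, j, du=1e-3, u_max=30.0):
    """P_ij from (5.72) by a midpoint Riemann sum; valid for i > 0, j >= i-1."""
    k = j - i + 1
    total = 0.0
    for n in range(int(u_max / du)):
        u = (n + 0.5) * du
        g = mu * math.exp(-mu * u)                     # assumed service density
        total += g * (lam * u) ** k * math.exp(-lam * u) / math.factorial(k) * du
    return total

def closed(i, j):
    return (mu / (lam + mu)) * (lam / (lam + mu)) ** (j - i + 1)

print(abs(P(2, 4) - closed(2, 4)))          # tiny discretization error
print(sum(P(2, j) for j in range(1, 25)))   # a row of [Pij] sums to (nearly) 1
```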
For the case i = 0, the server must wait until the next arrival before starting service. Thus
the expected time from entering the empty state until a service completion is
$$
U(0) = \frac{1}{\lambda} + \int_0^\infty [1 - G(u)]\,du.
\tag{5.73}
$$
We can evaluate P0j by observing that the departure of that first arrival leaves j customers in the system iff j customers arrive during the service time of that first customer; i.e., the
new state doesn’t depend on how long the server waits for a new customer to serve, but
only on the arrivals while that customer is being served. Letting g (u) be the density of the
service time,
$$
P_{0j} = \int_0^\infty \frac{g(u)\,(\lambda u)^{j}\exp(-\lambda u)}{j!}\,du; \qquad j \ge 0.
\tag{5.74}
$$
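Note that (5.74) has the same form as (5.72) with i = 1, reflecting the point above: the state left behind does not depend on the server's idle wait. With the same assumed exponential service density (rate μ), the integral gives a geometric distribution, whose mean is the utilization ρ = λ/μ:

```python
# Illustration of (5.74) for an assumed exponential service density with
# rate mu: the integral evaluates to P_0j = (1 - r) * r**j with
# r = lam/(lam + mu), a geometric distribution on j = 0, 1, 2, ...
lam, mu = 1.0, 3.0
r = lam / (lam + mu)

P0 = [(1 - r) * r ** j for j in range(200)]   # truncated; the tail is ~r**200

total = sum(P0)
mean_left = sum(j * p for j, p in enumerate(P0))
print(round(total, 9), round(mean_left, 6))   # ≈ 1.0 and lam/mu = rho ≈ 0.333333
```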
5.9 Summary

This chapter extended the finite-state Markov chain results of Chapter 4 to the case of
countably-infinite state spaces. It also provided an excellent example of how renewal processes can be used for understanding other kinds of processes. In Section 5.1, the first-passage-time random variables were used to construct renewal processes with renewals on
successive transitio...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.