π_s P_{s,r(s)} = π_{r(s)} P*_{r(s),s};   (1 − λδ)[1 − g(z_1)]π_s = (1 − λδ)π_{r(s)}   (5.47)

π_s P_{s,d(s)} = π_{d(s)} P*_{d(s),s};   (1 − λδ)g(z_1)π_s = λδ f(z_1 + 1)π_{d(s)}   (5.48)

π_s P_{s,a(s)} = π_{a(s)} P*_{a(s),s};   λδ[1 − f(1)]π_s = (1 − λδ)π_{a(s)}   (5.49)

π_s P_{s,s} = π_s P*_{s,s};   λδ f(1)π_s = λδ f(1)π_s.   (5.50)

We next show that (5.48), applied repeatedly, will allow us to solve for π_s (if λ is small
enough for the states to be positive recurrent). Verifying that the solution also satisfies (5.47) and (5.49) will then verify the hypothesis. Since f(z_1 + 1)/g(z_1) is F(z_1) from (5.43), we have
π_s = [λδ/(1 − λδ)] F(z_1) π_{d(s)}.   (5.51)

For m > 1, d(s) = (m − 1, z_2, . . . , z_m), so we can apply (5.51) to π_{d(s)}, and substitute the result back into (5.51), yielding

π_s = [λδ/(1 − λδ)]^2 F(z_1)F(z_2) π_{d(d(s))},   (5.52)

where d(d(s)) = (m − 2, z_3, . . . , z_m). Applying (5.51) repeatedly to π_{d(d(s))}, π_{d(d(d(s)))}, and so forth, we eventually get

π_s = [λδ/(1 − λδ)]^m [∏_{j=1}^{m} F(z_j)] π_φ.   (5.53)

Before this can be accepted as a steady-state distribution, we must verify that it satisfies
(5.47) and (5.49). The left hand side of (5.47) is (1 − λδ)[1 − g(z_1)]π_s, and, from (5.43), 1 − g(z_1) = [F(z_1) − f(z_1 + 1)]/F(z_1) = F(z_1 + 1)/F(z_1). Thus, using (5.53), the left side of (5.47) is

(1 − λδ) [λδ/(1 − λδ)]^m [F(z_1 + 1)/F(z_1)] ∏_{j=1}^{m} F(z_j) π_φ = (1 − λδ) [λδ/(1 − λδ)]^m [∏_{j=2}^{m} F(z_j)] F(z_1 + 1) π_φ.

This is equal to (1 − λδ)π_{r(s)}, verifying (5.47). Equation (5.49) is verified in the same way.
We now have to find whether there is a solution for π_φ such that these probabilities sum to 1. First define P_m = Σ_{z_1, ..., z_m} π(m, z_1, . . . , z_m). This is the probability of m customers in the system. Whenever a new customer enters the system, it receives one increment of service immediately, so each z_i ≥ 1. Using the hypothesized solution in (5.53),

P_m = [λδ/(1 − λδ)]^m ∏_{j=1}^{m} [Σ_{i=1}^{∞} F(i)] π_φ.   (5.54)

Since F(i) = Pr{W > iδ}, since W is arithmetic with span δ, and since the mean of a
nonnegative random variable is the integral of its complementary distribution function, we
have
δ Σ_{i=1}^{∞} F(i) = E[W] − δ,   (5.55)

so that

P_m = [λ/(1 − λδ)]^m {E[W] − δ}^m π_φ.   (5.56)

Defining ρ = [λ/(1 − λδ)]{E[W] − δ}, we see that P_m = ρ^m π_φ. If ρ < 1, then π_φ = 1 − ρ, and
P_m = (1 − ρ)ρ^m;   m ≥ 0.   (5.57)

The condition ρ < 1 is required for the states to be positive recurrent. The expected
number of customers in the system for a round-robin queue is Σ_m mP_m = ρ/(1 − ρ), and
using Little’s theorem, Theorem 3.8, the expected delay is ρ/[λ(1 − ρ)]. In using Little’s
theorem here, however, we are viewing the time a customer spends in the system as starting
when the number m in the state increases; that is, if a customer arrives at time nδ , it goes
to the front of the queue and receives one increment of service, and then, assuming it needs
more than one increment, the number m in the state increases at time (n + 1)δ . Thus the
actual expected delay, including the original δ when the customer is being served but not
counted in the state, is δ + ρ/[λ(1 − ρ)].
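To make these formulas concrete, the short sketch below evaluates (5.57) and the two delay expressions for one arbitrarily chosen set of parameters (λ = 0.3, δ = 0.1, E[W] = 2 are assumptions, not values from the text), and checks Σ_m mP_m = ρ/(1 − ρ) numerically.

```python
# Illustrative computation of the round-robin occupancy distribution (5.57)
# and the expected-delay formula above.  The parameter values are arbitrary
# assumptions chosen so that ρ < 1.
from math import isclose

lam, delta, EW = 0.3, 0.1, 2.0                    # λ, δ, E[W], with λE[W] < 1
rho = (lam / (1 - lam * delta)) * (EW - delta)    # ρ = [λ/(1-λδ)]{E[W]-δ}
assert rho < 1                                    # positive recurrence condition

P = [(1 - rho) * rho ** m for m in range(2000)]   # P_m = (1-ρ)ρ^m, truncated sum
assert isclose(sum(P), 1.0)                       # distribution sums to 1
expected_m = sum(m * Pm for m, Pm in enumerate(P))
assert isclose(expected_m, rho / (1 - rho), rel_tol=1e-9)   # Σ m P_m = ρ/(1-ρ)

expected_delay = delta + rho / (lam * (1 - rho))  # includes the initial increment δ
print(f"rho = {rho:.4f}, E[m] = {expected_m:.4f}, delay = {expected_delay:.4f}")
```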
The relation between ρ and λE[W] is shown in Figure 5.7, and it is seen that ρ < 1 for
λE[W] < 1. The extreme case where λδ = λE[W] is the case for which each customer
requires exactly one unit of service. Since at most one customer can arrive per time increment, the state always remains at s = φ, and the delay is δ , i.e., the original increment of
service received when a customer arrives.
Figure 5.7: ρ as a function of λE[W] for given λδ. (The plot shows ρ increasing linearly in λE[W], equal to 0 at λE[W] = λδ and reaching 1 at λE[W] = 1.)
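The claim that ρ < 1 exactly when λE[W] < 1 follows algebraically from ρ = [λ/(1 − λδ)]{E[W] − δ}, since ρ < 1 is equivalent to λE[W] − λδ < 1 − λδ. A quick numerical spot-check (the grid of parameter values is an arbitrary assumption):

```python
# Numerical check that ρ < 1 exactly when λE[W] < 1, using the definition
# ρ = [λ/(1 - λδ)]{E[W] - δ} from the text.  δ is held fixed at 0.05;
# the grid values are arbitrary, chosen to satisfy λδ < 1 and E[W] >= δ
# and to avoid the boundary λE[W] = 1.
delta = 0.05
for lam in [0.1, 0.5, 0.9, 3.0]:
    for EW in [0.1, 0.3, 0.9, 1.2, 4.0]:
        rho = (lam / (1 - lam * delta)) * (EW - delta)
        assert (rho < 1) == (lam * EW < 1)
print("rho < 1 exactly when lam * E[W] < 1 on the test grid")
```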
Note that (5.57) is the same as the distribution of customers in the system for the M/M/1
Markov chain in (5.42), except for the anomaly in the deﬁnition of ρ here. We then have
the surprising result that if round-robin queueing is used rather than FCFS, then the distribution of the number of customers in the system is approximately the same as that for
an M/M/1 queue. In other words, the slow truck eﬀect associated with the M/G/1 queue
has been eliminated.
Another remarkable feature of round-robin systems is that one can also calculate the expected delay for a customer conditional on the required service of that customer. This is done in Exercise 5.15, and it is found that the expected delay is linear in the required
service.
Next we look at processor sharing by going to the limit as δ → 0. We ﬁrst eliminate the
assumption that the service requirement distribution is arithmetic with span δ . Assume
that the server always spends an increment of time δ on the customer at the front of the
queue, and if service is ﬁnished before the interval of length δ ends, the server is idle until
the next sample time. The analysis of the steadystate distribution above P still valid if
is
1
we deﬁne F (j ) = Pr...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at University of Illinois, Urbana-Champaign.