…r the backward Markov chain, and interpret it as a model for residual life.
Exercise 5.12. Consider the sample-time approximation to the M/M/1 queue in figure 5.
a) Give the steady-state probabilities for this chain (no explanations or calculations required – just the answer).
In parts b) to g) do not use reversibility and do not use Burke's theorem. Let Xn be the state of the system at time nδ and let Dn be a random variable taking on the value 1 if a departure occurs between nδ and (n + 1)δ, and the value 0 if no departure occurs. Assume that the system is in steady-state at time nδ.
b) Find Pr {Xn = i, Dn = j } for i ≥ 0, j = 0, 1
c) Find Pr {Dn = 1}
d) Find Pr {Xn = i | Dn = 1} for i ≥ 0.
e) Find Pr {Xn+1 = i | Dn = 1} and show that Xn+1 is statistically independent of Dn. Hint: Use part d); also show that Pr {Xn+1 = i} = Pr {Xn+1 = i | Dn = 1} for all i ≥ 0 is sufficient to show independence.
f) Find Pr {Xn+1 = i, Dn+1 = j | Dn} and show that the pair of variables (Xn+1, Dn+1) is statistically independent of Dn.
g) For each k > 1, find Pr {Xn+k = i, Dn+k = j | Dn+k−1, Dn+k−2, . . . , Dn} and show that the pair (Xn+k, Dn+k) is statistically independent of (Dn+k−1, Dn+k−2, . . . , Dn). Hint: use induction on k; as a substep, find Pr {Xn+k = i | Dn+k−1 = 1, Dn+k−2, . . . , Dn} and show that Xn+k is independent of Dn+k−1, Dn+k−2, . . . , Dn.
h) What do your results mean relative to Burke's theorem?
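As a numerical sanity check on parts b), c), and e), one can compute the relevant probabilities almost exactly from the geometric steady-state distribution of the sampled-time chain (truncating the state space and bounding the tail). The sketch below assumes the usual sampled-time model — at most one arrival or departure per increment δ, departure probability μδ whenever a customer is present — and the parameter values are illustrative assumptions, not from the text.

```python
from fractions import Fraction as F

# Assumed illustrative parameters: per-step arrival probability lam_d = lambda*delta,
# per-step departure probability mu_d = mu*delta, rho = lambda/mu.
# Steady-state probabilities of the sampled-time chain: p_i = (1 - rho) * rho**i.
lam_d, mu_d = F(1, 10), F(1, 5)
rho = lam_d / mu_d                       # = 1/2
N = 60                                   # truncation level; tail mass ~ rho**N

p = [(1 - rho) * rho**i for i in range(N)]

# b) Pr{Xn = i, Dn = 1} = p_i * mu_d for i >= 1 (a departure needs a customer);
#    it is 0 for i = 0.
joint_d1 = [F(0)] + [p[i] * mu_d for i in range(1, N)]

# c) Pr{Dn = 1} = sum_i Pr{Xn = i, Dn = 1}; analytically this equals lambda*delta.
pr_d1 = sum(joint_d1)

# e) A departure moves i+1 -> i, so Pr{Xn+1 = i | Dn = 1}
#    = Pr{Xn = i+1, Dn = 1} / Pr{Dn = 1} -- which should simplify back to p_i.
cond_next = [joint_d1[i + 1] / pr_d1 for i in range(N - 1)]

tail = rho**(N - 1)                      # bound on the truncation error
assert abs(pr_d1 - lam_d) < tail         # part c): Pr{Dn = 1} = lambda*delta
for i in range(N - 1):
    assert abs(cond_next[i] - p[i]) < tail   # part e): Xn+1 independent of Dn
print("Pr{Dn=1} =", pr_d1, "vs lambda*delta =", lam_d)
```

Using exact rationals (`fractions.Fraction`) keeps the only error source the geometric tail beyond the truncation level, which the asserts bound explicitly.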
Exercise 5.13. Let {Xn, n ≥ 1} denote an irreducible recurrent Markov chain having a countable state space. Now consider a new stochastic process {Yn, n ≥ 1} that only accepts values of the Markov chain that are between 0 and some integer m. For instance, if m = 3 and X1 = 1, X2 = 3, X3 = 5, X4 = 6, X5 = 2, then Y1 = 1, Y2 = 3, Y3 = 2.
a) Is {Yn, n ≥ 1} a Markov chain? Explain briefly.
b) Let pj denote the proportion of time that {Xn, n ≥ 1} is in state j. If pj > 0 for all j, what proportion of time is {Yn, n ≥ 1} in each of the states 0, 1, . . . , m?
c) Suppose {Xn} is null-recurrent and let pi(m), i = 0, 1, . . . , m, denote the long-run proportions for {Yn, n ≥ 1}. Show that pj(m) = pi(m) E[time the X process spends in j between returns to i], for j ≠ i.
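Part b) can be sanity-checked by simulation. The chain below is an assumed stand-in (not from the text): a birth-death walk on {0, . . . , 5} with known stationary probabilities proportional to (2/3)^i, so the predicted Y-proportions are (2/3)^j normalized over j = 0, . . . , m.

```python
import random

random.seed(1)

# Assumed example chain: up w.p. 0.4, down w.p. 0.6, holding at the boundaries.
# Detailed balance gives p_i proportional to (0.4/0.6)**i = (2/3)**i.
def step(i):
    if random.random() < 0.4:
        return min(i + 1, 5)
    return max(i - 1, 0)

m = 2
n_steps = 200_000
x = 0
y_counts = [0] * (m + 1)
for _ in range(n_steps):
    x = step(x)
    if x <= m:                    # {Yn} records X only when it lies in {0,...,m}
        y_counts[x] += 1

# Part b) predicts: proportion of Y-time in j is p_j / (p_0 + ... + p_m).
weights = [(2 / 3) ** j for j in range(m + 1)]
predicted = [w / sum(weights) for w in weights]       # [9/19, 6/19, 4/19]
observed = [c / sum(y_counts) for c in y_counts]
for j in range(m + 1):
    assert abs(observed[j] - predicted[j]) < 0.02
print(observed, predicted)
```

With 200,000 steps the sampling error is on the order of 0.001, so the 0.02 tolerance is comfortable; the point is that the censored process spends its time in {0, . . . , m} in exactly the conditional proportions of the original chain.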
Exercise 5.14. Verify that (5.49) is satisfied by the hypothesized solution to p in (5.53). Also show that the equations involving the idle state φ are satisfied.
Exercise 5.15. Replace the state m = (m, z1, . . . , zm) in Section 5.6 with an expanded state m = (m, z1, w1, z2, w2, . . . , zm, wm) where m and {zi; 1 ≤ i ≤ m} are as before and w1, w2, . . . , wm are the original service requirements of the m customers.
a) Hypothesizing the same backward round-robin system as hypothesized in Section 5.6, find the backward transition probabilities and give the corresponding equations to (5.47)–(5.50) for the expanded state description.
b) Solve the resulting equations to show that
πm = πφ ( λδ / (1 − λδ) )^m ∏_{j=1}^{m} f(wj).
c) Show that the probability that there are m customers in the system, and that those customers have original service requirements given by w1, . . . , wm, is

Pr {m, w1, . . . , wm} = πφ ( λδ / (1 − λδ) )^m ∏_{j=1}^{m} (wj − 1) f(wj).

d) Given that a customer has original service requirement w, find the expected time that customer spends in the system.
Exercise 5.16. A taxi alternates between three locations. When it reaches location 1 it
is equally likely to go next to either 2 or 3. When it reaches 2 it will next go to 1 with
probability 1/3 and to 3 with probability 2/3. From 3 it always goes to 1. The mean travel times between locations are t12 = 20, t13 = 30, t23 = 30 (assume tij = tji).
a) What is the (limiting) probability that the taxi's most recent stop was at location i, i = 1, 2, 3?
b) What is the (limiting) probability that the taxi is heading for location 2?
c) What fraction of time is the taxi traveling from 2 to 3? Note: upon arrival at a location the taxi immediately departs.
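One way to check answers to this exercise numerically is the standard semi-Markov recipe: find the stationary distribution of the embedded location chain, then weight each location by the mean travel time out of it (renewal-reward). The sketch below is not from the text; it simply encodes the transition probabilities and travel times stated above.

```python
# Embedded chain of locations and mean travel times from the exercise.
P = [[0, 1/2, 1/2],          # from 1: equally likely to go to 2 or 3
     [1/3, 0, 2/3],          # from 2: to 1 w.p. 1/3, to 3 w.p. 2/3
     [1, 0, 0]]              # from 3: always to 1
t = [[0, 20, 30],
     [20, 0, 30],
     [30, 30, 0]]            # mean travel times, t_ij = t_ji

# Stationary distribution of the embedded chain by power iteration (pi <- pi P).
pi = [1/3, 1/3, 1/3]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Mean time spent travelling out of location i: tau_i = sum_j P_ij * t_ij.
tau = [sum(P[i][j] * t[i][j] for j in range(3)) for i in range(3)]

# a) Fraction of time the most recent stop was i: pi_i * tau_i, normalized.
w = [pi[i] * tau[i] for i in range(3)]
p_recent = [x / sum(w) for x in w]

# b) Fraction of time heading for location 2: only the leg 1 -> 2 contributes.
p_to_2 = pi[0] * P[0][1] * t[0][1] / sum(w)

# c) Fraction of time travelling from 2 to 3.
p_23 = pi[1] * P[1][2] * t[1][2] / sum(w)
print(pi, p_recent, p_to_2, p_23)
```

The chain is aperiodic (cycles of lengths 2 and 3 through location 1), so the power iteration converges geometrically and 200 iterations reach machine precision.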
Exercise 5.17. Consider an M/G/1 queueing system with Poisson arrivals of rate λ and expected service time E[X]. Let ρ = λE[X] and assume ρ < 1. Consider a semi-Markov process model of the M/G/1 queueing system in which transitions occur on departures from the queueing system and the state is the number of customers immediately following a departure.
a) Suppose a colleague has calculated the steady-state probabilities {pi} of being in state i for each i ≥ 0. For each i ≥ 0, find the steady-state probability πi of state i in the embedded Markov chain. Give your solution as a function of ρ, pi, and p0.
b) Calculate p0 as a function of ρ.
c) Find πi as a function of ρ and pi.
d) Is pi the same as the steady-state probability that the queueing system contains i customers at a given time? Explain carefully.
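The machinery behind parts a) to c) is the semi-Markov time-fraction relation pi = πi U(i) / Σj πj U(j), where U(i) is the expected time spent in state i per visit; for this embedded chain U(0) = 1/λ + E[X] (an idle period plus one service) and U(i) = E[X] for i ≥ 1. Below is a small sketch (hypothetical helper names, toy numbers — not the exercise's answer) showing that the maps in a) and c) invert each other:

```python
from fractions import Fraction as F

# p_i = pi_i * U_i / sum_j pi_j * U_j  (embedded-chain probs -> time fractions)
def process_probs(pi, U):
    norm = sum(q * u for q, u in zip(pi, U))
    return [q * u / norm for q, u in zip(pi, U)]

# Inverse map: pi_i proportional to p_i / U_i  (time fractions -> embedded probs)
def embedded_probs(p, U):
    w = [q / u for q, u in zip(p, U)]
    return [x / sum(w) for x in w]

# Toy numbers (assumptions, truncated state space): rho = lam * EX = 1/2.
lam, EX = F(1, 2), F(1)
U = [1 / lam + EX] + [EX] * 4                     # U(0) = 1/lam + EX; U(i) = EX
pi = [F(3, 8), F(1, 4), F(1, 4), F(1, 16), F(1, 16)]  # toy embedded-chain probs

p = process_probs(pi, U)
assert sum(p) == 1
assert embedded_probs(p, U) == pi   # a) and c) are inverse transformations
```

Exact rationals make the round trip an identity rather than an approximation; the conceptual content is that only the i = 0 state has a different expected holding time, which is why p0 enters the conversion.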
Exercise 5.18. Consider an M/G/1 queue in which the arrival rate is λ and the service time distribution is uniform (0, 2W) with λW < 1. Defin...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at University of Illinois, Urbana-Champaign.