Problem Set 5, MS&E 221
Due: Wednesday, March 7 11:59PM
Question 5.1 (Visit to Clinic): Suppose that you have the flu and are waiting at a clinic to
see a doctor. There are k ≥ 2 doctors, all of whom are currently seeing a patient. Each doctor i takes a
random amount of time Ti per patient, for i = 1, . . . , k. Suppose that the Ti's are independent with
Ti ∼ Expo(λi) for some λi > 0, i = 1, . . . , k.
You need to provide your reasoning for the following questions (no points will be given otherwise).
(a) Let Ri be the remaining time for doctor i to finish treating their current patient. For each
i = 1, . . . , k, what is the distribution of Ri?
(b) Let W be the time you have to wait until you see a doctor. What is the distribution of W?
(c) What is the probability that doctor 1 finishes treating her current patient before doctor 2 does?
(d) What is the probability that doctor i is the first to finish treating her current patient?
(e) What is the expected total amount of time you spend at the clinic? (Assume that you leave
immediately after a doctor treats you.)
Answer: (a) By the memoryless property, we have Ri ∼ Expo(λi) for i = 1, . . . , k.
(b) We have W = min_{1≤i≤k} Ri. Since the Ri's are independent, we have from part (a) that

P(W ≥ t) = P(min_{1≤i≤k} Ri ≥ t) = P(Ri ≥ t for all 1 ≤ i ≤ k) = ∏_{i=1}^k P(Ri ≥ t) = ∏_{i=1}^k exp(−λi t) = exp(−(∑_{i=1}^k λi) t).

Hence, W ∼ Expo(∑_{i=1}^k λi).
(c) We wish to calculate P(R1 < R2). By conditioning on R1, we have

P(R1 < R2) = ∫_0^∞ P(R1 < R2 | R1 = t) λ1 e^{−λ1 t} dt = ∫_0^∞ P(t < R2) λ1 e^{−λ1 t} dt
= ∫_0^∞ e^{−λ2 t} λ1 e^{−λ1 t} dt = ∫_0^∞ λ1 e^{−(λ1+λ2) t} dt = λ1/(λ1 + λ2).
(d) We wish to calculate P(Ri ≤ S\i), where S\i := min_{j≠i} Rj. Noting that S\i ∼ Expo(∑_{j≠i} λj) by the same logic as in part (b), we conclude as in part (c) that

P(Ri ≤ S\i) = λi / ∑_{j=1}^k λj.

(e) First, we condition on the event Ri < min_{j≠i} Rj.
E[Time spent in clinic | Ri < min_{j≠i} Rj] = E[W | Ri < min_{j≠i} Rj] + E[Ti | Ri < min_{j≠i} Rj]
= E[Ri | Ri < min_{j≠i} Rj] + E[Ti]
= 1/(∑_{i=1}^k λi) + 1/λi,

where in the last equality we used the fact that Ri = W on the event {Ri < min_{j≠i} Rj} together with part (b) (for independent exponentials, the value of the minimum is independent of which Ri attains it, so E[W | Ri < min_{j≠i} Rj] = E[W] = 1/∑_{i=1}^k λi). Hence, we have

E[Time spent in clinic] = ∑_{i=1}^k E[Time spent in clinic | Ri < min_{j≠i} Rj] P(Ri < min_{j≠i} Rj)
= ∑_{i=1}^k (1/∑_{j=1}^k λj + 1/λi) · (λi / ∑_{j=1}^k λj)
= (k + 1) / ∑_{j=1}^k λj.
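The answer (k + 1)/∑_j λj can be sanity-checked with a short Monte Carlo simulation. The sketch below uses arbitrary illustrative rates; the function name is our own:

```python
import random

def simulate_clinic(lmbdas, n_trials=200_000, seed=0):
    """Estimate the expected total time spent at the clinic by simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        # By memorylessness, the remaining times are Expo(lambda_i).
        remaining = [rng.expovariate(l) for l in lmbdas]
        w = min(remaining)                        # your waiting time W
        i = remaining.index(w)                    # the doctor who frees up first
        total += w + rng.expovariate(lmbdas[i])   # wait plus your own treatment
    return total / n_trials

lam = [1.0, 2.0, 3.0]                  # k = 3, arbitrary rates
est = simulate_clinic(lam)
exact = (len(lam) + 1) / sum(lam)      # (k + 1) / sum(lambda_j) from part (e)
print(round(est, 3), round(exact, 3))  # the two agree to about two decimals
```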
Question 5.2 (Automated Grocery Store): Suppose you are a business analyst for the Amazon Go
grocery store. In the store, customers are able to purchase without a cashier or checkout station. You may solve either Version 1 (parts (a) and (b)) or Version 2 (parts (c) and (d)). Note that
Version 2 is a simplified exercise, and you don't need to use programming for part (d). The (s, S)-inventory system of the store is also built in a partially automatic way:
whenever the inventory level of a product falls below s, it triggers the sensor, which then automatically places an order for replenishment up to the level S. You receive a notification only when the
replenishment order is placed, but have no access to the real-time inventory level. However, your
manager instructs you to build a remote inventory monitoring system (using only the replenishment
notifications), which is helpful in understanding the customers' behavior.
Version 1: Consider the (s, S)-inventory model with s = 5 and S = 10. Let Xn be the inventory position
at the end of each day (which is unknown). As in previous exercises, we assume that the delivery
arrives immediately at the end of each day. Suppose that the daily demands Zn are independent
and identically distributed as Geom(p) with p = 0.4 (with P(Zn = 0) > 0). Let Yn be an indicator
of whether the sensor for replenishment (for a particular item) is triggered at the end of day n:

Yn = 1 if there is a replenishment on day n, and Yn = 0 otherwise.

Assume that X0 = S. Mathematically, your goal is to compute

νn(x) := P(Xn = x | Y1, ..., Yn).

(a) Show that the following recursive relation holds for n ≥ 2:

νn(x) = [∑_{xn−1} P(Xn = x | Yn = yn, Xn−1 = xn−1) P(Yn = yn | Xn−1 = xn−1) νn−1(xn−1)] / [∑_{z,xn−1} P(Xn = z | Yn = yn, Xn−1 = xn−1) P(Yn = yn | Xn−1 = xn−1) νn−1(xn−1)].   (1)

(b) (Programming) Given the observations Y0 = 0, Y1 = 1, Y2 = 1, Y3 = 0, Y4 = 0, Y5 = 0, Y6 = 0, implement your inventory monitoring system and provide ν6(x).
Version 2: Consider the (s, S)-inventory model with s = 5 and S = 7. Let Xn be the inventory
position at the end of each day (which is unknown). As in previous exercises, we assume that the
delivery arrives immediately at the end of each day. Suppose that the daily demands Zn are
independent and identically distributed as Geom(p) with p = 0.4 (with P(Zn = 0) > 0). Let
Wn be an indicator of whether the system is at the order-up-to level S, so that
Wn := 1(Xn = S). Assume that X0 = S. Mathematically, your goal is to compute

µn(x) := P(Xn = x | W1, ..., Wn).

(c) Derive a recursive relation between µn(x) and µn−1(x).
(d) Given the observations W0 = 1, W1 = 0, W2 = 1, W3 = 1, W4 = 0, implement your inventory monitoring system and provide µ4(x).
Answer: (a) Solution 1. We have

νn(x) = P(Xn = x | Y1^n)
= ∑_y P(Xn = x, Xn−1 = y | Y1^n)
= ∑_y P(Xn = x | Xn−1 = y, Y1^n) P(Xn−1 = y | Y1^n)   (Bayes' formula)
= ∑_y P(Xn = x | Xn−1 = y, Yn) P(Xn−1 = y | Y1^n)   (conditional independence)
= ∑_y P(Xn = x | Xn−1 = y, Yn) · P(Xn−1 = y, Yn | Y1^{n−1}) / P(Yn | Y1^{n−1})   (Bayes' formula)
∝ ∑_y P(Xn = x | Xn−1 = y, Yn) P(Xn−1 = y, Yn | Y1^{n−1})   (denominator is constant)
∝ ∑_y P(Xn = x | Xn−1 = y, Yn) P(Yn | Xn−1 = y, Y1^{n−1}) P(Xn−1 = y | Y1^{n−1})   (Bayes' formula)
∝ ∑_y P(Xn = x | Xn−1 = y, Yn) P(Yn | Xn−1 = y, Y1^{n−1}) νn−1(y)
∝ ∑_y P(Xn = x | Xn−1 = y, Yn) P(Yn | Xn−1 = y) νn−1(y).   (conditional independence)

Normalizing this last expression over x then gives (1).

(a) Solution 2. Note that Yn = 1(Xn−1 − Zn < s). Throughout, we use the notation z0^n = (z0, . . . , zn).
First, we show the following lemma.
Lemma 1. For n ≥ 2, we have

P(Y1^n = y1^n | X0^{n−1} = x0^{n−1}) = P(Y1^{n−1} = y1^{n−1} | X0^{n−2} = x0^{n−2}) P(Yn = yn | Xn−1 = xn−1) × [P(Xn−1 = xn−1 | Yn−1 = yn−1, Xn−2 = xn−2) / P(Xn−1 = xn−1 | Xn−2 = xn−2)].
Proof of Lemma. By the definition of the Yn's and Bayes' rule, we have

P(Y1^n = y1^n | X0^{n−1} = x0^{n−1})
= P(Y1^{n−1} = y1^{n−1} | X0^{n−1} = x0^{n−1}) P(Yn = yn | X0^{n−1} = x0^{n−1})
= P(Y1^{n−1} = y1^{n−1} | X0^{n−1} = x0^{n−1}) P(Yn = yn | Xn−1 = xn−1)
= P(Y1^{n−1} = y1^{n−1} | X0^{n−2} = x0^{n−2}) × [P(Xn−1 = xn−1 | Y1^{n−1} = y1^{n−1}, X0^{n−2} = x0^{n−2}) / P(Xn−1 = xn−1 | X0^{n−2} = x0^{n−2})] × P(Yn = yn | Xn−1 = xn−1)
= P(Y1^{n−1} = y1^{n−1} | X0^{n−2} = x0^{n−2}) × [P(Xn−1 = xn−1 | Yn−1 = yn−1, Xn−2 = xn−2) / P(Xn−1 = xn−1 | Xn−2 = xn−2)] × P(Yn = yn | Xn−1 = xn−1),

where we used the Markov property in the last equality.
Now, we condition on X0, . . . , Xn−1 and use the above lemma to derive a recursive relation for νn(x):

νn(x) = P(Xn = x | Y1^n = y1^n)
= ∑_{x0,...,xn−1} P(Xn = x | Y1^n = y1^n, X0^{n−1} = x0^{n−1}) P(X0^{n−1} = x0^{n−1} | Y1^n = y1^n)
(a)= ∑_{x0,...,xn−1} P(Xn = x | Yn = yn, Xn−1 = xn−1) P(Y1^n = y1^n | X0^{n−1} = x0^{n−1}) · P(X0^{n−1} = x0^{n−1}) / P(Y1^n = y1^n)
(b)= ∑_{x0,...,xn−1} P(Xn = x | Yn = yn, Xn−1 = xn−1) P(Yn = yn | Xn−1 = xn−1) × [P(Xn−1 = xn−1 | Yn−1 = yn−1, Xn−2 = xn−2) / P(Xn−1 = xn−1 | Xn−2 = xn−2)] × P(Y1^{n−1} = y1^{n−1} | X0^{n−2} = x0^{n−2}) · P(X0^{n−1} = x0^{n−1}) / P(Y1^n = y1^n),

where step (a) followed from the definition of the Yn's and Bayes' rule, and step (b) followed from the above lemma. Now, applying the Markov property P(X0^{n−1} = x0^{n−1}) = P(Xn−1 = xn−1 | Xn−2 = xn−2) P(X0^{n−2} = x0^{n−2}) in the last line of the preceding display, we obtain

νn(x) = ∑_{xn−1} P(Xn = x | Yn = yn, Xn−1 = xn−1) P(Yn = yn | Xn−1 = xn−1) × ∑_{x0,...,xn−2} P(Xn−1 = xn−1 | Yn−1 = yn−1, Xn−2 = xn−2) P(Y1^{n−1} = y1^{n−1} | X0^{n−2} = x0^{n−2}) · P(X0^{n−2} = x0^{n−2}) / P(Y1^n = y1^n).   (2)

Now, note that since

P(Xn−1 = xn−1 | Yn−1 = yn−1, Xn−2 = xn−2) = P(Xn−1 = xn−1 | Y1^{n−1} = y1^{n−1}, X0^{n−2} = x0^{n−2}),

we have

∑_{x0,...,xn−2} P(Xn−1 = xn−1 | Yn−1 = yn−1, Xn−2 = xn−2) P(Y1^{n−1} = y1^{n−1} | X0^{n−2} = x0^{n−2}) P(X0^{n−2} = x0^{n−2})
= ∑_{x0,...,xn−2} P(Xn−1 = xn−1, Y1^{n−1} = y1^{n−1}, X0^{n−2} = x0^{n−2})
= P(Xn−1 = xn−1, Y1^{n−1} = y1^{n−1}).

Using this identity in the expression (2), we obtain

νn(x) = ∑_{xn−1} P(Xn = x | Yn = yn, Xn−1 = xn−1) P(Yn = yn | Xn−1 = xn−1) · P(Xn−1 = xn−1, Y1^{n−1} = y1^{n−1}) / P(Y1^n = y1^n)   (3)
= [1 / P(Yn = yn | Y1^{n−1} = y1^{n−1})] ∑_{xn−1} P(Xn = x | Yn = yn, Xn−1 = xn−1) P(Yn = yn | Xn−1 = xn−1) νn−1(xn−1).   (4)

Now, since ∑_z νn(z) = 1, we have

P(Yn = yn | Y1^{n−1} = y1^{n−1}) = ∑_{z,xn−1} P(Xn = z | Yn = yn, Xn−1 = xn−1) P(Yn = yn | Xn−1 = xn−1) νn−1(xn−1),

and the final result (1) follows from the recursion (4).
(a) Solution 3. From the definition of Yn, we have Yn = 1(Xn−1 − Zn < s). Let

ν̄n(x) = P(Xn−1 = x | Y1, ..., Yn).

First, we derive the relation between ν̄n and νn:

νn(x) = P(Xn = x | Y1, ..., Yn)
= ∑_y P(Xn = x, Xn−1 = y | Y1, ..., Yn)
= ∑_y P(Xn = x | Y1, ..., Yn, Xn−1 = y) P(Xn−1 = y | Y1, ..., Yn)
= ∑_y P(Xn = x | Y1, ..., Yn, Xn−1 = y) ν̄n(y)
= ∑_y P(Xn = x | Yn, Xn−1 = y) ν̄n(y).

The first factor in the last line can be computed by considering the value that Yn takes. Then, we derive a recursion for ν̄n(x):

ν̄n(x) = P(Xn−1 = x | Y1, ..., Yn)
= P(Xn−1 = x, Yn | Y1, ..., Yn−1) / P(Yn | Y1, ..., Yn−1)
∝ P(Xn−1 = x, Yn | Y1, ..., Yn−1)   (because the denominator is a constant)
∝ ∑_y P(Xn−1 = x, Xn−2 = y, Yn | Y1, ..., Yn−1)
∝ ∑_y P(Xn−1 = x, Yn | Xn−2 = y, Y1, ..., Yn−1) P(Xn−2 = y | Y1, ..., Yn−1)
∝ ∑_y P(Xn−1 = x, Yn | Xn−2 = y, Y1, ..., Yn−1) ν̄n−1(y)
∝ ∑_y P(Xn−1 = x, Yn | Xn−2 = y, Yn−1) ν̄n−1(y)
∝ ∑_y P(Yn | Xn−1 = x, Xn−2 = y, Yn−1) P(Xn−1 = x | Xn−2 = y, Yn−1) ν̄n−1(y)
∝ ∑_y P(Yn | Xn−1 = x) P(Xn−1 = x | Xn−2 = y, Yn−1) ν̄n−1(y).

Thus, we obtain a recursive formula for ν̄n(x).
(b) For the given problem, note that

P(Xn = x | Yn = 1, Xn−1 = xn−1) = 1(x = S),
P(Xn = x | Yn = 0, Xn−1 = xn−1) = P(Zn = xn−1 − x) / P(Zn ≤ xn−1 − s),

and

P(Yn = 1 | Xn−1 = xn−1) = P(Zn > xn−1 − s).

Noting that x, z, and xn−1 range over s to S, we can then compute νn(x) via the recursion (1).
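A minimal sketch of this filter in Python follows. We assume the pmf P(Zn = z) = p(1 − p)^z for z = 0, 1, 2, . . . (so that P(Zn = 0) = p > 0); the helper names are our own:

```python
s, S, p = 5, 10, 0.4
states = list(range(s, S + 1))      # possible inventory positions s..S

def geom_pmf(z):                    # P(Z = z) = p * (1 - p)**z, z = 0, 1, 2, ...
    return p * (1 - p) ** z

def step(nu_prev, y):
    """One step of recursion (1): map nu_{n-1} to nu_n given Y_n = y."""
    nu = {}
    for x in states:
        total = 0.0
        for xp in states:           # xp plays the role of x_{n-1}
            if y == 1:
                # sensor fired => order up to S; P(Y_n = 1 | xp) = P(Z > xp - s)
                trans = 1.0 if x == S else 0.0
                like = (1 - p) ** (xp - s + 1)
            else:
                # no order: X_n = X_{n-1} - Z_n, given that Z_n <= xp - s
                p_le = 1.0 - (1 - p) ** (xp - s + 1)   # P(Z <= xp - s)
                trans = geom_pmf(xp - x) / p_le if xp >= x else 0.0
                like = p_le
            total += trans * like * nu_prev[xp]
        nu[x] = total
    c = sum(nu.values())            # normalize, as in the denominator of (1)
    return {x: v / c for x, v in nu.items()}

nu = {x: 1.0 if x == S else 0.0 for x in states}   # X_0 = S
for y in [1, 1, 0, 0, 0, 0]:                       # observed Y_1, ..., Y_6
    nu = step(nu, y)
print({x: round(v, 4) for x, v in nu.items()})     # nu_6(x)
```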
(c) We can directly apply the recursive formula learned in class to get

µn(x) = ∑_z µn−1(z) P(z, x) 1(f(x) = Wn) / ∑_y ∑_z µn−1(z) P(z, y) 1(f(y) = Wn),

where P is the transition matrix for the chain and f(x) = 1(x = S).
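This recursion is short enough to code directly; with the observed Wn it reproduces the values reported in part (d). A sketch, again assuming the pmf P(Zn = z) = p(1 − p)^z for z ≥ 0:

```python
p, s, S = 0.4, 5, 7
states = [5, 6, 7]

def geom(z):           # P(Z = z) = p * (1 - p)**z for z >= 0, so P(Z = 0) = p > 0
    return p * (1 - p) ** z if z >= 0 else 0.0

def trans(z, x):
    """P(z, x): inventory position z at the end of one day, x the next day."""
    if x < S:
        return geom(z - x)                       # no reorder: X_n = z - Z_n >= s
    # either demand is exactly z - S, or demand exceeds z - s and we reorder to S
    return geom(z - S) + (1 - p) ** (z - s + 1)

def step(mu_prev, w):
    f = lambda x: 1 if x == S else 0
    mu = {x: sum(mu_prev[z] * trans(z, x) for z in states) * (f(x) == w)
          for x in states}
    c = sum(mu.values())
    return {x: v / c for x, v in mu.items()}

mu = {x: 1.0 if x == S else 0.0 for x in states}  # X_0 = S, i.e. W_0 = 1
for w in [0, 1, 1, 0]:                             # observed W_1, ..., W_4
    mu = step(mu, w)
print([round(mu[x], 3) for x in states])           # [0.375, 0.625, 0.0]
```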
(d) Plugging in the values of Wn, we have

µ3 = (0, 0, 1),
µ4 = (0.375, 0.625, 0).

Question 5.3 (Discrete Event Simulation):
Consider a bank with two tellers. You observed
customer interarrival times 8.05, 3.0, 0.99, 1.43, 0.26, 1.14, 1.02, 1.75, 2.14 and their corresponding
service times 6.58, 3.57, 3.39, 2.47, 3.65, 2.22, 2.6, 5.01, 2.34. Using these observations, simulate
X(t), the number of people in the bank at time t until T = 19.00 (assume X(0) = 0). (a) Give a list of (t, X(t)) pairs where the value of X(t) changes for 0 ≤ t ≤ T .
(b) From 0 ≤ t ≤ T , what is the total length of time that both tellers are busy?
Answer: (a) See the following table (this is effectively how you would run this simulation on a computer)
and the figure.
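The (t, X(t)) event list in part (a), and the both-busy time in part (b), can be regenerated with a short discrete event simulation. A sketch, assuming FCFS service with two identical tellers:

```python
inter = [8.05, 3.0, 0.99, 1.43, 0.26, 1.14, 1.02, 1.75, 2.14]
svc = [6.58, 3.57, 3.39, 2.47, 3.65, 2.22, 2.6, 5.01, 2.34]
T = 19.0

arrivals, t = [], 0.0
for dt in inter:                   # cumulative sums give arrival times
    t += dt
    arrivals.append(t)

free = [0.0, 0.0]                  # next time each of the two tellers is available
events = []                        # (time, +1 for arrival, -1 for departure)
for a, s in zip(arrivals, svc):
    if a <= T:
        events.append((a, +1))
    i = 0 if free[0] <= free[1] else 1   # FCFS: take whichever teller frees first
    start = max(a, free[i])
    free[i] = start + s
    if free[i] <= T:
        events.append((free[i], -1))

events.sort()
x, prev, busy2, trace = 0, 0.0, 0.0, []
for time, d in events:
    if x >= 2:                     # both tellers busy on [prev, time)
        busy2 += time - prev
    prev = time
    x += d
    trace.append((round(time, 2), x))
if x >= 2:
    busy2 += T - prev
print(trace)                       # (t, X(t)) pairs for part (a)
print(round(busy2, 2))             # 7.95, matching part (b)
```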
(b) T − 11.05 = 19.00 − 11.05 = 7.95 time units.

Question 5.4 (Manufacturing): Consider a manufacturing process with batches of raw materials
coming in. Suppose that the interarrival times of batches are i.i.d. exponential random variables
with rate λ and their processing times are i.i.d. exponential random variables with rate µ. Due to
financial constraints, there is only one machine, which can process only one batch of raw materials
at any given instant. We assume that once a batch gets processed by the machine, it leaves the
system immediately.

Let X(t) denote the number of batches in the system at time t and let Tj := inf{t ≥ 0 : X(t) = j}
be the hitting time to have j batches in the system. Let Pi,j be the probability that when the
number of batches in the system changes from i, it moves to j batches. Clearly, we have Pi,j = 0
for j ≠ i − 1, i + 1.
(a) Derive an expression for Pi,j in terms of λ and µ.
(b) Using first-step analysis, derive an expression for Ei[Tj] in terms of Ei+1[Tj] and Ei−1[Tj] (and
λ, µ).
(c) What is E0[T1]? Based on your answer and the relationship derived in part (b), show that

Ei[Ti+1] = (1/λ) ∑_{k=0}^i (µ/λ)^k.
k=0 (d) Let λ = 2.0 and µ = 3.0. Starting with no raw material in the system, what is the expected
amount time until the system has j ≥ 1 batches in the system? Explain your reasoning carefully.
Answer:
Let X(t) denote the number of batches in the system at time t. We note that this is an
M/M/1 queueing model, as covered in class. Let Ti,j be the time it takes to have j batches in the
system, starting from i batches in the system, so that Ei[Tj] = E[Ti,j].

[Figure: X(t), the number of jobs in the system for Question 5.3, plotted for 0 ≤ t ≤ 19.]

(a) Let S1, S2 be independent exponential random variables with respective rates λ and µ. Then,
since arrivals move the queue up by one slot, we have

Pi,i+1 = P(S1 < S2) = ∫_0^∞ P(t < S2 | S1 = t) λe^{−λt} dt = ∫_0^∞ λe^{−(λ+µ)t} dt = λ/(λ + µ).

Similarly, we have Pi,i−1 = µ/(λ + µ).
λ+µ . (b) Starting from i, let τi be the time at which the Markov chain X(t) leaves state i. Letting S1 , S2
D
min(S1 , S2 ). By conditioning on the rst transition, we have for
be as in part (a), we have τi =
i≥1
Ei [Tj ] = Ei [Tj |X(τi ) = i + 1]Pi (X(τi ) = i + 1) + Ei [Tj |X(τi ) = i − 1]Pi (X(τi ) = i − 1)
= Ei [Tj |X(τi ) = i + 1]Pi,i+1 + Ei [Tj |X(τi ) = i − 1]Pi,i−1
= (E[min(S1 , S2 )| min(S1 , S2 ) = S1 ] + Ei [Tj − τi |X(τi ) = i + 1]) Pi,i+1
+ (E[min(S1 , S2 )| min(S1 , S2 ) = S2 ] + Ei [Tj − τi |X(τi ) = i − 1]) Pi,i−1
= E[min(S1 , S2 )] + Ei [Tj − τi |X(τi ) = i + 1]Pi,i+1 + Ei [Tj − τi |X(τi ) = i − 1]Pi,i−1 .
1
Now, note that E[min(S1 , S2 )] = λ+µ
since min(S1 , S2 ) ∼ Exp(λ + µ). Applying the strong
Markov property in the preceeding display, we obtain Ei [Tj ] = 1
λ
µ
+
Ei+1 [Tj ] +
Ei−1 [Tj ].
λ+µ λ+µ
λ+µ (c) We have E0 [T1 ] = E[S1 ] = λ1 . From part (a), we obtain
λ
µ
1
+
·0+
· E0 [T2 ]
λ+µ λ+µ
λ+µ
1
µ
=
+
· (E0 [T1 ] + E1 [T2 ])
λ+µ λ+µ
1
µ
1
=
+
·
+ E1 [T2 ] .
λ+µ λ+µ
λ E1 [T2 ] = Rearranging, we have
E1 [T2 ] = In general, note that 1
µ
1+
.
λ
λ 1
λ
µ
+
·0+
· Ei−1 [Ti+1 ]
λ+µ λ+µ
λ+µ
1
µ
+
· (Ei−1 [Ti ] + Ei [Ti+1 ])
=
λ+µ λ+µ Ei [Ti+1 ] = Rearranging,
Ei [Ti+1 ] = 1
(1 + µEi−1 [Ti−1,i ]) .
λ We recursively conclude that i 1 X µ i
Ei [Ti,i+1 ] =
.
λ
λ
i=0 11 (d) We wish to compute E0 [Tj ]. By the strong Markov property, we have
E0 [Tj ] = E0 [T1 ] + · · · + Ej−1 [Tj ] = j−1
X Ek [Tk+1 ]. (5) k=0 Note from part (c) that i+1
1 − µλ
Ei [Ti+1 ] =
λ−µ r = 1−r
1−r .) Plugging this expression into (5), we conclude that
µ j
µ
1
−
λ
1 = 3 · 1.5j − j − 3.
E0 [Tj ] =
j−
λ−µ
λ−µ (here we used the formula n Pn−1
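The recursion Ei[Ti+1] = (1 + µ Ei−1[Ti])/λ and the closed form for E0[Tj] can be cross-checked numerically. A sketch (the function name is our own):

```python
lam, mu = 2.0, 3.0

def e0_tj(j):
    """E_0[T_j] via E_0[T_1] = 1/lam and E_i[T_{i+1}] = (1 + mu * E_{i-1}[T_i]) / lam."""
    e = 1.0 / lam          # E_0[T_1]
    total = e
    for _ in range(1, j):
        e = (1.0 + mu * e) / lam   # next E_i[T_{i+1}] from the part-(c) recursion
        total += e                 # accumulate the sum in (5)
    return total

for j in [1, 2, 5, 10]:
    closed = 3.0 * 1.5 ** j - j - 3.0   # closed form with lam = 2, mu = 3
    print(j, round(e0_tj(j), 6), round(closed, 6))   # the two columns agree
```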
Question 5.5 (Malfunctioning Single Server Queue): Consider the M/M/1 system in which
customers arrive at rate λ = 2.0 and the server serves at rate µ = 3.0. However, suppose that in
any interval of length h in which the server is busy, there is a probability αh + o(h), with α = 0.1, that the server
will experience a breakdown, which causes the system to shut down. All customers
that are in the system depart, and no additional arrivals are allowed to enter until the breakdown
is fixed. The time to fix a breakdown is exponentially distributed with rate β = 1.0.

(a) Define appropriate states.
(b) Give the balance equations.
(c) In terms of the long-run probabilities, what proportion of entering customers complete their
service?
Answer: (a) The system can be modeled as a CTMC on state space S = {B, 0, 1, 2, . . . }, where state B
represents a breakdown, while state i represents the system operative with i customers present
(i ≥ 0). The transition rates are given by

λ_{x,y} = λ if x ≥ 0, y = x + 1;
λ_{x,y} = µ if x ≥ 1, y = x − 1;
λ_{x,y} = α if x ≥ 1, y = B;
λ_{x,y} = β if x = B, y = 0;
λ_{x,y} = 0 otherwise.
(b) The (global) balance equations are:

λπ0 = µπ1 + βπB,
(λ + µ + α)πx = µπx+1 + λπx−1,  x ≥ 1,
βπB = α ∑_{x=1}^∞ πx.

It is worth noting that the above system, together with the condition ∑_{x∈S} πx = 1, has a unique
solution, since this CTMC is irreducible and positive recurrent: irreducibility is direct. To check
positive recurrence, it is enough to verify that state B is positive recurrent. To this end, let
Y = (Yn : n ≥ 0) denote the embedded Markov chain, and let NB := inf{n ≥ 1 : Yn = B} be the
number of transitions until the first return to B. It is easy to argue that PB(NB > 2n) ≤ q^n,
where q = 1 − α/(α + µ + λ). Then E(NB) = ∑_{n≥0} PB(NB > n) ≤ 2 ∑_{n=0}^∞ q^n < ∞, so state B is positive recurrent for the
embedded chain. Since {λ(x) : x ∈ S} is bounded, this implies B is positive recurrent for the
original CTMC.

(c) The fraction of entering customers who complete service is given by
f = P(completes service | enters)
= ∑_{x=0}^∞ P(completes service | system has x, customer entered) P(system has x | customer entered)
= ∑_{x=0}^∞ (µ/(µ + α))^{x+1} · πx/(1 − πB),

since P(completes service | system has x, customer entered) = (µ/(µ + α))^{x+1}.
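The answer leaves f in terms of the long-run probabilities; with the given rates, those probabilities can be estimated numerically by truncating the chain at a level N and running power iteration on the uniformized chain. A sketch (N and the iteration count are arbitrary choices of ours):

```python
lam, mu, alpha, beta = 2.0, 3.0, 0.1, 1.0
N = 60                         # truncation level (tail mass beyond N is negligible)
n_states = N + 2               # index 0 = state B; index i + 1 = i customers
Lam = lam + mu + alpha + beta  # uniformization rate

pi = [1.0 / n_states] * n_states
for _ in range(3000):          # power iteration on the uniformized chain
    new = [0.0] * n_states
    for idx, mass in enumerate(pi):
        if idx == 0:                         # breakdown state B: repair at rate beta
            rates = {1: beta}
        else:
            i = idx - 1                      # i customers present
            rates = {}
            if i < N:
                rates[idx + 1] = lam         # arrival
            if i >= 1:
                rates[idx - 1] = mu          # service completion
                rates[0] = alpha             # breakdown (server busy)
        out = sum(rates.values())
        new[idx] += mass * (1.0 - out / Lam) # uniformized self-loop
        for j, r in rates.items():
            new[j] += mass * r / Lam
    pi = new

piB = pi[0]
q = mu / (mu + alpha)
f = sum(q ** (x + 1) * pi[x + 1] for x in range(N + 1)) / (1.0 - piB)
print(round(f, 4))             # numerical estimate of f, roughly 0.92 for these rates
```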
