# hmwk6sol - ISyE 3232 Stochastic Manufacturing and Service...

ISyE 3232 Stochastic Manufacturing and Service Systems
Fall 2011
H. Ayhan

Solutions to Homework 7

1. You must be careful in this problem because λ is *not* the arrival rate. Because λ is used in the service time distribution, I will use η (read as "eta") to denote the arrival rate. Because the arrival rate is given in terms of hours, I will first convert it into minutes to match the service distribution's units. (You can also convert the service rate information into hours; both methods will work, so long as the units agree.)

$$\eta = \left(\frac{30 \text{ customers}}{1 \text{ hour}}\right)\left(\frac{1 \text{ hour}}{60 \text{ minutes}}\right) = \frac{1 \text{ customer}}{2 \text{ minutes}}$$

Because the interarrival times are exponential, the squared coefficient of variation of the arrival process is $c_A^2 = 1$, as is always true for the exponential distribution.

Now that the arrival information is in hand, I need to determine the mean and variance of the service distribution (which, if you look carefully, you should see is not an exponential distribution). The first way to do this is to calculate the mean and variance directly from the definitions. Let $S$ be a random variable representing the service time, with p.d.f. $f(s) = 4\lambda^2 s e^{-2\lambda s}$ for $s \ge 0$.

$$E(S) = \int_0^\infty s f(s)\,ds = \int_0^\infty s \cdot 4\lambda^2 s e^{-2\lambda s}\,ds = \frac{1}{\lambda} = \frac{3}{2} \text{ minutes}$$

$$\operatorname{Var}(S) = E(S^2) - (E(S))^2 = \int_0^\infty s^2 \cdot 4\lambda^2 s e^{-2\lambda s}\,ds - \left(\int_0^\infty s \cdot 4\lambda^2 s e^{-2\lambda s}\,ds\right)^2 = \frac{3}{2\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{2\lambda^2} = \frac{9}{8}$$

An alternative method avoids calculating the above integrals but requires knowledge of the Erlang-k distribution. The p.d.f. of an Erlang-k distribution with parameter α is

$$g(s) = \begin{cases} \dfrac{\alpha^k s^{k-1} e^{-\alpha s}}{(k-1)!} & \text{for } s \ge 0 \\[4pt] 0 & \text{otherwise} \end{cases}$$

By setting α = 2λ and k = 2, and considering only s ≥ 0, I can rewrite the given p.d.f. as follows:

$$f(s) = 4\lambda^2 s e^{-2\lambda s} = \frac{(2\lambda)^2 s\, e^{-(2\lambda)s}}{(2-1)!} = \frac{\alpha^k s^{k-1} e^{-\alpha s}}{(k-1)!}$$

I have now shown that the service times have an Erlang-2 distribution with parameter α = 2λ.
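The moment calculations above are easy to sanity-check by simulation. This is a small sketch, assuming λ = 2/3 per minute (the value implied by E(S) = 1/λ = 3/2 minutes); it samples S as the sum of two i.i.d. exponentials with rate α = 2λ and compares the empirical mean and variance against 3/2 and 9/8.

```python
import random

# Monte Carlo check of the service-time moments derived above:
# S is Erlang-2 with parameter alpha = 2*lambda, i.e. the sum of two
# i.i.d. Exponential(2*lambda) draws. Here lambda = 2/3 per minute,
# the value implied by E(S) = 1/lambda = 3/2 minutes.
lam = 2 / 3
alpha = 2 * lam

random.seed(0)
n = 100_000
samples = [random.expovariate(alpha) + random.expovariate(alpha)
           for _ in range(n)]

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n

print(mean)  # should be close to E(S) = 3/2
print(var)   # should be close to Var(S) = 9/8
```

With 100,000 samples, both estimates typically land within a few hundredths of the exact values.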
What is useful about this result is that an Erlang-k distribution with parameter α corresponds to the distribution of the sum of k i.i.d. exponential random variables with rate α. So let $S = X_1 + X_2$, where $X_1$ and $X_2$ are independent exponential random variables with rate α, allowing the mean and variance to be calculated as

$$E(S) = E(X_1 + X_2) = E(X_1) + E(X_2) = \frac{1}{\alpha} + \frac{1}{\alpha} = \frac{2}{\alpha} = \frac{1}{\lambda}$$

$$\operatorname{Var}(S) = \operatorname{Var}(X_1 + X_2) = \operatorname{Var}(X_1) + \operatorname{Var}(X_2) = \frac{1}{\alpha^2} + \frac{1}{\alpha^2} = \frac{2}{\alpha^2} = \frac{1}{2\lambda^2}$$

Having calculated the mean and variance of the service times, I can now calculate the service rate μ and the squared coefficient of variation of the service distribution $c_S^2$:

$$\mu = \frac{1}{E(S)} = \frac{2}{3}, \qquad c_S^2 = \frac{\operatorname{Var}(S)}{(E(S))^2} = \frac{1}{2}$$

The last piece of information I need is the traffic intensity,

$$\rho = \frac{\eta}{\mu} = \frac{3}{4}$$

(a) Using Kingman's formula, the average waiting time of each customer in the queue is

$$W_q = \left(\frac{c_A^2 + c_S^2}{2}\right)\frac{\rho}{\mu - \eta} = \left(\frac{1 + \frac{1}{2}}{2}\right)\frac{\frac{3}{4}}{\frac{2}{3} - \frac{1}{2}} = 3.375 \text{ minutes}$$

(b) Because the arrival rate is less than the service rate, the throughput of the system is η. Using Little's law, the average number of customers in the queue is

$$L_q = \eta W_q = \frac{1}{2} \cdot 3.375 = 1.6875 \text{ customers}$$

(c) Since the mean service time is E(S) = 3/2 minutes, the average time spent at the site by a customer is

$$W = W_q + E(S) = 3.375 + 3/2 = 4.875 \text{ minutes}$$

By using Little's law again, the average number of customers at the site is

$$L = \eta W = \frac{1}{2} \cdot 4.875 = 2.4375 \text{ customers}$$

2. (a) To determine the state space for $X_n$, you should consider how the system moves from state to state. If the inventory drops below 3 units during a particular day, an order is placed that returns the inventory to 6 units by the next morning; therefore a day can never start with fewer than 3 units in stock. The stock is only ever replenished to at most 6 units, so a day can never start with more than 6 units.
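A quick simulation of the policy just described confirms these bounds on the morning inventory. This is a sketch, assuming the demand probabilities used later in this solution (P(D=1)=1/6, P(D=2)=3/6, P(D=3)=2/6):

```python
import random

# Simulate the order-up-to-6 policy: each day subtract a random demand
# D in {1,2,3} (probabilities 1/6, 3/6, 2/6), and restock to 6 units
# overnight whenever the level falls below 3. Record every morning level.
random.seed(0)
stock = 5          # initial morning inventory X_0 = 5
mornings = set()
for _ in range(10_000):
    mornings.add(stock)
    d = random.choices([1, 2, 3], weights=[1, 3, 2])[0]
    stock -= d
    if stock < 3:  # order up to 6 overnight
        stock = 6
print(sorted(mornings))  # every morning level observed
```

Over a long run, the observed morning levels are exactly 3, 4, 5, and 6, matching the state space derived next.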
From these observations, the state space can be written as $S = \{3, 4, 5, 6\}$.

The initial state is deterministic, which means the initial distribution puts all of its mass on a single value,

$$P(X_0 = i) = \begin{cases} 1 & \text{for } i = 5 \\ 0 & \text{otherwise} \end{cases}$$

or, equivalently, as a vector, $a^T = (0 \;\; 0 \;\; 1 \;\; 0)$.

Next, we find the transition matrix. Note that

$$P(X_{n+1} = 3 \mid X_n = 3) = P(X_{n+1} = 4 \mid X_n = 3) = P(X_{n+1} = 5 \mid X_n = 3) = 0$$

because whenever the inventory level goes below 3 we order up to 6. Therefore,

$$P(X_{n+1} = 6 \mid X_n = 3) = 1$$

Now suppose the inventory at the beginning of the day is 4 units. If the demand during the current day (day n) is 1, we end up with 3 units of inventory, and, because we do not order, we will have 3 units of inventory at the beginning of the next day (day n + 1). Because the demand equals 1 with probability 1/6, the probability of transitioning from state 4 to state 3 is 1/6:

$$P(X_{n+1} = 3 \mid X_n = 4) = P(D = 1) = \frac{1}{6}$$

If the demand is 2 or 3, the inventory level drops below 3, so we order, hence

$$P(X_{n+1} = 6 \mid X_n = 4) = P(D = 2) + P(D = 3) = \frac{5}{6}$$

Proceeding by the same logic, row by row, the transition matrix is given by

$$P = \begin{pmatrix} 0 & 0 & 0 & 1 \\ \frac{1}{6} & 0 & 0 & \frac{5}{6} \\ \frac{3}{6} & \frac{1}{6} & 0 & \frac{2}{6} \\ \frac{2}{6} & \frac{3}{6} & \frac{1}{6} & 0 \end{pmatrix}$$

where $P_{ij} = P(X_{n+1} = j + 2 \mid X_n = i + 2)$.

(b) Let $Y_n$ be the number of units of inventory in stock at the end of day n. The inventory at the end of a day can be any value from 0 to 5. It cannot be 6, because at the beginning of a day the maximum number of items we can have in inventory is 6 and the demand is strictly greater than zero with probability 1. So the state space in this case is $S = \{0, 1, 2, 3, 4, 5\}$. If it still seems odd to exclude 6, include 6 in the transition matrix and see what happens. You should notice in the construction that state 6 cannot be reached from any initial state, including state 6 itself; for this reason we do not include it in this formulation.
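The suggestion above, to include state 6 and see what happens, can be carried out mechanically. This sketch builds the end-of-day transition matrix over states 0 through 6 from the demand distribution used in this problem (P(D=1)=1/6, P(D=2)=3/6, P(D=3)=2/6) and shows that no state transitions into state 6:

```python
from fractions import Fraction as F

# Build the end-of-day transition matrix with state 6 deliberately
# included, to see that state 6 can never be reached.
demand = {1: F(1, 6), 2: F(3, 6), 3: F(2, 6)}
states = range(7)  # end-of-day inventory 0..6 (6 included on purpose)

P = {i: {j: F(0) for j in states} for i in states}
for i in states:
    start = i if i >= 3 else 6       # below 3 overnight -> restock to 6
    for d, p in demand.items():
        P[i][start - d] += p         # next evening's level is start - D

# Column 6 is zero in every row: nothing ever transitions into state 6.
print(all(P[i][6] == 0 for i in states))  # True
```

Since demand is at least 1, the evening level is at most 6 − 1 = 5, which is exactly why column 6 is identically zero and state 6 can be dropped.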
The initial state is deterministic and the initial distribution is given by

$$P(Y_0 = i) = \begin{cases} 1 & \text{if } i = 2 \\ 0 & \text{otherwise} \end{cases}$$

Observe that if $Y_n = 0$ we order overnight, so day n + 1 starts with 6 units and ends with $6 - D$:

$$P(Y_{n+1} = 5 \mid Y_n = 0) = P(D = 1) = \frac{1}{6}$$

Similarly,

$$P(Y_{n+1} = 4 \mid Y_n = 0) = P(D = 2) = \frac{3}{6}, \qquad P(Y_{n+1} = 3 \mid Y_n = 0) = P(D = 3) = \frac{2}{6}$$

Continuing in this fashion, we arrive at the following transition matrix

$$P = \begin{pmatrix} 0 & 0 & 0 & \frac{2}{6} & \frac{3}{6} & \frac{1}{6} \\ 0 & 0 & 0 & \frac{2}{6} & \frac{3}{6} & \frac{1}{6} \\ 0 & 0 & 0 & \frac{2}{6} & \frac{3}{6} & \frac{1}{6} \\ \frac{2}{6} & \frac{3}{6} & \frac{1}{6} & 0 & 0 & 0 \\ 0 & \frac{2}{6} & \frac{3}{6} & \frac{1}{6} & 0 & 0 \\ 0 & 0 & \frac{2}{6} & \frac{3}{6} & \frac{1}{6} & 0 \end{pmatrix}$$

where $P_{ij} = P(Y_{n+1} = j - 1 \mid Y_n = i - 1)$.

3. $X_n$ represents the number of consecutive days without injury on the morning of day n. If today the number of consecutive days without injury is m, then it is possible (with probability 99/100) to have one more day without injury and begin the next morning with m + 1 consecutive days without injury. Because this logic holds for every non-negative integer, the state space is the set of non-negative integers,

$$S = \mathbb{Z}_+ = \{0, 1, 2, \ldots\}$$

To gain some intuition for how the transition probabilities are calculated, consider the following table. The first column gives the day n, the second the number of consecutive days without injury on that morning, the third whether an injury occurred later that day but before the next morning, and the last the number of consecutive days without injury on the following morning, which results from the previous day's events.

| n | Morning of day n ($X_n$) | Injury today? | Morning of day n+1 ($X_{n+1}$) |
|---|---|---|---|
| 1 | 0 | No | 1 |
| 2 | 1 | No | 2 |
| 3 | 2 | Yes | 0 |
| 4 | 0 | No | 1 |
| 5 | 1 | No | 2 |
| 6 | 2 | No | 3 |
| 7 | 3 | Yes | 0 |
| 8 | 0 | Yes | 0 |

This example illustrates the progression of states for the Markov chain in one particular instance. In general, if at the start of the day there have been m ≥ 0 consecutive days without injury, then one of two outcomes can occur. If no injury occurs, the number of consecutive days without injury increases by 1, so on the morning of the next day there will have been m + 1 consecutive days without injury. From this logic the first transition probability can be determined.
$$P(X_{n+1} = m + 1 \mid X_n = m) = P(\text{no injury}) = \frac{99}{100}$$

If an injury occurs, then the streak has been broken and the number of consecutive days without injury resets to 0. This allows one more transition probability to be calculated:

$$P(X_{n+1} = 0 \mid X_n = m) = P(\text{injury}) = \frac{1}{100}$$

Notice that the sum of these two probabilities is 1, and they correspond to the same row of the transition matrix. Because the sum of the elements of each row of a transition matrix must equal 1, all other entries must be 0. Also, because m was generic, this formula and logic apply to all m ≥ 0. The probabilities can be summarized as follows:

$$P_{i,j} = P(X_{n+1} = j \mid X_n = i) = \begin{cases} \frac{99}{100} & \text{if } j = i + 1 \\ \frac{1}{100} & \text{if } j = 0 \\ 0 & \text{otherwise} \end{cases}$$

By applying the transition probability formula row by row, we can write the first several rows of the (infinite) matrix until the pattern develops:

$$P = \begin{pmatrix} \frac{1}{100} & \frac{99}{100} & 0 & 0 & 0 & \cdots \\ \frac{1}{100} & 0 & \frac{99}{100} & 0 & 0 & \cdots \\ \frac{1}{100} & 0 & 0 & \frac{99}{100} & 0 & \cdots \\ \frac{1}{100} & 0 & 0 & 0 & \frac{99}{100} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

The initial distribution is deterministic, and so it can be expressed as a probability distribution taking one non-zero value,

$$P(X_0 = i) = \begin{cases} 1 & \text{if } i = 0 \\ 0 & \text{otherwise} \end{cases}$$

or, in vector notation, $a = (1 \;\; 0 \;\; 0 \;\; 0 \;\; \cdots)$.
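The transition rule above is easy to simulate, and the simulation gives a useful check: since each day independently ends in an injury with probability 1/100, the long-run fraction of injury days should be close to 1/100. This is a minimal sketch of that experiment:

```python
import random

# Simulate the no-injury-streak chain: from state m, move to m + 1 with
# probability 99/100 (no injury) and reset to 0 with probability 1/100.
def step(m, p_injury=0.01):
    return 0 if random.random() < p_injury else m + 1

random.seed(1)
x, injuries, days = 0, 0, 200_000
for _ in range(days):
    x = step(x)
    if x == 0:          # a reset to state 0 marks an injury day
        injuries += 1

print(injuries / days)  # long-run fraction of injury days, near 1/100
```

The same function could be used to study streak lengths: the time between resets is geometric with success probability 1/100.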

*This note was uploaded on 03/08/2012 for the course ISYE 3232 taught by Professor Billings during the Fall '07 term at Georgia Tech.*
