CHAPTER 7
Queueing Models

1. Introduction
2. Properties of General Queueing Systems
   Relationship between $\hat{\pi}_j$ and $\pi^*_j$
   Relationship between $\pi^*_j$ and $\pi_j$
   PASTA: Relationship between $\hat{\pi}_j$ and $p_j$
   Little's Law
3. Birth and Death Queues
   M/M/1 Queue
   M/M/1/K Queue
   M/M/s Queue
   M/M/$\infty$ Queue
   Queues with Finite Populations
   Queues with Balking and Reneging
4. Open Queueing Networks
   State-Dependent Service
   State-Dependent Arrivals and Service
5. Closed Queueing Networks
6. Single Server Queues
   M/G/1 Queue
   G/M/1 Queue
7. Retrial Queue
8. Infinite Server Queue
9. Modeling Exercises
10. Computational Exercises

Folk Theorem of Queueing Theory: The queue you join moves the slowest.
Proof: Follows as a corollary to Murphy's Law.

7.1 Introduction

Queues are an unavoidable aspect of modern life. We do not like queues because of the waiting involved. However, we like the fair service policies that a queueing system imposes. Imagine what would happen if an amusement park did not enforce a first-come first-served queueing discipline at its attractions!

Knowingly or unknowingly, we face queues every day. We stand in queues at grocery store checkout counters, for movie tickets, and at the baggage claim areas in airports. Many times we are in a queue without physically being there, as when we are put on hold for the next available service representative after calling a customer service number during peak times. Many times we do not even know that we are in a queue: when we pick up the phone we are put in a queue to get a dial tone. Since we generally get the dial tone within a fraction of a second, we do not realize that we went through a queue. But we did, nonetheless.

Finally, waiting in queues is not a uniquely human fate. All kinds of systems enforce queueing for all kinds of non-human commodities. For example, a modern computer system manages queues of computer programs at its central processing unit, its input/output devices, etc. A telephone system maintains a queue of calls and serves them by assigning circuits to them. A digital communication network transmits packets of data in a store-and-forward fashion, i.e., it maintains a queue of packets at each node before transmitting them further towards their destinations. In a manufacturing setting queues are called inventories. Here the items are produced at a factory and stored in a warehouse, i.e., they form a queue in the warehouse. The items are removed from the warehouse whenever a demand occurs.

Managing queues - whether human or non-human - properly is critical to the smooth (and profitable) operation of any system. In a manufacturing system, excessive queues (i.e., inventories) of manufactured products are expensive: one needs larger warehouses to store them, enormous capital is idled in the inventory, and there are costs of deterioration, spoilage, etc. In a computer setting, building large queues of programs inevitably means slower response times. Slow response times from the central computers can have a disastrous effect on modern banks and stock exchanges, for example. Modern communication networks can carry voice and video traffic efficiently only if the queues of data packets are managed so that delays do not exceed a few milliseconds.

For human queues there is an additional cost to queueing: the psychological stress generated by having to wait, violent road rage being the most extreme manifestation of this. Industries that manage human queues (airlines, retail stores, amusement parks, etc.)
employ several methods to reduce the stress. For example, knowing beforehand that there is a half-hour wait in a queue for a ride in an amusement park helps reduce the stress. Knowing the reason for a delay also reduces stress - hence the airline pilot's announcement about being tenth in the queue for takeoff after the plane leaves the gate but sits on the tarmac for 20 minutes. Diverting the attention of the waiting public can make the wait more bearable - this explains the presence of tabloids and TVs near the grocery store checkout counters. Airports use an ingenious method: there is generally a long walk from the gate to the baggage claim area, which makes the total wait seem smaller. Finally, there is the famous story of the manager of a skyscraper who put mirrors in the elevator lobbies and successfully reduced the complaints about slow elevators!

Another aspect of queues mentioned earlier is the fair service discipline. In human queues this generally means first-come first-served (or first-in first-out, or head of the line). In non-human queues many other disciplines can be used. For example, blood banks may manage their blood inventory by a last-in first-out policy. This ensures better quality blood for most clients. The bank may discard blood after it stays in the bank for longer than a certain period. Generally, queues (or inventories) of perishable commodities use last-in first-out systems. Computers use many specialized service disciplines. A common service discipline is called time sharing, under which each job in the system gets $\Delta$ milliseconds of the CPU in a round-robin fashion. Thus all jobs get some service within reasonable intervals of time. In the limit, as $\Delta \to 0$, all jobs get served continuously in parallel, each job getting a fraction of the CPU processing capacity. This limiting discipline is called processor sharing. As a last example, the service discipline may be random: the server picks one of the waiting customers at random for the next service. Such a discipline is common in statistical experiments to avoid bias. It is also common to have priorities in service, although we do not consider such systems in this chapter.

A block diagram of a simple queueing system is shown in Figure ??. There are several key aspects to describing a queueing system. We discuss them below.

1. The Arrival Process. The simplest model is when the customers arrive one at a time and the times between two consecutive arrivals are iid non-negative random variables. We use special symbols to denote these inter-arrival times as follows: $M$ for exponential (stands for memoryless or Markovian), $E_k$ for Erlang with $k$ phases, $D$ for deterministic, $PH$ for phase type, $G$ for general (sometimes we use $GI$ to emphasize independence). This list is by no means exhaustive, and new notation keeps getting introduced as newer applications demand newer arrival characteristics. For example, applications in telecommunication systems use what are known as Markovian arrival processes, or Markov modulated Poisson processes, denoted by MAP or MMPP.

2. The Service Times. The simplest model assumes that the service times are iid non-negative random variables. We use the notation of the inter-arrival times for the service times as well.

3. The Number of Servers. It is typically denoted by $s$ (for servers) or $c$ (for channels in telecommunication systems). It is typically assumed that all servers are identical.

4. The Holding Capacity. This is the maximum number of customers that can be in the system at any time.
We also call it the system capacity, or just the capacity. If the capacity is $K$, an arriving customer who sees $K$ customers in the system is permanently lost. If no capacity is mentioned, it is assumed to be infinite.

5. The Service Discipline. This describes the sequence in which the waiting customers are served. As described before, the possible disciplines are FCFS (First-come First-served), LCFS (Last-come First-served), Random, PS (Processor Sharing), etc.

We shall follow the symbolic representation introduced by D. G. Kendall to represent a queueing system as

Inter-arrival time distribution / Service time distribution / Number of Servers / Capacity / Service Discipline.

Thus $M/G/3/15/LCFS$ represents a queueing system with Poisson arrivals, generally distributed service times, three servers, a capacity to hold 15 customers, and the last-come first-served service discipline. If the capacity and the service discipline are not mentioned, we assume infinite capacity and the FCFS discipline. Thus an $M/M/s$ queue has Poisson arrivals, exponential service times, $s$ servers, infinite waiting room, and the FCFS discipline.

Several quantities are of interest in the study of queueing systems. We introduce the relevant notation below:

$X(t)$ = the number of customers in the system at time $t$,
$X_n$ = the number of customers in the system just after the $n$-th customer departs,
$X_n^*$ = the number of customers in the system just before the $n$-th customer enters,
$\hat{X}_n$ = the number of customers in the system just before the $n$-th customer arrives,
$W_n$ = the time spent in the system by the $n$-th customer.

Note that we distinguish between an arriving customer and an entering customer, since an arriving customer may not enter because the system is full, or because the customer decides not to join since the system is too congested. We are interested in

$p_j = \lim_{t\to\infty} P(X(t) = j)$,
$\pi_j = \lim_{n\to\infty} P(X_n = j)$,
$\pi_j^* = \lim_{n\to\infty} P(X_n^* = j)$,
$\hat{\pi}_j = \lim_{n\to\infty} P(\hat{X}_n = j)$,
$F(x) = \lim_{n\to\infty} P(W_n \le x)$,
$L = \lim_{t\to\infty} E(X(t))$,
$W = \lim_{n\to\infty} E(W_n)$.

We shall also find it useful to study the customers in the queue (i.e., those who are in the system but not in service). We define

$X^q(t)$ = the number of customers in the queue at time $t$,
$X_n^q$ = the number of customers in the queue just after the $n$-th customer departs,
$X_n^{*q}$ = the number of customers in the queue just before the $n$-th customer enters,
$\hat{X}_n^q$ = the number of customers in the queue just before the $n$-th customer arrives,
$W_n^q$ = the time spent in the queue by the $n$-th customer,

and

$p_j^q = \lim_{t\to\infty} P(X^q(t) = j)$,
$\pi_j^q = \lim_{n\to\infty} P(X_n^q = j)$,
$\pi_j^{*q} = \lim_{n\to\infty} P(X_n^{*q} = j)$,
$\hat{\pi}_j^q = \lim_{n\to\infty} P(\hat{X}_n^q = j)$,
$F^q(x) = \lim_{n\to\infty} P(W_n^q \le x)$,
$L^q = \lim_{t\to\infty} E(X^q(t))$,
$W^q = \lim_{n\to\infty} E(W_n^q)$.

With this introduction we are ready to apply the theory of DTMCs and CTMCs to queueing systems. In particular we shall study queueing systems in which $\{X(t), t \ge 0\}$ or $\{X_n, n \ge 0\}$ or $\{X_n^*, n \ge 0\}$ is a Markov chain. We shall also study queueing systems modeled by multi-dimensional Markov chains. There is an extremely large and growing literature on queueing theory, and this chapter is by no means an exhaustive treatment of even the Markovian queueing systems. Readers are encouraged to refer to one of the several excellent books that are devoted entirely to queueing theory.

It is obvious that $\{X(t), t \ge 0\}$, $\{X_n, n \ge 0\}$, $\{X_n^*, n \ge 0\}$, and $\{\hat{X}_n, n \ge 0\}$ are related processes. The exact relationship between these processes is studied in the next section.
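The quantities defined above are easiest to internalize by watching them accumulate along a sample path. The following short Python sketch (our illustration, not part of the text) simulates a single-server FCFS queue with Poisson arrivals, exponential service times, and a finite capacity $K$, and tabulates the empirical versions of $\hat{\pi}_j$, $\pi^*_j$, $\pi_j$ and $p_j$; the rates, the capacity, and the function name simulate are arbitrary choices made here for illustration.

    # Minimal discrete-event sketch (assumed parameters, not from the text).
    import random, collections

    def simulate(lam=1.0, mu=1.5, K=5, horizon=200000, seed=1):
        random.seed(seed)
        t, X = 0.0, 0                         # current time, number in system
        next_arr = random.expovariate(lam)    # next arrival epoch
        next_dep = float('inf')               # next departure epoch
        seen_by_arrival = collections.Counter()    # samples of X^hat_n
        seen_by_entering = collections.Counter()   # samples of X^*_n
        seen_by_departure = collections.Counter()  # samples of X_n
        time_in_state = collections.Counter()      # for p_j (time averages)
        while t < horizon:
            t_next = min(next_arr, next_dep)
            time_in_state[X] += t_next - t
            t = t_next
            if next_arr <= next_dep:               # an arrival occurs
                seen_by_arrival[X] += 1
                if X < K:                          # the customer enters
                    seen_by_entering[X] += 1
                    X += 1
                    if X == 1:
                        next_dep = t + random.expovariate(mu)
                next_arr = t + random.expovariate(lam)
            else:                                  # a departure occurs
                X -= 1
                seen_by_departure[X] += 1
                next_dep = t + random.expovariate(mu) if X > 0 else float('inf')
        norm = lambda c: {j: v / sum(c.values()) for j, v in sorted(c.items())}
        return (norm(seen_by_arrival), norm(seen_by_entering),
                norm(seen_by_departure), norm(time_in_state))

    hat_pi, pi_star, pi, p = simulate()
    print("pi_hat :", hat_pi)    # arriving customers
    print("pi_star:", pi_star)   # entering customers
    print("pi     :", pi)        # departing customers
    print("p      :", p)         # time-stationary distribution

Running such a simulation for long horizons already suggests the relationships proved in the next section: the entering and departing distributions agree, and the arriving distribution agrees with the time averages.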
7.2 Properties of General Queueing Systems

In this section we shall study several important properties of general queueing systems. They are discussed in Theorems 7.100, 7.101, 7.102 and 7.104. In Theorem 7.101 we show that under mild conditions on the sample paths of $\{X(t), t \ge 0\}$, the limiting distributions of $X_n$ and $X_n^*$, if they exist, are identical. Thus, in steady state, the state of the system as seen by an entering customer is identical to the one seen by a departing customer. In Theorems 7.102 and 7.103 we prove that, if the arrival process is Poisson, and certain other mild conditions are satisfied, the limiting distributions of $X(t)$ and $\hat{X}_n$ (if they exist) are identical. Thus in steady state the state of the system as seen by an arriving customer is the same as the state of the system at an arbitrary point of time. This property is popularly known as PASTA: Poisson Arrivals See Time Averages. The last result (Theorem 7.104) is called Little's Law, and it relates the limiting averages $L$ and $W$ defined in the last section. All these results are very useful in practice, but their proofs are rather technical. We suggest that the reader first read the statements of the theorems, and understand their implications, before reading the proofs.

7.2.1 Relationship between $\hat{\pi}_j$ and $\pi^*_j$

As pointed out in the last section, we make a distinction between the arriving customers and the entering customers. One can think of the arriving customers as the potential customers and the entering customers as the actual customers. A potential customer becomes an actual customer when he decides to enter. The relationship between the state of the system as seen by an arrival (potential customer) and as seen by an entering customer depends on the decision rule used by the arriving customer to actually enter the system. To make this more precise, let $I_n = 1$ if the $n$-th arriving customer enters, and 0 otherwise. Suppose the following limits exist:
$$\lim_{n\to\infty} P(I_n = 1) = \alpha \quad (7.1)$$
and
$$\lim_{n\to\infty} P(I_n = 1 \mid \hat{X}_n = j) = \alpha_j, \quad j \ge 0. \quad (7.2)$$
Note that $\alpha$ is the long run fraction of the arriving customers that enter the system. The next theorem gives a relationship between the state of the system as seen by an arrival and by an entering customer.

Theorem 7.100 (Arriving and Entering Customers). Suppose $\alpha > 0$, and one of the two limiting distributions $\{\hat{\pi}_j, j \ge 0\}$ and $\{\pi^*_j, j \ge 0\}$ exists. Then the other limiting distribution exists, and the two are related to each other by
$$\pi^*_j = \frac{\alpha_j \hat{\pi}_j}{\alpha}, \quad j \ge 0. \quad (7.3)$$

Proof: Define
$$N(n) = \sum_{i=1}^n I_i, \quad n \ge 1.$$
Thus $N(n)$ is the number of customers who join the system from among the first $n$ arrivals. This implies that
$$P(\hat{X}_n = j \mid I_n = 1) = P(X^*_{N(n)} = j).$$
The assumption $\alpha > 0$ implies that $N(n) \to \infty$ as $n \to \infty$ with probability 1. Now
$$P(\hat{X}_n = j) = P(\hat{X}_n = j, I_n = 1) + P(\hat{X}_n = j, I_n = 0) = P(X^*_{N(n)} = j)P(I_n = 1) + P(I_n = 0 \mid \hat{X}_n = j)P(\hat{X}_n = j).$$
Rearranging this yields
$$P(I_n = 1 \mid \hat{X}_n = j)P(\hat{X}_n = j) = P(X^*_{N(n)} = j)P(I_n = 1).$$
Letting $n \to \infty$ on both sides, and using Equations 7.1 and 7.2, we get
$$\alpha_j \hat{\pi}_j = \pi^*_j \alpha$$
if either $P(\hat{X}_n = j)$ or $P(X^*_n = j)$ has a limit as $n \to \infty$. The theorem follows from this.

Note that if we further know that $\sum_{j=0}^\infty \hat{\pi}_j = 1$, we can evaluate $\alpha$ as
$$\alpha = \sum_{i=0}^\infty \alpha_i \hat{\pi}_i.$$
Hence Equation 7.3 can be written as
$$\pi^*_j = \frac{\alpha_j \hat{\pi}_j}{\sum_{i=0}^\infty \alpha_i \hat{\pi}_i}, \quad j \ge 0.$$
Finally, if every arriving customer enters, we have $\alpha_j = 1$ for all $j \ge 0$ and $\alpha = 1$. Hence the above equation reduces to $\pi^*_j = \hat{\pi}_j$ for all $j \ge 0$, as expected.

Example 7.1 (M/M/1/K System). Consider an M/M/1/K system. An arriving customer enters the system if he finds fewer than $K$ customers ahead of him, else he leaves. Thus we have
$$P(I_n = 1 \mid \hat{X}_n = j) = \begin{cases} 1 & \text{if } 0 \le j < K, \\ 0 & \text{if } j \ge K. \end{cases}$$
From Equation 7.2,
$$\alpha_j = \begin{cases} 1 & \text{if } 0 \le j < K, \\ 0 & \text{if } j \ge K. \end{cases}$$
Now suppose the limiting distribution $\{\hat{\pi}_j, 0 \le j \le K\}$ exists. Then clearly we must have $\sum_{j=0}^K \hat{\pi}_j = 1$. Then the limiting distribution $\{\pi^*_j, 0 \le j \le K-1\}$ exists and we must have $\sum_{j=0}^{K-1} \pi^*_j = 1$. Hence we get
$$\pi^*_j = \frac{\hat{\pi}_j}{\sum_{i=0}^{K-1}\hat{\pi}_i} = \frac{\hat{\pi}_j}{1 - \hat{\pi}_K}, \quad 0 \le j \le K-1.$$

7.2.2 Relationship between $\pi^*_j$ and $\pi_j$

The next theorem gives the relationship between the state of the system as seen by an entering customer and a departing customer.

Theorem 7.101 (Entering and Departing Customers). Suppose the customers enter and depart a queueing system one at a time, and one of the two limiting distributions $\{\pi^*_j, j \ge 0\}$ and $\{\pi_j, j \ge 0\}$ exists. Then the other limiting distribution exists, and
$$\pi^*_j = \pi_j, \quad j \ge 0. \quad (7.4)$$

Proof: We follow the proof as given in Cooper [1981]. Suppose that there are $i$ customers in the system at time 0. We shall show that
$$\{X_{n+i} \le j\} = \{X^*_{n+j+1} \le j\}, \quad j \ge 0. \quad (7.5)$$
First suppose $X_{n+i} = k \le j$. This implies that there are exactly $k+n$ entries before the $(n+i)$th departure. Thus there can be only departures between the $(n+i)$th departure and the $(n+k+1)$st entry. Hence $X^*_{n+k+1} \le k$. Since exactly $j-k$ entries take place between the $(n+k+1)$st and the $(n+j+1)$st entries, using $k \le j$ we see that
$$\{X_{n+i} \le j\} \subseteq \{X^*_{n+j+1} \le j\}, \quad j \ge 0.$$
To go the other way, suppose $X^*_{n+j+1} = k \le j$. This implies that there are exactly $n+i+j-k$ departures before the $(n+j+1)$st entry. Thus there are no entries between the $(n+i+j-k)$th departure and the $(n+j+1)$st entry. Hence $X_{n+i+j-k} \le k$. Since exactly $j-k$ departures take place between the $(n+i)$th and the $(n+i+j-k)$th departures, using $k \le j$ we get $X_{n+i} \le j$. Hence
$$\{X^*_{n+j+1} \le j\} \subseteq \{X_{n+i} \le j\}, \quad j \ge 0.$$
This proves the equivalence in Equation 7.5. Hence we have
$$P(X_{n+i} \le j) = P(X^*_{n+j+1} \le j), \quad j \ge 0.$$
Letting $n \to \infty$, and assuming one of the two limits exists, the theorem follows.

Two comments are in order at this point. First, the above theorem is a sample path result and does not require any probabilistic assumptions. As long as the limits exist, they are equal. Of course, we need probabilistic assumptions to assert that the limits exist. Second, the above theorem can be applied even in the presence of batch arrivals and departures, as long as we sequence the entries (or departures) in the batch and observe the system after every entry and every departure in the batch. Thus, suppose $n$ customers have entered so far, and a new batch of size 2 enters when there are $i$ customers in the system. Then we treat this as two single entries occurring one after another, so that the system state goes from $i$ to $i+1$ and then to $i+2$.

Example 7.2 (Entering and Departing Customers). The $\{X(t), t \ge 0\}$ processes in the queueing systems M/M/1, M/M/s, M/G/1, G/M/1, etc., satisfy the hypothesis of Theorem 7.101. Since every arrival enters the system, we can combine the result of Theorem 7.100 with that of Theorem 7.101 to get
$$\pi_j = \pi^*_j = \hat{\pi}_j, \quad j \ge 0,$$
if any one of the three limiting distributions exists. At this point we do not know how to prove that they exist.

7.2.3 Relationship between $\hat{\pi}_j$ and $p_j$

Now we discuss the relationship between the limiting distribution of the state of the system as seen by an arrival and that of the state of the system at an arbitrary time point. This will lead us to an important property called PASTA: Poisson Arrivals See Time Averages. Roughly speaking, we shall show that the limiting probability that an arriving customer sees the system in state $j$ is the same as the limiting probability that the system is in state $j$, if the arrival process is Poisson. One can think of $p_j$ as the time average (that is, the occupancy distribution): the long run fraction of the time that the system spends in state $j$.
Similarly, we can interpret $\hat{\pi}_j$ as the long run fraction of the arriving customers that see the system in state $j$. PASTA says that, if the arrival process is Poisson, these two averages are identical. We shall give a proof of this under the restrictive setting where the stochastic process $\{X(t), t \ge 0\}$ describing the system state is a CTMC on $\{0, 1, 2, \ldots\}$. However, PASTA is a very general result that applies even when the process is not Markovian. For example, it applies to the queue length process in an M/G/1 queue, even if that process is not a CTMC. However, its general proof is rather technical and will not be presented here. We refer the reader to Wolff [1989] or Heyman and Sobel [1982] for the general proof.

Let $X(t)$ be the state of the queueing system at time $t$. Let $N(t)$ be the number of customers who arrive at this system up to time $t$, and assume that $\{N(t), t \ge 0\}$ is a PP($\lambda$), and $S_n$ is the time of arrival of the $n$th customer. Assume that $\{X(t), S_n < t < S_{n+1}\}$ is a CTMC with state-space $S$ and rate matrix $G = [g_{ij}]$, $n \ge 0$. When the $n$th arrival occurs at time $S_n$, it causes an instantaneous transition in the $X$ process from $i$ to $j$ with probability $r_{ij}$, regardless of the past history up to time $S_n$. That is,
$$P(X(S_n+) = j \mid X(S_n) = i, (X(u), N(u)), 0 \le u < S_n) = r_{ij}, \quad i, j \ge 0.$$
Now the $n$th arrival sees the system in state $\hat{X}_n = X(S_n)$. Hence we have, assuming the limits exist,
$$\hat{\pi}_j = \lim_{n\to\infty} P(\hat{X}_n = j) = \lim_{n\to\infty} P(X(S_n) = j), \quad j \ge 0, \quad (7.6)$$
and
$$p_j = \lim_{t\to\infty} P(X(t) = j), \quad j \ge 0. \quad (7.7)$$
The next theorem gives the main result.

Theorem 7.102 (PASTA). If the limits in Equations 7.6 and 7.7 exist,
$$\hat{\pi}_j = p_j, \quad j \ge 0.$$

The proof proceeds via several lemmas, and is completely algebraic. We provide the following intuition to strengthen the feel for the result. Figure ?? shows a sample path of $\{X(t), t \ge 0\}$ for a three-state system interacting with a PP($\lambda$) $\{N(t), t \ge 0\}$. In the figure, the $T_i$'s are the intervals of time (open on the left and closed on the right) when the system is in state 3. The events in the PP $\{N(t), t \ge 0\}$ that see the system in state 3 are numbered consecutively from 1 to 12. Figure ?? shows the $T_i$'s spliced together, essentially deleting all the intervals of time when the system is not in state 3. Thus we count the Poisson events that trigger the transitions out of state 3 to states 1 and 2, but not those that trigger a transition into state 3 from states 1 and 2. We also count all the Poisson events that do not generate any transitions. Now, the times between consecutive events in the spliced figure are iid exp($\lambda$), due to the model of the interaction that we have postulated. Hence the process of events in the spliced figure is a PP($\lambda$).

Following the notation of Section 6.4, let $V_j(t)$ be the amount of time the system spends in state $j$ over $(0, t]$. Hence the expected number of Poisson events up to time $t$ that see the system in state 3 is $\lambda E(V_3(t))$. By the same argument, the expected number of Poisson events up to time $t$ that see the system in state $j$ is $\lambda E(V_j(t))$. The expected number of events up to time $t$ in a PP($\lambda$) is $\lambda t$. Hence the fraction of the Poisson events that see the system in state $j$ is
$$\frac{\lambda E(V_j(t))}{\lambda t} = \frac{E(V_j(t))}{t}.$$
If the $\{X(t), t \ge 0\}$ process is a CTMC, the above fraction, as $t \to \infty$, goes to $p_j$, the long run fraction of the time the CTMC spends in state $j$. Hence the limiting probability (if the limits exist) that the system is in state $j$ just before an event in the PP is the same as the long run probability that the system is in state $j$.
The hard part is to prove that the limits exist. We now continue with the proof of Theorem 7.102. First we study the structure of the process $\{X(t), t \ge 0\}$.

Lemma 7.1 $\{X(t), t \ge 0\}$ is a CTMC on $S$ with rate matrix $Q$ given by
$$Q = G + \lambda(R - I), \quad (7.8)$$
where $R = [r_{ij}]$ and $I$ is the identity matrix.

Proof: That $\{X(t), t \ge 0\}$ is a CTMC is a consequence of how the system interacts with the Poisson process, and of the system behavior between the Poisson events. The rate at which it moves to state $j$ from state $i \ne j$ is given by
$$q_{ij} = \lambda r_{ij} + g_{ij}.$$
Hence we have
$$q_{ii} = g_{ii} - \lambda(1 - r_{ii}).$$
The above two relations imply Equation 7.8.

The next lemma describes the probabilistic structure of the $\{\hat{X}_n, n \ge 0\}$ process.

Lemma 7.2 $\{\hat{X}_n, n \ge 0\}$ is a DTMC on $S$ with transition probability matrix $P = [p_{ij}]$ given by
$$P = RB,$$
where $R = [r_{ij}]$ and $B = [b_{ij}]$ with
$$b_{ij} = P(X(S_{n+1}) = j \mid X(S_n+) = i), \quad i, j \in S.$$

Proof: That $\{\hat{X}_n, n \ge 0\}$ is a DTMC is clear. We have
$$p_{ij} = P(\hat{X}_{n+1} = j \mid \hat{X}_n = i) = P(X(S_{n+1}) = j \mid X(S_n) = i) = \sum_{k \in S} P(X(S_{n+1}) = j \mid X(S_n) = i, X(S_n+) = k)\, P(X(S_n+) = k \mid X(S_n) = i) = \sum_{k \in S} r_{ik} b_{kj},$$
which yields $P = RB$ in matrix form.

The next lemma relates the $B$, $R$ and $G$ matrices in an algebraic fashion.

Lemma 7.3 The matrix $B$ satisfies
$$(\lambda I - G)B = \lambda I. \quad (7.9)$$

Proof: Let $\{Y(t), t \ge 0\}$ be a CTMC with generator matrix $G$, and let $a_{ij}(t) = P(Y(t) = j \mid Y(0) = i)$. Let $a^*_{ij}(s)$ be the Laplace transform of $a_{ij}(\cdot)$. From Equation 6.25 on page 214 we see that $A^*(s) = [a^*_{ij}(s)]$ satisfies
$$sA^*(s) - I = GA^*(s).$$
Since $\{X(t), S_n \le t < S_{n+1}\}$ is a CTMC with generator $G$, and $S_{n+1} - S_n \sim \exp(\lambda)$, we get
$$b_{ij} = P(X(S_{n+1}) = j \mid X(S_n+) = i) = \int_0^\infty \lambda e^{-\lambda t} a_{ij}(t)\,dt = \lambda a^*_{ij}(\lambda).$$
Thus
$$B = \lambda A^*(\lambda) = I + GA^*(\lambda) = I + GB/\lambda,$$
which yields Equation 7.9.

With these three lemmas we can now give the proof of Theorem 7.102.

Proof of Theorem 7.102: Suppose $\{X(t), t \ge 0\}$ is an irreducible positive recurrent CTMC with limiting distribution $p = [p_j, j \in S]$. Then, from Theorems 4.34 and 6.90, we get
$$pQ = 0, \quad \sum_{j \in S} p_j = 1.$$
This yields
$$0 = pQB = p(G + \lambda(R - I))B \ \text{(from Lemma 7.1)} = p((G - \lambda I)B + \lambda RB) = p(-\lambda I + \lambda P) \ \text{(from Lemmas 7.3 and 7.2)} = -\lambda(p - pP).$$
Hence we have
$$p = pP, \quad \sum_{j \in S} p_j = 1.$$
Thus $p$ is a stationary distribution of the DTMC $\{\hat{X}_n, n \ge 0\}$. However, since $\{X(t), t \ge 0\}$ is an irreducible CTMC, $\{\hat{X}_n, n \ge 0\}$ is irreducible and aperiodic. Hence it has a unique limiting distribution $\hat{\pi}$. Hence we must have $p = \hat{\pi}$, as desired.

We illustrate with three examples.

Example 7.3 (M/M/1/1 Queue). Verify Theorem 7.102 directly for the M/M/1/1 queue.

Let $X(t)$ be the number of customers at time $t$ in an M/M/1/1 queue with arrival rate $\lambda$ and service rate $\mu$. Let $N(t)$ be the number of arrivals over $(0, t]$. Then $\{N(t), t \ge 0\}$ is a PP($\lambda$). Let $S_n$ be the $n$th arrival epoch. We see that $\{X(t), S_n \le t < S_{n+1}\}$ is a CTMC on $\{0, 1\}$ with generator matrix
$$G = \begin{bmatrix} 0 & 0 \\ \mu & -\mu \end{bmatrix}.$$
If the system is empty at an arrival instant, the arrival enters, else the arrival leaves. Thus the $R$ matrix is given by
$$R = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}.$$
From Lemma 7.1 we see that $\{X(t), t \ge 0\}$ is a CTMC with generator matrix
$$Q = G + \lambda(R - I) = \begin{bmatrix} -\lambda & \lambda \\ \mu & -\mu \end{bmatrix}.$$
This is as expected. Next we compute the $B$ matrix. We have
$$b_{10} = P(X(S_{n+1}) = 0 \mid X(S_n+) = 1) = P(\text{next departure occurs before next arrival}) = \frac{\mu}{\lambda+\mu}.$$
Similar calculations yield
$$B = \begin{bmatrix} 1 & 0 \\ \frac{\mu}{\lambda+\mu} & \frac{\lambda}{\lambda+\mu} \end{bmatrix}.$$
Using Lemma 7.2 we see that $\{\hat{X}_n, n \ge 0\}$ is a DTMC on $\{0, 1\}$ with transition probability matrix
$$P = RB = \begin{bmatrix} \frac{\mu}{\lambda+\mu} & \frac{\lambda}{\lambda+\mu} \\ \frac{\mu}{\lambda+\mu} & \frac{\lambda}{\lambda+\mu} \end{bmatrix}.$$
Hence its limiting distribution is given by
$$\hat{\pi}_0 = \frac{\mu}{\lambda+\mu}, \quad \hat{\pi}_1 = \frac{\lambda}{\lambda+\mu}.$$
Using the results of Example 6.34 on page 242 we see that the limiting distribution of the CTMC $\{X(t), t \ge 0\}$ is given by
$$p_0 = \frac{\mu}{\lambda+\mu}, \quad p_1 = \frac{\lambda}{\lambda+\mu}.$$
Thus Theorem 7.102 is verified.

Example 7.4 (PASTA for M/M/1 and M/M/s Queues). Let $X(t)$ be the number of customers at time $t$ in an M/M/1 queue with arrival rate $\lambda$ and service rate $\mu > \lambda$. Let $N(t)$ be the number of arrivals over $(0, t]$. Then $\{N(t), t \ge 0\}$ is a PP($\lambda$). We see that the $X$ and $N$ processes satisfy the assumptions described in this section. Hence we can apply PASTA (Theorem 7.102) to get
$$\hat{\pi}_j = p_j, \quad j \ge 0.$$
Furthermore, the $X$ process satisfies the conditions of Theorem 7.100. Thus
$$\pi^*_j = \hat{\pi}_j, \quad j \ge 0.$$
Finally, every arriving customer enters the system; hence, using Theorem 7.101,
$$\pi_j = \pi^*_j, \quad j \ge 0.$$
From Example 6.36 we see that this queue is stable, and using Equation 6.70 and the above equations, we get
$$\pi_j = \pi^*_j = \hat{\pi}_j = p_j = (1-\rho)\rho^j, \quad j \ge 0, \quad (7.10)$$
where $\rho = \lambda/\mu < 1$. A similar analysis for the M/M/s queue shows that
$$\pi_j = \pi^*_j = \hat{\pi}_j = p_j, \quad j \ge 0.$$

Example 7.5 (M/M/1/K System). Let $X(t)$ be the number of customers at time $t$ in an M/M/1/K queue with arrival rate $\lambda$ and service rate $\mu$. Let $N(t)$ be the number of arrivals over $(0, t]$. An arriving customer enters if and only if the number in the system is less than $K$ when he arrives at the system. Then $\{N(t), t \ge 0\}$ is a PP($\lambda$). We see that the $X$ and $N$ processes satisfy the assumptions described in this section. Hence we can apply PASTA (Theorem 7.102) to get
$$\hat{\pi}_j = p_j, \quad 0 \le j \le K.$$
Furthermore, the $X$ process satisfies the conditions of Theorem 7.101. Thus
$$\pi_j = \pi^*_j, \quad 0 \le j \le K-1.$$
Finally, from Example 7.1 we get
$$\pi^*_j = \frac{\hat{\pi}_j}{1 - \hat{\pi}_K} = \frac{p_j}{1 - p_K}, \quad 0 \le j \le K-1.$$
Thus the arriving customers see the system in steady state, but not the entering customers. This is because the arriving customers form a PP, but the entering customers do not. We shall compute the limiting distribution $\{p_j, 0 \le j \le K\}$ for this system in Section 7.3.2.

We have proved PASTA in Theorem 7.102 for an $\{X(t), t \ge 0\}$ process that interacts with a PP in a specific way, and behaves like a CTMC between the events of the PP. However, PASTA is a very general property. We state the general result here, but omit the proof. In almost all applications the version of PASTA given here is sufficient, since almost all the processes we study in this book can be turned into CTMCs by using phase-type distributions, as we shall see in Section 7.6.

Let $X(t)$ be the state of a system at time $t$, and let $\{N(t), t \ge 0\}$ be a PP that may depend on the $\{X(t), t \ge 0\}$ process. However, we assume the following:

Lack of Anticipation Property: $\{N(t+s) - N(s), t \ge 0\}$ and $\{X(u), 0 \le u \le s\}$ are independent.

Note that $\{N(t+s) - N(s), t \ge 0\}$ is independent of $\{N(u), 0 \le u \le s\}$ due to the independence of increments property of a PP. Thus the lack of anticipation property says that the future arrivals after time $s$ are independent of the system history up to time $s$. Now let $B$ be a given set of states, let $V_B(t)$ be the time spent by the system in the set $B$ over $(0, t]$, and let $A_B(t)$ be the number of arrivals over $(0, t]$ that see the system in the set $B$:
$$A_B(t) = \sum_{n=1}^{N(t)} 1_{\{X(S_n) \in B\}}, \quad t \ge 0,$$
where $S_n$ is the time of the $n$th arrival. Now suppose the sample paths of $\{X(t), t \ge 0\}$ are almost surely piecewise continuous and have a finite number of jumps in finite intervals of time. Then the following theorem gives the almost sure version of PASTA.

Theorem 7.103 (PASTA). $\frac{V_B(t)}{t}$ converges almost surely if and only if $\frac{A_B(t)}{N(t)}$ converges almost surely, and both have the same limit.

Proof: See Wolff (1989), or Heyman and Sobel (1982).
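As a quick numerical sanity check of Example 7.3 and of Lemmas 7.1-7.3 (our own illustration, not part of the text), the following Python snippet builds $G$, $R$, $B$ and $P = RB$ for the M/M/1/1 queue and confirms that the stationary distribution of the DTMC $\{\hat{X}_n\}$ coincides with that of the CTMC $\{X(t)\}$; the rates are arbitrary.

    # Numerical check of PASTA for the M/M/1/1 queue (assumed rates).
    import numpy as np

    lam, mu = 2.0, 3.0
    G = np.array([[0.0, 0.0],       # between arrivals only departures occur
                  [ mu,  -mu]])
    R = np.array([[0.0, 1.0],       # an arrival finding 0 enters; finding 1 is lost
                  [0.0, 1.0]])
    I = np.eye(2)

    Q = G + lam * (R - I)                  # Lemma 7.1: generator of {X(t)}
    B = lam * np.linalg.inv(lam * I - G)   # Lemma 7.3: (lam*I - G) B = lam*I
    P = R @ B                              # Lemma 7.2: transition matrix of {X^hat_n}

    def stationary_dtmc(P):
        # solve pi P = pi with sum(pi) = 1
        A = np.vstack([P.T - np.eye(len(P)), np.ones(len(P))])
        b = np.zeros(len(P) + 1); b[-1] = 1.0
        return np.linalg.lstsq(A, b, rcond=None)[0]

    def stationary_ctmc(Q):
        # solve p Q = 0 with sum(p) = 1
        A = np.vstack([Q.T, np.ones(len(Q))])
        b = np.zeros(len(Q) + 1); b[-1] = 1.0
        return np.linalg.lstsq(A, b, rcond=None)[0]

    print(stationary_dtmc(P))   # [mu/(lam+mu), lam/(lam+mu)] = [0.6, 0.4]
    print(stationary_ctmc(Q))   # the same vector, as PASTA asserts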
Note that PASTA, which equates the limit of $\frac{V_B(t)}{t}$ to that of $\frac{A_B(t)}{N(t)}$, holds whenever the convergence holds. We can use the tools developed in this book to show that
$$\lim_{t\to\infty} \frac{V_B(t)}{t} = \lim_{t\to\infty} P(X(t) \in B)$$
and
$$\lim_{t\to\infty} \frac{A_B(t)}{N(t)} = \lim_{n\to\infty} P(X(S_n) \in B).$$
Thus, according to PASTA, whenever the limits exist,
$$\lim_{t\to\infty} P(X(t) \in B) = \lim_{n\to\infty} P(X(S_n) \in B),$$
or, when $B = \{j\}$,
$$p_j = \hat{\pi}_j.$$

Example 7.6 (PASTA and the M/G/1 and G/M/1 Queues). Theorems 7.101 and 7.103 imply that for the M/G/1 queue
$$\pi_j = \pi^*_j = \hat{\pi}_j = p_j, \quad j \ge 0.$$
On the other hand, for the G/M/1 queue we have
$$\pi_j = \pi^*_j = \hat{\pi}_j, \quad j \ge 0.$$
However, PASTA does not apply unless the inter-arrival times are exponential. Thus, in general, for a G/M/1 queue we do not have $\hat{\pi}_j = p_j$.

7.2.4 Little's Law

Recall the definitions of $X(t)$, $W_n$, $L$ and $W$ on page 280. In this subsection we study another important theorem of queueing theory: Little's Law. It states that $L$, the average number of customers in the system, $\lambda$, the average arrival rate of customers to the system, and $W$, the average time a customer spends in the system (all in steady state), are related to each other by the following simple relation:
$$L = \lambda W. \quad (7.11)$$
The above relation is in fact a sample path property, rather than a probabilistic one. We need probabilistic assumptions to assert that the above averages exist. An intuitive explanation of Little's Law is as follows. Suppose each arriving customer pays the system one dollar per unit time the customer spends in the system. The long run rate at which the system earns revenue can be computed in two equivalent ways. First, the $n$th customer pays the system $W_n$ dollars, the time spent by that customer in the system. Hence the average amount paid by a customer in steady state is $W$ dollars. Since the average arrival rate is $\lambda$, the system earns $\lambda W$ dollars per unit time in the long run. Second, since each customer in the system pays at rate one dollar per unit time, the system earns
Then we have Wn X(t) = Dn Sn , n 1, t 0, = n=1 1Sn t<Dn , A(t) = sup{n 0 : Sn t}, t 0, D(t) = sup{n 0 : Dn t}, X(t) = A(t) D(t). Now the existence of the limits implies that (we omit the details of the proof of this assertion, and refer the readers to El-Taha and Stidham (19??)) t lim X(t) = 0. t (7.15) 294 This implies that D(t) A(t) X(t) = = . t t The above relations also imply that D(t) t A(t) QUEUEING MODELS Wn n=1 0 X(u)du n=1 Wn . Dividing by t on both sides we get 1 t D(t) Wn n=1 1 t t X(u)du 0 1 Wn , t n=1 A(t) A(t) which can be written as D(t) 1 t D(t) D(t) Wn n=1 1 t t X(u)du 0 A(t) 1 Wn . t A(t) n=1 A(t) (7.16) Now, assume that A(t) as t . Then D(t) as t , and we have 1 t D(t) lim D(t) Wn = lim n=1 1 Wn = W. t A(t) n=1 Now let t in Equation 7.16. We get W L W. If A(t) remains bounded as t , we necessarily have L = = W = 0. The theorem follows. There are many results that prove Littles Law under less restrictive conditions. We refer the readers to Wolff (1989) and Heyman and Sobel (1982) and El-Taha and Stidham (19??). The condition in Equation 7.15 usually holds since we concentrate on stable queueing systems, where the queue length has non-defective limiting distribution. Note that as long as the service discipline is independent of the service times, the {X(t), t 0} process does not depend on the service discipline. Hence L is also independent of the service discipline. On the other hand, the {Wn , n 0} process does depend upon the service discipline. However, Littles Law implies that the average wait is independent of the service discipline, since the quantities L and are independent of the service discipline. Example 7.7 The M/G/ Queue. Consider the innite server queue of Example 5.17 on page 174. In the queueing nomenclature introduce in Section 7.1 this is an M/G/ queue. Verify Littles law for this system. From Example 5.17, we see that in steady state the number of customers in this BIRTH AND DEATH QUEUES 295 system is a P( ) random variable, where is the arrival rate of customers, and is the mean service time. Thus L = . Since the system has innite number of servers, Wn , the time spent by the n customer in the system equals his service time. hence W = E(Wn ) = . Thus Equation 7.11 is satised. 7.3 Birth and Death Queues Many queueing systems where customers arrive one at a time, form a single queue, and get served one at a time, can be modeled by birth and death processes. See Example 6.10 on page 201 for the denition, and Example 242 on page 6.35 for the limiting behavior. We shall use these results in the rest of this section. 7.3.1 M/M/1 Queue Consider an M/M/1 queue. Such a queue has a single-server, innite capacity waiting room, where customers arrive according to a PP() and request iid exp() service times. Let X(t) be the number of customers in the system at time t. We have seen in Example 6.11 on page 201 that {X(t), t 0} is a birth and death process with birth rates n = , n 0 and death rates n = , n 1. We saw in Example 6.36 on page 243 that this queue is stable if = / < 1. The parameter is called the trafc intensity of the queue, and it can be interpreted as the expected number of arrivals during one service time. The system serves one customer during one service times, and gets new customers during this time on the average. Thus if < 1, the system can serve more customers than it gets, so it should be stable. The stability condition can also be written as < . 
In this form it says that the rate at which customers arrive is less than the rate at which they can be served, and hence the system should be stable. Note that the system is perfectly balanced when = , but it is unstable, since it has no spare service capacity to handle random variation in arrivals and services. Example 6.36 shows that the limiting distribution of X(t) in a stable M/M/1 queue is given by pj = (1 )j , From Example 7.4 we get pij = j = j = pj = (1 )j , j 0. j 0. 296 We have L= j=0 QUEUEING MODELS . 1 jpj = (7.17) Thus as 1, L . This is a manifestation of increasing congestion as 1. Next we shall compute F (), the limiting distribution of Wn , assuming that the service discipline is FCFS. We have F (x) = = n lim P(Wn x) P(Wn x|Xn = j)P(Xn = j) j=0 j P(Wn x|Xn = j). j=0 n lim = n lim From Equation 7.10 we see that j = pj = (1 )j , j 0. Now suppose an entering customer sees j customers ahead in the system. Due to the FCFS discipline, these customers will be served before his service starts. The remaining service time of the customer in service (if any) is an exp() random variable. Hence this customers time in the system is the sum of j + 1 iid exp() random variables. Thus P(Wn x|Xn = j) = 1 r=0 ex (x)r . r! Substituting, we get (1 )j P(Wn x|Xn = j) 1 j=0 r=0 F (x) = ex (x)r r! , which, after some algebra, reduces to F (x) = 1 e()x , x 0. 1 . Thus the steady state waiting time is an exp( ) random variable. Thus we have W = E(Waiting Time in Steady State) = Using equation 7.17 we see that L = W . Thus Littles Law is veried for the M/M/1 queue. 7.3.2 M/M/1/K Queue In an M/M/1/K system, customers arrive according to a PP() and receive iid exp() service times from a single server. If an arriving customer nds K persons BIRTH AND DEATH QUEUES 297 in the system, he leaves immediately without service. Let X(t) be the number of customers in the system at time t. One can show that {X(t), t 0} is a birth and death process on {0, 1, 2, , K} with birth rates n = , 0 n < K and death rates n = , 1 n K. Note that K = 0 implies that the number in the system will not increase from K to K + 1. We can use the results of Example 242 on page 6.35 to compute the the limiting distribution of the X(t). Substituting in Equation 6.68 we get j = j , 0 j K, where = / is the trafc intensity. We have K j = j=0 1K 1 K +1 if = 1, if = 1. This is always nite, hence the queue is always stable. Substituting in Equation 6.69 we get 1 j if = 1, 1K pj = (7.18) 1 if = 1. K+1 Finally, from Example 7.5 we have j = pj , 0 j K. and pj , 0 j K 1. 1 pK The mean number of customers in the system in steady state is given by j = j = L= j=0 jpj = 1 K KpK . 1 1 K+1 (7.19) 7.3.3 M/M/s Queue Consider an M/M/s queue. Such a queue has a s identical servers, innite capacity waiting room, where customers arrive according to a PP() and request iid exp() service times. The customers form a single line and the customer at the head of the line is served by the rst available server. If more thatn one server is idle when a customer arrives, he goes to any one of the available servers. Let X(t) be the number of customers in the system at time t. One can show that {X(t), t 0} is a birth and death process with birth rates n = , and death rates n = min(n, s) n 0. n0 298 QUEUEING MODELS We can use the results of Example 242 on page 6.35 to compute the the limiting distribution of the X(t). Using the trafc intensity parameter = in Equation 6.68 we get n = We have s min(n, s)min(n,s) n , min(n, s)! s1 n=0 n 0. n = n=0 n + ss s s! 1 if < 1, if 1. 
Hence the stability condition for the M/M/s queue is < 1. This condition says that the queue is stable if the arrival rate is less than the maximum service rate s, which makes intuitive sense. From now on we assume that the queue is stable. Using Equation 6.69 we get 1 s1 p0 = n=0 n = n=0 n + ss s s! 1 1 and min(n, s)min(n,s) n p0 , n 1. min(n, s)! The limiting probability that all servers are busy is given by pn = n p0 = pn = n=s ps ss s p0 = . s! 1 1 The mean number of customers in the system can be shown to be L= + ps . (1 )2 (7.20) 7.3.4 M/M/ Queue The M/M/ queue is the limit of the M/M/s queue as s . It arises as a model of self-service queues and was modeled in Example 6.12 as a birth and death process with birth parameters n = , and death parameters n = n n 0. n0 BIRTH AND DEATH QUEUES This queue is stable as long as < . Its limiting distribution is given in Example 6.37 as = pj = e j , j! j 0. 299 As in the case of the M/M/1 queue, we have pij = j = j = pj = (1 )j , j 0. The transient analysis of the M/M/ queue was done in Example 5.17, from which we see that if X(0) = 0, then X(t) is a Poisson random variable with mean (1 Et ). From this we get E(X(t)|X(0) = 0) = (1 et ). In comparison, the transient analysis of the M/M/1 and M/M/s queues is quite messy. We refer the readers to one of the books on queueing theory for details: Saaty (1961) or Gross and Harris (1974). 7.3.5 Queues with Finite Populations In Example 6.6 on page 6.35 we modeled a workshop with two machines and one repair person. Here we consider a more general workshop with N machines and s repair-persons. The life times of the machines are iid exp() random variables, and the repair times are iid exp() random variables, and are independent of the life times. The machines are as good as new after repairs. The machines are repaired in the order in which they fail. Let X(t) be the number of working machines at time t. One can show that {X(t), t 0} is a birth and death process on {0, 1, 2, , N } with birth parameters n = (N n), and death parameters n = min(n, s) 0 n N. Since this is a nite state queue, it is always stable. One can compute the limiting distribution using the results of Example 6.35 on page 242. 0nN 1 7.3.6 M/M/1 Queue with Balking and Reneging Consider an M/M/1 queue of Section 7.3.1. Now suppose an arriving customer who sees n customers in the system ahead of him joins the system with probability n . This is called the balking behavior. Once he joins the system, he displays reneging behavior as follows: He has a patience time that is an exp() random variable. If his 300 QUEUEING MODELS service does not begin before his patience time expires, he leaves the system without getting served (i.e., he reneges); else he completes his service and then departs. All customer patience times are independent of each other. Let X(t) be the number of customers in the system at time t. We can show that {X(t), t 0} is a birth and death process with birth rates n = n , and death rates n = + (n 1), n 1. (Why do we get (n 1) and not n in the death parameters?) If > 0, such a queue is always stable and its steady state distribution can be computed by using the results of Example 6.35 on page 242. The above examples illustrate the usefulness of the birth and death processes in modeling queues. Further examples are given in Modeling Exercises 7.1 - 7.5. n0 7.4 Open Queueing Networks The queueing models in Section 7.3 are for single-station queues, i.e., there is a single place where the service is rendered. 
Customers arrive at this facility, which may have more than one server, get served once, and then depart. In this and the next section we consider more complicated queueing systems called queueing networks. A typical queueing network consists of several service stations, or nodes. Customers form queues at each of these queueing stations. After completing service at a station the customer may depart from the system or join a queue at some other service station. In open queueing networks, customers arrive at the nodes from outside the system, visit the nodes in some order, and then depart. Thus the total number of customers in an open queueing network varies over time. In closed queueing networks there are no external arrivals or departures, thus the total number of customers in a closed queueing network remains constant. We shall study open queueing networks in this section and closed queueing networks in the next. Open queueing networks arise in all kinds of situations: hospitals, computers, telecommunication networks, assembly lines, and supply chains, to name a few. Consider a simple model of patient ow in a hospital. Patients arrive from outside at the admitting ofce or the emergency ward. Patients from the admitting ofce go to various clinics, which we have lumped into one node for modeling ease. In the clinics the patients are diagnosed and routed to the intensive care unit, or are dismissed, or are given reappointment for a follow-up visit. Patients with reappointments go home and return to the admitting ofce at appropriate times. Patients from the emergency ward OPEN QUEUEING NETWORKS 301 are either dismissed after proper care, or are sent to the intensive care unit. From the intensive care unit they are either discharged or given reappointments for follow-up visits. This simple patient ow model can be set up as a queueing network with ve nodes as shown in Figure ??. The arrows interconnecting the nodes show the patient routing pattern. In this gure the customers can arrive at the system at two nodes: admitting and emergency. Customers can depart the system from three nodes: clinics, emergency, and intensive care. Note that a customer can visit a node a random number of times before departing the system, i.e., a queueing network can have cycles. It is extremely difcult to analyze a queueing network in all its generality. In this section we shall concentrate on a special class of queueing networks called the Jackson Networks. Jackson introduced this class in a seminal paper in 1957 and it has become a standard queueing network model since then. A queueing network is called a Jackson network if it satises the following assumptions: A1. It has N service stations (nodes). A2. There are si servers at node i, 1 si , 1 i N . Service times of customers at node i are iid exp(i ) random variables. They are independent of service times of customers in other nodes. A3. There is an innite waiting room at each node. A4. External arrivals at node i form a PP(i ). All the arrival processes are independent of each other and the service times. A5. After completing service at node i, the customer departs the system with probability ri , or joins the queue at node j with probability rij , independent of the number of customers at any node in the system. rii can be positive. We have N ri + j=1 rij = 1, 1 i N. (7.21) A6. The routing matrix R = [rij ] is such that I R is invertible. All the above assumptions are crucial to the analysis of Jackson Networks. 
Some assumptions can be relaxed, but not the others. For example, in practice we would like to consider nite capacity waiting rooms (assumption 3) or state-dependent routing (assumption 5). However such networks are very difcult to analyze. On the other hand certain types of state-dependent service and arrival rates can be handled fairly easily. See Subsections 7.4.1 and 7.4.2. Now let us study a Jackson network described above. Let Xi (t) be the number of 302 customers at node i at time t, (1 i N, t 0), and let QUEUEING MODELS X(t) = [X1 (t), X2 (t), , XN (t)] be the state of the queueing network at time t. The state-space of the {X(t), t 0} process is S = {0, 1, 2, }N . To understand the transitions in this process we need some notation. Let ei be an N -vector with a 1 in the ith coordinate and 0 in all other. Suppose the system is in state x = [x1 , x2 , , xN ] S. If an external arrival takes place at node i, the system state changes to x + ei . If a customer completes service at node i (this can happen only if xi > 0) and departs the system, the system state changes from x to x ei , and if the customer moves to state j, the system state changes from x to x ei + ej . It can be seen that {X(t), t 0} is a multi-dimensional CTMC with the following transition rates: q(x, x + ei ) q(x, x ei ) q(x, x ei + ej ) Hence we get N N = i , = min(xi , si )i ri , = min(xi , si )i rij , i = j. q(x, x) = q(x) = i=1 i i=1 min(xi , si )i (1 rii ). Now let p(x) = = t lim P(X(t) = x) t lim P(X1 (t) = x1 , X2 (t) = x2 , , XN (t) = xN ) be the limiting distribution, assuming it exists. We now study node j in isolation. The input to node j consists of two parts: the external input that occurs at rate j , and the internal input originating from other nodes (including node j) in the network. Let aj be the total arrival rate (external + internal) to node j. In steady state (assuming it exists) the departure rate from node i must equal the total input rate ai to node i. A fraction rij of the departing customers goes to node j. The the internal input from node i to node j is ai rij . Thus in steady state we must have N aj = j + i=1 ai rij , 1 j N, (7.22) which can be written in matrix form as a(I R) = , OPEN QUEUEING NETWORKS 303 where a = [a1 , a2 , , aN ] and = [1 , 2 , , N ]. Now, from assumption 6, I R is invertible. Hence we have a = (I R)1 . (7.23) Note that the invertibility of I R implies that no customer stays in the network indenitely. Now consider an M/M/si queue with arrival rate ai and service rate i . From the results of Section 7.3.3 we see that such a queue is stable if i = ai /si i < 1, and i (n), the steady state probability that there are n customers in the system is by i (n) = where si 1 min(n, si )min(n,si ) n i i (0), min(n, si )! nn n ssi si i + i n! i si ! 1 i n 0, 1 i (0) = n=0 . With these preliminaries we are ready ti state the main result about the Jackson networks below. Theorem 7.105 Open Jackson Networks. The CTMC {X(t), t 0} is positive recurrent if and only if ai < si i , 1 i N, where a = [a1 , a2 , , aN ] is given by Equation 7.23. When it is positive recurrent, its steady state distribution is given by N p(x) = i=1 i (xi ). (7.24) Proof: The balance equations for {X(t), t 0} are N N q(x)p(x) = i=1 N i p(x ei ) + i=1 ri i min(xi + 1, si )p(x + ei ) x S. + j=1 i:i=j rij i min(xi + 1, si )p(x + ei ej ), Assume that p(x) = 0 if x has any negative coordinates. 
Substitute p(x) of Equation 7.24 in the above equation to get N N N q(x) k=1 k (xk ) = i=1 N i k=1 k (xk ) i (xi 1) i (xi ) N + i=1 ri i min(xi + 1, si ) k=1 k (xk ) i (xi + 1) i (xi ) 304 N N QUEUEING MODELS + j=1 i:i=j rij i min(xi + 1, si ) k=1 k (xk ) i (xi + 1) j (xj 1) . i (xi ) j (xj ) Canceling N k=1 k (xk ) from both sides and using the identity i min(xi , si ) i (xi 1) = i (xi ) ai the above equality reduces to N q(x) = i=1 i N i min(xi , si ) + ri ai ai i=1 (ai rij ) j min(xj , sj ) aj N + j=1 i:i=j Using Equation 7.22 and simplifying the above equation we get N N i = i=1 i=1 ri ai . However, the above equation holds in steady state since the left hand side is the rate at which the customers enter the network, and the right hand side is the rate at which they depart the network. We can also derive this equation from Equations 7.22 and 7.21. Thus the solution in Equation 7.24 satises the balance equation. Since i (0) > 0 if and only if ai < si i , the condition of positive recurrence follows. Also, the CTMC is irreducible. Hence there is a unique limiting distribution. Hence the theorem follows. The form of the distribution in Equation 7.24 is called the product form, for the obvious reason. Theorem 7.105 syas that, in steady state, the queue lengths at the N nodes are independent random variables. Furthermore, node i behaves as if it is an M/M/si queue with arrival rate ai and service rate i . The phrase behaves as if is important, since the process {Xi (t), t 0} is not an birth and death process of an M/M/si queue. For example, in general, the total arrival process to node i is not a PP(ai ). It just so happens that the steady distribution of {Xi (t), t 0} is the same as that of an M/M/si queue. We illustrate the result of Theorem 7.105 with a few examples. Example 7.8 Single Queue with Feedback. The simplest queueing network is a single station queue with feedback as shown in Figure ??. Customers arrive from outside at this service station according to a PP) and request iid exp() services. The service station has s identical servers. When a customer completes service, he leaves the system with probability . With probability 1 he rejoins the queue and behaves as a new arrival. OPEN QUEUEING NETWORKS 305 This is a Jackson network with N = 1, s1 = s, 1 = , 1 = , r1 = , r11 = 1 . From Equation 7.22, we get a1 = + (1 )a1 , which yields a1 = /. Thus the queue is stable if < 1. The steady state distribution of X(t) = X1 (t) is given by = pn = lim P(X(t) = n) = t min(n, s)min(n,s) n p0 , min(n, s)! 1 n 0, where nn n ss s p0 = + n! s! 1 n=0 s1 . Example 7.9 Tandem Queue. Consider N single server queues in tandem as shown in Figure ??. External customers arrive at node 1 according to a PP(). The service times at the ith node are iid exp(i ) random variables. Customers completing service at node i join the queue at node i + 1, 1 i N 1. Customers leave the system after completing service at node N . This is a Jackson network with N nodes and 1 = , i = 0, for 2 i N , ri,i+1 = 1 for 1 i N 1, ri = 0 for 1 i N 1, and rN = 1. The trafc Equations 7.22 yield ai = , 1 N. Hence the tandem queueing system is stable if < i for all 1 i N , i.e., < min{1 , 2 , , N }. Thus the slowest server determines the trafc handling capacity of the tandem network. Assuming stability, the limiting distribution is given by N t lim P(Xi (t) = xi , 1 i N ) = i=1 i xi 1 i . Example 7.10 Patient Flow. Consider the queueing network model of patient ow as shown in Figure ??. 
Suppose external patients arrive at the admitting ward at a rate of 4 per hour and at the emergency ward at a rate of 1/hr. The admissions desk is manages by six secretaries, and each secretary processes an admission in ve minutes on the average. The clinic is served by k doctors, (here k his to be decided on), and the average consultation with a doctor takes 15 minutes. Generally, one out of every four patients going through the clinic is asked to return for another check up in two weeks (336 hours). One out of every ten is sent to the intensive care unit from the clinic. 306 QUEUEING MODELS The rest are dismissed after consultations. Patients arriving at the emergency room requires on the average one hour of consultation time with a doctor. The emergency room is staffed by m doctors, where m is to be decided on. One out of two patients in the emergency ward goes home after treatment, whereas the other is admitted to the intensive care unit. The average stay in the intensive care unit is four days, and there are n beds available, where n is to decided on. From the intensive care unit, 20% of the customers go home, and the other 80% are given reappointments for follow up in two weeks. Analyze this system assuming that the assumptions of Jackson networks are satised. The parameters of the N = 5 node Jackson network (using time units of hours) are 1 = 4, 2 = 1, 3 = 0, 4 = 0, 5 = 0, s1 = 6, s2 = m, s3 = k, s4 = n, s5 = , 1 = 12, 2 = 1, 3 = 4, 4 = 1/96, 5 = 1/336, r1 = 0, r2 = 0.5, r3 = 0.65, r4 = 0.2, r5 = 0. The routing matrix is given by R= Equation 7.22 are given by a1 a2 a3 a4 a5 Solving the above equations we get a1 = 6.567, a2 = 1, a3 = 6.567, a4 = 1.157, a5 = 2.567. We use Theorem 7.105 to establish the stability of the network. Note that a1 < s1 1 and a5 < s5 5 . We must also have 1 = a2 < s2 2 = m, 6.567 = a3 < s3 3 = 4k, 1.157 = a4 < s4 4 = n/96. These are satised if we have m > 1, k > 1.642, n > 111.043. = = = = = 4 + a5 1 a1 .5a2 + .1a3 .25a3 + .8a4 . 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.5 0.1 0 0 0 0 0.25 0.80 0 . OPEN QUEUEING NETWORKS 307 Thus the hospital must have at least two doctors in the emergency room, at least two in the clinics, and have at least 112 beds in the intensive care unit. So let us assume the hospital uses two doctors each in the emergency room and the clinics, and has 120 beds. With these parameters, the steady state analysis of the queueing network can be done by treating (1) the admissions queue as an M/M/6 with arrival rate 6.567 per hour, and service rate of 12 per hour; (2) the emergency ward queue as an M/M/2 with arrival rate 1 per hour, and service rate of 1 per hour; (2) the intensive care queue as an M/M/120 with arrival rate 1.157 per hour, and service rate of 1/96 per hour; and (5) the home queue as an M/M/ with arrival rate 2.567 per hour, and service rate of 1/336 per hour. Furthermore these ve queues are independent of each other in steady state. Next we discuss two important generalizations of the Jackson networks. 7.4.1 State-Dependent Service In the Jackson network model we had assumed that the service rate at node i when there are n customers at that node is given by min(si , n)i . We dene Jackson networks with state-dependent service by replacing assumption A2 by A2 as follows: A2 . The service rate at node i when there are n customers at that node is given by i (n), with i (0) = 0 and i (n) > 0 for n 0, 1 i N . Note that the service rate at node i is not allowed to depend on the state of node j = i. 
Now dene n i (0) = 1, i (n) = j=1 ai i (j) , n 1, 1 i N (7.25) where ai is the total arrival rate to node i as given by Equation 7.23. Jackson networks with state-dependent service also admit a product form solution as shown in the next Theorem. Theorem 7.106 Jackson Networks with State-dependent Service. A Jackson networks with state-dependent service is stable if and only if ci = n=0 i (n) < for all 1 i N. If the network is stable, the limiting state distribution is given by p(x) = i (xi ) , ci i=1 N x S. Proof: Follows along the same lines as the proof of Theorem 7.105. 308 QUEUEING MODELS Thus, in steady-state, the queues at various nodes in a Jackson network with statedependent service are independent. 7.4.2 State-Dependent Arrivals and Service It is also possible to further generalize the model of Subsection 7.4.1 by allowing the instantaneous arrival rate to the network to depend on the total number of customers in the network. Specically, we replace assumption A4 by A4 as follows: A4 . External customers arrive at the network at rate (n) when the total number of customers in the network is n. An arriving customer joins node i with probability ui , where ui = 1. i=1 The above assumption implies that the instantaneous external arrival rate to node i is ui (n) if the total number of customers in the network is n. To keep the {X(t), t 0} process irreducible, we assume that there is a K such that (n) > 0 for 0 n < K, and (n) = 0 for n K. We call the Jackson networks with assumptions A2 and A5 replaced by A2 and A5 Jackson networks with state-dependent arrivals and service. We shall see that such Jackson networks with state-dependent arrival and service rates continue to have a kind of product form limiting distribution. However, the queue at various nodes are not independent any more. The results are given in the next theorem. First we need the following notation. Let [ai , 1 i N ] be the unique solution to N aj = uj + i=1 ai pij , 1 j N. Let i (n) be dened as in Equation 7.25 using the above [ai , 1 i N ]. Also, for x = [x1 , x2 , , xN ] S, let N |x| = i=1 xi . Thus if the state of the network is X(t), the total number of customers in it at time t is |X(t)|. Theorem 7.107 Jackson Networks with State-dependent Arrivals and Service. CLOSED QUEUEING NETWORKS 309 The limiting state distribution in a Jackson network with state-dependent arrivals and service is given by N |x| p(x) = c i=1 i (xi ) j=1 (j), x S, where c is the normalizing constant given by N |x| 1 (j) . c= xS i=1 i (xi ) j=1 The network is stable if and only if c > 0. Proof: Follows along the same lines as that of Theorem 7.105. Computation of the constant c is the hard part. There is a large literature on product form queueing networks and it is more or less completely understood now as what enables a network to have product form solution. See Kelly (1979) and Walrand (1988). 7.5 Closed Queueing Networks In this section we consider the closed queueing networks. In these networks there are no external arrivals to the network, and there are no departures from the network. Thus the total number of customers in the network is constant. Closed queueing networks have been used to study population dynamics, multi-programmed computer systems, telecommunication networks with window ow control, etc. We start with the denition. A queueing network is called a closed Jackson network if it satises the following assumptions: B1. It has N service stations (nodes) and a total of K customers. B2. 
The service rate at node i when there are n customers at that node is given by i (n), with i (0) = 0 and i (n) > 0 for 1 n K, 1 i N . B3. After completing service at node i, the customer joins the queue at node j with probability rij , independent of the number of customers at any node in the system. rii can be positive. B4. The routing matrix R = [rij ] is a transition probability matrix of an irreducible DTMC. Now let us study a closed Jackson network described above. Let Xi (t) be the number of customers at node i at time t, (1 i N, t 0), and let X(t) = [X1 (t), X2 (t), , XN (t)] 310 QUEUEING MODELS be the state of the queueing network at time t. As in the case of open Jackson networks, we see that {X(t), t 0} is a CTMC on state-space N S = {x = [x1 , x2 , , xN ] : xi 0, i=1 xi = K}. with transition rates given by q(x, x ei + ej ) = i (xi )rij , Hence we get N i = j, x S. q(x, x) = q(x) = i=1 i (xi )(1 rii ), x S. Since the CTMC has nite state-space and is irreducible, it is positive recurrent. Let p(x) = = t t lim P(X(t) = x) lim P(X1 (t) = x1 , X2 (t) = x2 , , XN (t) = xN ) be the limiting distribution. We need the following notation before we give the result about p(x). Let = [1 , 2 , , N ] be the limiting distribution of the DTMC with transition matrix R. Since R is assumed to be irreducible, is the unique solution to N = R, i=1 i = 1. (7.26) Next, dene n i (0) = 1, i (n) = j=1 i i (j) , 1 n K, 1 i N. (7.27) Theorem 7.108 Closed Jackson Networks. The limiting distribution of the CTMC {X(t), t 0} is given by N p(x) = GN (K) i=1 i (xi ), (7.28) where the normalizing constant GN (K) is chosen so that p(x) = 1. xS Proof: Follows by verifying that the solution in Equation 7.28 satises the balance equation q(x)p(x) = ys:y=x p(y)q(y, x). The verication proceeds along the same lines as that in the proof of Theorem 7.105. CLOSED QUEUEING NETWORKS 311 Thus the closed Jackson network has a product form limiting distribution. The hard part is the evaluation of G(K), the normalizing constant. The computation is difcult since the size of the state-space grows exponentially in N and K: it has N +K1 elements. A recursive method of computing G(K) for closed Jackson netK works of single-server queues is described in the next example. Example 7.11 Tandem Closed Network. Consider a closed Jackson network of N single server nodes as shown in Figure ??. The service rate at node i is i (n) = i for n 1. The routing probabilities are ri,i+1 = 1 for 1 i N 1, and rN,1 = 1. Thus the solution to Equation 7.26 is given by i = We have i (n) = n , i where i = From Theorem 7.108 we get N 1 , 1 i N. N n 0, i 1 = . i N i p(x) = GN (K) i=1 xi , i x S. We leave it to the readers to verify that the generating function of GN (K) is given by N 1 K N (z) = . G GN (K)z = 1 i z i=1 K=0 Now we see that G0 (z) = 1 and GN (z)(1 N z) = GN 1 (z), which can be written as GN (z) = N z GN (z) + GN 1 (z). From this we can derive the following recursion GN (K) = GN 1 (K) + N GN (K 1), with boundary conditions G0 (0) = 1, GN (0) = 1, N 1, G0 (K) = 0, K 1. We can use this recursion to compute GN (K) in O(N K) steps. Example 7.12 Multi-programming Systems. Consider the following model of a multi-programming computer system. It consists of a central processing unit (CPU) N 1, 312 QUEUEING MODELS (node 10, a printer (node 2), and a disk drive (node 3). A program starts in the CPU. When the computing part is done, it goes to the printer with probability or the disc drive with probability 1 . 
From the printer the program terminates with probability or goes back to the CPU for further computing with probability 1 . After completing the operation at the disc drive the program returns to the CPU with probability 1. Suppose the time required at the CPU phase is exp(1 ), the printer stage is exp(2 ) and the disc drive stage is exp(3 ). Suppose these times are independent. When a program departs the system from the printer queue, a new program is instantaneously admitted to the CPU queue, so that the total number of programs in the system remains constant, say K. The parameter K is called the degree of multiprogramming. This system can be modeled by a closed Jackson network as shown in Figure ??. The parameters of this network are i (n) = i , i = 1, 2, 3; n 1, 1 0 . 0 0 R= 1 0 1 0 Thus the solution to Equation 7.26 is given by 1 = 0.5, 2 = 0.5, 3 = 0.5(1 ). Using 1 = we get i (n) = n . i Thus p(x1 , x2 , x3 ) = G3 (K)x1 x2 x3 , 1 2 3 x1 + x2 + x3 = K. The constant G3 (K) can be computed by using the method of Example 7.11. The throughput of the system is dened as the rate at which jobs get completed in steady state. In our system, if the system is in state (x1 , x2 , x3 ) with x2 > 0, jobs get completed at rate 2 . Hence we have Throughput = 2 xS:x2 >0 1 1 , 2 = , 3 = , 21 22 23 p(x1 , x2 , x3 ). The closed queueing systems have been found to be highly useful models for computing systems and there is a large literature in this area. See Gelenbe and Pujolle (1987) and Saur and Chandy (1981). SINGLE SERVER QUEUES 7.6 Single Server Queues 313 So far we have studied queueing systems that are described by CTMCs. In this section we study single server queues where either the service times or the interarrival times are non-exponential, making the queue length process non-Markovian. 7.6.1 M/G/1 Queue We study an M/G/1 queue where customers arrive according to a PP() and form a single queue in an innite waiting room in front of a single server and demand iid service times with common cdf G(), with mean and variance 2 . Let X(t) be the number of customers in the system at time t. The stochastic process {X(t), t 0} is a CTMC if and only if the service times are exponential random variables. Thus in general we cannot use the theory of CTMCs to study pj = lim P(X(t) = j), t j 0, in an M/G/1 queue. Recall the denitions of Xn , Xn , Xn , j , j , and j from Section 7.1. Since every arriving customer joins the system, the {X(t), t 0} process jumps up and down by one at a time, and the arrival process is Poisson, we can use Theorems 7.102, 7.101 and 7.103 to get j = j = j = pj , j 0. Thus we can compute the limiting distribution of {X(t), t 0} by studying the limiting distribution of {Xn , n 0}. This is possible to do, since the next theorem shows that {Xn , n 0} is a DTMC. Theorem 7.109 Embedded DTMC in an M/G/1 Queue. {Xn , n 0} is an irreducible and aperiodic DTMC on S = {0, 1, 2, } and one-step transition probability matrix 0 1 2 3 0 1 2 3 0 0 1 2 P = 0 (7.29) 0 0 1 . 0 0 0 0 . . . . .. . . . . . . . . . where i = 0 et (t)i dG(t), i! i 0. (7.30) Proof: Let An be the number of arrivals to the queueing system during the nth service time. Since the service times are iid random variables with common distribution 314 QUEUEING MODELS G(), and the arrival process is PP(), we see that {An , n 1} is a sequence of iid random variables with common pmf P(An = i) = P(i arrivals during a service time) = 0 P(i arrivals during a service time of duration t)dG(t) et 0 = = i . (t)i dG(t) i! 
Now, if Xn > 0, the (n+1)st service time starts immediately after the nth departure, and during that service time An+1 customers join the system. Hence after the (n + 1)st departure there are Xn + An+1 1 customers are left in the system. On the other hand, if Xn = 0, the (n + 1)st service time starts immediately after the (n + 1)st arrival, and during that service time An+1 customers join the system. Hence after the (n + 1)st departure there are An+1 customers are left in the system. Combining these two observations, we get Xn+1 = An+1 Xn 1 + An+1 if Xn = 0 if Xn > 0. This is identical to Equation 2.9 derived in Example 2.16 on page 23 if we dene Yn = An+1 . The result then follows from the results in Example 2.16. The DTMC is irreducible and aperiodic since i > 0 for all i 0. The next theorem gives the result about the limiting distribution of {Xn , n 0}. Theorem 7.110 Limiting Distribution of an M/G/1 Queue. The DTMC {Xn , n 0} is positive recurrent if and only if = < 1. If it is positive recurrent, its limiting distribution has the generating function given by (1 z)G( z) j z j = (1 ) (z) = , (7.31) G( z) z j=0 where G(s) = 0 est dG(t). Proof: Since {Xn , n 0} is the DTMC studied in Example 2.16 on page 23, we can use the results about its limiting distribution from Example 4.23 on page 122. From there we see that the DTMC is positive recurrent if and only if kk < 1. k=0 SINGLE SERVER QUEUES Substituting from Equation 7.30 we get 315 kk k=0 = k=0 k 0 t et (t)k dG(t) k! k (t)k k! dG(t) = 0 e k=0 = 0 tdG(t) = = . Thus the DTMC is positive recurrent if and only if < 1. From Equation 4.43 (using in place of ) we get (z)(1 z) (z) = (1 ) (7.32) (z) z where (z) = k=0 k z k . Substituting for k from Equation 7.30 in the above equation (z) = k=0 zk 0 et (t)k dG(t) k! (t)k k! dG(t) = 0 et k=0 zk = 0 et ezt dG(t) e(1z)t ezt dG(t) = G( z). 0 = Substituting in Equation 7.32 we get Equation 7.31. This proves the theorem. One immediate consequence of Equation 7.31 is that the probability that the server is idle in steady state can be computed as p0 = 0 = (0) = 1 . (7.33) Also, since pj = j for all j 0, (z) in Equation 7.31 is also the generating function of the limiting distribution of the {X(t), t 0} process. Using Equation 7.31 we can compute the expected number of customers in the system in steady state as given in the following theorem. Theorem 7.111 Expected Number in an M/G/1 Queue. The expected number in steady state in a stable M/G/1 queue is given by L=+ 1 2 2 1 1+ 2 2 , (7.34) 316 QUEUEING MODELS where and 2 are the mean and variance of the service time. Proof: We have L = = j=0 t lim E(X(t)) jpj jj = j=0 = d(z) ||z=1 dz The theorem follows after evaluating the last expression in straight forward fashion. This involves using LHopitals rule twice. The Equation 7.34 is called the Pollaczek-Khintchine formula. It is interesting to note that the rst moment of the queue length depends on the second moment (or equivalently, the variance) of the service time. This has an important implication. We can decrease the queue length by making the server more consistent, that is, by reducing the variability of the service times. Since the variance is zero for constant service times, it follows that among all service times with the same mean, the deterministic service time will minimize the expected number in the system in steady state! Example 7.13 The M/M/1 Queue. If the service time distribution is G(x) = 1 ex , x0 the M/G/1 queue reduces to M/M/1 queue with = 1/ and = = /. In this case we get G(s) = . 
s+ Substituting in Equation 7.31 we get (z) (1 z)G( z) G( z) z (1 z)/( + z) = (1 ) /( + z) z (1 z) = (1 ) (1 z)( z) 1 = . 1 z = (1 ) SINGLE SERVER QUEUES By expanding the last expression in a power series in z we get 317 (z) = j=0 j z j = j=0 pj z j = (1 ) j=0 j z j . Hence we get pj = (1 )j , j 0. This matches with the result in Example 6.36 on page 243, as expected. Example 7.14 The M/Ek /1 Queue. Suppose the service times are iid Erl(k, ). Then the M/G/1 queue reduces to M/Ek /1 queue. In this case we get = The queue is stable if k < 1. Assuming the queue is stable, the expected number in steady state can be computed by using Equation 7.34 as = = L=+ 2 k + 1 1 . 2 1 k k , 2 = k . 2 A large number of variations of the M/G/1 queue have been studied in literature. See Modeling Exercises 7.13 and 7.15. Next we study the waiting times (this includes time in service) in an M/G/1 queue assuming FCFS service discipline. Let Fn () be the cdf of Wn , the waiting time of the nth customer. Let Fn (s) = E(esWn ). The next theorem gives the Laplace Stieltjes Transform (LST) of the waiting time in steady state, dened as F (s) = lim Fn (s). n Theorem 7.112 Waiting Times in an M/G/1 Queue. The LST of the waiting time in steady state in a stable M/G/1 queue with FCFS service discipline is given by sG(s) F (s) = (1 ) . s G(s) (7.35) Proof: Let Mn be the number of arrivals during the n customers waiting time in the system. Since Xn is the number of customers left in the system after the nth departure, the assumption of FCFS service discipline implies that Xn = Mn . The Poisson assumption implies that (see the derivation of (z) in the proof of Theorem 7.110) E(z Mn ) = Fn ( z). 318 Hence QUEUEING MODELS (z) = lim E(z Xn ) = lim E(z Mn ) = lim Fn ( z) = F ( z). n n n Substituting z = s we get Equation 7.35. Equation 7.35 is also known as the Pollaczec-Khintchine formula. Using the deriva tives of F (s) at s = 0 we get W =+ s2 , 2(1 ) where s2 = 2 + 2 is the second moment of the service time. Using Equation 7.34 we can verify directly that Littles Law L = W holds for the M/G/1 queue. 7.6.2 G/M/1 Queue Now we study a G/M/1 queue where customers arrive one at a time and the interarrival times are iid random variables with common cdf G(), with G(0) = 0 and mean 1/. The arriving customers form a single queue in an innite waiting room in front of a single server and demand iid exp() service times. Let X(t) be the number of customers in the system at time t. The stochastic process {X(t), t 0} is a CTMC if and only if the service times are exponential random variables. Thus in general we cannot use the theory of CTMCs to study the limiting behavior of X(t). Recall the denitions of Xn , Xn , Xn , j , j , and j from Section 7.1. Since every arriving customer joins the system, and the {X(t), t 0} process jumps up and down by one at a time, we can use Theorems 7.102 and 7.101 to get j = j = j , j 0. However, unless the inter-arrival times are exponential, the arrival process is not a PP, and hence j = pj . The next theorem shows that {Xn , n 0} is a DTMC. Theorem 7.113 Embedded DTMC in an G/M/1 Queue. {Xn , n 0} is an irreducible and aperiodic DTMC on S = {0, 1, 2, } and one-step transition probability matrix 0 0 0 0 0 1 1 0 0 0 2 2 1 0 0 P = (7.36) . 3 3 2 1 0 . . . . . .. . . . . . . . . . . . where i = 0 et (t)i dG(t), i! i 0, (7.37) SINGLE SERVER QUEUES and bi = j=i+1 319 aj , i 0. 
Proof: Let Dn be the number of departures that can occur (assuming there are enough customers in the system) during the nth inter-arrival time. Since the interarrival times are iid random variables with common distribution G(), and the service times are iid exp(), we see that {Dn , n 1} is a sequence of iid random variables with common pmf P(Dn = i) = P(i possible departures during an inter-arrival time) (t)i et dG(t) = i! 0 = i . Now, the nth arrival sees Xn customers in the system. Hence there are Xn + 1 customers in the system after the nth customers enters. If Dn+1 < Xn + 1, the (n + 1) arrival will see Xn + 1 Dn+1 customers in the system, else there will be no customers in the system when the next arrival occurs. Hence we get Xn+1 = max{Xn + 1 Dn+1 , 0}. This is identical to Equation 2.11 derived in Example 2.17 on page 24 if we dene Yn = Dn+1 . The result then follows from the results in Example 2.17. The DTMC is irreducible and aperiodic since i > 0 for all i 0. The next theorem gives the result about the limiting distribution of {Xn , n 0}. Theorem 7.114 G/M/1 Queue at Arrival Times. The DTMC {Xn , n 0} is positive recurrent if and only if = / < 1. If it is positive recurrent, its limiting distribution is given by j = lim P(Xn = j) = (1 )j , n j0 (7.38) where is the unique solution in (0, 1) to = 0 e(1)t dG(t) = G((1 )). (7.39) Proof: Since {Xn , n 0} is the DTMC studied in Example 2.17 on page 24, we can use the results about its limiting distribution from Example 4.24 on page 123. From there we see that the DTMC is positive recurrent if and only if kk > 1. k=0 320 Substituting from Equation 7.37 we get QUEUEING MODELS kk = k=0 . Thus the DTMC is positive recurrent if and only if > 1, i.e., < 1. Let (z) = i=0 z i i . Following the derivation in the proof of Theorem 7.110, we get (z) = G( z). From Equation 4.45 (using in place of ) we get j = (1 )j , j 0, where is the unique solution in (0, 1) to = () = G( ). This proves the theorem. Example 7.15 The M/M/1 Queue. If the service time distribution is G(x) = 1 ex , x0 the G/M/1 queue reduces to M/M/1 queue. In this case we get G(s) = Substituting in Equation 7.39 we get = Solving for we get = , or = 1. If < 1, the = is the only solution in (0, 1). In this case Equation 7.38 reduces to j = (1 )j , j 0. Since the arrival process in this queue is Poisson, we have pj = j = j . Thus we have pj = (1 )j , j 0. This matches with the result in Example 6.36 on page 243, as expected. = The next theorem gives relates {pj , j 0} and {j , j 0}. . s+ . (1 ) + Theorem 7.115 Limiting Distribution of a G/M/1 Queue. For a G/M/1 queue RETRIAL QUEUE 321 with = / < 1 the limiting distributions {pj , j 0} and {j , j 0} are related as follows: p0 pj = 1 , = j1 , j 1. (7.40) (7.41) Proof: Postponed to Theorem 9.182 on page 435. In the next theorem we study the limiting distribution of waiting times (this includes time in service) in an G/M/1 queue assuming FCFS service discipline. Theorem 7.116 Waiting Times in an G/M/1 Queue. The limiting distribution of the waiting time in a stable G/M/1 queue with FCFS service discipline is given by F (x) = lim P(Wn x) = 1 e(1)x , n x 0. (7.42) Proof: The waiting time of a customer who sees j customers ahead of him is an Erlang(j + 1, ) random variable. Using this we get F (x) = n lim P(Wn x|Xn = j)P(Xn = j) j=0 = j=0 j P(Erl(j + 1, ) x) The rest of the proof follows along the same lines as in the case of the M/M/1 queue. 
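The key computational step in Theorems 7.114 through 7.116 is finding the root σ of Equation 7.39. The sketch below does this by fixed-point iteration for a hypothetical Erlang-2 inter-arrival distribution (the distribution and the rates are illustrative, not taken from the text) and then evaluates the quantities of Equations 7.38 and 7.40 through 7.42.

```python
# A sketch of the G/M/1 computations: find sigma solving sigma = G~(mu(1 - sigma))
# by fixed-point iteration.  The Erlang-2 inter-arrival distribution is an
# illustrative choice; lam and mu are hypothetical rates with rho = lam/mu < 1.
lam, mu = 1.0, 1.5

def G_lst(s, lam=lam):
    """LST of Erlang(2, 2*lam) inter-arrival times (mean 1/lam)."""
    return (2 * lam / (s + 2 * lam)) ** 2

sigma = 0.5                          # any starting point in (0, 1)
for _ in range(200):                 # iterate sigma <- G~(mu(1 - sigma))
    sigma = G_lst(mu * (1 - sigma))

rho = lam / mu
pi = lambda j: (1 - sigma) * sigma ** j                                   # Eq. 7.38
p = lambda j: (1 - rho) if j == 0 else rho * (1 - sigma) * sigma ** (j - 1)  # Eq. 7.40-7.41
W = 1 / (mu * (1 - sigma))                                                # from Eq. 7.42

print("sigma =", round(sigma, 4))
print("P(arrival sees empty system) =", round(pi(0), 4))
print("P(system empty in steady state) =", round(p(0), 4), "(= 1 - rho)")
print("W =", round(W, 4), " L = lam*W =", round(lam * W, 4))
```

The iteration converges monotonically to the unique root in (0, 1) whenever ρ < 1, so no safeguarding of the fixed-point step is needed.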
From Equation 7.42 we get W = Using Littles law we get , (1 ) which can be veried by computing L directly from the limiting distribution of X(t) given Theorem 7.115. L= 1 . (1 ) 7.7 Retrial Queue We have already seen the M/M/1/1 retrial queue in Example 6.15 on page 203. In this section we generalize it to M/G/1/1 queue. We describe the model below. Customers arrive from outside to a single server according to a PP() and require 322 QUEUEING MODELS iid service times with common distribution G() and mean . There is room only for the customer in service. Thus the capacity is 1, hence the M/G/1/1 nomenclature. If an arriving customer nds the server idle, he immediately enters service. Otherwise he joins the orbit, where he stays for an exp() amount of time (called the retrial time) independent of his past and the other customers. At the end of the retrial time he returns to the server, and behaves like a new customer. He persists in conducting retrials until he is served, after which he exits the system. A block diagram of this queueing system is shown in Figure ??. Let X(t) be the number of customers in the system (those in service + those in orbit) at time t. Note that {X(t), t 0} is not a CTMC. It has jumps of size +1 when a new customer arrives, and of size -1 when a customer completes service. Since every arriving customer enters the system (either service or the orbit), and the arrival process is Poisson, we have j = j = j = p j , j 0. Thus we can study the limiting behavior of the {X(t), t 0} by studying the {Xn , t 0} process at departure points, since, as the next Theorem shows, it is a DTMC. Theorem 7.117 Embedded DTMC in an M/G/1/1 Retrial Queue. {Xn , n 0} is an irreducible and aperiodic DTMC on S = {0, 1, 2, }. Proof: Let An be the number of arrivals to the queueing system during the nth service time. Since the service times are iid random variables with common distribution G(), and the arrival process is PP(), we see that {An , n 1} is a sequence of iid random variables. Now, immediately after a service completion, the server is idle. Hence Xn represents the number of customers in the orbit when the nth service completion occurs. Each of these customers will conduct a retrial after iid exp() times. Also, a new arrival will occur after an exp() amount of time. Hence the next service request will occur after an exp( + Xn ) amount of time. With probability Xn /( + Xn ) this request is from a customer from the orbit, and with probability /( + Xn ), it is from a new customer. The (n + 1)st service starts when this request arrives, during which An+1 new customers arrive and join the orbit. Hence the system dynamics is given by Xn+1 = Xn + An+1 with probability /( + Xn ) Xn + An+1 1 with probability Xn /( + Xn ). (7.43) Since An+1 is independent of history, the above recursion implies that {Xn , n 0} is a DTMC. Irreducibility and aperiodicity is obvious. The next theorem gives the generating function of the limiting distribution of {Xn , n 0}. Theorem 7.118 Limiting Distribution of an M/G/1/1 Retrial Queue. The DTMC RETRIAL QUEUE {Xn , n 0} with > 0 is positive recurrent if and only if = < 1. 323 If it is positive recurrent, its limiting distribution has the generating function given by (z) = j=0 j z j = (1 ) (1 z)G( z) exp z) z G( 1 z 1 G( u) du , G( u) u (7.44) where G(s) = 0 est dG(t). 
Proof: From Equation 7.43 we get E(z Xn+1 ) = E z Xn +An+1 = E(z An+1 ) E Now let n (z) = E(z Xn ) and n (z) = E Then n (z) + z (z) n = E = E z Xn + Xn + z E Xn z Xn 1 + Xn (7.46) (7.47) + Xn z Xn + Xn + E z Xn +An+1 1 +E Xn z Xn 1 + Xn z Xn + Xn . Xn + Xn . (7.45) ( + Xn )z Xn + Xn = E(z Xn ) = n (z). Also, from the proof of Theorem 7.110, we get E(z An ) = G( z), Using Equation 7.47 in the Equation 7.45 we get n+1 (z) + Now let (z) = lim n (z), and (z) = lim n (z). n n n 1. z n+1 (z) = G( z)(n (z) + n (z)). (7.48) Letting n in Equation 7.48 and rearranging, we get (z G( z)) (z) = (G( z) 1)(z) which can be easily integrated to obtain (z) = C exp z G( u) 1 du , u G( u) 324 where C is a constant of integration. By choosing C = C exp we get (z) = C exp 1 z QUEUEING MODELS 1 G( u) 1 du , u G( u) G( u) 1 du . u G( u) z (z). 1 z (7.49) Letting n in Equation 7.47 we get (z) = (z) + Using Equation 7.48 and 7.49 we get (z) = C (1 z)G( z) exp z) z G( 1 G( u) du . G( u) u The unknown constant C can be evaluated by using (1) = 1. This yields (after applying LHopitals rule) C = 1 . (7.50) Hence the theorem follows. One immediate consequence of Equation 7.44 is that the probability that the system is empty in steady state can be computed as p0 = 0 = (0) = (1 ) exp 1 0 1 G( u) du . G( u) u However, this is not the same as the server being idle, since the server can be idle even if the system is not empty. We can use Littles Law type argument to show that the server is idle in steady state with probability 1 , independent of ! But this simple fact cannot be deduced by using the embedded DTMC. Now, since pj = j for all j 0, (z) in Equation 7.44 is also the generating function of the limiting distribution of the {X(t), t 0} process. Using Equation 7.44 we can compute the expected number of customers in the system in steady state as given in the following theorem. Theorem 7.119 Expected Number in an M/G/1/1 Retrial Queue. The expected number in steady state in a stable M/G/1/1 retrial queue with > 0 is given by L=+ 1 2 2 1 1+ 2 2 + , 1 (7.51) where and 2 are the mean and variance of the service time. INFINITE SERVER QUEUE Proof: We have L = = j=0 t 325 lim E(X(t)) jpj = j=0 jj = d(z) |z=1 dz The theorem follows after evaluating the last expression in straight forward fashion. This involves using LHopitals rule twice. Note that L is a decreasing function of . In fact, as , L of the above theorem converges to the L of a standard M/G/1 as given Theorem 7.111. This makes intuitive sense, since in the limit as , every customer is always checking to see if the server is idle. Thus as soon as the service is complete a customer from the orbit (if it is not empty) enters service. However, the service discipline is in random order, rather than in FCFS fashion, although this does not affect the queue length process. As expected, the generating function of the retrial queue reduces to that of the M/G/1 queue as . 7.8 Innite Server Queue We have seen the M/M/s queue in Subsection 7.3.3 and the M/M/ queue in Subsection 7.3.4. Unfortunately, the M/G/s queue proves to be intractable. Surprisingly, M/G/ queue can be analyzed very easily. We present that analysis here. Consider an innite server queue where customers arrive according to a PP(), and request iid service times with common cdf G() and mean . Let X(t) be the number of customers in such a queue at time t. Suppose X(0) = 0. We have analyzed this process in Example 5.17 on page 174. 
Using the analysis there we see that X(t) is a Poisson random variable with mean m(t), where t m(t) = 0 (1 G(u)du. We have t lim m(t) = . Hence the limiting distribution of X(t) is P( ). Note that this limiting distribution holds even if X(0) > 0, since all the initial customers will eventually leave, and do not affect the newly arriving customers. We conclude this chapter with the remark that it is possible to analyze a G/M/s queue with an embedded DTMC chain. The G/M/ queue can be analyzed by the methods of renewal processes, to be developed in the next chapter. Note that the {X(t), t 0} processes studied in the last three sections are not CTMCs. What kind of processes are these? The search for the answer to this question will lead us into 326 QUEUEING MODELS renewal theory, regenerative processes, and Markov regenerative processes. These topics will be covered in the next two chapters. 7.9 Modeling Exercises 7.1 Two types of customers use a common single-server queue. Customers of type i arrive according to PP(i ) i = 1, 2. The service times are iid exp() for both types. If the total number of customers (of both types) in the system exceeds K (a given positive number), customers of type 1 do not enter the system, while customers of type two always join the system. Model this system by a birth and death process. 7.2 Customers arrive at a taxi stand according to a PP(). If a taxi is waiting at the taxi stand, the customer immediately hires it and leaves the taxi stand in the taxi. If there are no taxis available, the customer waits. There is essentially innite waiting room for the customers. Independently of the customers, taxis arrive at the taxi stand according to a PP(). If a taxi arriving at the taxi stand nds that no customer is waiting, it leaves immediately. Model this system as anM/M/1 queue, and specify its parameters. 7.3 Customers arrive at a bank according to a PP(). The service times are iid exp(). The bank follows the following policy: when there are fewer than four customers in the bank, only one teller is active; for four to nine customers, the bank uses two tellers,; and beyond nine customers there are three tellers. Model the number of customers in the bank as a birth and death process. 7.4 A machine produces items one at a time according to a PP(). These items are stored in a warehouse of innite capacity. Demands arise according to a PP(). If there is an item in the warehouse when a demand arises, an item is immediately removed to satisfy the demand. Any demand that occurs when the warehouse is empty is lost. Let X(t) be the number of items in the warehouse at time t. Model the {X(t), t 0} process by a birth and death process. 7.5 Consider a grocery store checkout queue. When the number of customers in the line is three or less, the checkout person does the pricing as well as bagging, taking exp(1 ) time. When there are three or more customers in the line, a bagger comes to help, and the service rate increases to 2 > 1 , i.e., the reduced service times are now iid exp(2 ). Assume that customers join the checkout line according to a PP(). Model the number of customers in the checkout line as a birth and death process. 7.6 Customers arrive according to a PP() at a single-server station and demand iid exp() service times. When a customer completes his service, he departs with probability , or rejoins the queue instantaneously with probability 1 , and behaves like a new customer. Model the number of customers in the system as a birth and death process. 
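Several of the modeling exercises above (7.1 through 7.6) ask for birth-and-death models. Once the birth rates λn and death rates μn are identified, the limiting distribution follows from the standard birth-and-death balance result of Section 7.3, namely pn = p0 ∏ λk−1/μk for k = 1, ..., n. The sketch below computes it generically; the rates, the truncation level, and the reading of the staffing threshold in Exercise 7.5 are all illustrative assumptions.

```python
# A generic sketch for the birth-and-death models of Modeling Exercises 7.1-7.6:
# p_n is proportional to prod_{k=1}^{n} lam(k-1)/mu(k).  The truncation level
# n_max is only a numerical device and must be large enough for the tail to be
# negligible in a stable model.

def bd_limiting_dist(lam, mu, n_max=2000):
    """Return the limiting probabilities p_0, ..., p_{n_max}."""
    weights = [1.0]
    for n in range(1, n_max + 1):
        weights.append(weights[-1] * lam(n - 1) / mu(n))
    total = sum(weights)
    return [w / total for w in weights]

# Example: the grocery-store queue of Modeling Exercise 7.5 with illustrative
# rates; the switch to the faster rate at n = 4 is one reading of the threshold.
lam_rate, mu1, mu2 = 3.0, 2.0, 4.0
p = bd_limiting_dist(lambda n: lam_rate,
                     lambda n: mu1 if n <= 3 else mu2)
L = sum(n * pn for n, pn in enumerate(p))
print(round(p[0], 4), round(L, 3))
```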
MODELING EXERCISES 327 7.7 Consider a service station with two distinct serves. Customers arrive according to a PP() and form a single queue. Service times at server i are exp(i ), i = 1, 2, with 1 > 2 . If both servers are free an incoming customer goes to server 1, else he goes to the available server. If both servers are busy he joins a single queue and goes to the rst available server. Model this system by a CTMC? Is it a birth and death process? 7.8 Consider an M/M/1 queue with arrival rate , service rate and the following operating policy. The server is turned off when the system is empty. It remains off until there are N customers in the system, at which time it starts serving them one by one until the system is empty. Such a policy is called an N -type control. Let X(t) be the number of customers in the system at time t. Is {X(t), t 0} a CTMC? Let Y (t) = 1 if the server is busy at time t, and 0 otherwise. Is {Y (t), t 0} a CTMC? Is {(X(t), Y (t)), t 0} a CTMC? If it is, show its transition rates or the rate diagram. 7.9 Consider a single server queue that serves customers from k independent sources. Customers from source i arrive according to a PP(i ) and need iid exp(i ) service times. They form a single queue and are served in an FCFS fashion. Let X(t) be the number of customers in the system at time t. Show that {X(t), t 0} is the queue-length process in an M/G/1 queue. Identify the service distribution G. 7.10 Consider a single server queue subject to break downs and repairs as follows: the worker stays functional for an exp() amount of time and then fails. The repair time is exp(). The successive up and down times are iid. However, the server is subject to failures only when it is serving a customer. Assume that the failure does not cause any loss of work. Thus if a customer service is interrupted by failure, the service simply resumes after the server is repaired. Let X(t) be the number of customers in this system at time t. Model {X(t), t 0} as an M/G/1 queue by identifying the correct service time distribution G. 7.11 Redo the problem Modeling Exercise 7.10 assuming that the server can fail even when it is not serving any customers. Is {X(t), t 0} the queue-length process of an M/G/1 queue? Explain. Let Xn be the number of customers in the system after the nth departure. Show that {Xn , n 0} is a DTMC and display its transition probability matrix. 7.12 Consider the {X(t), t 0} process described in Modeling Exercise 7.4 with the following modication: the machine produces items in a deterministic fashion at a rate of one item per unit time. Model {X(t), t 0} as the queue length process in a G/M/1 queue. 7.13 Consider an M/G/1 queue where the server goes on vacation if the system is empty upon service completion. If the system is empty upon return from the vacation, the server goes on another vacation; else he starts serving the customers in the system one by one. Successive vacation times are iid. Let Xn be the number of customers in the system after the nth customer departs. Show that {Xn , n 0} is a DTMC. 328 QUEUEING MODELS 7.14 A service station is staffed with two identical servers. Customers arrive according to a PP(). The service times are iid with common distribution exp() at either server. Consider the following two routing policies 1. Each customer is randomly assigned to one of the two servers with equal probability. 2. Customers are alternately assigned to the two servers. Once a customer as assigned to a server he stays in that line until served. 
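Several of the single-server systems in the exercises above (7.8 through 7.14) have waiting-time distributions that are awkward to obtain in closed form. A quick numerical check is often possible by simulation. The sketch below uses the Lindley recursion for the delay in queue of successive customers in a FCFS single-server queue; the recursion is standard but is not derived in this chapter, and the distributions and rates below are illustrative.

```python
# A simulation sketch for single-server FCFS queues, based on the standard
# Lindley recursion  Wq_{n+1} = max(Wq_n + S_n - A_{n+1}, 0),
# where Wq_n is the n-th customer's delay in queue, S_n its service time and
# A_{n+1} the next inter-arrival time.  All numerical values are illustrative.
import random

def simulate_mean_delay(interarrival, service, n_customers=200_000, seed=1):
    random.seed(seed)
    wq, total = 0.0, 0.0
    for _ in range(n_customers):
        total += wq
        wq = max(wq + service() - interarrival(), 0.0)
    return total / n_customers

lam, mu = 1.0, 1.25          # an M/M/1 instance with rho = 0.8, used as a sanity check
est = simulate_mean_delay(lambda: random.expovariate(lam),
                          lambda: random.expovariate(mu))
exact = lam / (mu * (mu - lam))   # Wq for M/M/1 (see Computational Exercise 7.3)
print(round(est, 3), round(exact, 3))
```

Replacing the two sampling functions is enough to explore the vacation, breakdown, and routing variants described in the exercises, at least empirically.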
Let Xi (t) be the number of customers in line for the ith server. Is {Xi (t), t 0} the queue-length process of an M/G/1 queue or an G/M/1 queue under the two routing schemes? Identify the parameters of the queues. 7.15 Consider the following variation of an M/G/1 queue: All customers have iid 2 service times with common cdf G, with mean G and variance G . However the customers who enter an empty system have a different service time cdf H with mean 2 H and variance H . Let X(t) be the number of customers at time t. Is {X(t), t 0} a CTMC? If yes, give its generator matrix. Let Xn be the number of customers in the system after the nth departure. Is {Xn , n 0} a DTMC? If yes, give its transition probability matrix. 7.16 Consider a communication node where packets arrive according to a PP(). The node is allowed to transmit packets only at times n = 0, 1, 2 , and transmission time of a packet is one unit of time. If a packet arrives at an empty system, it has to wait for the next transmission time to start its transmission. Let X(t) be the number of packets in the system at time t, Xn be the number of packets in the system after the completion of the nth transmission, and Xn be the number of packets available for transmission at time n. Is {Xn , n 0} a DTMC? If yes, give its transition probabilities. Is {Xn , n 0} a DTMC? If yes, give its transition probabilities. 7.17 Suppose the customers that cannot enter an M/M/1/1 queue (with arrival rate and service rate ) enter service at another single server queue with innite waiting room. This second queue is called an overow queue. The service times at the overow queue are iid exp() random variables. Let X(t) be the number of customers at the overow queue at time t. Model the overow queue as a G/M/1 queue. What is the LST of the inter-arrival time distribution to the overow queue? 7.18 Customers arrive according to a PP() at a service station with s distinct servers. Service times at server i are iid exp(i ), with 1 2 s . There is no waiting room. Thus there can be at the most s customers in the system. An incoming customer goes to the fastest available server. If all the servers are busy, he leaves without service. Model this as a CTMC. COMPUTATIONAL EXERCISES 7.10 Computational Exercises 329 7.1 Show that the variance of the number of customers in steady state in a stable M/M/1 system with arrival rate and service rate is given by 2 = , (1 )2 where = /. 7.2 Let X q (t) be the number of customers in the queue (not including any in service) at time t in an M/M/1 queue with arrival rate and service rate . Is {X q (t), t 0} a CTMC? Compute the limiting distribution of X q (t) assuming < . Show that the expected number of customers in the queue (not including the customer in service) is given by 2 . Lq = 1 q 7.3 Let Wn be the time spent in the queue (not including time in service) by the nth arriving customer in an M/M/1 queue with arrival rate and service rate . q Compute the limiting distribution of Wn assuming < . Compute W q , the limiting q expected value of Wn as n . Using the results of Computational Exercise 7.2 show that Lq = W q . Thus littles law holds when applied to the customers in the queue. 7.4 Let X(t) be the number of customers in the system at time t in an M/M/1 queue with arrival rate and service rate > . Let T = inf{t 0 : X(t) = 0}. T is called the busy period. Compute E(T |X(0) = i). 7.5 Let T be as in Computational Exercise 7.4. Let N be the total number of customers served during (0, T ]. Compute E(N |X(0) = i). 
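The Pollaczek-Khintchine formula (7.34) makes comparisons such as the one requested in Computational Exercise 7.50 easy to carry out numerically. The sketch below evaluates L for several service-time distributions with a common mean 1/μ; the values of λ and μ are illustrative.

```python
# A sketch of the Pollaczek-Khintchine formula (7.34):
#   L = rho + rho**2/(1 - rho) * (1 + c2)/2,
# where c2 = sigma^2/tau^2 is the squared coefficient of variation of the
# service time.  The arrival rate lam and service rate mu are hypothetical.

def pk_L(lam, tau, var):
    rho = lam * tau
    assert rho < 1, "stable only if rho < 1"
    c2 = var / tau ** 2
    return rho + (rho ** 2 / (1 - rho)) * (1 + c2) / 2

lam, mu, k = 0.8, 1.0, 4
tau = 1 / mu
cases = {
    "exponential":          tau ** 2,              # variance 1/mu^2
    "uniform on [0, 2/mu]": (2 * tau) ** 2 / 12,   # variance (b - a)^2 / 12
    "deterministic":        0.0,
    f"Erlang(k={k})":       tau ** 2 / k,          # variance 1/(k mu^2)
}
for name, var in cases.items():
    print(f"{name:22s} L = {pk_L(lam, tau, var):.3f}")
```

As the remark after Theorem 7.111 predicts, the deterministic service time yields the smallest L and the most variable service time the largest.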
7.6 Customers arrive according to P P () to a queueing system with two servers. The ith server (i = 1, 2) needs exp(i ) amount of time to serve one customer. Each incoming customer is routed to server 1 with probability p1 or to server 2 with probability p2 = 1 p1 , independently. Queue jumping is not allowed. Find the optimum routing probabilities that will minimize the expected total number of customers in the system in steady state. 7.7 Consider a stable M/M/1 queue with the following cost structure. A customer who sees i customers ahead of him when he joins the system costs $ci to the system. The system charges every customer a fee of $f upon entry. Show that the long run net revenue is given by (f i=0 ci i (1 )). 330 QUEUEING MODELS 7.8 A queueing system consists of K servers, each with its own queue. Customers arrive at the system according to a PP(). A system controller routes an incoming customer to server k with probability k , where 1 + 2 + + K = 1. Customers assigned to server k receive i.i.d. exp(k ) service times. Assume that 1 + 2 + + K > . It costs hk dollars to hold a customer for one unit of time in queue k. 1. What are the feasible values of k s so that the resulting system is stable? 2. Compute the the expected holding cost per unit time as a function of the routing probabilities k (1 k K) in the stable region. 3. Compute the optimal routing probabilities k that minimize the holding cost per unit time for the entire system. 7.9 Compute the long run fraction of customers who cannot enter the M/M/1/K system described in Subsection 7.3.2. 7.10 Compute W , the expected time spent in the system by an arriving customers in steady state in an M/M/1/K system, by using Littles Law and Equation 7.19. (If an arriving customer does not enter, his time in the system is zero.) What is the correct value of in L = W as applied to this example? 7.11 Compute W , the expected waiting time of the entering customers in steady state in an M/M/1/K system, by using Littles Law and Equation 7.19. What is the correct value of in L = W as applied to this example? 7.12 Suppose there are 0 < i < K customers in an M/M/1/K queue at time 0. Compute the expected time when the queue either becomes empty or full. 7.13 Consider the M/M/1/K system of Subsection 7.3.2 with the following cost structure. Each customer waiting in the system costs $c per unit time. Each customer entering the system pays $a as an entry fee to the system. Compute the long run rate of net revenue for this system. 7.14 Consider the system of Modeling Exercise 7.4 with production rate of 10 per hour and demand rate of 8 per hour. Suppose the machine is turned off when the number of items in the warehouse reaches K, and is turned on again when it falls to K 1. Any demand that occurs when the warehouse is empty is lost. It costs 5 dollars to produce an item, and 1 dollar to keep an item in the warehouse for one hour. Each item sells for ten dollars. 1. Model this system as an M/M/1/K queue. State the parameters. 2. Compute the long run net income (revenue-production and holding cost) per unit time, as a function of K. 3. Compute numerically the optimal K that maximizes the net income per unit time. 7.15 Consider the M/M/1 queue with balking (but no reneging) as described in Subsection 7.3.6. Suppose the limiting distribution of the number of customers in this COMPUTATIONAL EXERCISES 331 queue is {pj , j 0}. Using PASTA show that in steady state an arriving customer enters the system with probability j pj . 
j=0 7.16 Consider the M/M/1 queue with balking (but no reneging) as described in Subsection 7.3.6. Suppose the limiting distribution of the number of customers in this queue is P(), where = /. What balking probabilities will produce this limiting distribution? 7.17 Show that the expected number of busy servers in a stable M/M/s queue is /. 7.18 Derive Equation 7.20. Hence or otherwise compute the expected waiting time of a customer in the M/M/s system in steady state. 7.19 Show that for a stable M/M/s queue of Subsection 7.3.3 ps . Lq = 1 Compute W q explicitly and show that Littles law Lq = W q is satised. 7.20 Compute the limiting distribution of the time spent in the queue by a customer in an M/M/s queue. Hence or otherwise compute the limiting distribution of the time spent in the system by a customer in an M/M/s queue. 7.21 Consider two queueing systems. System 1 has s servers, each serving at rate . System 2 has a single server, serving at rate s. Both systems are subject to PP() arrivals. Show that in steady state, the expected number of customers in the queue (not including those in service) System 2 is less than in System 1. This shows that it is better to have a single efcient server than many inefcient ones. 7.22 Consider the nite population queue of Subsection 7.3.5 with 2 machines and one repair-person. Suppose every working machine produces revenue at a rate of $r per unit time. It costs $C to repair a machine. Compute the long run rate at which the system earns prots (revenue - cost). 7.23 When is the system in Modeling Exercise 7.4 stable? Assuming stability, compute the limiting distribution of the number of items in the warehouse. What fraction of the incoming demands are satised in steady state? 7.24 Compute the limiting distribution {pi , 0 i s} of the number of customers in an M/M/s/s queue with arrival rate and service rate for each server. 7.25 The quantity ps in the computational Exercise 7.24 is called the blocking probability, and is denoted by B(s, ) where = /. Show that the long run rate at which the customers enter the system is given by (1 B(s, )). Also, show that B(s, ) satises the recursion B(s, ) = B(s 1, ) , s + B(s 1, ) 332 with initial condition B(0, ) = 1. QUEUEING MODELS 7.26 When is the system in Modeling Exercise 7.1 stable? Assuming stability, compute the limiting distribution of the number of customers in the system. 7.27 Consider the system of Modeling Exercise 7.1. What is the limiting distribution of the number of customers in the system as seen by an arriving customer of type 1? By an entering customer of type 1? 7.28 When is the system in Modeling Exercise 7.3 stable? Assuming stability, compute the limiting distribution of the number of customers in the bank. What is the steady state probability that three tellers are active? 7.29 When is the system in Modeling Exercise 7.6 stable? Assuming stability, compute the limiting distribution of the number of customers in the system. 7.30 When is the system in Modeling Exercise 7.5 stable? Assuming stability, compute the expected number of customers in the system in steady state. 7.31 Show that the bivariate CTMC of Modeling Exercise 7.8 is stable if = / < 1. Assuming the system is stable, show that pi,j = lim P(X(t) = i, Y (t) = j), i 0, j = 0, 1, t is given by pi,0 pi,1 pN +n,1 = = = 1 , 0 i < N, N (1 i ), 1 i < N N (1 N )n , n 0. N 7.32 Consider the queueing system of Modeling Exercise 7.8. 
Suppose it costs $f to turn the server on from the off position, while turning the server off is free of cost. It costs $c to keep one customer in the system for one unit of time. Compute the long run operating cost per unit of the N -type policy. Show how one can optimally choose N to minimize this cost rate. 7.33 When is the system in Modeling Exercise 7.7 stable? Assuming stability, compute the limiting distribution of the number of customers in the system. What is the probability that server i is busy in steady state (i = 1, 2)? 7.34 Compute the limiting distribution of the CTMC in modeling Exercise 7.18 for the case of s = 3. What fraction of the customers are turned away in steady state? 7.35 When is the system in Modeling Exercise 7.9 stable? Assuming stability, compute the limiting distribution of the number of customers in the system. COMPUTATIONAL EXERCISES 333 7.36 When is the system in Modeling Exercise 7.10 stable? Assuming stability, compute the generating function of the limiting distribution of the number of customers in the system. 7.37 Consider the Jackson network of single server queue as shown in Figure ??. Derive the stability condition. Assuming stability compute 1. the expected number of customers in steady state in the network, 2. the fraction of the time the network is completely empty in steady state. 7.38 Do Computational Exercise 7.37 for the network in Figure ??. 7.39 Do Computational Exercise 7.37 for the network in Figure ??. 7.40 North Carolina State Fair has 35 rides, and it expects to get about 60,000 visitors per day (12 hours) on the average. Each visitor is expected to take 5 rides on the average during his/her visit. Each ride lasts approximately 1 minute and serves an average of 30 riders per batch. Construct an approximate Jackson Network model of the rides in the state fair that incorporates all the above data in a judicious fashion. State your assumptions. Is this network stable? Show how to compute the average queue length at a typical ride. 7.41 A 30 mile long stretch of an interstate highway in Montana has no inlets or exits. This stretch is served by 3 cell towers, stationed at milepost number 5, 15, 25. Each tower serves calls in the ten mile section around it. Cars enter the highway at milepost zero according to a PP(), with = 60/hr. (Ignore the trafc in the reverse direction.) They travel at a constant speed of 100 miles per hour. Each entering car initiates a phone call at rate = .2 per minute, i.e., the time until the initiation of a call is an exp() random variable. The call duration is exponentially distributed with mean 10 minutes. Once the call is nished the car does not generate any new calls. (Thus each car generates at most one call.) Suppose there is enough channel capacity available that no calls are blocked. When the calling car crosses from the area of one station to the next, the call is seamlessly handed over to the next station. Model this as a Jackson network with ve nodes, each having innite servers. Node 1 is for the rst tower, nodes 2 and 3 are for the second tower, and nodes 4 and 5 are for the third tower. Nodes 1, 2 and 4 handle newly initiated calls, while nodes 3 and 5 handle handed-over calls. Tower 1 does not handle any handed-over calls. Note that for innite server nodes the service time distribution can be general. Let Xi (t) be the number of calls in node i, 1 i 5. Compute 1. 2. 3. 4. 
the service time distribution of the calls in node i, the routing matrix, the expected number of calls handled by the ith station in steady state, the expected number of calls that are handed over from station i to station i + 1 per unit time (i = 1, 2). 334 QUEUEING MODELS 7.42 Consider a network of two nodes in series that operates as follows: customers arrive at the rst node from outside according to a PP(), and after completing service at node 1 move to node 2, and exit the system after completing service at node 2. The service times at each node are iid exp(). Node 1 has one server active as long as there are ve or fewer customers present at that node, and two servers active otherwise. Node 2 has one server active for up to two customers, two servers for three through ten customers, and three servers for any higher number. If an arriving customer sees a total of i customers at the two nodes, he joins the rst node with probability 1/(i + 1) and leaves the system without any service with probability i/(i + 1). Compute 1. the condition of stability 2. the expected number of customers in the network in steady state. 7.43 Consider an open Jackson network with N single-server nodes. Customers arrive from outside the network to the ith node with rate i . A fraction pi of the customers completing service at node i join the queue at node i + 1 and the rest leave the network permanently, i = 1, 2, ..., N 1. Customers completing service at node N join the queue at node 1 with probability pN , and the rest leave the network permanently. The service times at node i are exp(i ) random variables. 1. 2. 3. 4. State the assumptions to model this as a Jackson Network. What are the trafc equations for the Jackson Network? What is the condition of stability? What the expected number of customers in the network in steady state, assuming the network is stable? 7.44 Show that the probability that a customer in an open Jackson network of Section 7.4 stays in the network forever is zero if I R is invertible. 7.45 For a closed Jackson network of single server queues, shows that 1. limt P(Xi (t) j) = j GGN (K) , 0 j K. i N (Kj) 2. Li = limt E(Xi (t)) = K j=1 j GGN (K) , 0 j K. i N (Kj) 7.46 Generalize the method of computing GN (K) derived in Example 7.11 to general closed Jackson networks of single-server queues with N nodes and K customers. 7.47 A simple communications network consists of two nodes labeled A and B connected by two one-way communication links: line AB from A to B, and line BA from line from B to A. There are N users at each node. The ith user (1 i N ) at node A (B) is denoted by Ai (Bi ). User Ai has an interactive session set up with user Bi and it operates as follows: User Ai sends a message to user Bi . All the messages generated at node A wait in a buffer at node A for transmission to the appropriate user at node B on line AB in an FCFS fashion. When user Bi receives the message COMPUTATIONAL EXERCISES 335 from user Ai , she spends a random amount of time, called think time, to generate a response to it. All the messages generated at node B wait in a buffer at node B for transmission to the appropriate user at node A on line BA in an FCFS fashion. When user Ai receives the message from user Bi , she spends a random amount of time to generate a response to it. This process of messages going back and forth between the pairs of users Ai and Bi continues forever. Suppose all the think times are iid exp() random variables, and the message transmission times are iid exp() random variables. 
Model this as a closed Jackson network. What is the expected number of messages in the buffers at nodes A and B in steady state? 7.48 For the closed Jackson network of Section 7.5, dene the throughput T H(i) of node i as the rate at which customers leave node i in steady state, i.e., K T H(i) = n=0 i (n) lim P(Xi (t) = n). t Show that T H(i) = i GN (K) . GN (K 1) 7.49 Compute the expected number of customers in steady state in an M/G/1 system where the arrival rate is one customer per hour and the service time distribution is PH(, M ) where = [0.5 0.5 0] and 3 1 1 M = 0 3 2 . 0 0 3 7.50 Compute the expected queue length in an M/G/1 queue with the following service time distributions (all with mean 1/): 1. 2. 3. 4. Exponential with parameter , Uniform over [0, 2/], Deterministic with mean 1/, Erlang with parameters (k, k). Which distribution produces the largest congestion? Which produces the smallest? 7.51 Consider the {X(t), t 0} and the {Xn , n 0} processes dened in Modeling Exercise 7.11. Show that the limiting distribution of the two (if they exist) are identical. Let pn (qn ) be the limiting probability that there are n customers in the system and the server is up. Let p(z) and q(z) be the generating functions of {pn , n 0} and {qn , n 0}. Show that this system is stable if < . + 336 Assuming that the system is stable show that q(z) = and p(z) = + (1 z) q(z). z z + QUEUEING MODELS + (1 z) , 7.52 Show that the DTMC {Xn , n 0} in the Modeling Exercise 7.13 is positive recurrent if = < 1, where is the arrival rate and is the mean service time. Assuming the DTMC is stable, show that the generating function of the limiting distribution of Xn is given by (z) = G( z) 1 ((z) 1) m z G( z) where G is the LST of the service time, m is the expected number of arrivals during a single vacation, and (z) is the generating function of the number of arrivals during a single vacation. 7.53 Let X(t) be the number of customers at time t in the system described in Modeling Exercise 7.13. Show that {Xn , n 0} and {X(t), t 0} have the same limiting distribution, assuming it exists. Using the results of Computational Exercise 7.52 show that the expected number of customers in steady state is given by L=+ 2 1 2 1 1+ 2 2 + m(2) , 2m where 2 is the variance of the service time, m(2) is the second factorial moment of the number of arrivals during a single vacation. 7.54 Let X(t) be the number of customers at time t in an M/G/1 queue under N type control as explained in Modeling Exercise 7.8 for an M/M/1 queue. Using the results of computational Exercises 7.52 and 7.53 establish the condition of stability for this system and compute the generating function of the limiting distribution of X(t) as t . 7.55 When is the queueing system described in Modeling Exercise 7.14 stable? Assuming stability, compute the expected number of customers in the system in steady state under the two policies. Which policy is better at minimizing the expected number in the system in steady state? 7.56 Show that the DTMC {Xn , n 0} in the Modeling Exercise 7.16 is positive recurrent if < 1. Assuming the DTMC is stable, compute (z), the generating function of the limiting distribution of Xn as n . COMPUTATIONAL EXERCISES 337 7.57 Show that the DTMC {Xn , n 0} in the Modeling Exercise 7.16 is positive recurrent if < 1. Assuming the DTMC is stable, compute (z), the generating n as n . 
function of the limiting distribution of X 7.58 In the Modeling Exercise 7.16, is the limiting distribution of {X(t), t 0} same as that of {Xn , n 0} or {Xn , n 0}? Explain. 7.59 Show that the DTMC {Xn , n 0} in the Modeling Exercise 7.14 is positive recurrent if = G < 1. Assuming the DTMC is stable, show that the generating function of the limiting distribution of Xn is given by (z) = z H( z) G( z) 1 G . z) 1 G + H z G( Hint: Use the results of Computational Exercise 4.25. 7.60 Let X(t) be the number of customers at time t in the system described in Modeling Exercise 7.14. Show that {Xn , n 0} and {X(t), t 0} have the same limiting distribution, assuming it exists. Using the results of Computational Exercise 7.59 show that the expected number of customers in steady state is given by L= 2 2 2 2 2 2 H 2 H + H G G 2 G + G + + . 1 G + H 2 1 G + H 2 1 G 7.61 Consider an M/G/1 queue where the customers arrive according to a PP() and request iid service times with common mean , and variance 2 . After service completion, a customer leaves with probability p, or returns to the end of the queue with probability 1 p, and behaves like a new customer. 1. Compute the mean and variance of the amount of time a customer spends in service during the sojourn time in the system. 2. Compute the condition of stability. 3. Assuming stability, compute the expected number of customers in the system as seen by a departure (from the system) in steady state. 4. Assuming stability, compute the expected number of customers in the system at a service completion (customer may or may not depart at each service completion) in steady state. 7.62 Analyze the stability of the {X(t), t 0} process in Modeling Exercise 7.12. Assuming stability, compute the limiting distribution of the number of items in the warehouse. What fraction of the demands are lost in steady state? 7.63 Compute the limiting distribution of the number of customers in a G/M/1 queue with the inter-arrival times G(x) = r(1 e1 x ) + (1 r)(1 e2 x ), where 0 < r < 1, 1 > 0, 2 > 0. The service times are iid exp(). 338 QUEUEING MODELS 7.64 Let X(t) be the number of customers in a G/M/2 queue at time t. Let Xn be the number of customers as seen by the nth arrival. Show that {Xn , n 0} is a DTMC, and compute its one-step transition probability matrix. Derive the condition of stability and the limiting distribution of Xn as n . 7.65 Consider the overow queue of Modeling Exercise 7.17. 1. Compute the condition of stability for the overow queue. 2. Assuming the overow queue is stable, compute the pmf of the number of customers in the overow queue in steady state. 7.66 Consider the following modication to the M/G/1/1 retrial queue of Section 7.7. A new customer joins the service immediately if he nds the server free upon his arrival. If the server is busy, the arriving customer leaves immediately with probability c, or joins the orbit with probability 1 c, and conducts retrials until he is served. Let Xn and X(t) be as in Section 7.7. Derive the condition of stability and compute the generating function of the limiting distribution of Xn and X(t). Are they the same? 7.67 Consider the retrial queue of Section 7.7 with exp() service times. Show that the results of Section 7.7 are consistent with those of Example 6.38. 7.68 A warehouse stocks Q items. Orders for these items arrive according to a PP(). 
The warehouse follows a (Q, Q−1) replenishment policy with back orders as follows: If the warehouse is not empty, the incoming demand is satisfied from the existing stock and an order is placed with the supplier for replenishment. If the warehouse is empty, the incoming demand is back-logged and an order is placed with the supplier for replenishment. The lead time, i.e., the amount of time it takes for the order to reach the warehouse from the supplier, is a random variable with distribution G(·). The lead times are iid, and orders may cross, i.e., the orders placed at the supplier may be received out of order. Let X(t) be the number of outstanding orders at time t.
1. Model {X(t), t ≥ 0} as an M/G/∞ queue.
2. Compute the long run fraction of the time the warehouse is empty.
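Several of the closed-network exercises above (7.45 through 7.48) build on the normalizing-constant recursion of Example 7.11. The sketch below implements that recursion in the generalized form requested in Computational Exercise 7.46, for an illustrative three-node routing matrix; the marginal formula used for Li is the one stated in Computational Exercise 7.45.

```python
# A sketch of the normalizing-constant recursion of Example 7.11 for a closed
# Jackson network of N single-server nodes with K customers.  The routing
# matrix, service rates, and population size below are hypothetical.
import numpy as np

R = np.array([[0.0, 0.7, 0.3],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
mu = np.array([2.0, 1.5, 1.0])
K = 5
N = len(mu)

# pi solves pi = pi R, sum(pi) = 1  (Equation 7.26)
A = np.vstack([(R.T - np.eye(N))[:-1], np.ones(N)])
b = np.zeros(N); b[-1] = 1.0
pi = np.linalg.solve(A, b)
rho = pi / mu

# G[n, k] = normalizing constant using the first n nodes and k customers,
# computed via  G[n, k] = G[n-1, k] + rho[n-1] * G[n, k-1]  as in Example 7.11.
G = np.zeros((N + 1, K + 1))
G[:, 0] = 1.0
for n in range(1, N + 1):
    for k in range(1, K + 1):
        G[n, k] = G[n - 1, k] + rho[n - 1] * G[n, k - 1]

GN = G[N]
# Marginal quantities from Computational Exercise 7.45:
#   P(X_i >= j) = rho_i**j * G_N(K - j) / G_N(K),   L_i = sum over j of these.
L = [sum(rho[i] ** j * GN[K - j] / GN[K] for j in range(1, K + 1))
     for i in range(N)]
print(np.round(pi, 3), np.round(L, 3))
```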