UNIVERSITY OF SOUTHERN CALIFORNIA, SPRING 2008

Lecture Notes 2
EE 549 Queueing Theory
Instructor: Michael Neely

I. MULTISERVER SYSTEMS AND PERFORMANCE TRACKING

Here we treat multi-server systems that share a common buffer. We begin with the following simple example.

A. Example

Compare a single-server, work-conserving queue with constant service rate 2 to a system of two parallel servers, each with service rate 1. Consider an input stream consisting of exactly two packets, both of length 1 unit: packet A arrives at time t = 1, and packet B arrives at time t = 1.5. The packets are served in FIFO order in the single-server system. The packets are also served in FIFO order in the 2-server system: packet A begins service in either of the rate-1 servers immediately upon arrival, and packet B begins service immediately upon its own arrival by entering the alternative rate-1 server (note that the first server is busy with packet A at this time).

[Figure 1 plots the unfinished work functions U_single(t) and U_multi(t) versus time t for the rate-2 single server and the two rate-1 servers, with the annotation "First packet finished, and 2nd packet half finished" at time t = 2.]

Fig. 1. An example of the unfinished work functions associated with two arriving packets entering a multi-server system versus entering a single-server system, illustrating the multiplexing inequality.

The above figure illustrates the corresponding unfinished work functions U_single(t) and U_multi(t). In this example we observe that U_single(t) ≤ U_multi(t) for all time t. This can be considered a special case of the multiplexing inequality by treating the multi-queue input functions X1(t) and X2(t) as the streams consisting of the packets entering the first server and the second server, respectively. It is clear from the above example that the multi-server system suffers from inefficiencies when it is not fully loaded, that is, when some servers sit idle.
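The two unfinished work functions in this example can be checked numerically. The sketch below (the packet parameters come from the example above; the simulator itself is our own illustrative code, not from the notes) drains each server's assigned work on a fine time grid and verifies the multiplexing inequality U_single(t) ≤ U_multi(t) pointwise.

```python
def unfinished_work(arrivals, rates, dt=0.001, horizon=3.0):
    """Trace total unfinished work over time on a fine grid.
    arrivals: list of (arrival_time, size, assigned_server).
    rates: service rate of each server (one entry per server).
    Returns the sampled values of U(t) at t = 0, dt, 2*dt, ..."""
    remaining = [0.0] * len(rates)   # remaining work at each server
    pending = sorted(arrivals)
    trace = []
    i = 0
    t = 0.0
    while t <= horizon:
        # Add any packets that have arrived by time t.
        while i < len(pending) and pending[i][0] <= t:
            _, size, srv = pending[i]
            remaining[srv] += size
            i += 1
        trace.append(sum(remaining))
        # Each busy server drains its own work at its own rate.
        for s in range(len(rates)):
            remaining[s] = max(0.0, remaining[s] - rates[s] * dt)
        t += dt
    return trace

# Single rate-2 server: both packets are assigned to server 0.
single = unfinished_work([(1.0, 1.0, 0), (1.5, 1.0, 0)], [2.0])
# Two parallel rate-1 servers: packet A to server 0, packet B to server 1.
multi = unfinished_work([(1.0, 1.0, 0), (1.5, 1.0, 1)], [1.0, 1.0])

# Multiplexing inequality holds at every sample point (up to float error).
ok = all(us <= um + 1e-9 for us, um in zip(single, multi))
```

The peak of U_multi is 1.5 (at t = 1.5, when packet B arrives while half of packet A remains), while U_single never exceeds 1, matching Figure 1.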
In this lecture we describe the worst-case backlog increase incurred by these inefficiencies.

B. Multi-Server, Single-Buffer Queues

A multi-server, single-buffer queue is a queueing system with a single shared buffer for storing incoming packets, together with a set of servers that process these packets (see figure below). A packet waiting in the queue can be processed by any one of the servers. This system is conceptually similar to a system of parallel queues, with the exception that parallel queues often have separate storage buffers. Indeed, without a shared buffer, it is difficult or impossible for a packet currently contained in one queue to switch to the buffer space of another queue that it prefers. Shared buffering allows such freedom.
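The shared-buffer discipline described above can be sketched with a short event-driven simulation. The code below is an illustrative sketch only (the function name and event structure are our own assumptions, not from the notes): packets wait in one shared FIFO buffer, and whenever a server goes idle it takes the head-of-line packet.

```python
import heapq
from collections import deque

def simulate_shared_buffer(arrivals, rates):
    """Multi-server, single-buffer FIFO queue: any idle server may serve
    the head-of-line packet from the one shared buffer.
    arrivals: list of (arrival_time, packet_size), in any order.
    rates: service rate of each server.
    Returns a list of (packet_index, start_time, finish_time)."""
    buffer = deque()                    # the single shared FIFO buffer
    free = list(range(len(rates)))      # indices of currently idle servers
    events = []                         # min-heap of (time, kind, data)
    for i, (t, size) in enumerate(arrivals):
        heapq.heappush(events, (t, 0, (i, size)))  # kind 0 = arrival
    log = []
    while events:
        t, kind, data = heapq.heappop(events)
        if kind == 0:
            buffer.append(data)         # packet joins the shared buffer
        else:
            free.append(data)           # kind 1 = server became idle
        # Assign head-of-line packets to idle servers, FIFO order.
        while buffer and free:
            i, size = buffer.popleft()
            s = free.pop(0)
            finish = t + size / rates[s]
            log.append((i, t, finish))
            heapq.heappush(events, (finish, 1, s))
    return sorted(log)

# The two-packet example with two rate-1 servers:
schedule = simulate_shared_buffer([(1.0, 1.0), (1.5, 1.0)], [1.0, 1.0])
# → [(0, 1.0, 2.0), (1, 1.5, 2.5)]: each packet starts service on arrival.
```

Because the buffer is shared, a waiting packet is never stuck behind a busy server while another server sits idle, which is exactly the freedom the parallel-queue system lacks.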