# notes2_EE549_2008 - UNIVERSITY OF SOUTHERN CALIFORNIA...

UNIVERSITY OF SOUTHERN CALIFORNIA, SPRING 2008
Lecture Notes 2, EE 549 — Queueing Theory
Instructor: Michael Neely

I. MULTISERVER SYSTEMS AND PERFORMANCE TRACKING

Here we treat multi-server systems that share a common buffer. We begin with the following simple example.

A. Example

Compare a single-server, work-conserving queue with constant service rate μ = 2 to a system of two parallel servers with individual service rates μ = 1. Consider an input stream consisting of exactly two packets, both of length 1 unit: packet A arrives at time t = 1, and packet B arrives at time t = 1.5. The packets are served in FIFO order in the single-server system. The packets are also served in FIFO order in the 2-server system: packet A begins service in either of the rate-1 servers immediately upon arrival, and packet B begins service immediately upon its arrival by entering the other rate-1 server (note that the first server is busy with packet A at this time).

Fig. 1. The unfinished work functions U_single(t) and U_multi(t) for the two arriving packets entering a multi-server system versus a single-server system, illustrating the multiplexing inequality. [Figure: two piecewise-linear curves over 0 ≤ t ≤ 3; at t = 2 the first packet is finished and the second packet is half finished in the multi-server system.]

The figure above illustrates the corresponding unfinished work functions U_single(t) and U_multi(t). In this example we observe that U_single(t) ≤ U_multi(t) for all time t. This can be considered a special case of the multiplexing inequality by treating the multi-queue input functions X_1(t) and X_2(t) as the streams consisting of the packet entering the first server and the second server, respectively. It is clear from this example that the multi-server system suffers from inefficiencies when it is not fully loaded, that is, when some servers sit idle.
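The two unfinished work functions from the example can be reproduced numerically. The sketch below is not from the notes; it is a minimal time-stepped simulation, assuming the single server drains total work at rate 2 whenever work is present, while in the 2-server system each packet is drained at rate 1 by its own server. The array names `U_single` and `U_multi` mirror the notation in the text.

```python
import numpy as np

# Two packets, each of size 1: A arrives at t = 1.0, B at t = 1.5.
arrivals = [1.0, 1.5]
sizes = [1.0, 1.0]

dt = 1e-3
T = np.arange(0.0, 3.0, dt)

# Single work-conserving server of rate 2: total unfinished work
# drains at rate 2 whenever it is positive.
U_single = np.zeros_like(T)
u = 0.0
for i, t in enumerate(T):
    for a, s in zip(arrivals, sizes):
        if abs(t - a) < dt / 2:      # packet arrives in this step
            u += s
    u = max(u - 2.0 * dt, 0.0)
    U_single[i] = u

# Two parallel rate-1 servers: each packet is served by its own
# server, so its remaining size drains at rate 1 from its arrival.
remaining = [0.0, 0.0]
U_multi = np.zeros_like(T)
for i, t in enumerate(T):
    for k, (a, s) in enumerate(zip(arrivals, sizes)):
        if abs(t - a) < dt / 2:
            remaining[k] = s
        remaining[k] = max(remaining[k] - 1.0 * dt, 0.0)
    U_multi[i] = sum(remaining)

# The multiplexing inequality: the fast single server never lags.
assert np.all(U_single <= U_multi + 1e-9)
```

Running this confirms the picture in Fig. 1: at t = 2 the single-server system is already empty, while the multi-server system still holds half of packet B.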
In this lecture we describe the worst-case backlog increase incurred by these inefficiencies.

B. Multi-Server, Single-Buffer Queues

A multi-server, single-buffer queue is a queueing system with a single shared buffer for storing incoming packets, together with a set of servers that process these packets (see Fig. 2 below). A packet waiting in the queue can be processed by any one of the servers. This system is conceptually similar to a system of parallel queues, with the exception that parallel queues often have separate storage buffers. Indeed, without a shared buffer it is difficult or impossible for a packet currently contained in one queue to switch to the buffer space of another queue that it prefers. Shared buffering allows such freedom.

Definition 1: A multi-server, single-buffer queueing system is work conserving if it never holds a packet in its buffer while there is an idle server available.
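Definition 1 can be illustrated with a small discrete-event sketch. The helper below is hypothetical (not from the notes) and assumes equal-rate servers and FIFO dispatch from the shared buffer; under these assumptions, starting each packet on the earliest-available server is work conserving, since a packet never waits while a server sits idle.

```python
import heapq

def departures(arrivals, sizes, num_servers, rate=1.0):
    """Departure times in a work-conserving multi-server,
    single-buffer FIFO queue with equal-rate servers.

    Hypothetical helper: server k becomes free at free[k]; a
    packet starts service at max(its arrival, earliest free
    time), so it never waits while a server is idle.
    """
    free = [0.0] * num_servers
    heapq.heapify(free)
    out = []
    for a, s in sorted(zip(arrivals, sizes)):
        start = max(a, heapq.heappop(free))
        done = start + s / rate
        heapq.heappush(free, done)
        out.append(done)
    return out

# The example from Section A:
# two rate-1 servers vs. one rate-2 server.
print(departures([1.0, 1.5], [1.0, 1.0], 2, 1.0))  # [2.0, 2.5]
print(departures([1.0, 1.5], [1.0, 1.0], 1, 2.0))  # [1.5, 2.0]
```

Note that with heterogeneous server rates the choice of which idle server to use also matters for performance; the sketch sidesteps this by assuming all servers share one rate.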

Fig. 2. A multi-server, shared-buffer queue with input X(t) and K servers of rates μ_1(t), μ_2(t), …, μ_K(t).
