
# Lecture 12 - Notes on Queueing Theory


Notes on Queueing Theory, Dr. Deep Medhi, University of Missouri-Kansas City

## Chapter 2: Stochastic Processes, B-D Model and Queues

In this chapter, we provide a brief overview of stochastic processes and then go into the birth-and-death model and queueing analysis. You may want to consult the book by Allen [1] (used often in CS 394) for more material on stochastic processes.

### 1. Stochastic Processes

Let $t$ be a parameter, assuming values in a set $T$. Let $X(t)$ be a random (stochastic) variable for every $t \in T$. The family $\{X(t),\ t \in T\}$ is called a *stochastic process*. We think about stochastic events that occur over time, i.e., $T$ is a time-space. In terms of measurement, we can quantify time as either a continuous variable or a discrete variable (for example, the ALOHA protocol uses a continuous-time model while slotted ALOHA uses a discrete-time model). Similarly, the state space (the values taken by the stochastic process) can also be continuous (C) or discrete (D). The possible combinations are:

| Time | State space | Example / status |
|------|-------------|------------------|
| C | D | to discuss |
| C | C | e.g., Brownian motion (not covered) |
| D | C | waiting time of the $n$-th arrival in a queue |
| D | D | to discuss |

In a discrete state space, the stochastic process is called a *chain*, with values denoted, e.g., $X_n,\ n = 0, 1, 2, \ldots$

### 2. Discrete-time Markov chain

A stochastic process $\{X_n\}$ is called a *Markov chain* if for every $n$ we have

$$P\{X_{n+1} = j \mid X_n = i,\ X_{n-1} = i_{n-1},\ \ldots,\ X_0 = i_0\} = P\{X_{n+1} = j \mid X_n = i\}.$$

(In this definition, time is discrete.) What this means is that for a Markov chain, the probability at time $n+1$ depends only on the previous state and on nothing before that. This is known as the *memoryless property* of a Markov chain.

Now, what is the probability of the process being in state $j$ given that it was in state $i$ at the preceding time? This is the *transition probability* from state $i$ to state $j$. It is written as

$$p_{ij} = P\{X_{n+1} = j \mid X_n = i\}.$$

CS 522, v 0.94, d.medhi, W'99
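To make the memoryless property and the one-step transition probabilities concrete, here is a minimal Python sketch of a two-state discrete-time Markov chain. The matrix `P` is hypothetical example data, not from the notes:

```python
import random

# Hypothetical two-state chain (states 0 and 1), for illustration only.
# P[i][j] = p_ij = P{X_{n+1} = j | X_n = i}
P = [[0.9, 0.1],
     [0.4, 0.6]]

# Each row must be a probability distribution: from state i we go somewhere.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12

def step(state):
    """Sample X_{n+1} given X_n = state. Only the current state is used:
    the history before `state` is irrelevant (memoryless property)."""
    u = random.random()
    cum = 0.0
    for j, p in enumerate(P[state]):
        cum += p
        if u < cum:
            return j
    return len(P) - 1  # guard against floating-point round-off

random.seed(0)
path = [0]
for _ in range(10):
    path.append(step(path[-1]))
print(path)  # a sample path X_0, X_1, ..., X_10
```

Note that `step` needs only `path[-1]`, never the earlier entries; that is exactly the Markov property stated above.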
Given that you are at state $i$ at time $n$, the probabilities of moving to all states in the next time slot must add up to 1, i.e.,

$$\sum_{j} p_{ij} = 1 \quad \text{for each } i.$$

A Markov chain is called *temporally homogeneous* if

$$P\{X_{n+1} = j \mid X_n = i\} = P\{X_n = j \mid X_{n-1} = i\} \quad \text{for all } n.$$

The transition probability is then denoted by $p_{ij}$. For all possible values of $i$ and $j$, one can denote the transition probabilities as a matrix $\mathbf{P}$ with elements $p_{ij}$. The transition in $m$ steps is given by

$$p^{(m)}_{ij} = P\{X_{n+m} = j \mid X_n = i\}.$$

Since a temporally homogeneous Markov chain has stationary transition probabilities, we have

$$p^{(m)}_{ij} = P\{X_{n+m} = j \mid X_n = i\} = P\{X_m = j \mid X_0 = i\} \quad \text{for all } n \text{ and } m.$$

### 3. Continuous-time Markov chain

Let $\{X(t)\}$ be a Markov process with a countable state space over the continuous time-space $[0, \infty)$. For example, $X(t)$ can be the number of customers in the system at time $t$. For continuous time and discrete space (Markov chains), the transition probability is denoted by

$$p_{ij}(s, t) = P\{X(t) = j \mid X(s) = i\}, \qquad s < t.$$

Note that for each $i$,

$$\sum_{j} p_{ij}(s, t) = 1.$$

Now, we can consider all the possible state pairs $(i, j)$, giving us a matrix of information. Thus, in matrix notation, the transition probability matrix is

$$\mathbf{P}(s, t) = \big[\, p_{ij}(s, t) \,\big].$$
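For a temporally homogeneous chain, the $m$-step transition matrix $\mathbf{P}^{(m)}$ is simply the $m$-th power of the one-step matrix $\mathbf{P}$. A small plain-Python sketch, again with a hypothetical two-state matrix:

```python
# Hypothetical one-step transition matrix, for illustration only.
P = [[0.9, 0.1],
     [0.4, 0.6]]

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def m_step(P, m):
    """P^(m): entry (i, j) is p_ij^(m) = P{X_{n+m} = j | X_n = i}."""
    result = [[float(i == j) for j in range(len(P))]  # identity matrix
              for i in range(len(P))]
    for _ in range(m):
        result = matmul(result, P)
    return result

P3 = m_step(P, 3)
# Every row of P^(3) is still a probability distribution.
for row in P3:
    assert abs(sum(row) - 1.0) < 1e-9
print(P3)
```

Raising $\mathbf{P}$ to a power preserves the row-sum-to-1 property, which matches the normalization condition stated above.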

### 4. Chapman-Kolmogorov equation

So far we have mentioned one-step transition probabilities, i.e., the probability of $X(t) = j$ given $X(s) = i$. The Chapman-Kolmogorov (C-K) equation provides a relation over multiple steps as follows:

$$p_{ij}(s, t) = \sum_{k} p_{ik}(s, u)\, p_{kj}(u, t),$$

where $s < u < t$.
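For a temporally homogeneous discrete-time chain, $\mathbf{P}(s, t)$ depends only on $t - s$, so the C-K equation reduces to $\mathbf{P}^{t-s} = \mathbf{P}^{u-s}\,\mathbf{P}^{t-u}$. A short numerical check of this identity, using a hypothetical two-state matrix:

```python
# Hypothetical one-step transition matrix, for illustration only.
P = [[0.9, 0.1],
     [0.4, 0.6]]

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def power(P, m):
    """Compute P^m by repeated multiplication, starting from the identity."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(m):
        result = matmul(result, P)
    return result

# Take s = 0, u = 2, t = 5: C-K says P^5 = P^2 * P^3.
lhs = power(P, 5)
rhs = matmul(power(P, 2), power(P, 3))
for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-12
print("C-K identity holds for this example")
```

The intermediate index $k$ in the C-K sum is exactly the inner index of the matrix multiplication: the chain must pass through *some* state $k$ at the intermediate time $u$.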