# MIT2_854F10_queueing - Queues. Lecturer: Stanley B. Gershwin

## Stochastic processes

t is time. X(·) is a stochastic process if X(t) is a random variable for every t.

- t is a scalar; it can be discrete or continuous.
- X(t) can be discrete or continuous, scalar or vector.
## Markov processes

A Markov process is a stochastic process in which the probability of finding X at some value at time t + δt depends only on the value of X at time t. Equivalently, let x(s), s ≤ t, be the history of the values of X before time t and let A be a possible value of X. Then

prob{X(t + δt) = A | X(s) = x(s), s ≤ t} = prob{X(t + δt) = A | X(t) = x(t)}

## Markov processes

In words: if we know what X was at time t, we gain no further useful information about X(t + δt) by also knowing what X was at any time earlier than t. This is the definition of a class of mathematical models. It is NOT a statement about reality! That is, not everything is a Markov process.
## Markov processes: Example

I have $100 at time t = 0. At every time t ≥ 1, I have $N(t). A (possibly biased) coin is flipped.

- If it lands with H showing, N(t + 1) = N(t) + 1.
- If it lands with T showing, N(t + 1) = N(t) - 1.

N(t) is a Markov process. Why?
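The coin-flip gamble can be sketched in a few lines of Python. The point of the sketch is the Markov property itself: each step updates N(t) using only its current value, never the history. The function name, seed, and step count are illustrative choices, not part of the lecture.

```python
import random

def simulate_gamble(n_steps, p_heads=0.5, start=100, seed=0):
    """Simulate the coin-flip gamble: N(t+1) = N(t) + 1 on heads,
    N(t) - 1 on tails. The next value depends only on the current
    value of n, which is exactly why N(t) is a Markov process."""
    rng = random.Random(seed)
    n = start
    path = [n]
    for _ in range(n_steps):
        n += 1 if rng.random() < p_heads else -1  # H: +1, T: -1
        path.append(n)
    return path

path = simulate_gamble(10)
print(path)
```

Every consecutive pair in `path` differs by exactly 1, matching the two allowed transitions.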

## Markov processes: discrete state, discrete time

States and transitions:

- States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that is more convenient).
- Time can be numbered 0, 1, 2, 3, ... (or 0, Δ, 2Δ, 3Δ, ... if more convenient).
- The probability of a transition from j to i in one time unit is often written P_ij, where

  P_ij = prob{X(t + 1) = i | X(t) = j}
## Markov processes: discrete state, discrete time

[Transition graph: states 1-7, with arcs labeled P_14, P_24, P_64, and P_45, and a self-loop on state 4 labeled 1 - P_14 - P_24 - P_64.]

P_ij is a probability. Note that P_ii = 1 - Σ_{m ≠ i} P_mi.

## Markov processes: discrete state, discrete time

Example: H(t) is the number of Hs after t coin flips. Assume the probability of H is p.

[Transition graph: states 0, 1, 2, 3, 4, ..., with an arc labeled p from each state to the next higher state and a self-loop labeled 1 - p on each state.]
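To make the P_ij convention concrete, here is a sketch (Python with NumPy) of the transition matrix for this heads-counting chain. Because P_ij = prob{X(t+1) = i | X(t) = j}, each *column* of the matrix sums to 1. The chain is truncated at a finite top state (made absorbing) purely so the matrix is finite; the function name and truncation are my own illustration, not from the lecture.

```python
import numpy as np

def heads_count_matrix(n_states, p):
    """Transition matrix for H(t), the number of heads after t flips,
    truncated at n_states - 1. Convention from the slides:
    P[i, j] = prob{X(t+1) = i | X(t) = j}, so each column sums to 1."""
    P = np.zeros((n_states, n_states))
    for j in range(n_states):
        if j + 1 < n_states:
            P[j + 1, j] = p       # heads: count goes up by 1
            P[j, j] = 1 - p       # tails: count stays the same
        else:
            P[j, j] = 1.0         # truncation: top state absorbs
    return P

P = heads_count_matrix(5, p=0.3)
print(P.sum(axis=0))  # every column sums to 1
```

The column-sum check is the matrix form of the identity P_ii = 1 - Σ_{m ≠ i} P_mi from the earlier slide.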
## Markov processes: discrete state, discrete time

Example: the coin-flip bets on Slide 5. Assume the probability of H is p.

[Transition graph: states ..., 96, 97, 98, 99, 100, 101, 102, 103, ..., with an arc labeled p from each state to the next higher state and an arc labeled 1 - p from each state to the next lower state.]

## Markov processes: discrete state, discrete time

Define π_i(t) = prob{X(t) = i}.

- Transition equations: π_i(t + 1) = Σ_j P_ij π_j(t). (Law of Total Probability)
- Normalization equation: Σ_i π_i(t) = 1.
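With the column convention P[i, j] = prob{X(t+1) = i | X(t) = j}, the transition equations are just the matrix-vector product π(t+1) = P π(t), and normalization is preserved at every step. The sketch below iterates a small 3-state truncated heads-counting chain; the specific matrix and p = 0.3 are illustrative numbers, not from the lecture.

```python
import numpy as np

# Transition equations pi_i(t+1) = sum_j P_ij pi_j(t), in matrix form
# pi(t+1) = P pi(t). Illustrative 3-state chain: count heads with
# probability p per flip, top state absorbing.
p = 0.3
P = np.array([[1 - p, 0.0,   0.0],
              [p,     1 - p, 0.0],
              [0.0,   p,     1.0]])

pi = np.array([1.0, 0.0, 0.0])          # start in state 0 with certainty
for t in range(4):
    pi = P @ pi                          # one application of the transition equations
    assert abs(pi.sum() - 1.0) < 1e-12   # normalization equation holds at every t
print(pi)
```

After 4 steps, π_0(4) = (1 - p)^4 and π_1(4) = 4 p (1 - p)^3, exactly the binomial probabilities of 0 and 1 heads in 4 flips.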
## Markov processes: discrete state, discrete time

[Detail of the transition graph at states 4 and 5: an arc labeled P_45 from state 5 to state 4, and a self-loop on state 4 labeled 1 - P_14 - P_24 - P_64.]

Transition equation: π_4(t + 1) = π_5(t) P_45 + π_4(t)(1 - P_14 - P_24 - P_64)

## Markov processes: discrete state, discrete time

[Transition graph: states 1-7, with arcs labeled P_14, P_24, P_64, and P_45, and a self-loop on state 4 labeled 1 - P_14 - P_24 - P_64.]

prob{X(t + 1) = 2} =
  prob{X(t + 1) = 2 | X(t) = 1} prob{X(t) = 1}
+ prob{X(t + 1) = 2 | X(t) = 2} prob{X(t) = 2}
+ prob{X(t + 1) = 2 | X(t) = 4} prob{X(t) = 4}
+ prob{X(t + 1) = 2 | X(t) = 5} prob{X(t) = 5}
## Markov processes: discrete state, discrete time

Or, since P_ij = prob{X(t + 1) = i | X(t) = j} and π_i(t) = prob{X(t) = i},

π_2(t + 1) = P_21 π_1(t) + P_22 π_2(t) + P_24 π_4(t) + P_25 π_5(t)
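The state-by-state expansion and the matrix form are the same computation: the expansion just drops the terms whose P_2j is zero. A quick numerical check, on an arbitrary 7-state column-stochastic matrix (states 0..6 here stand in for the slides' states 1..7; the random values are illustrative only):

```python
import numpy as np

# Verify that the expanded transition equation for one state agrees
# with the matrix form pi(t+1) = P pi(t).
rng = np.random.default_rng(0)
P = rng.random((7, 7))
P /= P.sum(axis=0)        # make each column sum to 1 (column-stochastic)
pi = rng.random(7)
pi /= pi.sum()            # a valid probability vector

# Expanded form for one state (index 1, i.e. the slides' state 2),
# summing P_2j * pi_j over all source states j:
pi2_next = sum(P[1, j] * pi[j] for j in range(7))
assert abs(pi2_next - (P @ pi)[1]) < 1e-12
print(pi2_next)
```

In the lecture's graph only states 1, 2, 4, and 5 feed state 2, so the other terms in the sum would simply be zero.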
