
# 6.262 Lecture 19: Discrete Stochastic Processes


Lecture 19, 4/16/2010

Topics:

- Review: Markov Processes (aka Continuous-Time Markov Chains)
- Kolmogorov Differential Equations
    - Backward Kolmogorov Equation
    - Forward Kolmogorov Equation
- Reversibility in Markov Processes

## Markov Processes

A Markov process is a Markov chain with continuous, exponentially distributed holding times at each state. The mean holding time can depend on the state, but the holding times for distinct passages through the state are independent. There can be a different rate of departure for each state the chain may be in.

More specifically, the holding interval $U_n$ between the time that state $X_{n-1} = k$ is entered and the time that state $X_n$ is entered is a nonnegative exponentially distributed random variable with parameter $\nu_k$, i.e.,

$$P(U_n \le u \mid X_{n-1} = k) = 1 - e^{-\nu_k u}.$$

Furthermore, conditional on $X_{n-1}$, $U_n$ is jointly independent of $X_m$ for all $m \ne n-1$ and of $U_m$ for all $m \ne n$. Within a holding time the future is conditionally independent of the past, given the present state $X(t)$, so the process has the Markov property.
**Definition.** A countable-state Markov process $\{X(t);\, t \ge 0\}$ satisfies $X(t) = X_n$ for $S_n \le t < S_{n+1}$, where $S_0 = 0$ and $S_m = U_1 + \cdots + U_m$. Each $U_k$, given $X_{k-1} = i$, is exponential with rate $\nu_i$, and is conditionally independent of all other $U_m$ and $X_m$.
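As an illustration (not from the notes), the definition above can be simulated directly: sample each holding time $U_n$ from an exponential with the current state's rate, and the next state from the embedded chain. The helper name, two-state chain, and rates below are all made-up examples.

```python
import random

def simulate_markov_process(P, nu, x0, t_max, seed=0):
    """Simulate a finite-state Markov process via its embedded chain.

    P[i][j] : transition probabilities of the embedded Markov chain
    nu[i]   : departure rate of state i (holding time ~ Exp(nu[i]))
    Returns the list of (entry time S_n, state X_n) pairs with S_n < t_max.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        # Holding time in the current state is exponential with rate nu[x].
        t += rng.expovariate(nu[x])
        if t >= t_max:
            break
        # Next state from the embedded chain, independent of the holding time.
        r, cum = rng.random(), 0.0
        for j, p in enumerate(P[x]):
            cum += p
            if r < cum:
                x = j
                break
        path.append((t, x))
    return path

# Hypothetical two-state chain: state 0 leaves at rate 1, state 1 at rate 3.
P = [[0.0, 1.0], [1.0, 0.0]]
nu = [1.0, 3.0]
path = simulate_markov_process(P, nu, x0=0, t_max=100.0)
```

Because holding times and next-state draws are sampled independently, the construction matches the definition: the sequence $X_n$ is a Markov chain, and each $U_k$ depends on the past only through $X_{k-1}$.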

Another view of the same process is that each state $i$ has independent Poisson processes of rates $q_{ij}$, one for each possible next state $j$, and the first arrival among these determines both the next state and the transition time to enter it. This latter view lets us characterize the process by the transition rates $q_{ij} = \nu_i P_{ij}$, from which the original parameters can be recovered:

$$\nu_i = \sum_j q_{ij}, \qquad P_{ij} = \frac{q_{ij}}{\nu_i} = \frac{q_{ij}}{\sum_j q_{ij}}.$$

The rate at which the process leaves state $i$ is $\nu_i$, which equals $\sum_j q_{ij}$ (if there are no self transitions).
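A minimal sketch of this recovery, assuming the rates $q_{ij}$ are given as a matrix with zero diagonal; the function name and the 3-state rate matrix are made-up examples:

```python
def rates_to_embedded(Q):
    """Recover (nu, P) from transition rates q_ij (no self transitions).

    nu[i]   = sum_j q_ij    -- total departure rate of state i
    P[i][j] = q_ij / nu[i]  -- embedded-chain transition probability
    """
    nu = [sum(row) for row in Q]
    P = [[qij / nu[i] for qij in row] for i, row in enumerate(Q)]
    return nu, P

# Hypothetical 3-state rate matrix (diagonal entries zero).
Q = [[0.0, 2.0, 1.0],
     [4.0, 0.0, 0.0],
     [1.0, 1.0, 0.0]]
nu, P = rates_to_embedded(Q)
# nu is [3.0, 4.0, 2.0]; each row of P sums to 1.
```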
These transition rates are in many regards a better way to characterize a Markov process, as the M/M/1 queue model below makes clear.
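For the standard M/M/1 queue with arrival rate $\lambda$ and service rate $\mu$, the only nonzero transition rates are $q_{i,i+1} = \lambda$ and $q_{i,i-1} = \mu$ (for $i \ge 1$). A sketch, truncated to finitely many states for illustration (the truncation and the parameter values are assumptions, not part of the notes):

```python
def mm1_rates(lam, mu, n_states):
    """Transition-rate matrix of an M/M/1 queue, truncated to n_states states.

    q_{i,i+1} = lam (an arrival); q_{i,i-1} = mu (a service completion, i >= 1).
    """
    Q = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        if i + 1 < n_states:
            Q[i][i + 1] = lam   # arrival: queue grows by one
        if i >= 1:
            Q[i][i - 1] = mu    # departure: queue shrinks by one
    return Q

Q = mm1_rates(lam=1.0, mu=2.0, n_states=4)
# State 0 can only gain a customer (nu_0 = lam);
# interior states leave at rate nu_i = lam + mu.
```

Note how the rates describe the queue directly (arrivals and services), whereas the pair $(\nu_i, P_{ij})$ mixes them together.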

Intuitively, the steady-state fraction of time a countable-state Markov process spends in state $j$ should depend on the fraction $\pi_j$ of transitions that enter state $j$ in steady state and on the expected holding time $1/\nu_j$. In fact, the course notes use renewal theory in Section 6.2 to show that this is precisely the case:

**Theorem 6.1.** The limiting time-average fraction of time spent in state $j$ is, with probability 1,

$$p_j = \frac{\pi_j / \nu_j}{\sum_k \pi_k / \nu_k},$$

provided the embedded Markov chain is irreducible and positive recurrent (which guarantees existence of a unique set of positive $\pi$'s).
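The formula of Theorem 6.1 is easy to evaluate numerically for a finite chain; the two-state $\pi$ and $\nu$ values below are a made-up example:

```python
def time_average_fractions(pi, nu):
    """Theorem 6.1: p_j = (pi_j / nu_j) / sum_k (pi_k / nu_k).

    pi[j] : steady-state probabilities of the embedded chain
    nu[j] : departure rate of state j (expected holding time 1/nu[j])
    """
    weights = [pj / nj for pj, nj in zip(pi, nu)]   # pi_j / nu_j
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical example: the embedded chain alternates (pi = 1/2, 1/2),
# but state 1 is left three times as fast, so the process spends
# three quarters of its time in state 0.
p = time_average_fractions(pi=[0.5, 0.5], nu=[1.0, 3.0])
# p is approximately [0.75, 0.25]
```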
While the theorem remains valid as stated, an anomalous case occurs when $\sum_k \pi_k / \nu_k = \infty$, as in the modified M/M/1 queue model below, where the customer arrivals and the service times become slower and slower the more filled the queue becomes.

