Introductory Engineering Stochastic Processes, ORIE 3510
Instructor: Mark E. Lewis, Professor
School of Operations Research and Information Engineering, Cornell University

Disclaimer: These notes are meant only as a lecture supplement, not a substitute!

Outline:
- Key Observation for Markov Chains
- Transition Diagrams
- Short-Term and Long-Term (Limiting/Stationary) Behavior
- Transience and Recurrence
- Key Observations

Key Observation for Markov Chains

Remark: If you know the initial state, all of the randomness is captured in the one-step transition matrix. This means that if you are ever asked to model a process as a Markov chain, you need to specify the initial distribution and P.

The transient distribution of the DTMC {X_n, n >= 0} on state space X is easily obtained from P:

    P(X_n = j | X_0 = i) = (P^n)_{ij}    for any i, j in X.

Two questions (ok, three):
- Is it simple to obtain P?
- Can visual techniques help?
- Can P be used to obtain the short-term and long-term behavior?
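The identity above says that the n-step transition probabilities are just entries of the matrix power P^n. A minimal sketch of this computation, using a hypothetical 3-state transition matrix (not from the notes):

```python
import numpy as np

# Hypothetical one-step transition matrix P for a 3-state DTMC.
# Each row is a probability distribution (rows sum to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

n = 4
Pn = np.linalg.matrix_power(P, n)  # P^n

# P(X_n = j | X_0 = i) = (P^n)_{ij}
i, j = 0, 2
prob = Pn[i, j]

# Equivalently, propagate an initial distribution mu forward: mu P^n
mu = np.array([1.0, 0.0, 0.0])  # X_0 = 0 with probability 1
dist_n = mu @ Pn                # distribution of X_n
```

Note that each row of P^n is again a probability distribution, so the same matrix answers the question for every starting state at once.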
This note was uploaded on 02/12/2012 for the course ORIE 3510 taught by Professor Resnik during the Spring '09 term at Cornell University (Engineering School).