
# Markov_chainsIV-beamer - Long Term Behavior for Transient States

## Introductory Engineering Stochastic Processes, ORIE 3510

Instructor: Mark E. Lewis, Associate Professor
School of Operations Research and Information Engineering, Cornell University

Disclaimer: these notes are only meant as a lecture supplement, not a substitute!

## Outline

- Long Term Behavior for Transient States
- Example – Nice City
- The Expected Number of Visits to Each Transient State
- Probability of Ever Reaching a State
- Absorption Probabilities
- The Nice City Example with Numbers

## Long Term Behavior for Transient States

Recall that the long-term behavior, starting in a particular state \( i \), is captured by \( \lim_{n \to \infty} p_{ij}^{(n)} \).

- When \( j \) is transient, this limit is zero.
- When \( j \) is recurrent, we need to know the probability that, when the DTMC enters a recurrent state (starting from \( i \)), it enters the class that contains \( j \).

What about what happens along the way? For example, how much time is spent in each transient state before entering (any) recurrent class? Or, if there is a particular (transient) state of interest, what is the probability of ever entering that state?
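The standard answer to "how much time is spent in each transient state" (developed later in the deck, beyond this preview) uses the fundamental matrix \( N = (I - Q)^{-1} \), where \( Q \) is the transition matrix restricted to the transient states. A minimal NumPy sketch, using a hypothetical 4-state chain invented for illustration (states 0 and 1 transient, states 2 and 3 absorbing):

```python
import numpy as np

# Hypothetical DTMC (not from the lecture): states 0, 1 transient;
# states 2, 3 absorbing (recurrent classes of size one).
P = np.array([
    [0.2, 0.5, 0.2, 0.1],
    [0.4, 0.1, 0.3, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Q: one-step transitions among the transient states only.
Q = P[:2, :2]

# Fundamental matrix N = (I - Q)^{-1}: N[i, j] is the expected number of
# visits to transient state j, starting from transient state i, before
# the chain enters (any) recurrent class.
N = np.linalg.inv(np.eye(2) - Q)
print(N)

# Numerically confirm the limit on the slide: p_ij^{(n)} -> 0 when j is
# transient, for any starting state i.
Pn = np.linalg.matrix_power(P, 100)
print(Pn[0, :2])  # entries are numerically indistinguishable from 0
```

Here `N[0, 1]` reads as: starting in transient state 0, the expected total number of visits to transient state 1 before absorption.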
## Questions of Interest: Grand Theft Auto NICE City

- To get a property, you must commit some acts of kindness at that property.
- Your strategy is to work on two properties.
- You have chosen to randomize slightly: you will work on one property and then move to working on the other property with some fixed probabilities.
- While working on one property, there is the possibility that you will figure out how to complete the task for the other property, complete it, and be done.
- Once either task is completed (corresponding to a property), you own the property...and it generates revenue.
- You then sit back...and collect your money.