6.262 Lecture 21 - Discrete Stochastic Processes

DISCRETE STOCHASTIC PROCESSES
Lecture 21 (4/23/2010)

Semi-Markov Processes (Section 6.8)
- Fraction of Time in the State (p_j) versus Fraction of Transitions to the State (π_j)
- Average Time Spent in State i Going to State j, U(i,j)
- The M/G/1 Queue as a Semi-Markov Process

Random Walks - Chapter 7
- Simple Random Walks (Section 7.1)
- Application to G/G/1 Queues (Section 7.2)
- Application to Hypothesis Testing (Section 7.3)

Semi-Markov Process

Visualize starting in state X_0 at time 0. The next state X_1 is chosen according to the embedded chain, and then the time U_1 is chosen, conditional on X_0 and X_1. The process remains in state X_0 for 0 <= t < U_1 and then transitions to X_1 at t = S_1 = U_1. Next X_2 is chosen according to the embedded chain, and then U_2 is chosen, conditional on X_1 and X_2, and so on.

If there is only one state, then we have a simple renewal process. If the transition times are deterministic and equal, then we have a Markov chain. Alternatively, if we ignore transition times and only ask for the sequence of states, we also have a Markov chain.
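
This state-then-holding-time recipe translates directly into a simulation. Below is a minimal sketch (not from the lecture notes): the names simulate_semi_markov and sample_U are illustrative, and the embedded-chain probabilities P and the conditional holding-time sampler are assumed to be supplied by the caller.

    import random

    def simulate_semi_markov(P, sample_U, x0, num_transitions):
        """Simulate one sample path of a semi-Markov process.

        P        -- dict of dicts: P[i][j] is the embedded-chain transition
                    probability from state i to state j.
        sample_U -- function (i, j) -> random holding time, drawn conditional
                    on the current state i and the next state j.
        x0       -- initial state X_0, entered at time 0.
        Returns a list of (state, entry_time) pairs.
        """
        t, state = 0.0, x0
        path = [(state, t)]
        for _ in range(num_transitions):
            # Choose the next state X_{n+1} from the embedded chain ...
            targets = list(P[state])
            weights = [P[state][j] for j in targets]
            nxt = random.choices(targets, weights=weights)[0]
            # ... then draw U_{n+1} conditional on (X_n, X_{n+1}) and advance time.
            t += sample_U(state, nxt)
            state = nxt
            path.append((state, t))
        return path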

Conditional Dependence in a Semi-Markov Process

[Figure: diagram of the conditional dependence structure in a semi-Markov process.]

Semi-Markov Example

A taxi serves 3 cities. When it arrives in city 1, a new customer immediately asks to be taken to a location in city 1 with probability P_11, to a location in city 2 with probability P_12, and to a location in city 3 with probability P_13. The durations of the taxi rides are the random variables U_11, U_12, and U_13, respectively. We assume a new customer enters the moment the trip ends. The taxi's location during a trip is counted as its city of origin, i.e., X(t) = 1 for all t from arrival in city 1 up to the time it first arrives in another city.

[Figure: three-city transition diagram with each arc i -> j labeled by the transition probability P_ij and trip duration U_ij, for i, j in {1, 2, 3}.]
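
As a purely hypothetical instantiation of this example, the sketch below plugs made-up probabilities P_ij and exponential trip durations U_ij into the simulate_semi_markov sketch above; none of these numbers come from the lecture.

    import random

    # Hypothetical embedded-chain probabilities P_ij for the three cities.
    P = {1: {1: 0.3, 2: 0.5, 3: 0.2},
         2: {1: 0.4, 2: 0.4, 3: 0.2},
         3: {1: 0.5, 2: 0.3, 3: 0.2}}

    # Hypothetical mean trip durations; U_ij is modeled as exponential here.
    mean_trip = {(i, j): 10.0 + 5.0 * abs(i - j) for i in P for j in P}

    def sample_U(i, j):
        return random.expovariate(1.0 / mean_trip[(i, j)])

    # X(t) equals the city of origin for the whole duration of each trip.
    path = simulate_semi_markov(P, sample_U, x0=1, num_transitions=1000)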

Question - For the 2-state semi-Markov process below, is the present state the only thing worth knowing about the past for purposes of predicting the future? If you know the present state is 0, do you learn anything more by knowing it has been in state 0 for 95 seconds? The correct answer to this explains why these are called semi-Markov processes.

Fraction of Time in the State (p_j) versus Fraction of Transitions to the State (π_j)

Let π_i be the steady-state probability of state i in the embedded chain. Let p_i be the steady-state fraction of time spent in state i. This is not typically the same as π_i. (π = ?, p = ?)

[Figure: two-state semi-Markov process on states 0 and 1; each transition has probability 1/2; three of the expected holding times are 1, and one holding time out of state 0 is 99.]
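
For reference, the relation between these two quantities developed in Section 6.8 expresses p_j as π_j weighted by the expected holding time in state j and renormalized; stated in LaTeX (with U(j) the expected time per visit to state j):

    \[
        \bar{U}(j) = \sum_{k} P_{jk}\,\bar{U}(j,k), \qquad
        p_j = \frac{\pi_j\,\bar{U}(j)}{\sum_{i} \pi_i\,\bar{U}(i)} .
    \]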

π_i is the long-term fraction of transitions that enter state i, whereas p_i is the fraction of time spent in state i. In the figure, p_0 is quite large since the process persists in state 0 for a long time.
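
A quick numeric check of that claim, assuming (from the figure) a symmetric embedded chain and that the one long holding time of 99 belongs to the 0 -> 0 transition:

    # Assumed from the figure: every transition has probability 1/2, the 0->0
    # holding time averages 99, and all other holding times average 1.
    pi = {0: 0.5, 1: 0.5}                    # embedded-chain probabilities
    U_bar = {0: 0.5 * 99 + 0.5 * 1,          # expected holding time in state 0
             1: 0.5 * 1 + 0.5 * 1}           # expected holding time in state 1
    norm = sum(pi[i] * U_bar[i] for i in pi)
    p = {i: pi[i] * U_bar[i] / norm for i in pi}
    # p[0] ~ 0.98 and p[1] ~ 0.02: the process spends about 98% of its time in state 0.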