
# MIT6_262S11_lec20 - 6.262 Discrete Stochastic Processes L20...


6.262: Discrete Stochastic Processes, 4/25/11
L20: Markov Processes and Random Walks

Outline:
- Review: steady state for Markov processes
- Reversibility for Markov processes
- Random walks
- Queueing delay in a G/G/1 queue
- Detection, decisions, and hypothesis testing

Review: If the embedded chain of a Markov process is positive recurrent, then

$$p_j = \frac{\pi_j/\nu_j}{\sum_k \pi_k/\nu_k}; \qquad \lim_{t \to \infty} \frac{M_i(t)}{t} = \frac{1}{\sum_k \pi_k/\nu_k} \quad \text{WP1}$$

where $\lim_{t\to\infty} M_i(t)/t$ is the sample-path average rate at which transitions occur (WP1) and $p_j$ is the sample-path average fraction of time spent in state $j$ (WP1), independent of the starting state.

If $\sum_k \pi_k/\nu_k = \infty$, the transition rate $M_i(t)/t \to 0$ and the process has no meaningful steady state. Otherwise the steady state uniquely satisfies

$$p_j \nu_j = \sum_i p_i q_{ij}; \qquad p_j > 0 \text{ for all } j; \qquad \sum_j p_j = 1$$

This says that rate in equals rate out for each state. For a birth/death process, $p_j q_{j,j+1} = p_{j+1} q_{j+1,j}$.
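The balance equations above are easy to check numerically. The following is a minimal sketch using a hypothetical 5-state birth/death process with birth rate 1 and death rate 2 (the numbers are illustrative, not from the lecture): it solves $p_j \nu_j = \sum_i p_i q_{ij}$ with NumPy and confirms the detailed-balance relation.

```python
import numpy as np

# Hypothetical truncated birth/death process: 5 states, birth rate 1,
# death rate 2 (illustrative numbers, not from the lecture).
lam, mu, K = 1.0, 2.0, 5
Q = np.zeros((K, K))              # transition rates q_ij for i != j
for j in range(K - 1):
    Q[j, j + 1] = lam             # birth: j -> j+1
    Q[j + 1, j] = mu              # death: j+1 -> j
nu = Q.sum(axis=1)                # nu_j: total departure rate from state j

# Solve p_j nu_j = sum_i p_i q_ij together with sum_j p_j = 1.
A = Q.T - np.diag(nu)             # row j of A @ p: (rate into j) - (rate out of j)
A[-1, :] = 1.0                    # replace one redundant equation by normalization
b = np.zeros(K)
b[-1] = 1.0
p = np.linalg.solve(A, b)

# Detailed balance for birth/death: p_j q_{j,j+1} = p_{j+1} q_{j+1,j}
for j in range(K - 1):
    assert abs(p[j] * Q[j, j + 1] - p[j + 1] * Q[j + 1, j]) < 1e-12
print(p)                          # geometric with ratio lam/mu = 1/2
```

Replacing one balance equation with the normalization is valid because the rows of the generator sum to zero, so one equation is always redundant for an irreducible process.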

For an irreducible process, if there is a solution to the equations

$$p_j \nu_j = \sum_i p_i q_{ij}; \qquad p_j > 0 \text{ for all } j; \qquad \sum_j p_j = 1$$

and if $\sum_i \nu_i p_i < \infty$, then the embedded chain is positive recurrent and

$$\pi_j = \frac{p_j \nu_j}{\sum_i p_i \nu_i}; \qquad \sum_i \pi_i/\nu_i = \frac{1}{\sum_i p_i \nu_i}$$

If $\sum_i \nu_i p_i = \infty$, then each $\pi_j = 0$, the embedded chain is either transient or null recurrent, and the notion of steady state makes no sense.

[Figures: the embedded chain of a hyperactive birth/death process, with $P_{0,1} = 1$ and $P_{j,j+1} = 0.6$, $P_{j,j-1} = 0.4$ for $j \ge 1$; and the same process in terms of the rates $\{q_{ij}\}$, with $q_{j,j+1} = 0.6 \cdot 2^j$ and $q_{j+1,j} = 0.4 \cdot 2^{j+1}$.]

Using $p_j q_{j,j+1} = p_{j+1} q_{j+1,j}$, we see that $p_{j+1} = (3/4)\, p_j$, so $p_j = (1/4)(3/4)^j$ and $\sum_j p_j \nu_j = \infty$.

If we truncate this process to $k$ states, then

$$p_j = \frac{(1/4)(3/4)^j}{1 - (3/4)^k}; \qquad \pi_j = \frac{(1/2)(2/3)^{k-j}}{1 - (2/3)^k}$$

and $\sum_j p_j \nu_j \to \infty$ as $k \to \infty$.
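The truncated hyperactive chain can be verified numerically. This sketch builds the rate matrix with $q_{j,j+1} = 0.6 \cdot 2^j$ and $q_{j+1,j} = 0.4 \cdot 2^{j+1}$ as in the figure, solves the balance equations for several truncation sizes $k$, checks the closed form for $p_j$ (which follows from detailed balance alone), and shows $\sum_j p_j \nu_j$ growing without bound:

```python
import numpy as np

# Hyperactive birth/death truncated to k states:
# q_{j,j+1} = 0.6 * 2^j and q_{j+1,j} = 0.4 * 2^(j+1).
def truncated_p(k):
    Q = np.zeros((k, k))
    for j in range(k - 1):
        Q[j, j + 1] = 0.6 * 2.0 ** j
        Q[j + 1, j] = 0.4 * 2.0 ** (j + 1)
    nu = Q.sum(axis=1)                    # nu_j: departure rate from state j
    A = Q.T - np.diag(nu)                 # balance: p_j nu_j = sum_i p_i q_ij
    A[-1, :] = 1.0                        # normalization sum_j p_j = 1
    b = np.zeros(k)
    b[-1] = 1.0
    return np.linalg.solve(A, b), nu

for k in (5, 10, 20):
    p, nu = truncated_p(k)
    j = np.arange(k)
    # matches the closed form p_j = (1/4)(3/4)^j / (1 - (3/4)^k)
    assert np.allclose(p, 0.25 * 0.75 ** j / (1 - 0.75 ** k))
    print(k, p @ nu)                      # sum_j p_j nu_j grows without bound
```

Note that $p_{j+1}/p_j = 3/4$ is independent of the $\nu_j$, which is why the closed form for $p_j$ holds exactly even though the boundary states have smaller departure rates.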
Reversibility for Markov processes

For any Markov chain in steady state, the backward transition probabilities $P^*_{ij}$ are defined by

$$\pi_i P^*_{ij} = \pi_j P_{ji}$$

There is nothing mysterious here; this is just

$$\Pr\{X_n = j,\, X_{n+1} = i\} = \Pr\{X_{n+1} = i\}\,\Pr\{X_n = j \mid X_{n+1} = i\} = \Pr\{X_n = j\}\,\Pr\{X_{n+1} = i \mid X_n = j\}$$

This also holds for the embedded chain of a Markov process.
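The backward-chain definition can be made concrete with a small numeric example. This sketch uses a made-up 3-state transition matrix (illustrative, not from the lecture), computes $\pi$ as the left eigenvector for eigenvalue 1, forms $P^*_{ij} = \pi_j P_{ji}/\pi_i$, and checks that $P^*$ is itself a transition matrix satisfying the joint-probability identity:

```python
import numpy as np

# Made-up 3-state chain in steady state (illustrative, not from the lecture).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.2, 0.3]])

# Steady-state pi solves pi P = pi: left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

Pstar = (pi[None, :] * P.T) / pi[:, None]   # P*_ij = pi_j P_ji / pi_i

# P* is a transition matrix, and pi_i P*_ij = pi_j P_ji recovers the two
# factorizations of Pr{X_n = j, X_{n+1} = i}.
assert np.allclose(Pstar.sum(axis=1), 1.0)
assert np.allclose(pi[:, None] * Pstar, pi[None, :] * P.T)
```

The row sums of $P^*$ equal 1 precisely because $\sum_j \pi_j P_{ji} = \pi_i$, i.e. because $\pi$ is stationary.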
