MIT6_262S11_lec20 — 6.262 Discrete Stochastic Processes

L20 (4/25/11): Markov Processes and Random Walks

Outline:
- Review: steady state for Markov processes
- Reversibility for Markov processes
- Random walks
- Queueing delay in a G/G/1 queue
- Detection, decisions, and hypothesis testing

Review: steady state for Markov processes

If the embedded chain of a Markov process (MP) is positive recurrent, then

p_j = \frac{\pi_j/\nu_j}{\sum_k \pi_k/\nu_k}; \qquad \lim_{t\to\infty} \frac{M_i(t)}{t} = \frac{1}{\sum_k \pi_k/\nu_k} \quad \text{WP1},

where M_i(t)/t converges WP1 to the sample-path average rate at which transitions occur and p_j is the sample-path average fraction of time spent in state j (WP1), independent of the starting state i.

If \sum_k \pi_k/\nu_k = \infty, the transition rate M_i(t)/t \to 0 and the process has no meaningful steady state. Otherwise the steady state uniquely satisfies

p_j \nu_j = \sum_i p_i q_{ij}; \qquad p_j > 0 \text{ for all } j; \qquad \sum_j p_j = 1.

This says that the rate out of each state equals the rate into it. For a birth/death process, p_j q_{j,j+1} = p_{j+1} q_{j+1,j}.
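As a complement to the balance equations above, here is a minimal numerical sketch (not part of the original slides): it solves p_j \nu_j = \sum_i p_i q_{ij} together with \sum_j p_j = 1 for a small truncated birth/death process, then checks that rate out equals rate in and that detailed balance holds. The size n and the rates lam, mu are illustrative choices, not values from the lecture.

```python
import numpy as np

# Minimal sketch (not from the lecture): steady state of a small
# truncated birth/death Markov process with illustrative rates.
n, lam, mu = 5, 1.0, 2.0
Q = np.zeros((n, n))                      # Q[i, j] = q_{ij} for i != j
for i in range(n - 1):
    Q[i, i + 1] = lam                     # birth rate q_{i,i+1}
    Q[i + 1, i] = mu                      # death rate q_{i+1,i}

nu = Q.sum(axis=1)                        # nu_j: total rate of leaving state j

# Balance equations sum_i p_i q_{ij} - p_j nu_j = 0, plus normalization.
A = np.vstack([Q.T - np.diag(nu), np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)

print("p =", p)
print("rate out:", p * nu)                # p_j nu_j
print("rate in :", p @ Q)                 # sum_i p_i q_{ij}
# Birth/death detailed balance: p_j q_{j,j+1} = p_{j+1} q_{j+1,j}
print(np.allclose(p[:-1] * np.diag(Q, 1), p[1:] * np.diag(Q, -1)))
```

Appending the normalization row and solving by least squares is just a convenient way to handle the redundancy among the balance equations; any n - 1 of them plus the normalization would do.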
For an irreducible process, if there is a solution to the equations

p_j \nu_j = \sum_i p_i q_{ij}; \qquad p_j > 0 \text{ for all } j; \qquad \sum_j p_j = 1,

and if \sum_i p_i \nu_i < \infty, then the embedded chain is positive recurrent and

\pi_j = \frac{p_j \nu_j}{\sum_i p_i \nu_i}; \qquad \sum_i \pi_i/\nu_i = \frac{1}{\sum_j p_j \nu_j}.

If \sum_i p_i \nu_i = \infty, then each \pi_j = 0, the embedded chain is either transient or null recurrent, and the notion of steady state makes no sense.

Example: the hyperactive birth/death process.

[Figure: embedded chain for the hyperactive birth/death process — states 0, 1, 2, 3, ... with P_{01} = 1 and, for i >= 1, P_{i,i+1} = 0.6, P_{i,i-1} = 0.4.]

[Figure: the same process in terms of the transition rates {q_ij} — q_{12} = 1.2, q_{23} = 2.4, ... and q_{10} = 0.8, q_{21} = 1.6, q_{32} = 3.2, ..., corresponding to holding rates \nu_j = 2^j.]

Using p_j q_{j,j+1} = p_{j+1} q_{j+1,j}, we see that p_{j+1} = (3/4) p_j, so

p_j = (1/4)(3/4)^j \qquad \text{and} \qquad \sum_j p_j \nu_j = \infty.

If we truncate this process to k states (0, 1, ..., k-1), then

p_j = \frac{(1/4)(3/4)^j}{1 - (3/4)^k}; \qquad \pi_j = \frac{(1/3)(2/3)^{k-1-j}}{1 - (2/3)^k};

\sum_j p_j \nu_j = \frac{(3/2)^k - 1}{2\left[1 - (3/4)^k\right]} \to \infty \quad \text{as } k \to \infty.
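The truncated-chain formulas above are easy to sanity-check numerically. The short sketch below (not from the lecture; the helper truncated is ours) builds p_j from the ratio p_{j+1} = (3/4)p_j with \nu_j = 2^j, compares p_j and \pi_j with the closed forms as reconstructed above, and shows \sum_j p_j \nu_j growing roughly like (3/2)^k / 2.

```python
import numpy as np

# Numerical sanity check (not from the lecture) of the truncated
# hyperactive birth/death process: nu_j = 2^j and p_{j+1} = (3/4) p_j.
def truncated(k):
    j = np.arange(k)
    nu = 2.0 ** j
    p = 0.75 ** j
    p /= p.sum()                                   # time-average fractions p_j
    assert np.allclose(p, 0.25 * 0.75**j / (1 - 0.75**k))
    pi = p * nu / (p * nu).sum()                   # embedded-chain probabilities
    assert np.allclose(pi, (1/3) * (2/3)**(k - 1 - j) / (1 - (2/3)**k))
    return (p * nu).sum()

for k in (5, 10, 20, 40):
    print(f"k={k:3d}   sum_j p_j nu_j = {truncated(k):.3e}")
# sum_j p_j nu_j grows like (3/2)^k / 2, so in the untruncated process it is
# infinite and the embedded chain is transient or null recurrent.
```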
Reversibility for Markov processes

For any Markov chain in steady state, the backward transition probabilities P*_{ij} are defined by

\pi_i P^*_{ij} = \pi_j P_{ji}.

There is nothing mysterious here; it is just

\Pr\{X_n = j, X_{n+1} = i\} = \Pr\{X_{n+1} = i\} \Pr\{X_n = j \mid X_{n+1} = i\} = \Pr\{X_n = j\} \Pr\{X_{n+1} = i \mid X_n = j\}.

This also holds for the embedded chain of a Markov process.
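The definition of the backward chain can also be checked numerically. The sketch below is not part of the notes; the 3-state matrix P is an arbitrary illustrative choice. It computes the steady-state vector \pi, forms P^*_{ij} = \pi_j P_{ji}/\pi_i, and verifies that P^* is a stochastic matrix describing the same joint probabilities as P.

```python
import numpy as np

# Illustrative 3-state transition matrix (rows sum to 1), chosen arbitrarily.
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Steady state: pi P = pi, sum(pi) = 1 (left eigenvector for eigenvalue 1).
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Backward transition probabilities: P*_{ij} = pi_j P_{ji} / pi_i.
P_back = (pi[None, :] * P.T) / pi[:, None]

print(P_back.sum(axis=1))                       # each row sums to 1
# Forward and backward descriptions give the same joint probabilities:
# pi_i P*_{ij} = pi_j P_{ji}
print(np.allclose(pi[:, None] * P_back, (pi[:, None] * P).T))
```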