022107 - Lecture 9 Recall Limit theorems for Markov chains...

Lecture 9: 02/21/2007

Recall: limit theorems for Markov chains. The first two concerned the limiting behavior of N_ij(n)/n, where N_ij(n) = #(visits to j, starting from i, in time <= n).

Introduced the P(i,j) notation: P(i,j) = Prob(1-step transition i -> j) = the number next to the arrow (i) --> (j); P(i,j) = 0 if there is no arrow. P^(m)(i,j) = Prob{m-step transition i -> j}.

Had, for all n > 0 (the Chapman-Kolmogorov recursion):

    P^(n+1)(i,j) = sum_k P^(n)(i,k) P(k,j),   all i, j.

Special case: the state space is finite, say M states. Form an (M x M) matrix P whose (i,j) entry is P(i,j). <word crash, lost some notes…>

Terminology: an (M x M) matrix P having the circled properties (entries >= 0, each row summing to 1) is a stochastic matrix. Note: if 1 = the M-vector of all 1's, then P1 = 1. So 1 is an eigenvalue of P, and 1 is a corresponding (right) eigenvector.

Definition: for any finite- or countable-state MC with transition probabilities P(i,j), π is called a stationary probability distribution for the Markov chain if:

    π = (π_i), i in the state space (think of it as a possibly infinite vector),
    π_i >= 0 for all i in the state space,
    sum over i of π_i = 1,
    and, most importantly, sum_i π_i P(i,j) = π_j for all j.

NB: if M = size of the state space (M < infinity), then the stationarity condition reads: πP = π, with π written as a row M-vector.
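The finite-state case above can be checked numerically. The sketch below (using NumPy; the 3-state matrix is a made-up example, not one from the lecture) verifies the stochastic-matrix properties, computes m-step transition probabilities as matrix powers per the recursion P^(n+1)(i,j) = sum_k P^(n)(i,k) P(k,j), and finds a stationary distribution as a left eigenvector of P for eigenvalue 1:

```python
import numpy as np

# A small 3-state chain (illustrative example, not from the notes).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Stochastic-matrix check: nonnegative entries, rows sum to 1,
# so P @ 1 = 1, i.e. 1 is a right eigenvector with eigenvalue 1.
ones = np.ones(3)
assert np.all(P >= 0) and np.allclose(P @ ones, ones)

# The recursion P^(n+1)(i,j) = sum_k P^(n)(i,k) P(k,j) says the
# m-step transition probabilities are the entries of the matrix power P^m.
P5 = np.linalg.matrix_power(P, 5)

# A stationary distribution satisfies pi P = pi: pi is a left
# eigenvector of P for eigenvalue 1, i.e. an eigenvector of P^T.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))      # pick the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                     # normalize so sum(pi_i) = 1
assert np.allclose(pi @ P, pi)         # stationarity: pi P = pi
```

Since every entry of this P is positive, Perron-Frobenius guarantees the eigenvector can be scaled to have all-positive entries, which the normalization by the sum accomplishes.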


This note was uploaded on 09/14/2007 for the course ECE 496 taught by Professor Delchamps during the Spring '07 term at Cornell.

