EMSE 208 Lecture 5 - Markov Chains Applications-1

1 Markov Chains

Time to revisit a recurrent state

The $\pi_j$ are often called stationary probabilities. If we define

$$m_j = E[\#\text{ transitions to return to } j \mid \text{the MC starts in } j],$$

then $\pi_j = 1/m_j$.

Patterns

Consider a MC with transition probabilities $P_{i,j}$ and stationary probabilities $\pi_j$. Starting in state $r$, we are interested in determining the expected number of transitions until the pattern $i_1, \dots, i_k$ appears. Define

$$N(i_1, \dots, i_k) = \min\{n \ge k : X_{n-k+1} = i_1, \dots, X_n = i_k\},$$

and we are interested in $E[N(i_1, \dots, i_k) \mid X_0 = r]$ (even if $r = i_1$, the initial state $X_0$ does not count as part of the pattern).
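As a quick sanity check on $\pi_j = 1/m_j$, here is a minimal simulation sketch in Python; the 2-state transition matrix P is a hypothetical example, not from the lecture.

```python
# Minimal sketch: check pi_j = 1/m_j on a small hypothetical chain.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3],          # hypothetical 2-state transition matrix
              [0.4, 0.6]])

# Stationary probabilities: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Estimate m_0, the mean number of transitions to return to state 0.
n_cycles, total, state = 20_000, 0, 0
for _ in range(n_cycles):
    while True:                    # run one excursion until we re-enter 0
        state = rng.choice(2, p=P[state])
        total += 1
        if state == 0:
            break
print(pi[0], n_cycles / total)     # pi_0 and 1/m_0 should agree closely
```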
2 Markov Chains

Patterns

Let $\mu(i, i_1)$ denote the mean number of transitions it takes to enter state $i_1$ given that we start in state $i$. Using conditional probability arguments,

$$\mu(i, i_1) = 1 + \sum_{j \ne i_1} P_{i,j}\, \mu(j, i_1) \quad \text{for all } i.$$

For a given MC $\{X_n, n \ge 0\}$, define a corresponding k-chain MC whose state at any time is the sequence of the most recent $k$ states. E.g., with $k = 3$ and $X_1 = 1$, $X_2 = 4$, $X_3 = 1$, $X_4 = 1$, the k-chain state at time 4 is $(4, 1, 1)$.

Let $\pi(i_1, \dots, i_k)$ be the stationary probability of $(i_1, \dots, i_k)$ for the k-chain:

$$\pi(i_1, \dots, i_k) = \pi_{i_1} P_{i_1, i_2} \cdots P_{i_{k-1}, i_k}.$$
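The recursion for $\mu(\cdot, i_1)$ is a finite linear system, so it can be solved directly. A minimal sketch continuing the hypothetical chain above; mean_hitting_times is an illustrative name, not lecture notation:

```python
# Minimal sketch: solve mu(i, t) = 1 + sum_{j != t} P[i,j] mu(j, t)
# as the linear system (I - Q) mu = 1, where Q is P with column t zeroed.
import numpy as np

def mean_hitting_times(P, target):
    """mu[i] = mean number of transitions to enter `target` from state i."""
    n = P.shape[0]
    Q = P.copy()
    Q[:, target] = 0.0             # a transition into `target` ends the count
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

P = np.array([[0.7, 0.3],          # same hypothetical chain as above
              [0.4, 0.6]])
mu = mean_hitting_times(P, target=0)
print(mu)                          # mu[0] is the mean return time, i.e. 1/pi_0
```

Note that $\mu(i_1, i_1)$ returned here is the mean return time $m_{i_1}$, consistent with $\pi_{i_1} = 1/m_{i_1}$ from the first slide.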
3 Markov Chains

Patterns

$$E[\text{transitions between visits to } i_1, \dots, i_k] = \frac{1}{\pi(i_1, \dots, i_k)}, \qquad \pi(i_1, \dots, i_k) = \pi_{i_1} P_{i_1, i_2} \cdots P_{i_{k-1}, i_k}.$$

Further, let $A(i_1, \dots, i_m)$ denote the additional number of transitions needed until the pattern occurs, given that the first $m$ transitions have taken the chain into states $X_1 = i_1, \dots, X_m = i_m$, $m < k$.

CASE 1: The pattern has no overlaps, where an overlap of size $j$ ($j < k$) means that the last $j$ states of the pattern are the same as the first $j$. Starting at $X_0 = i_k$, the time until the pattern appears is then distributed as the time between successive visits, so

$$E[N(i_1, \dots, i_k) \mid X_0 = i_k] = \frac{1}{\pi(i_1, \dots, i_k)} = \mu(i_k, i_1) + E[A(i_1)],$$

and solving yields $E[A(i_1)] = 1/\pi(i_1, \dots, i_k) - \mu(i_k, i_1)$.
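The k-chain stationary probability is a plain product, so it is one short helper. A minimal sketch; the name pattern_stationary_prob and the numeric values are illustrative assumptions:

```python
# Minimal sketch: pi(i_1,...,i_k) = pi_{i_1} * P[i_1,i_2] * ... * P[i_{k-1},i_k].
import numpy as np

def pattern_stationary_prob(P, pi, pattern):
    """Long-run rate at which the k-chain occupies state (i_1,...,i_k)."""
    prob = pi[pattern[0]]
    for a, b in zip(pattern[:-1], pattern[1:]):
        prob *= P[a, b]
    return prob

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = np.array([4 / 7, 3 / 7])      # stationary vector of this hypothetical P
print(pattern_stationary_prob(P, pi, (0, 1, 0)))   # pattern 0,1,0
```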
4 Markov Chains

Patterns

CASE 1 continued: for a general starting state $r$,

$$E[N(i_1, \dots, i_k) \mid X_0 = r] = \mu(r, i_1) + E[A(i_1)] = \mu(r, i_1) + \frac{1}{\pi(i_1, \dots, i_k)} - \mu(i_k, i_1).$$

CASE 2: Suppose the largest overlap has size $s$. Then

$$E[A(i_1, \dots, i_s)] = \frac{1}{\pi(i_1, \dots, i_k)}.$$

But $N(i_1, \dots, i_k) = N(i_1, \dots, i_s) + A(i_1, \dots, i_s)$, so

$$E[N(i_1, \dots, i_k) \mid X_0 = r] = E[N(i_1, \dots, i_s) \mid X_0 = r] + E[A(i_1, \dots, i_s)],$$

and we continue stripping overlaps until none remain, then apply CASE 1.
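The two cases combine into a short recursion. A minimal sketch, reusing mean_hitting_times and pattern_stationary_prob from the sketches above (all names are illustrative):

```python
# Minimal sketch of the CASE 1 / CASE 2 recursion for E[N(i_1..i_k) | X_0 = r].
def largest_overlap(pattern):
    # Largest s < k with (i_1,...,i_s) == (i_{k-s+1},...,i_k), or 0 if none.
    for s in range(len(pattern) - 1, 0, -1):
        if pattern[:s] == pattern[-s:]:
            return s
    return 0

def expected_pattern_time(P, pi, pattern, r):
    inv_pi = 1.0 / pattern_stationary_prob(P, pi, pattern)
    s = largest_overlap(pattern)
    if s == 0:
        # CASE 1: mu(r, i_1) + 1/pi(pattern) - mu(i_k, i_1)
        mu = mean_hitting_times(P, target=pattern[0])
        return mu[r] + inv_pi - mu[pattern[-1]]
    # CASE 2: strip the largest overlap; E[A(i_1..i_s)] = 1/pi(pattern)
    return expected_pattern_time(P, pi, pattern[:s], r) + inv_pi

print(expected_pattern_time(P, pi, (0, 1, 0, 1), r=0))   # example pattern
```

The recursion also handles a single-state pattern correctly: with $k = 1$, CASE 1 gives $\mu(r, i_1) + 1/\pi_{i_1} - \mu(i_1, i_1) = \mu(r, i_1)$, as it should.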
5 Markov Chains

Patterns

Example (CASE 2): the pattern 1,2,3,1,2,3,1,2. Its largest overlap is (1,2,3,1,2), of size 5; the largest overlap of (1,2,3,1,2) is (1,2), of size 2; and (1,2) has no overlap. So

$$E[N(1,2,3,1,2,3,1,2) \mid X_0 = r] = E[N(1,2,3,1,2) \mid X_0 = r] + \frac{1}{\pi(1,2,3,1,2,3,1,2)}$$
$$= E[N(1,2) \mid X_0 = r] + \frac{1}{\pi(1,2,3,1,2)} + \frac{1}{\pi(1,2,3,1,2,3,1,2)}$$
$$= \mu(r,1) + \frac{1}{\pi(1,2)} - \mu(2,1) + \frac{1}{\pi(1,2,3,1,2)} + \frac{1}{\pi(1,2,3,1,2,3,1,2)}.$$

If the generated data is a sequence of independent and identically distributed random variables, with each value equal to $j$ with probability $P_j$, then $P_{i,j} = \pi_j = P_j$ and $\mu(i,j) = 1/P_j$.
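In the i.i.d. case the $\mu$ terms in the example cancel, since $\mu(r,1) = \mu(2,1) = 1/P_1$, leaving only the $1/\pi(\cdot)$ terms. A minimal sketch with hypothetical value probabilities, plus a rough Monte Carlo check:

```python
# Minimal sketch: the example pattern 1,2,3,1,2,3,1,2 with i.i.d. data,
# where P[i,j] = pi_j = P_j and mu(i,j) = 1/P_j, so the mu terms cancel.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

p = {1: 0.5, 2: 0.3, 3: 0.2}       # hypothetical value probabilities

def iid_pattern_prob(pattern):
    out = 1.0
    for v in pattern:
        out *= p[v]
    return out

analytic = (1 / iid_pattern_prob((1, 2))
            + 1 / iid_pattern_prob((1, 2, 3, 1, 2))
            + 1 / iid_pattern_prob((1, 2, 3, 1, 2, 3, 1, 2)))
print(analytic)                    # about 7636 for these probabilities

# Rough Monte Carlo check: one long i.i.d. stream per replication.
rng = np.random.default_rng(1)
target = np.array([1, 2, 3, 1, 2, 3, 1, 2])
hits = []
for _ in range(300):
    x = rng.choice([1, 2, 3], size=100_000, p=[0.5, 0.3, 0.2])
    windows = sliding_window_view(x, len(target))
    first = np.nonzero((windows == target).all(axis=1))[0][0]
    hits.append(first + len(target))   # assumes the pattern occurs in-stream
print(np.mean(hits))                   # should land near the analytic value
```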
6 Markov Chains

Prop 4.3: Let $\{X_n$ …