# Lecture 6: Markov Chains (STAT 150, Spring 2006)

Lecture 6: Markov Chains. STAT 150, Spring 2006. Lecturer: Jim Pitman. Scribe: Alex Michalka.

## Markov Chains

- Discrete time
- Discrete (finite or countable) state space S
- Process {X_n}
- Homogeneous transition probabilities
- Transition matrix P = {P(i, j); i, j ∈ S}

P(i, j), the (i, j)th entry of the matrix P, represents the probability of moving to state j given that the chain is currently in state i.

**Markov Property:**

P(X_{n+1} = i_{n+1} | X_n = i_n, ..., X_0 = i_0) = P(i_n, i_{n+1})

This means that the states X_{n-1}, ..., X_0 don't matter: the transition probabilities depend only on the current state of the process. So

P(X_{n+1} = i_{n+1} | X_n = i_n, ..., X_0 = i_0) = P(X_{n+1} = i_{n+1} | X_n = i_n) = P(i_n, i_{n+1})

To calculate the probability of a path, multiply the corresponding transition probabilities. Given the starting state X_0 = i_0,

P(X_1 = i_1, X_2 = i_2, ..., X_n = i_n | X_0 = i_0) = P(i_0, i_1) · P(i_1, i_2) · ... · P(i_{n-1}, i_n)

**Example** (iid sequence): {X_n} iid with P(X_n = j) = p(j) and Σ_j p(j) = 1. Then P(i, j) = p(j): the next state does not depend on the current one.

**Example** (random walk): S = Z (the integers), X_n = i_0 + D_1 + D_2 + ... + D_n, where the D_i are iid with P(D_i = j) = p(j). Then P(i, j) = p(j − i).

**Example** (random walk absorbed at 0): the same random walk, but it stops when it hits 0:

P(i, j) = p(j − i) if i ≠ 0;  P(0, 0) = 1;  P(0, j) = 0 for j ≠ 0.
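As a concrete illustration of the path-probability product and of an absorbing state, here is a minimal sketch in Python. It is not from the lecture: the specific matrix `P` (a walk on {0, 1, 2, 3} absorbed at 0, with a reflecting boundary at 3 to keep the state space finite) and the helper name `path_prob` are illustrative choices.

```python
import numpy as np

# Transition matrix for a simple random walk on {0, 1, 2, 3}:
# from i in {1, 2} move to i-1 or i+1 with probability 1/2 each,
# state 0 is absorbing (P(0, 0) = 1), and state 3 reflects to 2.
# These boundary choices are illustrative, not from the lecture.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # state 0: absorbing
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],   # state 3: reflect to 2
])

def path_prob(P, path):
    """Probability of following `path` given the chain starts at path[0]:
    the product P(i_0, i_1) * P(i_1, i_2) * ... * P(i_{n-1}, i_n)."""
    prob = 1.0
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]
    return prob

print(path_prob(P, [2, 1, 0]))      # 0.5 * 0.5 = 0.25
print(path_prob(P, [2, 1, 0, 1]))   # 0.0: the chain cannot leave state 0
```

Note that each row of `P` sums to 1, as every row of a transition matrix must: from any state, the chain goes *somewhere* with total probability 1.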

*This note was uploaded on 10/02/2009 for the course STAT 87528, taught by Professor Jim Pitman, during the Spring '09 term at Berkeley.*
