# Summer 2007 STAT333 Summary of the Markov Chain

Suppose we have a sequence of random variables $\{X_1, X_2, \ldots\} = \{X_n\}_{n=1}^{\infty}$, which is also called a stochastic process.

♣ **Markov chain.**

(a) State space ($S$): all the possible values of $\{X_n\}_{n=1}^{\infty}$.

(b) State: $i \in S$ is called state $i$.

(c) The stochastic process $\{X_n\}_{n=1}^{\infty}$ is a Markov chain if it satisfies

$$P(X_{n+1} = j \mid X_n = i,\, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1) = P(X_{n+1} = j \mid X_n = i) = P(X_1 = j \mid X_0 = i) \quad (\text{denoted } p_{ij}).$$

The intuitive understanding is that, given the current information $X_n$, the next step $X_{n+1}$ does not depend on the history $(X_1, \ldots, X_{n-1})$.

♣ **Transition matrix.**

(a) One-step transition matrix $P = (p_{ij})_{i,j \in S}$.

(b) $n$-step transition matrix $P^{(n)} = (p^{(n)}_{ij})_{i,j \in S}$, where $p^{(n)}_{ij} = P(X_n = j \mid X_0 = i)$.

(c) Chapman-Kolmogorov (C-K) equation:

$$p^{(n+m)}_{ij} = \sum_{k \in S} p^{(n)}_{ik}\, p^{(m)}_{kj} \quad \text{(pointwise form)}, \qquad \text{or} \qquad P^{(n)} = P^n \quad \text{(matrix form)}.$$
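The Markov property above says the next state is drawn using only the current state and the row $p_{i\cdot}$ of the transition matrix. A minimal simulation sketch, using a made-up two-state matrix (the states and probabilities are illustrative, not from the course notes):

```python
import random

# Hypothetical two-state chain (states 0 and 1); the entries of P
# are an invented example, chosen only so each row sums to 1.
P = [[0.8, 0.2],
     [0.4, 0.6]]

def step(i, u):
    """One transition from state i: return j with probability p_ij,
    by inverting the cumulative row sums against a uniform draw u in [0, 1)."""
    cum = 0.0
    for j, p in enumerate(P[i]):
        cum += p
        if u < cum:
            return j
    return len(P[i]) - 1  # guard against floating-point round-off

def simulate(x0, n, rng):
    """Generate X_1, ..., X_n given X_0 = x0.  Each step uses only the
    current state x -- exactly the Markov property in the definition."""
    path, x = [], x0
    for _ in range(n):
        x = step(x, rng.random())
        path.append(x)
    return path

print(simulate(0, 10, random.Random(0)))
```

Because `step` looks only at its first argument, the simulated path depends on the history $(X_1, \ldots, X_{n-1})$ only through the current state, matching condition (c).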
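The C-K equation can be checked numerically: the pointwise sum $\sum_k p^{(n)}_{ik} p^{(m)}_{kj}$ is just the $(i,j)$ entry of the matrix product $P^n P^m$, which must equal $P^{n+m}$. A sketch with an invented 3-state matrix:

```python
# Numerical check of Chapman-Kolmogorov: P^(n+m) = P^(n) P^(m), i.e. P^n P^m = P^(n+m).
# The 3-state matrix below is a made-up example, not from the course notes.

def matmul(A, B):
    """Plain matrix product for lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matpow(P, n):
    """n-step transition matrix P^(n) = P^n (n >= 1)."""
    R = P
    for _ in range(n - 1):
        R = matmul(R, P)
    return R

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

n, m = 2, 3
lhs = matpow(P, n + m)                    # P^(n+m)
rhs = matmul(matpow(P, n), matpow(P, m))  # P^n P^m
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(3) for j in range(3)))  # prints True
```

The equality holds for any stochastic matrix because matrix multiplication is associative; the C-K equation is this fact stated entrywise.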