1.10 Chapter Summary

A Markov chain with transition probability $p$ is defined by the property that, given the present state, the rest of the past is irrelevant for predicting the future:
$$P(X_{n+1} = y \mid X_n = x, X_{n-1} = x_{n-1}, \ldots, X_0 = x_0) = p(x, y)$$
The $m$-step transition probability $p^m(x, y) = P(X_{n+m} = y \mid X_n = x)$ is the $m$th power of the matrix $p$.

Recurrence and transience

The first thing we need to determine about a Markov chain is which states are recurrent and which are transient. To do this we let $T_y = \min\{n \ge 1 : X_n = y\}$ and let
$$\rho_{xy} = P_x(T_y < \infty)$$
When $x \ne y$ this is the probability that $X_n$ ever visits $y$ starting from $x$. When $x = y$ it is the probability that $X_n$ returns to $y$ when it starts at $y$. We restrict to times $n \ge 1$ in the definition of $T_y$ so that we can say: $y$ is recurrent if $\rho_{yy} = 1$ and transient if $\rho_{yy} < 1$.

Transient states in a finite state space can all be identified using

Theorem 1.3. If $\rho_{xy} > 0$ but $\rho_{yx} = 0$, then $x$ is transient.

Once the transient states are removed we can use

Theorem 1.4. If $C$ is a finite closed and irreducible set, then all states in $C$ are recurrent.

Here $A$ is closed if $x \in A$ and $y \notin A$ implies $p(x, y) = 0$, and $B$ is irreducible if $x, y \in B$ implies $\rho_{xy} > 0$. The keys to the proof of Theorem 1.4 are that: (i) if $x$ is recurrent and $\rho_{xy} > 0$, then $y$ is recurrent, and (ii) in a finite closed set there has to be at least one recurrent state.
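The fact that the $m$-step transition probability is the $m$th matrix power can be checked numerically. The sketch below uses a hypothetical two-state chain (the matrix `p` is an assumption, not from the text) and verifies the Chapman-Kolmogorov relation $p^{m_1 + m_2} = p^{m_1} p^{m_2}$:

```python
import numpy as np

# Hypothetical two-state chain; each row of p sums to 1.
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# m-step transition probabilities: the m-th matrix power of p.
m = 3
p_m = np.linalg.matrix_power(p, m)

# Chapman-Kolmogorov check: p^3 = p^2 @ p.
assert np.allclose(p_m, np.linalg.matrix_power(p, 2) @ p)

# Each row of p^m is again a probability distribution.
assert np.allclose(p_m.sum(axis=1), 1.0)

# Entry (0, 1) is P(X_{n+3} = 1 | X_n = 0).
print(p_m[0, 1])
```

Any stochastic matrix would do here; the point is only that iterating the one-step matrix gives the multi-step transition probabilities.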
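Theorem 1.3 gives a purely graph-theoretic test for transience in a finite chain: $\rho_{xy} > 0$ exactly when $y$ is reachable from $x$ along edges with positive one-step probability. The sketch below applies that test to a hypothetical four-state chain (the matrix `p` and the helper `reachable` are assumptions for illustration, not from the text):

```python
import numpy as np

# Hypothetical 4-state chain: {0, 1} is a closed irreducible set;
# states 2 and 3 can reach {0, 1} but cannot be reached back.
p = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.7, 0.0, 0.0],
    [0.2, 0.0, 0.5, 0.3],
    [0.0, 0.1, 0.4, 0.5],
])

def reachable(p):
    """R[x, y] is True iff rho_xy > 0, i.e. y is reachable from x
    in one or more steps through positive-probability transitions."""
    R = p > 0
    for k in range(len(p)):
        # Floyd-Warshall style transitive closure: allow paths via k.
        R = R | (R[:, k:k+1] & R[k:k+1, :])
    return R

R = reachable(p)

# Theorem 1.3: if rho_xy > 0 but rho_yx = 0, then x is transient.
transient = [x for x in range(len(p))
             if any(R[x, y] and not R[y, x] for y in range(len(p)))]
print(transient)  # → [2, 3]
```

Removing the transient states leaves the closed irreducible set $\{0, 1\}$, whose states are all recurrent by Theorem 1.4.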
Spring '10 · DURRETT · Probability