Math 216 Notes, Fall 2010
Jonathan C. Mattingly
September 15, 2010

1 Finite State Markov Chains

A discrete time stochastic process $(X_n)_{n \ge 0}$ is a collection of random variables indexed by the nonnegative integers $\mathbb{Z}_+ = \{ n \in \mathbb{Z} : n \ge 0 \}$. The set $I$ in which the $X_n$ take values is called the state space of the stochastic process.

Definition. A stochastic process $(X_n)_{n \ge 0}$ is a Markov chain if
\[
P(X_{n+1} = j \mid X_n = i_n, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i_n)
\]
for all $j, i_n, \dots, i_0 \in I$.

Definition. A Markov chain is time homogeneous if for all $k \in \mathbb{Z}_+$ and $i, j \in I$,
\[
P(X_{k+1} = i \mid X_k = j) = P(X_1 = i \mid X_0 = j).
\]

Unless we say otherwise, we will always assume that all Markov chains are time homogeneous. In such cases we will write
\[
p_n(i,j) = P(X_n = j \mid X_0 = i).
\]
By the Markov property one has
\[
P(X_n = x_n, X_{n-1} = x_{n-1}, \dots, X_1 = x_1 \mid X_0 = x_0) = p_1(x_0, x_1)\, p_1(x_1, x_2) \cdots p_1(x_{n-1}, x_n).
\]

We will begin by concentrating on stochastic processes on a finite state space $I$. Without loss of generality, we can take the state space to be $I = \{0, 1, \dots, N-1\}$.

1.1 Markov chains and matrices

There is a very fruitful correspondence between finite state Markov chains and matrices. We begin by considering random variables on the state space $I = \{0, \dots, N-1\}$. Such a random variable $X$ can be specified completely by $N$ nonnegative numbers $\{\mu_i : i \in I\}$ such that $P(X = i) = \mu_i$. Clearly we have that $\sum_{i \in I} \mu_i = 1$. It is convenient to organize the $\mu_i$ in a row vector $\mu = (\mu_0, \dots, \mu_{N-1}) \in \mathbb{R}^N$. The vector $\mu$ is called the distribution of the random variable $X$. With this in mind we make the following definition.

Definition. A row vector $\mu = (\mu_0, \dots, \mu_{N-1}) \in \mathbb{R}^N$ is called a distribution if $\mu_i \ge 0$ for all $i$. If in addition $\sum_{i=0}^{N-1} \mu_i = 1$, it is called a probability distribution.

Let $P \in \mathbb{R}^{N \times N}$ be a matrix with nonnegative entries. We will write $P_{i,j}$ for the $(i,j)$th entry of $P$, that is to say
\[
P = \begin{pmatrix}
p_{0,0} & \cdots & p_{0,N-1} \\
\vdots & \ddots & \vdots \\
p_{N-1,0} & \cdots & p_{N-1,N-1}
\end{pmatrix}.
\]
Definition.
A square matrix $P$ with nonnegative entries is called a stochastic matrix if all of its rows sum to one; that is to say, for all $i$, $\sum_j P_{i,j} = 1$.

Stochastic matrices are in one-to-one correspondence with time homogeneous Markov chains on a finite state space. The correspondence is given by
\[
P_{i,j} = P(X_1 = j \mid X_0 = i).
\]
It then follows that the distribution of the random variable $X_n$, when conditioned to have $X_0 = i$, is given by the row vector $(P^n)_{i,*}$, by which we mean the $i$th row of the matrix $P^n$. In other words,
\[
\big( (P^n)_{i,0}, \dots, (P^n)_{i,N-1} \big). \tag{1}
\]
If we denote by $e(i)$ the row vector with a 1 in the $i$th slot and 0 in the remaining slots, then (1) can be written compactly as $e(i) P^n$. If instead of starting from a deterministic initial condition, we let $X_0$ be random with distribution given by the probability distribution $\mu = (\mu_0, \dots, \mu_{N-1})$. This means that $P(X_0 = i) = \mu_i$ for each $i \in I$.
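The correspondence above can be sketched numerically. The following is a minimal illustration using NumPy; the specific 3-state transition matrix $P$ and initial distribution $\mu$ are made-up values for illustration, not taken from the notes.

```python
import numpy as np

# Hypothetical 3-state chain on I = {0, 1, 2}; P[i, j] = P(X_1 = j | X_0 = i).
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
# Stochastic matrix: nonnegative entries and every row sums to one.
assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)

n = 4
Pn = np.linalg.matrix_power(P, n)  # P^n

# Started deterministically at state i, the distribution of X_n is the
# i-th row of P^n, written compactly as e(i) P^n.
i = 0
e_i = np.eye(3)[i]        # row vector with a 1 in slot i, 0 elsewhere
dist_from_i = e_i @ Pn
assert np.allclose(dist_from_i, Pn[i, :])

# Random initial condition with a (made-up) probability distribution mu:
mu = np.array([0.2, 0.5, 0.3])  # mu[i] = P(X_0 = i)
dist_n = mu @ Pn                # distribution of X_n
assert np.isclose(dist_n.sum(), 1.0)  # still a probability distribution
```

Note that row vectors multiply the matrix from the left, matching the convention in the text: multiplying a probability distribution by a stochastic matrix yields another probability distribution.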