An idiosyncratic introduction to stochastic processes
Class notes for Math 216 - Fall 2010
Jonathan C. Mattingly
September 15, 2010

1 Finite State Markov Chains

A discrete time stochastic process $(X_n)_{n \ge 0}$ is a collection of random variables indexed by the non-negative integers $\mathbb{Z}_+ = \{ n \in \mathbb{Z} : n \ge 0 \}$. The set in which the $X_n$ take values is called the state space of the stochastic process.

Definition. A stochastic process $(X_n)_{n \ge 0}$ is a Markov chain if
$$P(X_{n+1} = j \mid X_n = i_n, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i_n)$$
for all $j, i_n, \dots, i_0 \in I$.

Definition. A Markov chain is time homogeneous if for all $k \in \mathbb{Z}_+$ and $i, j \in I$
$$P(X_{k+1} = i \mid X_k = j) = P(X_1 = i \mid X_0 = j).$$

Unless we say otherwise we will always assume that all Markov chains are time homogeneous. In such cases we will write
$$p_n(i,j) = P(X_n = j \mid X_0 = i).$$
By the Markov property one has
$$P(X_n = x_n, X_{n-1} = x_{n-1}, \dots, X_1 = x_1 \mid X_0 = x_0) = p_1(x_{n-1}, x_n)\, p_1(x_{n-2}, x_{n-1}) \cdots p_1(x_0, x_1).$$

We will begin by concentrating on stochastic processes on a finite state space $I$. Without loss of generality, we can take the state space to be $I = \{0, 1, \dots, N-1\}$.

1.1 Markov chains and matrices

There is a very fruitful correspondence between finite state Markov chains and matrices. We begin by considering random variables on a state space $I = \{0, \dots, N-1\}$. Such a random variable $X$ can be specified completely by $N$ non-negative numbers $\{\lambda_i : i \in I\}$ such that $P(X = i) = \lambda_i$. Clearly we have $\sum_{i \in I} \lambda_i = 1$. It is convenient to organize the $\lambda_i$ in a row vector $\lambda = (\lambda_0, \dots, \lambda_{N-1}) \in \mathbb{R}^N$. The vector $\lambda$ is called the distribution of the random variable $X$. With this in mind we make the following definition.

Definition. A row vector $\lambda = (\lambda_0, \dots, \lambda_{N-1}) \in \mathbb{R}^N$ is called a distribution if $\lambda_i \ge 0$ for all $i$. If in addition $\sum_{i=0}^{N-1} \lambda_i = 1$, it is called a probability distribution.

Let $P \in \mathbb{R}^{N \times N}$ be a matrix with non-negative entries. We will write $P_{i,j}$ for the $(i,j)$-th entry of $P$, that is to say
$$P = \begin{pmatrix} p_{0,0} & \cdots & p_{0,N-1} \\ \vdots & \ddots & \vdots \\ p_{N-1,0} & \cdots & p_{N-1,N-1} \end{pmatrix}.$$

Definition. A square matrix $P$ with non-negative entries is called a stochastic matrix if all of its rows sum to one, that is, if $\sum_j P_{ij} = 1$ for all $i$.

Stochastic matrices are in one-to-one correspondence with time homogeneous Markov processes on a finite state space. The correspondence is given by
$$P_{ij} = P(X_1 = j \mid X_0 = i).$$
It then follows that
$$P(X_n = j \mid X_0 = i) = (P^n)_{ij}.$$
In other words, the distribution of the random variable $X_n$, conditioned on $X_0 = i$, is the row vector $(P^n)_{i,*}$, by which we mean the $i$-th row of the matrix $P^n$:
$$\big( (P^n)_{i,0}, \dots, (P^n)_{i,N-1} \big). \qquad (1)$$
If we denote by $e(i)$ the row vector with a 1 in the $i$-th slot and 0 in the remaining slots, then (1) can be written compactly as $e(i) P^n$. ...
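To make the matrix correspondence concrete, here is a minimal sketch in Python (not part of the original notes). The 3-state matrix $P$ below is an arbitrary example chosen for illustration. The sketch computes the distribution of $X_n$ given $X_0 = i$ as the row vector $e(i) P^n$ and compares it with empirical frequencies from simulated paths of the chain.

```python
# Illustrative sketch (assumed example, not from the notes): the i-th row of P^n
# is the conditional distribution of X_n given X_0 = i, i.e. e(i) P^n.
import numpy as np

# An arbitrary 3-state stochastic matrix: row i gives P(X_1 = . | X_0 = i).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Sanity check: non-negative entries and every row sums to one.
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)

def distribution_after(P, i, n):
    """Return e(i) P^n, the distribution of X_n given X_0 = i."""
    e_i = np.zeros(P.shape[0])
    e_i[i] = 1.0
    return e_i @ np.linalg.matrix_power(P, n)

def simulate_step_n(P, i, n, rng):
    """Sample X_n from a path started at X_0 = i, stepping with row X_k of P."""
    x = i
    for _ in range(n):
        x = rng.choice(P.shape[0], p=P[x])
    return x

if __name__ == "__main__":
    i, n = 0, 5
    exact = distribution_after(P, i, n)

    # Monte Carlo check: empirical frequencies of X_n should approach e(i) P^n.
    rng = np.random.default_rng(0)
    samples = [simulate_step_n(P, i, n, rng) for _ in range(100_000)]
    empirical = np.bincount(samples, minlength=P.shape[0]) / len(samples)

    print("e(i) P^n :", np.round(exact, 4))
    print("simulated:", np.round(empirical, 4))
```

The two printed vectors should agree to within Monte Carlo error, which is the content of the identity $P(X_n = j \mid X_0 = i) = (P^n)_{ij}$.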