O.H. Probability and Markov Chains – MATH 2561/2571 E09

2 Markov Chains

In various applications one considers collections of random variables which evolve in time in some random but prescribed manner (think, e.g., of consecutive flips of a coin combined with counting the number of heads observed). Such collections are called random (or stochastic) processes. A typical random process X is a family {X_t : t ∈ T} of random variables indexed by elements of some set T. When T = {0, 1, 2, ...} one speaks of a 'discrete-time' process; alternatively, for T = ℝ or T = [0, ∞) one has a 'continuous-time' process. In what follows we shall only consider discrete-time processes.

2.1 Markov property

Let (Ω, F, P) be a probability space and let {X_0, X_1, ...} be a sequence of random variables⁶ which take values in some countable set S, called the state space. We assume that each X_n is a discrete⁷ random variable which takes one of N possible values, where N = |S| (N may equal +∞).

Definition 2.1. The process X is a Markov chain if it satisfies the Markov property:

    P(X_{n+1} = x_{n+1} | X_0 = x_0, X_1 = x_1, ..., X_n = x_n)
        = P(X_{n+1} = x_{n+1} | X_n = x_n)                                (2.1)

for all n ≥ 1 and all x_0, x_1, ..., x_{n+1} ∈ S.

Thinking of n as the 'present' and of n + 1 as a 'future' moment of time, the Markov property (2.1) says that "given the present value of a Markov chain, its future behaviour does not depend on the past".

Remark 2.1.1. It is straightforward to check that the Markov property (2.1) is equivalent to the following statement: for each s ∈ S and every sequence {x_k : k ≥ 0} in S,

    P(X_{n+m} = s | X_0 = x_0, X_1 = x_1, ..., X_n = x_n) = P(X_{n+m} = s | X_n = x_n)

for any m, n ≥ 0.
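The Markov property (2.1) can be made concrete with a short simulation: to generate X_{n+1} we only ever look at the current state X_n, never at the earlier history. The following sketch (the two-state "sunny/rainy" labels and all numerical values are illustrative, not from the notes) simulates a discrete-time chain on states {0, ..., N-1}.

```python
import random

def simulate_chain(P, mu, n_steps, seed=0):
    """Simulate a discrete-time Markov chain on states 0, ..., N-1.

    P  : transition matrix as a list of rows, P[i][j] = P(X_{n+1}=j | X_n=i)
    mu : initial distribution, mu[k] = P(X_0 = k)
    """
    rng = random.Random(seed)
    # Draw X_0 from the initial distribution mu.
    x = rng.choices(range(len(mu)), weights=mu)[0]
    path = [x]
    for _ in range(n_steps):
        # Markov property: the next state is drawn using only the
        # current state x, not the earlier history stored in `path`.
        x = rng.choices(range(len(P)), weights=P[x])[0]
        path.append(x)
    return path

# A hypothetical two-state example: 0 = "sunny", 1 = "rainy".
P = [[0.9, 0.1],
     [0.5, 0.5]]
mu = [1.0, 0.0]                      # start in state 0 with probability 1
path = simulate_chain(P, mu, 20)
```

Note that because the weights passed to `rng.choices` depend only on `x`, the simulation is Markovian by construction; the history `path` is recorded but never consulted.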
The evolution of a chain is described by its 'initial distribution'

    μ_k := P(X_0 = k)

and its 'transition probabilities' P(X_{n+1} = j | X_n = i); it can be quite complicated in general, since these probabilities depend upon the three quantities n, i, and j.

⁶ i.e., each X_n is an F-measurable mapping from Ω into S.
⁷ Without loss of generality, we can and shall assume that S is a subset of the integers.

Definition 2.2. A Markov chain X is called homogeneous if

    P(X_{n+1} = j | X_n = i) ≡ P(X_1 = j | X_0 = i)    for all n, i, j.

The transition matrix P = (p_ij) is the |S| × |S| matrix of transition probabilities

    p_ij = P(X_{n+1} = j | X_n = i).

In what follows we shall only consider homogeneous Markov chains. The next claim characterizes transition matrices.

Theorem 2.3. P is a stochastic matrix, which is to say that
a) every entry of P is non-negative, p_ij ≥ 0;
b) each row sum of P equals one, i.e., for every i ∈ S we have Σ_j p_ij = 1.
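The two conditions of Theorem 2.3 are easy to verify numerically for a finite state space. The following sketch (the helper name `is_stochastic` and the tolerance are my own choices, not from the notes) checks both conditions for a candidate transition matrix given as a list of rows.

```python
def is_stochastic(P, tol=1e-9):
    """Check Theorem 2.3: P is stochastic iff (a) every entry is
    non-negative and (b) every row sums to one."""
    for row in P:
        if any(p < 0 for p in row):        # condition (a): p_ij >= 0
            return False
        if abs(sum(row) - 1.0) > tol:      # condition (b): row sums to 1
            return False
    return True

# A valid transition matrix: non-negative entries, rows sum to 1.
P_good = [[0.9, 0.1],
          [0.5, 0.5]]

# Invalid: the first row sums to 0.9, so it fails condition (b).
P_bad = [[0.6, 0.3],
         [0.5, 0.5]]

print(is_stochastic(P_good), is_stochastic(P_bad))
```

A small tolerance is used for the row sums because floating-point arithmetic rarely produces an exact 1.0.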
This note was uploaded on 05/12/2010 for the course APPLIED ST 2010 taught by Professor Various during the Spring '10 term at Universidad Nacional Agraria La Molina.