ECE 6010 Lecture 10 – Markov Processes


Basic concepts

A Markov process {X_t} is one such that

  P(X_{t_{k+1}} = x_{k+1} | X_{t_k} = x_k, X_{t_{k-1}} = x_{k-1}, ..., X_{t_1} = x_1) = P(X_{t_{k+1}} = x_{k+1} | X_{t_k} = x_k)

(for a discrete random process) or

  f(x_{t_{k+1}} | X_{t_k} = x_k, ..., X_{t_1} = x_1) = f(x_{t_{k+1}} | X_{t_k} = x_k)

(for a continuous random process). The most recent observation determines the state of the process; prior observations have no bearing on the outcome once the current state is known.

Example 1. Let the X_i be i.i.d. and let S_n = X_1 + ··· + X_n = S_{n-1} + X_n. Then

  P[S_{n+1} = s_{n+1} | S_n = s_n, ..., S_1 = s_1] = P[X_{n+1} = s_{n+1} - s_n] = P[S_{n+1} = s_{n+1} | S_n = s_n]. □

Example 2. Let N(t) be a Poisson process. By the independent-increments property,

  P(N(t_{k+1}) = n_{k+1} | N(t_k) = n_k, ..., N(t_1) = n_1) = P[n_{k+1} - n_k events in (t_k, t_{k+1}]] = P[N(t_{k+1}) = n_{k+1} | N(t_k) = n_k]. □

Let {X_t} be a Markov random process. The joint probability has the following factorization:

  P(X_{t_3} = x_3, X_{t_2} = x_2, X_{t_1} = x_1) = P(X_{t_3} = x_3 | X_{t_2} = x_2) P(X_{t_2} = x_2 | X_{t_1} = x_1) P(X_{t_1} = x_1).

(Why?)

Discrete-time Markov Chains

Definition 1. An integer-valued Markov random process is called a Markov chain. □

Let the time-index set be the set of integers, and let p_j(0) = P[X_0 = j] be the initial probabilities. (Note that Σ_j p_j(0) = 1.) We can write the factorization as

  P[X_n = i_n, ..., X_0 = i_0] = P[X_n = i_n | X_{n-1} = i_{n-1}] ··· P[X_1 = i_1 | X_0 = i_0] P[X_0 = i_0].

If the probability P[X_{n+1} = j | X_n = i] does not change with n, then the process X_n is said to have homogeneous transition probabilities. We will assume that this is the case, and write

  p_ij = P[X_{n+1} = j | X_n = i].

Note that Σ_j P[X_{n+1} = j | X_n = i] = 1; that is, Σ_j p_ij = 1. We can represent these transition probabilities in matrix form:

  P = [ p_00  p_01  p_02  ···
        p_10  p_11  p_12  ···
         ⋮     ⋮     ⋮        ]
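The definitions above can be illustrated by simulating a homogeneous chain from a stochastic matrix. This is a sketch, not part of the lecture: the two-state matrix P below is an arbitrary illustrative choice. The key point is that the next state is drawn using only the current state's row of P, which is exactly the Markov property.

```python
import numpy as np

# Illustrative 2-state transition matrix; each row sums to 1 (stochastic).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def simulate_chain(P, x0, n, rng):
    """Simulate n steps of a homogeneous Markov chain starting at x0.
    The next state depends only on the current state's row of P."""
    states = [x0]
    for _ in range(n):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

rng = np.random.default_rng(0)
path = simulate_chain(P, x0=0, n=10_000, rng=rng)

# Empirical estimate of p_01 from the transitions that leave state 0;
# it should approach P[0, 1] = 0.1 as the path length grows.
pairs = list(zip(path[:-1], path[1:]))
from_0 = [b for (a, b) in pairs if a == 0]
p01_hat = sum(s == 1 for s in from_0) / len(from_0)
print(p01_hat)
```

The same function works for any number of states, since each row of P is itself a probability distribution over the next state.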
The rows of P sum to 1; a matrix with this property is called a stochastic matrix. Frequently discrete-time Markov chains are modeled with state diagrams.

Example 3. Two light bulbs are held in reserve. On any given day, the probability that we need a new light bulb is p. Let Y_n be the number of light bulbs remaining at the end of day n, so Y_n takes values in {0, 1, 2}:

  P = [ 1    0     0
        p    1-p   0
        0    p     1-p ]

(rows and columns indexed by the states 0, 1, 2; state 0 is absorbing). Draw the diagram. □

Now let us look further ahead. Let

  p_ij(n) = P[X_{n+k} = j | X_k = i].

If the process is homogeneous, then p_ij(n) = P[X_n = j | X_0 = i]. Let us develop a formula for the case n = 2:

  P[X_2 = j, X_1 = l | X_0 = i] = P[X_2 = j, X_1 = l, X_0 = i] / P[X_0 = i]
                                = P[X_2 = j | X_1 = l] P[X_1 = l | X_0 = i] P[X_0 = i] / P[X_0 = i]
                                = p_il(1) p_lj(1) = p_il p_lj.

Now marginalize over the intermediate state l:

  P[X_2 = j | X_0 = i] = Σ_l P[X_2 = j, X_1 = l | X_0 = i] = Σ_l p_il p_lj.

That is, p_ij(2) is the (i, j) entry of the matrix product P·P = P².
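The n = 2 formula above is just a matrix multiplication: Σ_l p_il p_lj is the (i, j) entry of P². A small numerical check using the light-bulb chain of Example 3 (the value p = 0.3 is an arbitrary illustrative choice, not from the lecture):

```python
import numpy as np

p = 0.3  # illustrative value for the daily bulb-replacement probability

# Transition matrix of Example 3; states 0, 1, 2 = bulbs remaining,
# with state 0 absorbing.
P = np.array([[1.0, 0.0,   0.0],
              [p,   1 - p, 0.0],
              [0.0, p,     1 - p]])

# Two-step transition probabilities: p_ij(2) = sum_l p_il p_lj = (P @ P)[i, j].
P2 = P @ P

# Direct computation for comparison: starting with 2 bulbs, we have 1 bulb
# left after two days iff a bulb is used on exactly one of the two days.
direct = p * (1 - p) + (1 - p) * p
print(P2[2, 1], direct)
```

The same reasoning extends to any n: the n-step transition probabilities are the entries of Pⁿ, which is why the product of stochastic matrices is again stochastic.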

This note was uploaded on 03/01/2012 for the course ECE 6010, taught by Professor Stites during the Spring '08 term at Utah State University.


