Lecture 12: Discrete-Time Markov Chains

Topics:
- State transition matrix
- Network diagrams
- Examples: gambler's ruin, brand switching, IRS, craps
- Transient probabilities
- Steady-state probabilities

Many real-world systems contain uncertainty and evolve over time. Stochastic processes (and Markov chains in particular) are probability models for such systems.

Origins: the Galton-Watson process -- when, and with what probability, will a family name become extinct?

A discrete-time stochastic process is a sequence of random variables X_0, X_1, X_2, ..., typically denoted {X_n}.

Components of a stochastic process:
- State space: the set of all values that the X_n can take. (We will be concerned with stochastic processes that have a finite number of states.)
- Time: n = 0, 1, 2, ...
- State: a v-dimensional vector, s = (s_1, s_2, ..., s_v). In general there are m states, labeled s_1, s_2, ..., s_m or s_0, s_1, ..., s_{m-1}. Each X_n takes one of these m values, so X_n is in S.

Example (gambler's ruin): At time 0 I have X_0 = $2, and each day I make a $1 bet. I win with probability p and lose with probability 1 - p. I quit if I ever obtain $4 or if I lose all my money.

- Let X_n = the amount of money I have after the bet on day n.
- The state space is S = {0, 1, 2, 3, 4}.
- If X_n = 4, then X_{n+1} = X_{n+2} = ... = 4 (I quit a winner).
- If X_n = 0, then X_{n+1} = X_{n+2} = ... = 0 (I am ruined).
- Starting from X_0 = 2:
      X_1 = 3 with probability p,
      X_1 = 1 with probability 1 - p.

Markov chain definition: A stochastic process {X_n} is called a Markov chain if

    Pr{X_{n+1} = j | X_0 = k_0, ..., X_{n-1} = k_{n-1}, X_n = i} = Pr{X_{n+1} = j | X_n = i}

for every i, j, k_0, ..., k_{n-1} and for every n. These conditional probabilities are the transition probabilities. "Discrete time" means n is in N = {0, 1, 2, ...}. Intuitively: the future behavior of the system depends only on the current state i, and not on any of the previous states.

A Markov chain is stationary (time-homogeneous) if

    Pr{X_{n+1} = j | X_n = i} = Pr{X_1 = j | X_0 = i}   for all n,

i.e., the transition probabilities do not change over time. We will only consider stationary Markov chains.
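The gambler's ruin chain above is easy to simulate directly, which gives a feel for the absorbing states 0 and 4. The following sketch is not from the lecture; the function name and the Monte Carlo estimate are illustrative choices.

```python
import random

def gamblers_ruin(start=2, goal=4, p=0.5, rng=random):
    """Simulate one play of the gambler's ruin chain until absorption.

    Returns the absorbing state reached: 0 (ruin) or `goal` (quit a winner).
    States 0 and `goal` are absorbing -- once entered, the chain stays there.
    """
    x = start
    while 0 < x < goal:
        x += 1 if rng.random() < p else -1  # win $1 w.p. p, lose $1 w.p. 1-p
    return x

# Estimate Pr{reach $4 before $0 | X_0 = $2} for a fair bet (p = 0.5).
# For a fair game the exact answer is start/goal = 2/4 = 0.5.
rng = random.Random(42)
trials = 100_000
wins = sum(gamblers_ruin(rng=rng) == 4 for _ in range(trials))
print(f"estimated Pr{{win}} = {wins / trials:.3f}")
```

With 100,000 trials the estimate typically lands within about 0.01 of the exact value 0.5.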
Stationary transition probabilities: the one-step transition matrix for a Markov chain with states S = {0, 1, 2} is

        | p_00  p_01  p_02 |
    P = | p_10  p_11  p_12 |
        | p_20  p_21  p_22 |

where p_ij = Pr{X_1 = j | X_0 = i}. If the state space is S = {0, 1, ..., m-1}, then we have...
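The transient probabilities listed in the Topics follow directly from the transition matrix: the distribution after n steps is pi_n = pi_0 P^n. A minimal NumPy sketch, using the gambler's ruin chain with p = 0.5 (the fair-bet case is my assumption; the lecture leaves p general):

```python
import numpy as np

p = 0.5  # win probability (assumed fair bet for this illustration)

# One-step transition matrix for gambler's ruin, S = {0, 1, 2, 3, 4}.
# Row i holds the probabilities of moving from state i; each row sums to 1.
P = np.array([
    [1,     0,     0,     0,   0],   # 0 is absorbing (ruin)
    [1 - p, 0,     p,     0,   0],
    [0,     1 - p, 0,     p,   0],
    [0,     0,     1 - p, 0,   p],
    [0,     0,     0,     0,   1],   # 4 is absorbing (quit a winner)
])

# Transient probabilities pi_n = pi_0 @ P^n, starting from X_0 = $2.
pi = np.zeros(5)
pi[2] = 1.0
for n in range(1, 4):
    pi = pi @ P
    print(f"n={n}:", np.round(pi, 4))
```

For large n the distribution converges to [0.5, 0, 0, 0, 0.5]: with a fair bet, the chain is eventually absorbed at $0 or $4 with equal probability.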
This note was uploaded on 12/19/2011 for the course M E 366l taught by Professor Staff during the Spring '08 term at University of Texas at Austin.