Applied Probability Methods for Engineers
Slide Set 7: Markov Chains
(Chapter 17 of Winston's Operations Research book)

Stochastic Processes
- Suppose we observe some measurement in a system at time points 0, 1, 2, ...
- Let Xt be a random variable for the measurement at time t.
- A discrete-time stochastic process describes the relation among the variables X0, X1, X2, ...
- The random variable Xt may depend on all prior values X0, ..., Xt-1, on some subset of them, or on none of them at all.

Gambler's Ruin
- At time 0 you have $2; at times 1, 2, ... you bet $1, and with probability p you win $1 (with probability 1 - p you lose $1).
- Your goal is to reach $4, and if you do, you quit (similarly, the game ends if you hit $0 and are ruined).
- Let Xt be your position at time t.
- X0 = 2.
- X1 = 3 with probability p, and X1 = 1 with probability 1 - p.
- If Xt = 4, then Xt+i = 4 for all i >= 1 (the state is absorbing).

Urn Problem
- An urn contains two unpainted balls.
- At each step, choose a ball at random and flip a coin.
- If the ball is unpainted and the coin lands heads, paint the ball red.
- If the ball is unpainted and the coin lands tails, paint the ball black.
- If the ball is already painted, change its color (red to black, or black to red).
- Define time t as the time just after the tth coin toss.
- Let [u r b] denote the state of the system, where u is the number of unpainted balls, r the number of red balls, and b the number of black balls.
- X0 = [2 0 0].
- After the first toss, X1 = [1 1 0] or X1 = [1 0 1].
- The rules define the allowable state transitions; e.g., if Xt = [0 2 0], then Xt+1 = [0 1 1].

Stochastic Processes
- A stochastic process is characterized by a set of allowable states and the probabilities of moving from state to state (which are state dependent).
- In a continuous-time stochastic process, the state can be observed at any point in time, and may change at any point in time.
- Example: the evolution of stock prices over time.

Markov Chains
- A discrete-time Markov chain is a special type of stochastic process in which the state at time t depends only on the state at time t - 1 (and not on the states at times t - 2, t - 3, ...).
- For t = 0, 1, 2, ...:
  P(Xt+1 = it+1 | Xt = it, Xt-1 = it-1, ..., X1 = i1, X0 = i0) = P(Xt+1 = it+1 | Xt = it)
- We will also assume that P(Xt+1 = j | Xt = i) does not depend on t, i.e., P(Xt+1 = j | Xt = i) = pij.
- That is, the probability of transitioning from state i to state j does not depend on t.

Markov Chains
- The terms pij are the transition probabilities (from state i to state j).
- We call these stationary Markov chains because of the assumption that the transition probabilities are not time dependent.
- In addition to the pij, we also need to know the probability of being in state i at time 0, denoted qi.
- q = [q1 q2 ... qs] is the initial probability distribution for the state at time 0.
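The gambler's ruin chain above is easy to simulate directly. The sketch below (function name and parameters are my own, not from the slides) runs one play of the game starting from $2 until the chain is absorbed at $0 or $4:

```python
import random

def gamblers_ruin(p=0.5, start=2, goal=4, seed=None):
    """Simulate one play of the gambler's ruin chain.

    Each step wins $1 with probability p, loses $1 otherwise.
    Returns the path X0, X1, ... until absorption at 0 or at goal.
    """
    rng = random.Random(seed)
    x = start
    path = [x]
    while 0 < x < goal:       # stop once an absorbing state is reached
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

path = gamblers_ruin(p=0.5, seed=1)
```

Because each bet changes the position by exactly $1, consecutive states in the returned path always differ by 1, and the final state is always 0 or 4.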
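The urn problem's transition rules can likewise be written as a one-step function on states (u, r, b). This is a sketch under the slide's rules; the helper name is my own:

```python
import random

def urn_step(state, rng=random):
    """One step of the urn chain. state = (u, r, b):
    u unpainted, r red, b black balls."""
    u, r, b = state
    total = u + r + b
    pick = rng.random() * total      # choose a ball uniformly at random
    if pick < u:                     # unpainted ball: coin toss picks a color
        if rng.random() < 0.5:
            return (u - 1, r + 1, b)  # heads: paint it red
        return (u - 1, r, b + 1)      # tails: paint it black
    elif pick < u + r:
        return (u, r - 1, b + 1)      # red ball chosen: change to black
    else:
        return (u, r + 1, b - 1)      # black ball chosen: change to red
```

Starting from X0 = (2, 0, 0), one step yields (1, 1, 0) or (1, 0, 1), matching the slides; and from (0, 2, 0) the only possible next state is (0, 1, 1).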
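The stationary transition probabilities pij and the initial distribution q together determine the distribution of the chain at every later time: the distribution after one step is qP, where P is the matrix of the pij. A minimal sketch, using the gambler's ruin chain with goal $4 as the example (the function name and the choice p = 0.5 are assumptions for illustration):

```python
def step_distribution(q, P):
    """One step of the chain in distribution: (qP)_j = sum_i q_i * p_ij."""
    n = len(q)
    return [sum(q[i] * P[i][j] for i in range(n)) for j in range(n)]

# Transition matrix for gambler's ruin with states 0..4 (dollars held).
p = 0.5
P = [[0.0] * 5 for _ in range(5)]
P[0][0] = 1.0             # $0 is absorbing (ruin)
P[4][4] = 1.0             # $4 is absorbing (goal reached)
for i in (1, 2, 3):
    P[i][i + 1] = p       # win $1 with probability p
    P[i][i - 1] = 1 - p   # lose $1 with probability 1 - p

q0 = [0, 0, 1, 0, 0]      # X0 = $2 with probability 1
q1 = step_distribution(q0, P)   # distribution of X1
```

With p = 0.5, q1 places probability 0.5 on $1 and 0.5 on $3, matching the slide's statement that X1 = 3 with probability p and X1 = 1 with probability 1 - p.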
This note was uploaded on 10/27/2010 for the course ESI 6321 taught by Professor Joseph Geunes during the Spring '07 term at University of Florida.