Copyright (c) 2010 by Karl Sigman

1 Simulating Markov chains

Many stochastic processes used for the modeling of financial assets and other systems in engineering are Markovian, and this makes it relatively easy to simulate from them. Here we present a brief introduction to the simulation of Markov chains. Our emphasis is on discrete-state chains in both discrete and continuous time, but some examples with a general state space will be discussed too.

1.1 Definition of a Markov chain

We shall assume that the state space S of our Markov chain is S = Z = {..., -2, -1, 0, 1, 2, ...}, the integers, or a proper subset of the integers. Typical examples are S = N = {0, 1, 2, ...}, the non-negative integers, or S = {0, 1, 2, ..., a}, or S = {-b, ..., -1, 0, 1, 2, ..., a} for some integers a, b > 0, in which case the state space is finite.

Definition 1.1 A stochastic process {X_n : n >= 0} is called a Markov chain if for all times n >= 0 and all states i_0, ..., i, j in S,

    P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)    (1)
                                                                = P_{ij}.

P_{ij} denotes the probability that the chain, whenever in state i, moves next (one unit of time later) into state j, and is referred to as a one-step transition probability. The square matrix P = (P_{ij}), i, j in S, is called the one-step transition matrix, and since when leaving state i the chain must move to one of the states j in S, each row sums to one (i.e., forms a probability distribution): for each i,

    sum_{j in S} P_{ij} = 1.

We are assuming that the transition probabilities do not depend on the time n, and so, in particular, using n = 0 in (1) yields

    P_{ij} = P(X_1 = j | X_0 = i).

(Formally, we are considering only time-homogeneous Markov chains, meaning that their transition probabilities are time-homogeneous (time-stationary).) The defining property (1) can be described in words as: the future is independent of the past given the present state.
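Since each row of P is a probability distribution over the next state, one step of the chain can be simulated by inverse-transform sampling from the current row. The following sketch is not from the notes; the function name simulate_chain and the two-state matrix P are illustrative choices, and states are labeled 0, 1, ..., |S|-1.

```python
import random

def simulate_chain(P, i0, n_steps, rng=random.Random(0)):
    """Simulate n_steps transitions of a Markov chain with one-step
    transition matrix P (a list of rows, each summing to one),
    starting from state i0.  Returns the path [X_0, X_1, ..., X_n]."""
    state = i0
    path = [state]
    for _ in range(n_steps):
        u = rng.random()                  # U uniform on (0, 1)
        cum = 0.0
        for j, p in enumerate(P[state]):  # inverse transform on row `state`
            cum += p
            if u < cum:
                state = j
                break
        path.append(state)
    return path

# Illustrative two-state chain (numbers are hypothetical, not from the notes):
P = [[0.7, 0.3],
     [0.4, 0.6]]
path = simulate_chain(P, i0=0, n_steps=10)
```

Each iteration uses one uniform random number and a linear scan of the current row, so a path of length n over a finite state space costs O(n |S|) in the worst case.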
Letting n be the present time, the future after time n is {X_{n+1}, X_{n+2}, ...}, the present state is X_n, and the past is {X_0, ..., X_{n-1}}. If the value X_n = i is known, then the future evolution of the chain only depends (at most) on i, in that it is stochastically independent of the past values X_{n-1}, ..., X_0.

Markov Property: Conditional on the rv X_n, the future sequence of rvs {X_{n+1}, X_{n+2}, ...} is independent of the past sequence of rvs {X_0, ..., X_{n-1}}.

The defining Markov property above does not require that the state space be discrete, and in general such a process possessing the Markov property is called a Markov chain or Markov process.

Remark 1.1 A Markov chain with non-stationary transition probabilities is allowed to have a different transition matrix P_n for each time n. This means that given the present state X_n and the present time n, the future only depends (at most) on (n, X_n) and is independent of the past.
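The non-stationary case of Remark 1.1 is simulated the same way, except that the row used at time n is taken from the matrix P_n. A minimal sketch, assuming the time-dependent matrix is supplied as a function P_of_n(n); the "stickier over time" example matrix is hypothetical:

```python
import random

def simulate_nonstationary(P_of_n, i0, n_steps, rng=random.Random(1)):
    """Simulate a chain whose one-step transition matrix P_n = P_of_n(n)
    may differ at each time n (cf. Remark 1.1)."""
    state = i0
    path = [state]
    for n in range(n_steps):
        row = P_of_n(n)[state]       # row of P_n for the current state
        u, cum = rng.random(), 0.0
        for j, p in enumerate(row):  # inverse transform, as before
            cum += p
            if u < cum:
                state = j
                break
        path.append(state)
    return path

# Hypothetical example: a two-state chain that becomes "stickier" over time.
def P_of_n(n):
    stay = 1.0 - 0.5 / (n + 1)   # probability of staying put grows with n
    move = 1.0 - stay            # each row still sums to one
    return [[stay, move],
            [move, stay]]

path = simulate_nonstationary(P_of_n, i0=0, n_steps=20)
```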
This note was uploaded on 10/17/2010 for the course IEOR 4703 taught by Professor Sigman during the Fall '07 term at Columbia.