# 4703-10-Notes-MC

Copyright © 2010 by Karl Sigman


## 1 Simulating Markov chains

Many stochastic processes used for the modeling of financial assets and other systems in engineering are *Markovian*, and this makes it relatively easy to simulate from them. Here we present a brief introduction to the simulation of Markov chains. Our emphasis is on discrete-state chains both in discrete and continuous time, but some examples with a general state space will be discussed too.

### 1.1 Definition of a Markov chain

We shall assume that the state space $S$ of our Markov chain is $S = \mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$, the integers, or a proper subset of the integers. Typical examples are $S = \mathbb{N} = \{0, 1, 2, \ldots\}$, the non-negative integers, or $S = \{0, 1, 2, \ldots, a\}$, or $S = \{-b, \ldots, 0, 1, 2, \ldots, a\}$ for some integers $a, b > 0$, in which case the state space is finite.

**Definition 1.1** A stochastic process $\{X_n : n \ge 0\}$ is called a *Markov chain* if for all times $n \ge 0$ and all states $i_0, \ldots, i, j \in S$,

$$
P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i) = P_{ij}. \tag{1}
$$

$P_{ij}$ denotes the probability that the chain, whenever in state $i$, moves next (one unit of time later) into state $j$, and is referred to as a *one-step transition probability*. The square matrix $P = (P_{ij})$, $i, j \in S$, is called the *one-step transition matrix*, and since, when leaving state $i$, the chain must move to one of the states $j \in S$, each row sums to one (i.e., each row forms a probability distribution): for each $i$,

$$
\sum_{j \in S} P_{ij} = 1.
$$

We are assuming that the transition probabilities do not depend on the time $n$, so, in particular, using $n = 0$ in (1) yields

$$
P_{ij} = P(X_1 = j \mid X_0 = i).
$$

(Formally, we are considering only time-homogeneous Markov chains, meaning that their transition probabilities are time-homogeneous (*time stationary*).) The defining property (1) can be described in words as *the future is independent of the past given the present state*.
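Because each row $P_i$ of the transition matrix is a probability distribution over the next state, one step of the chain can be sampled by drawing a uniform random number and inverting the cumulative row sums. A minimal sketch in Python (the helper name `step` and the 3-state example matrix are illustrative, not from the notes):

```python
import random

def step(P, i):
    """Sample the next state from row i of the one-step transition
    matrix P by the inverse-transform method: draw U ~ Uniform(0,1)
    and return the smallest j with P[i][0] + ... + P[i][j] >= U."""
    u = random.random()
    cum = 0.0
    for j, p in enumerate(P[i]):
        cum += p
        if u <= cum:
            return j
    return len(P[i]) - 1  # guard against floating-point round-off

# A 3-state example; each row must sum to one (a probability distribution).
P = [[0.2, 0.5, 0.3],
     [0.0, 0.6, 0.4],
     [0.5, 0.5, 0.0]]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
print(step(P, 0))  # one of 0, 1, 2
```

Note that `step` uses only the current state `i`, never the earlier history, which is exactly the Markov property in code.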
Letting $n$ be the present time, the future after time $n$ is $\{X_{n+1}, X_{n+2}, \ldots\}$, the present state is $X_n$, and the past is $\{X_0, \ldots, X_{n-1}\}$. If the value $X_n = i$ is known, then the future evolution of the chain depends (at most) only on $i$, in that it is stochastically independent of the past values $X_{n-1}, \ldots, X_0$.

**Markov Property:** Conditional on the rv $X_n$, the future sequence of rvs $\{X_{n+1}, X_{n+2}, \ldots\}$ is independent of the past sequence of rvs $\{X_0, \ldots, X_{n-1}\}$.

The defining Markov property above does not require that the state space be discrete, and in general a process possessing the Markov property is called a *Markov chain* or *Markov process*.

**Remark 1.1** A Markov chain with non-stationary transition probabilities is allowed to have a different transition matrix $P_n$ for each time $n$. This means that, given the present state $X_n$ and the present time $n$, the future depends (at most) only on $(n, X_n)$ and is independent of the past.

### Simulation of a two-state Markov chain

The general method of Markov chain simulation is easily learned by first looking at the simplest case, that of a two-state chain. So consider a Markov chain $\{X_n : n \ge 0\}$ with only two states, $S = \{0, 1\}$, and transition matrix

$$
P = \begin{pmatrix} 0.30 & 0.70 \\ 0.50 & 0.50 \end{pmatrix}.
$$
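The excerpt ends before the notes' own algorithm appears, but for a two-state chain one uniform per step suffices: from state $i$, draw $U \sim \text{Uniform}(0,1)$ and move to state 1 if $U < P_{i1}$, else to state 0. A sketch under that assumption (function and variable names are my own, not from the notes):

```python
import random

# Transition matrix of the two-state chain above.
P = [[0.30, 0.70],
     [0.50, 0.50]]

def simulate(n_steps, x0=0, seed=None):
    """Simulate n_steps transitions of the two-state chain starting
    from x0, returning the path [X_0, X_1, ..., X_{n_steps}].
    From state i, move to state 1 iff U < P[i][1], with U ~ Uniform(0,1)."""
    rng = random.Random(seed)
    path = [x0]
    x = x0
    for _ in range(n_steps):
        x = 1 if rng.random() < P[x][1] else 0
        path.append(x)
    return path

path = simulate(100_000, seed=42)
# For this chain, solving pi = pi P with pi_0 + pi_1 = 1 gives
# pi = (5/12, 7/12), so the long-run fraction of time spent in
# state 1 should be close to 7/12 ≈ 0.583.
print(sum(path) / len(path))
```

The printed long-run fraction gives a quick sanity check of the simulator against the stationary distribution $\pi = (5/12, 7/12)$ obtained from $\pi = \pi P$.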