Copyright © 2010 by Karl Sigman

1 Simulating Markov chains
Many stochastic processes used for the modeling of financial assets and other systems in engineering are Markovian, and this makes it relatively easy to simulate from them.

Here we present a brief introduction to the simulation of Markov chains. Our emphasis is on discrete-state chains, in both discrete and continuous time, but some examples with a general state space will be discussed too.
1.1 Definition of a Markov chain
We shall assume that the state space $S$ of our Markov chain is $S = \mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$, the integers, or a proper subset of the integers.
Typical examples are $S = \mathbb{N} = \{0, 1, 2, \ldots\}$, the non-negative integers, or $S = \{0, 1, 2, \ldots, a\}$, or $S = \{-b, \ldots, 0, 1, 2, \ldots, a\}$ for some integers $a, b > 0$, in which case the state space is finite.
Definition 1.1 A stochastic process $\{X_n : n \geq 0\}$ is called a Markov chain if for all times $n \geq 0$ and all states $i_0, \ldots, i, j \in S$,
\[
P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i) = P_{ij}. \tag{1}
\]
$P_{ij}$ denotes the probability that the chain, whenever in state $i$, moves next (one unit of time later) into state $j$, and is referred to as a one-step transition probability.
The square matrix $P = (P_{ij})$, $i, j \in S$, is called the one-step transition matrix, and since when leaving state $i$ the chain must move to one of the states $j \in S$, each row sums to one (i.e., forms a probability distribution): for each $i$,
\[
\sum_{j \in S} P_{ij} = 1.
\]
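As a quick illustrative sketch (the specific matrix below is hypothetical, not taken from the text), a finite-state transition matrix can be stored as a NumPy array and its rows checked to sum to one:

```python
import numpy as np

# Hypothetical one-step transition matrix P = (P_ij) on the
# finite state space S = {0, 1, 2}, chosen only for illustration.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

# Each row of P must form a probability distribution over S:
# for each i, sum over j of P_ij equals 1.
assert np.allclose(P.sum(axis=1), 1.0)
```

The row-sum check is a useful sanity test before simulating, since a matrix whose rows do not sum to one is not a valid transition matrix.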
We are assuming that the transition probabilities do not depend on the time $n$, and so, in particular, using $n = 0$ in (1) yields
\[
P_{ij} = P(X_1 = j \mid X_0 = i).
\]
(Formally, we are considering only time-homogeneous Markov chains, meaning that their transition probabilities are time-homogeneous (time-stationary).)
The defining property (1) can be described in words as the future is independent of the past given the present state. Letting $n$ be the present time, the future after time $n$ is $\{X_{n+1}, X_{n+2}, \ldots\}$, the present state is $X_n$, and the past is $\{X_0, \ldots, X_{n-1}\}$. If the value $X_n = i$ is known, then the future evolution of the chain depends (at most) on $i$, in that it is stochastically independent of the past values $X_{n-1}, \ldots, X_0$.
Markov Property: Conditional on the rv $X_n$, the future sequence of rvs $\{X_{n+1}, X_{n+2}, \ldots\}$ is independent of the past sequence of rvs $\{X_0, \ldots, X_{n-1}\}$.
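The Markov property is exactly what makes simulation easy: to generate $X_{n+1}$ we only need the current state $X_n = i$, from which we draw the next state according to row $i$ of $P$. A minimal sketch (the two-state matrix and the function name `simulate_chain` are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state transition matrix, for illustration only.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def simulate_chain(P, x0, n_steps, rng):
    """Simulate X_0, X_1, ..., X_{n_steps} of a Markov chain with matrix P.

    Given X_n = i, the next state X_{n+1} is drawn from row i of P,
    independently of the past values -- this is the Markov property.
    """
    path = [x0]
    for _ in range(n_steps):
        i = path[-1]                      # current state X_n = i
        path.append(rng.choice(len(P), p=P[i]))  # draw X_{n+1} from row i
    return path

path = simulate_chain(P, x0=0, n_steps=10, rng=rng)
```

Note that the simulator never looks at anything but the last entry of `path` when generating the next state, mirroring the conditional-independence statement above.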
The defining Markov property above does not require that the state space be discrete, and in general a process possessing the Markov property is called a Markov chain or Markov process.