= 0, 1, 2, .... In this chapter we will extend the notion to a continuous time
parameter t ≥ 0, a setting that is more convenient for some applications. In
discrete time we formulated the Markov property as: for any possible values of
j, i, i_{n-1}, ..., i_0,

P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)

In continuous time it is technically difficult to define the conditional probability
given all of the X_r for r ≤ s, so we instead say that X_t, t ≥ 0, is a Markov chain
if for any 0 ≤ s_0 < s_1 < ... < s_n < s and possible states i_0, ..., i_n, i, j we have

P(X_{t+s} = j | X_s = i, X_{s_n} = i_n, ..., X_{s_0} = i_0) = P(X_t = j | X_0 = i)

In words, given the present state, the rest of the past is irrelevant for predicting
the future. Note that built into the definition is the fact that the probability
of going from i at time s to j at time s + t depends only on t, the difference in
the times.
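To make the definition concrete, the following sketch simulates a two-state chain in which state 0 holds for an Exponential(a) time and state 1 for an Exponential(b) time before jumping (the rates a = 2, b = 3 and all function names are illustrative choices, not from the text). For such a chain the transition probability has the closed form p_{01}(t) = a/(a+b) · (1 − e^{−(a+b)t}), and the Monte Carlo estimate of P(X_1 = 1 | X_0 = 0) should agree with it.

```python
import random

def simulate(t_end, q01=2.0, q10=3.0, x0=0, rng=random):
    """Run a two-state continuous-time Markov chain until time t_end.

    The chain holds in state 0 for an Exponential(q01) time and in
    state 1 for an Exponential(q10) time, then jumps to the other
    state. The rates here are illustrative, not from the text.
    """
    t, x = 0.0, x0
    while True:
        t += rng.expovariate(q01 if x == 0 else q10)
        if t > t_end:
            return x          # state occupied at time t_end
        x = 1 - x

rng = random.Random(0)
n = 20000
est = sum(simulate(1.0, rng=rng) for _ in range(n)) / n
# Compare est with the exact value (2/5) * (1 - exp(-5)) ≈ 0.397.
```

Because the holding times are exponential (hence memoryless), the simulated process satisfies the continuous-time Markov property above, and the displayed probability depends only on the elapsed time t, not on when the chain entered state 0.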
Our ﬁrst step is to construct a large collection of examples. In Example 4.6
we will see that this is almost the general case.
Example 4.1. Let N(t), t ≥ 0, be a Poisson process with rate λ and let Y_n be a
discrete time Markov chain wit...
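The example is cut off in this preview, but a standard way to complete such a construction takes X_t = Y_{N(t)}: the discrete chain makes one step at each arrival of the Poisson process. A minimal sketch under that assumption (the two-state transition matrix u and all names below are hypothetical illustrations):

```python
import random

def sample_Xt(t, lam, u, y0, rng):
    """Sample X_t = Y_{N(t)}: take a Poisson(lam*t)-distributed number
    of steps of the discrete-time chain with transition matrix u,
    starting from state y0. (Construction assumed, as noted above.)"""
    # Count the Poisson arrivals in [0, t] as a sum of Exp(lam) gaps.
    n, s = 0, rng.expovariate(lam)
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    y = y0
    for _ in range(n):                      # one chain step per arrival
        y = rng.choices(range(len(u[y])), weights=u[y])[0]
    return y

rng = random.Random(1)
u = [[0.0, 1.0], [1.0, 0.0]]  # hypothetical chain that flips state each step
# With this u and X_0 = 0, X_t = N(t) mod 2, so
# P(X_1 = 1) = (1 - exp(-2*lam))/2 ≈ 0.432 when lam = 1.
est = sum(sample_Xt(1.0, 1.0, u, 0, rng) for _ in range(20000)) / 20000
```

The flip chain makes the answer checkable by hand: X_t = 1 exactly when N(t) is odd, and the probability that a Poisson(λt) variable is odd is (1 − e^{−2λt})/2.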
This document was uploaded on 03/06/2014 for the course MATH 4740 at Cornell University (Engineering School).
Spring '10, DURRETT