Chapter 10
Alternate Characterizations
of Markov Processes
This lecture introduces two ways of characterizing Markov processes other than through their transition probabilities.

Section 10.1 addresses a question raised in the last class, about when being Markovian relative to one filtration implies being Markov relative to another.

Section 10.2 describes discrete-parameter Markov processes as transformations of sequences of IID uniform variables.

Section 10.3 describes Markov processes in terms of measure-preserving transformations (Markov operators), and shows this is equivalent to the transition-probability view.
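As a preview of the Section 10.2 viewpoint, a discrete-parameter Markov process can be generated by feeding IID Uniform(0,1) draws through a deterministic update rule. The following is a minimal sketch only: the two-state transition matrix `P` and the update rule `step` are illustrative assumptions, not taken from the text.

```python
import random

# Hypothetical two-state chain (states 0 and 1); this transition
# matrix is an assumption for illustration, not from the text.
P = [[0.7, 0.3],
     [0.4, 0.6]]

def step(x, u):
    """Deterministic update rule f(x, u): from state x, land in state 0
    iff the uniform draw u falls below the transition probability P[x][0]."""
    return 0 if u < P[x][0] else 1

def sample_path(x0, n, rng=random.random):
    """Drive the chain X_0, X_1, ..., X_n with IID Uniform(0,1) draws."""
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1], rng()))
    return path

print(sample_path(0, 10))
```

Because each `U_t` is independent of the past and the update depends only on the current state, the resulting sequence is Markov with respect to its natural filtration.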
10.1 The Markov Property Under Multiple Filtrations
In the last lecture, we defined what it is for a process to be Markovian relative to a given filtration $\mathcal{F}_t$. The question came up in class of when knowing that $X$ is Markov with respect to one filtration $\mathcal{F}_t$ will allow us to deduce that it is Markov with respect to another, say $\mathcal{G}_t$.
To begin with, let’s introduce a little notation.
Definition 106 (Natural Filtration) The natural filtration for a stochastic process $X$ is $\mathcal{F}^X_t \equiv \sigma\left(\{X_u,\, u \leq t\}\right)$.
Obviously, every process $X$ is adapted to $\mathcal{F}^X_t$.
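On a finite sample space, the natural filtration can be pictured concretely: $\mathcal{F}^X_t$ corresponds to the partition of $\Omega$ into sets of outcomes that agree on $X_0, \ldots, X_t$, and these partitions refine as $t$ grows. A small sketch, assuming for illustration that $\Omega$ is the set of length-3 binary sequences and $X_t(\omega) = \omega_t$ (these choices are not from the text):

```python
from itertools import product

# Sample space: all length-3 coin-flip sequences; X_t(omega) = omega[t].
Omega = list(product((0, 1), repeat=3))

def partition(t):
    """Blocks of the partition generating F^X_t: two outcomes lie in the
    same block iff they agree on coordinates 0..t."""
    blocks = {}
    for omega in Omega:
        blocks.setdefault(omega[:t + 1], []).append(omega)
    return list(blocks.values())

# Each extra observation refines the partition: F^X_0 ⊆ F^X_1 ⊆ F^X_2.
for t in range(3):
    print(t, len(partition(t)))  # 2, then 4, then 8 blocks
```

Refinement of partitions is exactly the finite-space picture of the comparison of filtrations defined next.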
Definition 107 (Comparison of Filtrations) A filtration $\mathcal{G}_t$ is finer than, or more refined than, or a refinement of $\mathcal{F}_t$, written $\mathcal{F}_t \prec \mathcal{G}_t$, if, for all $t$, $\mathcal{F}_t \subseteq \mathcal{G}_t$, and at least sometimes the inclusion is strict. $\mathcal{F}_t$ is then coarser or less fine than $\mathcal{G}_t$. If $\mathcal{F}_t \prec \mathcal{G}_t$ or $\mathcal{F}_t = \mathcal{G}_t$, we write $\mathcal{F}_t \preceq \mathcal{G}_t$.
Lemma 108 If $X$ is adapted to $\mathcal{G}_t$, then $\mathcal{F}^X_t \preceq \mathcal{G}_t$.
Proof: For each $t$, every $X_u$ with $u \leq t$ is $\mathcal{G}_t$-measurable, since $\mathcal{G}_u \subseteq \mathcal{G}_t$. But $\mathcal{F}^X_t$ is, by construction, the smallest $\sigma$-algebra with respect to which all the $X_u$, $u \leq t$, are measurable, so, for every $t$, $\mathcal{F}^X_t \subseteq \mathcal{G}_t$, and the result follows. $\Box$
Theorem 109 If $X$ is Markovian with respect to $\mathcal{G}_t$, then it is Markovian with respect to any coarser filtration to which it is adapted, and in particular with respect to its natural filtration.
Proof: Use the smoothing property of conditional expectations: for any two $\sigma$-fields $\mathcal{F} \subset \mathcal{G}$ and random variable $Y$, $\mathbf{E}[Y|\mathcal{F}] = \mathbf{E}\left[\mathbf{E}[Y|\mathcal{G}]\middle|\mathcal{F}\right]$ a.s. So, if $\mathcal{F}_t$ is coarser than $\mathcal{G}_t$, and $X$ is Markovian with respect to the latter, for any function $f \in L_1$ and time $s > t$,

\begin{align}
\mathbf{E}[f(X_s)|\mathcal{F}_t] &= \mathbf{E}\left[\mathbf{E}[f(X_s)|\mathcal{G}_t]\middle|\mathcal{F}_t\right] \quad \text{a.s.} \tag{10.1}\\
&= \mathbf{E}\left[\mathbf{E}[f(X_s)|X_t]\middle|\mathcal{F}_t\right] \tag{10.2}\\
&= \mathbf{E}[f(X_s)|X_t] \tag{10.3}
\end{align}
where the last line uses the facts that (i) $\mathbf{E}[f(X_s)|X_t]$ is a function of $X_t$, (ii) $X$ is adapted to $\mathcal{F}_t$, so $X_t$ is $\mathcal{F}_t$-measurable, and (iii) if $Y$ is $\mathcal{F}$-measurable, then $\mathbf{E}[Y|\mathcal{F}] = Y$. Since this holds for all $f \in L_1$, it holds in particular for $\mathbf{1}_A$, where $A$ is any measurable set, and this establishes the conditional independence which constitutes the Markov property. Since (Lemma 108) the natural filtration is the coarsest filtration to which $X$ is adapted, the remainder of the theorem follows. $\Box$
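The conclusion of the proof, $\mathbf{E}[f(X_s)|\mathcal{F}_t] = \mathbf{E}[f(X_s)|X_t]$, can be checked numerically by exact path enumeration. The sketch below conditions on the full history versus the current state alone; the two-state transition matrix `P`, initial distribution `init`, and test function `f` are illustrative assumptions, not from the text.

```python
from itertools import product

# Illustrative two-state chain (assumed for the demonstration).
P = [[0.7, 0.3],
     [0.4, 0.6]]
init = [0.5, 0.5]
f = lambda x: float(x)  # test function f(X_s); here the identity

def prob(path):
    """Probability of a full path (x0, x1, x2) under the chain."""
    p = init[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a][b]
    return p

def cond_on_history(a, b):
    """E[f(X_2) | X_0 = a, X_1 = b]: condition on the whole history."""
    num = sum(prob((a, b, c)) * f(c) for c in (0, 1))
    den = sum(prob((a, b, c)) for c in (0, 1))
    return num / den

def cond_on_state(b):
    """E[f(X_2) | X_1 = b]: condition on the present state only."""
    num = sum(prob((a, b, c)) * f(c) for a in (0, 1) for c in (0, 1))
    den = sum(prob((a, b, c)) for a in (0, 1) for c in (0, 1))
    return num / den

# Markov property: the history-conditional expectation depends on the
# past only through X_1, and agrees with the state-conditional one.
for a, b in product((0, 1), repeat=2):
    assert abs(cond_on_history(a, b) - cond_on_state(b)) < 1e-12
```

The assertions pass for every history $(a, b)$, mirroring Eqs. (10.1)-(10.3): conditioning on the coarser information $X_1$ gives the same answer as conditioning on the full past.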
The converse is false, as the following example shows.
Spring '06, Shalizi