1.10
Chapter Summary
A Markov chain with transition probability $p$ is defined by the property that, given the present state, the rest of the past is irrelevant for predicting the future:
$$P(X_{n+1} = y \mid X_n = x, X_{n-1} = x_{n-1}, \ldots, X_0 = x_0) = p(x, y)$$
The $m$-step transition probability
$$p^m(x, y) = P(X_{n+m} = y \mid X_n = x)$$
is the $m$th power of the transition matrix $p$.
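As a quick illustration of this fact, the sketch below (not from the text; the two-state matrix is a hypothetical example) computes $p^m$ as a matrix power and checks that each row of $p^m$ is still a probability distribution:

```python
import numpy as np

# Hypothetical transition matrix p for a two-state chain {0, 1}.
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# p^m(x, y) = P(X_{n+m} = y | X_n = x) is entry (x, y) of the m-th power.
m = 3
p_m = np.linalg.matrix_power(p, m)

# Each row of p^m is still a probability distribution.
print(p_m)
print(p_m.sum(axis=1))  # → [1. 1.]
```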
Recurrence and transience
The first thing we need to determine about a Markov chain is which states are recurrent and which are transient. To do this we let
$$T_y = \min\{n \ge 1 : X_n = y\}$$
and let
$$\rho_{xy} = P_x(T_y < \infty)$$
When $x \neq y$ this is the probability that $X_n$ ever visits $y$ starting from $x$. When $x = y$ this is the probability that $X_n$ returns to $y$ when it starts at $y$. We restrict to times $n \ge 1$ in the definition of $T_y$ so that we can say: $y$ is recurrent if $\rho_{yy} = 1$ and transient if $\rho_{yy} < 1$.
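These quantities are easy to approximate by simulation. The sketch below (not from the text; the three-state matrix is a hypothetical example, and the infinite horizon in $T_y < \infty$ is truncated at a finite one) estimates $\rho_{xy}$ by Monte Carlo:

```python
import random

# Hypothetical transition matrix: from state 0 the chain can escape to
# {1, 2}, which is closed, so rho_00 < 1 and state 0 is transient.
p = [[0.5, 0.5, 0.0],
     [0.0, 0.3, 0.7],
     [0.0, 0.6, 0.4]]

def step(x, rng):
    """One transition of the chain from state x."""
    return rng.choices(range(len(p)), weights=p[x])[0]

def estimate_rho(x, y, trials=10_000, horizon=200, seed=0):
    """Monte Carlo estimate of rho_xy = P_x(T_y < infinity),
    truncating T_y at a finite horizon."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        state = x
        for _ in range(horizon):   # times n >= 1 up to the horizon
            state = step(state, rng)
            if state == y:
                hits += 1
                break
    return hits / trials

# Here rho_00 = 0.5 exactly: the chain returns to 0 only if the first
# jump stays at 0; otherwise it is trapped in {1, 2} forever.
print(estimate_rho(0, 0))
```

For this chain the truncation is harmless, since any return to 0 must happen at time 1.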
Transient states in a finite state space can all be identified using

Theorem 1.3. If $\rho_{xy} > 0$ but $\rho_{yx} = 0$, then $x$ is transient.
Once the transient states are removed we can use

Theorem 1.4. If $C$ is a finite, closed, and irreducible set, then all states in $C$ are recurrent.
Here $A$ is closed if $x \in A$ and $y \notin A$ implies $p(x, y) = 0$, and $B$ is irreducible if $x, y \in B$ implies $\rho_{xy} > 0$.
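The test in Theorem 1.3 can be run mechanically on a finite chain. The sketch below (not from the text; the four-state matrix is a hypothetical example) uses the fact that $\rho_{xy} > 0$ exactly when $y$ is reachable from $x$ in the directed graph of positive transition probabilities, and flags $x$ as transient when some $y$ is reachable from $x$ but $x$ is not reachable from $y$:

```python
def reachable(p, x):
    """States y with rho_xy > 0, i.e. reachable from x in >= 1 step."""
    n = len(p)
    seen, stack = set(), [x]
    while stack:
        u = stack.pop()
        for v in range(n):
            if p[u][v] > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def transient_states(p):
    """States x that fail the Theorem 1.3 test: some y is reachable
    from x (rho_xy > 0) but x is not reachable from y (rho_yx = 0)."""
    n = len(p)
    return {x for x in range(n)
            if any(x not in reachable(p, y) for y in reachable(p, x))}

# Hypothetical 4-state example: states 0 and 1 leak into the closed,
# irreducible set {2, 3}, so they are transient.
p = [[0.5, 0.2, 0.3, 0.0],
     [0.1, 0.6, 0.0, 0.3],
     [0.0, 0.0, 0.4, 0.6],
     [0.0, 0.0, 0.8, 0.2]]

print(sorted(transient_states(p)))   # → [0, 1]
```

The remaining states $\{2, 3\}$ form a finite, closed, irreducible set, so by Theorem 1.4 they are recurrent.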
The keys to the proof of Theorem 1.4 are that: (i) if $x$ is recurrent and $\rho_{xy} > 0$, then $y$ is recurrent, and (ii) in a finite closed set there has to be at least one recurrent state. To prove these results, it was useful to know that if