5 Convergence Theorem
Just as in discrete time, we have a convergence theorem for continuous time Markov chains. In
discrete time, we needed the assumptions that the chain was irreducible, aperiodic, and positive
recurrent. In continuous time, it is impossible for the Markov chain to have a period; and thus we
can drop the requirement that the chain must be aperiodic. Positive recurrence is equivalent to the
existence of an invariant distribution; and irreducibility is equivalent to uniqueness of the invariant
distribution. This is exactly the same as the discrete time theory.
The statement of the convergence theorem is: if a continuous time Markov chain is irreducible and positive recurrent, then a unique invariant distribution β exists, and furthermore, for all states j and regardless of the initial state i, we have:

    lim_{t→∞} P(X_t = j | X_0 = i) = β_j.
(As stated above, we can replace the assumption of positive recurrence by the assumption that
an invariant distribution exists.)
Again, note that this is not a statement about the jump chain. For the jump chain, we would
apply the convergence theorem from our discrete time theory.
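The convergence theorem can be checked numerically. The sketch below uses an assumed two-state chain of our own choosing (not from the notes): states {0, 1}, jumping 0 → 1 at rate λ = 1 and 1 → 0 at rate μ = 2, whose invariant distribution is β = (μ/(λ+μ), λ/(λ+μ)) = (2/3, 1/3). Over many independent runs, the empirical frequency of X_t = 1 at a moderately large t should be close to β_1.

```python
import random

def state_at(t_end, lam=1.0, mu=2.0, x0=0):
    """Simulate a two-state chain (0 -> 1 at rate lam, 1 -> 0 at rate mu)
    by sampling exponential holding times; return the state at time t_end."""
    t, x = 0.0, x0
    while True:
        t += random.expovariate(lam if x == 0 else mu)
        if t > t_end:
            return x          # the current holding interval covers t_end
        x = 1 - x             # jump to the other state

random.seed(0)
n = 20000
freq1 = sum(state_at(10.0) == 1 for _ in range(n)) / n
beta1 = 1.0 / 3.0             # invariant probability of state 1: lam/(lam+mu)
print(freq1, beta1)           # the two numbers should be close
```

Since the relaxation rate of this chain is λ + μ = 3, the distribution at t = 10 is already essentially stationary, so the empirical frequency matches β_1 up to Monte Carlo error.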
6 Law of Large Numbers
As in discrete time, there is a law of large numbers for continuous time Markov chains. We want to state a law of large numbers for the average reward. Let f : X → R be a reward function. Then the law of large numbers for continuous time Markov chains says that the long-run average reward converges to the expected reward under the invariant distribution. Formally, if a continuous time Markov chain is irreducible with invariant distribution β, then for any bounded reward function f, we have:

    lim_{T→∞} (1/T) ∫₀^T f(X_s) ds = Σ_i β_i f(i).
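This law can also be checked numerically along a single trajectory. The sketch below again uses an assumed two-state chain (0 → 1 at rate 1, 1 → 0 at rate 2) with the reward function f(0) = 0, f(1) = 1, so the time average of f is simply the fraction of time spent in state 1, which should approach Σ_i β_i f(i) = β_1 = 1/3.

```python
import random

def time_average(T, lam=1.0, mu=2.0):
    """Fraction of [0, T] that a two-state chain (0 -> 1 at rate lam,
    1 -> 0 at rate mu) spends in state 1, i.e. the time average of
    f with f(0) = 0, f(1) = 1, starting from state 0."""
    t, x, in_state_1 = 0.0, 0, 0.0
    while t < T:
        hold = random.expovariate(lam if x == 0 else mu)
        if x == 1:
            in_state_1 += min(hold, T - t)   # clip the last sojourn at T
        t += hold
        x = 1 - x
    return in_state_1 / T

random.seed(1)
avg = time_average(50000.0)
print(avg)   # should be close to beta_1 = 1/3
```

Note that, unlike the convergence-theorem check, this is one long run rather than many independent runs: the theorem concerns the time average along a single path.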
As in discrete time, there is an important connection between the law of large numbers and first return times. Given a continuous time Markov chain {X_t}, we let J_1, J_2, J_3, . . . denote the times at which the chain makes a jump from one state to another. As in discrete time, we formally define the first return time for a state i. In discrete time, remember, the first return time was the first time that we returned to i, if we started in a state j ≠ i; and it was the second time that we returned to i, if we started in i. In continuous time, we make the same definition, but keeping track of the times of visits to states. Thus, we formally define the first return time T_{i1} as follows:
    T_{i1} = first time after J_1 that the chain returns to i,  if X_0 = i;
             first time after 0 that the chain returns to i,    if X_0 ≠ i.
The expected return time for state i is E[T_{i1} | X_0 = i].
We define the reward until the first return time to i as Y_{i1}:

    Y_{i1} = ∫₀^{T_{i1}} f(X_s) ds.
Note that this matches the definition of the reward until the first return time to
i
that we made
in discrete time.
As in discrete time, the law of large numbers for continuous time Markov chains is proven using first return times. In particular, it is also true that:

    lim_{T→∞} (1/T) ∫₀^T f(X_s) ds = E[Y_{i1} | X_0 = i] / E[T_{i1} | X_0 = i].
That is, the long-run average reward is the ratio of the expected reward we earn in one excursion from i back to i to the expected duration of that excursion.
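The excursion identity can be checked on the same assumed two-state chain (0 → 1 at rate λ = 1, 1 → 0 at rate μ = 2, reward f(0) = 0, f(1) = 1). Starting from state 0, one excursion back to 0 consists of an Exp(λ) sojourn at 0 (earning nothing) followed by an Exp(μ) sojourn at 1, so the estimated ratio E[Y_{01} | X_0 = 0] / E[T_{01} | X_0 = 0] should again come out near 1/3, matching the long-run average reward.

```python
import random

random.seed(2)
lam, mu = 1.0, 2.0
n = 20000
total_reward = 0.0   # accumulates Y over excursions (time spent in state 1)
total_length = 0.0   # accumulates T, the excursion durations
for _ in range(n):
    hold0 = random.expovariate(lam)   # sojourn at state 0, reward f(0) = 0
    hold1 = random.expovariate(mu)    # sojourn at state 1, reward f(1) = 1
    total_reward += hold1
    total_length += hold0 + hold1
ratio = total_reward / total_length
print(ratio)   # should be close to 1/3
```

Analytically, E[Y] = 1/μ = 0.5 and E[T] = 1/λ + 1/μ = 1.5, so the ratio is 1/3 = λ/(λ+μ) = β_1, in agreement with the law of large numbers above.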