Kevin Buckley  2007
ECE 8770 Topics in Digital Communications  Sp. 2007
Lecture 5
2 Symbol Detection and Sequence Estimation
2.4 The Viterbi Algorithm
In Subsection 2.3 we introduced MLSE in general terms, and considered its application to memoryless ("non-interacting symbol") modulation schemes, DPSK, PRS and CPM. In Section 3 of the course we will again consider MLSE for Intersymbol Interference (ISI) channels. Here we introduce the Viterbi algorithm as a computationally efficient approach to solving a certain class of ML and MAP sequence estimation problems. We first introduce it in general terms, and then apply it to DPSK, PRS and CPM examples.
2.4.1 Sequence Estimation for Hidden Markov Models (HMM's)
Markov Random Processes:
Consider a continuous-time random process $X(t)$. We know from an introductory discussion on random processes that the complete statistical characterization of $X(t)$ – the set of all joint PDF's of all possible combinations of samples, for all possible numbers of samples of $X(t)$ – is in general not practical.
Let $t_1$ and $t_2$ be two points in time, and denote $X_1 = X(t_1)$ and $X_2 = X(t_2)$ as the random variable samples of $X(t)$ at these times. The PDF of random variable $X(t_2)$ given (a value of) $X(t_1)$ is denoted $p(x_2/x_1)$. This is just the conditional PDF we've been employing. Now consider $K$ samples of $X(t)$, $X_n; ~ n = 1, 2, \ldots, K$, taken at times $t_n; ~ n = 1, 2, \cdots, K$, where $t_{n+1} > t_n$. If, for all integer $K$ and all possible $t_n; ~ n = 1, 2, \cdots, K$, we have that
$$ p(x_K / x_{K-1}, x_{K-2}, \cdots, x_1) ~=~ p(x_K / x_{K-1}) ~ , \qquad (1) $$
then $X(t)$ is a Markov process. This indicates that, given $X_{K-1}$, $X_K$ is statistically independent of $X_n; ~ n = K-2, K-3, \cdots, 1$. As a result, for a Markov process,
$$ \begin{aligned}
p(x_K, x_{K-1}, x_{K-2}, \cdots, x_1) &= p(x_K / x_{K-1}, x_{K-2}, \cdots, x_1) \cdot p(x_{K-1} / x_{K-2}, x_{K-3}, \cdots, x_1) \cdots p(x_2 / x_1) \, p(x_1) \\
&= p(x_K / x_{K-1}) \, p(x_{K-1} / x_{K-2}) \cdots p(x_2 / x_1) \, p(x_1) \\
&= p(x_1) \prod_{n=2}^{K} p(x_n / x_{n-1}) ~ .
\end{aligned} \qquad (2) $$
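The factorization in Eq. (2) can be checked numerically for a simple two-state discrete Markov chain. The chain, its transition matrix T and initial PMF below are hypothetical values chosen only for this sketch:

```python
import itertools

# Hypothetical two-state Markov chain with states {0, 1}.
# p1[i]   = p(x_1 = i)               (assumed initial PMF)
# T[i][j] = p(x_n = j / x_{n-1} = i) (assumed transition PMF)
p1 = [0.6, 0.4]
T = [[0.9, 0.1],
     [0.3, 0.7]]

def joint_via_markov(seq):
    """Joint PMF p(x_1, ..., x_K) from Eq. (2):
    p(x_1) * prod_{n=2}^{K} p(x_n / x_{n-1})."""
    prob = p1[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        prob *= T[prev][cur]
    return prob

# A valid joint PMF must sum to 1 over all 2^K length-K sequences.
K = 4
total = sum(joint_via_markov(s) for s in itertools.product([0, 1], repeat=K))
print(round(total, 12))  # -> 1.0
```

Because each row of T and the initial PMF sum to one, the product form of Eq. (2) automatically yields a properly normalized joint PMF for any sequence length.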
Markov processes are much more easily characterized, statistically, than general random processes, and they occur commonly in nature and engineering systems.
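The defining property in Eq. (1) can also be illustrated by simulation: for a Markov chain, the empirical estimate of $p(x_3 / x_2, x_1)$ should agree with that of $p(x_3 / x_2)$. The two-state chain and all numerical values below are assumptions made only for this sketch:

```python
import random

# Hypothetical two-state chain, states {0, 1}; values assumed for illustration.
p1 = [0.6, 0.4]           # p(x_1 = i)
T = [[0.9, 0.1],          # T[i][j] = p(x_n = j / x_{n-1} = i)
     [0.3, 0.7]]

def simulate(n_steps, rng):
    """Draw one length-n_steps realization of the chain."""
    x = [0 if rng.random() < p1[0] else 1]
    for _ in range(n_steps - 1):
        x.append(0 if rng.random() < T[x[-1]][0] else 1)
    return x

rng = random.Random(0)
runs = [simulate(3, rng) for _ in range(200_000)]

# Empirical p(x_3 = 1 / x_2 = 1, x_1 = 0) vs. p(x_3 = 1 / x_2 = 1):
# by Eq. (1) both should approach T[1][1] = 0.7.
cond_full = [r for r in runs if r[1] == 1 and r[0] == 0]
cond_markov = [r for r in runs if r[1] == 1]
p_cond_full = sum(r[2] for r in cond_full) / len(cond_full)
p_cond_markov = sum(r[2] for r in cond_markov) / len(cond_markov)
print(p_cond_full, p_cond_markov)
```

Conditioning additionally on $x_1$ leaves the estimated conditional PMF of $x_3$ essentially unchanged, which is exactly the statistical independence statement following Eq. (1).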