A Viterbi Algorithm with Soft-Decision Outputs and its Applications

Joachim Hagenauer and Peter Hoeher
German Aerospace Research Establishment (DLR)
Institute for Communications Technology
D-8031 Oberpfaffenhofen, West Germany
Tel. ++49/8153/28-893, Fax ++49/8153/28-243

Abstract

The Viterbi algorithm (VA) is modified to deliver not only the most likely path sequence in a finite-state Markov chain, but also either the a posteriori probability for each bit or a reliability value. With this reliability indicator the modified VA produces soft decisions for use in the decoding of outer codes. The inner Soft-Output Viterbi Algorithm (SOVA) accepts and delivers soft sample values and can be regarded as a device for improving the SNR, similar to an FM demodulator. Several applications are investigated to show the gain over the conventional hard-deciding VA, including concatenated convolutional codes, concatenation of convolutional and simple block codes, concatenation of Trellis-Coded Modulation with convolutional FEC codes, and coded Viterbi equalization. For these applications we found additional gains of 1 to 4 dB compared to the classical hard-deciding algorithms. For comparison, we also investigated the more complex symbol-by-symbol MAP algorithm, whose optimal a posteriori probabilities can be transformed into soft outputs.

1 Introduction

The Viterbi algorithm [1] has become a standard tool
in communication receivers, performing such functions as demodulation, decoding, and equalization. An increasing number of applications use two VAs in a concatenated way. Examples are coded modulation systems without bandwidth expansion, such as coded quadrature amplitude modulation (QAM) [2] and continuous phase modulation (CPM) [3]. In these systems Viterbi receivers replace classical modulation schemes. An additional outer coding system could use convolutional codes with Viterbi decoding to perform forward error correction (FEC).

There are normally two drawbacks with such a solution: first, the inner VA for demodulation produces bursts of errors, to which the outer VA is very sensitive; second, the inner VA produces hard decisions, preventing the outer VA from using its capability to accept soft decisions. The first drawback can be eliminated by interleaving between the inner and outer VA. To eliminate the second drawback, the inner VA needs to output soft decisions, i.e., reliability information. This should improve the performance of the outer VA considerably.

Another important situation where a similar problem arises is when convolutional codes for FEC are used on channels requiring equalization. This is the case in the future pan-European mobile radio system (GSM) [4]. The normal Viterbi equalizer produces only hard decisions, leading to a reduced performance of the outer VA performing the FEC.

The performance of the above-mentioned systems, and of other systems such as multistage Viterbi schemes, FEC/ARQ schemes, etc., will improve by using a Soft-Output Viterbi Algorithm (SOVA). This is a VA which uses soft (or hard) decisions to calculate its metrics, but also decides in a soft way by providing reliability information together with the output bits. The reliability information can be the log-likelihood function.

We wish to modify the VA as little as possible. The
goal is to add a soft-deciding unit to the Viterbi receiver. In earlier work Forney considered "augmented outputs" from the VA itself [1], such as the depth at which all paths are merged, the difference in length between the best two paths, and a list of the L best paths. The last topic was generalized in [5]. Yamamoto et al. derived a simple indicator for block errors by introducing labels [6]. That scheme is restricted to requesting a retransmission of the whole block. Schaub et al. [7] took up the ideas of Forney by declaring an erasure output on those bits to be decided for which the metric difference at the point of merging never exceeds a threshold.

47.1.1. 1680 CH26823/89/00001680 $1.00 © 1989 IEEE

2 The Soft-Output Symbol-by-Symbol MAP and Viterbi Algorithm (SOVA)

2.1 Detection with Reliability Information

We assume that in the receiver chain a Viterbi detector is used. This could be a Viterbi equalizer, a
Viterbi demodulator (i.e., for CPM or Trellis-Coded QAM), or the Viterbi decoder for an inner convolutional code. This device is followed by a second-stage detector, which could be a demodulator or decoder after the equalizer, a decoder after the demodulator, an outer decoder after the inner decoder, or a source decoder. We assume that this second device can improve its performance if some reliability information is available in addition to the hard decisions from the first stage.

A straightforward way is shown in Fig. 1. The first-stage detector delivers estimates û' of the symbol sequence u' by processing the received sequence y in the Viterbi detector. We further want the detector to deliver, for each symbol, an estimate of the probability that this symbol has been incorrectly detected,

    p̂'_k = Prob{ û'_k ≠ u'_k | y }.  (1)

Since the VA of the first stage produces error events,
and therefore correlated errors in û'_k and correlated values p̂'_k that might degrade the performance of the next stage, we apply sufficient deinterleaving to achieve statistical independence. A matching interleaving device is applied at the transmitter side. In the following we drop the primes. At the dashed line AA' in Fig. 1, the first-stage detector delivers symbols û_k with statistically independent error probabilities p̂_k. Now, for the second-stage detector the channel is a discrete (binary) memoryless compound channel [8] with output pairs (û_k, p̂_k). If the second-stage detector performs maximum-likelihood (ML) detection, an optimum ML metric is [9]

    Λ^(m) = Σ_k u_k^(m) · û_k · log((1 − p̂_k)/p̂_k),  (2)

where u_k^(m) = ±1 is the k-th symbol of the m-th information sequence and û_k is the hard (±1) decision of the first Viterbi detector.

We will call the first-stage VA a Soft-Output Viterbi Algorithm (SOVA) because it delivers soft decisions

    Λ_k = û_k · L̂_k,  (3)

with

    L̂_k = log((1 − p̂_k)/p̂_k) ≥ 0,  (4)

to be processed by the next ML detector stage.

2.2 Algorithms

Given a finite-state discrete-time Markov process observed in white noise, the VA is the optimal recursive algorithm. It provides the state sequence with the lowest cost [1]. The VA is optimal in the ML sequence sense. It determines the symbol sequence û = {û_k} that maximizes the log-likelihood function log p(y|û).
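The reliability transform (4), the soft decision (3), and the ML metric (2) amount to only a few lines of code. The following Python sketch is our illustration (function names and sample values are not from the paper):

```python
import math

def reliability(p):
    """Reliability value L = log((1-p)/p) of a bit whose estimated
    error probability is p (Eq. (4)); L >= 0 for p <= 0.5."""
    return math.log((1.0 - p) / p)

def soft_decision(u_hat, p):
    """Soft decision Lambda = u_hat * L (Eq. (3)), with u_hat in {-1, +1}."""
    return u_hat * reliability(p)

def ml_metric(u_m, u_hat, p):
    """ML metric (Eq. (2)) of a candidate sequence u_m, given the
    hard decisions u_hat and per-bit error probabilities p."""
    return sum(u * soft_decision(uh, pk) for u, uh, pk in zip(u_m, u_hat, p))

# Hypothetical first-stage output: hard decisions plus error probabilities.
u_hat = [+1, -1, +1, +1]
p     = [0.01, 0.4, 0.1, 0.25]
# The candidate matching all hard decisions scores at least as high as any other:
assert ml_metric(u_hat, u_hat, p) >= ml_metric([+1, +1, +1, +1], u_hat, p)
```

Note that a totally unreliable bit (p = 0.5) contributes nothing to the metric, while a very reliable bit dominates it.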
2.2.1 The Soft-Output Symbol-by-Symbol MAP Algorithm

There exists an optimum general algorithm providing the a posteriori probabilities (APPs) for each bit to be decided [10], [1, Appendix]. This symbol-by-symbol MAP algorithm was originally developed to minimize the bit-error probability instead of the sequence-error probability. The algorithm seems less attractive than the VA due to its increased complexity. However, we easily obtain the optimal APPs P(u_k | y) for each bit to be decided. A soft decision in the form of a log-likelihood ratio is obtained by

    Λ_k = û_k · log( P(û_k | y) / (1 − P(û_k | y)) ),  (5)

where û_k is the MAP estimate for which P(u_k | y) is maximum. The probability P(u_k | y) can be calculated by a forward and a backward recursion following [10] and [1].

2.2.2 The Soft-Output Viterbi (SOVA) Algorithm

We shall first give a motivation for the algorithm before stating it formally. For the sake of simplicity we restrict ourselves to trellises with two branches ending in each node. This includes a K/N code punctured from a 1/N code, because it still uses the trellis of the 1/N code [11], [12]. The number of states S is S = 2^ν, where ν is the code memory.

Assume that the classical VA makes a final decision
S : 2", where 1/ is the code memory. 47.1.2. 1681 Assume that the classical VA makes a ﬁnal decision
with delay 6, 6 being large enough so that all 2" sur
vivor paths have been merged with sufficiently high
probability. As shown in Fig. 2, the VA has to select
a survivor for state 3);, 0 g s), S S = 2” — 1 at time
h. It does so by selecting the path with the smallest
ML metric, which for the AWGN channel, is E k N
Mm = To 2 2m, _ 23:52, m: 1,2, (6)
j=k6n=l where 3):") is the nth bit of N bits on the branch of
the m—th path at time 3', gm is the received value at
the same position, and E,/No is the signal—to—noise ratio (SNR). With this form we have
Prob{path m} ~ e‘M’“, m = 1, 2. (7) We label the path with the smaller metric by m = 1.
This means M1 3 M2, which implies that the VA
selects path 1 (neglecting ties). Then, the probability
of selecting the wrong survivor path is . eM= _ 1 _ 1 a
e‘Mi+eM3 “1+eM3—M1 —1+eA’ () PA]: = with A : M2  M1 2 0. pm approaches 0.5 if M1 z
M2 and 0 if M2  M1 >> 1. With probability p”, the
VA has made errors in all the e positions where the
information bits of path 2 differ from path 1; in other
words if u§"¢u§2’. j=jl.....j.. (9)
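Equation (8) maps the metric difference at a merge directly to a survivor-error probability; a one-function Python sketch (illustrative, not part of the paper):

```python
import math

def wrong_survivor_prob(delta):
    """Probability of having selected the wrong survivor at a merge
    with metric difference delta = M2 - M1 >= 0 (Eq. (8))."""
    return 1.0 / (1.0 + math.exp(delta))

# A tie (delta = 0) is a coin flip; a large metric gap is almost surely correct.
assert wrong_survivor_prob(0.0) == 0.5
assert wrong_survivor_prob(10.0) < 1e-4
```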
Positions where u_j^(1) = u_j^(2) are not affected by the survivor decision. Let δ_m be the length of these two paths until they merge. Then we have e differing information values, and δ_m − e non-differing values. Assume we have stored the probabilities p̂_j of previous erroneous decisions along path 1. Under the assumption that path 1 has been selected, we can update these probabilities for the e differing decisions on this path according to

    p̂_j ← p̂_j (1 − p_{s_k}) + (1 − p̂_j) p_{s_k},  j = j_1, …, j_e,  (10)

with 0 ≤ p̂_j ≤ 0.5. This formula requires statistical independence between the random variables p̂_j and p_{s_k}, which is approximately true for most practical codes. The recursion can be performed directly on the log-likelihood ratio

    L̂_j = log((1 − p̂_j)/p̂_j),  0 ≤ L̂_j < ∞.  (11)

Using (8), (10), and (11) we obtain after some calculation

    L̂_j ← f(L̂_j, Δ) = (1/a) · log( (1 + e^(a·L̂_j + Δ)) / (e^Δ + e^(a·L̂_j)) ),  (12)

with Δ = M_2 − M_1 ≥ 0 and j = j_1, …, j_e. The function f(L̂_j, Δ) can be tabulated with L̂_j and Δ as input variables and need not be calculated at each step. The factor a prevents overflow with increasing SNR. A proper choice to achieve asymptotically E[L̂_j] = 1 is

    a = 4 d_free · E_s/N_0,  (13)

where d_free is the free distance of the code. A good approximation of (12) is

    f(L̂_j, Δ) ≈ min(L̂_j, Δ/a),  (14)

which requires no table and no knowledge of the SNR.

The Soft-Output Viterbi Algorithm (SOVA) can
approximation of (12) is mi,» A) = min (132'. Ma). (14) and requires no table and no knowledge of the SNR. The Soft—Output Viterbi Algorithm (SOVA) can
now be formulated using the notation in [1]. Only
the steps marked by (Jr) augment the classical Viterbi
algorithm. Storage: In (time index, modulo 6 + 1)
13(5):) 2 {ilk—6(3):); . . . , ﬁk(8k)},
(hard decision values, a 6 {i1})
13(3):):{ik—6(5k))"'yik(5k)}y OS 3A: S 5—]
(soft reliability values, 0 g 13 5 oo) (*) I‘(sk), 0 g 3;. S S — 1, (accumulated metric values) OSSkSS—l Recursion: 3.) Classical Viterbi step:
For each state 3k
Compute
P(3k1: 5k) =
r(s.1)+ s; My... — #252
for both transitions (51.4, 31;).
Find I‘(sk) = min I‘(sk_1,sk).
Store I‘(sk) and the corresponding
survivor ﬂk(sk). b) Soft—deciding update: (1:)
For each state sk
Store A = max F(sk_1,sk) — min I‘(sy¢_1,sk).
Initialize 131.91.) = +00.
Forjzk—utojzk—ém
Compare the two paths merging in 5,,
if 125.139,) ¢ 1253(5)) then update
Li 3: “Li: A) 47.1.3. 1682 Technical Realization: With an n, bit soft decision and ﬁxed~point arith
metic, each survivor path of length :5 consists of n, 6
bits. The ﬁrst of 17.‘ bits is the sign bit or the hard— decision bit. The set of the likelihood values is then
Lk E {0,1,. .,2""1—1}.Lk= 0 indicates the most
unreliable value, and Ly, = 2’" —1 the most reli—
able value. Given the metric difference, A, quantized
with nA bits, the likelihood update is done with the
table—look—up. The table is calculated only once by
using Eqn. 12 and is stored in a ROM. Thus the extra
effort of the SOVA relative to the VA is: '0 Storage: — 2" 6 . 17., bits instead of 2"  6 bits.
— Look—up table with 2"“"L"1 vectors each
with n, — 1 bits (unnecessary when using (14). o Computation Complexity: —— Maximal 2"  (6 — 11) bit comparisons.
— 2" e updates of 124. The SOVA can be implemented in a pipelined struc
ture [13] (clocked with symbol rate 1/T), see Fig. 3.
Therefore, high speed implementation is possible.
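The tabulated update (12) and the table-free approximation (14) can be compared numerically. The following Python sketch is illustrative; the scaling factor a and the test values are assumptions, not from the paper:

```python
import math

def f_exact(L, delta, a):
    """Reliability update of Eq. (12), the function stored in the ROM
    table, for a given normalization factor a."""
    return (1.0 / a) * math.log((1.0 + math.exp(a * L + delta))
                                / (math.exp(delta) + math.exp(a * L)))

def f_approx(L, delta, a):
    """Approximation of Eq. (14): min(L, delta/a); needs no table."""
    return min(L, delta / a)

# The approximation tracks the exact update closely over typical operating points:
for L, delta in [(0.5, 4.0), (3.0, 1.0), (1.0, 1.0)]:
    assert abs(f_exact(L, delta, 2.0) - f_approx(L, delta, 2.0)) < 0.35
```

The largest deviation occurs near L ≈ Δ/a, where the exact update is somewhat smaller than the min; far from that point the two agree almost exactly.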
The Transition Metric Unit (TMU) and the Add-Compare-Select (ACS) unit remain unchanged. The novelty is a so-called Path Update Unit (PAU). The PAU "slides" over the stored (hard) information bits in the path RAM for the two paths diverging at time k and remerging at time k − δ_m, as shown in Fig. 2. The range where the information bits u^(1) and u^(2) can differ is j = k − δ_m to j = k − ν. If the bits differ, a table look-up is enabled to perform the update according to Eqn. (12) or (14). If the PAU is implemented in parallel, the ACS unit remains the bottleneck. Therefore soft-deciding decoding does not necessarily limit the speed.

2.2.3 Input-Output SNR Conversion

The time-discrete analog output of the SOVA at a given point in time is a random variable, for which we can define an output SNR as the ratio m²/(2σ²). This random variable is not always exactly Gaussian.
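Such an output SNR can be estimated from soft-output samples with a simple moment estimator; the following Python sketch is an illustration, not the paper's measurement procedure:

```python
import math, random

def output_snr(samples):
    """Estimate the output SNR m^2 / (2 sigma^2) from a list of
    soft-output samples (illustrative moment estimator)."""
    n = len(samples)
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / n
    return m * m / (2.0 * var)

# Synthetic soft outputs: mean 2, standard deviation 1 -> SNR = 4/(2*1) = 2.
random.seed(0)
xs = [random.gauss(2.0, 1.0) for _ in range(200_000)]
snr = output_snr(xs)
assert abs(snr - 2.0) < 0.1
```

In decibels this would be 10·log10(snr), the quantity plotted on the conversion curves.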
However, for a next-stage ML algorithm that decodes a code with distance d, the sum Σ_{k=1}^{d} Λ_k is the relevant decision variable. Because of the central limit theorem this sum is almost Gaussian and generates sequence errors with probability

    P_d = Prob( Σ_{k=1}^{d} Λ_k < 0 ) = (1/2) · erfc( √( d · m²/(2σ²) ) ).  (15)

Therefore the output SNR m²/(2σ²) is a useful figure of merit for the SOVA. This result can be interpreted as saying that the input SNR (for the innermost section: the SNR of the channel) is converted into an output SNR. For a closer examination we have plotted in Fig. 4 the simulated input/output conversion of the SNR for the best known R = 1/3 and R = 1/2 convolutional codes with memory ν = 3. Simulation results with the symbol-by-symbol MAP algorithm show only minor differences of less than 0.3 dB.

The SNR input/output conversion is similar to that of analog FM. Analogous to the FM threshold we can define a SOVA threshold. Enlarging the code distance by decreasing the code rate is analogous to increasing the modulation index in FM. In both cases we exchange bandwidth for output SNR.

The common way to interpret channel coding is as a means of minimizing the BER. Concatenated coding is then viewed as a way to clean up the residual errors [14]. The observation of the preceding paragraph leads to a new interpretation of channel coding as a means of improving the SNR, since we have analog discrete-time input and output signals in the inner sections and perform hard decisions possibly only in the outermost section.

3 Applications

This novel receiver performs better than the conventional Viterbi decoder, demodulator, or equalizer
whenever some form of concatenation is applied. This can include:

• modulations with memory,
  e.g. Trellis-Coded Modulation (TCM), or Continuous Phase Modulation (CPM) such as Tamed FM (TFM), Gaussian MSK, etc.;

• channels with memory,
  e.g. filter channels with intersymbol interference (ISI), frequency-selective channels, or storage media with memory such as magnetic recording;

• coding with memory,
  e.g. convolutional codes;

and all possible combinations thereof.

3.1 Multistage Coding
3.1.1 Convolutional Inner and Outer Codes

Multistage or concatenated coding [15] with inner and outer convolutional codes seems very promising. The inner code uses the soft-quantized received samples, possibly supported by channel state information in the case of channels with memory [9]. With the algorithms above, the outer decoder is also able to perform soft-decision ML decoding. First, we use the R = 1/2, ν = 3 code with the input/output SNR conversion presented in the last section as the inner code, and the punctured R = 2/3, ν = 3 code [11] as the outer code. Both codes are separated by proper interleaving. The overall code rate is R = R_i · R_o = 1/3.

For the simulations we used the SOVA and the
MAP without significant differences. It is advantageous to use the same VA for the 1/2 and the punctured 2/3 code, with the PAU turned on only for the inner decoder. In [16] we presented a structure which also shares the interleaver in an efficient way and allows unequal error protection. The results are shown in Fig. 5, where we have plotted the performance of the concatenated system derived from Monte Carlo simulations. For comparison we have also plotted the performance curves for the best known R = 1/3 codes with memory ν = 3, 4, and 6.

The question of optimal rate sharing between inner
and outer codes has been addressed elsewhere [17].

3.1.2 Convolutional Codes and Outer Parity-Check Codes

ML decoding of block codes is not yet available in an elegant way. However, we give an example showing that with simple algorithms even parity-check codes gain from the soft decisions of the inner VA.
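The soft-decision rule used in this subsection (recompute the parity over the sign bits and, on failure, flip the least reliable bit) can be sketched in Python as follows; this is an illustration with an assumed even-parity convention, not the paper's implementation:

```python
def decode_spc(soft):
    """Soft-decision decoding of a single-parity-check code:
    if the parity of the sign bits fails, flip the bit with the
    smallest reliability |L_k|, i.e., the largest error probability."""
    hard = [1 if x >= 0 else -1 for x in soft]
    parity = 1
    for b in hard:
        parity *= b
    if parity != 1:  # even-parity check fails (illustrative convention)
        worst = min(range(len(soft)), key=lambda k: abs(soft[k]))
        hard[worst] = -hard[worst]
    return hard

# The unreliable sample (0.2) is flipped when it violates the parity check:
assert decode_spc([2.1, -1.7, 0.2, 3.0, 2.4]) == [1, -1, -1, 1, 1]
```

With hard decisions only, the decoder could detect the parity failure but would have no basis for choosing which bit to flip.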
Suppose that a message of N bits is tailed by one parity bit, resulting in a rate R_o = N/(N + 1) parity-check code. This code acts as the outer code. Sufficient interleaving and convolutional (inner) encoding form the second step of the encoding procedure. At the receiver we perform soft-decision decoding. A parity check is computed over the sign bits of the deinterleaved samples. If the parity check fails, we toggle the sign of the bit with the maximum probability of error p̂_k, which is the bit with the lowest magnitude L_k. Finally, we output the signs as the hard decisions. We have plotted the curves for R_i = 1/2 and R_o = 8/9 in Fig. 6. An additional gain of 1.5 dB at a BER of 10⁻⁶ is obtained.

3.1.3 Coded Modulation and Outer FEC

Trellis-Coded Modulation [2] has gained strong interest in recent years because it is power- and bandwidth-efficient. For example, 4PSK modulation can be replaced by coded 8PSK to gain 3 to 6 dB without bandwidth expansion. The drawback is that the conventional Viterbi demodulator of coded 8PSK delivers only hard decisions. With the SOVA we can expect a gain in the next decoding stage.

We have investigated the performance of the 4-state
rate 2/3 feedforward Ungerboeck code [2] with an 8PSK signal constellation and natural binary mapping. This code has a coding gain of 3 dB. The SOVA and the MAP work as in Sections 2.2.1 and 2.2.2, except that we have to modify the transition metric unit and treat the coded information bit and the uncoded bit (parallel transition) separately.

In Fig. 7 we have plotted the Monte Carlo simulation results for the input/output SNR conversion of the SOVA. The plots show results similar to those for the convolutional codes. Note that the parallel bit reaches the asymptotic gain of 3 dB. The asymptotic SNR improvement of the coded bit is about 4.7 dB. An outer FEC code with ML decoding would now work with the improved SNR.

3.2 Viterbi Equalizer and Coding

Equalization is a challenging problem in high-speed
digital communications over time-dispersive channels, e.g. mobile channels. It is well established that the Viterbi equalizer performs the desired ML sequence estimation [18]. However, in coded systems the problem is obvious, since the Viterbi equalizer delivers hard decisions to the outer decoder.

In this section we investigate the SOVA on frequency-selective fading channels. For the channel we assume a tapped-delay-line representation with L + 1 independent Rayleigh tap gains. This channel, an idealization of the mobile multipath channel, can be viewed as the inner code. Now, the modified Gaussian likelihood metric reads (compare to (6))

    M_m = Σ_k (E_s(k)/N_0) · | y_k − Σ_{l=0}^{L} f_k^(l) · x_{k−l}^(m) |²,  (16)

where x_k^(m) is the symbol of the m-th path corresponding to the trellis, f_k^(l) is the l-th tap gain, 0 ≤ l ≤ L, and y_k is the received value at time k; x_k^(m), y_k, and f_k^(l) are given in complex notation. E_s(k)/N_0 is the instantaneous SNR. As the outer code we chose the rate 1/2 convolutional code with memory ν = 4 as proposed by GSM [4]. The results in Fig. 8, simulated for L + 1 independent, Rayleigh-distributed tap gains, show the diversity gain for several val...