CHAPTER 5: OPTIMUM RECEIVERS FOR THE ADDITIVE WHITE GAUSSIAN NOISE CHANNEL    285
where φ(t; I) is the carrier phase. The filtered received signal for an additive Gaussian noise channel is

r(t) = s(t) + n(t)    (5.3.2)

where n(t) is the additive white Gaussian noise.
DIGITAL COMMUNICATIONS    272
quadrature. Since there is no crosstalk or interference between the signals on the two quadrature carriers, the bit error probability is identical to that in (5.2.57). On the other hand, the symbol error probability for M =
The phase ambiguity problem resulting from the estimation of the carrier phase φ can be overcome by encoding the information in phase differences between successive signal transmissions as opposed to absolute phase encoding.
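As an illustration of this idea, the following sketch (my own, not the book's notation; the mapping of symbols to phase increments and all names are assumptions) shows why a constant carrier-phase offset at the receiver drops out of successive phase differences:

```python
import math

def diff_encode(increments, m=4):
    """Transmit each symbol as a phase *change* relative to the previous
    transmitted phase (symbol k in {0, ..., m-1} maps to 2*pi*k/m)."""
    phases, current = [], 0.0
    for k in increments:
        current = (current + 2 * math.pi * k / m) % (2 * math.pi)
        phases.append(current)
    return phases

def diff_decode(phases, m=4, carrier_offset=0.0):
    """Recover the symbols from successive phase differences; an unknown
    constant carrier phase adds to every observation and cancels."""
    prev = carrier_offset % (2 * math.pi)
    symbols = []
    for p in phases:
        obs = (p + carrier_offset) % (2 * math.pi)
        symbols.append(round((obs - prev) % (2 * math.pi) * m / (2 * math.pi)) % m)
        prev = obs
    return symbols
```

Decoding with any value of `carrier_offset` returns the same symbol sequence, which is exactly the point of differential encoding.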
FIGURE 5.2.9 Probability density function p_Θr(Θ_r) for γ_s = 1, 2, 4, and 10.
the M possible signal vectors, and a decision is made in favor of the signal with the largest projection. The correlation detector described above is equivalent to a phase detector that computes the phase of the received signal
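The claimed equivalence can be checked with a small sketch (function and variable names are mine): for equal-energy PSK signal vectors, the largest correlation is attained by the signal whose phase is closest to the phase of the received vector.

```python
import math

def correlation_detector(r, m):
    """Pick the PSK signal s_k = (cos 2*pi*k/m, sin 2*pi*k/m) with the
    largest projection (dot product) onto the received vector r."""
    def corr(k):
        theta = 2 * math.pi * k / m
        return r[0] * math.cos(theta) + r[1] * math.sin(theta)
    return max(range(m), key=corr)

def phase_detector(r, m):
    """Compute the phase of r and quantize it to the nearest signal phase."""
    theta_r = math.atan2(r[1], r[0]) % (2 * math.pi)
    return round(theta_r * m / (2 * math.pi)) % m
```

Since the correlation equals |r| cos(θ_r − θ_k), both rules select the same index for any received vector (up to ties exactly between two phases).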
FIGURE 5.2.8 Probability of a symbol error for M-ary PSK, M = 2, 4, 8, 16, versus SNR per bit γ_b (dB).
where the noise variable n has zero mean and variance σ_n² = ½N₀. On the basis that all amplitude levels are equally likely a priori, the average probability of a symbol error is simply the probability that the noise variable n exceeds in magnitude one-half of the distance between levels.
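Carrying this argument through gives the familiar M-ary PAM expression; a sketch (assuming the standard form P_M = 2(M−1)/M · Q(√(6γ_s/(M²−1))), with γ_s the average SNR per symbol; the function names are mine):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pam_symbol_error(m, snr_symbol):
    """Average M-PAM symbol error probability.  The factor 2(M-1)/M counts
    one noise tail for each of the two outer levels and two tails for each
    of the M-2 inner levels; the Q argument is the assumed standard form."""
    return 2 * (m - 1) / m * Q(math.sqrt(6 * snr_symbol / (m * m - 1)))
```

For M = 2 this reduces to Q(√(2γ_b)), the binary antipodal result, which is a useful consistency check.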
The value of the minimum Euclidean distance will depend on the selection of the code words, i.e., the design of the code.

5.2.6 Probability of Error for M-ary PAM

Recall that
266
lJl(;ITAL
CO~IMl'NICAllOJ'S
is similar to that for orthogonal signals (see Fig. 5.2.5). However, in this case, the probability of error for M = 4 is greater than that for M = 2. This is due to the fact that we have plotted the symbol error probability
large SNR may be obtained by first approximating p_Θr(Θ_r). For γ_s = E_s/N₀ ≫ 1 and |Θ_r| ≤ ½π, p_Θr(Θ_r) is well approximated as

p_Θr(Θ_r) ≈ √(γ_s/π) cos Θ_r exp(−γ_s sin²Θ_r)    (5.2.60)

By substituting
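The approximation above can be sanity-checked numerically (my own sketch, not from the book): substituting u = sin Θ_r shows that its integral over (−½π, ½π) equals erf(√γ_s), which approaches 1 for large γ_s, as a probability density should.

```python
import math

def p_theta_approx(theta, gamma_s):
    """sqrt(gamma_s/pi) * cos(theta) * exp(-gamma_s*sin(theta)**2)."""
    return (math.sqrt(gamma_s / math.pi) * math.cos(theta)
            * math.exp(-gamma_s * math.sin(theta) ** 2))

def integral_over_half_circle(gamma_s, n=100_000):
    """Midpoint-rule integral of the approximate pdf over (-pi/2, pi/2)."""
    a = -math.pi / 2
    h = math.pi / n
    return h * sum(p_theta_approx(a + (i + 0.5) * h, gamma_s) for i in range(n))
```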
where θ_k is the phase angle of the transmitted signal at the kth signaling interval, φ is the carrier phase, and n_k = n_kc + jn_ks is the noise vector. Similarly, the received
This is a factor of two better than PSK. In the case of QAM, we have two orthogonal carriers, with each carrier having a PAM signal. Thus, we double the rate relative
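The two-quadrature-carrier view of QAM can be sketched as follows (a minimal illustration; the helper name is mine): a square M-QAM constellation is simply the Cartesian product of two √M-ary PAM level sets, one per carrier.

```python
import math

def square_qam_constellation(m):
    """Build a square M-QAM constellation as the Cartesian product of two
    independent sqrt(M)-ary PAM level sets, one on each quadrature carrier."""
    k = int(math.isqrt(m))
    assert k * k == m, "square QAM requires M to be a perfect square"
    levels = [2 * i - (k - 1) for i in range(k)]   # ..., -3, -1, 1, 3, ...
    return [(a, b) for a in levels for b in levels]
```

Each carrier independently carries half of the log2 M bits, which is the rate-doubling noted above.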
TABLE 5.2.1
SNR ADVANTAGE OF M-ARY QAM OVER M-ARY PSK
M: 8, 16, 32, 64
when M > 4 we find that ℛ_M > 1, so that M-ary QAM yields better performance than M-ary PSK. Table 5.2.1 illustrates the SNR advantage of QAM over PSK for several values of M.
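A sketch of the computation (assuming ℛ_M is the ratio of the squared Q-function arguments, 3/(M−1) for square QAM versus 2 sin²(π/M) for PSK, the standard large-SNR expressions; the function name is mine):

```python
import math

def qam_over_psk_advantage_db(m):
    """10*log10 of R_M = (3/(M-1)) / (2*sin(pi/M)**2): the factor by which
    M-ary PSK needs more SNR than square M-QAM to reach the same
    (approximate) symbol error probability."""
    r_m = (3 / (m - 1)) / (2 * math.sin(math.pi / m) ** 2)
    return 10 * math.log10(r_m)
```

This evaluates to roughly 1.65, 4.20, 7.02, and 9.95 dB for M = 8, 16, 32, 64, i.e., the advantage grows quickly with M because the PSK points crowd together on the circle.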
the signal points at a given amplitude level are phase-rotated by ¼π relative to the signal points at adjacent amplitude levels. This 16-QAM constellation is a generalization of the optimum 8-QAM constellation. However, the circular 16-QAM constellation
four-phase DPSK and coherent PSK signaling obtained from evaluating the exact formulas derived in this section. Since binary DPSK is only slightly inferior to binary PSK at large SNR, and DPSK does not require an elaborate method
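The comparison rests on the standard binary results P_b = ½e^(−γ_b) for DPSK and P_b = Q(√(2γ_b)) for coherent PSK; a minimal sketch (names are mine):

```python
import math

def psk_bit_error(gamma_b):
    """Coherent binary PSK: P_b = Q(sqrt(2*gamma_b)) = 0.5*erfc(sqrt(gamma_b))."""
    return 0.5 * math.erfc(math.sqrt(gamma_b))

def dpsk_bit_error(gamma_b):
    """Binary DPSK: P_b = 0.5 * exp(-gamma_b)."""
    return 0.5 * math.exp(-gamma_b)
```

At any given SNR the DPSK curve lies slightly above the PSK curve, but the gap in required SNR shrinks to well under 1 dB at low error probabilities.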
FIGURE 5.2.15 Four eight-point QAM signal constellations.
sets is signal set over other four-phase there is no advan
also normalize r_k r*_{k−1} by dividing through by √E_s. Then the new set of decision metrics becomes

x = √E_s + Re(n_k + n*_{k−1})
y = Im(n_k + n*_{k−1})    (5.2.67)

The variables x and y are uncorrelated Gaussian random variables with identical variances σ² = N₀. The phase
magnitude of the cross-correlators

C(r, s_m) = r · s_m = Σ_{k=1}^{N} r_k s_mk,    m = 1, 2, …, ½M    (5.2.32)

while the sign of this largest term is used to decide whether s_m(t) or −s_m(t) was transmitted. According to this decision rule, the probability of
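The decision rule just described, largest cross-correlation magnitude plus a sign test, can be sketched as follows (names are mine; signals are given as sample vectors):

```python
def biorthogonal_detect(r, signals):
    """signals holds the M/2 waveforms s_1..s_{M/2} as sample vectors; the
    full biorthogonal set is {+s_m, -s_m}.  Pick the cross-correlation with
    the largest magnitude, then use its sign to resolve s_m versus -s_m."""
    def corr(s):
        return sum(rk * sk for rk, sk in zip(r, s))
    best = max(range(len(signals)), key=lambda m: abs(corr(signals[m])))
    sign = 1 if corr(signals[best]) >= 0 else -1
    return best, sign
```

Only M/2 correlators are needed, half as many as for a fully orthogonal set of the same size.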
This bound can be simplified further by upper-bounding Q(√(E_s/N₀)). We have

Q(√(E_s/N₀)) < e^(−E_s/2N₀)    (5.2.26)

Thus,

P_M < M e^(−E_s/2N₀) = 2^k e^(−kE_b/2N₀)
P_M < e^(−k(E_b/N₀ − 2 ln 2)/2)    (5.2.27)

As k → ∞, or equivalently, as M → ∞, the probability of error can be made arbitrarily small provided that E_b/N₀ > 2 ln 2.
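A quick numerical check of this behavior (my own sketch): the exponent changes sign at E_b/N₀ = 2 ln 2 ≈ 1.39, so the bound decays with k above that SNR and blows up below it.

```python
import math

def union_bound(k, gamma_b):
    """exp(-k*(gamma_b - 2*ln 2)/2): the simplified union bound on the error
    probability of M = 2**k orthogonal signals at SNR per bit gamma_b."""
    return math.exp(-k * (gamma_b - 2 * math.log(2)) / 2)
```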
FIGURE 5.1.12 One stage of trellis diagram for delay modulation.
M = 4 signals and is characterized by the four-state trellis shown in Fig. 5.1.12. We observe that each state has
sequence of received signals over successive signal intervals. Below, we describe two different types of detection algorithms. In this section, we describe a maximum-likelihood sequence detection algorithm that searches for the
where n is a zero-mean Gaussian random variable with variance σ_n² = ½N₀. Consequently, the conditional pdfs p(r | s_m) for the two signals are

p(r | s₁) = (1/(√(2π) σ_n)) exp[−(r − √E_b)² / 2σ_n²]

p(r | s₂) = (1/(√(2π) σ_n)) exp[−(r + √E_b)² / 2σ_n²]
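Because the two densities differ only in the sign of √E_b, comparing them reduces to a sign test on r; a minimal sketch (function names are mine):

```python
import math

def likelihood(r, mean, n0):
    """Gaussian pdf with variance N0/2, evaluated at r."""
    var = n0 / 2
    return math.exp(-(r - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def ml_decide(r, eb, n0):
    """ML decision for binary antipodal signals +/-sqrt(Eb): compare the two
    conditional likelihoods; equivalent to deciding s1 whenever r >= 0."""
    s = math.sqrt(eb)
    return 1 if likelihood(r, s, n0) >= likelihood(r, -s, n0) else 2
```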
independent of (r₁, r₂, …, r_N), we find that an equivalent ML sequence detector selects the sequence s^(m) that minimizes the Euclidean distance
R₂ consists of the set of points that are less than r_th. If r > r_th, the decision is made that s₁ was transmitted, and if r < r_th, the decision is made that s₂ was transmitted.
by using the outputs r₁ and r₂ from the demodulator. The Viterbi algorithm compares these two metrics and discards the path having the larger (greater-distance) metric. The elimination of one of the two paths may be
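One such add-compare-select step can be sketched as follows (a generic illustration, not the book's program; all names are mine):

```python
def viterbi_step(path_metrics, branch_metrics, predecessors):
    """One add-compare-select step: extend every surviving path by one
    branch, and for each new state keep only the smaller-metric path,
    discarding the larger one."""
    new_metrics, survivors = {}, {}
    for state, incoming in predecessors.items():
        best_metric, best_pred = min(
            (path_metrics[p] + branch_metrics[(p, state)], p) for p in incoming)
        new_metrics[state] = best_metric
        survivors[state] = best_pred
    return new_metrics, survivors
```

Repeating this step per signaling interval keeps exactly one surviving path per state, which is what makes the trellis search tractable.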
FIGURE 5.1.9 An alternative realization of the optimum AWGN receiver.
The maximum of ln p(r | s_m) over s_m is equivalent to finding the signal s_m that minimizes the Euclidean distance

D(r, s_m) = Σ_{k=1}^{N} (r_k − s_mk)²    (5.1.41)

We call D(r, s_m), m = 1, 2, …, M, the distance metrics. Hence, for the AWGN channel
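The minimum-distance rule of (5.1.41) can be sketched directly (names are mine; signals and the received vector are plain sample lists):

```python
def min_distance_detect(r, signal_set):
    """Return the index m minimizing D(r, s_m) = sum_k (r_k - s_mk)**2,
    the Euclidean distance metric of (5.1.41)."""
    def d(s):
        return sum((rk - sk) ** 2 for rk, sk in zip(r, s))
    return min(range(len(signal_set)), key=lambda m: d(signal_set[m]))
```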
both surviving sequences will have a common first branch. If b₁ ≠ b₁′, we may select the bit (b₁ or b₁′) corresponding to the smaller of the two metrics. Then the first bit is dropped from the two surviving sequences. At t = 7T,