CHAPTER 2: PROBABILITY AND STOCHASTIC PROCESSES 81

(e) Determine the conditions under which the power density spectrum of X(n) is equal
to the power density spectrum of X(t).
2-11 Consider a band-limited zero-mean stationary stochastic process X(t) with power density

CHAPTER 3: SOURCE CODING 87

Let us consider some special cases: First, if p0 = p1 = 0, the channel is called
noiseless and

    I(0; 0) = log2 2 = 1 bit

Hence, the output specifies the input with certainty. On the other hand, if
p0 = p1 = 1/2, the channel is useless.
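The two extreme cases above can be spot-checked with a short sketch. For a binary symmetric channel with crossover probability p and equiprobable inputs, the average mutual information is I(X; Y) = 1 − H_b(p) bits; the function names below are mine, not the text's.

```python
import math

def binary_entropy(p):
    """Binary entropy H_b(p) in bits, with H_b(0) = H_b(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(p):
    """I(X; Y) in bits for a binary symmetric channel with crossover
    probability p and equiprobable inputs: I(X; Y) = 1 - H_b(p)."""
    return 1.0 - binary_entropy(p)

print(bsc_mutual_information(0.0))   # noiseless channel: 1 bit
print(bsc_mutual_information(0.5))   # useless channel: 0 bits
```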

88 DIGITAL COMMUNICATIONS

I(X; Y) = 0 when X and Y are statistically independent. An important
characteristic of the average mutual information is that I(X; Y) ≥ 0 (see
Problem 3-4).
Similarly, we define the average self-information, denoted by H(X),
Similarly, we define the average self-information, denoted by H(X),

CHAPTER 3: SOURCE CODING 93

Example 3-2-5
Suppose that X is a discrete random variable with two equally probable
outcomes x1 = A and x2 = −A. Let the conditional pdfs p(y | x_i), i = 1, 2, be
Gaussian with mean x_i and variance σ². That is,

    p(y | A) = (1/√(2πσ²)) e^{−(y − A)²/2σ²}
    p(y | −A) = (1/√(2πσ²)) e^{−(y + A)²/2σ²}

But the right-hand side of this equation is equal to zero by application of (1) with a = m/2W.
Since this is true for any m, it follows that E[(X(t) − X̂(t)) X̂(t)] = 0. Also,

    E[(X(t) − X̂(t)) X(t)] = E[X(t)X(t)] − Σ_{n=−∞}^{∞} E[X(n/2W) X(t)] · sin 2πW(t − n/2W) / (2πW(t − n/2W))

Again, by applying (1) with a = t and f = t, we observe that

CHAPTER 3: SOURCE CODING 83

were known exactly, there would be no need to transmit it. In this section, we
consider both discrete and analog information sources, and we postulate
mathematical models for each type of source.
The simplest type of discrete s

Problem 2.3:
Following the same procedure as in Example 2-1-1, we prove:

    p_Y(y) = (1/|a|) p_X((y − b)/a)

Problem 2.4:
Relationship (2-1-44) gives:

    p_Y(y) = 1 / (3a [(y − b)/a]^{2/3}) · p_X([(y − b)/a]^{1/3})

X is a Gaussian r.v. with zero mean and unit variance: p_X(x) = (1/√(2π)) e^{−x²/2}
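The transformation in Problem 2.4 appears to be Y = aX³ + b (inferred from the 1/3 and 2/3 exponents). A minimal numerical sanity check, under that assumption: since the map is monotone for a > 0, P(Y ≤ y0) = Φ(((y0 − b)/a)^{1/3}), which can be compared against simulated samples. The values of a and b are arbitrary illustrative choices.

```python
import math
import numpy as np

# Assumed transformation: Y = a*X**3 + b with X ~ N(0, 1); a, b arbitrary.
a, b = 2.0, 1.0
rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
y = a * x**3 + b

def cdf_y(y0):
    """Theoretical CDF of Y: the map is monotone for a > 0, so
    P(Y <= y0) = Phi(((y0 - b)/a)**(1/3))."""
    t = (y0 - b) / a
    x0 = math.copysign(abs(t) ** (1.0 / 3.0), t)  # real cube root
    return 0.5 * (1.0 + math.erf(x0 / math.sqrt(2.0)))

for y0 in (-3.0, 1.0, 5.0):
    print(round(float(np.mean(y <= y0)), 3), round(cdf_y(y0), 3))
```

The empirical and theoretical CDF values agree to within Monte Carlo error, which supports the pdf formula obtained from (2-1-44).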

(b)

    ψ(jv) = E[e^{jvX}] = ∫_{−∞}^{∞} (a/π)/(x² + a²) e^{jvx} dx = ∫_{−∞}^{∞} (a/π) e^{jvx} / ((x + ja)(x − ja)) dx

This integral can be evaluated by using the residue theorem in complex variable theory.
Then, for v ≥ 0:

    ψ(jv) = 2πj [ (a/π) e^{jvx} / (x + ja) ]_{x = ja} = e^{−av}

For v < 0:

    ψ(jv) = −2πj [ (a/π) e^{jvx} / (x − ja) ]_{x = −ja} = e^{av}

Hence, ψ(jv) = e^{−a|v|}.
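The residue-theorem result ψ(jv) = e^{−a|v|} for the pdf p(x) = (a/π)/(x² + a²) can also be checked by direct numerical integration; a = 1 and the truncation/grid below are arbitrary choices for illustration.

```python
import numpy as np

# Numerical check of psi(jv) = exp(-a|v|) for p(x) = (a/pi)/(x^2 + a^2).
a = 1.0
x = np.linspace(-2000.0, 2000.0, 4_000_001)
dx = x[1] - x[0]
p = (a / np.pi) / (x**2 + a**2)

def char_fn(v):
    """Riemann-sum approximation of E[exp(jvX)] on a truncated grid."""
    return np.sum(p * np.exp(1j * v * x)) * dx

for v in (-2.0, 0.5, 1.0):
    print(v, abs(char_fn(v) - np.exp(-a * abs(v))))  # small truncation error
```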

CHAPTER 3: SOURCE CODING 99

To establish the lower bound in (3-3-9), we note that for code words that have
length n_k, 1 ≤ k ≤ L, the difference H(X) − R̄ may be expressed as

    H(X) − R̄ = Σ_{k=1}^{L} P(a_k) log2 (2^{−n_k} / P(a_k))    (3-3-10)

Use of the inequality ln x ≤ x − 1 in (3-3-10) yields

    H(X) − R̄ ≤ (log2 e) Σ_{k=1}^{L} P(a_k) (2^{−n_k} / P(a_k) − 1)

3
SOURCE CODING
Communication systems are designed to transmit the information generated by a
source to some destination. Information sources may take a variety of
different forms. For example, in radio broadcasting, the source is generally an
audio source

(b) When x(t), y(t) are uncorrelated:

    φ_xy(τ) = E[x(t + τ) y(t)] = E[x(t + τ)] E[y(t)] = m_x m_y

Similarly:

    φ_yx(τ) = m_x m_y

Hence:

    φ_zz(τ) = φ_xx(τ) + φ_yy(τ) + 2 m_x m_y

(c) When x(t), y(t) are uncorrelated and have zero means:

    φ_zz(τ) = φ_xx(τ) + φ_yy(τ)
Problem

The autocorrelation of the output sequence is

    φ_yy(k) = Σ_i Σ_j h(i) h(j) φ_xx(k − j + i) = σ² Σ_{i=−∞}^{∞} h(i) h(k + i)

where the last equality stems from the autocorrelation function of X(n):

    φ_xx(k − j + i) = σ² δ(k − j + i) = { σ²,  j = k + i
                                        { 0,   otherwise

Hence, φ_yy(0) = 6σ², φ_yy(1)
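The last sum is just the deterministic autocorrelation of the impulse response, scaled by σ². As a hypothetical example (the text does not give h here), take h = (1, 2, 1), which is consistent with φ_yy(0) = 6σ² since 1² + 2² + 1² = 6:

```python
import numpy as np

# phi_yy(k) = sigma^2 * sum_i h(i) h(k+i): the autocorrelation of the
# impulse response scaled by sigma^2. h = (1, 2, 1) is an assumed example.
h = np.array([1.0, 2.0, 1.0])
sigma2 = 1.0

phi_yy = sigma2 * np.convolve(h, h[::-1])  # lags k = -2 .. 2
print(phi_yy)  # [1. 4. 6. 4. 1.]
```

With this h, φ_yy(0) = 6σ² and φ_yy(±1) = 4σ², matching the value quoted above for lag zero.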

84 DIGITAL COMMUNICATIONS

discrete-time source. Then, the source output is characterized statistically by
the joint pdf p(x1, x2, ..., xm) for all m ≥ 1, where X_n = X(n/2W), 1 ≤ n ≤ m,
are the random variables corresponding to the samples of X(t).

(a)

    Φ_xx(f) = σ²,    Φ_yy(f) = Φ_xx(f) |H(f)|² = σ² / ((2πRC)² f² + 1)

(b)

    φ_yy(τ) = F⁻¹{Φ_yy(f)} = (σ²/(RC)²) ∫_{−∞}^{∞} e^{j2πfτ} / ((1/RC)² + (2πf)²) df

Let a = 1/RC and v = 2πf. Then:

    φ_yy(τ) = (σ²/(2RC)) ∫_{−∞}^{∞} (a/π)/(a² + v²) e^{jvτ} dv
            = (σ²/(2RC)) e^{−a|τ|} = (σ²/(2RC)) e^{−|τ|/RC}

where the last integral is the one evaluated above by the residue theorem.
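The closed form φ_yy(τ) = (σ²/2RC) e^{−|τ|/RC} can be spot-checked by numerically inverse-transforming Φ_yy(f); σ² = 1 and RC = 0.5, along with the grid, are arbitrary illustrative choices.

```python
import numpy as np

# Numerical inverse Fourier transform of Phi_yy(f) = sigma^2/(1 + (2*pi*f*RC)^2),
# compared with the closed form (sigma^2/(2*RC)) * exp(-|tau|/RC).
sigma2, RC = 1.0, 0.5
f = np.linspace(-1000.0, 1000.0, 2_000_001)
df = f[1] - f[0]
Phi_yy = sigma2 / (1.0 + (2.0 * np.pi * f * RC) ** 2)

def phi_num(tau):
    """Riemann-sum inverse Fourier transform of Phi_yy at lag tau."""
    return float((np.sum(Phi_yy * np.exp(2j * np.pi * f * tau)) * df).real)

def phi_closed(tau):
    return sigma2 / (2.0 * RC) * np.exp(-abs(tau) / RC)

for tau in (0.0, 0.25, 1.0):
    print(tau, abs(phi_num(tau) - phi_closed(tau)))  # small truncation error
```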

CHAPTER 3: SOURCE CODING 95

inefficiency has been reduced by approximately a factor of 1/J relative to the
symbol-by-symbol encoding described above. By making J sufficiently large, the
efficiency of the encoding procedure, measured by the ratio JH(X)/N̄,

102 DIGITAL COMMUNICATIONS

The variable-length encoding (Huffman) algorithm described in the above
examples generates a prefix code having an R̄ that satisfies (3-3-9). However,
instead of encoding on a symbol-by-symbol basis, a more efficient procedure is t
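A minimal sketch of the Huffman procedure (my own implementation, not the text's): repeatedly merge the two least probable nodes, adding one bit to the length of every symbol involved in a merge. For the probabilities 1/2, 1/4, 1/8, 1/8 used in the examples, this yields lengths 1, 2, 3, 3 and R̄ = H(X) = 1.75 bits.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return binary Huffman codeword lengths (sketch).
    Each merge of two subtrees adds one bit to every symbol in them."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1          # one more bit for every merged symbol
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]
n = huffman_lengths(probs)
R = sum(p * nk for p, nk in zip(probs, n))
H = -sum(p * log2(p) for p in probs)
print(n, R, H)  # lengths [1, 2, 3, 3]; R = H = 1.75 bits here
```

Because these probabilities are negative powers of two, the Huffman code is exactly efficient; in general (3-3-9) only guarantees H(X) ≤ R̄ < H(X) + 1.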

Now:

    dψ(jv)/dv = jnσ² / (1 − j2vσ²)^{n/2+1},    E(Y) = (1/j) dψ(jv)/dv |_{v=0} = nσ²

    d²ψ(jv)/dv² = −2nσ⁴(n/2 + 1) / (1 − j2vσ²)^{n/2+2},    E(Y²) = (1/j²) d²ψ(jv)/dv² |_{v=0} = n(n + 2)σ⁴

The variance is σ_Y² = E(Y²) − [E(Y)]² = 2nσ⁴.
For the non-central chi-square with n degrees of freedom
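The moment calculations above can be spot-checked by differentiating ψ(jv) = (1 − j2vσ²)^{−n/2} numerically at v = 0; n = 4 and σ² = 1.5 are arbitrary illustrative values.

```python
# Finite-difference check of E(Y) = n*sigma^2, E(Y^2) = n(n+2)*sigma^4,
# and var = 2n*sigma^4 for the central chi-square characteristic function
# psi(jv) = (1 - 2j*v*sigma^2)^(-n/2). n and sigma^2 are arbitrary here.
n, s2 = 4, 1.5

def psi(v):
    return (1.0 - 2j * v * s2) ** (-n / 2.0)

h = 1e-5
d1 = (psi(h) - psi(-h)) / (2 * h)               # psi'(0)
d2 = (psi(h) - 2 * psi(0.0) + psi(-h)) / h**2   # psi''(0)

EY = (d1 / 1j).real      # analytic value: n*sigma^2      = 6.0
EY2 = (d2 / 1j**2).real  # analytic value: n(n+2)*sigma^4 = 54.0
var = EY2 - EY**2        # analytic value: 2n*sigma^4     = 18.0
print(EY, EY2, var)
```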

TABLE 3-3-1  VARIABLE-LENGTH CODES

Letter   P(a_k)   Code I   Code II   Code III
a1       1/2      1        0         0
a2       1/4      00       10        01
a3       1/8      01       110       011
a4       1/8      10       111       111
words. An example of such encoding is the Morse code, which dates back to the
nineteenth century. In
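The average codeword length R̄ = Σ_k P(a_k) n_k for the codes in Table 3-3-1 (as reconstructed above) can be tabulated with a short sketch. Code II, a prefix code, achieves R̄ = H(X) = 1.75 bits; Code I appears shorter (R̄ = 1.5) only because it is not uniquely decodable — its Kraft sum 2⁻¹ + 3·2⁻² = 1.25 exceeds 1.

```python
from math import log2

# Average codeword length R = sum_k P(a_k) * n_k for the three codes of
# Table 3-3-1, compared with the source entropy H(X).
P = [0.5, 0.25, 0.125, 0.125]
codes = {
    "I":   ["1", "00", "01", "10"],
    "II":  ["0", "10", "110", "111"],
    "III": ["0", "01", "011", "111"],
}
H = -sum(p * log2(p) for p in P)
for name, words in codes.items():
    R = sum(p * len(w) for p, w in zip(P, words))
    kraft = sum(2.0 ** -len(w) for w in words)
    print(name, R, kraft)
print("H(X) =", H)  # 1.75 bits
```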

CHAPTER 3: SOURCE CODING 105

nonnegative and nonincreasing with k, both limits must exist. Their limiting forms can
be established by using (3-3-14) and (3-3-15) to express H_{k+j}(X) as

    H_{k+j}(X) = (1/(k + j + 1)) H(X_1 X_2 ... X_{k−1})
               + (1/(k + j + 1)) [H(X_k | X_1 ... X_{k−1}) + H(X_{k+1} | X_1 ... X_k) + ...]

98 DIGITAL COMMUNICATIONS

FIGURE 3-3-3  Construction of a binary tree code embedded in a full tree.

there is always a node of order k > j available to be assigned to the next code
word. Thus, we have constructed a code tree that is embedded in the full tr

CHAPTER 3: SOURCE CODING 85

P(x_i | y_j) = P(x_i) and, hence, I(x_i; y_j) = 0. On the other hand, when the
occurrence of the event Y = y_j uniquely determines the occurrence of the event
X = x_i, the conditional probability in the numerator of (3-2-1) is unity

92 DIGITAL COMMUNICATIONS

continuous random variables, the concept of self-information does not. The
problem is that a continuous random variable requires an infinite number of
binary digits to represent it exactly. Hence, its self-information is infinite

94 DIGITAL COMMUNICATIONS

per output letter from the source can be made arbitrarily close to the entropy
of the source.

3-3-1 Coding for Discrete Memoryless Sources

Suppose that a DMS produces an output letter or symbol every τ_s seconds.
Each symbol is s

Otherwise, the sum of the shifted copies of Φ_a(f) in (3) will overlap and aliasing will occur.

Problem 2.22:
(a)

    φ_a(τ) = ∫_{−∞}^{∞} Φ_a(f) e^{j2πfτ} df = ∫_{−W}^{W} e^{j2πfτ} df = sin 2πWτ / (πτ)

By applying the result in Problem 2.21, we have

    φ_d(k) = φ_a(kT) = sin 2πWkT / (πkT)

(b) If T = 1/W,
,

    φ_d(k) = φ_a(kT)|_{T=1/W} = 2W · sin 2πk / (2πk) = { 2W,  k = 0
                                                        { 0,   otherwise

Therefore, sampling X(t) at a rate 1/T = W samples/sec produces a white sequence.
Problem 2.23:
Let us denote y(t) = f_k(t) f_j(t). Then:

    ∫_{−∞}^{∞} f_k(t) f_j(t) dt

Problem 2.21:
We will denote the discrete-time process by the subscript d and the continuous-time
(analog) process by the subscript a. Also, f will denote the analog frequency and
f_d the discrete-time frequency.
(a)

    φ_d(k) = E[X(n)X(n + k)] = E[X(nT)X(nT + kT