h(X) = -\int_{-\infty}^{\infty} \frac{1}{2}\lambda e^{-\lambda|x|} \ln\left(\frac{1}{2}\lambda e^{-\lambda|x|}\right) dx   (71)

= -\int_{-\infty}^{\infty} \frac{1}{2}\lambda e^{-\lambda|x|} \left(\ln\left(\frac{1}{2}\lambda\right) - \lambda|x|\right) dx   (72)

= \int_{-\infty}^{\infty} \frac{1}{2}\lambda e^{-\lambda|x|} \left(-\ln\left(\frac{1}{2}\lambda\right) + \lambda|x|\right) dx   (73)

= -\int_{-\infty}^{\infty} \frac{1}{2}\lambda e^{-\lambda|x|} \ln\left(\frac{1}{2}\lambda\right) dx + \int_{-\infty}^{\infty} \frac{1}{2}\lambda e^{-\lambda|x|}\,\lambda|x|\, dx   (74)

= \frac{1}{2}\ln\left(\frac{1}{2}\lambda\right) \int_{-\infty}^{\infty} -\lambda e^{-\lambda|x|}\, dx + \frac{1}{2}\int_{-\infty}^{\infty} \lambda^2 |x|\, e^{-\lambda|x|}\, dx   (75)
Solving the integrals:
\int_{-\infty}^{\infty} -\lambda e^{-\lambda|x|}\, dx = \int_{-\infty}^{0} -\lambda e^{-\lambda(-x)}\, dx + \int_{0}^{\infty} -\lambda e^{-\lambda(x)}\, dx   (76)

= \left. -e^{\lambda x} \right|_{-\infty}^{0} + \left. e^{-\lambda x} \right|_{0}^{\infty}   (77)

= -1 - 1 = -2   (78)
\int_{-\infty}^{\infty} \lambda^2 |x|\, e^{-\lambda|x|}\, dx = \int_{-\infty}^{0} \lambda^2 (-x) e^{\lambda x}\, dx + \int_{0}^{\infty} \lambda^2 (x) e^{-\lambda x}\, dx   (79)

= \left. (1 - \lambda x)\, e^{\lambda x} \right|_{-\infty}^{0} - \left. (1 + \lambda x)\, e^{-\lambda x} \right|_{0}^{\infty}   (80)

= 1 + 1 = 2   (81)
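As a sanity check, both integrals can be evaluated numerically. The sketch below uses scipy.integrate.quad with an arbitrary assumed value of λ, splitting each integral at the kink of |x| at 0:

```python
from scipy.integrate import quad
import numpy as np

lam = 1.7  # arbitrary positive rate parameter, assumed for this check

# First integral: int_{-inf}^{inf} -lam * e^{-lam|x|} dx, split at x = 0
i1 = (quad(lambda x: -lam * np.exp(lam * x), -np.inf, 0)[0]
      + quad(lambda x: -lam * np.exp(-lam * x), 0, np.inf)[0])

# Second integral: int_{-inf}^{inf} lam^2 * |x| * e^{-lam|x|} dx
i2 = (quad(lambda x: lam**2 * (-x) * np.exp(lam * x), -np.inf, 0)[0]
      + quad(lambda x: lam**2 * x * np.exp(-lam * x), 0, np.inf)[0])

print(round(i1, 6), round(i2, 6))  # -2.0 2.0
```

Both results are independent of λ, matching (78) and (81).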
Then,

h(X) = \frac{1}{2}\ln\left(\frac{1}{2}\lambda\right) \int_{-\infty}^{\infty} -\lambda e^{-\lambda|x|}\, dx + \frac{1}{2}\int_{-\infty}^{\infty} \lambda^2 |x|\, e^{-\lambda|x|}\, dx   (82)

= \frac{1}{2}\ln\left(\frac{1}{2}\lambda\right)(-2) + \frac{1}{2}(2)   (83)

= -\ln\left(\frac{1}{2}\lambda\right) + 1   (84)
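The closed form (84) can be cross-checked numerically, e.g. against a direct integration of -f ln f and against scipy.stats.laplace, which parameterizes this density by the scale b = 1/λ. The value λ = 1.7 is an arbitrary assumption for the check:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import laplace

lam = 1.7  # arbitrary positive rate parameter, assumed for this check
f = lambda x: 0.5 * lam * np.exp(-lam * np.abs(x))

def integrand(x):
    """-f ln f, guarded against underflow of f far in the tails."""
    p = f(x)
    return -p * np.log(p) if p > 0 else 0.0

# Direct numerical differential entropy (nats), splitting at the |x| kink
h_num = (quad(integrand, -np.inf, 0.0)[0]
         + quad(integrand, 0.0, np.inf)[0])

h_closed = 1 - np.log(lam / 2)              # result (84), in nats
h_scipy = laplace(scale=1 / lam).entropy()  # scipy's Laplace uses scale b = 1/lam

print(np.allclose([h_num, h_scipy], h_closed))  # True
```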
As in part (a), this result is in nats; we can convert it into bits using the same procedure as before:
h(X) = 1 - \ln\left(\frac{1}{2}\lambda\right) \text{ nats}   (85)

= \log_2(e)\left(1 - \ln\left(\frac{1}{2}\lambda\right)\right) \text{ bits}   (86)

= \log_2(e) - \log_2(e)\ln\left(\frac{1}{2}\lambda\right)   (87)

= \log_2(e) - \log_2(e)\,\frac{\log_2\left(\frac{1}{2}\lambda\right)}{\log_2(e)}   (88)

= \log_2(e) - \log_2\left(\frac{1}{2}\lambda\right)   (89)
(c) The sum of X_1 and X_2, where X_1 and X_2 are independent normal random variables with means \mu_i and variances \sigma_i^2, i = 1, 2.
We have variables X_1 \sim N(\mu_1, \sigma_1^2) and X_2 \sim N(\mu_2, \sigma_2^2). Then we obtain the Gaussian random variable:

X_1 + X_2 \sim N(\mu_1 + \mu_2,\; \sigma_1^2 + \sigma_2^2)
The differential entropy of this Gaussian random variable is given by:

h(X_1 + X_2) = \frac{1}{2}\log_2\left(2\pi e\,(\sigma_1^2 + \sigma_2^2)\right)

since the mean does not affect the differential entropy of a Gaussian random variable.
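A sketch of a numerical check (the parameters \mu_1, \sigma_1, \mu_2, \sigma_2 are arbitrary assumed values): the sample variance of X_1 + X_2 should match \sigma_1^2 + \sigma_2^2, and scipy's Gaussian entropy (returned in nats) should match the closed form once converted to bits:

```python
import numpy as np
from scipy.stats import norm

mu1, s1 = 1.0, 0.8   # arbitrary assumed parameters
mu2, s2 = -2.0, 1.5

rng = np.random.default_rng(0)
x = rng.normal(mu1, s1, 200_000) + rng.normal(mu2, s2, 200_000)

var_sum = np.var(x)  # should be close to s1**2 + s2**2

# Closed form in bits vs scipy's entropy (nats) converted to bits;
# the means mu1, mu2 do not appear in either expression
h_closed = 0.5 * np.log2(2 * np.pi * np.e * (s1**2 + s2**2))
h_scipy = float(norm(scale=np.sqrt(s1**2 + s2**2)).entropy()) / np.log(2)

print(abs(var_sum - (s1**2 + s2**2)) < 0.05, np.isclose(h_closed, h_scipy))
```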
5. Problem 8.3. Uniformly distributed noise. Let the input random variable X to a channel be uniformly distributed over the interval -1/2 \le x \le 1/2. Let the output of the channel be Y = X + Z, where the noise random variable Z is uniformly distributed over the interval -a/2 \le z \le +a/2.
(a) Find I(X; Y) as a function of a.

I(X; Y) = H(Y) - H(Y \mid X)   (90)

= H(Y) - H(Z)   (91)
First, since Z is independent of X, we have that H(Y \mid X) = H(Z) = \ln(a).
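The value H(Z) = \ln(a) can be confirmed with scipy (a = 0.4 is an arbitrary assumed value with a < 1):

```python
import math
from scipy.stats import uniform

a = 0.4  # arbitrary assumed noise width, a < 1

# scipy's uniform(loc, scale) is uniform on [loc, loc + scale];
# its differential entropy is ln(scale) nats
h_z = float(uniform(loc=-a / 2, scale=a).entropy())

print(math.isclose(h_z, math.log(a)))  # True
```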
Then, we need to compute H(Y). Since Y = X + Z with X and Z independent, the distribution of the sum of the two random variables is given by the convolution of their pdfs. For a < 1 we have:
f_Y(y) = \begin{cases} \frac{1}{a}\left(y + \frac{a+1}{2}\right), & -\frac{a+1}{2} \le y \le \frac{a-1}{2} \\ 1, & \frac{a-1}{2} \le y \le \frac{1-a}{2} \\ \frac{1}{a}\left(-y + \frac{a+1}{2}\right), & \frac{1-a}{2} \le y \le \frac{a+1}{2} \end{cases}
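A sketch verifying this trapezoidal density numerically, with an arbitrary assumed value a = 0.4 (a < 1): it should integrate to 1, and the flat (uniform) part should carry probability mass 1 - a:

```python
import numpy as np
from scipy.integrate import quad

a = 0.4  # arbitrary assumed noise width, a < 1

def f_Y(y):
    """Trapezoidal pdf of Y = X + Z for a < 1, as given above."""
    if -(a + 1) / 2 <= y <= (a - 1) / 2:
        return (1 / a) * (y + (a + 1) / 2)
    if (a - 1) / 2 <= y <= (1 - a) / 2:
        return 1.0
    if (1 - a) / 2 <= y <= (a + 1) / 2:
        return (1 / a) * (-y + (a + 1) / 2)
    return 0.0

breaks = [(a - 1) / 2, (1 - a) / 2]
total = quad(f_Y, -(a + 1) / 2, (a + 1) / 2, points=breaks)[0]
flat_mass = quad(f_Y, (a - 1) / 2, (1 - a) / 2)[0]  # mass of the flat part

print(round(total, 6), round(flat_mass, 6))  # 1.0 0.6
```

The flat part has width 1 - a and height 1, so its mass is 1 - a; this is the probability \lambda used below.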
So, to compute H(Y), we can observe the pdf and see that it can be divided into two parts: one corresponds to a uniform distribution with a given probability (i.e. \lambda), and two small triangles that together form a triangular distribution with probability (1 - \lambda). From this observation we can later use standard tables of differential entropies to compute the differential entropy of each continuous distribution.
So, we can see Y as two disjoint random variables Y_1 and Y_2, which happen to be Y depending on a certain probability. Y_1 can be assigned to the uniform part, and Y_2 to the triangular part of the total distribution.

Y = \begin{cases} Y_1, & \text{with probability } \lambda \\ Y_2, & \text{with probability } 1 - \lambda \end{cases}
The next step is to define a Bernoulli(\lambda) random variable \theta = f(Y), which comes from the behavior of the random variable Y as follows:

\theta = f(Y) = \begin{cases} 1, & \text{if } Y = Y_1 \\ 2, & \text{if } Y = Y_2 \end{cases}
Then, we can compute H(Y). This decomposition of H(Y) will be used from now on while solving this problem.
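A numerical sketch of this decomposition, under assumed values a = 0.4 (so \lambda = 1 - a): since Y_1 and Y_2 have disjoint supports, H(Y) computed directly as -\int f_Y \ln f_Y should equal H(\theta) + \lambda H(Y_1) + (1 - \lambda) H(Y_2), using the table entropies \ln(1 - a) for the uniform part and 1/2 + \ln(a) for the triangular part (base width 2a):

```python
import numpy as np
from scipy.integrate import quad

a = 0.4          # arbitrary assumed noise width, a < 1
lam_p = 1 - a    # probability of the flat (uniform) part

def f_Y(y):
    """Trapezoidal pdf of Y = X + Z for a < 1, as given above."""
    if -(a + 1) / 2 <= y <= (a - 1) / 2:
        return (1 / a) * (y + (a + 1) / 2)
    if (a - 1) / 2 <= y <= (1 - a) / 2:
        return 1.0
    if (1 - a) / 2 <= y <= (a + 1) / 2:
        return (1 / a) * (-y + (a + 1) / 2)
    return 0.0

def integrand(y):
    p = f_Y(y)
    return -p * np.log(p) if p > 0 else 0.0

# Direct computation of H(Y) in nats
h_direct = quad(integrand, -(a + 1) / 2, (a + 1) / 2,
                points=[(a - 1) / 2, (1 - a) / 2])[0]

# Mixture decomposition: H(Y) = H(theta) + lam*H(Y1) + (1 - lam)*H(Y2),
# valid here because the two parts have disjoint supports
h_theta = -(lam_p * np.log(lam_p) + a * np.log(a))  # binary entropy, nats
h_y1 = np.log(1 - a)      # uniform over an interval of width 1 - a
h_y2 = 0.5 + np.log(a)    # triangular with base width 2a (table value)
h_mix = h_theta + lam_p * h_y1 + a * h_y2

print(np.isclose(h_direct, h_mix))  # True
```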