Chapter 9: Gaussian Channel

1. A channel with two independent looks at Y. Let Y1 and Y2 be conditionally independent and conditionally identically distributed given X.

(a) Show that I(X; Y1, Y2) = 2I(X; Y1) − I(Y1; Y2).

(b) Conclude that the capacity of the channel X → (Y1, Y2) is less than twice the capacity of the channel X → Y1.

Solution: Channel with two independent looks at Y.

(a)

I(X; Y1, Y2) = H(Y1, Y2) − H(Y1, Y2 | X)   (9.1)
             = H(Y1) + H(Y2) − I(Y1; Y2) − H(Y1 | X) − H(Y2 | X)   (9.2)
               (since Y1 and Y2 are conditionally independent given X)   (9.3)
             = I(X; Y1) + I(X; Y2) − I(Y1; Y2)   (9.4)
             = 2I(X; Y1) − I(Y1; Y2)   (since Y1 and Y2 are conditionally identically distributed).   (9.5)

(b) The capacity of the single-look channel X → Y1 is

C1 = max_{p(x)} I(X; Y1).   (9.6)
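The identity in part (a) can be sanity-checked numerically on a small discrete channel (a sketch assuming numpy; the pmf values below are arbitrary):

```python
import numpy as np

def mutual_info(pxy):
    """Mutual information between the two axes of a 2-D joint pmf, in bits."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# Arbitrary example: binary X, ternary looks, conditionally i.i.d. given X.
px = np.array([0.3, 0.7])
pyx = np.array([[0.5, 0.3, 0.2],      # p(y | x = 0)
                [0.1, 0.2, 0.7]])     # p(y | x = 1)

# Joint p(x, y1, y2) = p(x) p(y1|x) p(y2|x): conditionally independent,
# conditionally identically distributed looks.
pxy1y2 = px[:, None, None] * pyx[:, :, None] * pyx[:, None, :]

I_x_y1y2 = mutual_info(pxy1y2.reshape(2, -1))   # I(X; Y1, Y2)
I_x_y1   = mutual_info(pxy1y2.sum(axis=2))      # I(X; Y1)
I_y1_y2  = mutual_info(pxy1y2.sum(axis=0))      # I(Y1; Y2)

assert abs(I_x_y1y2 - (2 * I_x_y1 - I_y1_y2)) < 1e-12
```

The assertion holds for any choice of p(x) and p(y|x), since the derivation above used only conditional independence and identical conditional distributions.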
The capacity of the channel X → (Y1, Y2) is

C2 = max_{p(x)} I(X; Y1, Y2)   (9.7)
   = max_{p(x)} [ 2I(X; Y1) − I(Y1; Y2) ]   (9.8)
   ≤ max_{p(x)} 2I(X; Y1)   (9.9)
   = 2C1.   (9.10)

Hence, two independent looks cannot be more than twice as good as one look.

2. The two-look Gaussian channel. Consider the ordinary Gaussian channel with two correlated looks at X, i.e., Y = (Y1, Y2), where

Y1 = X + Z1   (9.11)
Y2 = X + Z2   (9.12)

with a power constraint P on X, and (Z1, Z2) ~ N2(0, K), where

K = [  N   Nρ
      Nρ    N ].   (9.13)

Find the capacity C for (a) ρ = 1, (b) ρ = 0, (c) ρ = −1.

Solution: The two-look Gaussian channel. It is clear that the input distribution that maximizes the capacity is X ~ N(0, P).
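Before working through the derivation, the capacity can be sanity-checked numerically (a sketch assuming numpy; the values of P, N and ρ are arbitrary). It computes C = h(Y1, Y2) − h(Z1, Z2) directly from covariance determinants and compares with the closed form (9.23) derived below:

```python
import numpy as np

P, N, rho = 2.0, 1.0, 0.5                # arbitrary power, noise variance, correlation

K_Z = N * np.array([[1, rho], [rho, 1]])  # noise covariance (9.13)
K_Y = P + K_Z                             # covariance of (Y1, Y2) when X ~ N(0, P)

# C = h(Y1,Y2) - h(Z1,Z2) = (1/2) log( |K_Y| / |K_Z| ), in bits
C_det = 0.5 * np.log2(np.linalg.det(K_Y) / np.linalg.det(K_Z))

# Closed form: C = (1/2) log( 1 + 2P / (N(1 + rho)) )
C_closed = 0.5 * np.log2(1 + 2 * P / (N * (1 + rho)))

assert abs(C_det - C_closed) < 1e-12
```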
Evaluating the mutual information for this distribution,

C2 = max I(X; Y1, Y2)   (9.14)
   = h(Y1, Y2) − h(Z1, Z2 | X)   (9.15)
   = h(Y1, Y2) − h(Z1, Z2).   (9.16)

Now since

(Z1, Z2) ~ N2( 0, N [ 1  ρ
                      ρ  1 ] ),   (9.18)

we have

h(Z1, Z2) = (1/2) log (2πe)² |K_Z| = (1/2) log (2πe)² N² (1 − ρ²).   (9.19)
Since Y1 = X + Z1 and Y2 = X + Z2, we have

(Y1, Y2) ~ N2( 0, [ P + N    P + ρN
                    P + ρN   P + N  ] ),   (9.20)

and

h(Y1, Y2) = (1/2) log (2πe)² |K_Y| = (1/2) log (2πe)² ((P + N)² − (P + ρN)²).   (9.21)

Hence the capacity is

C2 = h(Y1, Y2) − h(Z1, Z2)   (9.22)
   = (1/2) log ( 1 + 2P / (N(1 + ρ)) ).   (9.23)

(a) ρ = 1. In this case, C = (1/2) log(1 + P/N), which is the capacity of a single-look channel. This is not surprising, since in this case Y1 = Y2.

(b) ρ = 0. In this case,

C = (1/2) log ( 1 + 2P/N ),

which corresponds to using twice the power in a single look. The capacity is the same as the capacity of the channel X → (Y1 + Y2).

(c) ρ = −1. In this case, C = ∞, which is not surprising, since if we add Y1 and Y2 we can recover X exactly.

Note that in all cases the capacity of the above channel is the same as the capacity of the channel X → Y1 + Y2.

Output power constraint. Consider an additive white Gaussian noise channel with
an expected output power constraint P. Thus Y = X + Z, where Z ~ N(0, N), Z is independent of X, and EY² ≤ P. Find the channel capacity.

Solution: Output power constraint.

C = max_{f(X): E(X+Z)² ≤ P} I(X; Y)   (9.25)
  = max_{f(X): E(X+Z)² ≤ P} ( h(Y) − h(Z) ).   (9.26)

Given a constraint on the output power of Y, the maximum differential entropy h(Y) is achieved by a normal distribution, and we can achieve this by choosing X ~ N(0, P − N). In this case,

C = (1/2) log 2πeP − (1/2) log 2πeN = (1/2) log (P/N).   (9.29)
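The achievability step, choosing X ~ N(0, P − N) so that the output power is exactly P, can be checked by simulation (a sketch assuming numpy; the values of P, N and the RNG seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 4.0, 1.0                                  # arbitrary output-power and noise values
n = 1_000_000

x = rng.normal(0.0, np.sqrt(P - N), n)           # X ~ N(0, P - N)
z = rng.normal(0.0, np.sqrt(N), n)               # Z ~ N(0, N)
y = x + z                                        # Y = X + Z

print(np.mean(y**2))                             # sample EY^2, approximately P
C = 0.5 * np.log2(P / N)                         # capacity in bits: 0.5*log2(4) = 1
```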
5. Fading channel.
Consider an additive noise fading channel

Y = XV + Z,

where Z is additive noise, V is a random variable representing fading, and Z and V are independent of each other and of X. Argue that knowledge of the fading factor V improves capacity by showing

I(X; Y | V) ≥ I(X; Y).

Solution: Fading channel. Expanding I(X; Y, V) in two ways, we get

I(X; Y, V) = I(X; V) + I(X; Y | V)   (9.42)
           = I(X; Y) + I(X; V | Y),   (9.43)

i.e.,

I(X; V) + I(X; Y | V) = I(X; Y) + I(X; V | Y)
I(X; Y | V) = I(X; Y) + I(X; V | Y)   (9.44)
            ≥ I(X; Y),   (9.45)

where (9.44) follows from the independence of X and V, and (9.45) follows from
I(X; V | Y) ≥ 0.

6. Parallel channels and waterfilling. Consider a pair of parallel Gaussian channels

Y_i = X_i + Z_i,  i = 1, 2,  where Z1 ~ N(0, σ1²) and Z2 ~ N(0, σ2²) are independent,
and there is a power constraint E(X1² + X2²) ≤ 2P. Assume that σ1² > σ2². At what power does the channel stop behaving like a single channel with noise variance σ2², and begin behaving like a pair of channels?

Solution: Parallel channels and waterfilling. By the result of Section 9.4, it follows that we will put all the signal power into the channel with less noise until the total power of noise + signal in that channel equals the noise power in the other channel.
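This "fill the quietest channel first" behavior can be illustrated with a small water-filling sketch (assumes numpy; the bisection on the water level ν and the example σ² values are illustrative additions, not part of the original solution):

```python
import numpy as np

def waterfill(sigma2, total_power):
    """Water-filling over parallel Gaussian channels: P_i = (nu - sigma_i^2)^+."""
    lo, hi = 0.0, max(sigma2) + total_power
    for _ in range(100):                          # bisect on the water level nu
        nu = (lo + hi) / 2
        if np.maximum(nu - sigma2, 0).sum() > total_power:
            hi = nu
        else:
            lo = nu
    return np.maximum(nu - sigma2, 0)

sigma2 = np.array([3.0, 1.0])                     # sigma1^2 > sigma2^2 (arbitrary values)

# Below the threshold 2P = sigma1^2 - sigma2^2 = 2, only channel 2 gets power:
print(waterfill(sigma2, 1.5))                     # approximately [0.0, 1.5]
# Above it, the additional power is split evenly:
print(waterfill(sigma2, 4.0))                     # approximately [1.0, 3.0]
```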
After that, we will split any additional power evenly between the two channels. Thus the combined channel begins to behave like a pair of parallel channels when the signal power is equal to the difference of the two noise powers, i.e., when 2P = σ1² − σ2².

7. Multipath Gaussian channel. Consider a Gaussian noise channel of power constraint P, where the signal takes two different paths and the received noisy signals are added together at the antenna.
[Figure: the input X travels along two paths; path i adds noise Z_i to give Y_i, and the antenna receives Y = Y1 + Y2.]
(a) Find the capacity of this channel if Z1 and Z2 are jointly normal with covariance matrix

K_Z = [  σ²   ρσ²
        ρσ²    σ² ].

(b) What is the capacity for ρ = 0, ρ = 1, ρ = −1?

Solution: Multipath Gaussian channel. The channel reduces to the channel Y = 2X + Z1 + Z2. The power constraint on the input 2X is 4P. Z1 and Z2 are zero mean, and therefore

Var(Z1 + Z2) = E[(Z1 + Z2)²]
             = E[Z1² + Z2² + 2 Z1 Z2]
             = 2σ² + 2ρσ².

Thus the noise distribution is N(0, 2σ²(1 + ρ)).

(a) Plugging the noise and power values into the formula for the one-dimensional (P, N) channel capacity, C = (1/2) log(1 + P/N), we get
C = (1/2) log ( 1 + 4P / (2σ²(1 + ρ)) ) = (1/2) log ( 1 + 2P / (σ²(1 + ρ)) ).

(b) i. When ρ = 0, C = (1/2) log(1 + 2P/σ²).
ii. When ρ = 1, C = (1/2) log(1 + P/σ²).
iii. When ρ = −1, C = ∞.

8. Parallel Gaussian channels. Consider the following parallel Gaussian channel:

[Figure: parallel channels Y1 = X1 + Z1 and Y2 = X2 + Z2, with Z1 ~ N(0, N1) and Z2 ~ N(0, N2).]
where Z1 ~ N(0, N1) and Z2 ~ N(0, N2) are independent Gaussian random variables and Y_i = X_i + Z_i. We wish to allocate power to the two parallel channels. Let β1 and β2 be fixed. Consider a total cost constraint β1 P1 + β2 P2 ≤ β, where P_i is the power allocated to the ith channel and β_i is the cost per unit power in that channel. Thus P1 ≥ 0 and P2 ≥ 0 can be chosen subject to the cost constraint β.

(a) For what value of β does the channel stop acting like a single channel and start acting like a pair of channels?

(b) Evaluate the capacity and find P1, P2 that achieve capacity for β1 = 1, β2 = 2, N1 = 3, N2 = 2, and β = 10.

Solution: Parallel channels. When we have cost constraints on the power, we need to optimize the total capacity of
the two parallel channels:

C = (1/2) log ( 1 + P1/N1 ) + (1/2) log ( 1 + P2/N2 )   (9.48)

subject to the constraint

β1 P1 + β2 P2 ≤ β.   (9.49)

Using the methods of Section 9.4, we set

J(P1, P2) = Σ_i (1/2) log ( 1 + P_i/N_i ) + λ Σ_i β_i P_i   (9.50)

and, differentiating with respect to P_i, we have

1/(2(P_i + N_i)) + λ β_i = 0,   (9.51)

or

P_i = ( ν/β_i − N_i )⁺,   (9.52)

or

β_i P_i = ( ν − β_i N_i )⁺,   (9.53)

where the water level ν is chosen to meet the cost constraint with equality.

(a) It follows that we will put all the signal power into the channel with less weighted
noise (β_i N_i) until the total weighted power of noise + signal in that channel equals the weighted noise power in the other channel. After that, we will split any additional cost between the two channels so that the water level ν in (9.53) is common to both. Thus the combined channel begins to behave like a pair of parallel channels when

β = β2 N2 − β1 N1.

(b) In this case, β1 N1 = 3 < β2 N2 = 4, so we put all power into channel 1 until the cost reaches β = β2 N2 − β1 N1 = 1, at which point ν = 4 and P1 = 1. By (9.53), each channel's cost budget β_i P_i then grows as ν − β_i N_i, so the remaining cost of 9 is split equally between the two channels: (ν − 3) + (ν − 4) = 10 gives ν = 8.5. Thus P1 = 5.5 and P2 = (8.5 − 4)/2 = 2.25 (cost check: 1 · 5.5 + 2 · 2.25 = 10). Note that the split is even in cost, not in power; the additional powers divide in the ratio 1/β1 : 1/β2 = 2 : 1. The capacity in this case is

C = (1/2) log(1 + 5.5/3) + (1/2) log(1 + 2.25/2) ≈ 1.30 bits.   (9.54)

15. Discrete input, continuous output channel. Let Pr{X = 1} = p, Pr{X = 0} =
1 − p, and let Y = X + Z, where Z is uniform over the interval [0, a], a > 1, and Z is independent of X.

(a) Calculate I(X; Y) = H(X) − H(X | Y).
(b) Now calculate I(X; Y) the other way, by I(X; Y) = h(Y) − h(Y | X).
(c) Calculate the capacity of this channel by maximizing over p.

Solution: Discrete input, continuous output channel.

(a) Since

f(y | X = 0) = { 1/a,  0 ≤ y ≤ a
              { 0,    otherwise   (9.92)

and

f(y | X = 1) = { 1/a,  1 ≤ y ≤ 1 + a
              { 0,    otherwise,   (9.93)

we have

f(y) = { (1 − p)/a,  0 ≤ y < 1
       { 1/a,        1 ≤ y ≤ a
       { p/a,        a < y ≤ 1 + a.   (9.94)

H(X | Y = y) is nonzero only for 1 ≤ y ≤ a; by Bayes' rule, conditioned on such a y, the probability that X = 1 is

P(X = 1 | Y = y) = P(X = 1) f(y | X = 1) / f(y) = (p/a) / (1/a) = p,   (9.95)

and hence

H(X | Y) = P(1 ≤ Y ≤ a) H(p) = ((a − 1)/a) H(p).

Therefore I(X; Y) =
H(X) − H(X | Y) = H(p) − ((a − 1)/a) H(p) = (1/a) H(p).

(b) f(y | X = 0) ~ U[0, a], and hence h(Y | X = 0) = log a; similarly for X = 1, so that h(Y | X) = log a. The differential entropy h(Y) can be calculated from (9.94) as

h(Y) = − ∫₀¹ ((1 − p)/a) log((1 − p)/a) dy − ∫₁ᵃ (1/a) log(1/a) dy − ∫ₐ^{1+a} (p/a) log(p/a) dy   (9.96)
     = (1/a) ( −p log p − (1 − p) log(1 − p) ) + log a   (9.97)
     = (1/a) H(p) + log a,   (9.98)

and again I(X; Y) = h(Y) − h(Y | X) = (1/a) H(p).
= EHlleoea (9.98) and again I(X; Y) 2 My) — h(Y[X) = imp). (d) The mutual information is maximized for p = 0.5, and the corresponding capacity 
of the channel is i . ...
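A Monte Carlo sketch (assumes numpy; the seed and the choices a = 2, p = 1/2 are arbitrary) that estimates I(X; Y) = E[log( f(Y|X)/f(Y) )] by sampling and compares it with the closed form H(p)/a:

```python
import numpy as np

rng = np.random.default_rng(1)
p, a = 0.5, 2.0                       # capacity-achieving p; arbitrary a > 1
n = 1_000_000

x = rng.random(n) < p                 # X in {0, 1} with Pr{X = 1} = p
y = x + a * rng.random(n)             # Y = X + Z,  Z ~ Uniform[0, a]

# f(y|x) = 1/a on its support; f(y) is the piecewise-constant mixture (9.94)
f_y = np.where(y < 1, (1 - p) / a, np.where(y <= a, 1 / a, p / a))
I_mc = np.mean(np.log2((1 / a) / f_y))           # I(X;Y) = E log f(Y|X)/f(Y), bits

H_p = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
print(I_mc, H_p / a)                  # both close to 0.5 bits for p = 1/2, a = 2
```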
Information Theory, Summer '10, M.R. Soleymani