ECE 255AN Fall 2011 Homework Set 4 Solutions

Solutions to Chapter 7 problems

4. Channel capacity. $Y = X + Z \pmod{11}$, where
$$Z = \begin{cases} 1 & \text{with probability } 1/3 \\ 2 & \text{with probability } 1/3 \\ 3 & \text{with probability } 1/3 \end{cases}$$
In this case, $H(Y|X) = H(Z|X) = H(Z) = \log 3$, independent of the distribution of $X$, and hence the capacity of the channel is
\begin{align}
C &= \max_{p(x)} I(X;Y) \tag{1} \\
  &= \max_{p(x)} H(Y) - H(Y|X) \tag{2} \\
  &= \max_{p(x)} H(Y) - \log 3 \tag{3} \\
  &= \log 11 - \log 3, \tag{4}
\end{align}
which is attained when $Y$ has a uniform distribution, which occurs (by symmetry) when $X$ has a uniform distribution.

(a) The capacity of the channel is $\log \frac{11}{3}$ bits/transmission.

(b) The capacity is achieved by a uniform distribution on the inputs: $p(X = i) = \frac{1}{11}$ for $i = 0, 1, \ldots, 10$.

5. Using two channels at once. Suppose we are given two channels, $(\mathcal{X}_1, p(y_1|x_1), \mathcal{Y}_1)$ and $(\mathcal{X}_2, p(y_2|x_2), \mathcal{Y}_2)$, which we can use at the same time. We can define the product channel as the channel $(\mathcal{X}_1 \times \mathcal{X}_2,\ p(y_1, y_2|x_1, x_2) = p(y_1|x_1)\,p(y_2|x_2),\ \mathcal{Y}_1 \times \mathcal{Y}_2)$. To find the capacity of the product channel, we must find the distribution $p(x_1, x_2)$ on the input alphabet $\mathcal{X}_1 \times \mathcal{X}_2$ that maximizes $I(X_1, X_2; Y_1, Y_2)$. Since the joint distribution is
$$p(x_1, x_2, y_1, y_2) = p(x_1, x_2)\,p(y_1|x_1)\,p(y_2|x_2),$$
$Y_1 \to X_1 \to X_2 \to Y_2$ forms a Markov chain, and therefore
\begin{align}
I(X_1, X_2; Y_1, Y_2) &= H(Y_1, Y_2) - H(Y_1, Y_2|X_1, X_2) \tag{5} \\
  &= H(Y_1, Y_2) - H(Y_1|X_1, X_2) - H(Y_2|X_1, X_2) \tag{6} \\
  &= H(Y_1, Y_2) - H(Y_1|X_1) - H(Y_2|X_2) \tag{7} \\
  &\le H(Y_1) + H(Y_2) - H(Y_1|X_1) - H(Y_2|X_2) \tag{8} \\
  &= I(X_1; Y_1) + I(X_2; Y_2), \tag{9}
\end{align}
where (6) and (7) follow from Markovity, and we have equality in (8) if $Y_1$ and $Y_2$ are independent, which occurs when $X_1$ and $X_2$ are independent.
Hence
\begin{align}
C &= \max_{p(x_1, x_2)} I(X_1, X_2; Y_1, Y_2) \tag{10} \\
  &\le \max_{p(x_1, x_2)} I(X_1; Y_1) + \max_{p(x_1, x_2)} I(X_2; Y_2) \tag{11} \\
  &= \max_{p(x_1)} I(X_1; Y_1) + \max_{p(x_2)} I(X_2; Y_2) \tag{12} \\
  &= C_1 + C_2, \tag{13}
\end{align}
with equality iff $p(x_1, x_2) = p^*(x_1)\,p^*(x_2)$, where $p^*(x_1)$ and $p^*(x_2)$ are the distributions that achieve $C_1$ and $C_2$ respectively.

8. The Z channel. First we express $I(X;Y)$, the mutual information between the input and output of the Z channel, as a function of $x = \Pr(X = 1)$. The Z channel transmits the input $0$ noiselessly, while the input $1$ is flipped with probability $1/2$; that is, $p(0|0) = 1$ and $p(0|1) = p(1|1) = 1/2$. Hence
\begin{align*}
H(Y|X) &= \Pr(X = 0) \cdot 0 + \Pr(X = 1) \cdot 1 = x \\
H(Y)   &= h(\Pr(Y = 1)) = h(x/2) \\
I(X;Y) &= H(Y) - H(Y|X) = h(x/2) - x
\end{align*}
Since $I(X;Y) = 0$ when $x = 0$ and $x = 1$, the maximum mutual information is obtained for some value of $x$ such that $0 < x < 1$. Using elementary calculus, we determine that
$$\frac{d}{dx} I(X;Y) = \frac{1}{2} \log_2 \frac{1 - x/2}{x/2} - 1,$$
which is equal to zero for $x = 2/5$. (It is reasonable that $\Pr(X = 1) < 1/2$ because $X = 1$ is the noisy input to the channel.) So the capacity of the Z channel in bits is
$$C = h(1/5) - 2/5 = 0.722 - 0.4 = 0.322.$$
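The three capacities derived above can be checked numerically. The sketch below (not part of the original solutions) uses the Blahut-Arimoto algorithm, the standard iterative method for computing the capacity of a discrete memoryless channel; the function name `blahut_arimoto` and the matrix encodings of the channels are our own choices.

```python
import math

def blahut_arimoto(W, iters=2000):
    """Capacity (bits) and optimal input distribution of a DMC with
    transition matrix W, where W[x][y] = p(y|x)."""
    nx, ny = len(W), len(W[0])
    r = [1.0 / nx] * nx                                   # start from uniform input
    for _ in range(iters):
        # q[y][x]: posterior p(x|y) induced by the current input distribution r
        q = [[r[x] * W[x][y] for x in range(nx)] for y in range(ny)]
        for y in range(ny):
            s = sum(q[y]) or 1.0
            q[y] = [v / s for v in q[y]]
        # update r(x) proportional to exp( sum_y W[x][y] log q(x|y) )
        logr = [sum(W[x][y] * math.log(q[y][x])
                    for y in range(ny) if W[x][y] > 0) for x in range(nx)]
        m = max(logr)
        r = [math.exp(v - m) for v in logr]
        s = sum(r)
        r = [v / s for v in r]
    # capacity = I(X;Y) at the final input distribution, in bits
    cap = 0.0
    for x in range(nx):
        for y in range(ny):
            if r[x] > 0 and W[x][y] > 0:
                py = sum(r[xx] * W[xx][y] for xx in range(nx))
                cap += r[x] * W[x][y] * math.log2(W[x][y] / py)
    return cap, r

# Problem 4: Y = X + Z (mod 11), Z uniform on {1, 2, 3}
W11 = [[1/3 if (y - x) % 11 in (1, 2, 3) else 0.0 for y in range(11)]
       for x in range(11)]
C11, r11 = blahut_arimoto(W11)
print(round(C11, 4))                 # ≈ 1.8745 = log2(11/3)

# Problem 8: Z channel, p(0|0) = 1, p(0|1) = p(1|1) = 1/2
WZ = [[1.0, 0.0], [0.5, 0.5]]
CZ, rZ = blahut_arimoto(WZ)
print(round(CZ, 3), round(rZ[1], 2))  # ≈ 0.322, with Pr(X=1) ≈ 0.4 = 2/5

# Problem 5: product of two Z channels; capacity should equal C1 + C2
Wp = [[WZ[x1][y1] * WZ[x2][y2] for y1 in range(2) for y2 in range(2)]
      for x1 in range(2) for x2 in range(2)]
Cp, _ = blahut_arimoto(Wp)
print(round(Cp, 3))                  # ≈ 0.644 = 2 × 0.322
```

For the symmetric mod-11 channel the iteration leaves the uniform input fixed, matching the symmetry argument in Problem 4; for the Z channel it converges to $\Pr(X=1) = 2/5$, matching the calculus solution in Problem 8.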