EE 478 Multiple User Information Theory                         Handout #11
Homework Set #2 Solutions                                       October 14, 2008

1. Solution:

(a) We need to show that for any $B_1$, $B_2$ and $\alpha \in [0,1]$,
$$\alpha C(B_1) + (1-\alpha) C(B_2) \le C(\alpha B_1 + (1-\alpha) B_2).$$
Let $X_1 \sim p_1(x)$ achieve $C(B_1)$ and $X_2 \sim p_2(x)$ achieve $C(B_2)$. Define $Q \sim \mathrm{Bern}(\alpha)$ and
$$X = \begin{cases} X_1, & Q = 1, \\ X_2, & Q = 0. \end{cases} \qquad (1)$$
By this definition, $p(x) = \alpha p_1(x) + (1-\alpha) p_2(x)$, and
$$\mathrm{E}\, b(X) = \sum_{x \in \mathcal{X}} p(x) b(x) = \alpha \sum_{x \in \mathcal{X}} p_1(x) b(x) + (1-\alpha) \sum_{x \in \mathcal{X}} p_2(x) b(x) = \alpha B_1 + (1-\alpha) B_2.$$
On the other hand, by the chain rule,
$$I(X, Q; Y) = I(Q; Y) + I(X; Y \mid Q) = I(X; Y) + I(Q; Y \mid X).$$
But $Q \to X \to Y$ form a Markov chain, and therefore $I(Q; Y \mid X) = 0$ and $I(X; Y) \ge I(X; Y \mid Q)$. Using this observation, together with the definition of $C(\alpha B_1 + (1-\alpha) B_2)$ as the maximum mutual information between $X$ and $Y$ over all distributions of $X$ that satisfy $\mathrm{E}\, b(X) \le \alpha B_1 + (1-\alpha) B_2$, yields
$$C(\alpha B_1 + (1-\alpha) B_2) \ge I(X; Y) \ge I(X; Y \mid Q) = \alpha I(X_1; Y) + (1-\alpha) I(X_2; Y) = \alpha C(B_1) + (1-\alpha) C(B_2).$$

(b) We need to show that $\sum_{i=1}^{n} b(x_i(w)) \le nB$. Note that the codewords are chosen from the $\epsilon$-typical set. Therefore,
$$\sum_{i=1}^{n} b(x_i(w)) = n \sum_{x \in \mathcal{X}} \pi(x \mid x^n(w))\, b(x) \le n \sum_{x \in \mathcal{X}} \bigl(p(x) + \epsilon p(x)\bigr) b(x) = n\bigl(\mathrm{E}\, b(X) + \epsilon\, \mathrm{E}\, b(X)\bigr) \le n(B - \delta(\epsilon))(1 + \epsilon) = nB\left(1 - \frac{\epsilon}{1-\epsilon}\right)(1 + \epsilon) = nB\left(1 - \frac{2\epsilon^2}{1-\epsilon}\right) < nB.$$
The probability of error analysis is exactly the same as for the channel with no cost constraint.
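As a quick sanity check of the concavity in part (a), the following short numerical sketch (not part of the original handout) brute-forces the capacity-cost function of a hypothetical example, a BSC with crossover probability 0.1 and input cost $b(0)=0$, $b(1)=1$, and verifies $\alpha C(B_1) + (1-\alpha)C(B_2) \le C(\alpha B_1 + (1-\alpha)B_2)$ on random triples. The channel, cost function, and all names below are our own choices for illustration.

```python
# A minimal numerical sanity check, not part of the original handout: brute-force
# C(B) for a hypothetical example (BSC with crossover 0.1, input cost b(0)=0, b(1)=1,
# so E b(X) = P(X=1)) and verify the concavity inequality of part (a).
import numpy as np

def binary_entropy(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

def mutual_information(p1, eps=0.1):
    # I(X;Y) for a BSC(eps) with P(X=1) = p1, namely H(Y) - H(Y|X).
    py1 = p1 * (1 - eps) + (1 - p1) * eps
    return binary_entropy(py1) - binary_entropy(eps)

def capacity_cost(B, grid=2001):
    # C(B) = max I(X;Y) over input distributions with E b(X) = P(X=1) <= B.
    p = np.linspace(0.0, min(B, 1.0), grid)
    return mutual_information(p).max()

rng = np.random.default_rng(0)
for _ in range(1000):
    B1, B2, a = rng.uniform(0, 1, 3)
    lhs = a * capacity_cost(B1) + (1 - a) * capacity_cost(B2)
    rhs = capacity_cost(a * B1 + (1 - a) * B2)
    assert lhs <= rhs + 1e-6  # tolerance covers the grid discretization error
print("concavity of C(B) verified on 1000 random (B1, B2, alpha) triples")
```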
2. Solution: Consider
$$I(X; X + Z^*) = h(X + Z^*) - h(X + Z^* \mid X) = h(X + Z^*) - h(Z^*) \le h(X^* + Z^*) - h(Z^*) = I(X^*; X^* + Z^*),$$
where the inequality follows from the fact that, for a given variance, entropy is maximized by the normal distribution.

To prove the other inequality, we use the entropy power inequality
$$2^{2h(X+Z)} \ge 2^{2h(X)} + 2^{2h(Z)}.$$
Then
$$\begin{aligned}
I(X^*; X^* + Z) &= h(X^* + Z) - h(X^* + Z \mid X^*) = h(X^* + Z) - h(Z) \\
&= \tfrac{1}{2} \log 2^{2h(X^* + Z)} - h(Z) \\
&\ge \tfrac{1}{2} \log\bigl(2^{2h(X^*)} + 2^{2h(Z)}\bigr) - h(Z) \\
&= \tfrac{1}{2} \log\bigl(2\pi e P + 2^{2h(Z)}\bigr) - \tfrac{1}{2} \log 2^{2h(Z)} \\
&= \tfrac{1}{2} \log\left(1 + \frac{2\pi e P}{2^{2h(Z)}}\right) \\
&\ge \tfrac{1}{2} \log\left(1 + \frac{2\pi e P}{2^{2h(Z^*)}}\right) = \tfrac{1}{2} \log\left(1 + \frac{P}{N}\right) = I(X^*; X^* + Z^*),
\end{aligned}$$
where the last inequality holds because $h(Z) \le h(Z^*)$, again by the maximum entropy property of the normal distribution.

Alternatively, we can use the result of Question 3 directly to get
$$I(X^*; X^* + Z) = h(X^*) - h(X^* \mid X^* + Z) \ge \frac{1}{2}\log(2\pi e P) - \frac{1}{2}\log\frac{2\pi e\, PN}{P+N} = \frac{1}{2}\log\left(1 + \frac{P}{N}\right) = I(X^*; X^* + Z^*).$$

Combining the two inequalities, we have
$$I(X; X + Z^*) \le I(X^*; X^* + Z^*) \le I(X^*; X^* + Z).$$
Hence, using these inequalities, it follows directly that
$$\min_{Z} \max_{X} I(X; X + Z) \le \max_{X} I(X; X + Z^*) = I(X^*; X^* + Z^*) = \min_{Z} I(X^*; X^* + Z) \le \max_{X} \min_{Z} I(X; X + Z). \qquad (2)$$
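The EPI step above lower-bounds $I(X^*; X^*+Z)$ by $\tfrac{1}{2}\log\bigl(1 + 2\pi e P / 2^{2h(Z)}\bigr)$. The sketch below (ours, with arbitrary values $P=1$, $N=0.5$ and an assumed uniform noise $Z$ of variance $N$) evaluates this bound in bits and checks that it indeed exceeds $\tfrac{1}{2}\log(1+P/N)$.

```python
# A small numerical sketch, ours and not in the handout: evaluate the EPI lower bound
# on I(X*; X* + Z) for one assumed non-Gaussian noise, Z uniform with variance N,
# and check that it is at least 1/2 log(1 + P/N). All quantities are in bits.
import numpy as np

P, N = 1.0, 0.5                                        # arbitrary signal power and noise variance
h_Z_uniform = np.log2(2 * np.sqrt(3 * N))              # h(Z) for Z ~ Unif[-sqrt(3N), sqrt(3N)]
h_Z_gauss = 0.5 * np.log2(2 * np.pi * np.e * N)        # h(Z*) for Gaussian noise of variance N

epi_bound = 0.5 * np.log2(1 + 2 * np.pi * np.e * P / 2 ** (2 * h_Z_uniform))
gaussian_mi = 0.5 * np.log2(1 + P / N)                 # I(X*; X* + Z*) = 1/2 log(1 + P/N)

assert h_Z_uniform <= h_Z_gauss                        # Gaussian maximizes entropy for fixed variance
assert epi_bound >= gaussian_mi                        # so the EPI bound exceeds 1/2 log(1 + P/N)
print(f"EPI bound = {epi_bound:.4f} bits >= {gaussian_mi:.4f} bits = I(X*; X*+Z*)")
```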
We will now prove that the inequality in the other direction is a general result that holds for all functions of two variables. For any function $f(a, b)$ of two variables, for every $b$ and any $a_0$,
$$f(a_0, b) \ge \min_{a} f(a, b).$$
Taking the maximum over $b$ on both sides gives $\max_b f(a_0, b) \ge \max_b \min_a f(a, b)$ for every $a_0$, and minimizing over $a_0$ then yields
$$\min_{a} \max_{b} f(a, b) \ge \max_{b} \min_{a} f(a, b).$$
Applying this with $a = Z$, $b = X$, and $f(a, b) = I(X; X + Z)$ gives $\min_Z \max_X I(X; X+Z) \ge \max_X \min_Z I(X; X+Z)$, so equality holds throughout (2).
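For concreteness, here is a tiny numerical sketch (ours, not in the handout) of this general max-min inequality on finite grids: for any matrix $f[a, b]$, $\max_b \min_a f \le \min_a \max_b f$.

```python
# A tiny sketch, ours: check the general two-variable fact max_b min_a f <= min_a max_b f
# on random matrices f[a, b] with rows indexed by a and columns by b.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    f = rng.normal(size=(8, 8))
    max_min = f.min(axis=0).max()   # max over b of (min over a) f[a, b]
    min_max = f.max(axis=1).min()   # min over a of (max over b) f[a, b]
    assert max_min <= min_max + 1e-12
print("max-min <= min-max verified on 1000 random functions")
```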