# hw4sol2 - Solutions to Homework Set #4: Channel and Source Coding

## 1. Lossless source coding with side information

Consider lossless source coding with side information that is available at both the encoder and the decoder, where the source $X$ and the side information $Y$ are i.i.d. $\sim P_{X,Y}(x,y)$. The encoder maps the pair $(X^n, Y^n)$ to an index $f(X^n, Y^n) \in \{1, 2, \ldots, 2^{nR}\}$, and the decoder forms the estimate $\hat{X}^n(f(X^n, Y^n), Y^n)$.

Figure 1: Lossless source coding with side information at the encoder and decoder.

Show that a code with rate $R < H(X \mid Y)$ cannot be achievable, and interpret the result.

Hint: Let $T$ denote $f(X^n, Y^n)$. Consider

$$nR \geq H(T) \geq H(T \mid Y^n), \qquad (1)$$

and use steps similar to those used in class to prove the converse in the case where side information was not available, including Fano's inequality.

**Solution:** Sketch of the solution (please fill in the justification for each step):

$$nR \geq H(T) \geq H(T \mid Y^n) \geq I(X^n; T \mid Y^n) = H(X^n \mid Y^n) - H(X^n \mid T, Y^n) \geq nH(X \mid Y) - n\epsilon_n,$$

where $\epsilon_n \to 0$. Dividing by $n$ shows that any achievable rate must satisfy $R \geq H(X \mid Y)$: rates below the conditional entropy are not achievable even though the encoder also observes $Y^n$.
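For reference, one possible justification for each step of the chain (a sketch; verify these against the class notes):

- $nR \geq H(T)$: $T$ takes values in $\{1, \ldots, 2^{nR}\}$, and entropy is at most the log of the alphabet size.
- $H(T) \geq H(T \mid Y^n)$: conditioning reduces entropy.
- $H(T \mid Y^n) \geq I(X^n; T \mid Y^n)$: since $I(X^n; T \mid Y^n) = H(T \mid Y^n) - H(T \mid X^n, Y^n)$ and $H(T \mid X^n, Y^n) \geq 0$ (in fact it equals $0$, because $T = f(X^n, Y^n)$).
- $H(X^n \mid Y^n) = nH(X \mid Y)$: the pairs $(X_i, Y_i)$ are i.i.d.
- $H(X^n \mid T, Y^n) \leq n\epsilon_n$: Fano's inequality, since $\hat{X}^n$ is a function of $(T, Y^n)$ and the error probability vanishes as $n \to \infty$.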

## 2. Preprocessing the output

One is given a communication channel with transition probabilities $p(y \mid x)$ and channel capacity $C = \max_{p(x)} I(X; Y)$. A helpful statistician preprocesses the output by forming $\tilde{Y} = g(Y)$, yielding a channel $\tilde{p}(\tilde{y} \mid x)$. He claims that this will strictly improve the capacity.

(a) Show that he is wrong.

(b) Under what conditions does he not strictly decrease the capacity?

**Solution:** Preprocessing the output.

(a) The statistician calculates $\tilde{Y} = g(Y)$. Since $X \to Y \to \tilde{Y}$ forms a Markov chain, we can apply the data processing inequality. Hence for every distribution on $x$,

$$I(X; Y) \geq I(X; \tilde{Y}).$$

Let $\tilde{p}(x)$ be the distribution on $x$ that maximizes $I(X; \tilde{Y})$. Then

$$C = \max_{p(x)} I(X; Y) \geq I(X; Y)\big|_{p(x) = \tilde{p}(x)} \geq I(X; \tilde{Y})\big|_{p(x) = \tilde{p}(x)} = \max_{p(x)} I(X; \tilde{Y}) = \tilde{C}.$$

Thus the helpful suggestion is wrong: processing the output does not increase capacity.

(b) We have equality (no decrease in capacity) in the above sequence of inequalities only if we have equality in the data processing inequality, i.e., for the distribution that maximizes $I(X; \tilde{Y})$, the chain $X \to \tilde{Y} \to Y$ must also be Markov. Thus $\tilde{Y}$ should be a sufficient statistic for $X$.

## 3. The Z channel

The Z channel has binary input and output alphabets and transition probabilities $p(y \mid x)$ given by the matrix

$$Q = \begin{bmatrix} 1 & 0 \\ 1/2 & 1/2 \end{bmatrix}, \qquad x, y \in \{0, 1\}.$$

Find the capacity of the Z channel and the maximizing input probability distribution.
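As an illustrative sanity check (not part of the original solution), the data processing inequality in part (a) can be verified numerically on a made-up binary-input, ternary-output channel whose outputs are merged by a deterministic $g$:

```python
import numpy as np

def mutual_information(px, pyx):
    """I(X;Y) in bits for input distribution px and channel matrix pyx[x][y]."""
    pxy = px[:, None] * pyx                      # joint p(x, y)
    py = pxy.sum(axis=0)                         # marginal p(y)
    prod = px[:, None] * py[None, :]             # product of marginals
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / prod[mask])).sum())

# Example (made-up) channel: binary input, ternary output
pyx = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.2, 0.7]])
px = np.array([0.5, 0.5])

# Preprocessing g merges outputs 1 and 2: g(0) = 0, g(1) = g(2) = 1
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
pyx_tilde = pyx @ G                              # channel from X to Y~ = g(Y)

I_orig = mutual_information(px, pyx)
I_proc = mutual_information(px, pyx_tilde)
assert I_proc <= I_orig + 1e-12                  # data processing inequality
```

Here merging the outputs can only lose information about $X$, so $I(X; \tilde{Y}) \leq I(X; Y)$ holds for this (and every other) choice of input distribution.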
**Solution:** The Z channel.

First we express $I(X; Y)$, the mutual information between the input and output of the Z channel, as a function of $\alpha = \Pr(X = 1)$, where $H(\cdot)$ denotes the binary entropy function:

$$H(Y \mid X) = \Pr(X = 0) \cdot 0 + \Pr(X = 1) \cdot 1 = \alpha$$

$$H(Y) = H(\Pr(Y = 1)) = H(\alpha/2)$$

$$I(X; Y) = H(Y) - H(Y \mid X) = H(\alpha/2) - \alpha$$

Since $I(X; Y)$ is strictly concave in $\alpha$ (why?) and $I(X; Y) = 0$ at both $\alpha = 0$ and $\alpha = 1$, the maximum mutual information is attained at some $\alpha$ with $0 < \alpha < 1$.
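The maximization can be completed by setting the derivative of $H(\alpha/2) - \alpha$ to zero, which yields $\alpha^* = 2/5$ and $C = \log_2 5 - 2 \approx 0.322$ bits. A short numerical sketch (assuming NumPy) confirming this:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits; h2(0) = h2(1) = 0 by convention."""
    p = np.asarray(p, dtype=float)
    out = np.zeros_like(p)
    m = (p > 0) & (p < 1)
    out[m] = -p[m] * np.log2(p[m]) - (1 - p[m]) * np.log2(1 - p[m])
    return out

# Sweep alpha = Pr(X = 1) over a fine grid and maximize I(X;Y) = H(alpha/2) - alpha
alphas = np.linspace(0.0, 1.0, 100001)
I = h2(alphas / 2) - alphas
k = int(np.argmax(I))
alpha_star, C = alphas[k], I[k]
# Closed form: alpha* = 2/5 and C = log2(5) - 2 ≈ 0.3219 bits
```

The numerical maximizer lands on $\alpha^* = 0.4$ with capacity $\approx 0.3219$ bits, matching the closed form.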
