ECE 255AN Fall 2011 Homework set 1 — solutions

2. (a) The optimal code is $1, 01, 001, \ldots, 0^{n-2}1, 0^{n-1}1, 0^{n-1}0$, where $0^k$ denotes a run of $k$ zeros.

(b)
$$H(P) = \sum_{i=1}^{\infty} \frac{1}{2^i} \log 2^i = \sum_{i=1}^{\infty} \frac{i}{2^i} = \frac{1/2}{(1 - 1/2)^2} = 2,$$
and the optimal code is $0, 10, 110, \ldots$.

3. False. Let $X$ be nonconstant and let $Y = X$.

4. $A$ decreases with increasing $\alpha$. For $\alpha = 1$ we approximate $\sum_{n=2}^{N} \frac{1}{n \log n}$ by $\int_2^N \frac{dx}{x \log x} \approx \log \log N$; thus the sum diverges. For $\alpha > 1$, $A$ can be approximated by $\int_1^{\infty} \frac{dx}{x (\log x)^{\alpha}}$, which is finite. Using this, for part (b) the entropy is finite for $\alpha > 2$ and infinite otherwise. Similarly, $B$ converges for $\alpha > 1$ and diverges otherwise, and the corresponding entropy is finite for $\alpha > 2$, else infinite.

5. False. Consider $H \stackrel{\text{def}}{=} H(p, 1/3, 2/3 - p)$. For $p = 1/3$, $H = \log 3 > 1$, while for $p = 2/3$, $H = h(2/3) < 1$. Hence for some $1/3 < p < 2/3$, $H = 1$, so that $\lceil H \rceil = H$. But since at least one of the probabilities, $1/3$, is not a power of 2, the expected length will be $> H$, and hence $> \lceil H \rceil$.

6. $h(p) + 1.5(1 - p)$.

7. Consider any distributions $(p_1, p_2, \ldots)$ and $(q_1, q_2, \ldots)$. Simplistically speaking, we need to show that
$$\frac{1}{2} \sum_{i \ge 1} \left( p_i \log p_i + q_i \log q_i \right) \;\ge\; \sum_{i \ge 1} \frac{p_i + q_i}{2} \log \frac{p_i + q_i}{2}.$$
It suffices if, for every $i$,
$$p_i \log p_i + q_i \log q_i \;\ge\; (p_i + q_i) \log \frac{p_i + q_i}{2}.$$
Dividing through by $p_i + q_i$ and rearranging, this is equivalent to
$$\frac{p_i}{p_i + q_i} \log \frac{p_i + q_i}{2 p_i} + \frac{q_i}{p_i + q_i} \log \frac{p_i + q_i}{2 q_i} \;\le\; 0,$$
and by concavity of the logarithm the left-hand side is at most
$$\log \left( \frac{p_i}{p_i + q_i} \cdot \frac{p_i + q_i}{2 p_i} + \frac{q_i}{p_i + q_i} \cdot \frac{p_i + q_i}{2 q_i} \right) = \log \left( \frac{1}{2} + \frac{1}{2} \right) = 0.$$

Solutions to Chapter 2 problems

21. Markov inequality applied to entropy.
$$
\begin{aligned}
P\big( p(X) < d \big) \log \frac{1}{d}
&= \sum_{x :\, p(x) < d} p(x) \log \frac{1}{d} && (1) \\
&\le \sum_{x :\, p(x) < d} p(x) \log \frac{1}{p(x)} && (2) \\
&\le \sum_{x} p(x) \log \frac{1}{p(x)} && (3) \\
&= H(X) && (4)
\end{aligned}
$$

Solutions to Chapter 5 problems

14. Huffman code. ...
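The series value in 2(b) and the concavity inequality in problem 7 can be checked numerically. The following is a sanity-check sketch, not part of the original solutions; `entropy` and `random_dist` are hypothetical helper names:

```python
import math
import random

def entropy(p):
    """Shannon entropy (in bits) of a probability vector, skipping zero mass."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def random_dist(k, rng):
    """Hypothetical helper: a random probability vector of length k."""
    w = [rng.random() for _ in range(k)]
    s = sum(w)
    return [x / s for x in w]

# Problem 7 (concavity of entropy): H((P+Q)/2) >= (H(P) + H(Q)) / 2
# for every pair of distributions P, Q on the same alphabet.
rng = random.Random(0)
for _ in range(1000):
    p = random_dist(5, rng)
    q = random_dist(5, rng)
    mix = [(a + b) / 2 for a, b in zip(p, q)]
    assert entropy(mix) >= (entropy(p) + entropy(q)) / 2 - 1e-12

# Problem 2(b): H(P) = sum_i i / 2^i = 2 bits for P(i) = 2^{-i}.
H = sum(i / 2**i for i in range(1, 200))
print(round(H, 6))  # -> 2.0
```

Equality in the concavity check holds when P = Q, matching the condition for equality in Jensen's inequality.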