EE595A Introduction to Information Theory
University of Washington, Dept. of Electrical Engineering, Winter 2004
Handout 4: Problem Set 1: Solutions
Prof: Jeff A. Bilmes <bilmes@ee.washington.edu>
Lecture 3, Jan 27, 2004

4.1 Problems from Text

Do problems 2.16, 2.3, 2.5, 2.6, 2.12, 2.10, 2.20 in Cover and Thomas.

Problem 2.16 (Example of joint entropy). Let p(x, y) be given by

    p(x, y)    Y = 0    Y = 1
    X = 0       1/3      1/3
    X = 1        0       1/3

Find

1. H(X), H(Y).
2. H(X|Y), H(Y|X).
3. H(X, Y).
4. H(Y) - H(Y|X).
5. I(X; Y).
6. Draw a Venn diagram for the quantities in parts 1 through 5.

Solution 2.16 (Example of joint entropy).

1. H(X) = (2/3) log(3/2) + (1/3) log 3 = 0.918 bits = H(Y).
2. H(X|Y) = (1/3) H(X|Y=0) + (2/3) H(X|Y=1) = 0.667 bits = H(Y|X).
3. H(X, Y) = 3 × (1/3) log 3 = 1.585 bits.
4. H(Y) - H(Y|X) = 0.251 bits.
5. I(X; Y) = H(Y) - H(Y|X) = 0.251 bits.
6. See Figure 4.1.
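The quantities in Solution 2.16 are easy to recompute mechanically. The following is a minimal sketch, assuming Python 3 with only the standard library (the dictionary layout and the names H, px, py are illustrative choices, not part of the handout), that reproduces every value above from the joint table; it uses the chain-rule identities H(X|Y) = H(X,Y) - H(Y) rather than summing conditional entropies directly.

    from math import log2

    # Joint distribution p(x, y) from Problem 2.16, stored as a dict keyed by (x, y).
    p = {(0, 0): 1/3, (0, 1): 1/3, (1, 0): 0.0, (1, 1): 1/3}

    def H(dist):
        # Entropy in bits of a distribution given as a dict of probabilities.
        return -sum(q * log2(q) for q in dist.values() if q > 0)

    # Marginal distributions of X and Y.
    px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}
    py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}

    HX, HY, HXY = H(px), H(py), H(p)
    H_X_given_Y = HXY - HY              # chain rule: H(X|Y) = H(X,Y) - H(Y)
    H_Y_given_X = HXY - HX
    I_XY = HY - H_Y_given_X

    print(f"H(X) = {HX:.3f}, H(Y) = {HY:.3f} bits")                   # 0.918 each
    print(f"H(X|Y) = {H_X_given_Y:.3f}, H(Y|X) = {H_Y_given_X:.3f}")  # 0.667 each
    print(f"H(X,Y) = {HXY:.3f}")                                      # 1.585
    print(f"I(X;Y) = {I_XY:.3f}")                                     # 0.251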
[Figure 4.1: Venn diagram to illustrate the relationships of entropy and relative entropy; the regions are labeled H(X), H(Y), H(X|Y), H(Y|X), and I(X;Y).]

Problem 2.3 (Minimum entropy). What is the minimum value of H(p_1, ..., p_n) = H(p) as p ranges over the set of n-dimensional probability vectors? Find all p's which achieve this minimum.

Solution 2.3. We wish to find all probability vectors p = (p_1, p_2, ..., p_n) which minimize

    H(p) = - Σ_i p_i log p_i.

Now -p_i log p_i ≥ 0, with equality iff p_i = 0 or 1. Hence the only possible probability vectors which minimize H(p) are those with p_i = 1 for some i and p_j = 0 for all j ≠ i. There are n such vectors, i.e., (1, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, ..., 0, 1), and the minimum value of H(p) is 0. (A numerical check of this appears in the first sketch following Solution 2.5 below.)

Problem 2.5 (Entropy of functions of a random variable). Let X be a discrete random variable. Show that the entropy of a function of X is less than or equal to the entropy of X by justifying the following steps:

    H(X, g(X)) = H(X) + H(g(X)|X)       (a)   (4.1)
               = H(X);                  (b)   (4.2)
    H(X, g(X)) = H(g(X)) + H(X|g(X))    (c)   (4.3)
               ≥ H(g(X)).               (d)   (4.4)

Thus H(g(X)) ≤ H(X).

Solution 2.5 (Entropy of functions of a random variable).

1. H(X, g(X)) = H(X) + H(g(X)|X) by the chain rule for entropies.
2. H(g(X)|X) = 0, since for any particular value of X, g(X) is fixed; hence H(g(X)|X) = Σ_x p(x) H(g(X)|X = x) = Σ_x 0 = 0.
3. H(X, g(X)) = H(g(X)) + H(X|g(X)), again by the chain rule.
4. H(X|g(X)) ≥ 0, with equality iff X is a function of g(X), i.e., g(·) is one-to-one. Hence H(X, g(X)) ≥ H(g(X)).

Combining parts (b) and (d), we obtain H(X) ≥ H(g(X)).
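Returning to Solution 2.3, here is the numerical sanity check referenced there: a minimal sketch, assuming Python 3 with only the standard library (the support size n = 5, the sample count, and the name entropy_bits are illustrative choices), that samples probability vectors and confirms H(p) > 0 whenever more than one entry is nonzero, while each unit vector attains the minimum H(p) = 0.

    import random
    from math import log2

    def entropy_bits(p):
        # Entropy in bits of a probability vector, with the convention 0 log 0 = 0.
        return -sum(q * log2(q) for q in p if q > 0)

    random.seed(0)
    n = 5

    # Generic probability vectors (all entries positive) have strictly positive entropy.
    for _ in range(1000):
        w = [random.random() for _ in range(n)]
        total = sum(w)
        p = [x / total for x in w]
        assert entropy_bits(p) > 0.0

    # The n unit vectors (all mass on a single outcome) achieve the minimum H(p) = 0.
    for i in range(n):
        e_i = [1.0 if j == i else 0.0 for j in range(n)]
        assert entropy_bits(e_i) == 0.0

    print("sampled check passed: H(p) > 0 off the unit vectors, H(p) = 0 at each unit vector")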
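Solution 2.5's conclusion can be checked numerically as well. The sketch below (the pmf on {0, 1, 2, 3} and the two choices of g are made up for illustration) pushes a pmf for X through a many-to-one g and through a one-to-one g and compares entropies: merging outcomes can only lower the entropy, while a bijection leaves it unchanged.

    from math import log2

    def entropy_bits(pmf):
        # Entropy in bits of a pmf given as a dict mapping outcomes to probabilities.
        return -sum(q * log2(q) for q in pmf.values() if q > 0)

    def pmf_of_gX(pmf, g):
        # Push the pmf of X through g to obtain the pmf of g(X).
        out = {}
        for x, q in pmf.items():
            out[g(x)] = out.get(g(x), 0.0) + q
        return out

    pX = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}      # an arbitrary pmf for X

    g_many_to_one = lambda x: x % 2            # merges outcomes {0, 2} and {1, 3}
    g_one_to_one = lambda x: 10 * x + 7        # a bijection on the support of X

    print(f"H(X)                 = {entropy_bits(pX):.3f} bits")
    print(f"H(g(X)), many-to-one = {entropy_bits(pmf_of_gX(pX, g_many_to_one)):.3f} bits  (< H(X))")
    print(f"H(g(X)), one-to-one  = {entropy_bits(pmf_of_gX(pX, g_one_to_one)):.3f} bits  (= H(X))")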
Problem 2.6