Introduction to Information Theory (67548)
Assignment 3
Lecturer: Prof. Michael Werman
December 25, 2008
Due: Sunday, January 11, 2009

Note: Unless specified otherwise, all entropies and logarithms are taken with base 2.

Problem 1: Binary Channel

1. We need to find the distribution p(x) of the input X which maximizes I(X;Y), where Y is the three-valued output taking values in {0, e, 1}. Denote p(0) = π, p(1) = 1 − π. On either input, the channel flips the bit with probability ε, erases it (outputs e) with probability α, and transmits it correctly with probability 1 − ε − α. By this symmetry,

    H(Y|X) = π H(Y|X=0) + (1−π) H(Y|X=1)
           = π H(ε, α, 1−ε−α) + (1−π) H(ε, α, 1−ε−α)
           = H(ε, α, 1−ε−α).

To calculate the distribution of Y, we note that

    Pr(Y=0) = Pr(X=0) Pr(Y=0|X=0) + Pr(X=1) Pr(Y=0|X=1)
            = π(1−α−ε) + (1−π)ε
            = π(1−α−2ε) + ε.

Similarly,

    Pr(Y=e) = Pr(X=0) Pr(Y=e|X=0) + Pr(X=1) Pr(Y=e|X=1) = πα + (1−π)α = α.

Therefore,

    Pr(Y=1) = 1 − Pr(Y=0) − Pr(Y=e) = 1 − α − ε − π(1−α−2ε).

Since I(X;Y) = H(Y) − H(Y|X), we get

    I(X;Y) = H(α, π(1−α−2ε)+ε, 1−α−ε−π(1−α−2ε)) − H(ε, α, 1−ε−α).

Now we wish to find the π which maximizes this expression. Notice that the second entropy term does not depend on π, so we can concentrate on maximizing the first entropy term. Denote q_π = π(1−α−2ε) + ε. Then we need to maximize H(α, q_π, 1−α−q_π). It is not hard to see (say, by taking a derivative) that this entropy is maximized when q_π = 1−α−q_π, which means that we want

    π(1−α−2ε) + ε = (1−α)/2, i.e. π(1−α−2ε) = (1−α−2ε)/2.

Assuming 1−α−2ε ≠ 0, this gives π = 1/2: the capacity-achieving input distribution is uniform on {0, 1}.
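As a quick numerical sanity check (not part of the original solution), the following Python sketch sweeps the input prior π and confirms that I(X;Y) peaks at π = 1/2. The helper functions and the parameter values α = 0.1, ε = 0.05 are arbitrary choices for this demonstration, not anything specified in the assignment.

    # A minimal sketch verifying the derivation above numerically.
    # Channel: on input x, the output is flipped with probability eps,
    # erased with probability alpha, correct with probability 1-alpha-eps.
    import numpy as np

    def entropy(p):
        """Shannon entropy (base 2) of a probability vector, ignoring zeros."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def mutual_information(pi, alpha, eps):
        """I(X;Y) for the binary channel with erasure prob alpha, flip prob eps."""
        # Distribution of Y over (0, e, 1), exactly as derived above.
        p_y0 = pi * (1 - alpha - eps) + (1 - pi) * eps
        p_ye = alpha
        p_y1 = 1 - p_y0 - p_ye
        # H(Y|X) = H(eps, alpha, 1-eps-alpha), independent of pi.
        h_y_given_x = entropy([eps, alpha, 1 - alpha - eps])
        return entropy([p_y0, p_ye, p_y1]) - h_y_given_x

    alpha, eps = 0.1, 0.05           # arbitrary test values
    pis = np.linspace(0, 1, 1001)
    mis = [mutual_information(pi, alpha, eps) for pi in pis]
    print(f"I(X;Y) maximized at pi = {pis[int(np.argmax(mis))]:.3f}")
    # prints: I(X;Y) maximized at pi = 0.500

Because H(Y|X) does not depend on π, the sweep is effectively maximizing H(Y) alone, which is exactly the reduction used in the derivation.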