EE 376A/Stat 376A Information Theory                    Handout #7
Prof. T. Cover                                          Thursday, January 20, 2011

                        Solutions to Homework Set #1

1. Entropy of Hamming Code.
   Consider information bits X1, X2, X3, X4 ∈ {0, 1} chosen at random, and check
   bits X5, X6, X7 chosen to make the parity of the circles even.

   [Figure: Venn diagram of three overlapping circles containing the seven bits
   X1, ..., X7; each check bit X5, X6, X7 makes the parity of its circle even.]

   Thus, for example, 1011 becomes 1011010.

   (a) What is the entropy H(X1, X2, ..., X7)?

   Now we make an error (or not) in one of the bits (or none). Let Y = X ⊕ e,
   where e is equally likely to be (1, 0, ..., 0), (0, 1, 0, ..., 0), ...,
   (0, 0, ..., 0, 1), or (0, 0, ..., 0), and e is independent of X.

   (b) What is the entropy of Y?

   (c) What is H(X | Y)?

   (d) What is I(X; Y)?

Solution: Entropy of Hamming Code.

(a) By the chain rule,

        H(X1, X2, ..., X7) = H(X1, X2, X3, X4) + H(X5, X6, X7 | X1, X2, X3, X4).

    Since X5, X6, X7 are all deterministic functions of X1, X2, X3, X4, we have

        H(X5, X6, X7 | X1, X2, X3, X4) = 0.

    And since X1, X2, X3, X4 are independent Bernoulli(1/2) random variables,

        H(X1, X2, ..., X7) = H(X1) + H(X2) + H(X3) + H(X4) = 4 bits.

(b) We expand H(X ⊕ e, X) in two different ways using the chain rule. On one
    hand, we can write

        H(X ⊕ e, X) = H(X ⊕ e) + H(X | X ⊕ e) = H(X ⊕ e).

    In the last step, H(X | X ⊕ e) = 0 because X is a deterministic function of
    X ⊕ e: the (7,4) Hamming code correctly decodes X when there is at most one
    error. (You can check this by trying all error patterns satisfying this
    constraint.) On the other hand, we can also expand H(X ⊕ e, X) as

        H(X ⊕ e, X) = H(X) + H(X ⊕ e | X)
                    = H(X) + H(X ⊕ e ⊕ X | X)
                    = H(X) + H(e | X)
                    = H(X) + H(e)
                    = 4 + H(e)
                    = 4 + log2 8 = 7.

    The second equality follows since XORing with X is a one-to-one function.
    The third equality follows from the property of XOR that y ⊕ y = 0.
    The fourth equality follows since the error vector e is independent of X.
    The fifth equality follows since, from part (a), H(X) = 4. The sixth
    equality follows since e is uniformly distributed over eight possible
    values: either there is an error in one of the seven positions, or no error
    at all. Equating our two different expansions of H(X ⊕ e, X), we have
    H(X ⊕ e) = 7. The entropy of Y = X ⊕ e is 7 bits.

(c) As mentioned before, X is a deterministic function of X ⊕ e, since the
    (7,4) Hamming code can correctly decode X when there is at most one error.
    So H(X | Y) = 0.

(d) I(X; Y) = H(X) − H(X | Y) = H(X) − 0 = 4 bits.
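The count in part (a) can be checked numerically. The sketch below assumes one particular parity assignment consistent with the worked example 1011 → 1011010 (the exact circle layout is not recoverable from the text, so the mapping X5 = X1⊕X2⊕X3, X6 = X1⊕X3⊕X4, X7 = X2⊕X3⊕X4 is an assumption):

```python
# Numerical check of part (a): the 16 equally likely information vectors map
# to 16 distinct codewords, so (X1,...,X7) is uniform over 16 values and
# H(X1,...,X7) = log2(16) = 4 bits.
from itertools import product
from math import log2

def encode(x1, x2, x3, x4):
    """Append three even-parity check bits to the four information bits.
    (Assumed circle assignment, consistent with 1011 -> 1011010.)"""
    return (x1, x2, x3, x4,
            x1 ^ x2 ^ x3,   # X5
            x1 ^ x3 ^ x4,   # X6
            x2 ^ x3 ^ x4)   # X7

codewords = [encode(*bits) for bits in product((0, 1), repeat=4)]

assert len(set(codewords)) == 16       # encoding is one-to-one
entropy = log2(len(set(codewords)))
print(entropy)                         # 4.0

# Sanity check against the example in the text: 1011 becomes 1011010.
assert encode(1, 0, 1, 1) == (1, 0, 1, 1, 0, 1, 0)
```

The check bits add no entropy precisely because `encode` is deterministic: all randomness lives in the four information bits.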
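The key fact behind parts (b)–(d) — that X is recoverable from Y = X ⊕ e when at most one bit is flipped — can also be verified exhaustively. Since each of the 16 codewords combined with each of the 8 error patterns gives a different received word, Y is uniform over 128 values, so H(Y) = 7 and H(X | Y) = 0. The same assumed parity assignment as above is used here:

```python
# Numerical check of parts (b)-(d): all 16 * 8 = 128 received words Y = X + e
# (XOR) are distinct, so X is a deterministic function of Y, H(X|Y) = 0, and
# H(Y) = log2(128) = 7 bits. Parity assignment is an assumption consistent
# with the example 1011 -> 1011010.
from itertools import product
from math import log2

def encode(x1, x2, x3, x4):
    return (x1, x2, x3, x4, x1 ^ x2 ^ x3, x1 ^ x3 ^ x4, x2 ^ x3 ^ x4)

codewords = [encode(*bits) for bits in product((0, 1), repeat=4)]

# The eight equally likely error patterns: flip one of seven bits, or none.
errors = [tuple(1 if i == j else 0 for i in range(7)) for j in range(7)]
errors.append((0,) * 7)

received = {tuple(c ^ e for c, e in zip(cw, err))
            for cw in codewords for err in errors}

assert len(received) == 128   # no collisions: every (X, e) pair is decodable
H_Y = log2(len(received))     # Y is uniform over 128 received words
print(H_Y)                    # 7.0

H_X = log2(len(codewords))    # 4.0, from part (a)
print(H_X - 0)                # I(X;Y) = H(X) - H(X|Y) = 4.0
```

The absence of collisions is exactly the statement that the radius-1 Hamming spheres around the 16 codewords are disjoint; since 16 × 8 = 2^7, they also tile the whole space, which is why Y comes out exactly uniform.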
This note was uploaded on 04/05/2011 for the course EE 5368, taught by Professor Staff during the Spring '08 term at UT Arlington.