Information theory—homework exercises
Edited by: Gábor Lugosi

1 Entropy, source coding

Problem 1 (Alternative definition of unique decodability)
A code f : X → Y* is called uniquely decodable if for any messages u = u_1 ··· u_k and v = v_1 ··· v_k (where u_1, v_1, ..., u_k, v_k ∈ X) with f(u_1) f(u_2) ··· f(u_k) = f(v_1) f(v_2) ··· f(v_k), we have u_i = v_i for all i. That is, as opposed to the definition given in class, the condition is only required for pairs of messages of the same length. Prove that the two definitions are equivalent.

Problem 2 (Average length of the optimal code)
Show that the expected codeword length of the optimal binary code may be arbitrarily close to H(X) + 1. More precisely, for any small ε > 0, construct a distribution on the source alphabet X such that the average codeword length of the optimal binary code satisfies E|f(X)| > H(X) + 1 − ε.

Problem 3 (Equality in Kraft's inequality)
A prefix code f is called full if it loses its prefix property when any new codeword is added to it. A string x is called undecodable if it is impossible to construct a sequence of codewords such that x is a prefix of their concatenation. Show that the following three statements are equivalent:
(a) f is full;
(b) there is no undecodable string with respect to f;
(c) ∑_{i=1}^{n} s^{−l_i} = 1, where s is the cardinality of the code alphabet, l_i is the length of the i-th codeword, and n is the number of codewords.

Problem 4 (Shannon-Fano code)
Consider the following code construction. Order the elements of the source alphabet X according to their decreasing probabilities: p(x_1) ≥ p(x_2) ≥ ··· ≥ p(x_n) > 0. Introduce the numbers w_i as follows:

    w_1 = 0,    w_i = ∑_{j=1}^{i−1} p(x_j)    (i = 2, ..., n).

Consider the binary expansion of the numbers w_i. Write down the binary expansion of w_i up to the first bit at which it differs from the expansions of all w_j (j ≠ i). In this way we obtain n finite strings. Define the binary codeword f(x_i) as the obtained binary expansion of w_i following the binary point. Prove that the lengths of the codewords of the obtained code satisfy |f(x_i)| < −log p(x_i) + 1. Therefore, the expected codeword length is smaller than the entropy plus one. (A small computational sketch of this construction is given after Problem 6.)

Problem 5 (Bad codes)
Which of the following binary codes cannot be a Huffman code for any distribution? Verify your answer.
(a) 0, 10, 111, 101
(b) 00, 010, 011, 10, 110
(c) 1, 000, 001, 010, 011

Problem 6
Assume that the probability of each element of the source alphabet X = {x_1, ..., x_n} is of the form 2^{−i}, where i is a positive integer. Prove that the Shannon-Fano code is optimal. Show that the average codeword length of a binary Huffman code is equal to the entropy if and only if the distribution is of the described form.
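The following is a minimal Python sketch of the construction described in Problem 4, useful for checking small examples by hand. The helper names (bits, shannon_fano) and the sample distribution are chosen for this note and do not come from the problem set, and the binary expansions are approximated with floating point. The script also prints the Kraft sum of the resulting codeword lengths, the quantity that appears in Problem 3(c).

```python
from math import log2

def bits(x, L):
    """First L bits of the binary expansion of x, for x in [0, 1)."""
    out = []
    for _ in range(L):
        x *= 2
        b = int(x)
        out.append(str(b))
        x -= b
    return "".join(out)

def shannon_fano(p):
    """Codewords from the construction in Problem 4.

    p: probabilities sorted in decreasing order, all positive.
    For each i, take bits of w_i until that prefix differs from the
    corresponding prefix of every other w_j.
    """
    n = len(p)
    w = [sum(p[:i]) for i in range(n)]   # w_1 = 0, w_i = p_1 + ... + p_{i-1}
    code = []
    for i in range(n):
        L = 1
        while any(bits(w[j], L) == bits(w[i], L) for j in range(n) if j != i):
            L += 1
        code.append(bits(w[i], L))
    return code

p = [0.4, 0.3, 0.2, 0.1]                  # example distribution, decreasing order
code = shannon_fano(p)
for pi, ci in zip(p, code):
    assert len(ci) < -log2(pi) + 1        # the length bound claimed in Problem 4
print(code)
print(sum(2.0 ** -len(c) for c in code))  # Kraft sum of the lengths, cf. Problem 3(c)
```

On this particular distribution the construction returns the four two-bit strings, so the Kraft sum is exactly 1 and the code is full in the sense of Problem 3(a).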
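For Problems 5 and 6 it can also help to compute binary Huffman codeword lengths for small distributions. Below is one standard greedy construction sketched in Python; huffman_lengths is a name invented here, not something from the text. Run on a dyadic distribution such as (1/2, 1/4, 1/8, 1/8), the printed average length coincides with the entropy, in line with the claim of Problem 6.

```python
import heapq
from math import log2

def huffman_lengths(p):
    """Codeword lengths of a binary Huffman code for probabilities p.

    Greedy construction: repeatedly merge the two least likely nodes;
    each merge adds one bit to every source symbol under the merged node.
    """
    # heap entries: (probability, tiebreaker, indices of symbols below this node)
    heap = [(pi, i, [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        p1, _, leaves1 = heapq.heappop(heap)
        p2, j, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:
            lengths[leaf] += 1
        heapq.heappush(heap, (p1 + p2, j, leaves1 + leaves2))
    return lengths

p = [1/2, 1/4, 1/8, 1/8]                  # dyadic distribution as in Problem 6
lengths = huffman_lengths(p)
avg = sum(pi * li for pi, li in zip(p, lengths))
H = -sum(pi * log2(pi) for pi in p)
print(lengths, avg, H)                    # here the average length equals the entropy
```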