
# ECE 534: Elements of Information Theory, Fall 2010 - Homework 1 Solutions


## Ex. 2.1 (Davide Basilio Bartolini)

**Text**

*Coin Flips.* A fair coin is flipped until the first head occurs. Let $X$ denote the number of flips required.

(a) Find the entropy $H(X)$ in bits.

(b) A random variable $X$ is drawn according to this distribution. Find an "efficient" sequence of yes-no questions of the form, "Is $X$ contained in the set $S$?". Compare $H(X)$ to the expected number of questions required to determine $X$.

**Solution**

(a) The random variable $X$ takes values in $\mathcal{X} = \{1, 2, 3, \dots\}$ and denotes the number of flips needed to obtain the first head, i.e. one plus the number of consecutive tails before the first head. Since the coin is fair, $p(\text{head}) = p(\text{tail}) = \frac{1}{2}$, and hence (exploiting the independence of the coin flips):

$$p(X = 1) = p(\text{head}) = \frac{1}{2}$$

$$p(X = 2) = p(\text{tail}) \cdot p(\text{head}) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{2^2}$$

$$p(X = n) = \underbrace{p(\text{tail}) \cdots p(\text{tail})}_{n-1 \text{ times}} \cdot \, p(\text{head}) = \frac{1}{2^{n-1}} \cdot \frac{1}{2} = \frac{1}{2^n}$$

From this it is clear that the probability mass function of $X$ is

$$p_X(x) = \frac{1}{2^x}, \qquad x \in \{1, 2, 3, \dots\}.$$

Once the distribution is known, $H(X)$ can be computed from the definition:

$$
\begin{aligned}
H(X) &= -\sum_{x \in \mathcal{X}} p_X(x) \log_2 p_X(x)
      = -\sum_{x=1}^{\infty} \frac{1}{2^x} \log_2 \frac{1}{2^x} \\
     &= -\sum_{x=0}^{\infty} \frac{1}{2^x} \log_2 \frac{1}{2^x} && \text{(the $x = 0$ term equals 0)} \\
     &= -\sum_{x=0}^{\infty} \frac{x}{2^x} \log_2 \frac{1}{2} && \text{(property of logarithms)} \\
     &= -\log_2 \frac{1}{2} \sum_{x=0}^{\infty} x \left(\frac{1}{2}\right)^x
      = \sum_{x=0}^{\infty} x \left(\frac{1}{2}\right)^x
      = \frac{\frac{1}{2}}{\left(1 - \frac{1}{2}\right)^2} = 2 \text{ bits},
\end{aligned}
$$

exploiting the identity $\sum_{x=0}^{\infty} x k^x = \frac{k}{(1-k)^2}$ for $|k| < 1$.

(b) Since the most likely value of $X$ is 1 ($p(X = 1) = \frac{1}{2}$), the most efficient first question is "Is $X = 1$?"; the next question is "Is $X = 2$?", and so on, until a positive answer is obtained. With this strategy, the random variable $Y$ representing the number of questions has the same distribution as $X$, so

$$E[Y] = \sum_{y=1}^{\infty} y \, \frac{1}{2^y} = \frac{\frac{1}{2}}{\left(1 - \frac{1}{2}\right)^2} = 2,$$

which is exactly equal to the entropy of $X$.
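As a quick numerical sanity check (not part of the original solution), both results can be verified in a short Python sketch: the entropy sum is truncated since its terms decay geometrically, and the question strategy is simulated by sampling coin flips. The helper name `flips_until_head` and the trial count are illustrative choices.

```python
import math
import random

# Part (a): verify H(X) = 2 bits for p_X(x) = (1/2)^x numerically.
# The terms decay geometrically, so truncating at x = 60 is more than enough.
entropy = -sum((0.5 ** x) * math.log2(0.5 ** x) for x in range(1, 60))
print(f"H(X) ~ {entropy:.6f} bits")  # approximately 2.0

# Part (b): simulate the sequential strategy "Is X = 1?", "Is X = 2?", ...
# The number of questions asked equals X itself, so the average question
# count should approach E[X] = 2.
random.seed(0)

def flips_until_head():
    """Sample X: flip a fair coin until the first head occurs."""
    n = 1
    while random.random() < 0.5:  # tails with probability 1/2
        n += 1
    return n

trials = 100_000
avg_questions = sum(flips_until_head() for _ in range(trials)) / trials
print(f"E[Y] ~ {avg_questions:.3f}")  # close to 2
```

The agreement between the truncated entropy sum and the simulated average question count illustrates the conclusion of part (b): for this dyadic distribution the optimal question strategy meets the entropy bound exactly.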