# hw1sol - EE 376B/Stat 376B Handout #7: Information Theory

EE 376B/Stat 376B, Handout #7
Information Theory, Thursday, April 20, 2005
Prof. T. Cover

**Solutions to Homework Set #1**

**1. Monotonicity of entropy per element.** For a stationary stochastic process $X_1, X_2, \ldots, X_n$, show that

$$\frac{H(X_1, X_2, \ldots, X_n)}{n} \ge H(X_n \mid X_{n-1}, \ldots, X_1).$$

*Solution:* By stationarity we have, for all $1 \le i \le n$,

$$H(X_n \mid X_1^{n-1}) \le H(X_n \mid X_{n-i+1}^{n-1}) = H(X_i \mid X_1^{i-1}),$$

which implies that

$$H(X_n \mid X_1^{n-1}) = \frac{\sum_{i=1}^{n} H(X_n \mid X_1^{n-1})}{n} \le \frac{\sum_{i=1}^{n} H(X_i \mid X_1^{i-1})}{n} = \frac{H(X_1, X_2, \ldots, X_n)}{n}.$$

**2. Entropy rates of Markov chains.**

(a) Find the entropy rate of the two-state Markov chain with transition matrix

$$P = \begin{pmatrix} 1 - p_{01} & p_{01} \\ p_{10} & 1 - p_{10} \end{pmatrix}.$$

(b) What values of $p_{01}, p_{10}$ maximize the entropy rate?

(c) Find the entropy rate of the two-state Markov chain with transition matrix

$$P = \begin{pmatrix} 1 - p & p \\ 1 & 0 \end{pmatrix}.$$

(d) Find the maximum value of the entropy rate of the Markov chain of part (c). We expect that the maximizing value of $p$ should be less than $1/2$, since the 0 state permits more information to be generated than the 1 state.

*Solution:*

(a) The stationary distribution is easily calculated (see EIT pp. 62–63):

$$\mu_0 = \frac{p_{10}}{p_{01} + p_{10}}, \qquad \mu_1 = \frac{p_{01}}{p_{01} + p_{10}}.$$

Therefore the entropy rate is

$$H(X_2 \mid X_1) = \mu_0 H(p_{01}) + \mu_1 H(p_{10}) = \frac{p_{10} H(p_{01}) + p_{01} H(p_{10})}{p_{01} + p_{10}}.$$

(b) The entropy rate is at most 1 bit because the process has only two states. This rate can be achieved if (and only if) $p_{01} = p_{10} = 1/2$, in which case the process is actually i.i.d. with $\Pr(X_i = 0) = \Pr(X_i = 1) = 1/2$.

(c) As a special case of the general two-state Markov chain, with $\mu_0 = 1/(p+1)$, $\mu_1 = p/(p+1)$, and $H(1) = 0$, the entropy rate is

$$H(X_2 \mid X_1) = \mu_0 H(p) + \mu_1 H(1) = \frac{H(p)}{p + 1}.$$

(d) By straightforward calculus, the maximum value of the entropy rate of part (c) occurs at $p = (3 - \sqrt{5})/2 \approx 0.382$. Using the symmetry $H(p) = H(1-p)$, the maximum value is

$$\frac{H(p)}{p+1} = \frac{H\!\left(\frac{\sqrt{5}-1}{2}\right)}{\frac{3-\sqrt{5}}{2} + 1} \approx 0.694 \text{ bits}.$$
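As a quick numerical sanity check on part (d), the rate $H(p)/(p+1)$ can be maximized by grid search. This is a sketch, not part of the original solution; the function names (`binary_entropy`, `entropy_rate`) are mine.

```python
from math import log2, sqrt

def binary_entropy(p):
    """Binary entropy function H(p) in bits, with H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def entropy_rate(p):
    """Entropy rate H(p)/(p+1) of the chain in part (c)."""
    return binary_entropy(p) / (p + 1)

# Grid search over p in (0, 1) in steps of 1e-4.
best_p = max((i / 10000 for i in range(1, 10000)), key=entropy_rate)

# Analytic maximizer from part (d).
analytic_p = (3 - sqrt(5)) / 2

print(best_p, entropy_rate(analytic_p))
```

The grid maximizer agrees with $(3-\sqrt{5})/2 \approx 0.382$ and the maximum rate comes out near 0.694 bits, matching the calculus answer.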
Note that $(\sqrt{5} - 1)/2 \approx 0.618$ is (the reciprocal of) the Golden Ratio.

**3. Second law of thermodynamics.** Let $X_1, X_2, X_3, \ldots$ be a stationary first-order Markov chain. We know that

$$H(X_n \mid X_1) \ge H(X_{n-1} \mid X_1) \quad \text{for } n = 2, 3, \ldots.$$

Thus, conditional uncertainty about the future grows with time. This is true although the unconditional uncertainty $H(X_n)$ remains constant. However, show by example that $H(X_n \mid X_1 = x_1)$ does not necessarily grow with $n$ for every $x_1$.

*Solution:* Note that

$$\begin{aligned}
H(X_n \mid X_1) &\ge H(X_n \mid X_1, X_2) && \text{(conditioning reduces entropy)} \\
&= H(X_n \mid X_2) && \text{(by Markovity)} \\
&= H(X_{n-1} \mid X_1) && \text{(by stationarity).}
\end{aligned}$$

Alternatively, applying the data processing inequality to the Markov chain $X_1 \to X_{n-1} \to X_n$ gives $I(X_1; X_{n-1}) \ge I(X_1; X_n)$; since $H(X_{n-1}) = H(X_n)$ by stationarity, this again yields $H(X_n \mid X_1) \ge H(X_{n-1} \mid X_1)$. …
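The inequality $H(X_n \mid X_1) \ge H(X_{n-1} \mid X_1)$ can also be checked numerically for a concrete chain. The sketch below uses illustrative transition probabilities $p_{01} = 0.3$, $p_{10} = 0.6$ of my choosing (not from the handout), computing $H(X_n \mid X_1) = \sum_x \mu(x)\, H(P^{\,n-1} \text{ row } x)$.

```python
from math import log2

def H(dist):
    """Entropy in bits of a probability vector."""
    return -sum(p * log2(p) for p in dist if p > 0)

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Illustrative two-state chain (parameters are my choice, not the handout's).
p01, p10 = 0.3, 0.6
P = [[1 - p01, p01], [p10, 1 - p10]]
mu = [p10 / (p01 + p10), p01 / (p01 + p10)]  # stationary distribution

def cond_entropy(n):
    """H(X_n | X_1) for the stationary chain: average row entropy of P^(n-1)."""
    Pk = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n - 1):
        Pk = matmul(Pk, P)
    return sum(mu[i] * H(Pk[i]) for i in range(2))

vals = [cond_entropy(n) for n in range(2, 12)]
print(vals)
```

The sequence is non-decreasing in $n$ and converges to $H(\mu)$, the unconditional entropy, as the conditioning on $X_1$ washes out.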