ECE 178: HW #7 Solutions
Anindya Sarkar & Emre...

Q1.

a) H(X) = 4 \cdot \tfrac{1}{32}\log_2(32) + 2 \cdot \tfrac{1}{16}\log_2(16) + 8 \cdot \tfrac{1}{64}\log_2(64) + \tfrac{1}{8}\log_2(8) + \tfrac{1}{2}\log_2(2)

   H(X) = 22/8 = 2.75 bits

b) The entropy-maximizing distribution is the uniform distribution: p_i = 1/16, i = 1, \ldots, 16.

c) Proof.

   H(X) = \sum_i p_i \log_2\!\left(\frac{1}{p_i}\right) \;\underset{\text{Jensen}}{\leq}\; \log_2\!\left(\sum_i p_i \cdot \frac{1}{p_i}\right) = \log_2(|\mathcal{X}|),

   where the inequality is Jensen's inequality applied to the concave function \log_2. Equality holds only if p_i = 1/|\mathcal{X}| for all i, which completes the proof.
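As a sanity check on part (a), here is a short Python sketch (not part of the original solution) that computes H(X) from the pmf implied by the terms above; note that the 16 symbols also connect to part (b):

import math

# pmf implied by part (a): four symbols at 1/32, two at 1/16,
# eight at 1/64, one at 1/8, one at 1/2 (16 symbols total)
pmf = [1/32]*4 + [1/16]*2 + [1/64]*8 + [1/8] + [1/2]
assert abs(sum(pmf) - 1.0) < 1e-12   # valid distribution

# Shannon entropy: H(X) = sum_i p_i * log2(1/p_i)
H = sum(p * math.log2(1/p) for p in pmf)
print(H)                     # 2.75 bits, i.e. 22/8

# Part (b): the uniform distribution maximizes entropy at log2(16) = 4 bits
print(math.log2(len(pmf)))   # 4.0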
Q2. The Huffman code can be constructed by repeatedly merging the two least probable messages:

Step 0 (source):
   Message   Probability
   a1        2/15
   a2        2/15
   a3        3/15
   a4        3/15
   a5        5/15

Step 1 (merge a1, a2 into a6):
   Message          Probability
   a3               3/15
   a4               3/15
   a5               5/15
   a6 = {a1, a2}    4/15

Step 2 (merge a3, a4 into a7):
   Message          Probability
   a7 = {a3, a4}    6/15
   a5               5/15
   a6 = {a1, a2}    4/15

Step 3 (merge a5, a6 into a8):
   Message          Probability
   a7 = {a3, a4}    6/15
   a8 = {a5, a6}    9/15

Assigning one bit at each merge and reading the bits back from the root gives the codewords:

   Message   Probability   Codeword
   a1        2/15          100
   a2        2/15          101
   a3        3/15          00
   a4        3/15          01
   a5        5/15          11
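For cross-checking, a minimal heap-based Huffman construction in Python (a sketch, not from the original solution; ties are broken by an insertion counter, so other, equally optimal codes are possible with different tie-breaking):

import heapq
from fractions import Fraction

# Source alphabet from Q2, probabilities as exact fractions
probs = {"a1": Fraction(2, 15), "a2": Fraction(2, 15),
         "a3": Fraction(3, 15), "a4": Fraction(3, 15),
         "a5": Fraction(5, 15)}

# Heap entries: (probability, tie-break id, {symbol: partial codeword})
heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
next_id = len(heap)
while len(heap) > 1:
    p0, _, c0 = heapq.heappop(heap)   # two least probable nodes
    p1, _, c1 = heapq.heappop(heap)
    merged = {s: "0" + w for s, w in c0.items()}   # prepend one bit per merge
    merged.update({s: "1" + w for s, w in c1.items()})
    heapq.heappush(heap, (p0 + p1, next_id, merged))
    next_id += 1

code = heap[0][2]
print(code)   # with this tie-breaking: a3->00, a4->01, a5->11, a1->100, a2->101
avg = sum(probs[s] * len(w) for s, w in code.items())
print(avg)    # average length 34/15 ~ 2.27 bits/symbol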
Q3. The message is encoded to the interval (0.931878, 0.931905).

The cumulative distribution of the source is: a1 -> [0, 2/15), a2 -> [2/15, 4/15), a3 -> [4/15, 7/15), a4 -> [7/15, 10/15), a5 -> [10/15, 1).

Each row below lists the six boundary points obtained by subdividing the current interval in proportion to the source probabilities; the label is the symbol whose sub-interval the next row refines:

   start:     0.000000  0.133333  0.266667  0.466667  0.666667  1.000000
   after a5:  0.666667  0.711111  0.755556  0.822222  0.888889  1.000000
   after a5:  0.888889  0.903704  0.918519  0.940741  0.962963  1.000000
   after a3:  0.918519  0.921481  0.924444  0.928889  0.933333  0.940741
   after a4:  0.928889  0.929481  0.930074  0.930963  0.931852  0.933333
   after a5:  0.931852  0.932049  0.932247  0.932543  0.932840  0.933333
   after a1:  0.931852  0.931878  0.931905  0.931944  0.931984  0.932049

The last symbol, a2, selects the sub-interval [0.931878, 0.931905), so any number in it encodes the complete message a5 a5 a3 a4 a5 a1 a2.
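The refinement can be reproduced with a few lines of Python (a sketch; the message a5 a5 a3 a4 a5 a1 a2 is inferred from the table above, and exact fractions avoid the rounding in the printed decimals):

from fractions import Fraction as F

# Cumulative model from Q3
cum = {"a1": (F(0), F(2, 15)), "a2": (F(2, 15), F(4, 15)),
       "a3": (F(4, 15), F(7, 15)), "a4": (F(7, 15), F(10, 15)),
       "a5": (F(10, 15), F(1))}

message = ["a5", "a5", "a3", "a4", "a5", "a1", "a2"]   # inferred from the table

low, high = F(0), F(1)
for sym in message:
    lo, hi = cum[sym]
    width = high - low
    low, high = low + width * lo, low + width * hi     # zoom into sym's slot
    print(sym, float(low), float(high))

# Final interval ~ (0.931878, 0.931905): any number inside encodes the message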
Q4.

Figure 1: A lossy open-loop predictive coding model: (a) encoder and (b) decoder. [figure not reproduced]

The encoder forms the prediction error

   e_n = f_n - \hat{f}_n,  where in this HW problem \hat{f}_n = f_{n-1}.

On the decoder side, the bit sequence (of the prediction error) is decoded to obtain the error terms; let \dot{e}_n denote the quantized version of e_n, as in Fig. 1. The reconstructed output \dot{f}_n is obtained by adding the quantized error to the predicted value \hat{f}_n:

   \dot{f}_n = \hat{f}_n + \dot{e}_n.
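A small numeric sketch of this open-loop scheme (assumptions not in the original solution: a uniform quantizer with step q, and the sample signal below):

import numpy as np

def encode_open_loop(f, q):
    # Predictor f_hat[n] = f[n-1]; e[0] = 0 since f[0] is sent as side info
    e = np.diff(f, prepend=f[0])
    return q * np.round(e / q)      # assumed uniform quantizer with step q

def decode_open_loop(e_dot, f0):
    # f_dot[n] = f_hat[n] + e_dot[n], where the decoder can only predict
    # from its own previous reconstruction: f_hat[n] = f_dot[n-1]
    f_dot, prev = [], f0
    for ed in e_dot:
        prev = prev + ed
        f_dot.append(prev)
    return np.array(f_dot)

f = np.array([10.0, 12.0, 15.0, 15.0, 14.0, 11.0])   # assumed sample signal
f_rec = decode_open_loop(encode_open_loop(f, q=2.0), f0=f[0])
print(f - f_rec)   # reconstruction error; it can grow with n

Because the encoder predicts from the original samples while the decoder predicts from reconstructed ones, quantization error accumulates over time; this drift is the usual motivation for the closed-loop (DPCM) variant.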