6.401/6.450 Introduction to Digital Communication
MIT, Fall 2001
September 12, 2001
Handout #7
Solutions to problem set #1

Problem 1.1

What follows is one way of thinking about the problem. It is definitely not the only way; the important point in this question is for you to realize that we abstract (often complex) physical phenomena using simplified models, and that the choice of model is governed by our objective.

Speech encoder/decoder pairs try to preserve not only the recognizability of words but also a host of speaker-dependent information such as pitch and intonation. If we do not care about any speaker-specific characteristics of speech, we essentially have on our hands the problem of coding the English text that the speaker is producing. Hence, to estimate the rate (in bits per second) that a good source encoder would need, we must estimate two quantities. The first is the rate, in English letters per second, that speakers achieve. The second is the average number of bits needed per English letter. A rough estimate of the former is 15-20 English letters per second. A simple code for 26 letters and a space would need log2(27), or about 4.75, bits per English letter. By employing more sophisticated models of the dependencies in the English language, researchers estimate that one could probably do with as few as 1.34 bits per letter. Hence we could envision source coders that achieve roughly 20 bits per second (assuming 15 letters per second and 1.34 bits per letter), which is considerably lower than what the best speech encoders today achieve!
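The back-of-the-envelope arithmetic above can be reproduced in a few lines. This is just a sketch of the estimate, with the 15 letters/second and 1.34 bits/letter figures taken directly from the discussion:

```python
import math

# Fixed-length code for 26 letters plus a space: log2(27) bits per letter.
bits_per_letter_simple = math.log2(27)          # about 4.75

# Estimated bits per letter when statistical dependencies in English
# are exploited (figure quoted in the solution above).
bits_per_letter_model = 1.34

# Lower end of the estimated speaking rate in letters per second.
letters_per_second = 15

rate_bits_per_second = letters_per_second * bits_per_letter_model

print(f"simple code: {bits_per_letter_simple:.2f} bits/letter")
print(f"estimated source-coded speech rate: {rate_bits_per_second:.1f} bits/s")
```

Running this confirms the roughly 20 bits/second figure, versus the several kilobits per second typical of waveform-based speech coders.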
Problem 1.2

(a) Invoking the definition of expectation, we have

E[V + W] = \sum_{v \in V} \sum_{w \in W} (v + w) \, p_{VW}(v, w)
         = \sum_{v \in V} \sum_{w \in W} v \, p_{VW}(v, w) + \sum_{v \in V} \sum_{w \in W} w \, p_{VW}(v, w)
         = \sum_{v \in V} v \sum_{w \in W} p_{VW}(v, w) + \sum_{w \in W} w \sum_{v \in V} p_{VW}(v, w)
         = \sum_{v \in V} v \, p_V(v) + \sum_{w \in W} w \, p_W(w)
         = E[V] + E[W].

(b) Once again working from first principles, and using independence (p_{VW}(v, w) = p_V(v) p_W(w)),

E[V \cdot W] = \sum_{v \in V} \sum_{w \in W} (v \cdot w) \, p_{VW}(v, w)
             = \sum_{v \in V} \sum_{w \in W} (v \cdot w) \, p_V(v) \, p_W(w)
             = \left( \sum_{v \in V} v \, p_V(v) \right) \left( \sum_{w \in W} w \, p_W(w) \right)
             = E[V] \cdot E[W].

(c) To illustrate a case where E[V \cdot W] \neq E[V] \cdot E[W], consider the joint pmf

p_{VW}(0, 1) = p_{VW}(1, 0) = 1/2.

Clearly, V and W are not independent. Also, E[V \cdot W] = 0, whereas E[V] = E[W] = 1/2 and hence E[V] \cdot E[W] = 1/4.

The second case is more interesting. For this, consider the pmf

p_{VW}(-1, 0) = p_{VW}(0, 1) = p_{VW}(1, 0) = 1/3

(it is highly recommended that you draw and visualize this pmf). Again, V and W are not independent. Clearly, E[V \cdot W] = 0, since whenever V \neq 0 we have W = 0. Also, E[V] = 0 (what is E[W]?). Hence, E[V \cdot W] = E[V] \cdot E[W] even though V and W are dependent.
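The two counterexamples in (c) are small enough to verify numerically. The helper function and pmf dictionaries below are illustrative, not part of the original solution; the pmf values themselves are taken from the text above:

```python
from fractions import Fraction

def expectations(pmf):
    """Given a joint pmf as {(v, w): prob}, return (E[V*W], E[V], E[W])."""
    e_vw = sum(v * w * p for (v, w), p in pmf.items())
    e_v = sum(v * p for (v, w), p in pmf.items())
    e_w = sum(w * p for (v, w), p in pmf.items())
    return e_vw, e_v, e_w

# First pmf: V, W dependent, and E[VW] != E[V]E[W] (0 vs. 1/4).
pmf1 = {(0, 1): Fraction(1, 2), (1, 0): Fraction(1, 2)}
print(expectations(pmf1))

# Second pmf: V, W dependent, yet E[VW] == E[V]E[W] == 0,
# because E[V] = 0 and V*W = 0 everywhere the pmf puts mass.
third = Fraction(1, 3)
pmf2 = {(-1, 0): third, (0, 1): third, (1, 0): third}
print(expectations(pmf2))
```

Using `Fraction` keeps the arithmetic exact, which makes the equality E[V \cdot W] = E[V] \cdot E[W] in the second case an honest `==` rather than a floating-point coincidence.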