# final_soln - University of Toronto ECE 1502S Department of...


University of Toronto
Department of Electrical & Computer Engineering
ECE 1502S Information Theory
F. R. Kschischang
Solutions for Final Examination of April 17, 2006

1. Short Snappers

(a) False. For example, a code that assigns the same codeword (say 0) to two different source symbols satisfies the Kraft inequality (2^{-1} + 2^{-1} = 1), but is singular, and hence not uniquely decodable.

(b) True. We have I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X), so if H(X) = H(Y), then H(X|Y) = H(Y|X). (We can also see this from the equality H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y).)

(c) True. Let X be the channel input and let Y be the channel output. We have I(X;Y) = H(Y) - H(r), where r is any row of the channel transition matrix. To maximize I(X;Y) we must maximize H(Y), i.e., make Y uniform. (This is achievable, e.g., by making X uniform.)

(d) False. It is sufficient to make X uniform, but not always necessary. For example, the capacity of the noisy typewriter channel discussed in the text is achieved with a nonuniform input distribution.

(e) True. The maximum entropy distribution with a fixed variance σ² is Gaussian, and the corresponding differential entropy h = (1/2) log(2πeσ²) is finite. (Here I have assumed a nonzero variance; as σ² → 0, h → -∞.)

(f) True. Let C₁ be a rate-distortion code of length n₁ that achieves (R₁, D₁), and let C₂ be a rate-distortion code of length n₂ that achieves (R₂, D₂). Let n be the least common multiple of n₁ and n₂. Form a rate-distortion code C of length 2n by using C₁ a total of a = n/n₁ times, followed by using C₂ a total of b = n/n₂ times. (In other words, C₁ and C₂ are time-shared, with each one active half of the time.) The rate of C is

R = (1/2n)(a n₁ R₁ + b n₂ R₂) = (1/2n)(n R₁ + n R₂) = (R₁ + R₂)/2,

and the expected distortion is

D = (1/2n)(a n₁ D₁ + b n₂ D₂) = (1/2n)(n D₁ + n D₂) = (D₁ + D₂)/2.

(g) True.
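The entropy identities used in (b) can be checked numerically. The following is a minimal sketch (the joint distribution `pxy` and the helper name `H` are my own illustration, not part of the exam): it verifies that the two expansions of I(X;Y) agree, via the chain rule H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y).

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability vector (zero entries ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A hypothetical joint distribution p(x, y), chosen only for illustration.
pxy = np.array([[0.3, 0.1],
                [0.2, 0.4]])

px = pxy.sum(axis=1)          # marginal of X
py = pxy.sum(axis=0)          # marginal of Y

H_XY = H(pxy.ravel())
H_X, H_Y = H(px), H(py)
H_X_given_Y = H_XY - H_Y      # chain rule: H(X,Y) = H(Y) + H(X|Y)
H_Y_given_X = H_XY - H_X      # chain rule: H(X,Y) = H(X) + H(Y|X)

I1 = H_X - H_X_given_Y        # I(X;Y) = H(X) - H(X|Y)
I2 = H_Y - H_Y_given_X        # I(X;Y) = H(Y) - H(Y|X)
assert abs(I1 - I2) < 1e-12   # both expansions equal H(X) + H(Y) - H(X,Y)
```

Since both expansions reduce to H(X) + H(Y) - H(X,Y), they must agree for any joint distribution, which is exactly what makes the symmetry argument in (b) work.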
We know that for any ε > 0 and any R < C there exists (for some sufficiently large n) a binary (2^{nR}, n) code C with maximal error probability at most ε. Let C′ be the code obtained from C by appending an extra bit to each codeword of C, where the extra bit is chosen so that each codeword of C′ has an even number of ones. Then C′ has length n + 1 and 2^{nR} codewords, which corresponds to a rate of R′ = nR/(n+1) = R(1 - 1/(n+1)). The rate R′ can be made to approach C arbitrarily closely by choosing R and n large enough. Furthermore, the maximal error probability for C′ is at most ε (since the decoder can always ignore the extra bit, and simply decode using the decoding rule for C). Thus, constraining all codewords to have an even number of ones does not restrict the set of rates that are achievable on the binary symmetric channel.
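The parity-extension construction in (g) can be sketched in a few lines. This is an illustrative sketch; the function names `parity_extend` and `rate_after_parity` are my own, not from the exam.

```python
def parity_extend(codeword):
    """Append a parity bit so the extended codeword has an even number of ones.

    Illustrative helper: models appending one bit to each length-n codeword.
    """
    return codeword + [sum(codeword) % 2]

def rate_after_parity(R, n):
    """Rate of the extended code: 2^(nR) codewords of length n + 1."""
    return n * R / (n + 1)

# Every extended codeword has even weight; a decoder may simply ignore
# the last bit and apply the decoding rule of the original code.
extended = parity_extend([1, 0, 1])
assert sum(extended) % 2 == 0

# The rate loss 1/(n+1) vanishes as n grows, so R' approaches R < C.
rates = [rate_after_parity(0.5, n) for n in (10, 100, 10000)]
```

The point of the construction is visible in `rates`: for fixed R, the extended rate R(1 - 1/(n+1)) converges to R as n grows, so the even-weight constraint costs nothing asymptotically.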


