Chapter 10
Information Theory and Coding

10.1 Problem Solutions

Problem 10.1
The information in the message is
$$I(x) = -\log_2(0.8) = 0.3219 \ \text{bits}$$
$$I(x) = -\log_e(0.8) = 0.2231 \ \text{nats}$$
$$I(x) = -\log_{10}(0.8) = 0.0969 \ \text{Hartleys}$$

Problem 10.2
(a) $I(x) = \log_2(52) = 5.7004$ bits
(b) $I(x) = \log_2[(52)(52)] = 11.4009$ bits
(c) $I(x) = \log_2[(52)(51)] = 11.3729$ bits

Problem 10.3
The entropy is
$$H(X) = -0.3\log_2(0.3) - 0.25\log_2(0.25) - 0.25\log_2(0.25) - 0.15\log_2(0.15) - 0.05\log_2(0.05) = 2.1477 \ \text{bits}$$
The maximum entropy is
$$H_{\max} = \log_2 5 = 2.3219 \ \text{bits}$$
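As a quick numerical check, a minimal MATLAB sketch such as the following reproduces the results of Problems 10.1 and 10.3 (the five-symbol probability vector is the one assumed above):

% Problem 10.1: information in a message of probability 0.8
p = 0.8;
I_bits = -log2(p)            % 0.3219 bits
I_nats = -log(p)             % 0.2231 nats
I_hart = -log10(p)           % 0.0969 Hartleys

% Problem 10.3: source entropy and maximum entropy for 5 symbols
px   = [0.3 0.25 0.25 0.15 0.05];
H    = -sum(px .* log2(px))  % 2.1477 bits
Hmax = log2(length(px))      % 2.3219 bits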
Problem 10.4
$$I(x) = 2.471 \ \text{bits}$$

Problem 10.5
(a) The channel diagram is illustrated in Figure 10.1.

Figure 10.1: Channel diagram for Problem 10.5 (the transition probabilities are those of the matrix $[P(Y|X)]$ used in parts (b)-(d)).

(b) The output probabilities are
$$p(y_1) = \tfrac{1}{3}(0.5 + 0.2 + 0.1) = \tfrac{8}{30} = 0.2667$$
$$p(y_2) = \tfrac{1}{3}(0.3 + 0.6 + 0.2) = \tfrac{11}{30} = 0.3667$$
$$p(y_3) = \tfrac{1}{3}(0.2 + 0.2 + 0.7) = \tfrac{11}{30} = 0.3667$$
(c) Since $[P(Y)] = [P(X)][P(Y|X)]$, we can write $[P(X)] = [P(Y)][P(Y|X)]^{-1}$, which is
$$[P(X)] = [\,0.333 \ \ 0.333 \ \ 0.333\,]
\begin{bmatrix} 2.5333 & -1.1333 & -0.4000 \\ -0.8000 & 2.2000 & -0.4000 \\ -0.1333 & -0.4667 & 1.6000 \end{bmatrix}$$
This gives
$$[P(X)] = [\,0.5333 \ \ 0.2000 \ \ 0.2667\,]$$
(d) The joint probability matrix is
$$[P(X;Y)] =
\begin{bmatrix} 0.5333 & 0 & 0 \\ 0 & 0.2000 & 0 \\ 0 & 0 & 0.2667 \end{bmatrix}
\begin{bmatrix} 0.5 & 0.3 & 0.2 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.2 & 0.7 \end{bmatrix}$$
which gives
$$[P(X;Y)] =
\begin{bmatrix} 0.2667 & 0.1600 & 0.1067 \\ 0.0400 & 0.1200 & 0.0400 \\ 0.0267 & 0.0533 & 0.1867 \end{bmatrix}$$
Note that the column sums give the output probabilities $[P(Y)]$, and the row sums give the input probabilities $[P(X)]$.
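The matrix computations in Problem 10.5 are easy to reproduce numerically. A minimal MATLAB sketch, assuming the transition matrix and the equally likely distributions used above:

% Problem 10.5: transition matrix, row i holds p(y_j | x_i)
PYX = [0.5 0.3 0.2; 0.2 0.6 0.2; 0.1 0.2 0.7];
% (b) output probabilities for equally likely inputs
PY = [1/3 1/3 1/3] * PYX    % [0.2667 0.3667 0.3667]
% (c) input probabilities that yield equally likely outputs
PX = [1/3 1/3 1/3] / PYX    % [0.5333 0.2000 0.2667]
% (d) joint probability matrix and its marginals
PXY = diag(PX) * PYX        % joint matrix [P(X;Y)]
sum(PXY, 1)                 % column sums give [P(Y)]
sum(PXY, 2)                 % row sums give [P(X)]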
Problem 10.6
For a noiseless channel, the transition probability matrix is a square matrix with 1s on the main diagonal and zeros elsewhere. The joint probability matrix is therefore a square matrix with the input probabilities on the main diagonal and zeros elsewhere. In other words,
$$[P(X;Y)] =
\begin{bmatrix} p(x_1) & 0 & \cdots & 0 \\ 0 & p(x_2) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & p(x_n) \end{bmatrix}$$

Problem 10.7
This problem may be solved by raising the channel matrix
$$A = \begin{bmatrix} 0.999 & 0.001 \\ 0.001 & 0.999 \end{bmatrix}$$
which corresponds to an error probability of 0.001, to increasing powers $n$ and seeing where the error probability reaches the critical value of 0.08. Consider the MATLAB program

a = [0.999 0.001; 0.001 0.999];  % channel matrix
n = 1;                           % initial value of the power
a1 = a;                          % a1 holds a^n
while a1(1,2) < 0.08
    n = n + 1;
    a1 = a^n;
end
n-1                              % display result

Executing the program yields $n - 1 = 87$. Thus we compute (MATLAB code is given)

>> a^87
ans =
    0.9201    0.0799
    0.0799    0.9201
>> a^88
ans =
    0.9192    0.0808
    0.0808    0.9192

Thus 87 cascaded channels meet the specification for $P_E < 0.08$. However, cascading 88 channels yields $P_E > 0.08$, and the specification is not satisfied. (Note: This may appear to be an impractical result, since such a large number of cascaded channels is specified. However, the channel $A$ may represent a lengthy cable with a large number of repeaters. There are a number of other practical examples.)

Problem 10.8
The first step is to write $H(Y|X) - H(Y)$. This gives
$$H(Y|X) - H(Y) = -\sum_i \sum_j p(x_i, y_j)\log_2 p(y_j|x_i) + \sum_j p(y_j)\log_2 p(y_j)$$
which is
$$H(Y|X) - H(Y) = -\sum_i \sum_j p(x_i, y_j)\left[\log_2 p(y_j|x_i) - \log_2 p(y_j)\right]$$
or
$$H(Y|X) - H(Y) = \frac{1}{\ln 2}\sum_i \sum_j p(x_i, y_j)\ln \frac{p(y_j)}{p(y_j|x_i)}$$
or, since $p(y_j|x_i) = p(x_i, y_j)/p(x_i)$,
$$H(Y|X) - H(Y) = \frac{1}{\ln 2}\sum_i \sum_j p(x_i, y_j)\ln \frac{p(x_i)\,p(y_j)}{p(x_i, y_j)}$$
Since $\ln x \le x - 1$, the preceding expression can be written
$$H(Y|X) - H(Y) \le \frac{1}{\ln 2}\sum_i \sum_j p(x_i, y_j)\left[\frac{p(x_i)\,p(y_j)}{p(x_i, y_j)} - 1\right]$$
or
$$H(Y|X) - H(Y) \le \frac{1}{\ln 2}\left[\sum_i \sum_j p(x_i)\,p(y_j) - \sum_i \sum_j p(x_i, y_j)\right] = \frac{1}{\ln 2}\,(1 - 1) = 0$$
Thus $H(Y|X) \le H(Y)$.
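As a numerical illustration of this bound, a minimal MATLAB sketch evaluates $H(Y)$ and $H(Y|X)$ for the channel of Problem 10.5 with equally likely inputs; this particular channel and input distribution are chosen only as an example and are not part of Problem 10.8:

% Verify H(Y|X) <= H(Y) for the Problem 10.5 channel, equally likely inputs
PYX  = [0.5 0.3 0.2; 0.2 0.6 0.2; 0.1 0.2 0.7];  % transition matrix p(y_j|x_i)
PX   = [1/3 1/3 1/3];                            % input probabilities
PXY  = diag(PX) * PYX;                           % joint probabilities p(x_i, y_j)
PY   = sum(PXY, 1);                              % output probabilities p(y_j)
HY   = -sum(PY .* log2(PY))                      % H(Y)  (about 1.570 bits)
HYgX = -sum(sum(PXY .* log2(PYX)))               % H(Y|X) (about 1.338 bits)
% HYgX does not exceed HY, as required by H(Y|X) <= H(Y)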