Chapter 15

15.1-1  P_1 = 0.4, P_2 = 0.3, P_3 = 0.2 and P_4 = 0.1.

H(m) = -(P_1 log P_1 + P_2 log P_2 + P_3 log P_3 + P_4 log P_4) = 1.846 bits (source entropy)

The source emits 10^4 symbols/s. Hence the rate of information generation is 1.846 × 10^4 bits/s.

15.1-2  Information/element = log_2 10 = 3.32 bits.
Information/picture frame = 3.32 × 300,000 = 9.96 × 10^5 bits.

15.1-3  Information/word = log_2 10,000 = 13.3 bits.
Information content of 1000 words = 13.3 × 1000 = 13,300 bits.
The information per picture frame was found in Problem 15.1-2 to be 9.96 × 10^5 bits. Clearly, a picture cannot in general be described completely by 1000 words. Hence "a picture is worth a thousand words" very much understates reality.

15.1-4  (a) Both options are equally likely. Hence I = log_2 2 = 1 bit.
(b) P(2 lanterns) = 0.1, so I(2 lanterns) = log_2 10 = 3.322 bits.

15.1-5  (a) All 27 symbols equiprobable, P(x_i) = 1/27:
H(x) = 27 × (1/27) log_2 27 = 4.755 bits/symbol
(b) Using the probability table, we compute
H(x) = -Σ_{i=1}^{27} P(x_i) log P(x_i) = 4.127 bits/symbol
(c) Using Zipf's law, P(r) = 0.1/r, we compute the entropy per word:
H_w(x) = -Σ_{r=1}^{8727} P(r) log P(r) = -Σ_{r=1}^{8727} (0.1/r) log(0.1/r) = 9.1353 bits/word
Entropy per letter = 11.82/5.5 = 2.14 bits/symbol.
The entropy obtained from Zipf's law is much closer to the real value for English than H_1(x) or H_2(x).

15.2-1  H(m) = -Σ_{i=1}^{7} P_i log P_i = 63/32 bits

Huffman coding (combining the two least probable messages at each stage) gives

Message   Probability   Code
m1        1/2           0
m2        1/4           10
m3        1/8           110
m4        1/16          1110
m5        1/32          11110
m6        1/64          111110
m7        1/64          111111

L = Σ P_i L_i = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/16)(4) + (1/32)(5) + (1/64)(6) + (1/64)(6) = 63/32 binary digits

Efficiency η = H(m)/L × 100 = 100%
Redundancy γ = 100 - η = 0%

15.2-2  H(m) = -Σ_{i=1}^{7} P_i log P_i = 2.289 bits = 2.289/log_2 3 = 1.4444 3-ary units

Ternary Huffman coding (combining the three least probable messages at each stage) gives

Message   Probability   Code
m1        1/3           0
m2        1/3           1
m3        1/9           20
m4        1/9           21
m5        1/27          220
m6        1/27          221
m7        1/27          222

L = Σ P_i L_i = (1/3)(1) + (1/3)(1) + (1/9)(2) + (1/9)(2) + (1/27)(3) + (1/27)(3) + (1/27)(3) = 1.4444 3-ary digits

Efficiency η = H(m)/L × 100 = 100%
Redundancy γ = 100 - η = 0%

15.2-3  H(m) = -Σ_{i=1}^{4} P_i log P_i = 1.69 bits

Binary code:
Message   Probability   Code
m1        0.5           0
m2        0.3           10
m3        0.1           110
m4        0.1           111

L = Σ P_i L_i = 0.5(1) + 0.3(2) + 0.1(3) + 0.1(3) = 1.7 binary digits
Efficiency η = H(m)/L × 100 = (1.69/1.7) × 100 = 99.2%
Redundancy γ = 100 - η = 0.8%

For ternary coding, we need one dummy message of probability 0. Thus,

Message   Probability   Code
m1        0.5           0
m2        0.3           1
m3        0.1           20
m4        0.1           21
m5        0             22

L = 0.5(1) + 0.3(1) + 0.1(2) + 0.1(2) = 1.2 3-ary digits
H(m) = 1.69 bits = 1.69/log_2 3 = 1.0663 3-ary units
Efficiency η = H(m)/L × 100 = (1.0663/1.2) × 100 = 88.86%
Redundancy γ = 100 - η = 11.14%

15.2-4  Ternary code for the source of Problem 15.2-1:

Message   Probability   Code
m1        1/2           0
m2        1/4           1
m3        1/8           20
m4        1/16          21
m5        1/32          220
m6        1/64          221
m7        1/64          222

L = Σ P_i L_i = 21/16 = 1.3125 3-ary digits
From Problem 15.2-1, H(m) = 63/32 bits = (63/32)/log_2 3 = 1.242 3-ary units
Efficiency η = H(m)/L × 100 = (1.242/1.3125) × 100 = 94.63%
Redundancy γ = 100 - η = 5.37%

15.2-5  Binary code for the source of Problem 15.2-2:

Message   Probability   Code
m1        1/3           1
m2        1/3           00
m3        1/9           011
m4        1/9           0100
m5        1/27          01010
m6        1/27          010110
m7        1/27          010111

L = Σ P_i L_i = 65/27 = 2.4074 binary digits
H(m) = 2.289 bits (see Problem 15.2-2)
Efficiency η = H(m)/L × 100 = (2.289/2.4074) × 100 = 95.08%
Redundancy γ = 100 - η = 4.92%
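The entropy, average-length, and efficiency figures above are easy to check numerically. The following sketch (not part of the original solutions; the helper functions are my own) builds a binary Huffman code for the sources of Problems 15.2-1 and 15.2-3 and reproduces H(m), L, and η:

```python
# A small numerical check of the entropy / Huffman-efficiency results above
# (not part of the original solution; binary Huffman coding only).
import heapq
from math import log2

def entropy(probs):
    """Source entropy in bits/symbol: H = -sum p*log2(p)."""
    return -sum(p * log2(p) for p in probs if p > 0)

def huffman_lengths(probs):
    """Return binary Huffman codeword lengths for the given probabilities."""
    # Each heap entry: (probability, unique id, list of symbol indices in the subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # each merge adds one digit to the merged symbols
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, uid, s1 + s2))
        uid += 1
    return lengths

# Source of Problem 15.2-1
probs = [1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/64]
H = entropy(probs)                                             # 63/32 = 1.96875 bits
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))  # 63/32 binary digits
print(f"H = {H:.4f} bits, L = {L:.4f}, efficiency = {100*H/L:.2f}%")

# Source of Problem 15.2-3
probs = [0.5, 0.3, 0.1, 0.1]
H = entropy(probs)                                             # about 1.685 bits
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))  # 1.7 binary digits
print(f"H = {H:.4f} bits, L = {L:.4f}, efficiency = {100*H/L:.2f}%")
```

Different tie-breaking during the merges can yield different codewords, but every Huffman code achieves the same minimum average length, so L and η are unaffected.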
15.2-6  (a) Three equiprobable messages:
H(m) = 3 × (-(1/3) log(1/3)) = 1.585 bits

(b) Ternary code:
Message   Probability   Code
m1        1/3           0
m2        1/3           1
m3        1/3           2

L = (1/3)(1) + (1/3)(1) + (1/3)(1) = 1 3-ary digit
H(m) = 1.585 bits = 1.585/log_2 3 = 1 3-ary unit
Efficiency η = H(m)/L × 100 = 100%
Redundancy γ = 100 - η = 0%

(c) Binary code:
Message   Probability   Code
m1        1/3           1
m2        1/3           00
m3        1/3           01

L = (1/3)(1) + (1/3)(2) + (1/3)(2) = 5/3 = 1.667 binary digits
Efficiency η = H(m)/L × 100 = (1.585/1.667) × 100 = 95.08%
Redundancy γ = 100 - η = 4.92%

(d) Second extension, binary code. The nine pair-messages m_i m_j each have probability 1/9; binary Huffman coding assigns 3-digit codewords to seven of them and 4-digit codewords to the remaining two. Per original message,
L = (1/2)[(7)(1/9)(3) + (2)(1/9)(4)] = 29/18 = 1.611 binary digits
H(m) = 1.585 bits
Efficiency η = H(m)/L × 100 = (1.585/1.611) × 100 = 98.39%
Redundancy γ = 100 - η = 1.61%

15.4-1  (a) The channel is shown in Fig. S15.4-1, with P(x_1) = 1/3, P(x_2) = 2/3 and transition probabilities P(y_1|x_1) = 2/3, P(y_2|x_1) = 1/3, P(y_1|x_2) = 1/10, P(y_2|x_2) = 9/10.

P(y_1) = P(y_1|x_1)P(x_1) + P(y_1|x_2)P(x_2) = (2/3)(1/3) + (1/10)(2/3) = 13/45
P(y_2) = 1 - P(y_1) = 32/45

(b) H(x) = P(x_1) log[1/P(x_1)] + P(x_2) log[1/P(x_2)] = (1/3) log_2 3 + (2/3) log_2 (3/2) = 0.918 bits

To compute H(x|y), we find
P(x_1|y_1) = P(y_1|x_1)P(x_1)/P(y_1) = 10/13     P(x_1|y_2) = P(y_2|x_1)P(x_1)/P(y_2) = 5/32
P(x_2|y_1) = P(y_1|x_2)P(x_2)/P(y_1) = 3/13      P(x_2|y_2) = P(y_2|x_2)P(x_2)/P(y_2) = 27/32

H(x|y_1) = P(x_1|y_1) log[1/P(x_1|y_1)] + P(x_2|y_1) log[1/P(x_2|y_1)]
         = (10/13) log_2 (13/10) + (3/13) log_2 (13/3) = 0.779
H(x|y_2) = P(x_1|y_2) log[1/P(x_1|y_2)] + P(x_2|y_2) log[1/P(x_2|y_2)] = 0.624

and
H(x|y) = P(y_1) H(x|y_1) + P(y_2) H(x|y_2) = (13/45)(0.779) + (32/45)(0.624) = 0.6687

Thus,
I(x; y) = H(x) - H(x|y) = 0.918 - 0.6687 = 0.2493 bits/binit

H(y) = Σ_i P(y_i) log[1/P(y_i)] = (13/45) log(45/13) + (32/45) log(45/32) = 0.8673 bits/symbol

Also, H(y|x) = H(y) - I(x; y) = 0.8673 - 0.2493 = 0.618 bits/symbol

15.4-2  The channel matrix P(y_j|x_i) is (Fig. S15.4-2)

             y_1    y_2    y_3
      x_1  [  1      0      0  ]
      x_2  [  0      p     1-p ]
      x_3  [  0     1-p     p  ]

with P(x_1) = P and P(x_2) = P(x_3) = Q, where 2Q = 1 - P. Also P(y_1) = P, P(y_2) = P(y_3) = Q.

Using P(x_i|y_j) = P(y_j|x_i)P(x_i)/P(y_j), the matrix P(x_i|y_j) is the same as the matrix P(y_j|x_i) above.

H(x) = Σ_i P(x_i) log[1/P(x_i)] = -[P log P + 2Q log Q] = -[P log P + (1-P) log((1-P)/2)] = Ω(P) + (1-P)

where Ω(z) = -z log z - (1-z) log(1-z) is the binary entropy function.

H(x|y) = Σ_j P(y_j) Σ_i P(x_i|y_j) log[1/P(x_i|y_j)]
       = P[1 · log 1] + Q[p log(1/p) + (1-p) log(1/(1-p))] + Q[(1-p) log(1/(1-p)) + p log(1/p)]
       = 0 + 2Q Ω(p) = (1-P) Ω(p)

I(x; y) = H(x) - H(x|y) = Ω(P) + (1-P) - (1-P) Ω(p) = Ω(P) + (1-P)[1 - Ω(p)]

Letting β = 2^{Ω(p)}, i.e., Ω(p) = log β,
I(x; y) = Ω(P) + (1-P)(1 - log β)

Setting (d/dP) I(x; y) = 0:
log[(1-P)/P] - (1 - log β) = 0, i.e., log[P/(1-P)] = -1 + log β

Note: -1 + log_2 β = -log_2 2 + log_2 β = log_2 (β/2). Therefore
P/(1-P) = β/2,   P = β/(β+2),   1-P = 2/(β+2)

and the channel capacity is
C = max_P I(x; y) = (β/(β+2)) log((β+2)/β) + (2/(β+2)) log((β+2)/2) + (2/(β+2)) log(2/β)
  = log((β+2)/β)
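The closed-form capacity just derived can be cross-checked by brute force. The sketch below (my own check, not part of the original solution; the crossover value p = 0.2 is an arbitrary choice) sweeps the input probability P and compares the maximum of I(x; y) with log_2((β + 2)/β):

```python
# Numerical check of the capacity expression C = log2((beta+2)/beta) derived in
# Problem 15.4-2 (a sketch, not part of the original solution).
import numpy as np

def omega(z):
    """Binary entropy function in bits."""
    z = np.clip(z, 1e-12, 1 - 1e-12)
    return -z*np.log2(z) - (1 - z)*np.log2(1 - z)

p = 0.2                       # assumed crossover parameter of the lower two inputs
beta = 2**omega(p)

# I(x;y) = Omega(P) + (1-P)*(1 - log2(beta)) as a function of the input probability P
P = np.linspace(0.001, 0.999, 9999)
I = omega(P) + (1 - P)*(1 - np.log2(beta))

C_numeric = I.max()
C_closed  = np.log2((beta + 2)/beta)
P_opt     = beta/(beta + 2)

print(f"numerical max I = {C_numeric:.5f} bits at P = {P[I.argmax()]:.4f}")
print(f"closed form   C = {C_closed:.5f} bits at P = {P_opt:.4f}")
```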
15.4-3  Consider the cascade of two BSCs shown in Fig. S15.4-3, with error probabilities P_1 and P_2.

P_{y|x}(0|0) = (1-P_1)(1-P_2) + P_1 P_2 = 1 - P_1 - P_2 + 2 P_1 P_2
P_{y|x}(0|1) = (1-P_1) P_2 + P_1 (1-P_2) = P_1 + P_2 - 2 P_1 P_2

Hence, the channel matrix of the cascade is

[ 1-P_1-P_2+2P_1P_2    P_1+P_2-2P_1P_2   ]     [ 1-P_1   P_1 ] [ 1-P_2   P_2 ]
[ P_1+P_2-2P_1P_2      1-P_1-P_2+2P_1P_2 ]  =  [ P_1   1-P_1 ] [ P_2   1-P_2 ]  =  M_1 M_2

This result proves everything in this problem.

(a) With P_1 = P_2 = P_e, the above result shows that the channel matrix of two cascaded channels is indeed M^2.

(b) We have already shown that the channel matrix of two cascaded channels is M_1 M_2.

(c) Consider a cascade of k identical channels, broken up as the first k-1 channels cascaded with the k-th channel. If M_{k-1} is the channel matrix of the first k-1 channels in cascade, then from the result derived in part (b) the channel matrix of the k cascaded channels is M_k = M_{k-1} M. This is valid for any k. We have already proved M_2 = M^2 for k = 2; by induction it is clear that M_k = M^k.

We can verify these results from the development in Example 10.7. From the results there, for a cascade of 3 channels,
1 - P_E = (1-P_e)^3 + 3P_e^2(1-P_e) = 1 - 3P_e + 3P_e^2 - P_e^3 + 3P_e^2 - 3P_e^3 = 1 - 3P_e + 6P_e^2 - 4P_e^3
and
P_E = 3P_e - 6P_e^2 + 4P_e^3

Clearly,
M^3 = [ 1-P_e   P_e ]^3  =  [ 1-(3P_e-6P_e^2+4P_e^3)   3P_e-6P_e^2+4P_e^3     ]
      [ P_e   1-P_e ]       [ 3P_e-6P_e^2+4P_e^3       1-(3P_e-6P_e^2+4P_e^3) ]
which confirms the results in Example 10.7 for k = 3.

(d) From Equation 15.25,
C_k = 1 - [P_E log(1/P_E) + (1-P_E) log(1/(1-P_E))]
where P_E is the error probability of the cascade of k identical channels. We have shown in Example 10.7 that
P_E = 1 - [(1-P_e)^k + Σ_{j=2,4,6,...} (k!/(j!(k-j)!)) P_e^j (1-P_e)^{k-j}]
If kP_e << 1, P_E ≈ kP_e and
C_k ≈ 1 - [kP_e log(1/(kP_e)) + (1-kP_e) log(1/(1-kP_e))]

15.4-4  The channel (Fig. S15.4-4) is a binary erasure channel with equiprobable inputs, P(x_1) = P(x_2) = 1/2, and channel matrix

             y_1     y_2     y_3
      x_1  [ 1-p      0       p ]
      x_2  [  0      1-p      p ]

so that P(y_1) = P(y_2) = (1-p)/2 and P(y_3) = 1 - P(y_1) - P(y_2) = p.

Also,
P(x_1|y_1) = P(y_1|x_1)P(x_1)/P(y_1) = 1        P(x_2|y_1) = P(y_1|x_2)P(x_2)/P(y_1) = 0
P(x_1|y_2) = P(y_2|x_1)P(x_1)/P(y_2) = 0        P(x_2|y_2) = P(y_2|x_2)P(x_2)/P(y_2) = 1
P(x_1|y_3) = P(y_3|x_1)P(x_1)/P(y_3) = 1/2      P(x_2|y_3) = P(y_3|x_2)P(x_2)/P(y_3) = 1/2

and the joint probabilities are
P(x_1, y_1) = P(x_1)P(y_1|x_1) = (1-p)/2        P(x_2, y_1) = P(x_2)P(y_1|x_2) = 0
P(x_1, y_2) = P(x_1)P(y_2|x_1) = 0              P(x_2, y_2) = P(x_2)P(y_2|x_2) = (1-p)/2
P(x_1, y_3) = P(x_1)P(y_3|x_1) = p/2            P(x_2, y_3) = P(x_2)P(y_3|x_2) = p/2

Therefore,
H(x) = -P(x_1) log P(x_1) - P(x_2) log P(x_2) = 1 bit
H(x|y) = Σ_{i,j} P(x_i, y_j) log[1/P(x_i|y_j)] = 0 + 0 + (p/2)(1) + 0 + 0 + (p/2)(1) = p
I(x; y) = H(x) - H(x|y) = 1 - p bits/symbol

15.4-5  For the cascade x → y → z (Fig. S15.4-5),

H(x|z) - H(x|y) = Σ_{x,y,z} P(x_i, y_j, z_k) log[1/P(x_i|z_k)] - Σ_{x,y,z} P(x_i, y_j, z_k) log[1/P(x_i|y_j)]
                = Σ_{x,y,z} P(x_i, y_j, z_k) log[P(x_i|y_j)/P(x_i|z_k)]

Note that for a cascaded channel the output z depends only on y. Therefore,
P(z_k|y_j, x_i) = P(z_k|y_j)
and, by Bayes' rule, P(x_i|y_j, z_k) = P(x_i|y_j). Hence

H(x|z) - H(x|y) = Σ_{x,y,z} P(x_i, y_j, z_k) log[P(x_i|y_j, z_k)/P(x_i|z_k)]
                = Σ_{y,z} P(y_j, z_k) [ Σ_x P(x_i|y_j, z_k) log( P(x_i|y_j, z_k)/P(x_i|z_k) ) ]

The summation over x of the term inside the bracket is nonnegative (it is a relative entropy). Hence, it follows that
H(x|z) - H(x|y) ≥ 0
From the relationship between I(x; y) and I(x; z), it immediately follows that
I(x; y) ≥ I(x; z)
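The two cascade results above lend themselves to a quick numerical illustration. The sketch below (not part of the original solutions; the crossover probabilities and input distribution are arbitrary choices) forms the cascade matrix M_1 M_2 for two BSCs and confirms that mutual information cannot increase through the second channel, i.e., I(x; y) ≥ I(x; z):

```python
# Numerical illustration of Problems 15.4-3 and 15.4-5: for x -> y -> z formed by
# cascading two BSCs, I(x;y) >= I(x;z). A sketch, not part of the original solution.
import numpy as np

def mutual_information(px, M):
    """I(x;y) in bits for input distribution px and channel matrix M = P(y|x)."""
    joint = px[:, None] * M                  # P(x, y)
    py = joint.sum(axis=0)                   # P(y)
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = joint * np.log2(joint / (px[:, None] * py[None, :]))
    return np.nansum(terms)                  # 0*log(0) terms are dropped

def bsc(Pe):
    """Channel matrix of a BSC with crossover probability Pe."""
    return np.array([[1 - Pe, Pe], [Pe, 1 - Pe]])

px = np.array([0.5, 0.5])                    # equiprobable inputs (arbitrary choice)
M1, M2 = bsc(0.05), bsc(0.1)
M_cascade = M1 @ M2                          # channel matrix of the cascade = M1 M2

I_xy = mutual_information(px, M1)            # first channel alone
I_xz = mutual_information(px, M_cascade)     # cascade: information can only decrease
print(f"I(x;y) = {I_xy:.4f} bits >= I(x;z) = {I_xz:.4f} bits")
```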
15.5-1  We wish to maximize H(x) = ∫_{-M}^{M} p log(1/p) dx = -∫_{-M}^{M} p log p dx subject to ∫_{-M}^{M} p dx = 1.

Thus,
F(x, p) = -p log p and ∂F/∂p = -(1 + log p)
φ_1(x, p) = p and ∂φ_1/∂p = 1

Substituting these quantities in Equation 15.37, we have
-(1 + log p) + α_1 = 0  ⇒  p = e^{α_1 - 1}
and
1 = ∫_{-M}^{M} e^{α_1 - 1} dx = 2M e^{α_1 - 1}
Hence, e^{α_1 - 1} = 1/(2M) and p(x) = 1/(2M) for |x| ≤ M.

Also,
H(x) = ∫_{-M}^{M} p(x) log[1/p(x)] dx = ∫_{-M}^{M} (1/(2M)) log(2M) dx = log 2M

15.5-2  We wish to maximize H(x) = -∫_0^∞ p log p dx subject to ∫_0^∞ x p dx = A and ∫_0^∞ p dx = 1.

F(x, p) = -p log p,  ∂F/∂p = -(1 + log p)
φ_1(x, p) = p x,     ∂φ_1/∂p = x
φ_2(x, p) = p,       ∂φ_2/∂p = 1

Substituting these quantities in Equation 15.37, we have
-(1 + log p) + α_1 x + α_2 = 0   or   p = e^{α_1 x + α_2 - 1} = (e^{α_2 - 1}) e^{α_1 x}

Substituting this relationship in the earlier constraints (with α_1 < 0), we get
1 = ∫_0^∞ p dx = e^{α_2 - 1} ∫_0^∞ e^{α_1 x} dx = -e^{α_2 - 1}/α_1
Hence, e^{α_2 - 1} = -α_1 and p(x) = -α_1 e^{α_1 x}, and
A = ∫_0^∞ x p dx = ∫_0^∞ -α_1 x e^{α_1 x} dx = -1/α_1
Hence, α_1 = -1/A and e^{α_2 - 1} = -α_1 = 1/A, so

p(x) = (1/A) e^{-x/A},  x ≥ 0
     = 0,               x < 0

To obtain H(x):
H(x) = -∫_0^∞ p(x) log p(x) dx = -∫_0^∞ p [ -log A - (x/A) log e ] dx
     = log A ∫_0^∞ p(x) dx + (log e / A) ∫_0^∞ x p(x) dx = log A + log e = log(eA)

15.5-3  Information per picture frame = 9.96 × 10^5 bits (see Problem 15.1-2). For 30 picture frames per second, we need a channel with capacity C given by
C = 30 × 9.96 × 10^5 = 2.988 × 10^7 bits/s
But for white Gaussian noise,
C = B log_2(1 + S/N)
We are given S/N = 50 dB, i.e., a ratio of 100,000. Hence,
B = C / log_2(1 + 100,000) ≈ 1.8 MHz

15.5-4  Consider a narrow band Δf → 0, so that we may consider both the signal and noise power densities to be constant (bandlimited white) over the interval Δf. The signal and noise powers S and N over this band are
S = 2 S_s(ω) Δf   and   N = 2 S_n(ω) Δf
The maximum channel capacity over this band Δf is
ΔC = Δf log[(S + N)/N] = Δf log[(S_s(ω) + S_n(ω))/S_n(ω)]
The capacity of the channel over the entire band (f_1, f_2) is therefore
C = ∫_{f_1}^{f_2} log[(S_s(ω) + S_n(ω))/S_n(ω)] df
We now wish to maximize C, where the constraint is that the signal power is constant:
2 ∫_{f_1}^{f_2} S_s(ω) df = S (a constant)
Using Equation 15.37, we obtain
(∂/∂S_s) log[(S_s + S_n)/S_n] + α (∂/∂S_s) S_s = 0,   so that   S_s + S_n = -1/α (a constant)
Thus,
S_s(ω) + S_n(ω) = constant
This shows that to attain the maximum channel capacity, the signal power density plus the noise power density must be a constant (white). Under this condition,
C = ∫_{f_1}^{f_2} log[S_s(ω) + S_n(ω)] df - ∫_{f_1}^{f_2} log[S_n(ω)] df
  = B log[S_s(ω) + S_n(ω)] - ∫_{f_1}^{f_2} log[S_n(ω)] df,   where B = f_2 - f_1

15.5-5  In this problem, we use the results of Problem 15.5-4. Under the best possible conditions,
C = B log[S_s(ω) + S_n(ω)] - ∫_{f_1}^{f_2} log[S_n(ω)] df,   with S_s(ω) + S_n(ω) = constant
We shall now show that the integral ∫_{f_1}^{f_2} log[S_n(ω)] df is maximum when S_n(ω) = constant, if the noise is constrained to have a given mean square value (power):
2 ∫_{f_1}^{f_2} S_n(ω) df = N (a constant)
Using Equation 15.37, we have
(∂/∂S_n) log[S_n] + α (∂/∂S_n) S_n = 0   or   1/S_n + α = 0
and
S_n(ω) = -1/α (a constant)
Thus, we have shown that for a noise with a given power, the integral ∫_{f_1}^{f_2} log[S_n(ω)] df is maximized when the noise is white. Since this integral enters C with a negative sign, white noise minimizes the capacity. This shows that white Gaussian noise is the worst possible kind of noise.
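As a numerical companion to the last three problems (a sketch, not part of the original solutions): the first part recomputes the bandwidth required in Problem 15.5-3, and the second illustrates the conclusion of Problem 15.5-5 that, for a fixed noise power over the band, a flat (white) noise PSD maximizes ∫ log S_n df and therefore minimizes the capacity. The particular colored PSD used for comparison is an arbitrary choice.

```python
# Numerical companion to Problems 15.5-3 and 15.5-5 (a sketch, not part of the
# original solutions).
import numpy as np

# Problem 15.5-3: C = B log2(1 + S/N) with S/N = 50 dB
C = 30 * 9.96e5                        # bits/s needed for 30 frames/s
snr = 10**(50/10)                      # 50 dB = 100,000
B = C / np.log2(1 + snr)
print(f"required bandwidth B = {B/1e6:.2f} MHz")   # about 1.8 MHz

# Problem 15.5-5: among noise PSDs with the same total power over the band,
# the flat one maximizes the average of log(Sn)  (Jensen's inequality).
f = np.linspace(0.0, 1.0, 1000)                    # normalized band
Sn_white   = np.ones_like(f)                       # flat PSD, mean power = 1
Sn_colored = 0.5 + np.sin(2*np.pi*f)**2            # non-flat PSD with the same mean power
print(np.mean(Sn_white), np.mean(Sn_colored))      # both about 1: equal noise power
print(np.mean(np.log(Sn_white)), np.mean(np.log(Sn_colored)))  # flat gives the larger value
```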