CHAPTER 2

2.1 (a) Let

$w_k = x + jy, \qquad p(-k) = a + jb$

We may then write

$f = w_k p^*(-k) = (x + jy)(a - jb) = (ax + by) + j(ay - bx)$

Let $f = u + jv$, with

$u = ax + by, \qquad v = ay - bx$

Hence,

$\frac{\partial u}{\partial x} = a, \quad \frac{\partial u}{\partial y} = b, \quad \frac{\partial v}{\partial y} = a, \quad \frac{\partial v}{\partial x} = -b$

From these results we immediately see that

$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$

In other words, the product term $w_k p^*(-k)$ satisfies the Cauchy-Riemann equations, and so this term is analytic.
(b) Let

$f = w_k^* p(-k) = (x - jy)(a + jb) = (ax + by) + j(bx - ay)$

Let $f = u + jv$, with

$u = ax + by, \qquad v = bx - ay$

Hence,

$\frac{\partial u}{\partial x} = a, \quad \frac{\partial u}{\partial y} = b, \quad \frac{\partial v}{\partial x} = b, \quad \frac{\partial v}{\partial y} = -a$

From these results we immediately see that

$\frac{\partial u}{\partial x} \neq \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} \neq -\frac{\partial u}{\partial y}$

In other words, the product term $w_k^* p(-k)$ does not satisfy the Cauchy-Riemann equations, and so this term is not analytic.
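As a quick cross-check of both parts of Problem 2.1, the sketch below tests the Cauchy-Riemann conditions symbolically with SymPy. It is an illustration only, not part of the original solution, and the helper name cauchy_riemann_holds is ad hoc.

```python
# Symbolic check of the Cauchy-Riemann conditions for Problem 2.1 (illustrative sketch).
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)
w = x + sp.I * y          # w_k = x + jy
p = a + sp.I * b          # p(-k) = a + jb

def cauchy_riemann_holds(f):
    """Return True if u_x = v_y and u_y = -v_x for f = u + jv."""
    u, v = sp.re(f), sp.im(f)
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0 and
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0)

f1 = sp.expand(w * sp.conjugate(p))   # w_k p*(-k)  -> analytic
f2 = sp.expand(sp.conjugate(w) * p)   # w_k* p(-k)  -> not analytic
print(cauchy_riemann_holds(f1))       # True
print(cauchy_riemann_holds(f2))       # False
```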
2.2 (a) From the Wiener-Hopf equation, we have

$\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p}$   (1)

We are given

$\mathbf{R} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}, \qquad \mathbf{p} = \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}$

Hence, the inverse matrix $\mathbf{R}^{-1}$ is

$\mathbf{R}^{-1} = \frac{1}{0.75}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}$

Using Eq. (1), we therefore get

$\mathbf{w}_o = \frac{1}{0.75}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 1.5 \\ 0 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}$

(b) The minimum mean-square error is

$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_o = \sigma_d^2 - \begin{bmatrix} 0.5 & 0.25 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0 \end{bmatrix} = \sigma_d^2 - 0.25$
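The numbers in parts (a) and (b) can be verified with a few lines of NumPy. The sketch below is illustrative only; it prints the data-dependent term $\mathbf{p}^H\mathbf{w}_o$, leaving $\sigma_d^2$ symbolic as in the solution.

```python
# Numerical cross-check of Problem 2.2 (a) and (b) -- illustrative sketch only.
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # given correlation matrix
p = np.array([0.5, 0.25])           # given cross-correlation vector

w_o = np.linalg.solve(R, p)         # Wiener solution w_o = R^{-1} p
print(np.round(w_o, 4))             # [0.5 0. ]

# J_min = sigma_d^2 - p^H w_o; only the data-dependent term is computed here.
print(np.round(p @ w_o, 4))         # 0.25, so J_min = sigma_d^2 - 0.25
```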
(c) The eigenvalues of matrix $\mathbf{R}$ are the roots of the characteristic equation

$(1 - \lambda)^2 - 0.5^2 = 0$

That is, the two roots are

$\lambda_1 = 0.5 \quad \text{and} \quad \lambda_2 = 1.5$

The associated eigenvectors are defined by

$\mathbf{R}\mathbf{q} = \lambda\mathbf{q}$

For $\lambda_1 = 0.5$, we have

$\begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}\begin{bmatrix} q_{11} \\ q_{12} \end{bmatrix} = 0.5\begin{bmatrix} q_{11} \\ q_{12} \end{bmatrix}$

Expanding,

$q_{11} + 0.5 q_{12} = 0.5 q_{11}$
$0.5 q_{11} + q_{12} = 0.5 q_{12}$

Therefore, $q_{11} = -q_{12}$. Normalizing the eigenvector $\mathbf{q}_1$ to unit length, we therefore have

$\mathbf{q}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$

Similarly, for the eigenvalue $\lambda_2 = 1.5$, we may show that

$\mathbf{q}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$

Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:

$\mathbf{w}_o = \left(\sum_{i=1}^{2}\frac{1}{\lambda_i}\mathbf{q}_i\mathbf{q}_i^H\right)\mathbf{p} = \left(\frac{1}{\lambda_1}\mathbf{q}_1\mathbf{q}_1^H + \frac{1}{\lambda_2}\mathbf{q}_2\mathbf{q}_2^H\right)\mathbf{p} = \left(\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} + \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\right)\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}$

which agrees with the result obtained in part (a).
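The eigen-based route can be checked numerically as well. The sketch below (illustrative, not from the original text) obtains the eigen-pairs with np.linalg.eigh and rebuilds $\mathbf{w}_o$ from the spectral expansion.

```python
# Spectral-expansion check for Problem 2.2 (c) -- illustrative sketch only.
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])

eigvals, Q = np.linalg.eigh(R)          # ascending eigenvalues, orthonormal eigenvectors
print(eigvals)                          # [0.5 1.5]

# w_o = sum_i (1/lambda_i) q_i q_i^T p, which must equal R^{-1} p.
w_o = sum((1.0 / lam) * np.outer(q, q) @ p
          for lam, q in zip(eigvals, Q.T))
print(np.round(w_o, 4))                 # [0.5 0. ]
```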
2.3 (a) From the Wiener-Hopf equation we have

$\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p}$   (1)

We are given

$\mathbf{R} = \begin{bmatrix} 1 & 0.5 & 0.25 \\ 0.5 & 1 & 0.5 \\ 0.25 & 0.5 & 1 \end{bmatrix}$

and

$\mathbf{p} = \begin{bmatrix} 0.5 & 0.25 & 0.125 \end{bmatrix}^T$

Hence, the use of these values in Eq. (1) yields
$\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p} = \begin{bmatrix} 1 & 0.5 & 0.25 \\ 0.5 & 1 & 0.5 \\ 0.25 & 0.5 & 1 \end{bmatrix}^{-1}\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix} = \begin{bmatrix} 1.33 & -0.67 & 0 \\ -0.67 & 1.67 & -0.67 \\ 0 & -0.67 & 1.33 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix} = \begin{bmatrix} 0.5 & 0 & 0 \end{bmatrix}^T$

(b) The minimum mean-square error is

$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_o = \sigma_d^2 - \begin{bmatrix} 0.5 & 0.25 & 0.125 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0 \\ 0 \end{bmatrix} = \sigma_d^2 - 0.25$
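As a cross-check of parts (a) and (b), the sketch below (illustrative only) reproduces the quoted inverse matrix and the resulting Wiener solution with NumPy.

```python
# Numerical cross-check of Problem 2.3 (a) and (b) -- illustrative sketch only.
import numpy as np

R = np.array([[1.0,  0.5,  0.25],
              [0.5,  1.0,  0.5 ],
              [0.25, 0.5,  1.0 ]])
p = np.array([0.5, 0.25, 0.125])

R_inv = np.linalg.inv(R)
print(np.round(R_inv, 2))       # [[ 1.33 -0.67  0.  ] [-0.67  1.67 -0.67] [ 0.   -0.67  1.33]]

w_o = R_inv @ p
print(np.round(w_o, 4))         # [0.5 0.  0. ]
print(np.round(p @ w_o, 4))     # 0.25, so J_min = sigma_d^2 - 0.25
```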
(c) The eigenvalues of matrix $\mathbf{R}$ are

$\lambda_1 = 0.4069, \quad \lambda_2 = 0.75, \quad \lambda_3 = 1.8431$

The corresponding eigenvectors constitute the orthogonal matrix

$\mathbf{Q} = \begin{bmatrix} 0.4544 & 0.7071 & 0.5418 \\ -0.7662 & 0 & 0.6426 \\ 0.4544 & -0.7071 & 0.5418 \end{bmatrix}$

Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:

$\mathbf{w}_o = \left(\sum_{i=1}^{3}\frac{1}{\lambda_i}\mathbf{q}_i\mathbf{q}_i^H\right)\mathbf{p}$

$= \left(\frac{1}{0.4069}\begin{bmatrix} 0.2065 & -0.3482 & 0.2065 \\ -0.3482 & 0.5871 & -0.3482 \\ 0.2065 & -0.3482 & 0.2065 \end{bmatrix} + \frac{1}{0.75}\begin{bmatrix} 0.5 & 0 & -0.5 \\ 0 & 0 & 0 \\ -0.5 & 0 & 0.5 \end{bmatrix} + \frac{1}{1.8431}\begin{bmatrix} 0.2935 & 0.3482 & 0.2935 \\ 0.3482 & 0.4129 & 0.3482 \\ 0.2935 & 0.3482 & 0.2935 \end{bmatrix}\right)\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix}$

$= \begin{bmatrix} 0.5 & 0 & 0 \end{bmatrix}^T$

which agrees with the result obtained in part (a).
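To confirm the eigenvector matrix quoted above (the signs follow the reconstruction here; any column may be negated without affecting the expansion), the short check below (illustrative only) verifies that $\mathbf{Q}$ is orthogonal and diagonalizes $\mathbf{R}$.

```python
# Check that the quoted eigenvector matrix Q diagonalizes R -- illustrative sketch only.
import numpy as np

R = np.array([[1.0,  0.5,  0.25],
              [0.5,  1.0,  0.5 ],
              [0.25, 0.5,  1.0 ]])
Q = np.array([[ 0.4544,  0.7071, 0.5418],
              [-0.7662,  0.0,    0.6426],
              [ 0.4544, -0.7071, 0.5418]])

print(np.round(Q.T @ Q, 3))         # ~ identity matrix, so Q is orthogonal
print(np.round(Q.T @ R @ Q, 3))     # ~ diag(0.407, 0.75, 1.843)
```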
2.4 By definition, the correlation matrix is

$\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^H(n)]$

where $\mathbf{u}(n)$ denotes the tap-input vector. Invoking the ergodicity theorem, we may estimate $\mathbf{R}$ by the time average

$\hat{\mathbf{R}}(N) = \frac{1}{N+1}\sum_{n=0}^{N}\mathbf{u}(n)\mathbf{u}^H(n)$

Likewise, we may compute the cross-correlation vector as the time average

$\hat{\mathbf{p}}(N) = \frac{1}{N+1}\sum_{n=0}^{N}\mathbf{u}(n)d^*(n)$

The tap-weight vector of the Wiener filter is thus defined by

$\hat{\mathbf{w}}_o(N) = \hat{\mathbf{R}}^{-1}(N)\hat{\mathbf{p}}(N)$

which is dependent on the length $(N+1)$ of the time series.
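A minimal sketch of this estimation procedure is given below, assuming a complex tap-input record u and desired response d of length N+1 and a filter with M taps; the function name, the synthetic data, and the exact averaging convention are assumptions for illustration, not part of the original solution.

```python
# Time-average (ergodic) estimates of R and p, and the resulting Wiener filter.
# Minimal illustrative sketch; u, d, M and the averaging convention are assumptions.
import numpy as np

def wiener_from_time_series(u, d, M):
    """Estimate R and p over an (N+1)-sample record and solve for the tap weights."""
    N = len(u) - 1
    R_hat = np.zeros((M, M), dtype=complex)
    p_hat = np.zeros(M, dtype=complex)
    for n in range(M - 1, N + 1):
        u_n = u[n - M + 1:n + 1][::-1]          # tap-input vector [u(n), ..., u(n-M+1)]
        R_hat += np.outer(u_n, u_n.conj())      # accumulate u(n) u^H(n)
        p_hat += u_n * np.conj(d[n])            # accumulate u(n) d*(n)
    R_hat /= (N + 1)                            # time-average estimate of R
    p_hat /= (N + 1)                            # time-average estimate of p
    return np.linalg.solve(R_hat, p_hat)        # w_o(N) = R^{-1}(N) p(N)

# Example with synthetic data:
rng = np.random.default_rng(0)
u = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
d = 0.5 * u                                     # trivial desired response for illustration
print(np.round(wiener_from_time_series(u, d, M=3), 3))   # approx. [0.5, 0, 0]
```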