CHAPTER 3

3.1 (a) Let $\mathbf{a}_M$ denote the tap-weight vector of the forward prediction-error filter. With a tap-input vector $\mathbf{u}_{M+1}(n)$, the forward prediction error at the filter output equals

$$f_M(n) = \mathbf{a}_M^H \mathbf{u}_{M+1}(n).$$

The mean-square value of $f_M(n)$ equals

$$E[|f_M(n)|^2] = E[\mathbf{a}_M^H \mathbf{u}_{M+1}(n)\,\mathbf{u}_{M+1}^H(n)\,\mathbf{a}_M] = \mathbf{a}_M^H E[\mathbf{u}_{M+1}(n)\,\mathbf{u}_{M+1}^H(n)]\,\mathbf{a}_M = \mathbf{a}_M^H \mathbf{R}_{M+1}\,\mathbf{a}_M,$$

where $\mathbf{R}_{M+1} = E[\mathbf{u}_{M+1}(n)\,\mathbf{u}_{M+1}^H(n)]$ is the correlation matrix of the tap-input vector.

(b) The leading element of the vector $\mathbf{a}_M$ equals 1. Hence, the constrained cost function to be minimized is

$$J(\mathbf{a}_M) = \mathbf{a}_M^H \mathbf{R}_{M+1}\,\mathbf{a}_M + \lambda\,\mathbf{a}_M^H \mathbf{1} + \lambda^*\,\mathbf{1}^T \mathbf{a}_M,$$

where $\lambda$ is the Lagrange multiplier and $\mathbf{1}$ is the first unit vector, defined by $\mathbf{1}^T = [1, 0, \ldots, 0]$. Differentiating $J(\mathbf{a}_M)$ with respect to $\mathbf{a}_M$ and setting the result equal to zero yields

$$2\mathbf{R}_{M+1}\,\mathbf{a}_M + 2\lambda\,\mathbf{1} = \mathbf{0}.$$

Solving for $\mathbf{a}_M$:

$$\mathbf{R}_{M+1}\,\mathbf{a}_M = -\lambda\,\mathbf{1}. \tag{1}$$

However, we may partition $\mathbf{R}_{M+1}$ as

$$\mathbf{R}_{M+1} = \begin{bmatrix} r(0) & \mathbf{r}^H \\ \mathbf{r} & \mathbf{R}_M \end{bmatrix}.$$

Hence, the first element of $\mathbf{R}_{M+1}\mathbf{a}_M$ is

$$[\,r(0),\ \mathbf{r}^H\,]\,\mathbf{a}_M = P_M = -\lambda,$$

where $P_M$ is the minimum prediction-error power. Accordingly, we may rewrite Eq. (1) as

$$\mathbf{R}_{M+1}\,\mathbf{a}_M = P_M\,\mathbf{1} = \begin{bmatrix} P_M \\ \mathbf{0} \end{bmatrix}.$$
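A quick numerical check, which is not part of the original solution: the sketch below assumes the real autocorrelation values that appear later in Problem 3.4(a), builds the $(M+1)\times(M+1)$ Toeplitz correlation matrix, forms the forward prediction-error filter $\mathbf{a}_M = [1, -\mathbf{w}^T]^T$ from the one-step forward predictor $\mathbf{w}$, and confirms that $\mathbf{R}_{M+1}\mathbf{a}_M$ has the augmented form $[P_M, 0, \ldots, 0]^T$ just derived.

```python
import numpy as np
from scipy.linalg import toeplitz

# Illustrative real autocorrelation sequence (the values given in Problem 3.4(a))
r = np.array([1.0, 0.8, 0.6, 0.4])            # r(0), r(1), r(2), r(3); here M = 3
R = toeplitz(r)                                # (M+1) x (M+1) correlation matrix

M = len(r) - 1
w = np.linalg.solve(toeplitz(r[:M]), r[1:])    # one-step forward predictor: R_M w = r
a = np.concatenate(([1.0], -w))                # forward prediction-error filter, a[0] = 1

lhs = R @ a
print(lhs)          # approximately [P_M, 0, 0, 0]
print(lhs[0])       # P_M, the minimum forward prediction-error power (about 0.35 here)
```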
3.2 (a) Let $\mathbf{a}_M^{B*}$ denote the tap-weight vector of the backward prediction-error filter. With a tap-input vector $\mathbf{u}_{M+1}(n)$, the backward prediction error equals

$$b_M(n) = \mathbf{a}_M^{BT} \mathbf{u}_{M+1}(n).$$

The mean-square value of $b_M(n)$ equals

$$E[|b_M(n)|^2] = E[\mathbf{a}_M^{BT} \mathbf{u}_{M+1}(n)\,\mathbf{u}_{M+1}^H(n)\,\mathbf{a}_M^{B*}] = \mathbf{a}_M^{BT} E[\mathbf{u}_{M+1}(n)\,\mathbf{u}_{M+1}^H(n)]\,\mathbf{a}_M^{B*} = \mathbf{a}_M^{BT} \mathbf{R}_{M+1}\,\mathbf{a}_M^{B*}.$$

(b) The last element of $\mathbf{a}_M^{B*}$ equals 1. Hence, the constrained objective function to be minimized is

$$J(\mathbf{a}_M) = \mathbf{a}_M^{BT} \mathbf{R}_{M+1}\,\mathbf{a}_M^{B*} + \lambda\,\mathbf{a}_M^{BT} \mathbf{1}^B + \lambda^*\,\mathbf{1}^{BT} \mathbf{a}_M^{B*},$$

where $\lambda$ is the Lagrange multiplier and $\mathbf{1}^B$ is the last unit vector, defined by $\mathbf{1}^{BT} = [0, 0, \ldots, 1]$.

Differentiating $J(\mathbf{a}_M)$ with respect to $\mathbf{a}_M^{B*}$ and setting the result equal to zero yields

$$2\mathbf{R}_{M+1}\,\mathbf{a}_M^{B*} + 2\lambda\,\mathbf{1}^B = \mathbf{0}.$$

Solving for $\mathbf{a}_M^{B*}$, we get

$$\mathbf{R}_{M+1}\,\mathbf{a}_M^{B*} = -\lambda\,\mathbf{1}^B. \tag{1}$$

However, we may express $\mathbf{R}_{M+1}$ in the partitioned form

$$\mathbf{R}_{M+1} = \begin{bmatrix} \mathbf{R}_M & \mathbf{r}^{B*} \\ \mathbf{r}^{BT} & r(0) \end{bmatrix}.$$

Therefore, the last element of $\mathbf{R}_{M+1}\mathbf{a}_M^{B*}$ is

$$[\,\mathbf{r}^{BT},\ r(0)\,]\,\mathbf{a}_M^{B*} = P_M = -\lambda,$$

where $P_M$ is the minimum backward prediction-error power. We may thus rewrite Eq. (1) as

$$\mathbf{R}_{M+1}\,\mathbf{a}_M^{B*} = P_M\,\mathbf{1}^B = \begin{bmatrix} \mathbf{0} \\ P_M \end{bmatrix}.$$
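A companion check, again not part of the original solution and reusing the same assumed autocorrelation values: for stationary data the backward prediction-error filter is the forward filter with its coefficients reversed and conjugated, so $\mathbf{R}_{M+1}\mathbf{a}_M^{B*}$ should come out as $[0, \ldots, 0, P_M]^T$ with the same minimum power $P_M$ as in Problem 3.1.

```python
import numpy as np
from scipy.linalg import toeplitz

r = np.array([1.0, 0.8, 0.6, 0.4])            # same illustrative autocorrelation as before
R = toeplitz(r)
M = len(r) - 1

w = np.linalg.solve(toeplitz(r[:M]), r[1:])   # forward predictor
a = np.concatenate(([1.0], -w))               # forward prediction-error filter
a_B = np.conj(a[::-1])                        # backward filter: reverse and conjugate

print(R @ a_B)    # approximately [0, 0, 0, P_M], the augmented equation of part (b)
```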
3.3 (a) Writing the Wiener-Hopf equation $\mathbf{R}\mathbf{g} = \mathbf{r}^{B*}$ in expanded form, we have
$$\begin{bmatrix} r(0) & r(1) & \cdots & r(M-1) \\ r^*(1) & r(0) & \cdots & r(M-2) \\ \vdots & \vdots & \ddots & \vdots \\ r^*(M-1) & r^*(M-2) & \cdots & r(0) \end{bmatrix} \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_M \end{bmatrix} = \begin{bmatrix} r^*(M) \\ r^*(M-1) \\ \vdots \\ r^*(1) \end{bmatrix}.$$

Equivalently, we may write

$$\sum_{k=1}^{M} g_k\, r(k - i) = r^*(M + 1 - i), \qquad i = 1, 2, \ldots, M.$$

Let $k = M - l + 1$, or $l = M - k + 1$. Then

$$\sum_{l=1}^{M} g_{M-l+1}\, r(M - l + 1 - i) = r^*(M + 1 - i), \qquad i = 1, 2, \ldots, M.$$

Next, put $M + 1 - i = j$, or $i = M + 1 - j$. Then

$$\sum_{l=1}^{M} g_{M-l+1}\, r(j - l) = r^*(j), \qquad j = 1, 2, \ldots, M.$$

Putting this relation into matrix form, we write

$$\begin{bmatrix} r(0) & r^*(1) & \cdots & r^*(M-1) \\ r(1) & r(0) & \cdots & r^*(M-2) \\ \vdots & \vdots & \ddots & \vdots \\ r(M-1) & r(M-2) & \cdots & r(0) \end{bmatrix} \begin{bmatrix} g_M \\ g_{M-1} \\ \vdots \\ g_1 \end{bmatrix} = \begin{bmatrix} r^*(1) \\ r^*(2) \\ \vdots \\ r^*(M) \end{bmatrix}.$$

This, in turn, may be put in the compact form

$$\mathbf{R}^T \mathbf{g}^B = \mathbf{r}^*.$$
(b) The product $\mathbf{r}^{BT} \mathbf{g}$ equals

$$\mathbf{r}^{BT} \mathbf{g} = [\,r(M),\ r(M-1),\ \ldots,\ r(1)\,] \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_M \end{bmatrix} = \sum_{k=1}^{M} g_k\, r(M + 1 - k). \tag{1}$$

The product $\mathbf{r}^T \mathbf{g}^B$ equals

$$\mathbf{r}^T \mathbf{g}^B = [\,r(1),\ r(2),\ \ldots,\ r(M)\,] \begin{bmatrix} g_M \\ g_{M-1} \\ \vdots \\ g_1 \end{bmatrix} = \sum_{k=1}^{M} g_{M+1-k}\, r(k).$$

Put $M + 1 - k = l$, or $k = M + 1 - l$. Hence,

$$\mathbf{r}^T \mathbf{g}^B = \sum_{l=1}^{M} g_l\, r(M + 1 - l). \tag{2}$$

From Eqs. (1) and (2):

$$\mathbf{r}^{BT} \mathbf{g} = \mathbf{r}^T \mathbf{g}^B.$$
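Both identities of this problem can be confirmed numerically. The sketch below is illustrative only (it is not part of the original solution, and it borrows the assumed autocorrelation values used earlier): it solves the Wiener-Hopf equation $\mathbf{R}\mathbf{g} = \mathbf{r}^{B*}$ for real data and then checks the compact form $\mathbf{R}^T\mathbf{g}^B = \mathbf{r}^*$ and the scalar identity $\mathbf{r}^{BT}\mathbf{g} = \mathbf{r}^T\mathbf{g}^B$.

```python
import numpy as np
from scipy.linalg import toeplitz

# Illustrative real autocorrelation values (borrowed from Problem 3.4(a))
r_full = np.array([1.0, 0.8, 0.6, 0.4])      # r(0), r(1), r(2), r(3)
M = 3
R = toeplitz(r_full[:M])                     # M x M correlation matrix
r_vec = r_full[1:M + 1]                      # r   = [r(1), ..., r(M)]^T
r_B = r_vec[::-1]                            # r^B = [r(M), ..., r(1)]^T

g = np.linalg.solve(R, np.conj(r_B))         # Wiener-Hopf equation: R g = r^{B*}
g_B = g[::-1]                                # g^B = g with its elements reversed

print(np.allclose(R.T @ g_B, np.conj(r_vec)))    # compact form R^T g^B = r*
print(np.isclose(r_B @ g, r_vec @ g_B))          # r^{BT} g = r^T g^B
```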
3.4 Starting with the formula

$$r(m) = -\kappa_m^* P_{m-1} - \sum_{k=1}^{m-1} a_{m-1,k}^*\, r(m - k)$$

and solving for $\kappa_m$, we get

$$\kappa_m = -\frac{r^*(m)}{P_{m-1}} - \frac{1}{P_{m-1}} \sum_{k=1}^{m-1} a_{m-1,k}\, r^*(m - k). \tag{1}$$

We also note

$$a_{m,k} = a_{m-1,k} + \kappa_m\, a_{m-1,m-k}^*, \qquad k = 0, 1, \ldots, m, \tag{2}$$

$$P_m = P_{m-1}\,(1 - |\kappa_m|^2). \tag{3}$$

(a) We are given

r(0) = 1, r(1) = 0.8, r(2) = 0.6, r(3) = 0.4.

We also note that $P_0 = r(0) = 1$. Hence, the use of Eq. (1) for $m = 1$ yields

$$\kappa_1 = -\frac{r^*(1)}{P_0} = -0.8.$$

The use of Eq. (3) for $m = 1$ yields

$$P_1 = P_0\,(1 - |\kappa_1|^2) = 1 - 0.64 = 0.36.$$

We next reapply Eq. (1) for $m = 2$:

$$\kappa_2 = -\frac{1}{P_1}\,[\,r^*(2) + a_{1,1}\, r^*(1)\,],$$

where we have noted that $\kappa_1 = a_{1,1}$.
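Equations (1)-(3) together form the Levinson-Durbin recursion, so the numerical part of this problem is easy to mechanize. The Python sketch below is an illustration rather than part of the original solution; it runs the recursion on the given autocorrelation values and reproduces $\kappa_1 = -0.8$ and $P_1 = 0.36$, then continues to $m = 2$ and $m = 3$.

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion following Eqs. (1)-(3):
    kappa_m from r(m) and the order-(m-1) coefficients,
    a_{m,k} = a_{m-1,k} + kappa_m * conj(a_{m-1,m-k}),
    P_m = P_{m-1} * (1 - |kappa_m|^2)."""
    M = len(r) - 1
    a = np.array([1.0 + 0j])                  # a_{0,0} = 1
    P = r[0]                                  # P_0 = r(0)
    kappas = []
    for m in range(1, M + 1):
        # Eq. (1): kappa_m = -[r*(m) + sum_k a_{m-1,k} r*(m-k)] / P_{m-1}
        kappa = -(np.conj(r[m]) + np.dot(a[1:], np.conj(r[m-1:0:-1]))) / P
        # Eq. (2): order update of the prediction-error filter coefficients
        a_new = np.concatenate((a, [0.0]))
        a = a_new + kappa * np.conj(a_new[::-1])
        # Eq. (3): prediction-error power update
        P = P * (1.0 - abs(kappa) ** 2)
        kappas.append(kappa)
        print(f"m={m}: kappa={kappa.real:+.4f}, P={P:.4f}")
    return np.array(kappas), a, P

r = np.array([1.0, 0.8, 0.6, 0.4])            # r(0), r(1), r(2), r(3) as given in part (a)
levinson_durbin(r)                             # m=1: kappa=-0.8000, P=0.3600, ...
```

Running the sketch prints $\kappa_1 = -0.8$ and $P_1 = 0.36$, in agreement with the hand calculation above; for these data the next steps evaluate to $\kappa_2 = 1/9 \approx 0.111$ and $\kappa_3 = 0.125$.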