CHAPTER 4

4.1 (a) For convergence of the steepest-descent algorithm, the step-size parameter μ must satisfy

    0 < \mu < \frac{2}{\lambda_{\max}}

where λ_max is the largest eigenvalue of the correlation matrix R. We are given

    R = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}

The two eigenvalues of R are λ_1 = 0.5 and λ_2 = 1.5. Hence λ_max = 1.5, and the step-size parameter μ must therefore satisfy the condition

    0 < \mu < \frac{2}{1.5} = 1.334

We may thus choose μ = 1.0.

(b) From Eq. (4.9) of the text,

    w(n+1) = w(n) + \mu \left[ p - R\,w(n) \right]

With μ = 1 and p = [0.5, 0.25]^T, we therefore have

    w(n+1) = w(n) + \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} - \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix} w(n)

That is,

    w(n+1) = \left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix} \right) w(n) + \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}
            = \begin{bmatrix} 0 & -0.5 \\ -0.5 & 0 \end{bmatrix} w(n) + \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}

Equivalently, we may write

    \begin{bmatrix} w_1(n+1) \\ w_2(n+1) \end{bmatrix} = \begin{bmatrix} 0 & -0.5 \\ -0.5 & 0 \end{bmatrix} \begin{bmatrix} w_1(n) \\ w_2(n) \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}

that is,

    w_1(n+1) = -0.5\,w_2(n) + 0.5
    w_2(n+1) = -0.5\,w_1(n) + 0.25
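As a quick numerical check of parts (a) and (b), the following sketch (using NumPy, and assuming a zero initial weight vector, which the problem does not specify) computes the eigenvalue bound and iterates the recursion; the weight vector converges to the Wiener solution R^{-1} p = [0.5, 0]^T:

    import numpy as np

    # Data from Problem 4.1
    R = np.array([[1.0, 0.5],
                  [0.5, 1.0]])   # correlation matrix
    p = np.array([0.5, 0.25])    # cross-correlation vector
    mu = 1.0                     # step size chosen in part (a)

    # Part (a): eigenvalues of R and the stability bound 0 < mu < 2/lambda_max
    eigvals = np.linalg.eigvalsh(R)
    print("eigenvalues:", eigvals)                   # [0.5, 1.5]
    print("upper bound on mu:", 2 / eigvals.max())   # 1.333...

    # Part (b): steepest-descent recursion w(n+1) = w(n) + mu*(p - R w(n))
    w = np.zeros(2)              # assumed initial condition w(0) = 0
    for n in range(10):
        w = w + mu * (p - R @ w)

    # The iterates approach the Wiener solution w_o = R^{-1} p
    print("w after 10 iterations:", w)
    print("Wiener solution:", np.linalg.solve(R, p))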
(c) To investigate the effect of varying the step-size parameter μ on the trajectory, we find it convenient to work with v(n) rather than with w(n). The kth natural mode of the steepest-descent algorithm is described by

    v_k(n+1) = (1 - \mu \lambda_k)\, v_k(n),    k = 1, 2

Specifically,

    v_1(n+1) = (1 - 0.5\mu)\, v_1(n)
    v_2(n+1) = (1 - 1.5\mu)\, v_2(n)

For the initial condition, we have

    v(0) = \begin{bmatrix} v_1(0) \\ v_2(0) \end{bmatrix} = Q^H w_o
From the solution to Problem 2.2,

    w_o = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix},    Q = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}

Hence,

    v(0) = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 0.5 \\ 0 \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}

That is,

    v_1(0) = v_2(0) = \frac{0.5}{\sqrt{2}}

For n > 0, we have

    v_k(n) = (1 - \mu \lambda_k)^n\, v_k(0),    k = 1, 2

Hence,

    v_1(n) = (1 - 0.5\mu)^n\, v_1(0)
    v_2(n) = (1 - 1.5\mu)^n\, v_2(0)

Solution 1: μ = 1

    v_1(n) = (0.5)^n\, v_1(0)
    v_2(n) = (-0.5)^n\, v_2(0)

This solution represents an oscillatory trajectory.

Solution 2: μ = 0.1

    v_1(n) = (0.95)^n\, v_1(0)
    v_2(n) = (0.85)^n\, v_2(0)

This second solution represents a damped trajectory. The transition from a damped to an oscillatory trajectory occurs at

    \mu = \frac{1}{1.5} = 0.667

Specifically, for 0 < μ < 0.667 the trajectory is damped. On the other hand, for 0.667 < μ < 1.334 the trajectory is oscillatory.
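The two cases can be reproduced numerically. The sketch below (NumPy assumed; the helper name `modes` is purely illustrative) evaluates v_k(n) = (1 - μλ_k)^n v_k(0) for μ = 1 and μ = 0.1 and prints the transition step size 1/λ_max:

    import numpy as np

    # Natural modes of the steepest-descent algorithm (Problem 4.1(c))
    lam = np.array([0.5, 1.5])                 # eigenvalues of R
    v0 = np.array([0.5, 0.5]) / np.sqrt(2)     # v(0) = Q^H w_o

    def modes(mu, n_steps=6):
        """Return v_k(n) = (1 - mu*lam_k)^n * v_k(0) for n = 0..n_steps."""
        n = np.arange(n_steps + 1)[:, None]
        return (1 - mu * lam) ** n * v0

    print("mu = 1.0 (oscillatory; the second mode alternates in sign):")
    print(modes(1.0))
    print("mu = 0.1 (damped; both modes decay monotonically):")
    print(modes(0.1))
    print("transition step size 1/lambda_max =", 1 / lam.max())   # 0.667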
4.2 We are given

    J(w) = J_{\min} + r(0)\,(w - w_o)^2

(a) The correlation matrix is R = r(0). Hence, λ_max = r(0), and correspondingly

    \mu_{\max} = \frac{2}{\lambda_{\max}} = \frac{2}{r(0)}

(b) The time constant of the filter is

    \tau_1 \approx \frac{1}{\mu \lambda_1} = \frac{1}{\mu\, r(0)}

(c) [Figure: sketch of the cost function J(w) versus w, showing the minimum value J_min at w = w_o and slope 2 r(0)(w - w_o).]
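For a concrete illustration of parts (a) and (b), the sketch below uses assumed example values r(0) = 2, w_o = 1, and μ = 0.1 (none of these are given in the problem) and verifies that the weight error shrinks by the factor 1 - μ r(0) per iteration, consistent with the time constant 1/(μ r(0)):

    import numpy as np

    # Problem 4.2: scalar case, R = r(0)
    r0 = 2.0      # assumed example value of r(0); not given in the problem
    w_o = 1.0     # assumed example optimum weight
    mu = 0.1      # assumed step size satisfying 0 < mu < 2/r(0)

    print("mu_max =", 2 / r0)                         # part (a): 2/r(0)
    print("time constant tau_1 ~", 1 / (mu * r0))     # part (b): 1/(mu r(0))

    # The weight error decays geometrically as (1 - mu*r0)^n
    w = 0.0
    errors = []
    for n in range(5):
        errors.append(abs(w_o - w))
        w = w + mu * r0 * (w_o - w)   # steepest descent on J(w) = J_min + r0*(w - w_o)^2

    print("error ratio per step:", errors[1] / errors[0],
          "  (equals 1 - mu*r0 =", 1 - mu * r0, ")")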
4.3 (a) There is a single mode with eigenvalue λ_1 = r(0) and eigenvector q_1 = 1. Hence, the algorithm has the single natural mode

    v_1(n+1) = (1 - \mu\, r(0))\, v_1(n)

where v_1(n) = q_1(w_o - w(n)) = (w_o - w(n)).

(b)

4.4 The estimation error e(n) equals

    e(n) = d(n) - w^H(n)\, u(n)

where d(n) is the desired response, w(n) is the tap-weight vector, and u(n) is the tap-input vector. Hence, the gradient of the instantaneous squared error |e(n)|^2 equals

    \hat{\nabla} J(n) = -2\, u(n)\, e^*(n)

4.5 Consider the approximation to the inverse of the correlation matrix:

    R^{-1}(n) = \mu \sum_{i=0}^{n-1} (I - \mu R)^i

where μ is a positive constant bounded in value as

    0 < \mu < \frac{2}{\lambda_{\max}}

where λ_max is the largest eigenvalue of R. Note that, according to this approximation, we have R^{-1}(1) = μI. Correspondingly, we may approximate the optimum Wiener solution as

    \hat{w}(n) = R^{-1}(n)\, p = \mu \sum_{i=0}^{n-1} (I - \mu R)^i\, p
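The series form quoted above for Problem 4.5 is the standard geometric-series expansion; it agrees with R^{-1}(1) = μI and converges to R^{-1} whenever 0 < μ < 2/λ_max, but since the equation itself is not shown in this preview it should be read as an assumption. The sketch below (NumPy, with the matrix R and vector p of Problem 4.1 reused purely as example data) checks both properties and the corresponding approximation to the Wiener solution:

    import numpy as np

    # Problem 4.5: series approximation of R^{-1}
    # (the geometric-series form below is assumed; it is the standard expansion
    #  consistent with R^{-1}(1) = mu*I and convergence for 0 < mu < 2/lambda_max)
    R = np.array([[1.0, 0.5],
                  [0.5, 1.0]])   # example correlation matrix (from Problem 4.1)
    p = np.array([0.5, 0.25])    # example cross-correlation vector
    mu = 1.0
    I = np.eye(2)

    def R_inv_approx(n):
        """mu * sum_{i=0}^{n-1} (I - mu*R)^i"""
        return mu * sum(np.linalg.matrix_power(I - mu * R, i) for i in range(n))

    print("R^{-1}(1)  =", R_inv_approx(1))     # equals mu*I
    print("R^{-1}(20) =", R_inv_approx(20))    # approaches the true inverse
    print("true inverse =", np.linalg.inv(R))

    # Corresponding approximation to the Wiener solution w_o = R^{-1} p
    print("w_hat(20) =", R_inv_approx(20) @ p) # approaches [0.5, 0]
    print("w_o       =", np.linalg.solve(R, p))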