# Chapter 5

**5.1** From Fig. 5.2 of the text we see that the LMS algorithm requires $2M + 1$ complex multiplications and $2M$ complex additions per iteration, where $M$ is the number of tap weights used in the adaptive transversal filter. Therefore, the computational complexity of the LMS algorithm is $O(M)$.

**5.2** The configuration is that of an adaptive noise canceller:

[Figure: primary signal $d(n)$ and reference signal $u(n)$ applied to an adaptive filter $\hat{\mathbf{w}}(n)$ and summing junction, producing the estimation error $e(n)$.]

The LMS recursion for this configuration is

$$e(n) = d(n) - \hat{\mathbf{w}}^{\mathrm H}(n)\mathbf{u}(n)$$

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\,\mathbf{u}(n)\,e^*(n)$$

**5.3** For backward prediction, we have (assuming real data)

$$\hat u(n - n_0) = \hat{\mathbf{w}}^{\mathrm T}(n)\mathbf{u}(n)$$

with the filter output $y(n) = \hat u(n - n_0)$, and the weight update

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\,[u(n - n_0) - \hat u(n - n_0)]\,\mathbf{u}(n)$$

**5.4** The adaptive line enhancer minimizes the mean-square error $E[|e(n)|^2]$. For the problem at hand, the cost function $J = E[|e(n)|^2]$ consists of the sum of three components:

1. the average power of the primary input noise, denoted by $\sigma_\nu^2$;
2. the average power of the noise at the filter output;
3. the average power of the contribution produced by the sinusoidal components at the input and output of the filter.

Let the peak value of the transfer function be denoted by $a$. We may then approximate the peak value of the weights as $2a/M$, where $M$ is the length of the filter. On this basis, we may approximate the average power of the noise at the filter output as $(2a^2/M)\sigma_\nu^2$, which takes care of term (2). For term (3), we assume that the input and output sinusoidal
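
The operation count in 5.1 and the complex LMS recursion in 5.2 can be illustrated together in a short numpy sketch; the filter length, step size, and random seed below are arbitrary choices for a toy system-identification run, not values from the text.

```python
import numpy as np

def lms_step(w, u, d, mu):
    """One complex LMS iteration: M multiplies for the filter output
    plus M + 1 multiplies for the weight update = 2M + 1 in total."""
    y = np.vdot(w, u)                 # w^H(n) u(n): M complex multiplications
    e = d - y                         # estimation error e(n)
    w_next = w + mu * np.conj(e) * u  # mu * e*(n), then times u(n): M + 1 multiplies
    return w_next, e

# Toy identification of an unknown system w_o (noise-free desired response).
rng = np.random.default_rng(0)
M = 4
w_o = rng.standard_normal(M) + 1j * rng.standard_normal(M)
w = np.zeros(M, dtype=complex)
for _ in range(2000):
    u = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    d = np.vdot(w_o, u)               # desired response w_o^H u(n)
    w, e = lms_step(w, u, d, mu=0.05)
```

Each iteration touches every tap weight a constant number of times, which is the sense in which the complexity is $O(M)$.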

components subtract coherently, thereby yielding the average power $(A^2/2)(1-a)^2$. Hence, we may express the cost function $J$ as

$$J = \sigma_\nu^2 + \frac{2a^2}{M}\sigma_\nu^2 + \frac{A^2}{2}(1-a)^2$$

Differentiating $J$ with respect to $a$ and setting the result equal to zero yields the optimum scale factor

$$a_{\mathrm{opt}} = \frac{A^2}{A^2 + 4\sigma_\nu^2/M} = \frac{(M/2)\,\mathrm{SNR}}{1 + (M/2)\,\mathrm{SNR}}$$

where

$$\mathrm{SNR} = \frac{A^2}{2\sigma_\nu^2}$$

**5.5** The index of performance equals

$$J(\mathbf{w}, K) = E[e^{2K}(n)], \qquad K = 1, 2, 3, \ldots$$

The estimation error $e(n)$ equals

$$e(n) = d(n) - \mathbf{w}^{\mathrm T}(n)\mathbf{u}(n) \tag{1}$$

where $d(n)$ is the desired response, $\mathbf{w}(n)$ is the tap-weight vector of the transversal filter, and $\mathbf{u}(n)$ is the tap-input vector. In accordance with the multiple linear regression model for $d(n)$, we have

$$d(n) = \mathbf{w}_o^{\mathrm T}\mathbf{u}(n) + v(n) \tag{2}$$

where $\mathbf{w}_o$ is the parameter vector, and $v(n)$ is a white-noise process of zero mean and variance $\sigma_v^2$.

(a) The instantaneous gradient vector equals
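
The closed form for the optimum scale factor above can be checked numerically: the derivative of $J$ should vanish at $a_{\mathrm{opt}}$, and the two algebraic forms should agree. The parameter values here are made up purely for illustration.

```python
import numpy as np

# Hypothetical parameter values chosen for illustration only.
A, sigma_nu2, M = 1.0, 0.1, 32

def J(a):
    # input noise power + filter-output noise power + coherent sinusoid term
    return sigma_nu2 + (2 * a**2 / M) * sigma_nu2 + (A**2 / 2) * (1 - a)**2

# Closed-form optimum and its equivalent SNR form
a_opt = A**2 / (A**2 + 4 * sigma_nu2 / M)
snr = A**2 / (2 * sigma_nu2)
a_opt_snr = (M / 2) * snr / (1 + (M / 2) * snr)

# Central-difference derivative of J at a_opt should be (numerically) zero
h = 1e-6
dJ = (J(a_opt + h) - J(a_opt - h)) / (2 * h)
```

Since $J$ is a convex quadratic in $a$, the central difference is exact up to rounding, so `dJ` lands at the noise floor.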
$$\hat{\boldsymbol\nabla}(n, K) = \frac{\partial}{\partial\mathbf{w}}\,e^{2K}(n) = 2K\,e^{2K-1}(n)\,\frac{\partial e(n)}{\partial\mathbf{w}} = -2K\,e^{2K-1}(n)\,\mathbf{u}(n)$$

Hence, we may express the new adaptation rule for the estimate of the tap-weight vector as

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) - \tfrac{1}{2}\mu\,\hat{\boldsymbol\nabla}(n, K) = \hat{\mathbf{w}}(n) + \mu K\,\mathbf{u}(n)\,e^{2K-1}(n) \tag{3}$$

(b) Eliminate $d(n)$ between Eqs. (1) and (2), with the estimate $\hat{\mathbf{w}}(n)$ used in place of $\mathbf{w}(n)$:

$$e(n) = v(n) - \boldsymbol\varepsilon^{\mathrm T}(n)\mathbf{u}(n), \qquad \boldsymbol\varepsilon(n) = \hat{\mathbf{w}}(n) - \mathbf{w}_o \tag{4}$$

Subtract $\mathbf{w}_o$ from both sides of Eq. (3):

$$\boldsymbol\varepsilon(n+1) = \boldsymbol\varepsilon(n) + \mu K\,\mathbf{u}(n)\,e^{2K-1}(n) \tag{5}$$

For the case when $\boldsymbol\varepsilon(n)$ is close to zero (i.e., $\hat{\mathbf{w}}(n)$ is close to $\mathbf{w}_o$), we may use Eq. (4) to write

$$e^{2K-1}(n) = \big[v(n) - \mathbf{u}^{\mathrm T}(n)\boldsymbol\varepsilon(n)\big]^{2K-1} = v^{2K-1}(n)\left[1 - \frac{\mathbf{u}^{\mathrm T}(n)\boldsymbol\varepsilon(n)}{v(n)}\right]^{2K-1}$$
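
Adaptation rule (3) can be sketched for real data with $K = 2$, i.e. minimizing $E[e^4(n)]$ with a least-mean-fourth-type update; setting $K = 1$ recovers the ordinary LMS update. The system vector, step size, noise level, and iteration count below are illustrative assumptions only.

```python
import numpy as np

def lmk_step(w, u, d, mu, K=2):
    """Rule (3): w(n+1) = w(n) + mu*K*u(n)*e(n)^(2K-1), real data."""
    e = d - w @ u                       # estimation error, Eq. (1)
    return w + mu * K * u * e**(2 * K - 1), e

# Toy regression model d(n) = w_o^T u(n) + v(n), Eq. (2); values are made up.
rng = np.random.default_rng(1)
w_o = np.array([0.1, -0.05, 0.08])      # small w_o keeps the e^3 update stable
w = np.zeros(3)
for _ in range(5000):
    u = rng.standard_normal(3)
    d = w_o @ u + 0.001 * rng.standard_normal()
    w, e = lmk_step(w, u, d, mu=0.05, K=2)
```

Because the correction scales with $e^{2K-1}$, the $K = 2$ recursion adapts aggressively when the error is large but only slowly once $e(n)$ is small, so the toy run converges toward `w_o` more gradually than LMS would.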

Expanding to first order in $\boldsymbol\varepsilon(n)$,

$$e^{2K-1}(n) \approx v^{2K-1}(n) - (2K-1)\,v^{2K-2}(n)\,\mathbf{u}^{\mathrm T}(n)\boldsymbol\varepsilon(n) \tag{6}$$

Substitute Eq. (6) into (5):

$$\boldsymbol\varepsilon(n+1) = \boldsymbol\varepsilon(n) + \mu K\,\mathbf{u}(n)\,v^{2K-1}(n) - \mu K(2K-1)\,v^{2K-2}(n)\,\mathbf{u}(n)\mathbf{u}^{\mathrm T}(n)\,\boldsymbol\varepsilon(n)$$

Taking the expectation of both sides of this relation and recognizing that (1) $\boldsymbol\varepsilon(n)$ is independent of $\mathbf{u}(n)$ by the low-pass filtering action of the filter, (2) $\mathbf{u}(n)$ is independent of $v(n)$ by assumption, and (3) $\mathbf{u}(n)$ has zero mean, we get

$$E[\boldsymbol\varepsilon(n+1)] = \big(\mathbf{I} - \mu K(2K-1)\,E[v^{2K-2}(n)]\,\mathbf{R}\big)\,E[\boldsymbol\varepsilon(n)] \tag{7}$$

where $\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^{\mathrm T}(n)]$.
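
Equation (7) says the mean weight error evolves under a fixed linear map, so it converges to zero whenever every eigenvalue of that map has magnitude below one. A small sketch with an assumed correlation matrix $\mathbf{R}$, noise variance, and step size (all invented for illustration) makes this concrete for $K = 2$, where $E[v^{2K-2}(n)] = E[v^2(n)] = \sigma_v^2$ for any zero-mean $v(n)$.

```python
import numpy as np

# Illustrative values: 2-tap correlation matrix, noise variance, step size.
R = np.array([[1.0, 0.4],
              [0.4, 1.0]])
sigma_v2 = 0.1
mu, K = 0.05, 2

Ev = sigma_v2                                   # E[v^(2K-2)] = E[v^2] for K = 2

# Mean-error transition matrix from Eq. (7)
B = np.eye(2) - mu * K * (2 * K - 1) * Ev * R

# Convergence in the mean requires spectral radius of B below one
rho = max(abs(np.linalg.eigvals(B)))

# Iterating the recursion drives E[eps(n)] toward zero
eps = np.array([1.0, -1.0])
for _ in range(500):
    eps = B @ eps
```

For $K = 1$ the same matrix reduces to $\mathbf{I} - \mu\mathbf{R}$, the familiar mean-convergence condition of ordinary LMS.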