1.1 Let
    r_u(k) = E[u(n)u*(n - k)]   (1)
    r_y(k) = E[y(n)y*(n - k)]   (2)
We are given that
    y(n) = u(n + a) - u(n - a)   (3)
Hence, substituting Eq. (3) into Eq. (2), and then using Eq. (1), we get
    r_y(k) = E[(u(n + a) - u(n - a))(u*(n + a - k) - u*(n - a - k))]
           = 2r_u(k) - r_u(2a + k) - r_u(k - 2a)
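The identity can be checked numerically. A minimal sketch, assuming u(n) is zero-mean unit-variance white noise (so r_u(k) is 1 at k = 0 and 0 elsewhere), in which case r_y(0) = 2 and r_y(±2a) = -1; the lag value a = 3 and sample size are illustrative choices:

```python
import numpy as np

# Numerical check of r_y(k) = 2 r_u(k) - r_u(2a + k) - r_u(k - 2a)
# for white u(n): expect r_y(0) ~ 2 and r_y(2a) ~ -1.
rng = np.random.default_rng(0)
a = 3
N = 200_000
u = rng.standard_normal(N)

# y(n) = u(n + a) - u(n - a), evaluated where both samples exist
y = u[2 * a:] - u[:-2 * a]

def sample_autocorr(x, k):
    """Biased sample estimate of E[x(n) x(n - k)] for k >= 0."""
    if k == 0:
        return np.mean(x * x)
    return np.mean(x[k:] * x[:-k])

print(sample_autocorr(y, 0))      # close to 2 = 2 r_u(0)
print(sample_autocorr(y, 2 * a))  # close to -1 = -r_u(0)
```

With 200,000 samples the estimates agree with the analytic values to within a few hundredths.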
3.1 (a) Let a_M denote the tap-weight vector of the forward prediction-error filter. With a tap-input vector u_{M+1}(n), the forward prediction error at the filter output equals
    f_M(n) = a_M^H u_{M+1}(n)
The mean-square value of f_M(n) equals
    E[|f_M(n)|^2] = E[a_M^H u_{M+1}(n) u_{M+1}^H(n) a_M] = a_M^H R_{M+1} a_M
where R_{M+1} = E[u_{M+1}(n) u_{M+1}^H(n)] is the correlation matrix of the tap inputs.
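The quadratic form can be evaluated directly. A sketch with assumed example values: an AR(1)-style correlation matrix with parameter rho, and the order-1 prediction-error filter a = [1, -rho]^T (leading coefficient 1, optimal one-step predictor weight rho), for which a^H R a reduces to 1 - rho^2:

```python
import numpy as np

# Mean-square forward prediction error as the quadratic form a^H R a.
# Example values (not from the problem): rho = 0.8.
rho = 0.8
R = np.array([[1.0, rho],
              [rho, 1.0]])
a_vec = np.array([1.0, -rho])

# E[|f(n)|^2] = a^H R a; for this example it equals 1 - rho^2
mse = a_vec @ R @ a_vec
print(mse)  # approximately 0.36 = 1 - 0.8**2
```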
2.1 (a) Let
    w_k = x + jy
    p(-k) = a + jb
We may then write
    f = w_k p*(-k) = (x + jy)(a - jb) = (ax + by) + j(ay - bx)
Let f = u + jv, with
    u = ax + by
    v = ay - bx
Hence,
    du/dx = a,   dv/dy = a
    du/dy = b,   dv/dx = -b
From these results we immediately see that
    du/dx = dv/dy   and   du/dy = -dv/dx
that is, the Cauchy-Riemann equations are satisfied, and f is therefore an analytic function of w_k.
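The partial derivatives can be verified by central finite differences. A sketch with assumed illustrative values a = 2, b = 3 and an arbitrary evaluation point; since f is linear in w_k, any point gives the same derivatives:

```python
import numpy as np

# Finite-difference check of the Cauchy-Riemann equations for
# f(w) = w * conj(p), where w = x + jy and p = a + jb (example values).
a, b = 2.0, 3.0
p = a + 1j * b

def u(x, y):
    return (complex(x, y) * np.conj(p)).real   # u = a*x + b*y

def v(x, y):
    return (complex(x, y) * np.conj(p)).imag   # v = a*y - b*x

x0, y0, h = 0.7, -1.2, 1e-6
du_dx = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
du_dy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
dv_dx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
dv_dy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

print(du_dx, dv_dy)   # both close to a = 2
print(du_dy, dv_dx)   # close to b = 3 and -b = -3
```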
4.1 (a) For convergence of the steepest-descent algorithm, the step-size parameter mu must satisfy
    0 < mu < 2/lambda_max
where lambda_max is the largest eigenvalue of the correlation matrix R. We are given
    R = [ 1    0.5
          0.5  1   ]
The two eigenvalues of R are lambda_1 = 0.5 and lambda_2 = 1.5. Hence lambda_max = 1.5, and the step-size parameter must satisfy
    0 < mu < 2/1.5 = 4/3
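The bound can be demonstrated by running the steepest-descent recursion w <- w + mu(p - Rw) on this R. The cross-correlation vector p below is an assumed example value; only R comes from the problem. A step size inside the bound converges, one outside it diverges:

```python
import numpy as np

# Steepest descent on the given R, illustrating 0 < mu < 2/lambda_max = 4/3.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])          # assumed example cross-correlation vector
w_opt = np.linalg.solve(R, p)      # Wiener solution

lam_max = np.linalg.eigvalsh(R).max()   # 1.5

def run(mu, n_iter=200):
    """Return the distance from the Wiener solution after n_iter steps."""
    w = np.zeros(2)
    for _ in range(n_iter):
        w = w + mu * (p - R @ w)
    return np.linalg.norm(w - w_opt)

print(lam_max)    # 1.5
print(run(1.0))   # mu = 1.0 < 4/3: error shrinks toward 0
print(run(1.5))   # mu = 1.5 > 4/3: error grows without bound
```

The convergence modes are (1 - mu*lambda_i): for mu = 1.0 both have magnitude 0.5, while for mu = 1.5 the mode at lambda = 1.5 has magnitude 1.25 and diverges.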
5.1 From Fig. 5.2 of the text we see that the LMS algorithm requires 2M+1 complex multiplications and 2M complex additions per iteration, where M is the number of tap weights used in the adaptive transversal filter. Therefore, the computational cost of the LMS algorithm grows linearly with the filter length M; that is, its complexity is O(M) per iteration.
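The per-iteration structure behind that count can be seen in a minimal complex LMS sketch: each iteration computes one filter output (M complex multiplies), one error, and one weight update (M complex multiplies plus the scaling by mu). The system-identification setup, unknown response h_true, and parameter values below are assumed examples, not part of the problem:

```python
import numpy as np

# Minimal complex LMS (system identification), following the standard
# update w <- w + mu * u(n) * e*(n) with filter output y(n) = w^H u(n).
rng = np.random.default_rng(1)
M = 4                    # number of tap weights
mu = 0.01                # step size
N = 5000                 # number of iterations
h_true = np.array([0.5, -0.3, 0.2, 0.1], dtype=complex)   # assumed unknown system

u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
w = np.zeros(M, dtype=complex)
for n in range(M, N):
    u_vec = u[n - M + 1:n + 1][::-1]   # tap-input vector [u(n), ..., u(n-M+1)]
    d = np.vdot(h_true, u_vec)         # desired response from the unknown system
    e = d - np.vdot(w, u_vec)          # estimation error (M multiplies in w^H u)
    w = w + mu * np.conj(e) * u_vec    # weight update (M multiplies plus scaling)

print(np.linalg.norm(w - h_true))      # near zero after convergence
```

Note that np.vdot conjugates its first argument, so np.vdot(w, u_vec) computes w^H u directly.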