Suppose a single new measurement arrives, with measurement vector $a_1 \in \mathbb{R}^N$. Then $y_1 = a_1^T x^* + e_1$, and
\[
\hat{x}_1 = \left( P_0 - P_0 a_1 \left( 1 + a_1^T P_0 a_1 \right)^{-1} a_1^T P_0 \right) \left( A_0^T y_0 + y_1 a_1 \right).
\]
Set $u = P_0 a_1$. Then
\[
\hat{x}_1 = \hat{x}_0 + y_1 u - \frac{a_1^T \hat{x}_0}{1 + a_1^T u}\, u - y_1 \cdot \frac{a_1^T u}{1 + a_1^T u}\, u
= \hat{x}_0 + \frac{1}{1 + a_1^T u} \left( y_1 - a_1^T \hat{x}_0 \right) u.
\]
Thus we can update the solution with one matrix-vector multiply (which has cost $O(N^2)$) and two inner products (each with cost $O(N)$). In addition, we can carry forward the "information matrix" using the update
\[
P_1 = P_0 - \frac{1}{1 + a_1^T u}\, u u^T.
\]
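To make the bookkeeping concrete, here is a minimal NumPy sketch of the single-measurement update (the function and variable names are our own, not from the notes); it performs exactly the one matrix-vector multiply and two inner products counted above.

```python
import numpy as np

def rls_rank_one_update(x_hat, P, a, y):
    """Fold in one new scalar measurement y = a^T x* + e (illustrative sketch).

    x_hat : current estimate, shape (N,)
    P     : current matrix (A_0^T A_0)^{-1}, shape (N, N)
    a     : new measurement vector a_1, shape (N,)
    y     : new scalar measurement y_1
    """
    u = P @ a                                      # one matrix-vector multiply: O(N^2)
    denom = 1.0 + a @ u                            # inner product a_1^T u: O(N)
    x_hat = x_hat + (y - a @ x_hat) / denom * u    # second inner product: O(N)
    P = P - np.outer(u, u) / denom                 # carry the "information matrix" forward
    return x_hat, P
```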
In general (for $M_1$ new measurements), we have
\[
\hat{x}_1 = P_1 \left( A_0^T y_0 + A_1^T y_1 \right) = P_1 \left( P_0^{-1} \hat{x}_0 + A_1^T y_1 \right),
\]
and since $P_0^{-1} = P_1^{-1} - A_1^T A_1$, this implies
\[
\hat{x}_1 = P_1 \left( P_1^{-1} \hat{x}_0 - A_1^T A_1 \hat{x}_0 + A_1^T y_1 \right) = \hat{x}_0 + K_1 \left( y_1 - A_1 \hat{x}_0 \right),
\]
where $K_1$ is the "gain matrix"
\[
K_1 = P_1 A_1^T.
\]
The update for $P_1$ is
\[
P_1 = P_0 - P_0 A_1^T \left( I + A_1 P_0 A_1^T \right)^{-1} A_1 P_0 = P_0 - U \left( I + A_1 U \right)^{-1} U^T,
\]
where $U = P_0 A_1^T$ is an $N \times M_1$ matrix, and $I + A_1 U$ is $M_1 \times M_1$. So the cost of the update is

- $O(M_1 N^2)$ to compute $U = P_0 A_1^T$,
- $O(M_1^2 N)$ to compute $A_1 U$,
- $O(M_1^3)$ to invert$^1$ $(I + A_1 U)$,
- $O(M_1^2 N)$ to compute $(I + A_1 U)^{-1} U^T$,
- $O(M_1 N^2)$ to take the result of the last step and apply $U$,
- $O(N^2)$ to subtract the result of the last step from $P_0$.

So assuming that $M_1 < N$, the overall cost is $O(M_1 N^2)$, which is on the order of $M_1$ matrix-vector multiplies.

$^1$In practice, it is probably more stable to find and update a factorization of this matrix, but the cost is the same.
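The block update can be coded the same way. Below is a sketch in NumPy (again, the names are our own); following the footnote, it solves against $(I + A_1 U)$ rather than forming the inverse explicitly, at the same $O(M_1^3)$ cost.

```python
import numpy as np

def rls_block_update(x_hat, P, A1, y1):
    """Fold in M1 new measurements y1 = A1 x* + e1 (illustrative sketch).

    A1 : new measurement matrix, shape (M1, N)
    y1 : new measurements, shape (M1,)
    """
    M1 = A1.shape[0]
    U = P @ A1.T                            # O(M1 N^2)
    S = np.eye(M1) + A1 @ U                 # (I + A1 U), M1 x M1: O(M1^2 N)
    P = P - U @ np.linalg.solve(S, U.T)     # O(M1^3) + O(M1^2 N) + O(M1 N^2) + O(N^2)
    K = P @ A1.T                            # gain matrix K1 = P1 A1^T
    x_hat = x_hat + K @ (y1 - A1 @ x_hat)
    return x_hat, P
```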
Recursive Least Squares (RLS)

Given
\[
y_0 = A_0 x^* + e_0, \quad
y_1 = A_1 x^* + e_1, \quad \ldots, \quad
y_k = A_k x^* + e_k, \quad \ldots,
\]
RLS is an online algorithm for computing the best estimate for $x^*$ from all the measurements it has seen up to the current time.

Recursive Least Squares
Initialize: ($y_0$ appears)
    $P_0 = (A_0^T A_0)^{-1}$
    $\hat{x}_0 = P_0 (A_0^T y_0)$
for $k = 1, 2, 3, \ldots$ do ($y_k$ appears)
    $P_k = P_{k-1} - P_{k-1} A_k^T \left( I + A_k P_{k-1} A_k^T \right)^{-1} A_k P_{k-1}$
    $K_k = P_k A_k^T$
    $\hat{x}_k = \hat{x}_{k-1} + K_k \left( y_k - A_k \hat{x}_{k-1} \right)$
end for
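As a sanity check on the loop above, here is a short self-contained simulation (the synthetic data, dimensions, and noise level are our own choices for illustration); after each iteration, the recursive estimate matches what batch least squares would give on all measurements seen so far.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
x_star = rng.standard_normal(N)

# Initialize: y_0 appears. A_0 needs full column rank so A_0^T A_0 is invertible.
A0 = rng.standard_normal((2 * N, N))
y0 = A0 @ x_star + 0.01 * rng.standard_normal(2 * N)
P = np.linalg.inv(A0.T @ A0)
x_hat = P @ (A0.T @ y0)

for k in range(1, 11):                     # y_k appears
    M = 5
    Ak = rng.standard_normal((M, N))
    yk = Ak @ x_star + 0.01 * rng.standard_normal(M)
    U = P @ Ak.T
    P = P - U @ np.linalg.solve(np.eye(M) + Ak @ U, U.T)
    K = P @ Ak.T                           # gain matrix
    x_hat = x_hat + K @ (yk - Ak @ x_hat)

print(np.linalg.norm(x_hat - x_star))      # error shrinks as measurements accumulate
```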