, $a_1 \in \mathbb{R}^N$. Then $y_1 = a_1^T x^* + e_1$, and
$$
\hat{x}_1 = \left[ P_0 - P_0 a_1 \left( 1 + a_1^T P_0 a_1 \right)^{-1} a_1^T P_0 \right] \left( A_0^T y_0 + y_1 a_1 \right).
$$
Set
u
=
P
0
a
1
. Then
b
x
1
=
b
x
0
+
y
1
u

a
T
1
b
x
0
1 +
a
T
1
u
u

y
1
·
a
T
1
u
1 +
a
T
1
u
u
=
b
x
0
+
1
1 +
a
T
1
u
(
y
1

a
T
1
b
x
0
)
u
.
Thus we can update the solution with one vector-matrix multiply (which has cost $O(N^2)$) and two inner products (with cost $O(N)$).
In addition, we can carry forward the “information matrix” using the update
$$
P_1 = P_0 - \frac{1}{1 + a_1^T u}\, u u^T.
$$
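As a concrete illustration, here is a minimal NumPy sketch of this single-measurement update (a sketch of our own; the names `rls_rank_one_update`, `P0`, `xhat0`, `a1`, `y1` are not from the notes):

```python
import numpy as np

def rls_rank_one_update(xhat0, P0, a1, y1):
    """One least-squares update for a single new measurement
    y1 = a1 @ x_star + e1, given the previous estimate xhat0 and
    P0 = inv(A0.T @ A0)."""
    u = P0 @ a1                        # one matrix-vector multiply, O(N^2)
    denom = 1.0 + a1 @ u               # inner product, O(N)
    residual = y1 - a1 @ xhat0         # inner product, O(N)
    xhat1 = xhat0 + (residual / denom) * u
    P1 = P0 - np.outer(u, u) / denom   # carry the "information matrix" forward
    return xhat1, P1
```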
In general (for $M_1 \geq 1$ new measurements), we have
$$
\hat{x}_1 = P_1 \left( A_0^T y_0 + A_1^T y_1 \right) = P_1 \left( P_0^{-1} \hat{x}_0 + A_1^T y_1 \right),
$$
and since $P_0^{-1} = P_1^{-1} - A_1^T A_1$,
this implies
$$
\hat{x}_1 = P_1 \left[ P_1^{-1} \hat{x}_0 - A_1^T A_1 \hat{x}_0 + A_1^T y_1 \right] = \hat{x}_0 + K_1 \left( y_1 - A_1 \hat{x}_0 \right),
$$
where $K_1$ is the “gain matrix”
$$
K_1 = P_1 A_1^T.
$$
The update for $P_1$ is
$$
P_1 = P_0 - P_0 A_1^T \left( I + A_1 P_0 A_1^T \right)^{-1} A_1 P_0 = P_0 - U \left( I + A_1 U \right)^{-1} U^T,
$$
where $U = P_0 A_1^T$ is an $N \times M_1$ matrix, and $I + A_1 U$ is $M_1 \times M_1$.
So the cost of the update is
• $O(M_1 N^2)$ to compute $U = P_0 A_1^T$,
• $O(M_1^2 N)$ to compute $A_1 U$,
• $O(M_1^3)$ to invert$^1$ $(I + A_1 U)$,
• $O(M_1^2 N)$ to compute $(I + A_1 U)^{-1} U^T$,
• $O(M_1 N^2)$ to take the result of the last step and apply $U$,
• $O(N^2)$ to subtract the result of the last step from $P_0$.
So assuming that $M_1 < N$, the overall cost is $O(M_1 N^2)$, which is on the order of $M_1$ vector-matrix multiplies.
$^1$ In practice, it is probably more stable to find and update a factorization of this matrix. But the cost is the same.
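The same pattern works as a block update in code. Below is a minimal NumPy sketch, under the same assumptions and naming conventions as the earlier snippet (`A1` holds the $M_1$ new measurement rows, `y1` the new observations):

```python
import numpy as np

def rls_block_update(xhat0, P0, A1, y1):
    """One least-squares update for a block of M1 new measurements
    y1 = A1 @ x_star + e1, given xhat0 and P0 = inv(A0.T @ A0)."""
    U = P0 @ A1.T                             # N x M1, costs O(M1 N^2)
    S = np.eye(A1.shape[0]) + A1 @ U          # M1 x M1, costs O(M1^2 N)
    # Solve with S rather than forming its inverse explicitly (more stable,
    # same O(M1^3) cost).
    P1 = P0 - U @ np.linalg.solve(S, U.T)
    K1 = P1 @ A1.T                            # gain matrix
    xhat1 = xhat0 + K1 @ (y1 - A1 @ xhat0)
    return xhat1, P1
```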
Recursive Least Squares (RLS)

Given
$$
\begin{aligned}
y_0 &= A_0 x^* + e_0 \\
y_1 &= A_1 x^* + e_1 \\
&\;\,\vdots \\
y_k &= A_k x^* + e_k \\
&\;\,\vdots
\end{aligned}
$$
RLS is an online algorithm for computing the best estimate for $x^*$ from all the measurements it has seen up to the current time.
Recursive Least Squares

Initialize: ($y_0$ appears)
$$
P_0 = \left( A_0^T A_0 \right)^{-1}, \qquad \hat{x}_0 = P_0 \left( A_0^T y_0 \right)
$$
for $k = 1, 2, 3, \ldots$ do
    ($y_k$ appears)
    $$
    P_k = P_{k-1} - P_{k-1} A_k^T \left( I + A_k P_{k-1} A_k^T \right)^{-1} A_k P_{k-1}
    $$
    $$
    K_k = P_k A_k^T
    $$
    $$
    \hat{x}_k = \hat{x}_{k-1} + K_k \left( y_k - A_k \hat{x}_{k-1} \right)
    $$
end for
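The loop above translates almost directly into NumPy. The following sketch is our own illustration (the simulated measurement stream, dimensions, and noise level are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 4                              # signal dimension, rows per new block
x_star = rng.standard_normal(N)

def new_measurements(m):
    """Simulate one block of noisy linear measurements of x_star."""
    A = rng.standard_normal((m, N))
    return A, A @ x_star + 0.01 * rng.standard_normal(m)

# Initialize: (y_0 appears)
A0, y0 = new_measurements(2 * N)         # enough rows that A0^T A0 is invertible
P = np.linalg.inv(A0.T @ A0)
xhat = P @ (A0.T @ y0)

# for k = 1, 2, 3, ... do: (y_k appears)
for k in range(1, 50):
    Ak, yk = new_measurements(M)
    U = P @ Ak.T
    S = np.eye(M) + Ak @ U
    P = P - U @ np.linalg.solve(S, U.T)
    K = P @ Ak.T                         # gain matrix
    xhat = xhat + K @ (yk - Ak @ xhat)

print(np.linalg.norm(xhat - x_star))     # error shrinks as measurements accumulate
```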