EE 103
Lecture Notes Section 6 Addendum,
Professor S. E. Jacobsen
Consider the system of linear equations
$$Ax = b$$
where $A$ is $m \times n$, $m > n$, and the columns of $A$, $a_1, a_2, \dots, a_n$, are linearly independent (i.e., $\operatorname{rank} A = n$).
Such a system of equations usually has no solution.
We define, for a given $x \in \mathbb{R}^n$, the error vector
$$e(x) = Ax - b.$$
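As a minimal numerical sketch of this setup (the matrix $A$ and vector $b$ below are hypothetical examples, not from the notes): with $m = 4$ equations and $n = 2$ unknowns, the error vector $e(x) = Ax - b$ is generally nonzero for any trial $x$.

```python
import numpy as np

# Hypothetical overdetermined system: m = 4 equations, n = 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

def error_vector(x):
    """e(x) = Ax - b for a trial point x in R^n."""
    return A @ x - b

# The system usually has no exact solution, so the error is nonzero.
x_trial = np.array([1.0, 1.0])
print(error_vector(x_trial))  # prints [-4. -2. -3. -5.]
```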
Linear Least Squares (LLS)
Least-squares problems are those for which the norm is the Euclidean norm
$$\|z\|_2 = \sqrt{z^T z}.$$
In this case we may write
$$\min_{x \in \mathbb{R}^n} \|e(x)\|_2 \;\Longleftrightarrow\; \min_{x \in \mathbb{R}^n} \|e(x)\|_2^2 = \min_{x \in \mathbb{R}^n} e(x)^T e(x) = \min_{x \in \mathbb{R}^n} \sum_{i=1}^{m} e_i(x)^2 \qquad (6.1)$$
That is, if the Euclidean norm is used, we choose an $x$ that minimizes the sum of the squared errors; hence the term “least squares”.
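The equality in (6.1) between the squared norm, the inner product $e(x)^T e(x)$, and the sum of squared components can be checked numerically; the data below are hypothetical examples.

```python
import numpy as np

# Hypothetical overdetermined system and trial point.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])
x = np.array([1.0, 1.0])

e = A @ x - b
# ||e(x)||_2^2 computed as the inner product e^T e ...
f_inner = e @ e
# ... equals the sum of the squared components, as in (6.1).
f_sum = sum(ei**2 for ei in e)
print(f_inner, f_sum)  # both equal 54.0
```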
Let
$$f(x) = e(x)^T e(x) = \|e(x)\|_2^2;$$
we wish to minimize $f(x)$ and, of course, if $x$ is a minimizer, then $x$ must satisfy the vector equation
$$\nabla f(x) = 0.$$
When this vector equation is linear in the unknowns, $x$, we have a so-called linear least squares problem. For the remainder of this section, we will focus on the linear least squares problem.
Now,
$$f(x) = e(x)^T e(x) = (Ax - b)^T (Ax - b) = x^T A^T A x - 2 b^T A x + b^T b,$$
so
$$\nabla f(x)^T = 2 x^T A^T A - 2 b^T A.$$
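The gradient formula $\nabla f(x) = 2A^T A x - 2A^T b$ can be verified against a central-difference approximation of $f$; the matrix $A$, vector $b$, and test point below are hypothetical examples.

```python
import numpy as np

# Hypothetical overdetermined system.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

def f(x):
    """f(x) = ||Ax - b||_2^2."""
    e = A @ x - b
    return e @ e

def grad_f(x):
    """Analytic gradient: 2 A^T A x - 2 A^T b = 2 A^T (Ax - b)."""
    return 2 * A.T @ (A @ x - b)

# Central-difference approximation of the gradient at a test point.
x0 = np.array([0.5, -1.0])
h = 1e-6
num = np.array([(f(x0 + h * np.eye(2)[i]) - f(x0 - h * np.eye(2)[i])) / (2 * h)
                for i in range(2)])
print(np.allclose(num, grad_f(x0), atol=1e-4))  # prints True
```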
Therefore,
$$\nabla f(x) = 0 \;\Longleftrightarrow\; 2 x^T A^T A - 2 b^T A = 0,$$
that is, transposing, $A^T A x = A^T b$.
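Since $\operatorname{rank} A = n$, the $n \times n$ matrix $A^T A$ is invertible, and the condition $A^T A x = A^T b$ can be solved directly. A sketch, again with hypothetical data, cross-checked against NumPy's least-squares routine:

```python
import numpy as np

# Hypothetical overdetermined system.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Solve A^T A x = A^T b directly.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Cross-check against NumPy's built-in least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_normal, x_lstsq))  # prints True
```

In practice, solvers such as `np.linalg.lstsq` avoid forming $A^T A$ explicitly (it squares the conditioning of the problem), but the small example above illustrates that both routes give the same minimizer.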