LINEAR ALGEBRA
W W L CHEN
© W W L Chen, 1997, 2005.
This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain,
and may be downloaded and/or photocopied, with or without permission from the author.
However, this document may not be kept on any information storage and retrieval system without permission
from the author, unless such system is not accessible to any individuals other than its owners.
Chapter 11
APPLICATIONS OF
REAL INNER PRODUCT SPACES
11.1. Least Squares Approximation
Given a continuous function $f : [a, b] \to \mathbb{R}$, we wish to approximate $f$ by a polynomial $g : [a, b] \to \mathbb{R}$ of degree at most $k$, such that the error
\[
\int_a^b (f(x) - g(x))^2 \, dx
\]
is minimized. The purpose of this section is to study this problem using the theory of real inner product
spaces. Our argument is underpinned by the following simple result in the theory.
PROPOSITION 11A. Suppose that $V$ is a real inner product space, and that $W$ is a finite-dimensional subspace of $V$. Given any $u \in V$, the inequality
\[
\|u - \mathrm{proj}_W u\| \le \|u - w\|
\]
holds for every $w \in W$.
In other words, the distance from $u$ to any $w \in W$ is minimized by the choice $w = \mathrm{proj}_W u$, the orthogonal projection of $u$ on the subspace $W$. Alternatively, $\mathrm{proj}_W u$ can be thought of as the vector in $W$ closest to $u$.
Proof of Proposition 11A. Note that
\[
u - \mathrm{proj}_W u \in W^\perp \quad \text{and} \quad \mathrm{proj}_W u - w \in W.
\]
It follows from Pythagoras's theorem that
\[
\|u - w\|^2 = \|(u - \mathrm{proj}_W u) + (\mathrm{proj}_W u - w)\|^2 = \|u - \mathrm{proj}_W u\|^2 + \|\mathrm{proj}_W u - w\|^2,
\]
so that
\[
\|u - w\|^2 - \|u - \mathrm{proj}_W u\|^2 = \|\mathrm{proj}_W u - w\|^2 \ge 0.
\]
The result follows immediately.
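To make the proposition concrete, here is a small numerical sketch (my own example, not from the text) in $\mathbb{R}^3$ with the standard inner product, taking $W$ to be the span of the first two standard basis vectors, so that $\mathrm{proj}_W u$ simply zeroes the last coordinate:

```python
# Numerical sketch of Proposition 11A in R^3 (illustrative, not from the text).
# W = span{e1, e2}, so proj_W u zeroes the third coordinate of u.

def norm(x):
    return sum(c * c for c in x) ** 0.5

u = (1.0, 2.0, 3.0)
proj_u = (1.0, 2.0, 0.0)                                # orthogonal projection of u on W
best = norm(tuple(a - b for a, b in zip(u, proj_u)))    # ||u - proj_W u|| = 3

# ||u - proj_W u|| <= ||u - w|| for a grid of candidates w in W
for a in range(-5, 6):
    for b in range(-5, 6):
        w = (a, b, 0.0)
        assert best <= norm(tuple(ui - wi for ui, wi in zip(u, w))) + 1e-12
```

No choice of $w$ on the grid beats the projection, in line with the proposition.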
Let $V$ denote the vector space $C[a, b]$ of all continuous real valued functions on the closed interval $[a, b]$, with inner product
\[
\langle f, g \rangle = \int_a^b f(x) g(x) \, dx.
\]
Then
\[
\int_a^b (f(x) - g(x))^2 \, dx = \langle f - g, f - g \rangle = \|f - g\|^2.
\]
It follows that the least squares approximation problem is reduced to one of finding a suitable polynomial $g$ to minimize the norm $\|f - g\|$.
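As a quick sanity check of this identity (a sketch with my own toy choice of $f$ and $g$, not from the text), one can compare the exact value of $\|f - g\|^2$ with a Riemann sum for the error integral:

```python
from fractions import Fraction as F

# Toy choice (mine): f(x) = x^2, g(x) = x on [0, 1].
# (f - g)(x) = x^2 - x, so (f - g)^2 = x^4 - 2x^3 + x^2 and
# ||f - g||^2 = <f - g, f - g> = 1/5 - 2/4 + 1/3 = 1/30.
norm_sq = F(1, 5) - F(2, 4) + F(1, 3)
assert norm_sq == F(1, 30)

# Midpoint-rule approximation of the same integral agrees closely.
n = 100_000
approx = sum((((i + 0.5) / n) ** 2 - (i + 0.5) / n) ** 2 for i in range(n)) / n
assert abs(approx - 1 / 30) < 1e-9
```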
Now let $W = P_k[a, b]$ be the collection of all polynomials $g : [a, b] \to \mathbb{R}$ with real coefficients and of degree at most $k$. Note that $W$ is essentially $P_k$, although the variable is restricted to the closed interval $[a, b]$. It is easy to show that $W$ is a subspace of $V$. In view of Proposition 11A, we conclude that $g = \mathrm{proj}_W f$ gives the best least squares approximation among polynomials in $W = P_k[a, b]$. This subspace is of dimension $k + 1$. Suppose that $\{v_0, v_1, \ldots, v_k\}$ is an orthogonal basis of $W = P_k[a, b]$. Then by Proposition 9L, we have
\[
g = \frac{\langle f, v_0 \rangle}{\|v_0\|^2} \, v_0 + \frac{\langle f, v_1 \rangle}{\|v_1\|^2} \, v_1 + \cdots + \frac{\langle f, v_k \rangle}{\|v_k\|^2} \, v_k.
\]
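For polynomial $f$, this projection can be carried out exactly in code; the following Python sketch (helper names are my own, not from the text) represents polynomials as coefficient lists and evaluates each coefficient $\langle f, v_i \rangle / \|v_i\|^2$ with rational arithmetic:

```python
from fractions import Fraction as F

# Exact least squares projection onto a polynomial subspace, given an
# ORTHOGONAL basis. Polynomials are coefficient lists, lowest degree first.
# (Helper names are my own, not from the text.)

def poly_mul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += F(a) * F(b)
    return r

def inner(p, q, a, b):
    # <p, q> = integral of p(x) q(x) over [a, b], computed exactly
    prod = poly_mul(p, q)
    a, b = F(a), F(b)
    return sum(c * (b ** (k + 1) - a ** (k + 1)) / (k + 1)
               for k, c in enumerate(prod))

def project(f, basis, a, b):
    # g = sum of <f, v_i>/||v_i||^2 * v_i over the orthogonal basis
    g = [F(0)] * max(len(v) for v in basis)
    for v in basis:
        c = inner(f, v, a, b) / inner(v, v, a, b)
        for i, vi in enumerate(v):
            g[i] += c * F(vi)
    return g

# Demo (my own instance): best degree-1 approximation of f(x) = x^2 on [0, 1],
# using the orthogonal basis {1, x - 1/2} of P_1[0, 1], is g(x) = x - 1/6.
g = project([0, 0, 1], [[1], [F(-1, 2), 1]], 0, 1)
assert g == [F(-1, 6), F(1)]
```

The exact arithmetic avoids the rounding error that floating-point quadrature would introduce into the coefficients.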
Example 11.1.1. Consider the function $f(x) = x^2$ in the interval $[0, 2]$. Suppose that we wish to find a least squares approximation by a polynomial of degree at most 1. In this case, we can take $V = C[0, 2]$, with inner product
\[
\langle f, g \rangle = \int_0^2 f(x) g(x) \, dx,
\]
and $W = P_1[0, 2]$, with basis $\{1, x\}$. We now apply the Gram-Schmidt orthogonalization process to this basis to obtain an orthogonal basis $\{1, x - 1\}$ of $W$, and take
\[
g = \frac{\langle x^2, 1 \rangle}{\|1\|^2} \, 1 + \frac{\langle x^2, x - 1 \rangle}{\|x - 1\|^2} \, (x - 1).
\]
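The preview ends before the inner products are evaluated. As a sketch (exact rational arithmetic, helper name my own), carrying the computation through gives $\langle x^2, 1 \rangle = 8/3$, $\|1\|^2 = 2$, $\langle x^2, x - 1 \rangle = 4/3$, $\|x - 1\|^2 = 2/3$, and hence $g(x) = \tfrac{4}{3} + 2(x - 1) = 2x - \tfrac{2}{3}$:

```python
from fractions import Fraction as F

def int_pow(k, a, b):
    # integral of x^k over [a, b], exactly (helper name is my own)
    return (F(b) ** (k + 1) - F(a) ** (k + 1)) / (k + 1)

# Gram-Schmidt on {1, x} over [0, 2]: v1 = x - (<x, 1>/||1||^2) * 1 = x - 1
mean = int_pow(1, 0, 2) / int_pow(0, 0, 2)
assert mean == 1                          # matches the orthogonal basis {1, x - 1}

c0 = int_pow(2, 0, 2) / int_pow(0, 0, 2)  # <x^2, 1>/||1||^2 = (8/3)/2 = 4/3
num = int_pow(3, 0, 2) - int_pow(2, 0, 2)                           # <x^2, x-1> = 4/3
den = int_pow(2, 0, 2) - 2 * int_pow(1, 0, 2) + int_pow(0, 0, 2)    # ||x-1||^2 = 2/3
c1 = num / den                            # = 2

# g = c0 * 1 + c1 * (x - 1) = 2x - 2/3
g_const, g_lin = c0 - c1, c1
assert (g_const, g_lin) == (F(-2, 3), 2)
```

Under this computation the best least squares linear approximation of $x^2$ on $[0, 2]$ is $g(x) = 2x - \tfrac{2}{3}$.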