Chapter 4. Eigenvalues and Eigenvectors.

In this chapter $V$ is an $n$-dimensional vector space over $F = \mathbb{R}$ or $\mathbb{C}$. Recall $L(V)$ is the vector space of all linear maps from $V$ to $V$.
Let $f \in L(V)$. An eigenvalue for $f$ is an element $a \in F$ such that there exists a nonzero vector $v \in V$ with $f(v) = av$. For $a \in F$, the eigenspace for $a$ (on $V$ with respect to $f$) is
$$E(a) = \{v \in V : f(v) = av\}.$$
Thus $a$ is an eigenvalue of $f$ iff $E(a) \neq 0$. The nonzero members of $E(a)$ are the eigenvectors for the eigenvalue $a$. The eigenspace $E(a)$ is a subspace of $V$; indeed:
Lemma 4A. Let $f \in L(V)$ and $a \in F$. Then $E(a) = N(a \cdot \mathrm{id}_V - f)$.
Proof. Recall $\mathrm{id}_V$ is the identity map on $V$ and $\mathrm{id}_V \in L(V)$. As $L(V)$ is a vector space and $f$ and $\mathrm{id}_V$ are in $L(V)$, so is $g = a \cdot \mathrm{id}_V - f$. Further, by the definition of addition and scalar multiplication in the vector space $L(V)$,
$$g(v) = a \cdot \mathrm{id}_V(v) - f(v) = av - f(v),$$
so $g(v) = 0$ iff $f(v) = av$. That is, $N(g) = E(a)$, as claimed.
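As a concrete sanity check of Lemma 4A, we can take $V = \mathbb{R}^2$ and let $f$ be given by a matrix; the matrix $A$, the value $a = 2$, and the vector $v$ below are illustrative choices of mine, not from the text. A minimal pure-Python sketch:

```python
# Sketch verifying Lemma 4A on V = R^2 for an illustrative map f(v) = A v.
# Then g = a*id_V - f is the map v -> (a*I - A) v, and g(v) = 0 iff f(v) = a v.

def matvec(M, v):
    """Apply a 2x2 matrix M to a vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

A = [[2, 1],
     [0, 3]]          # illustrative linear map f on R^2
a = 2                 # candidate eigenvalue
G = [[a - A[0][0], -A[0][1]],
     [-A[1][0], a - A[1][1]]]   # matrix of g = a*id_V - f

v = [1, 0]
assert matvec(A, v) == [a*v[0], a*v[1]]   # f(v) = a v, so v is in E(a)
assert matvec(G, v) == [0, 0]             # g(v) = 0, so v is in N(g)
```

The two assertions succeed for the same vector $v$, exactly as the lemma predicts: membership in $E(a)$ and in $N(g)$ are the same condition.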
Examples

(1) Notice $-f = 0 \cdot \mathrm{id}_V - f$ and $N(f) = N(-f)$, so by Lemma 4A, $E(0) = N(f)$.
(2) Take $f = \mathrm{id}_V$ to be the identity map on $V$. Then for all $0 \neq v \in V$, $v = f(v) = 1 \cdot v$, so $v$ is an eigenvector for $f$ with eigenvalue 1. Thus 1 is an eigenvalue for the identity map and $V = E(1)$ is the eigenspace for this eigenvalue.
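Example (1) can also be checked numerically; the singular matrix and the null vector below are illustrative choices of mine, since the text works with an abstract $V$. The point is that for such an $f$, the eigenspace $E(0)$ is exactly the null space $N(f)$:

```python
# Sketch of Example (1): for f given by a singular matrix A, E(0) = N(f).
# A has rank 1, so its null space is nonzero and 0 is an eigenvalue.

def matvec(M, v):
    """Apply a 2x2 matrix M to a vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

A = [[1, 2],
     [2, 4]]           # rank-1 matrix: second row is twice the first
v = [2, -1]            # a nonzero vector with A v = 0

assert matvec(A, v) == [0, 0]             # v is in N(f)
assert matvec(A, v) == [0*v[0], 0*v[1]]   # equivalently f(v) = 0*v, so v is in E(0)
```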
Theorem 4.2. Let $a_1, \ldots, a_m$ be distinct eigenvalues for $f \in L(V)$ and $v_i$ an eigenvector for $a_i$. Then $\{v_1, \ldots, v_m\}$ is linearly independent of order $m$.
Proof. The proof is by induction on $m$. If $m = 1$ the result holds as eigenvectors are nonzero. Thus we may take $m > 1$, and proceeding by induction on $m$, assume that each proper subset of $Y = \{v_1, \ldots, v_m\}$ is independent.

Suppose $0 = \sum_i b_i v_i$ is a dependence relation on $Y$. As each proper subset of $Y$ is independent, each $b_j$ is nonzero. Now
$$0 = f\Big(\sum_i b_i v_i\Big) = \sum_i b_i f(v_i) = \sum_i b_i a_i v_i.$$
Also $0 = a_m \big(\sum_i b_i v_i\big) = \sum_i a_m b_i v_i$, so
$$0 = \sum_i (a_i - a_m) b_i v_i = \sum_{i < m} c_i v_i,$$
where $c_i = (a_i - a_m) b_i$, since $c_m = 0$. However, for $j < m$, $a_m \neq a_j$, so $c_j \neq 0$. Thus we have a dependence relation on $\{v_1, \ldots, v_{m-1}\}$, contrary to an earlier observation.
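A small numeric illustration of Theorem 4.2 (the map and eigenvectors below are my own choices, not part of the text): for $f$ with matrix $\begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$, the eigenvectors for the distinct eigenvalues 2 and 3 are independent, which for two vectors in $\mathbb{R}^2$ amounts to a nonzero determinant:

```python
# Sketch checking Theorem 4.2 for f with matrix [[2,1],[0,3]] on R^2.
# v1 and v2 are eigenvectors for the distinct eigenvalues 2 and 3;
# {v1, v2} is independent iff det of the matrix with columns v1, v2 is nonzero.

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def det2(u, w):
    """Determinant of the 2x2 matrix with columns u and w."""
    return u[0]*w[1] - u[1]*w[0]

A = [[2, 1],
     [0, 3]]
v1 = [1, 0]    # f(v1) = 2*v1
v2 = [1, 1]    # f(v2) = 3*v2

assert matvec(A, v1) == [2*v1[0], 2*v1[1]]
assert matvec(A, v2) == [3*v2[0], 3*v2[1]]
assert det2(v1, v2) != 0   # {v1, v2} is linearly independent
```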
Polynomial functions of matrices.

Let $x$ be a symbol and define a polynomial in $x$ over $F$ to be a formal sum
$$f(x) = \sum_{i=0}^{m} a_i x^i,$$
for some nonnegative integer $m$ and elements $a_i \in F$. Formally, a polynomial is just a sequence of elements of $F$ such that all but a finite number of terms in the sequence are 0; that is, $\sum_i a_i x^i$ denotes the sequence $a_0, a_1, \ldots$. The polynomial notation is one way to write such sequences. Call $a_i$ the $i$th coefficient of the polynomial $f$. The zero polynomial is the polynomial all of whose coefficients are 0. The degree of the zero polynomial is 0, and for all other polynomials $f(x) = \sum_i a_i x^i$ we define the degree of $f$ to be
$$\deg(f) = \max\{i : a_i \neq 0\}.$$
Our polynomial $f$ is said to be monic if $a_m = 1$, where $m = \deg(f)$. Write $F[x]$ for the set of all polynomials.
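The coefficient-sequence view of polynomials translates directly into code. A minimal sketch, assuming polynomials over $F$ are stored as Python lists of coefficients $[a_0, a_1, \ldots]$ (the names `deg` and `is_monic` are my own):

```python
# Polynomials over F as coefficient lists [a0, a1, ..., am].
# Follows the text's convention that the zero polynomial has degree 0.

def deg(f):
    """Degree of f: the largest i with a_i != 0, or 0 for the zero polynomial."""
    for i in range(len(f) - 1, -1, -1):
        if f[i] != 0:
            return i
    return 0

def is_monic(f):
    """f is monic if its coefficient in degree m = deg(f) equals 1."""
    return f[deg(f)] == 1

assert deg([0, 0, 0]) == 0     # zero polynomial
assert deg([1, 0, 5]) == 2     # 1 + 5x^2
assert is_monic([3, 2, 1])     # 3 + 2x + x^2 is monic
```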
We define two binary operations $+$ and $\cdot$ on $F[x]$, which we call polynomial addition and polynomial multiplication:
$$\sum_i a_i x^i + \sum_i b_i x^i = \sum_i (a_i + b_i) x^i$$
$$\Big(\sum_i a_i x^i\Big) \cdot \Big(\sum_i b_i x^i\Big) = \sum_i c_i x^i, \quad \text{where } c_i = \sum_{j=0}^{i} a_j b_{i-j}.$$
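Both operations can be sketched in the same coefficient-list representation (function names are my own); multiplication is the convolution $c_i = \sum_{j=0}^{i} a_j b_{i-j}$ from the definition above:

```python
def poly_add(f, g):
    """Coefficientwise sum of two coefficient lists, padding the shorter with 0s."""
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    """Product via the convolution c_i = sum_{j<=i} a_j * b_{i-j}."""
    c = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] += a * b
    return c

assert poly_add([1, 1], [0, 2, 3]) == [1, 3, 3]   # (1+x) + (2x+3x^2) = 1+3x+3x^2
assert poly_mul([1, 1], [1, 1]) == [1, 2, 1]      # (1+x)^2 = 1 + 2x + x^2
```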
Winter '07, Aschbacher.