Then the diagram
\[
\begin{array}{ccc}
V & \xrightarrow{\;T\;} & W \\[2pt]
{[\,\cdot\,]_B} \big\downarrow\ & & \ \big\downarrow {[\,\cdot\,]_C} \\[2pt]
F^n & \xrightarrow{\;[T]^C_B\;} & F^m
\end{array}
\]
commutes, i.e. $[T]^C_B [v]_B = [Tv]_C$ for all $v \in V$.
Proof. To show both matrices are equal, we must show that their entries are equal. We know
\[
([T]^C_B)_{i,j} = e_i^* [Tv_j]_C.
\]
Hence
\begin{align*}
([T]^C_B [v]_B)_{i,1}
&= \sum_{j=1}^n ([T]^C_B)_{i,j} ([v]_B)_{j,1}
= \sum_{j=1}^n e_i^* [Tv_j]_C \, ([v]_B)_{j,1}
= e_i^* \sum_{j=1}^n ([v]_B)_{j,1} [Tv_j]_C \\
&= e_i^* \Bigg[ \sum_{j=1}^n ([v]_B)_{j,1} \, Tv_j \Bigg]_C
= e_i^* \Bigg[ T \sum_{j=1}^n ([v]_B)_{j,1} \, v_j \Bigg]_C
= e_i^* [Tv]_C
= ([Tv]_C)_{i,1}
\end{align*}
as $[\,\cdot\,]_C$, $e_i^*$, and $T$ are linear transformations, and $v = \sum_{j=1}^n ([v]_B)_{j,1} \, v_j$.
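As a concrete sanity check (this numerical example is not from the notes), we can verify the identity $[T]^C_B [v]_B = [Tv]_C$ for the differentiation map $T = d/dx : P_2 \to P_1$ with the monomial bases $B = \{1, x, x^2\}$ and $C = \{1, x\}$:

```python
import numpy as np

# Hypothetical example: T = d/dx from P_2 (polynomials of degree <= 2)
# to P_1, with bases B = {1, x, x^2} and C = {1, x}.

# Column j of [T]^C_B holds the C-coordinates of T applied to the j-th
# basis vector of B: T(1) = 0, T(x) = 1, T(x^2) = 2x.
T_CB = np.array([[0., 1., 0.],
                 [0., 0., 2.]])

# v = 3 + 2x + x^2, so [v]_B = (3, 2, 1)^T.
v_B = np.array([3., 2., 1.])

# Tv = 2 + 2x, so [Tv]_C = (2, 2)^T.
Tv_C = np.array([2., 2.])

# The diagram commutes: the matrix times the coordinate vector of v
# equals the coordinate vector of Tv.
assert np.array_equal(T_CB @ v_B, Tv_C)
print(T_CB @ v_B)  # [2. 2.]
```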
Remarks 3.4.7. When we proved 2.4.3 and 2.5.6 (1), we used the coordinate map implicitly. The algorithm of Gaussian elimination proves the hard result of 2.4.2, which in turn proves these technical theorems after we identify the vector space with $F^n$ for some $n$.
Proposition 3.4.8. Suppose $S, T \in \mathcal{L}(V, W)$ and $\lambda \in F$, and suppose that $B = \{v_1, \dots, v_n\}$ is a basis for $V$ and $C = \{w_1, \dots, w_m\}$ is a basis for $W$. Then
\[
[S + \lambda T]^C_B = [S]^C_B + \lambda [T]^C_B.
\]
Thus $[\,\cdot\,]^C_B : \mathcal{L}(V, W) \to M_{m \times n}(F)$ is a linear transformation.
Proof. First, we note that this follows immediately from 3.4.2 and 3.4.6, as
\[
[S + \lambda T]^C_B \, x
= [(S + \lambda T)[x]_B^{-1}]_C
= [S[x]_B^{-1} + \lambda T[x]_B^{-1}]_C
= [S[x]_B^{-1}]_C + \lambda [T[x]_B^{-1}]_C
= [S]^C_B \, x + \lambda [T]^C_B \, x
\]
for all $x \in F^n$.

We give a direct proof as well. We have that
\[
Sv_j = \sum_{i=1}^m ([S]^C_B)_{i,j} \, w_i
\quad\text{and}\quad
Tv_j = \sum_{i=1}^m ([T]^C_B)_{i,j} \, w_i
\]
for all $j \in [n]$. Thus
\[
(S + \lambda T)v_j = Sv_j + \lambda Tv_j
= \sum_{i=1}^m ([S]^C_B)_{i,j} \, w_i + \lambda \sum_{i=1}^m ([T]^C_B)_{i,j} \, w_i
= \sum_{i=1}^m \big( ([S]^C_B)_{i,j} + \lambda ([T]^C_B)_{i,j} \big) w_i
\]
for all $j \in [n]$, and $([S + \lambda T]^C_B)_{i,j} = ([S]^C_B)_{i,j} + \lambda ([T]^C_B)_{i,j}$, so $[S + \lambda T]^C_B = [S]^C_B + \lambda [T]^C_B$.
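The proposition can also be checked numerically. The sketch below (an illustration, not part of the notes; it assumes NumPy and randomly chosen bases, which are invertible with probability 1) computes $[L]^C_B$ column by column by solving $Cx = Lv_j$, and confirms that $[S + \lambda T]^C_B = [S]^C_B + \lambda [T]^C_B$:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 3, 4
lam = 2.5

# Hypothetical setup: V = R^3, W = R^4, with S and T given by matrices
# acting in the standard bases, and B, C random bases (columns are the
# basis vectors).
S = rng.standard_normal((m, n))
T = rng.standard_normal((m, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, m))

def matrix_wrt_bases(L, dom, cod):
    """Column j of [L]^cod_dom holds the cod-coordinates of L applied to
    the j-th basis vector of dom, found by solving cod @ x = L @ dom[:, j]."""
    return np.linalg.solve(cod, L @ dom)

lhs = matrix_wrt_bases(S + lam * T, B, C)
rhs = matrix_wrt_bases(S, B, C) + lam * matrix_wrt_bases(T, B, C)
assert np.allclose(lhs, rhs)  # [S + lam T]^C_B = [S]^C_B + lam [T]^C_B
```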
Proposition 3.4.9. Suppose $T \in \mathcal{L}(U, V)$ and $S \in \mathcal{L}(V, W)$, and suppose that $A = \{u_1, \dots, u_m\}$ is a basis for $U$, $B = \{v_1, \dots, v_n\}$ is a basis for $V$, and $C = \{w_1, \dots, w_p\}$ is a basis for $W$. Then
\[
[ST]^C_A = [S]^C_B \, [T]^B_A.
\]
Proof. We have that
\begin{align*}
STu_j
&= S \sum_{i=1}^n ([T]^B_A)_{i,j} \, v_i
= \sum_{i=1}^n ([T]^B_A)_{i,j} \, (Sv_i)
= \sum_{i=1}^n ([T]^B_A)_{i,j} \left( \sum_{k=1}^p ([S]^C_B)_{k,i} \, w_k \right) \\
&= \sum_{k=1}^p \left( \sum_{i=1}^n ([S]^C_B)_{k,i} \, ([T]^B_A)_{i,j} \right) w_k
= \sum_{k=1}^p \big( [S]^C_B \, [T]^B_A \big)_{k,j} \, w_k
\end{align*}
for all $j \in [m]$, so $[ST]^C_A = [S]^C_B \, [T]^B_A$.
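As with 3.4.8, this composition formula admits a quick numerical check (again an illustration with hypothetical random data, assuming NumPy): $[ST]^C_A$ and $[S]^C_B\,[T]^B_A$ both reduce to $C^{-1}STA$ when bases are stored as matrices of column vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

m, n, p = 3, 4, 2  # dim U = m, dim V = n, dim W = p, as in 3.4.9

# Hypothetical setup: T : U -> V and S : V -> W given by matrices in the
# standard bases; A, B, C are random bases of U, V, W (columns are the
# basis vectors, invertible with probability 1).
T = rng.standard_normal((n, m))
S = rng.standard_normal((p, n))
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((p, p))

def matrix_wrt_bases(L, dom, cod):
    # Column j: cod-coordinates of L applied to the j-th vector of dom.
    return np.linalg.solve(cod, L @ dom)

ST_CA = matrix_wrt_bases(S @ T, A, C)
S_CB = matrix_wrt_bases(S, B, C)
T_BA = matrix_wrt_bases(T, A, B)
assert np.allclose(ST_CA, S_CB @ T_BA)  # [ST]^C_A = [S]^C_B [T]^B_A
```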
Exercises
$V, W$ will denote vector spaces. Let $T \in \mathcal{L}(V, W)$.
Exercise 3.4.10.
Exercise 3.4.11.
Exercise 3.4.12. Suppose $\dim(V) = n < \infty$ and $\dim(W) = m < \infty$, and let $B, C$ be bases for $V, W$ respectively.
(1) Show that $\mathcal{L}(V, W) \cong M_{m \times n}(F)$.
(2) Show $T$ is invertible if and only if $[T]^C_B$ is invertible.
Exercise 3.4.13. Suppose $V$ is finite dimensional and $B, C$ are two bases of $V$. Show $[T]_B \sim [T]_C$.
Chapter 4
Polynomials
In this chapter, we discuss the background material on polynomials needed for linear algebra. The two main results are the Euclidean Algorithm and the Fundamental Theorem of Algebra. The first is a result on factoring polynomials, and the second says that every nonconstant polynomial in $\mathbb{C}[z]$ has a root. The latter relies on a result from complex analysis which is stated but not proved. Throughout this chapter, $F$ is a field.
4.1 The Algebra of Polynomials
Definition 4.1.1. A polynomial $p$ over $F$ is a sequence $p = (a_i)_{i \in \mathbb{Z}_{\geq 0}}$ with $a_i \in F$ for all $i \in \mathbb{Z}_{\geq 0}$ such that there is an $n \in \mathbb{N}$ with $a_i = 0$ for all $i > n$. For nonzero $p$, the minimal $n \in \mathbb{Z}_{\geq 0}$ such that $a_i = 0$ for all $i > n$ is called the degree of $p$, denoted $\deg(p)$, and we define the degree of the zero polynomial, the sequence of all zeroes, denoted $0$, to be $-\infty$.
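This definition translates directly into code. The sketch below (an illustration, not from the notes) stores a polynomial as its finite list of coefficients $(a_0, a_1, \dots)$ and computes the degree, using the convention above that the zero polynomial has degree $-\infty$:

```python
# A minimal sketch of Definition 4.1.1: a polynomial over F is stored as
# its coefficient list (a_0, a_1, ...), with all later coefficients zero.

def degree(coeffs):
    """Degree of the polynomial with coefficient list coeffs: the largest
    index with a nonzero coefficient, or -infinity for the zero polynomial."""
    for i in range(len(coeffs) - 1, -1, -1):
        if coeffs[i] != 0:
            return i
    return float('-inf')

assert degree([0, 0, 3]) == 2              # 3x^2 has degree 2
assert degree([5]) == 0                    # the constant 5 has degree 0
assert degree([0, 0, 0]) == float('-inf')  # the zero polynomial
```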