Linear transformations: the basics
A linear transformation is a homomorphism from one vector space to another (over the same field). That is, a linear transformation is a function from one $F$-space $V$ to an $F$-space $W$ (possibly the same as $V$) which preserves the vector space structure. Given the field $F$, this structure consists of vector addition and scalar multiplication. It is probably correct to say that these transformations are the main object of study in linear algebra. Here's the proper definition:
DEFINITION (Linear transformation). Suppose that $F$ is a field and $V$ and $W$ are vector spaces over $F$. A linear transformation from $V$ to $W$ is a function $T : V \to W$ such that:

1. For any $\vec{v}_1$ and $\vec{v}_2$ in $V$, $T(\vec{v}_1 + \vec{v}_2) = T\vec{v}_1 + T\vec{v}_2$. (That is, $T$ preserves, or respects, vector addition); and

2. For any $\vec{v} \in V$ and scalar $\alpha \in F$, $T(\alpha\vec{v}) = \alpha T\vec{v}$. (That is, $T$ preserves, or respects, scalar multiplication.)

In particular, such a $T$ takes as input a vector $\vec{v} \in V$ and puts out as output some $\vec{w} = T\vec{v} \in W$.
A linear transformation is also known as a linear map. In the (very important) special case where $V = W$, a linear transformation from $V$ to itself is usually called a linear operator on $V$. Another important special case is where $V$ is any vector space over $F$ and $W = F$; in this case a linear transformation is known as a linear functional on $V$.
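As a concrete numeric sketch of the definition (my own illustration, not from the notes): take $F = \mathbb{R}$, $V = \mathbb{R}^3$, and $W = \mathbb{R}$, and consider the linear functional given by the dot product with a fixed vector. The names `functional` and `w` below are illustrative.

```python
# A linear functional on R^3: v -> w . v for a fixed vector w.
# (Illustrative sketch; the vector w and sample inputs are made up.)

w = [2.0, -1.0, 5.0]  # fixed vector defining the functional

def functional(v):
    """Return the dot product w . v, a linear functional on R^3."""
    return sum(wi * vi for wi, vi in zip(w, v))

# Check the two axioms of the definition on sample vectors:
v1, v2, alpha = [1.0, 0.0, 3.0], [4.0, 2.0, -1.0], 7.0
vsum = [a + b for a, b in zip(v1, v2)]
assert functional(vsum) == functional(v1) + functional(v2)            # respects addition
assert functional([alpha * x for x in v1]) == alpha * functional(v1)  # respects scaling
```

Of course, checking the axioms on a few sample vectors is not a proof of linearity; the proof is the algebraic one the notes give below for $T_A$.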
Before proceeding to examples, we make some comments:

1. Please do not say that a linear transformation $T$ is "closed under addition" (or "closed under scalar multiplication"). That's just bad grammar. It is not even false; it makes no sense. It is like saying "My hat purples."

2. Note that I have written $T\vec{v}$ rather than $T(\vec{v})$ when I didn't need the brackets; this is standard.
3. A significant part of the structure of a vector space $V$ is the zero vector $\vec{0}_V$; surely we should insist that a linear $T$ should take zero to zero. (In fact we would insist on that, except that it is easy to prove. Note that
$$\vec{0}_W + T\vec{0}_V = T\vec{0}_V = T(\vec{0}_V + \vec{0}_V) = T\vec{0}_V + T\vec{0}_V.$$
By cancellation in the space $W$, we have $T\vec{0}_V = \vec{0}_W$.)
4. A similar comment holds for $T(-\vec{v})$; as $-\vec{v} = (-1_F)\vec{v}$, this must be $-T\vec{v}$.
The basic kind of example is when $V = F^n$ for some $n$ and $W = F^m$, and $A$ is an $m \times n$ matrix with entries from $F$. We let $T_A : V \to W$ be defined by
$T_A\vec{v} = A\vec{v}$ (matrix multiplication). To see that this is linear, suppose $\vec{v}_1$ and $\vec{v}_2$ are in $F^n$; by distributivity of matrix multiplication,
$$T_A(\vec{v}_1 + \vec{v}_2) = A(\vec{v}_1 + \vec{v}_2) = A\vec{v}_1 + A\vec{v}_2 = T_A\vec{v}_1 + T_A\vec{v}_2,$$
so $T_A$ preserves vector addition. Also, for any $\vec{v} \in F^n$ and $\alpha \in F$,
$$T_A(\alpha\vec{v}) = A(\alpha\vec{v}) = \alpha A\vec{v} = \alpha T_A\vec{v}$$
by a basic (unfortunately unnamed) property of matrix multiplication.
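The two computations above can be mirrored numerically. Below is a small sketch (the helper `matvec`, the matrix, and the sample vectors are my own choices, not from the notes) that implements $T_A$ as matrix-vector multiplication over $F = \mathbb{R}$ and checks both axioms on sample inputs:

```python
# T_A as matrix-vector multiplication; A is stored as a list of rows.
# (Illustrative sketch; matrix and vectors are made-up examples.)

def matvec(A, v):
    """Compute A v for an m-by-n matrix A and a length-n vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1.0, 2.0], [3.0, 4.0], [0.0, -1.0]]   # a 3x2 matrix, so T_A : R^2 -> R^3
v1, v2, alpha = [1.0, -2.0], [0.5, 4.0], 3.0

# T_A(v1 + v2) = T_A v1 + T_A v2:
lhs = matvec(A, [a + b for a, b in zip(v1, v2)])
rhs = [a + b for a, b in zip(matvec(A, v1), matvec(A, v2))]
assert lhs == rhs

# T_A(alpha v) = alpha T_A v:
assert matvec(A, [alpha * x for x in v1]) == [alpha * y for y in matvec(A, v1)]
```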
Let's get really specific. Say $F$ is the real numbers, $V = \mathbb{R}^3$ and $W = \mathbb{R}^2$. Define $T : \mathbb{R}^3 \to \mathbb{R}^2$ by
$$T\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3x_1 - 7x_2 + 2x_3 \\ x_1 + 4x_2 - x_3 \end{pmatrix}.$$
(E.g., $T\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} -5 \\ 6 \end{pmatrix}$.) Then $T = T_A$, where
$$A = \begin{pmatrix} 3 & -7 & 2 \\ 1 & 4 & -1 \end{pmatrix}.$$
It is easy enough to verify directly from the definition of linear transformations that this $T$ is one, but I don't think that's any easier than the general verification above.
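This specific $T$ can be transcribed directly (a sketch reproducing the worked example; only the formula comes from the notes):

```python
# The concrete map T : R^3 -> R^2 from the example, written out coordinatewise.

def T(x):
    x1, x2, x3 = x
    return [3 * x1 - 7 * x2 + 2 * x3,   # first output coordinate
            x1 + 4 * x2 - x3]           # second output coordinate

print(T([1, 2, 3]))  # -> [-5, 6], matching the worked example
```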
Spring '07 · Loveys