7.3 Properties of Matrices

The objects of study in linear algebra are linear operators. We have seen that linear operators can be represented as matrices through choices of ordered bases, and that matrices provide a means of efficient computation. We now

suppose that M^{-1} exists. Then Mx = 0 implies x = M^{-1}0 = 0. Thus, if M is invertible, then Mx = 0 has no non-zero solutions. On the other hand, Mx = 0 always has the solution x = 0. If no other solutions exist, then M can be put into reduced row echelon form with ev
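The argument above can be checked numerically. The sketch below picks an invertible matrix (an illustrative choice, not one from the text) and confirms that Mx = 0 forces x = 0:

```python
import numpy as np

# An invertible 2x2 matrix (chosen for illustration).
M = np.array([[2.0, 1.0],
              [1.0, 1.0]])
assert abs(np.linalg.det(M)) > 1e-12  # non-zero determinant: M is invertible

# The only solution of Mx = 0 is x = M^{-1} 0 = 0.
x = np.linalg.solve(M, np.zeros(2))
print(x)  # the zero vector
```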

function q : R → R whose graph contains (1, 1), (3, 2) and (5, 7). (f) second order homogeneous polynomial function r : R → R whose graph contains (3, 2). (g) number of points required to specify a third order polynomial R → R. (h) number of points required to

zero vector in two-dimensional Lorentzian
space-time with zero length. (b) Find and
sketch the collection of all vectors in two-dimensional Lorentzian space-time with zero
length. (c) Find and sketch the collection of all
vectors in three-dimensional Loren
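As a quick numerical companion to part (b), the sketch below uses the signature (−, +) convention for the two-dimensional Lorentzian length (the sign convention is an assumption for illustration) and checks that vectors along the lines x = ±t have zero length without being the zero vector:

```python
import numpy as np

# Lorentzian "length squared" in 2D with signature (-, +);
# the sign convention here is an illustrative assumption.
eta = np.array([[-1.0, 0.0],
                [0.0, 1.0]])

def length_sq(v):
    return v @ eta @ v

# Non-zero vectors along x = +/- t have zero Lorentzian length.
for v in [np.array([1.0, 1.0]), np.array([2.0, -2.0])]:
    print(length_sq(v))  # 0.0
```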

two products (doors and door frames).
Explain how it can be viewed as a function
mapping one vector space into another. (d)
Assuming that L is linear and Lf is 1 door and 2
frames, and Lg is 3 doors and 1 frame, find a
matrix for L. Be sure to specify th
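One way to organize part (d): taking the input basis to be (f, g) and the output basis to be (door, frame) (these basis choices are an assumption for illustration), the columns of the matrix are the images of the basis vectors:

```python
import numpy as np

# Assumed bases: input (f, g), output (door, frame).
# Columns are the images of the basis vectors:
# L(f) = 1 door + 2 frames, L(g) = 3 doors + 1 frame.
L = np.array([[1, 3],   # doors
              [2, 1]])  # frames

# A sample order of 2 f's and 1 g yields
# 2*1 + 1*3 = 5 doors and 2*2 + 1*1 = 5 frames.
order = np.array([2, 1])
print(L @ order)  # [5 5]
```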

the target space basis. More carefully, if L is a linear operator from V to W, then the matrix for L in the ordered bases B = (b1, b2, . . .) for V and B' = (β1, β2, . . .) for W is the array of numbers m^j_i specified by

L(b_i) = m^1_i β1 + · · · + m^j_i βj + · · ·

Remark
Associativity of matrix multiplication. We know for real numbers x, y and z that x(yz) = (xy)z, i.e., the order of multiplications does not matter. The same property holds for matrix multiplication; let us show why. Suppose M = (m^i_j), N = (n^j_k) and R = (r^k_l).
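Before the index computation, a numerical spot check of the claim (on randomly generated matrices, an illustrative setup):

```python
import numpy as np

# Spot-check associativity (MN)R = M(NR) on random 3x3 matrices.
rng = np.random.default_rng(0)
M, N, R = (rng.standard_normal((3, 3)) for _ in range(3))

print(np.allclose((M @ N) @ R, M @ (N @ R)))  # True
```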

Application: Cryptography

A very simple way to hide information is to use a substitution cipher, in which the alphabet is permuted and each letter in a message is systematically exchanged for another. For example, the ROT-13 cipher just exchanges a
M = (1 t; 0 1) (rows separated by semicolons). Then M² = (1 2t; 0 1), M³ = (1 3t; 0 1), . . . and so f(M) = (1 t; 0 1) − 2(1 2t; 0 1) + 3(1 3t; 0 1) = (2 6t; 0 2). Suppose f(x) is any function defined by a convergent Taylor series: f(x) = f(0) + f′(0)x + (1/2!)f″(0)x² + · · · . Then we can define the matrix fun
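The computation f(M) = M − 2M² + 3M³ above can be verified numerically for a concrete value of t (t = 0.5 is an arbitrary illustrative choice); the claim is f(M) = (2 6t; 0 2):

```python
import numpy as np

# Check f(M) = M - 2 M^2 + 3 M^3 for M = [[1, t], [0, 1]] with t = 0.5;
# the expected result is [[2, 6t], [0, 2]].
t = 0.5
M = np.array([[1.0, t],
              [0.0, 1.0]])
fM = M - 2 * np.linalg.matrix_power(M, 2) + 3 * np.linalg.matrix_power(M, 3)
print(fM)  # [[2. 3.] [0. 2.]], since 6t = 3
```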

M = LU, where: L is lower triangular. This means that all entries above the main diagonal are zero. In notation, L = (l^i_j) with l^i_j = 0 for all j > i:

L = ( l^1_1   0      0     · · · )
    ( l^2_1   l^2_2  0     · · · )
    ( l^3_1   l^3_2  l^3_3 · · · )
    ( · · ·   · · ·  · · ·       )

U is upper triangular. Thi
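A small hand-picked illustration of such a factorization (not the text's example): multiplying a lower and an upper triangular matrix, and checking the triangularity condition l^i_j = 0 for j > i:

```python
import numpy as np

# Hand-picked M = L U with L lower and U upper triangular.
L = np.array([[1.0, 0.0],
              [3.0, 1.0]])
U = np.array([[2.0, 4.0],
              [0.0, 5.0]])
M = L @ U
print(M)  # [[ 2.  4.] [ 6. 17.]]

# l^i_j = 0 for j > i  <=>  L equals its lower-triangular part.
assert np.allclose(L, np.tril(L)) and np.allclose(U, np.triu(U))
```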

Suppose L : V → V is linear and L(v1) = v1 + v2, L(v2) = 2v1 + v2. Compute the matrix of L in the basis B and then compute the trace of this matrix. Suppose that ad − bc ≠ 0 and consider now the new basis B' = (av1 + bv2, cv1 + dv2). Compute the matrix of L
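A numerical sketch of the change-of-basis computation in this problem (the values of a, b, c, d are arbitrary illustrative choices with ad − bc ≠ 0): in B = (v1, v2) the matrix of L has columns (1, 1) and (2, 1), and passing to B' conjugates it by the change-of-basis matrix:

```python
import numpy as np

# Matrix of L in B = (v1, v2): columns are L(v1) = v1 + v2, L(v2) = 2v1 + v2.
A = np.array([[1.0, 2.0],
              [1.0, 1.0]])

# New basis B' = (a v1 + b v2, c v1 + d v2); sample values with ad - bc != 0.
a, b, c, d = 1.0, 2.0, 3.0, 5.0
P = np.array([[a, c],
              [b, d]])
A_prime = np.linalg.solve(P, A @ P)    # P^{-1} A P

print(np.trace(A), np.trace(A_prime))  # both traces equal 2
```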

with standard addition and scalar multiplication; V := {a0 · 1 + a1x + a2x² | a0, a1, a2 ∈ R}. Let d/dx : V → V be the derivative operator. The following three equations, along with linearity of the derivative operator, allow one to take the derivative of any
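In the ordered basis (1, x, x²), the derivative operator becomes a concrete matrix whose columns are the derivatives of the basis vectors, a sketch of which is:

```python
import numpy as np

# Matrix of d/dx on V in the ordered basis (1, x, x^2):
# columns are d/dx 1 = 0, d/dx x = 1, d/dx x^2 = 2x.
D = np.array([[0, 1, 0],
              [0, 0, 2],
              [0, 0, 0]])

# Differentiate p(x) = 3 + 5x + 7x^2, stored as coefficients (a0, a1, a2).
p = np.array([3, 5, 7])
print(D @ p)  # [ 5 14  0], i.e. p'(x) = 5 + 14x
```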

section 7.3. (b) Multiply out MN and write out
a few of its entries in the same form as in part
(a). In terms of the entries of M and the
entries of N, what is the entry in row i and
column j of MN? (c) Take the transpose (MN)^T
and write out a few of its
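The identity these parts are driving at can be spot-checked on random integer matrices (an illustrative setup, not part of the exercise): entry (i, j) of MN is Σ_k m^i_k n^k_j, and taking transposes reverses the order of the factors:

```python
import numpy as np

# Spot-check (MN)^T = N^T M^T on random integer matrices.
rng = np.random.default_rng(1)
M = rng.integers(-3, 4, size=(2, 3))
N = rng.integers(-3, 4, size=(3, 2))

print(np.array_equal((M @ N).T, N.T @ M.T))  # True
```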

problem 4. There are many ways to cut up an n × n matrix into blocks. Often context or the
entries of the matrix will suggest a useful way
to divide the matrix into blocks. For example,
if there are large blocks of zeros in a matrix, or
blocks that look like
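Block multiplication can be checked directly: cutting two 4 × 4 matrices into 2 × 2 blocks and multiplying block-wise gives the same answer as multiplying the whole matrices (the matrices here are random illustrative choices):

```python
import numpy as np

# Block multiplication of 4x4 matrices cut into 2x2 blocks.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Assemble the block products into one matrix and compare with A @ B.
top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bot = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
print(np.allclose(np.vstack([top, bot]), A @ B))  # True
```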

right hand side of the above equation really
stand for the vector obtained by multiplying
the coefficients stored in the column vector by
the corresponding basis element and then
summing over them. Next, let's consider a
tautological example showing how to

scalars r and s). Your answer should have two parts. Show that (1) ⇒ (2), and then show that (2) ⇒ (1).

6.5 Review Problems

2. If f is a linear function of one variable, then how many points on the graph of the function are needed to specify the funct

powerful tool for calculations involving linear transformations. It is important to understand how to find the matrix of a linear transformation and the properties of matrices.

7.1 Linear Transformations and Matrices

Ordered, finite-dimensional, bases for

w) ≠ (u × v) × w.

(b) We saw in Chapter 1 that the operator B = u × (cross product with a vector) is a linear operator. It can therefore be written as a matrix (given an ordered basis such as the standard basis). How is it that composing such
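The operator B = u × can be written down explicitly in the standard basis as a skew-symmetric matrix built from the components of u; the sketch below checks it against a direct cross product (with arbitrary illustrative vectors):

```python
import numpy as np

# The cross-product operator B = u x, as a matrix in the standard basis.
def cross_matrix(u):
    ux, uy, uz = u
    return np.array([[0.0, -uz,  uy],
                     [ uz, 0.0, -ux],
                     [-uy,  ux, 0.0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
print(np.allclose(cross_matrix(u) @ v, np.cross(u, v)))  # True
```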

as the set R^S? Generalize to other size matrices. 8. Show that any function in R^{ , ?, #} can be written as a sum of multiples of the functions e, e?, e# defined by
e(k) = 1 for the first symbol, 0 for k = ?, 0 for k = # ,
e?(k) = 0 for the first symbol, 1 for k = ?, 0 for k = # ,
e#(k) = 0

We are now ready to learn the powerful consequences of linearity.

Linear Transformations

6.1 The Consequence of Linearity

Now that we have a sufficiently general notion of vector space it is time to talk about why linear operators are so special.

on any vector by first expressing the vector as a sum of multiples and then applying linearity:

L(x, y) = L( (x+y)/2 (1, 1) + (x−y)/2 (1, −1) )
        = (x+y)/2 L(1, 1) + (x−y)/2 L(1, −1)
        = (x+y)/2 (2, 4) + (x−y)/2 (6, 8)
        = ( (x+y) + 3(x−y), 2(x+y) + 4(x−y) )
        = (4x − 2y, 6x − 2y)

range is the line through the origin in the x2 direction. It is not clear how to formulate L as a matrix, since

L(c1, c1 + c2, c2) = (0 0 0; 1 0 1; 0 0 0)(c1, c1 + c2, c2) = (c1 + c2)(0, 1, 0) ,

or

L(c1, c1 + c2, c2) = (0 0 0; 0 1 0; 0 0 0)(c1, c1 + c2, c2) = (c1 + c2)(0, 1, 0)

already try to write down the standard basis vectors for R^n for other values of n and express an arbitrary vector in R^n in terms of them. The last example probably seems pedantic because column vectors are already just ordered lists of numbers and the b

(e) Compare and contrast your results from parts (b) and (d). 4. Find the matrix for d/dx acting on the vector space of all power series in the ordered basis (1, x, x², x³, . . .). Use this matrix to find all power series solutions to the differential equat

since (f + g)(2) = 0. 5.2 Other Fields Above, we
defined vector spaces over the real numbers.
One can actually define vector spaces over any
field. This is referred to as choosing a different
base field. A field is a collection of numbers
satisfying prope

examples, it is also not true that all vector
spaces consist of functions. Examples are
somewhat esoteric, so we omit them. Another
important class of examples is vector spaces
that live inside R^n but are not themselves R^n.

5.1 Examples of Vector S

infinitely many components, but even
infinitely many components between any two
components! You are familiar with algebraic
definitions like f(x) = e^x − 2x + 5. However, most vectors in this vector space cannot be defined algebraically. For example, the no

Since n × n matrices are linear transformations R^n → R^n, we can see that
the order of successive linear transformations
matters. Here is an example of matrices acting
on objects in three dimensions that also shows
matrices not commuting. Example 89 In
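Two concrete three-dimensional examples (both 90-degree rotations, one about the z-axis and one about the x-axis, an illustrative pair not taken from the text) suffice to exhibit non-commuting matrices:

```python
import numpy as np

# Two 3D rotations by 90 degrees that do not commute:
# Rz rotates about the z-axis, Rx about the x-axis.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
Rx = np.array([[1.0, 0.0,  0.0],
               [0.0, 0.0, -1.0],
               [0.0, 1.0,  0.0]])

print(np.allclose(Rz @ Rx, Rx @ Rz))  # False: order matters
```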

could write

L (x, y)_B = (x + y) (0, 1, 0)_E

and thus see that L acts like the matrix (0 0; 1 1; 0 0). Hence

L (x, y)_B = (0 0; 1 1; 0 0)(x, y) ,

with the output expressed in the ordered basis E; given input and output bases, the linear operator is now encoded by a matrix. This is the general rule for this chapter:
row, a web browser downloading the file can
start displaying an incomplete version of the
picture before the download is complete.
Finally, a compression algorithm is applied to
the matrix to reduce the file size.

Exampl

(AA^T)^{-1}. (b) Show that the matrix B above is a right inverse for A, i.e., verify that AB = I. (c) Is BA defined? (Why or why not?) (d) Let A be an n × m matrix with n > m. Suggest a formula for a left inverse C such that CA = I. Hint: you may assume that A^T A h
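A numerical sketch of the construction the hint points toward, on a sample 3 × 2 matrix (an illustrative choice): assuming A^T A is invertible, C = (A^T A)^{-1} A^T is a left inverse.

```python
import numpy as np

# For an n x m matrix A with n > m and A^T A invertible,
# C = (A^T A)^{-1} A^T satisfies C A = I.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])      # a 3 x 2 example
C = np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(C @ A, np.eye(2)))  # True
```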

relatively simple when working over the
complex numbers. This phenomenon occurs when diagonalizing matrices; see chapter 13.
The rational numbers Q are also a field. This
field is important in computer algebra: a real
number given by an infinite string of

directions available. To figure out the
dimension of a vector space, I stand at the
origin, and pick a direction. If there are any
vectors in my vector space that aren't in that direction, then I choose another direction that isn't in the line determined by

(+v) (Additive Inverse) For every u ∈ V there exists w ∈ V such that u + w = 0_V.

(· i) (Multiplicative Closure) c · v ∈ V. Scalar times a vector is a vector.

(· ii) (Distributivity) (c + d) · v = c · v + d · v. Scalar multiplication distributes over addition of scalars.