invertible they are square, and because their product is defined they must both be $n \times n$. Fix spaces and bases, say $\mathbb{R}^n$ with the standard bases, to get maps $g, h\colon \mathbb{R}^n \to \mathbb{R}^n$ that are associated with the matrices, $G = \operatorname{Rep}_{\mathcal{E}_n,\mathcal{E}_n}(g)$ and $H = \operatorname{Rep}_{\mathcal{E}_n,\mathcal{E}_n}(h)$. Consider $h$. Because the matrix $H$ is nonsingular, Corollary IV.3.23 says there are elementary reduction matrices such that $R_r \cdots R_1 \cdot H = I$ with $r \geq 1$. Elementary matrices are invertible and their inverses are also elementary, so multiplying both sides of that equation from th
equivalent if one can be converted to the
other by a sequence of row reduction steps,
while two matrices are matrix equivalent if
one can be converted to the other by a
sequence of row reduction steps followed by a
sequence of column reduction steps.
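The distinction can be seen in a small computation. Here is a minimal Python sketch (the code and the example matrix are mine, not the book's): row steps alone bring a matrix to echelon form, and a further column step brings it to the partial-identity canonical form for matrix equivalence.

```python
# Sketch (my own example): reduce a matrix to the canonical form for matrix
# equivalence by a row reduction step followed by a column reduction step.
M = [[1.0, 2.0],
     [2.0, 4.0]]

# Row step: add -2 times row 0 to row 1.
M[1] = [m1 - 2 * m0 for m0, m1 in zip(M[0], M[1])]
# Now M is in echelon form: [[1.0, 2.0], [0.0, 0.0]].

# Column step: add -2 times column 0 to column 1.
for row in M:
    row[1] = row[1] - 2 * row[0]

print(M)  # [[1.0, 0.0], [0.0, 0.0]] -- the canonical representative
```

The row step alone leaves the echelon form, which still has a nonzero off-diagonal entry; the column step is what finishes the job.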
Cons
\[
\frac{\begin{pmatrix} x \\ y \\ z \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}}{\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}} \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ y \\ 0 \end{pmatrix}
\]
which matches our intuitive expectation. The picture above, showing the figure walking out on the line until $\vec{v}\,$'s tip is overhead, is one way to think of the orthogonal projection of a vector into a line. We finish this subsection
model to ensure that the solution has a
desired accuracy. 4.7 Lemma A matrix H is
invertible if and only if it can be written as the
product of elementary reduction matrices. We
can compute the inverse by applying to the
identity matrix the same row steps
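That computation can be sketched in code. The following Python is my own illustration of the method, not the book's: run Gauss-Jordan reduction on the matrix augmented with the identity, so the same row steps that reduce $H$ to $I$ carry $I$ to $H^{-1}$.

```python
# Sketch (code mine): compute an inverse by applying to the identity the same
# row steps that reduce H, via Gauss-Jordan reduction of [H | I].
from fractions import Fraction

def inverse(H):
    n = len(H)
    # Augment H with the n x n identity.
    A = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(H)]
    for col in range(n):
        # Find a pivot row (assumes H is nonsingular).
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        # Rescale the pivot row, then clear the rest of the column.
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]  # the right half is now the inverse

print(inverse([[1, 1], [2, -1]]))
```

Exact rational arithmetic via `fractions` keeps the row steps free of rounding error.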
Matrix Multiplication

We can consider matrix multiplication as a mechanical process, putting aside for the moment any implications about the underlying maps. The striking thing about this operation is the way that rows and columns combine. The $i,j$ entry of $GH$ is the dot product of row $i$ of $G$ with column $j$ of $H$.
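That row-times-column combination can be written out directly. A short sketch in Python (my code, not the book's):

```python
# Sketch (code mine): the i,j entry of the product GH is the dot product of
# row i of G with column j of H.
def mat_mul(G, H):
    return [[sum(G[i][k] * H[k][j] for k in range(len(H)))
             for j in range(len(H[0]))]
            for i in range(len(G))]

G = [[1, 2],
     [3, 4]]
H = [[5, 6],
     [7, 8]]
print(mat_mul(G, H))  # [[19, 22], [43, 50]]
```

For instance, the $1,1$ entry $19$ is $(1,2) \cdot (5,7) = 5 + 14$.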
bases for the spaces.
\[
B = \left\langle \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \right\rangle \quad C = \langle 1 + x,\; 1 - x,\; x^2 \rangle \quad D = \left\langle \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 3 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 4 \end{pmatrix} \right\rangle
\]
(a) Give the formula for the composition map $g \circ h\colon \mathbb{R}^3 \to \mathcal{M}_{2\times 2}$ derived directly from the above definition. (b) Represent $h$ and $g$ with respe
(3) If $H \xrightarrow{k\rho_i + \rho_j} G$ then $C_{i,j}(k)H = G$. Proof Clear. QED

3.21 Example This is the first system, from the first chapter, on which we performed Gauss's Method.
\[
\begin{aligned} 3x_3 &= 9 \\ x_1 + 5x_2 - 2x_3 &= 2 \\ \tfrac{1}{3}x_1 + 2x_2 &= 3 \end{aligned}
\]
We can reduce it with matrix multiplication. Swap the first
Chapter Three. Maps Between Spaces

changes a representation with respect to the basis $\langle \vec{\beta}_1, \ldots, \vec{\beta}_i, \ldots, \vec{\beta}_j, \ldots, \vec{\beta}_n \rangle$ into one with respect to this basis $\langle \vec{\beta}_1, \ldots, \vec{\beta}_j, \ldots, \vec{\beta}_i, \ldots, \vec{\beta}_n \rangle$.
\[
\vec{v} = c_1 \vec{\beta}_1 + \cdots + c_i \vec{\beta}_i + \cdots + c_j \vec{\beta}_j + \cdots + c_n \vec{\beta}_n
\]
constructive: it not only says the bases change, it shows how they change.

1.21 Let $V, W$ be vector spaces, and let $B, \hat{B}$ be bases for $V$ and $D, \hat{D}$ be bases for $W$. Where $h\colon V \to W$ is linear, find a formula relating $\operatorname{Rep}_{B,D}(h)$ to $\operatorname{Rep}_{\hat{B},\hat{D}}(h)$. X 1.22 Show that the c
not orthogonal.
\[
B = \left\langle \underbrace{\begin{pmatrix} 4 \\ 2 \end{pmatrix}}_{\vec{\beta}_1}, \underbrace{\begin{pmatrix} 1 \\ 3 \end{pmatrix}}_{\vec{\beta}_2} \right\rangle
\]
We will derive from $B$ a new basis for the space spanned by $\vec{\beta}_1$ and $\vec{\beta}_2$ consisting of mutually orthogonal vectors. The first member of the new basis is just $\vec{\beta}_1$.
\[
\vec{\kappa}_1 = \begin{pmatrix} 4 \\ 2 \end{pmatrix}
\]
For the second me
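The computation for the second member can be carried out numerically. In this Python sketch (the code is mine; the vectors are the example's), the second basis vector comes from subtracting from $\vec{\beta}_2$ its projection into the line spanned by $\vec{\kappa}_1$:

```python
# Sketch (code mine): second Gram-Schmidt vector for B = <(4,2), (1,3)>.
def proj(v, s):
    # Orthogonal projection of v into the line spanned by s.
    c = sum(a * b for a, b in zip(v, s)) / sum(a * a for a in s)
    return [c * a for a in s]

kappa1 = [4, 2]
beta2 = [1, 3]
kappa2 = [b - p for b, p in zip(beta2, proj(beta2, kappa1))]
print(kappa2)                                       # [-1.0, 2.0]
print(sum(a * b for a, b in zip(kappa1, kappa2)))   # 0.0 -- orthogonal
```

The zero dot product confirms that the two new basis vectors are mutually orthogonal.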
is the $2 \times 2$ identity matrix, with $1$s in its $1,1$ and $2,2$ entries and zeroes elsewhere; see Exercise 34). (b) Let $p(x)$ be a polynomial
\[
p(x) = c_n x^n + \cdots + c_1 x + c_0
\]
If $T$ is a square matrix we define $p(T)$ to be the matrix
\[
c_n T^n + \cdots + c_1 T + c_0 I
\]
(where $I$ is the app
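The definition can be tried out in code. A small Python sketch (my own, with an example polynomial chosen so that the answer is visibly the zero matrix):

```python
# Sketch (code mine): evaluate a polynomial at a square matrix, with the
# constant term multiplying the identity, using Horner's scheme.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def poly_at(coeffs, T):
    # coeffs = [c_n, ..., c_1, c_0]; result is c_n T^n + ... + c_1 T + c_0 I.
    n = len(T)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    result = [[0] * n for _ in range(n)]
    for c in coeffs:
        result = mat_mul(result, T)
        result = [[result[i][j] + c * I[i][j] for j in range(n)]
                  for i in range(n)]
    return result

T = [[2, 0], [0, 3]]
# p(x) = x^2 - 5x + 6 has roots 2 and 3, so it sends this T to zero.
print(poly_at([1, -5, 6], T))  # [[0, 0], [0, 0]]
```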
symmetric if each $i,j$ entry equals the $j,i$ entry (that is, if the matrix equals its transpose). Show that the matrices $HH^T$ and $H^TH$ are symmetric. (c) Show that the inverse of the transpose is the transpose of the inverse. (d) Show that the inverse of a
so people have tried to reduce the number of multiplications used to compute a matrix product. (a) How many real number multiplications do we need in the formula we gave for the product of an $m \times r$ matrix and an $r \times n$ matrix? (b) Matrix multiplication is assoc
minimum distance from the point $(v_1, v_2)$ by using calculus (i.e., consider the distance function, set the first derivative equal to zero, and solve). Generalize to $\mathbb{R}^n$. X 1.17 Prove that the orthogonal projection of a vector into a line is shorter than th
same rank. There is only one rank zero matrix. The other two classes have infinitely many members; we've shown only the canonical representative. One nice thing about the representative in Theorem 2.6 is that we can completely understand the linear map whe
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 & 6 \\ 1 & 3 & 8 \\ -7 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 6 \\ 1 & 3 & 8 \\ -7 & 1 & 0 \end{pmatrix}
\]
and from the right.
\[
\begin{pmatrix} 2 & 3 & 6 \\ 1 & 3 & 8 \\ -7 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 6 \\ 1 & 3 & 8 \\ -7 & 1 & 0 \end{pmatrix}
\]
In short, an identity matrix is the identity element of the set of $n \times n$ matrices with respect to the operation of matrix multiplication.
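A quick numeric check of that statement, in Python (the code and the sample matrix are mine):

```python
# Sketch (code mine): multiplying by the identity from either side leaves a
# matrix unchanged.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = [[2, 3, 6], [1, 3, 8], [-7, 1, 0]]
print(mat_mul(I, M) == M and mat_mul(M, I) == M)  # True
```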
gives some nice properties and more are in
Exercise 25 and Exercise 26. 2.12 Theorem If F,
G, and H are matrices, and the matrix products
are defined, then the product is associative
(FG)H = F(GH) and distributes over matrix
addition F(G + H) = FG + FH an
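Both identities are easy to spot-check numerically. A Python sketch (my code and my choice of matrices, not the book's):

```python
# Sketch (code mine): spot-check associativity and distributivity over
# addition for some 2x2 matrices.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

F = [[1, 2], [3, 4]]
G = [[0, 1], [1, 0]]
H = [[2, 0], [0, 2]]
print(mat_mul(mat_mul(F, G), H) == mat_mul(F, mat_mul(G, H)))              # True
print(mat_mul(F, mat_add(G, H)) == mat_add(mat_mul(F, G), mat_mul(F, H)))  # True
```

A numeric check is not a proof, of course; the theorem's argument is what establishes the properties in general.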
of the result.
\[
\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 4 \\ 0 & 7 \end{pmatrix}
\]

Section IV. Matrix Operations

3.4 Example Rescaling unit matrices simply rescales the result. This is the action from the left of the matrix that is twice the one in the prior example.
\[
\begin{pmatrix} 0 & 2 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}
\]
\[
\cdots - \operatorname{proj}_{[\vec{\kappa}_2]}(\vec{\beta}_3), \quad \ldots, \quad \vec{\kappa}_k = \vec{\beta}_k - \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{\beta}_k) - \cdots - \operatorname{proj}_{[\vec{\kappa}_{k-1}]}(\vec{\beta}_k)
\]
form an orthogonal basis for the same subspace.

2.8 Remark This is restricted to $\mathbb{R}^n$ only because we have not given a definition of orthogonality for other spaces.

Proof We will use induction to
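The construction in the theorem can be sketched as code. This Python is my own illustration, not the book's: it subtracts from each $\vec{\beta}_k$ its projections into the lines of the earlier $\vec{\kappa}$'s, then checks mutual orthogonality.

```python
# Sketch (code mine): Gram-Schmidt, as in the theorem. Each kappa_k is
# beta_k minus its projections into the lines of the earlier kappas.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(betas):
    kappas = []
    for b in betas:
        k = list(b)
        for prev in kappas:
            c = dot(b, prev) / dot(prev, prev)
            k = [ki - c * pi for ki, pi in zip(k, prev)]
        kappas.append(k)
    return kappas

kappas = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
# The resulting vectors are mutually orthogonal (up to rounding).
print(all(abs(dot(kappas[i], kappas[j])) < 1e-12
          for i in range(3) for j in range(i)))  # True
```

Note that the first vector is kept unchanged, matching the theorem's $\vec{\kappa}_1 = \vec{\beta}_1$.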
equals $\vec{v}_{i+1}$? If so, what is the earliest such $i$?

Section VI. Projection

VI.2 Gram-Schmidt Orthogonalization

The prior subsection suggests that projecting $\vec{v}$ into the line spanned by $\vec{s}$ decomposes that vector into two parts, $\operatorname{proj}_{[\vec{s}]}(\vec{v})$ and $\vec{v} - \operatorname{proj}_{[\vec{s}]}(\vec{v})$
think of orthogonal projection into a line is to
have the person stand on the vector, not the
line. This person holds a rope looped over the
line. As they pull, the loop slides on the line.
When it is tight, the rope is orthogonal to the
line. That is, we
sided inverse if and only if it is both one-to-one and onto. The appendix also shows that if a function $f$ has a two-sided inverse then it is unique, so we call it the inverse and write $f^{-1}$. In addition, recall that we have shown in Theorem II.2.20 that if
calculate $\hat{H} = \operatorname{Rep}_{\hat{B},\hat{D}}(h)$ either by directly using $\hat{B}$ and $\hat{D}$, or else by first changing bases with $\operatorname{Rep}_{\hat{B},B}(\text{id})$, then multiplying by $H = \operatorname{Rep}_{B,D}(h)$, and then changing bases with $\operatorname{Rep}_{D,\hat{D}}(\text{id})$.
\[
\hat{H} = \operatorname{Rep}_{D,\hat{D}}(\text{id}) \cdot H \cdot \operatorname{Rep}_{\hat{B},B}(\text{id}) \qquad (*)
\]
2.1 Example The matrix $T$ = cos(π/6)
matrix just given
\[
\begin{pmatrix} m & n \\ p & q \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]
by using Gauss's Method to solve the resulting linear system.
\[
\begin{aligned} m + 2n &= 1 \\ m - n &= 0 \\ p + 2q &= 0 \\ p - q &= 1 \end{aligned}
\]
Answer: $m = 1/3$, $n = 1/3$, $p = 2/3$, and $q = -1/3$. (This matrix is actually the two-sided inverse of $H$; the ch
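The answer can be verified with exact rational arithmetic. A Python sketch (the code is mine; the matrices are the example's):

```python
# Sketch (code mine): verify that the computed matrix is a two-sided
# inverse of H = [[1, 1], [2, -1]].
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

H = [[1, 1], [2, -1]]
Hinv = [[F(1, 3), F(1, 3)], [F(2, 3), F(-1, 3)]]
I = [[1, 0], [0, 1]]
print(mat_mul(Hinv, H) == I and mat_mul(H, Hinv) == I)  # True
```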
exercises.) Here is another property of matrix multiplication that might be puzzling at first sight. (a) Prove that the composition of the projections $\pi_x, \pi_y\colon \mathbb{R}^3 \to \mathbb{R}^3$ onto the $x$ and $y$ axes is the zero map despite that neither one is itself the zero map. (b
that someone standing on $\vec{p}$ and looking straight up or down (that is, looking orthogonally to the plane) sees the tip of $\vec{v}$. In this section we will generalize this to other projections, orthogonal and non-orthogonal.

VI.1 Orthogonal Projection Into a Line
holds for maps: with respect to the basis pairs $\mathcal{E}_2, \mathcal{E}_2$ and $\mathcal{E}_2, B$, the identity map has these representations.
\[
\operatorname{Rep}_{\mathcal{E}_2,\mathcal{E}_2}(\text{id}) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \operatorname{Rep}_{\mathcal{E}_2,B}(\text{id}) = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix}
\]
This section shows how to translate among the representations. That is, we will compute h
sentence holds because matrix-vector multiplication represents a map application and so $\operatorname{Rep}_{B,D}(\text{id}) \cdot \operatorname{Rep}_B(\vec{v}) = \operatorname{Rep}_D(\text{id}(\vec{v})) = \operatorname{Rep}_D(\vec{v})$ for each $\vec{v}$. For the second sentence, with respect to $B, D$ the matrix $M$ represents a linear map whose action is to map e
Gauss-Jordan reduction. We have already seen how to produce a matrix that rescales rows, and a row swapper.

3.16 Example Multiplying by this matrix rescales the second row by three.
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 2 & 1 & 1 \\ 0 & 1/3 & 1 & -1 \\ 1 & 0 & 2 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 2 & 1 & 1 \\ 0 & 1 & 3 & -3 \\ 1 & 0 & 2 & 0 \end{pmatrix}
\]
3.1
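The example's mechanics can be reproduced numerically. In this Python sketch (the code is mine; the matrices are from Example 3.16), multiplying from the left by the elementary matrix rescales the second row by three:

```python
# Sketch (code mine): the elementary matrix with 3 in its 2,2 entry, acting
# from the left, rescales the second row of M by three.
from fractions import Fraction as Fr

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E = [[1, 0, 0], [0, 3, 0], [0, 0, 1]]
M = [[0, 2, 1, 1], [0, Fr(1, 3), 1, -1], [1, 0, 2, 0]]
P = mat_mul(E, M)
# The second row of the product is [0, 1, 3, -3]; the others are unchanged.
print(P)
```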
two ways to compute the matrix for going down the square's right side, $\operatorname{Rep}_{\mathcal{E}_2,\hat{D}}(\text{id})$. We could calculate it directly as we did for the other change of basis matrix. Or, we could instead calculate it as the inverse of the matrix for going up, $\operatorname{Rep}_{\hat{D},\mathcal{E}_2}(\text{id})$. F
easy to check, the arithmetic seems unconnected to any idea. The argument in the proof is shorter and also says why this property really holds. This illustrates the comments made at the start of the chapter on vector spaces: at least sometimes an argument
This subsection shows that this isn't the entire story. Understanding matrix operations by understanding the mechanics of how the entries combine is also useful. In the rest of this book we shall continue to focus on maps as the primary objects, but we wil
some ways an extension of real number
multiplication. We also have a matrix
multiplication operation and its inverse that
are somewhat like the familiar real number
operations (associativity, and distributivity
over addition, for example), but there are
d