matrix $M'$ of $L$, we could simply calculate what $L$ does to the new input basis vectors in terms of the new output basis vectors: $\big(L(1+t),\, L(t+t^2),\, L(1+t^2)\big) = (w'_1 + w'_2,\; w'_1 + 2w'_2,\; 2w'_1 + w'_2) = (w$
the second entry of the first row, whose effect upon multiplying the two matrices precisely undoes what we did to the second column of the first matrix. For the third column of $M$ we use Gram-Schmidt to deduce the third
orthogonal vector 1 6 1 3 7 6 =
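The Gram-Schmidt step mentioned above, deducing each new orthogonal vector by subtracting projections onto the earlier ones, can be sketched as follows. The input vectors here are hypothetical, not the ones from the text.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize an ordered list of vectors by subtracting projections."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - (u @ w) / (u @ u) * u  # remove the component along u
        basis.append(w)
    return basis

# Hypothetical input vectors for illustration:
vs = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])]
u1, u2, u3 = gram_schmidt(vs)
```

Each output vector is orthogonal to all the ones produced before it, which is exactly the property used to build the third column above.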
be diagonalized? Either
diagonalize it or explain why this is impossible.
Note: It turns out that every matrix is similar
to a block matrix whose diagonal blocks look
like diagonal matrices or the ones above and
whose off-diagonal b
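A minimal numerical illustration of why such block matrices can fail to be diagonalizable: the $2\times 2$ Jordan-type block below has a repeated eigenvalue but only one independent eigenvector, so its eigenvectors cannot form a basis. The matrix is a standard example, not the (garbled) one from the problem statement.

```python
import numpy as np

# A Jordan-type block: repeated eigenvalue 1, only one independent eigenvector.
J = np.array([[1., 1.],
              [0., 1.]])
eigenvalues, eigenvectors = np.linalg.eig(J)
# Both eigenvalues equal 1, and the two computed eigenvectors are
# (numerically) parallel, so the eigenvector matrix is singular:
# there is no basis of eigenvectors, hence no diagonalization.
```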
orthogonal basis for the latter vector space.
Note that the set of vectors you start out with
needs to be ordered to uniquely specify the
algorithm; changing the order of the vectors
will give a different orthogonal basis. You
might need to be the one to
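The order-dependence noted above is easy to see concretely: running the same orthogonalization on the same two vectors in the two possible orders produces genuinely different orthogonal bases. The vectors here are hypothetical examples.

```python
import numpy as np

def orthogonalize(vectors):
    """Gram-Schmidt on an ordered list of vectors."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - (u @ w) / (u @ u) * u
        basis.append(w)
    return basis

a, b = [1., 1.], [1., 0.]
first = orthogonalize([a, b])    # starts from a
second = orthogonalize([b, a])   # starts from b
# Both are orthogonal bases of R^2, but they are different bases.
```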
a subspace of $W$. 13. Let $S_n$ and $A_n$ denote the spaces of $n \times n$ symmetric and anti-symmetric matrices, respectively. These are subspaces of the vector space $M_{n \times n}$ of all $n \times n$ matrices. What are $\dim M_{n \times n}$, $\dim S_n$, and $\dim A_n$? Show that $M_{n \times n} = S_n + A_n$. Define an inn
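The decomposition $M_{n \times n} = S_n + A_n$ asked for above can be exhibited directly: every square matrix splits as $M = \tfrac{1}{2}(M + M^T) + \tfrac{1}{2}(M - M^T)$, with the first part symmetric and the second anti-symmetric. The matrix below is a hypothetical example.

```python
import numpy as np

M = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])   # an arbitrary square matrix
S = (M + M.T) / 2              # symmetric part, an element of S_n
A = (M - M.T) / 2              # anti-symmetric part, an element of A_n
# S + A reassembles M exactly.
```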
any matrix (or vector) $N$, we can compute $\overline{N}$ by applying complex conjugation to each entry of $N$. Compute $(\overline{x})^T$. Then compute $(\overline{x}^T M x)^T$. Note that for matrices, $\overline{AB + C} = \overline{A}\,\overline{B} + \overline{C}$. (h) Show that $\overline{\lambda} = \lambda$. Using the result of a previous part of this problem, what doe
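The computation behind this problem can be checked numerically: for a Hermitian matrix $M$ (one equal to its conjugate transpose), the quadratic form $\overline{x}^T M x$ is always a real number. The vector and matrix below are hypothetical examples, not the ones from the exercise.

```python
import numpy as np

x = np.array([[1 + 2j], [3 - 1j]])   # a hypothetical complex column vector
M = np.array([[2, 1j],
              [-1j, 3]])             # Hermitian: M equals M.conj().T
# Entrywise conjugation followed by transposition is .conj().T in numpy.
quadratic_form = (x.conj().T @ M @ x).item()
# For Hermitian M the imaginary part of x-bar^T M x vanishes.
```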
domain; $\operatorname{ran} f = f(S)$. For this reason, the range of $f$ is also sometimes called the image of $f$ and is sometimes denoted $\operatorname{im}(f)$ or $f(S)$. We have
seen that the range of a matrix is always a
span of vectors, and hence a vector space.
Note that we prefer the phr
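Since the range of a matrix is the span of its columns, membership in the range is a rank question: a vector $b$ lies in the range of $M$ exactly when appending $b$ as an extra column does not increase the rank. The matrix and vectors below are hypothetical examples.

```python
import numpy as np

M = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
b_in = np.array([2., 3., 5.])    # = 2*(first column) + 3*(second column)
b_out = np.array([1., 0., 0.])   # not a combination of the columns

def in_range(M, b):
    """b is in the span of M's columns iff appending b keeps the rank."""
    return np.linalg.matrix_rank(np.column_stack([M, b])) == np.linalg.matrix_rank(M)
```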
$\left(\tfrac{1}{\sqrt{2}},\, -\tfrac{1}{\sqrt{2}},\, 0,\, 0\right)$, $\left(\tfrac{1}{\sqrt{6}},\, \tfrac{1}{\sqrt{6}},\, -\tfrac{2}{\sqrt{6}},\, 0\right)$, and $\left(\tfrac{\sqrt{3}}{6},\, \tfrac{\sqrt{3}}{6},\, \tfrac{\sqrt{3}}{6},\, -\tfrac{\sqrt{3}}{2}\right)$ form an orthonormal basis for $L^{\perp}$. Moreover, we have
$$\mathbb{R}^{4} = L \oplus L^{\perp} = \left\{\begin{pmatrix}c\\ c\\ c\\ c\end{pmatrix} \;\middle|\; c \in \mathbb{R}\right\} \oplus \left\{\begin{pmatrix}x\\ y\\ z\\ w\end{pmatrix} \in \mathbb{R}^{4} \;\middle|\; x+y+z+w=0\right\},$$
a decomposition of $\mathbb{R}^{4}$ into a line and its three-dimensional orthogonal complement. Notice that fo
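The decomposition of $\mathbb{R}^4$ into the line spanned by $(1,1,1,1)$ and the hyperplane $x+y+z+w=0$ can be carried out for any vector by orthogonal projection. The test vector below is a hypothetical choice.

```python
import numpy as np

d = np.array([1., 1., 1., 1.])    # direction of the line L
v = np.array([3., 1., 4., 2.])    # a hypothetical vector to decompose
along = (d @ v) / (d @ d) * d     # projection onto L
perp = v - along                  # remainder lies in L-perp: entries sum to 0
```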
$= \{0_W\}$, $U + V = U \oplus V$. When $U \cap V \neq \{0_W\}$, $U + V \neq U \oplus V$. This distinction is important because the direct sum has a very nice property: Theorem 14.6.1. If $w \in U \oplus V$ then there is only one way to write $w$ as the sum of a vector in $U$ and a vector in $V$. Proof
eigenvector, we can pick a value for $x$. Setting $x = 1$ is convenient, and gives the eigenvector $v_1 = \begin{pmatrix}1\\ 4\end{pmatrix}$. $\lambda = 2$: We solve the linear system $\begin{pmatrix}4 & 2\\ 16 & 8\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix}$. Here again both equations agree, because we chose $\lambda$ to make the system singular. We see that $y = 2$
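The hand computation described here, pick an eigenvalue $\lambda$, note that $M - \lambda I$ is singular so its two equations agree, set $x = 1$, and solve for $y$, can be verified numerically. The matrix below is a hypothetical example, not the (garbled) one in the text.

```python
import numpy as np

M = np.array([[0., 1.],
              [2., 1.]])   # hypothetical 2x2 matrix with eigenvalues 2 and -1
lam = 2.0
# The first row of (M - lam*I) reads -2x + y = 0, so x = 1 forces y = 2.
v = np.array([1., 2.])
# v is an eigenvector: M v equals lam * v.
```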
Example 14.5 QR Decomposition. In Chapter 7, Section 7.7 teaches you how to solve linear systems by decomposing a matrix $M$ into a product of lower and upper triangular matrices, $M = LU$. The Gram-Schmidt procedure suggests another matrix decomposition, $M = Q$
eigenvalue (see Review Problem 3). Let $P$ be the square matrix of orthonormal column vectors $P = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix}$. While $x_1$ is an eigenvector for $M$, the others are not necessarily eigenvectors for $M$. Then $MP = \begin{pmatrix} \lambda_1 x_1 & Mx_2 & \cdots & Mx_n \end{pmatrix}$.
Diagonalizing Symmetric Matrices
the matrix $M - \lambda I$ is singular, and so we require that $\det(\lambda I - M) = 0$. (To save writing many minus signs, compute $\det(M - \lambda I)$ instead, which is equivalent if you only need the roots.)
Eigenvalues and Eigenvectors
Figure 12.2: Don't forget the characteristic polynomial; you will ne
of this book, not matrices; matrices are merely a convenient way of doing computations. Change of Basis Example. Let's now calculate how the matrix of a linear transformation changes when changing basis. To wit, let $L : V \to W$ with matrix $M = (m^i_{\ j})$ in the or
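The change-of-basis computation sketched here can be mirrored numerically. Under the usual convention (columns of $P$ are the new input basis vectors written in the old input basis, and likewise $Q$ for the output side), the matrix transforms as $M' = Q^{-1} M P$. All matrices below are hypothetical examples.

```python
import numpy as np

M = np.array([[1., 2.],
              [0., 1.]])   # matrix of L in the old bases (hypothetical)
P = np.array([[1., 1.],
              [0., 1.]])   # change of input basis (hypothetical)
Q = np.array([[2., 0.],
              [1., 1.]])   # change of output basis (hypothetical)
M_new = np.linalg.inv(Q) @ M @ P   # matrix of L in the new bases
# Undoing both basis changes recovers the original matrix.
```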
answered. How do we find eigenvectors and
their eigenvalues? How many eigenvalues
and (independent) eigenvectors does a given
linear transformation have? When can a
linear transformation be diagonalized? We will
start by trying to find the eigenvectors fo
0 is a homogeneous linear equation, linear
combinations of solutions are solutions; in
other words the kernel ker(w) is a vector
space. Given the linear function W, some
vectors are now more special than others. We
can use musical intuition to do more! If
determining the string's shape. The vector $v$ is called an eigenvector and $\lambda$ its corresponding eigenvalue. The solution sets for each $\lambda$ are called $V_\lambda$. For any $\lambda$ the set $V_\lambda$ is a vector space, since elements of this set are solutions to the homogeneous equation $(L - \lambda)v$
$w_i = \sum_{j} u_j (u_j \cdot w_i)$. Thus the matrix for the change of basis from $T$ to $R$ is given by $P = (p^j_{\ i}) = (u_j \cdot w_i)$.
14.3 Relating Orthonormal Bases
We would like to calculate the product $P P^T$. For that, we first develop a dirty trick for products of dot
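The punchline of the calculation can be previewed numerically: when both bases are orthonormal, the matrix of dot products $P$ satisfies $P P^T = I$, i.e. $P$ is an orthogonal matrix. The two orthonormal bases of $\mathbb{R}^2$ below are hypothetical examples.

```python
import numpy as np

# Two hypothetical orthonormal bases of R^2:
u = [np.array([1., 0.]), np.array([0., 1.])]        # the standard basis
theta = 0.3
w = [np.array([np.cos(theta), np.sin(theta)]),      # a rotated basis
     np.array([-np.sin(theta), np.cos(theta)])]
# Matrix of dot products between the two bases:
P = np.array([[ui @ wj for wj in w] for ui in u])
# P is orthogonal: P times its transpose is the identity.
```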
by $L\begin{pmatrix}x\\ y\\ z\end{pmatrix} = \begin{pmatrix}x + y\\ x + z\\ y + z\end{pmatrix}$. Let $e_i$ be the vector with a one in the $i$th position and zeros in all other positions. (a) Find $Le_i$ for each $i = 1, 2, 3$. (b) Given a matrix $M = \begin{pmatrix} m^1_{\ 1} & m^1_{\ 2} & m^1_{\ 3} \\ m^2_{\ 1} & m^2_{\ 2} & m^2_{\ 3} \\ m^3_{\ 1} & m^3_{\ 2} & m^3_{\ 3} \end{pmatrix}$, what can you say about $Me_i$ for
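The point of part (b) can be checked directly: multiplying $M$ by the standard basis vector $e_i$ extracts the $i$th column of $M$. The entries below are a hypothetical example.

```python
import numpy as np

M = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])   # hypothetical entries m^i_j
e2 = np.array([0., 1., 0.])    # a one in the second position
column2 = M @ e2               # this is exactly the second column of M
```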
$w \cdot u = 0$ for all $u \in U$ is the orthogonal complement of $U$ in $W$. Remark: The symbols $U^{\perp}$ are often read as "$U$-perp". This is the set of all vectors in $W$ orthogonal to every vector in $U$. Notice also that in the above definition we have implicitly assumed that the in
that $S$ is called the domain of $f$, and $T$ is called the codomain or target of $f$. We now formally introduce a term that should be familiar to you from many previous courses.
Kernel, Range, Nullity, Rank
16.1 Range
Definition. The range of a function $f : S$
machinery and results are available in this case. 14.1 Properties of the Standard Basis. The standard notion of the length of a vector $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$ is $|x| = \sqrt{x \cdot x} = \sqrt{(x_1)^2 + (x_2)^2 + \cdots + (x_n)^2}$. The canonical/standard basis in $\mathbb{R}^n$: $e_1 = (1, 0, \ldots$
each row and column? What are their
determinants? (Note: These matrices are
known as permutation matrices.) (c) Given that $L : \mathbb{R}^3 \to \mathbb{R}^3$ is linear and $L\begin{pmatrix}x\\ y\\ z\end{pmatrix} = \begin{pmatrix}2y - z\\ 3x - 2z\\ x + y\end{pmatrix}$, write the matrix $M$ for $L$ in the standard basis, and two reorderings of the standa
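The defining property of permutation matrices, exactly one 1 in each row and each column, and the fact that their determinants are $\pm 1$ (the sign of the permutation), can be seen on two small examples. Both matrices below are hypothetical illustrations.

```python
import numpy as np

P_swap = np.array([[0., 1., 0.],
                   [1., 0., 0.],
                   [0., 0., 1.]])   # swaps the first two coordinates (odd permutation)
P_cycle = np.array([[0., 0., 1.],
                    [1., 0., 0.],
                    [0., 1., 0.]])  # cyclic shift of coordinates (even permutation)
# det(P_swap) = -1 and det(P_cycle) = +1.
```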
$q^k_{\ j}$. Then we can write $v_j = \sum_{k} \sum_{i} v_i\, p^i_{\ k}\, q^k_{\ j}$. But $\sum_{k} p^i_{\ k}\, q^k_{\ j}$ is the $i, j$ entry of the product matrix $PQ$. Since the expression for $v_j$ in the basis $S$ is $v_j$ itself, $PQ$ maps each $v_j$ to itself. As a result, each $v_j$ is an eigenvector for $PQ$
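The conclusion of this argument, that a change of basis followed by the reverse change of basis is the identity, so $Q = P^{-1}$, is easy to verify numerically. The change-of-basis matrix below is a hypothetical example.

```python
import numpy as np

P = np.array([[1., 2.],
              [1., 3.]])    # hypothetical change of basis from S to T
Q = np.linalg.inv(P)        # the reverse change of basis, from T back to S
# Composing the two in either order gives the identity, so every vector
# (in particular each basis vector v_j) is fixed by P @ Q.
```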
answers the question "What is diagonalization?" is invertible because its determinant is 1. Therefore, the eigenvectors of $M$ form a basis of $\mathbb{R}^2$, and so $M$ is diagonalizable. Moreover, because the columns of $P$ are the components of eigenvectors, $MP = \begin{pmatrix} Mv_1 & Mv_2$
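The diagonalization recipe described here, put the eigenvectors into the columns of $P$ and compute $P^{-1} M P$, can be run on a small example. The matrix and its eigenvectors below are hypothetical choices.

```python
import numpy as np

M = np.array([[2., 1.],
              [1., 2.]])        # hypothetical matrix with eigenvalues 3 and 1
v1 = np.array([1., 1.])         # eigenvector for eigenvalue 3
v2 = np.array([1., -1.])        # eigenvector for eigenvalue 1
P = np.column_stack([v1, v2])   # eigenvectors as columns
D = np.linalg.inv(P) @ M @ P    # diagonal matrix of eigenvalues
```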
Then: $e_i \cdot e_j = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j. \end{cases}$ Moreover, for a diagonal matrix $D$ with diagonal entries $\lambda_1, \ldots, \lambda_n$, we can write $D = \lambda_1 e_1 e_1^T + \cdots + \lambda_n e_n e_n^T$. 14.2 Orthogonal and Orthonormal Bases. There are many other bases that behave in the same way as the standard basis. As such, we
matrix. Explain why your formula must work for any real $n \times n$ symmetric matrix. 5. If $M$ is not square then it cannot be symmetric. However, $MM^T$ and $M^TM$ are symmetric, and therefore diagonalizable. (a) Is it the case that all of the eigenvalues of $MM^T$ must
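As a numerical sanity check for this problem, one can form $MM^T$ for a non-square matrix and inspect its eigenvalues; for the hypothetical $M$ below they all come out nonnegative (the code demonstrates, it does not prove, the general claim).

```python
import numpy as np

M = np.array([[1., 2., 0.],
              [0., 1., 3.]])          # hypothetical 2x3 (non-square) matrix
G = M @ M.T                           # symmetric even though M is not square
eigenvalues = np.linalg.eigvalsh(G)   # eigvalsh is for symmetric matrices
```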
similar hold for $3 \times 3$ matrices? (Try assuming that the matrix of $M$ is diagonal to answer this.) 9. Discrete dynamical system. Let $M$ be the matrix given by $M = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}$. Given any vector $v(0) = \begin{pmatrix} x(0) \\ y(0) \end{pmatrix}$, we can create an infinite sequence of vectors $v(1)$,
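The sequence in this problem is generated by repeated multiplication, $v(t+1) = M\,v(t)$. A few steps of the iteration, starting from a hypothetical $v(0)$, look like this:

```python
import numpy as np

M = np.array([[3., 2.],
              [2., 3.]])
v = np.array([1., 0.])          # a hypothetical starting vector v(0)
trajectory = [v]
for _ in range(3):              # compute v(1), v(2), v(3)
    v = M @ v
    trajectory.append(v)
# trajectory holds v(0) = (1,0), v(1) = (3,2), v(2) = (13,12), v(3) = (63,62)
```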
space properties are inherited from the fact that $V$ itself is a vector space. In other words, the subspace theorem (9.1.1, chapter 9) ensures that $V_{\lambda} := \{ v \in V \mid Lv = \lambda v \}$ is a subspace of $V$. Eigenspaces. Reading homework: problem 3. You can now attempt the se