Instructor's Solutions Manual

Elementary Linear Algebra with Applications
Ninth Edition

Bernard Kolman, Drexel University
David R. Hill, Temple University
Editorial Director, Computer Science, Engineering, and Advanced Mathematics: Marcia J. Horton
Senior Editor: Holly Stark
Editorial Assistant: Jennifer Lonschein
Senior Managing Editor/Production Editor: Scott Disanno
Art Director: Juan López
Cover Designer: Michael Fruhbeis
Art Editor: Thomas Benfatti
Manufacturing Buyer: Lisa McDowell
Marketing Manager: Tim Galligan
Cover Image: © William T. Williams, Artist, 1969. Trane, 1969. Acrylic on canvas, 108 × 84.
Collection of The Studio Museum in Harlem. Gift of Charles Cowles, New York.
© 2008, 2004, 2000, 1996 by Pearson Education, Inc.
Pearson Education, Inc.
Upper Saddle River, New Jersey 07458
Earlier editions © 1991, 1986, 1982 by KTI;
© 1977, 1970 by Bernard Kolman
All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in
writing from the publisher.
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
ISBN 0-13-229655-1
Pearson Education, Ltd., London
Pearson Education Australia PTY. Limited, Sydney
Pearson Education Singapore, Pte., Ltd.
Pearson Education North Asia Ltd., Hong Kong
Pearson Education Canada, Ltd., Toronto
Pearson Educación de México, S.A. de C.V.
Pearson Education Japan, Tokyo
Pearson Education Malaysia, Pte. Ltd.
Contents
Preface  iii

1 Linear Equations and Matrices  1
1.1 Systems of Linear Equations  1
1.2 Matrices  2
1.3 Matrix Multiplication  3
1.4 Algebraic Properties of Matrix Operations  7
1.5 Special Types of Matrices and Partitioned Matrices  9
1.6 Matrix Transformations  14
1.7 Computer Graphics  16
1.8 Correlation Coefficient  18
Supplementary Exercises  19
Chapter Review  24

2 Solving Linear Systems  27
2.1 Echelon Form of a Matrix  27
2.2 Solving Linear Systems  28
2.3 Elementary Matrices; Finding A^−1  30
2.4 Equivalent Matrices  32
2.5 LU-Factorization (Optional)  33
Supplementary Exercises  33
Chapter Review  35

3 Determinants  37
3.1 Definition  37
3.2 Properties of Determinants  37
3.3 Cofactor Expansion  39
3.4 Inverse of a Matrix  41
3.5 Other Applications of Determinants  42
Supplementary Exercises  42
Chapter Review  43

4 Real Vector Spaces  45
4.1 Vectors in the Plane and in 3-Space  45
4.2 Vector Spaces  47
4.3 Subspaces  48
4.4 Span  51
4.5 Span and Linear Independence  52
4.6 Basis and Dimension  54
4.7 Homogeneous Systems  56
4.8 Coordinates and Isomorphisms  58
4.9 Rank of a Matrix  62
Supplementary Exercises  64
Chapter Review  69

5 Inner Product Spaces  71
5.1 Standard Inner Product on R2 and R3  71
5.2 Cross Product in R3 (Optional)  74
5.3 Inner Product Spaces  77
5.4 Gram-Schmidt Process  81
5.5 Orthogonal Complements  84
5.6 Least Squares (Optional)  85
Supplementary Exercises  86
Chapter Review  90

6 Linear Transformations and Matrices  93
6.1 Definition and Examples  93
6.2 Kernel and Range of a Linear Transformation  96
6.3 Matrix of a Linear Transformation  97
6.4 Vector Space of Matrices and Vector Space of Linear Transformations (Optional)  99
6.5 Similarity  102
6.6 Introduction to Homogeneous Coordinates (Optional)  103
Supplementary Exercises  105
Chapter Review  106

7 Eigenvalues and Eigenvectors  109
7.1 Eigenvalues and Eigenvectors  109
7.2 Diagonalization and Similar Matrices  115
7.3 Diagonalization of Symmetric Matrices  120
Supplementary Exercises  123
Chapter Review  126

8 Applications of Eigenvalues and Eigenvectors (Optional)  129
8.1 Stable Age Distribution in a Population; Markov Processes  129
8.2 Spectral Decomposition and Singular Value Decomposition  130
8.3 Dominant Eigenvalue and Principal Component Analysis  130
8.4 Differential Equations  131
8.5 Dynamical Systems  132
8.6 Real Quadratic Forms  133
8.7 Conic Sections  134
8.8 Quadric Surfaces  135

10 MATLAB Exercises  137

Appendix B Complex Numbers  163
B.1 Complex Numbers  163
B.2 Complex Numbers in Linear Algebra  165
Preface
This manual is to accompany the Ninth Edition of Bernard Kolman and David R. Hill's Elementary Linear
Algebra with Applications. Answers to all even-numbered exercises and detailed solutions to all theoretical
exercises are included. The manual was prepared by Dennis Kletzing, Stetson University. It contains many of the
solutions found in the Eighth Edition, as well as solutions to new exercises included in the Ninth Edition of
the text.
Chapter 1
Linear Equations and Matrices
Section 1.1, p. 8
2. x = 1, y = 2, z = 2.
4. No solution.
6. x = 13 + 10t, y = 8 − 8t, t any real number.
8. Inconsistent; no solution.
10. x = 2, y = 1.
12. No solution.
14. x = 1, y = 2, z = 2.
16. (a) For example: s = 0, t = 0 is one answer.
(b) For example: s = 3, t = 4 is another.
(c) s = 2.
18. Yes. The trivial solution is always a solution to a homogeneous system.
20. x = 1, y = 1, z = 4.
22. r = 3.
24. If x1 = s1 , x2 = s2 , . . . , xn = sn satisfy each equation of (2) in the original order, then those
same numbers satisfy each equation of (2) when the equations are listed with one of the original ones
interchanged, and conversely.
25. If x1 = s1, x2 = s2, . . . , xn = sn is a solution to (2), then the pth and qth equations are satisfied. That is,
ap1 s1 + · · · + apn sn = bp
aq1 s1 + · · · + aqn sn = bq.
Thus, for any real number r,
(ap1 + r aq1)s1 + · · · + (apn + r aqn)sn = bp + r bq.
Then if the qth equation in (2) is replaced by the preceding equation, the values x1 = s1, x2 = s2, . . . ,
xn = sn are a solution to the new linear system, since they satisfy each of the equations.
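The invariance argument above is easy to check numerically. A minimal sketch in plain Python, using a made-up two-equation system (not one from the text): adding a multiple of one equation to another leaves a known solution valid.

```python
# Each equation is (coefficients, right-hand side); eval_eq returns 0 when sol satisfies it.
def eval_eq(coeffs, rhs, sol):
    return sum(c * s for c, s in zip(coeffs, sol)) - rhs

# Hypothetical system: x + 2y = 5, 3x - y = 1, with solution (1, 2).
eqs = [([1, 2], 5), ([3, -1], 1)]
sol = (1, 2)
assert all(eval_eq(c, b, sol) == 0 for c, b in eqs)

# Replace equation q by (equation q) + r * (equation p).
r, p, q = 4, 0, 1
new_coeffs = [cq + r * cp for cp, cq in zip(eqs[p][0], eqs[q][0])]
new_rhs = eqs[q][1] + r * eqs[p][1]
assert eval_eq(new_coeffs, new_rhs, sol) == 0  # (1, 2) still satisfies the new equation
```

The same check works for any r, which is exactly the content of the proof.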
26. (a) A unique point.
(b) There are infinitely many points.
(c) No points simultaneously lie in all three planes.
28. [Figure: pairs of curves C1 and C2 showing no points of intersection, one point of intersection, two points of intersection, and infinitely many points of intersection (C1 = C2).]
30. 20 tons of low-sulfur fuel, 20 tons of high-sulfur fuel.
32. 3.2 ounces of food A, 4.2 ounces of food B, and 2 ounces of food C.
34. (a)
p(1) = a(1)2 + b(1) + c = a + b + c = 5
p(1) = a(1)2 + b(1) + c = a b + c = 1
p(2) = a(2)2 + b(2) + c = 4a + 2b + c = 7.
(b) a = 5, b = 3, c = 7.
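The three conditions in part (a) form a linear system in a, b, c, and the stated solution can be verified directly. A sketch in plain Python, using the data points (1, −5), (−1, 1), (2, 7):

```python
# p(x) = a*x^2 + b*x + c through (1, -5), (-1, 1), (2, 7); part (b) supplies the coefficients.
def p(x, a=5, b=-3, c=-7):
    return a * x * x + b * x + c

assert p(1) == -5    # a + b + c   = -5
assert p(-1) == 1    # a - b + c   =  1
assert p(2) == 7     # 4a + 2b + c =  7
```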
Section 1.2, p. 19
2. (a) A = [0–1 matrix]. (b) A = [0–1 matrix].
4. a = 3, b = 1, c = 8, d = 2.
6. (a) C + E = E + C = [matrix]. (b) Impossible. (c) [matrix]. (d) [matrix]. (e) [matrix]. (f) Impossible.
8. (a) A^T = [matrix], (A^T)^T = A. (b) [matrix]. (c) [matrix]. (d) [matrix]. (e) [matrix]. (f) [matrix].
10. Yes: 2 [1 0; 0 1] + 1 [1 0; 0 0] = [3 0; 0 2].
12. [A linear combination of matrices equal to the given matrix.]
14. Because the edges can be traversed in either direction.
16. Let x = [x1; x2; …; xn] be an n-vector. Then
x + 0 = [x1; x2; …; xn] + [0; 0; …; 0] = [x1 + 0; x2 + 0; …; xn + 0] = [x1; x2; …; xn] = x.
18. Σ_{i=1}^{n} Σ_{j=1}^{m} aij = (a11 + a12 + · · · + a1m) + (a21 + a22 + · · · + a2m) + · · · + (an1 + an2 + · · · + anm)
= (a11 + a21 + · · · + an1) + (a12 + a22 + · · · + an2) + · · · + (a1m + a2m + · · · + anm)
= Σ_{j=1}^{m} Σ_{i=1}^{n} aij.
19. (a) True. Σ_{i=1}^{n} (ai + 1) = Σ_{i=1}^{n} ai + Σ_{i=1}^{n} 1 = Σ_{i=1}^{n} ai + n.
(b) True. Σ_{i=1}^{n} Σ_{j=1}^{m} 1 = Σ_{i=1}^{n} m = mn.
(c) True. (Σ_{i=1}^{n} ai)(Σ_{j=1}^{m} bj) = a1 Σ_{j=1}^{m} bj + a2 Σ_{j=1}^{m} bj + · · · + an Σ_{j=1}^{m} bj = (a1 + a2 + · · · + an) Σ_{j=1}^{m} bj = Σ_{i=1}^{n} Σ_{j=1}^{m} ai bj.
20. new salaries = u + .08u = 1.08u.
Section 1.3, p. 30
2. (a) 4. (b) 0. (c) 1. (d) 1.
4. x = 5.
6. x = 2, y = 3.
8. x = 5.
10. x = 6/5, y = 12/5.
12. (a) Impossible. (b) [matrix]. (c) [matrix].
14. (a) [matrix]. (b) Same as (a). (c) [matrix]; same. (d) Same as (c). (e) Impossible. (f) [matrix].
16. (a) 1. (b) 6. (c) [matrix]. (d) [matrix]. (e) 10. (f) [matrix]. (g) Impossible.
18. DI2 = I2 D = D.
20. [matrix].
22. (a) [matrix]. (b) [matrix].
24. col1(AB) = 1·a1 + 3·a2 + 2·a3; col2(AB) = 1·a1 + 2·a2 + 4·a3, where a1, a2, a3 denote the columns of A.
26. (a) 5. (b) BA^T.
28. Let A = [aij] be m × p and B = [bij] be p × n.
(a) Let the ith row of A consist entirely of zeros, so that aik = 0 for k = 1, 2, . . . , p. Then the (i, j)
entry in AB is Σ_{k=1}^{p} aik bkj = 0 for j = 1, 2, . . . , n.
(b) Let the jth column of A consist entirely of zeros, so that akj = 0 for k = 1, 2, . . . , m. Then the
(i, j) entry in BA is Σ_{k=1}^{m} bik akj = 0 for i = 1, 2, . . . , m.
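Part (a) can be illustrated with a small numeric example. A sketch in plain Python with made-up matrices (not ones from the text):

```python
def matmul(A, B):
    # Naive matrix product: (AB)[i][j] = sum_k A[i][k] * B[k][j].
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [0, 0], [3, 4]]   # row 1 (0-indexed) is all zeros
B = [[5, 6], [7, 8]]
AB = matmul(A, B)
assert AB[1] == [0, 0]         # the zero row of A propagates to AB
```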
30. (a), (b), (c) [The linear system written in matrix form Ax = b; the coefficient layout was not preserved here.]
32. [The system written as a matrix equation with unknowns x1, x2 and right side (5, 4).]
34. (a) 2x1 + x2 + 3x3 + 4x4 = 0
3x1 − x2 + 2x3 = 3
−2x1 + x2 − 4x3 + 3x4 = 2
(b) Same as (a).
36. (a) x1 [vector] + x2 [vector] + x3 [vector] = [vector]. (b) x1 [vector] + x2 [vector] = [vector].
38. (a) [2 × 2 matrix][x1; x2] = [vector]. (b) [1 2 1; 1 1 2; 2 0 2][x1; x2; x3] = [0; 0; 0].
39. We have
u·v = Σ_{i=1}^{n} ui vi = [u1 u2 · · · un][v1; v2; …; vn] = u^T v.
40. Possible answer: [1 0 0; 2 0 0; 3 0 0].
42. (a) Can say nothing.
(b) Can say nothing.
43. (a) Tr(cA) = Σ_{i=1}^{n} c aii = c Σ_{i=1}^{n} aii = c Tr(A).
(b) Tr(A + B) = Σ_{i=1}^{n} (aii + bii) = Σ_{i=1}^{n} aii + Σ_{i=1}^{n} bii = Tr(A) + Tr(B).
(c) Let AB = C = [cij]. Then
Tr(AB) = Tr(C) = Σ_{i=1}^{n} cii = Σ_{i=1}^{n} Σ_{k=1}^{n} aik bki = Σ_{k=1}^{n} Σ_{i=1}^{n} bki aik = Tr(BA).
(d) Since a^T_ii = aii, Tr(A^T) = Σ_{i=1}^{n} a^T_ii = Σ_{i=1}^{n} aii = Tr(A).
(e) Let A^T A = B = [bij]. Then bii = Σ_{j=1}^{n} a^T_ij aji = Σ_{j=1}^{n} (aji)^2, so
Tr(B) = Tr(A^T A) = Σ_{i=1}^{n} bii = Σ_{i=1}^{n} Σ_{j=1}^{n} (aij)^2 ≥ 0. Hence, Tr(A^T A) ≥ 0.
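These trace identities are easy to spot-check numerically. A sketch in plain Python with arbitrary 2 × 2 matrices (chosen for illustration only):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
assert trace(matmul(A, B)) == trace(matmul(B, A))   # part (c): Tr(AB) = Tr(BA)
At = [[A[j][i] for j in range(2)] for i in range(2)]
assert trace(At) == trace(A)                        # part (d): Tr(A^T) = Tr(A)
assert trace(matmul(At, A)) >= 0                    # part (e): Tr(A^T A) >= 0
```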
44. (a) 4.
(b) 1.
(c) 3.
45. We have Tr(AB − BA) = Tr(AB) − Tr(BA) = 0, while Tr([1 0; 0 1]) = 2.
46. (a) Let A = [aij] and B = [bij] be m × n and n × p, respectively. Then bj = [b1j; b2j; …; bnj], and the
ith entry of A bj is Σ_{k=1}^{n} aik bkj, which is exactly the (i, j) entry of AB.
(b) The ith row of AB is [Σ_k aik bk1  Σ_k aik bk2  · · ·  Σ_k aik bkp]. Since ai = [ai1 ai2 · · · ain], we have
ai B = [Σ_k aik bk1  Σ_k aik bk2  · · ·  Σ_k aik bkp].
This is the same as the ith row of AB.
47. Let A = [aij] and B = [bij] be m × n and n × p, respectively. Then the jth column of AB is
(AB)j = [a11 b1j + · · · + a1n bnj; …; am1 b1j + · · · + amn bnj]
= b1j [a11; …; am1] + · · · + bnj [a1n; …; amn]
= b1j col1(A) + · · · + bnj coln(A).
Thus the jth column of AB is a linear combination of the columns of A with coefficients the entries in bj.
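The identity col_j(AB) = b1j col1(A) + · · · + bnj coln(A) can be checked on a small example; plain Python, with made-up 2 × 2 matrices:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
AB = matmul(A, B)
j = 0  # first column of AB
# b1j * col1(A) + b2j * col2(A), entry by entry:
combo = [B[0][j] * A[i][0] + B[1][j] * A[i][1] for i in range(2)]
assert combo == [AB[i][j] for i in range(2)]
```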
48. The value of the inventory of the four types of items.
50. (a) row1 (A) col1 (B ) = 80(20) + 120(10) = 2800 grams of protein consumed daily by the males.
(b) row2 (A) col2 (B ) = 100(20) + 200(20) = 6000 grams of fat consumed daily by the females.
51. (a) No. If x = (x1, x2, . . . , xn), then x·x = x1^2 + x2^2 + · · · + xn^2 ≥ 0.
(b) x = 0.
52. Let a = (a1, a2, . . . , an), b = (b1, b2, . . . , bn), and c = (c1, c2, . . . , cn). Then
(a) a·b = Σ_{i=1}^{n} ai bi and b·a = Σ_{i=1}^{n} bi ai, so a·b = b·a.
(b) (a + b)·c = Σ_{i=1}^{n} (ai + bi)ci = Σ_{i=1}^{n} ai ci + Σ_{i=1}^{n} bi ci = a·c + b·c.
(c) (k a)·b = Σ_{i=1}^{n} (k ai)bi = k Σ_{i=1}^{n} ai bi = k(a·b).
53. The (i, i) entry of the matrix AA^T is
Σ_{k=1}^{n} aik a^T_ki = Σ_{k=1}^{n} aik aik = Σ_{k=1}^{n} (aik)^2.
Thus if AA^T = O, then each sum of squares Σ_{k=1}^{n} (aik)^2 equals zero, which implies aik = 0 for each i
and k. Thus A = O.
54. AC = [17 2 22; 18 3 23]. CA cannot be computed.
55. B^T B will be 6 × 6 while BB^T is 1 × 1.
Section 1.4, p. 40
1. Let A = [aij], B = [bij], C = [cij]. Then the (i, j) entry of A + (B + C) is aij + (bij + cij) and
that of (A + B) + C is (aij + bij) + cij. By the associative law for addition of real numbers, these two
entries are equal.
2. For A = [aij], let B = [−aij].
4. Let A = [aij], B = [bij], C = [cij]. Then the (i, j) entry of (A + B)C is Σ_{k=1}^{n} (aik + bik)ckj and that of
AC + BC is Σ_{k=1}^{n} aik ckj + Σ_{k=1}^{n} bik ckj. By the distributive and additive associative laws for real numbers,
these two expressions for the (i, j) entry are equal.
6. Let A = [aij], where aii = k and aij = 0 if i ≠ j, and let B = [bij]. Then, if i ≠ j, the (i, j) entry of
AB is Σ_{s=1}^{n} ais bsj = k bij, while if i = j, the (i, i) entry of AB is Σ_{s=1}^{n} ais bsi = k bii. Therefore AB = kB.
7. Let A = [aij] and C = [c1 c2 · · · cm]. Then CA is a 1 × n matrix whose ith entry is Σ_{j=1}^{m} cj aij.
Since Aj = [a1j; a2j; …; amj], the ith entry of Σ_{j=1}^{m} cj Aj is Σ_{j=1}^{m} cj aij.
8. (a) [cos 2θ  −sin 2θ; sin 2θ  cos 2θ]. (b) [cos 3θ  −sin 3θ; sin 3θ  cos 3θ]. (c) [cos kθ  −sin kθ; sin kθ  cos kθ].
(d) The result is true for p = 2 and 3 as shown in parts (a) and (b). Assume that it is true for p = k.
Then
A^(k+1) = A^k A = [cos kθ  −sin kθ; sin kθ  cos kθ][cos θ  −sin θ; sin θ  cos θ]
= [cos kθ cos θ − sin kθ sin θ   −(cos kθ sin θ + sin kθ cos θ); sin kθ cos θ + cos kθ sin θ   cos kθ cos θ − sin kθ sin θ]
= [cos(k + 1)θ  −sin(k + 1)θ; sin(k + 1)θ  cos(k + 1)θ].
Hence, it is true for all positive integers k.
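The induction in part (d) can be spot-checked numerically; a sketch in plain Python (floating point, so compare within a tolerance):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rot(t):
    # The rotation matrix [cos t  -sin t; sin t  cos t].
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

theta, k = 0.7, 5
Ak = rot(theta)
for _ in range(k - 1):
    Ak = matmul(Ak, rot(theta))      # A^k by repeated multiplication
expected = rot(k * theta)            # the closed form from part (d)
assert all(abs(Ak[i][j] - expected[i][j]) < 1e-12 for i in range(2) for j in range(2))
```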
10. Possible answers: A = [1 0; 0 1]; A = [0 1; 1 0]; A = [1/2 1/2; 1/2 1/2].
12. Possible answers: A = [1 1; −1 −1]; A = [0 0; 0 0]; A = [0 1; 0 0].
13. Let A = aij . The (i, j ) entry of r(sA) is r(saij ), which equals (rs)aij and s(raij ).
14. Let A = aij . The (i, j ) entry of (r + s)A is (r + s)aij , which equals raij + saij , the (i, j ) entry of
rA + sA.
16. Let A = aij , and B = bij . Then r(aij + bij ) = raij + rbij .
n
n
18. Let A = aij and B = bij . The (i, j ) entry of A(rB ) is
aik (rbkj ), which equals r
k=1
aik bkj , the
k=1
(i, j ) entry of r(AB ).
20. (1/6)A, k = 1/6.
22. 3.
24. If Ax = rx and y = sx, then Ay = A(sx) = s(Ax) = s(rx) = r(sx) = ry.
26. The (i, j ) entry of (AT )T is the (j, i) entry of AT , which is the (i, j ) entry of A.
27. (b) The (i, j ) entry of (A + B )T is the (j, i) entry of aij + bij , which is to say, aji + bji .
(d) Let A = aij and let bij = aji . Then the (i, j ) entry of (cA)T is the (j, i) entry of caij , which
is to say, cbij .
28. (A + B)^T = [matrix], (rA)^T = [matrix].
30. (a) [34; 17; 51]. (b) [34; 17; 51]. (c) B^T C is a real number (a 1 × 1 matrix).
32. Possible answers: A = [matrix], B = [matrix], C = [matrix]; or A = [2 0; 3 0], B = [0 0; 1 0], C = [0 0; 0 1].
33. The (i, j ) entry of cA is caij , which is 0 for all i and j only if c = 0 or aij = 0 for all i and j .
34. Let A = [a b; c d] be such that AB = BA for any 2 × 2 matrix B. Then in particular,
[a b; c d][1 0; 0 0] = [1 0; 0 0][a b; c d], i.e., [a 0; c 0] = [a b; 0 0],
so b = c = 0 and A = [a 0; 0 d]. Also
[a 0; 0 d][1 1; 0 0] = [1 1; 0 0][a 0; 0 d], i.e., [a a; 0 0] = [a d; 0 0],
which implies that a = d. Thus A = [a 0; 0 a] for some number a.
35. We have
(A − B)^T = (A + (−1)B)^T = A^T + ((−1)B)^T = A^T + (−1)B^T = A^T − B^T
by Theorem 1.4(d).
36. (a) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0.
(b) A(x1 − x2) = Ax1 − Ax2 = 0 − 0 = 0.
(c) A(rx1) = r(Ax1) = r0 = 0.
(d) A(rx1 + sx2) = r(Ax1) + s(Ax2) = r0 + s0 = 0.
37. We verify that x3 is also a solution:
Ax3 = A(rx1 + sx2) = rAx1 + sAx2 = rb + sb = (r + s)b = b.
38. If Ax1 = b and Ax2 = b, then A(x1 − x2) = Ax1 − Ax2 = b − b = 0.
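Exercises 36 and 38 together say that the difference of two solutions of Ax = b solves the homogeneous system. A small check in plain Python with a made-up singular system (so that it has more than one solution):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [2, 4]]          # singular, so Ax = b has many solutions
b = [3, 6]
x1, x2 = [3, 0], [1, 1]       # two different solutions
assert matvec(A, x1) == b and matvec(A, x2) == b
diff = [u - v for u, v in zip(x1, x2)]
assert matvec(A, diff) == [0, 0]   # x1 - x2 solves Ax = 0
```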
Section 1.5, p. 52
1. (a) Let Im = dij so dij = 1 if i = j and 0 otherwise. Then the (i, j ) entry of Im A is
m
dik akj = dii aij
(since all other ds = 0)
k=1
= aij
(since dii = 1).
2. We prove that the product of two upper triangular matrices is upper triangular: Let A = [aij] with
aij = 0 for i > j; let B = [bij] with bij = 0 for i > j. Then AB = [cij] where cij = Σ_{k=1}^{n} aik bkj. For
i > j and each 1 ≤ k ≤ n, either i > k (and so aik = 0) or else k ≥ i > j (and so bkj = 0). Thus every
term in the sum for cij is 0, so cij = 0. Hence AB is upper triangular.
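The argument can be illustrated numerically; a sketch in plain Python with two arbitrary upper triangular 3 × 3 matrices:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]
B = [[7, 8, 9], [0, 1, 2], [0, 0, 3]]
C = matmul(A, B)
# Every entry below the diagonal of the product is zero.
assert all(C[i][j] == 0 for i in range(3) for j in range(3) if i > j)
```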
3. Let A = [aij] and B = [bij], where both aij = 0 and bij = 0 if i ≠ j. Then if AB = C = [cij], we
have cij = Σ_{k=1}^{n} aik bkj = 0 if i ≠ j.
4. A + B = [9 1 1; 0 2 7; 0 0 3] and AB = [18 5 11; 0 8 7; 0 0 0].
5. All diagonal matrices.
6. (a) [7 2; 3 10]. (b) [9 11; 22 13]. (c) [20 20; 4 76].
8. A^p A^q = (A A · · · A)(A A · · · A) (p factors, then q factors) = A^(p+q);
(A^p)^q = A^p A^p · · · A^p (q factors) = A^(p + p + · · · + p) = A^(pq).
9. We are given that AB = BA. For p = 2, (AB)^2 = (AB)(AB) = A(BA)B = A(AB)B = A^2 B^2.
Assume that for p = k, (AB)^k = A^k B^k. Then
(AB)^(k+1) = (AB)^k (AB) = A^k B^k A B = A^k (B^(k−1) A B) B = A^k (B^(k−2) A B^2) B = · · · = A^(k+1) B^(k+1).
Thus the result is true for p = k + 1. Hence it is true for all positive integers p. For p = 0, (AB)^0 =
In = A^0 B^0.
10. For p = 0, (cA)^0 = In = 1·In = c^0 A^0. For p = 1, cA = cA. Assume the result is true for p = k:
(cA)^k = c^k A^k. Then for k + 1:
(cA)^(k+1) = (cA)^k (cA) = c^k A^k c A = c^k (A^k c) A = c^k (c A^k) A = (c^k c)(A^k A) = c^(k+1) A^(k+1).
11. True for p = 0: (A^T)^0 = In = In^T = (A^0)^T. Assume true for p = n. Then
(A^T)^(n+1) = (A^T)^n A^T = (A^n)^T A^T = (A A^n)^T = (A^(n+1))^T.
12. True for p = 0: (A^0)^−1 = In^−1 = In. Assume true for p = n. Then
(A^(n+1))^−1 = (A^n A)^−1 = A^−1 (A^n)^−1 = A^−1 (A^−1)^n = (A^−1)^(n+1).
13. (kA)(1/k A^−1) = k(1/k)(A A^−1) = In and (1/k A^−1)(kA) = (1/k)k(A^−1 A) = In. Hence, (kA)^−1 = (1/k)A^−1 for
k ≠ 0.
14. (a) Let A = kIn. Then A^T = (kIn)^T = k In^T = k In = A.
(b) If k = 0, then A = kIn = 0In = O, which is singular. If k ≠ 0, then A^−1 = (kIn)^−1 = (1/k)In, so A
is nonsingular.
(c) No, the entries on the main diagonal do not have to be the same.
16. Possible answers: [a b; 0 a]. Infinitely many.
17. The result is false. Let A = [1 2; 3 4]. Then AA^T = [5 11; 11 25] and A^T A = [10 14; 14 20].
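The counterexample in Exercise 17 can be confirmed directly; a quick check in plain Python:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
At = [[A[j][i] for j in range(2)] for i in range(2)]   # transpose of A
assert matmul(A, At) == [[5, 11], [11, 25]]
assert matmul(At, A) == [[10, 14], [14, 20]]
assert matmul(A, At) != matmul(At, A)                  # AA^T and A^T A differ
```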
18. (a) A is symmetric if and only if A^T = A, or if and only if aij = a^T_ij = aji.
(b) A is skew symmetric if and only if A^T = −A, or if and only if a^T_ij = aji = −aij.
(c) aii = −aii, so aii = 0.
19. Since A is symmetric, AT = A and so (AT )T = AT .
20. The zero matrix.
21. (AAT )T = (AT )T AT = AAT .
22. (a) (A + AT )T = AT + (AT )T = AT + A = A + AT .
(b) (A − A^T)^T = A^T − (A^T)^T = A^T − A = −(A − A^T).
23. (Ak )T = (AT )k = Ak .
24. (a) (A + B )T = AT + B T = A + B .
(b) If AB is symmetric, then (AB )T = AB , but (AB )T = B T AT = BA, so AB = BA. Conversely, if
AB = BA, then (AB )T = B T AT = BA = AB , so AB is symmetric.
25. (a) Let A = [aij] be upper triangular, so that aij = 0 for i > j. Since A^T = [a^T_ij], where a^T_ij = aji,
we have a^T_ij = 0 for j > i, i.e., a^T_ij = 0 for i < j. Hence A^T is lower triangular.
(b) Proof is similar to that for (a).
26. Skew symmetric. To show this, let A be a skew symmetric matrix. Then A^T = −A. Therefore
(A^T)^T = A = −A^T. Hence A^T is skew symmetric.
27. If A is skew symmetric, A^T = −A. Thus aii = −aii, so aii = 0.
28. Suppose that A is skew symmetric, so A^T = −A. Then (A^k)^T = (A^T)^k = (−A)^k = −A^k if k is a
positive odd integer, so A^k is skew symmetric.
29. Let S = (1/2)(A + A^T) and K = (1/2)(A − A^T). Then S is symmetric and K is skew symmetric, by
Exercise 18. Thus
S + K = (1/2)(A + A^T + A − A^T) = (1/2)(2A) = A.
Conversely, suppose A = S + K is any decomposition of A into the sum of a symmetric and skew
symmetric matrix. Then
A^T = (S + K)^T = S^T + K^T = S − K,
A + A^T = (S + K) + (S − K) = 2S, so S = (1/2)(A + A^T);
A − A^T = (S + K) − (S − K) = 2K, so K = (1/2)(A − A^T).
30. S = [symmetric 3 × 3 matrix] and K = [skew symmetric 3 × 3 matrix].
31. Form [2 3; 4 6][w x; y z] = [1 0; 0 1]. Since the linear systems
2w + 3y = 1
4w + 6y = 0
and
2x + 3z = 0
4x + 6z = 1
have no solutions, we conclude that the given matrix is singular.
32. D^−1 = [1/4 0 0; 0 1/2 0; 0 0 1/3].
34. A = [2 × 2 matrix].
36. (a) [matrix][4; 6] = [16; 22]. (b) [38; 53].
38. [9; 6].
40. [8; 9].
42. Possible answer: [1 0; 0 0] + [0 0; 0 1] = [1 0; 0 1].
43. Possible answer: [1 2; 3 4] + [−1 −2; −3 −4] = [0 0; 0 0].
44. The conclusion of the corollary is true for r = 2, by Theorem 1.6. Suppose r ≥ 3 and that the
conclusion is true for a sequence of r − 1 matrices. Then
(A1 A2 · · · Ar)^−1 = [(A1 A2 · · · A(r−1)) Ar]^−1 = Ar^−1 (A1 A2 · · · A(r−1))^−1 = Ar^−1 A(r−1)^−1 · · · A2^−1 A1^−1.
45. We have A1 A = In = AA1 and since inverses are unique, we conclude that (A1 )1 = A.
46. Assume that A is nonsingular, so that there exists an n n matrix B such that AB = In . Exercise 28
in Section 1.3 implies that AB has a row consisting entirely of zeros. Hence, we cannot have AB = In .
47. Let A = diag(a11, a22, . . . , ann), where aii ≠ 0 for i = 1, 2, . . . , n. Then
A^−1 = diag(1/a11, 1/a22, . . . , 1/ann),
as can be verified by computing AA^−1 = A^−1 A = In.
48. A^4 = [16 0 0; 0 81 0; 0 0 625].
49. A^p = diag(a11^p, a22^p, . . . , ann^p).
50. Multiply both sides of the equation by A^−1.
51. Multiply both sides by A^−1.
52. Form [a b; c d][w x; y z] = [1 0; 0 1]. This leads to the linear systems
aw + by = 1
cw + dy = 0
and
ax + bz = 0
cx + dz = 1.
A solution to these systems exists only if ad − bc ≠ 0. Conversely, if ad − bc ≠ 0, then a solution to
these linear systems exists and we find A^−1.
53. Ax = 0 implies that A^−1(Ax) = A^−1 0 = 0, so x = 0.
54. We must show that (A^−1)^T = (A^T)^−1. First, AA^−1 = In implies that (AA^−1)^T = In^T = In. Now
(AA^−1)^T = (A^−1)^T A^T, which means that (A^−1)^T = (A^T)^−1.
55. A + B = [3 × 3 matrix] is one possible answer.
56. A = [matrix], B = [matrix], and AB = [matrix].
57. A symmetric matrix. To show this, let A1, . . . , An be symmetric matrices and let x1, . . . , xn be scalars.
Then A1^T = A1, . . . , An^T = An. Therefore
(x1 A1 + · · · + xn An)^T = (x1 A1)^T + · · · + (xn An)^T = x1 A1^T + · · · + xn An^T = x1 A1 + · · · + xn An.
Hence the linear combination x1 A1 + · · · + xn An is symmetric.
58. A scalar matrix. To show this, let A1, . . . , An be scalar matrices and let x1, . . . , xn be scalars. Then
Ai = ci In for scalars c1, . . . , cn. Therefore
x1 A1 + · · · + xn An = x1(c1 In) + · · · + xn(cn In) = (x1 c1 + · · · + xn cn) In,
which is the scalar matrix whose diagonal entries are all equal to x1 c1 + · · · + xn cn.
59. (a) w1 = [5; 1], w2 = [19; 5], w3 = [65; 19], w4 = [214; 65]; u2 = 5, u3 = 19, u4 = 65, u5 = 214.
(b) w(n−1) = A^(n−1) w0.
60. (a) w1 = [4; 2], w2 = [8; 4], w3 = [16; 8].
(b) w(n−1) = A^(n−1) w0.
63. (b) In Matlab the following message is displayed.
Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate.
RCOND = 2.937385e-018
Then a computed inverse is shown which is useless. (RCOND above is an estimate of the condition
number of the matrix.)
(c) In Matlab a message similar to that in (b) is displayed.
64. (c) In Matlab, AB − BA is not O. It is a matrix each of whose entries has absolute value less than
1 × 10^−14.
65. (b) Let x be the solution from the linear system solver in Matlab and y = A^−1 B. A crude measure
of the difference in the two approaches is to look at max{|xi − yi| : i = 1, . . . , 10}. This value is
approximately 6 × 10^−5. Hence, computationally the methods are not identical.
66. The student should observe that the diagonal of ones marches toward the upper right corner and
eventually exits the matrix, leaving all of the entries zero.
67. (a) As k → ∞, the entries in A^k → 0, so A^k → [0 0; 0 0].
(b) As k → ∞, some of the entries in A^k do not approach 0, so A^k does not approach any matrix.
Section 1.6, p. 62
2. [Figure: the vector u = (1, 2) and its image f(u) = (3, 0).]
4. [Figure: the vector u = (2, 3) and its image f(u) = (6.19, 0.23).]
6. [Figure: the vector u = (3, 3) and its image f(u) = 2u = (6, 6).]
8. [Figure: the vector u = (0, 2, 4) and its image f(u) = (4, 2, 4) in 3-space.]
10. No.
12. Yes.
14. No.
16. (a) Reflection about the line y = x.
(b) Reflection about the line y = −x.
18. (a) Possible answers: [2; 1; 0] and [0; 0; 1]. (b) Possible answers: [0; 4; 4] and [1; 2; 0].
20. (a) f (u + v) = A(u + v) = Au + Av = f (u) + f (v).
(b) f (cu) = A(cu) = c(Au) = cf (u).
(c) f (cu + dv) = A(cu + dv) = A(cu) + A(cv) = c(Au) + d(Av) = cf (u) + df (v).
21. For any real numbers c and d, we have
f (cu + dv) = A(cu + dv) = A(cu) + A(dv) = c(Au) + d(Av) = cf (u) + df (v) = c0 + d0 = 0 + 0 = 0.
22. (a) O(u) = O u = [0; 0; …; 0] = 0, since every entry of the zero matrix is 0.
(b) I(u) = In u = [u1; u2; …; un] = u.
Section 1.7, p. 70
2. [Figure: the image of the given figure in the plane.]
4. (a) [Figure: the rectangle with vertices (4, 4), (12, 4), (4, 16), (12, 16).]
(b) [Figure: the image rectangle.]
6. [Figure: the image of the unit square.]
8. (1, 2), (3, 6), (11, 10).
10. We find that
(f1 ∘ f2)(e1) = e2 and (f2 ∘ f1)(e1) = −e2.
Therefore f1 ∘ f2 ≠ f2 ∘ f1.
12. Here f(u) = [2 0; 0 3] u. The new vertices are (0, 0), (2, 0), (2, 3), and (0, 3). [Figure: the image rectangle with corner (2, 3).]
14. (a) Possible answer: First perform f1 (45° counterclockwise rotation), then f2.
(b) Possible answer: First perform f3, then f2.
16. Let A = [cos θ  −sin θ; sin θ  cos θ]. Then A represents a rotation through the angle θ. Hence A^2 represents a
rotation through the angle 2θ, so
A^2 = [cos 2θ  −sin 2θ; sin 2θ  cos 2θ].
Since
A^2 = [cos θ  −sin θ; sin θ  cos θ][cos θ  −sin θ; sin θ  cos θ] = [cos^2 θ − sin^2 θ   −2 sin θ cos θ; 2 sin θ cos θ   cos^2 θ − sin^2 θ],
we conclude that
cos 2θ = cos^2 θ − sin^2 θ
sin 2θ = 2 sin θ cos θ.
17. Let
A = [cos θ1  −sin θ1; sin θ1  cos θ1] and B = [cos(−θ2)  −sin(−θ2); sin(−θ2)  cos(−θ2)] = [cos θ2  sin θ2; −sin θ2  cos θ2].
Then A and B represent rotations through the angles θ1 and −θ2, respectively. Hence BA represents
a rotation through the angle θ1 − θ2. Then
BA = [cos(θ1 − θ2)  −sin(θ1 − θ2); sin(θ1 − θ2)  cos(θ1 − θ2)].
Since
BA = [cos θ2  sin θ2; −sin θ2  cos θ2][cos θ1  −sin θ1; sin θ1  cos θ1] = [cos θ1 cos θ2 + sin θ1 sin θ2   −(sin θ1 cos θ2 − cos θ1 sin θ2); sin θ1 cos θ2 − cos θ1 sin θ2   cos θ1 cos θ2 + sin θ1 sin θ2],
we conclude that
cos(θ1 − θ2) = cos θ1 cos θ2 + sin θ1 sin θ2
sin(θ1 − θ2) = sin θ1 cos θ2 − cos θ1 sin θ2.
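The two identities read off from BA can be spot-checked numerically; a sketch in plain Python with arbitrary angles:

```python
import math

t1, t2 = 1.1, 0.4   # arbitrary test angles
lhs_cos = math.cos(t1 - t2)
lhs_sin = math.sin(t1 - t2)
assert abs(lhs_cos - (math.cos(t1) * math.cos(t2) + math.sin(t1) * math.sin(t2))) < 1e-12
assert abs(lhs_sin - (math.sin(t1) * math.cos(t2) - math.cos(t1) * math.sin(t2))) < 1e-12
```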
Section 1.8, p. 79
2. Correlation coefficient = 0.9981. Quite highly correlated. [Scatter plot.]
4. Correlation coefficient = 0.8774. Moderately positively correlated. [Scatter plot.]
Supplementary Exercises for Chapter 1, p. 80
2. (a) k = 1: B = [b11; 0]. k = 2: B = [b11 b12; 0 0]. k = 3: B = [b11 b12 b13; 0 0 0]. k = 4: B = [b11 b12 b13 b14; 0 0 0 0].
(b) The answers are not unique. The only requirement is that row 2 of B have all zero entries.
4. (a) [1 1; 0 1]. (b) [1 0 0; 0 0 0; 0 0 0]. (c) I4.
(d) Let A = [a b; c d] and B = [0 1; 0 0]. Then A^2 = [a^2 + bc  ab + bd; ac + dc  bc + d^2] = B implies
b(a + d) = 1
c(a + d) = 0.
It follows that a + d ≠ 0 and c = 0. Thus
A^2 = [a^2  b(a + d); 0  d^2] = [0  1; 0  0].
Hence, a = d = 0, which is a contradiction; thus, B has no square root.
5. (a) (A^T A)ii = (rowi A^T)·(coli A) = (coli A)^T (coli A).
(b) From part (a),
(A^T A)ii = [a1i a2i · · · ani][a1i; a2i; …; ani] = Σ_{j=1}^{n} (aji)^2 ≥ 0.
(c) A^T A = On if and only if (A^T A)ii = 0 for i = 1, . . . , n. But this is possible if and only if aij = 0
for i = 1, . . . , n and j = 1, . . . , n.
6. (A^k)^T = (A A · · · A)^T (k factors) = A^T A^T · · · A^T (k factors) = (A^T)^k.
7. Let A be a symmetric upper (lower) triangular matrix. Then aij = aji and aij = 0 for j > i (j < i).
Thus, aij = 0 whenever i ≠ j, so A is diagonal.
8. If A is skew symmetric then A^T = −A. Note that x^T A x is a scalar, thus (x^T A x)^T = x^T A x. That is,
x^T A x = (x^T A x)^T = x^T A^T x = −(x^T A x).
The only scalar equal to its negative is zero. Hence x^T A x = 0 for all x.
9. We are asked to prove an if and only if statement. Hence two things must be proved.
(a) If A is nonsingular, then aii ≠ 0 for i = 1, . . . , n.
Proof: If A is nonsingular then A is row equivalent to In. Since A is upper triangular, this can
occur only if we can multiply row i by 1/aii for each i. Hence aii ≠ 0 for i = 1, . . . , n. (Other
row operations will then be needed to get In.)
(b) If aii ≠ 0 for i = 1, . . . , n, then A is nonsingular.
Proof: Just reverse the steps given above in part (a).
10. Let A = [0 a; −a 0] and B = [0 b; −b 0]. Then A and B are skew symmetric and AB = [−ab 0; 0 −ab],
which is diagonal. The result is not true for n > 2. For example, let
A = [0 1 2; −1 0 3; −2 −3 0].
Then
A^2 = [−5 −6 3; −6 −10 −2; 3 −2 −13].
11. Using the definition of trace and Exercise 5(a), we find that
Tr(A^T A) = sum of the diagonal entries of A^T A
= sum_{i=1}^n (A^T A)_ii   (definition of trace)
= sum_{i=1}^n sum_{j=1}^n a_ji^2   (Exercise 5(a))
= sum of the squares of all entries of A.
Thus the only way Tr(A^T A) = 0 is if a_ij = 0 for i = 1, ..., n and j = 1, ..., n. That is, if A = O.
12. When AB = BA.
13. Let A = [1 1/2; 0 −1/2]. Then
A^2 = [1  1/2 + (−1)/2^2; 0  (−1)^2/2^2]  and  A^3 = [1  1/2 + (−1)/2^2 + (−1)^2/2^3; 0  (−1)^3/2^3].
Following the pattern for the elements we have
A^n = [1  1/2 + (−1)/2^2 + ··· + (−1)^{n−1}/2^n; 0  (−1)^n/2^n].
A formal proof by induction can be given.
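With A reconstructed as [1 1/2; 0 −1/2] (an assumption about the original exercise), the pattern can be checked numerically; the (1,2) entry is a geometric sum that closes to (1 − (−1/2)^n)/3:

```python
import numpy as np

# Verify the conjectured form of A^n for small n, assuming
# A = [[1, 1/2], [0, -1/2]] as reconstructed above.
A = np.array([[1.0, 0.5],
              [0.0, -0.5]])
for n in range(1, 8):
    An = np.linalg.matrix_power(A, n)
    # top-right entry: 1/2 - 1/4 + ... + (-1)^(n-1)/2^n = (1 - (-1/2)^n)/3
    assert np.isclose(An[0, 1], (1 - (-0.5) ** n) / 3)
    assert np.isclose(An[1, 1], (-0.5) ** n)
print("pattern holds for n = 1..7")
```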
14. B^k = P A^k P^{−1}.
15. Since A is skew symmetric, A^T = −A. Therefore,
A[−(A^{−1})^T] = −A(A^{−1})^T = A^T (A^{−1})^T = (A^{−1}A)^T = I^T = I
and similarly, [−(A^{−1})^T]A = I. Hence −(A^{−1})^T = A^{−1}, so (A^{−1})^T = −A^{−1}, and therefore A^{−1} is skew symmetric.
16. If Ax = 0 for all n × 1 matrices x, then AE_j = 0, j = 1, 2, ..., n, where E_j = column j of I_n. But then
AE_j = [a_1j; a_2j; ...; a_nj] = 0.
Hence column j of A = 0 for each j and it follows that A = O.
17. If Ax = x for all n × 1 matrices x, then AE_j = E_j, where E_j is column j of I_n. Since
AE_j = [a_1j; a_2j; ...; a_nj] = E_j,
it follows that a_ij = 1 if i = j and 0 otherwise. Hence A = I_n.
18. If Ax = Bx for all n × 1 matrices x, then AE_j = BE_j, j = 1, 2, ..., n, where E_j = column j of I_n. But then
AE_j = [a_1j; a_2j; ...; a_nj] = BE_j = [b_1j; b_2j; ...; b_nj].
Hence column j of A = column j of B for each j and it follows that A = B.
19. (a) I_n^2 = I_n and O^2 = O.
(b) One such matrix is [0 0; 0 1] and another is [1 0; 0 0].
(c) If A^2 = A and A^{−1} exists, then A^{−1}(A^2) = A^{−1}A, which simplifies to give A = I_n.
20. We have A^2 = A and B^2 = B.
(a) (AB)^2 = ABAB = A(BA)B = A(AB)B (since AB = BA) = A^2 B^2 = AB (since A and B are idempotent).
(b) (A^T)^2 = A^T A^T = (AA)^T (by the properties of the transpose) = (A^2)^T = A^T (since A is idempotent).
(c) If A and B are n × n and idempotent, then A + B need not be idempotent. For example, let
A = [1 1; 0 0] and B = [0 0; 1 1]. Both A and B are idempotent and C = A + B = [1 1; 1 1]. However,
C^2 = [2 2; 2 2] ≠ C.
(d) k = 0 and k = 1.
21. (a) We prove this statement using induction. The result is true for n = 1. Assume it is true for n = k,
so that A^k = A. Then
A^{k+1} = A A^k = A A = A^2 = A.
Thus the result is true for n = k + 1. It follows by induction that A^n = A for all integers n ≥ 1.
(b) (I_n − A)^2 = I_n − 2A + A^2 = I_n − 2A + A = I_n − A.
22. (a) If A were nonsingular then products of A with itself would also be nonsingular, but A^k is singular
since it is the zero matrix. Thus A must be singular.
(b) A^3 = O.
(c) k = 1: A = O; I_n − A = I_n; (I_n − A)^{−1} = I_n.
k = 2: A^2 = O; (I_n − A)(I_n + A) = I_n − A^2 = I_n; (I_n − A)^{−1} = I_n + A.
k = 3: A^3 = O; (I_n − A)(I_n + A + A^2) = I_n − A^3 = I_n; (I_n − A)^{−1} = I_n + A + A^2.
etc.
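The pattern in Exercise 22(c) is the truncated Neumann series. A quick NumPy check (my own nilpotent example, not the manual's):

```python
import numpy as np

# If A^k = O, then (I - A)^(-1) = I + A + ... + A^(k-1).
A = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])      # strictly upper triangular, so A^3 = O
I = np.eye(3)
inv = I + A + A @ A               # the series stops once powers vanish
assert np.allclose(np.linalg.matrix_power(A, 3), 0)
assert np.allclose((I - A) @ inv, I)
print("(I - A)^(-1) = I + A + A^2 for this nilpotent A")
```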
24. (matrix answer; entries garbled in the source.)
25. (a) Mcd(cA) = sum_{i+j=n+1} (c a_ij) = c sum_{i+j=n+1} a_ij = c Mcd(A).
(b) Mcd(A + B) = sum_{i+j=n+1} (a_ij + b_ij) = sum_{i+j=n+1} a_ij + sum_{i+j=n+1} b_ij = Mcd(A) + Mcd(B).
(c) Mcd(A^T) = (A^T)_{1n} + (A^T)_{2,n−1} + ··· + (A^T)_{n1} = a_{n1} + a_{n−1,2} + ··· + a_{1n} = Mcd(A).
(d) Let A = [7 3; 1 3] and B = [1 0; 1 0]. Then
AB = [10 0; 4 0] with Mcd(AB) = 4, and BA = [7 3; 7 3] with Mcd(BA) = 10.
26. (a) (matrix answer; entries garbled in the source.)
(b) Solve [1 2; 3 4]y = [1; 1] and [1 0; 2 3]z = [0; 1], obtaining y = [−1; 1] and z = [0; 1/3]. Then the solution to the given linear system Ax = B is x = [y; z].
27. Let A = [0 a; −a 0] and B = [0 b; −b 0]. Then A and B are skew symmetric and
AB = [−ab 0; 0 −ab],
which is diagonal. The result is not true for n > 2. For example, let
A = [0 1 2; −1 0 3; −2 −3 0].
Then A^2 = [−5 −6 3; −6 −10 −2; 3 −2 −13].
28. Consider the linear system Ax = 0. If A_11 and A_22 are nonsingular, then the matrix
[A_11^{−1}  O; O  A_22^{−1}]
is the inverse of A (verify by block multiplying). Thus A is nonsingular.
29. Let
A = [A_11  A_12; O  A_22],
where A_11 is r × r and A_22 is s × s. Let
B = [B_11  B_12; B_21  B_22],
where B_11 is r × r and B_22 is s × s. Then
AB = [A_11 B_11 + A_12 B_21   A_11 B_12 + A_12 B_22; A_22 B_21   A_22 B_22] = [I_r  O; O  I_s].
We have A_22 B_22 = I_s, so B_22 = A_22^{−1}. We also have A_22 B_21 = O, and multiplying both sides of this equation by A_22^{−1}, we find that B_21 = O. Thus A_11 B_11 = I_r, so B_11 = A_11^{−1}. Next, since
A_11 B_12 + A_12 B_22 = O,
then
A_11 B_12 = −A_12 B_22 = −A_12 A_22^{−1}.
Hence,
B_12 = −A_11^{−1} A_12 A_22^{−1}.
Since we have solved for B_11, B_12, B_21, and B_22, we conclude that A is nonsingular. Moreover,
A^{−1} = [A_11^{−1}   −A_11^{−1} A_12 A_22^{−1}; O   A_22^{−1}].
30. (a) XY^T = [4 5 6; 8 10 12; 12 15 18].
(b) XY^T = [−3 5; 6 −10].
31. Let X = [1; 5] and Y = [4; −3]. Then
XY^T = [1; 5][4 −3] = [4 −3; 20 −15]  and  YX^T = [4; −3][1 5] = [4 20; −3 −15].
It follows that XY^T is not necessarily the same as YX^T.
32. Tr(XY^T) = x_1 y_1 + x_2 y_2 + ··· + x_n y_n = X^T Y. (See Exercise 27.)
33. col_1(A) row_1(B) + col_2(A) row_2(B) = [1; 3; 5][2 4] + [7; 9; 11][6 8]
= [2 4; 6 12; 10 20] + [42 56; 54 72; 66 88] = [44 60; 60 84; 76 108] = AB.
34. (a) H^T = (I_n − 2WW^T)^T = I_n − 2(WW^T)^T = I_n − 2(W^T)^T W^T = I_n − 2WW^T = H.
(b) HH^T = HH = (I_n − 2WW^T)(I_n − 2WW^T)
= I_n − 4WW^T + 4WW^T WW^T
= I_n − 4WW^T + 4W(W^T W)W^T
= I_n − 4WW^T + 4WW^T = I_n,
using W^T W = 1. Thus, H^T = H^{−1}.
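A numeric sketch of Exercise 34 (the unit vector w is my own choice): H = I − 2ww^T is symmetric and its own inverse.

```python
import numpy as np

# Householder-type matrix built from a unit column vector.
w = np.array([[3.], [4.]]) / 5.0          # w^T w = 1
H = np.eye(2) - 2 * (w @ w.T)
assert np.allclose(H, H.T)                # part (a): H is symmetric
assert np.allclose(H @ H.T, np.eye(2))    # part (b): H H^T = I, so H^T = H^(-1)
print(H)
```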
35. (a) [1 2 3; 3 1 2; 2 3 1].
(b) (matrix answer; entries garbled in the source.)
(c) The product equals I_5.
(d) (matrix answer; entries garbled in the source.)
36. We have C = circ(c_1, c_2, c_3) = [c_1 c_2 c_3; c_3 c_1 c_2; c_2 c_3 c_1]. Thus C is symmetric if and only if c_2 = c_3.
37. Cx = (sum_{i=1}^n c_i) x.
38. We proceed directly. With C = circ(c_1, c_2, c_3),
C^T C = [c_1 c_3 c_2; c_2 c_1 c_3; c_3 c_2 c_1][c_1 c_2 c_3; c_3 c_1 c_2; c_2 c_3 c_1]
and
CC^T = [c_1 c_2 c_3; c_3 c_1 c_2; c_2 c_3 c_1][c_1 c_3 c_2; c_2 c_1 c_3; c_3 c_2 c_1].
Multiplying out, both products equal [s p p; p s p; p p s], where s = c_1^2 + c_2^2 + c_3^2 and p = c_1 c_2 + c_2 c_3 + c_3 c_1. It follows that C^T C = CC^T.
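A NumPy check of Exercise 38 with arbitrary values for c_1, c_2, c_3:

```python
import numpy as np

# A 3x3 circulant matrix commutes with its transpose (it is normal),
# and both products have the common-diagonal/common-off-diagonal form.
c1, c2, c3 = 2.0, -1.0, 5.0
C = np.array([[c1, c2, c3],
              [c3, c1, c2],
              [c2, c3, c1]])
s = c1**2 + c2**2 + c3**2
p = c1*c2 + c2*c3 + c3*c1
expected = np.full((3, 3), p) + (s - p) * np.eye(3)
assert np.allclose(C.T @ C, C @ C.T)
assert np.allclose(C.T @ C, expected)
print("C^T C = C C^T")
```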
Chapter Review for Chapter 1, p. 83
True or False
1. False. 2. False. 3. True. 4. True. 5. True. 6. True. 7. True. 8. True. 9. True. 10. True.
Quiz
1. x = [2; 4].
2. r = 0.
3. a = b = 4.
4. (a) a = 2. (b) b = 10, c = any real number.
5. u = [3; r], where r is any real number.
Chapter 2
Solving Linear Systems
Section 2.1, p. 94
2. (a) Possible answer: −1r1 → r1; 3r1 + r2 → r2; 4r1 + r3 → r3; 2r2 + r3 → r3.
(b) Possible answer: 2r1 + r2 → r2; 4r1 + r3 → r3; r2 + r3 → r3; (1/6)r3 → r3.
4. (a) 3r3 + r1 → r1; r3 + r2 → r2.
(b) 3r2 + r1 → r1.
6. (a) −r1 → r1; 2r1 + r2 → r2; 2r1 + r3 → r3; (1/2)r2 → r2; 3r3 → r3; (4/3)r3 + r2 → r2; 5r3 + r1 → r1; 2r2 + r1 → r1.
(b) 3r1 + r2 → r2; 5r1 + r3 → r3; 2r1 + r4 → r4; r2 + r3 → r3; r2 + r1 → r1.
8. (a) REF. (b) RREF. (c) N.
(The matrices displayed with these answers, and the signs of some multipliers, are garbled in the source.)
9. Consider the columns of A which contain leading entries of nonzero rows of A. If this set of columns is
the entire set of n columns, then A = In . Otherwise there are fewer than n leading entries, and hence
fewer than n nonzero rows of A.
10. (a) A is row equivalent to itself: the sequence of operations is the empty sequence.
(b) Each elementary row operation of types I, II or III has a corresponding inverse operation of the
same type which undoes the effect of the original operation. For example, the inverse of the
operation "add d times row r of A to row s of A" is "subtract d times row r of A from row s of
A. Since B is assumed row equivalent to A, there is a sequence of elementary row operations
which gets from A to B . Take those operations in the reverse order, and for each operation do its
inverse, and that takes B to A. Thus A is row equivalent to B .
(c) Follow the operations which take A to B with those which take B to C .
12. (a) [1 0 0 0 0; 2 1 0 0 0; 3 5 1 0 0].
(b) [1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0].
Section 2.2, p. 113
2. (a) x = 6 − s − t, y = s, z = t, w = 5.
(b) x = 3, y = 2, z = 1.
4. (a) x = 5 + 2t, y = 2 − t, z = t.
(b) x = 1, y = 2, z = 4 + t, w = t.
6. (a) x = 2 + r, y = 1, z = 8 − 2r, x_4 = r, where r is any real number.
(b) x = 1, y = 2/3, z = 2/3.
(c) No solution.
8. (a) x = 1 − r, y = 2, z = 1, x_4 = r, where r is any real number.
(b) x = 1 − r, y = 2 + r, z = 1 + r, x_4 = r, where r is any real number.
10. x = [r; 0], where r ≠ 0.
12. x = [(1/4)r; (1/4)r; r], where r ≠ 0.
14. (a) a = 2. (b) a = 2. (c) a = 2.
16. (a) a = 6. (b) a = 6.
18. The augmented matrix is [a b 0; c d 0]. If we reduce this matrix to reduced row echelon form, we see
that the linear system has only the trivial solution if and only if A is row equivalent to I_2. Now show
that this occurs if and only if ad − bc ≠ 0. If ad − bc ≠ 0 then at least one of a or c is ≠ 0, and it is a
routine matter to show that A is row equivalent to I_2. If ad − bc = 0, then by case considerations we
find that A is row equivalent to a matrix that has a row or column consisting entirely of zeros, so that
A is not row equivalent to I_2.
Alternate proof: If ad − bc ≠ 0, then A is nonsingular, so the only solution is the trivial one. If
ad − bc = 0, then ad = bc. If ad = 0 then either a or d = 0, say a = 0. Then bc = 0, and either b
or c = 0. In any of these cases we get a nontrivial solution. If ad ≠ 0, then from ad = bc the second
equation is a multiple of the first one, so we again have a nontrivial solution.
19. This had to be shown in the first proof of Exercise 18 above. If the alternate proof of Exercise 18 was
given, then Exercise 19 follows from the former by noting that the homogeneous system Ax = 0 has
only the trivial solution if and only if A is row equivalent to I_2, and this occurs if and only if ad − bc ≠ 0.
20. x = [3; 2; 0] + t[1/2; 1; 0], where t is any number.
22. a + b + c = 0.
24. (a) Change row to column.
(b) Proceed as in the proof of Theorem 2.1, changing row to column.
25. Using Exercise 24(b) we can assume that every m n matrix A is column equivalent to a matrix in
column echelon form. That is, A is column equivalent to a matrix B that satises the following:
(a) All columns consisting entirely of zeros, if any, are at the right side of the matrix.
(b) The first nonzero entry in each column that is not all zeros is a 1, called the leading entry of the
column.
(c) If the columns j and j + 1 are two successive columns that are not all zeros, then the leading
entry of column j + 1 is below the leading entry of column j .
We start with matrix B and show that it is possible to find a matrix C that is column equivalent to B
that satisfies
(d) If a row contains a leading entry of some column then all other entries in that row are zero.
If column j of B contains a nonzero element, then its first (counting top to bottom) nonzero element
is a 1. Suppose the 1 appears in row r_j. We can perform column operations of the form ac_j + c_k → c_k
for each of the nonzero columns c_k of B such that the resulting matrix has row r_j with a 1 in the (r_j, j)
entry and zeros everywhere else. This can be done for each column that contains a nonzero entry; hence
we can produce a matrix C satisfying (d). It follows that C is the unique matrix in reduced column
echelon form and column equivalent to the original matrix A.
26. 3a − b + c = 0.
28. Apply Exercise 18 to the linear system given here. The coefficient matrix is
[a − r   d; c   b − r].
Hence from Exercise 18, we have a nontrivial solution if and only if (a − r)(b − r) − cd = 0.
29. (a) A(xp + xh ) = Axp + Axh = b + 0 = b.
(b) Let x_p be a particular solution to Ax = b and let x be any solution to Ax = b. Let x_h = x − x_p.
Then x = x_p + x_h = x_p + (x − x_p) and Ax_h = A(x − x_p) = Ax − Ax_p = b − b = 0. Thus x_h is
in fact a solution to Ax = 0.
30. (a) 3x^2 + 2. (b) 2x^2 − x − 1.
32. (3/2)x^2 − 2x + 1.
34. (a) x = 0, y = 0
(b) x = 5, y = 7
36. r_1 = 5, r_2 = −5.
37. The GPS receiver is located at the tangent point where the two circles intersect.
38. 4Fe + 3O_2 → 2Fe_2O_3.
40. x =
1
4
0
.
1i
4
42. No solution.
Section 2.3, p. 124
1. The elementary matrix E which results from I_n by a type I interchange of the ith and jth rows differs
from I_n by having 1s in the (i, j) and (j, i) positions and 0s in the (i, i) and (j, j) positions. For that
E, EA has as its ith row the jth row of A and as its jth row the ith row of A.
The elementary matrix E which results from I_n by a type II operation differs from I_n by having c ≠ 0
in the (i, i) position. Then EA has as its ith row c times the ith row of A.
The elementary matrix E which results from I_n by a type III operation differs from I_n by having c in
the (j, i) position. Then EA has as its jth row the sum of the jth row of A and c times the ith row of A.
2. (a) [1 0 0 0; 0 −2 0 0; 0 0 1 0; 0 0 0 1].
(b) [0 0 1 0; 0 1 0 0; 1 0 0 0; 0 0 0 1].
(c) [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 3 1].
4. (a) Add −2 times row 1 to row 3:
[1 0 0; 0 1 0; −2 0 1] = C.
(b) Add 2 times row 1 to row 3:
[1 0 0; 0 1 0; 2 0 1] = B.
(c) AB = [1 0 0; 0 1 0; −2 0 1][1 0 0; 0 1 0; 2 0 1] = [1 0 0; 0 1 0; 0 0 1]
and
BA = [1 0 0; 0 1 0; 2 0 1][1 0 0; 0 1 0; −2 0 1] = [1 0 0; 0 1 0; 0 0 1].
Therefore B is the inverse of A.
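The inverse pair of type III elementary matrices in Exercise 4 can be checked directly:

```python
import numpy as np

# The elementary matrix that adds 2 times row 1 to row 3 is undone by
# the one that adds -2 times row 1 to row 3.
B = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [2., 0., 1.]])
C = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [-2., 0., 1.]])
assert np.allclose(B @ C, np.eye(3))
assert np.allclose(C @ B, np.eye(3))
print("C = B^(-1)")
```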
6. If E_1 is an elementary matrix of type I, then E_1^{−1} = E_1. Let E_2 be obtained from I_n by multiplying
the ith row of I_n by c ≠ 0, and let E_2′ be obtained from I_n by multiplying the ith row of I_n by 1/c. Then
E_2′E_2 = I_n. Let E_3 be obtained from I_n by adding c times the ith row of I_n to the jth row of I_n, and let
E_3′ be obtained from I_n by adding −c times the ith row of I_n to the jth row of I_n. Then E_3′E_3 = I_n.
1 1
0
1
8. A1 = 3
3 .
2
2
2
1
0
1
3
3
1
1 1
0
3 1
1
2
2
5
5
5
2
4 .
1 .
3
10. (a) Singular.
(b) 1 2
(c) 1 3
(d) 5
.
1
2
2
5 5
1
1
1
5
1
2
3
0
1
2
2 2
2 2
5
5
5
1 1
0 1
0 1
0
0
2
1
.
12. (a) A =
(b) Singular.
3
1
1
1
5
5
5
1
2
1
2
5 2 5 5
14. A is row equivalent to I3 ; a possible answer is
123
100
120
1
0
0 1 2 = 0 1 0 0 1 0 0
A=
1
103
101
001
0 2
16. A =
3
2
1
1
2
0
100
100
1
0 1 0 0 1 2 0
0
1
004
001
0
0 1
1
0 .
0
1
0 1 .
2
1
0
1
2
1
18. (b) and (c).
20. For a = 1 or a = 3.
21. This follows directly from Exercise 19 of Section 2.1 and Corollary 2.2. To show that
A^{−1} = (1/(ad − bc)) [d −b; −c a],
we proceed as follows:
(1/(ad − bc)) [d −b; −c a][a b; c d] = (1/(ad − bc)) [da − bc   db − bd; −ca + ac   −cb + ad] = [1 0; 0 1].
22. (a), (b), (c): (elementary-matrix answers garbled in the source.)
23. The matrices A and B are row equivalent if and only if B = E_k E_{k−1} ··· E_2 E_1 A.
Let P = E_k E_{k−1} ··· E_2 E_1.
24. If A and B are row equivalent then B = P A, where P is nonsingular, and A = P 1 B (Exercise 23). If
A is nonsingular then B is nonsingular, and conversely.
25. Suppose B is singular. Then by Theorem 2.9 there exists x ≠ 0 such that Bx = 0. Then (AB)x =
A0 = 0, which means that the homogeneous system (AB)x = 0 has a nontrivial solution. Theorem
2.9 implies that AB is singular, a contradiction. Hence, B is nonsingular. Since A = (AB)B^{−1} is a
product of nonsingular matrices, it follows that A is nonsingular.
Alternate proof: If AB is nonsingular it follows that AB is row equivalent to I_n, so P(AB) = I_n. Since
P is nonsingular, P = E_k E_{k−1} ··· E_2 E_1. Then (PA)B = I_n, or (E_k E_{k−1} ··· E_2 E_1 A)B = I_n. Letting
E_k E_{k−1} ··· E_2 E_1 A = C, we have CB = I_n, which implies that B is nonsingular. Since PAB = I_n,
A = P^{−1}B^{−1}, so A is nonsingular.
26. The matrix A is row equivalent to O if and only if A = P O = O where P is nonsingular.
27. The matrix A is row equivalent to B if and only if B = P A, where P is a nonsingular matrix. Now
B T = AT P T , so A is row equivalent to B if and only if AT is column equivalent to B T .
28. If A has a row of zeros, then A cannot be row equivalent to In , and so by Corollary 2.2, A is singular.
If the j th column of A is the zero column, then the homogeneous system Ax = 0 has a nontrivial
solution, the vector x with 1 in the j th entry and zeros elsewhere. By Theorem 2.9, A is singular.
29. (a) No. Let A = [1 0; 0 0] and B = [0 0; 0 1]. Then (A + B)^{−1} exists but A^{−1} and B^{−1} do not. Even
supposing they all exist, equality need not hold: let A = [1], B = [2]; then (A + B)^{−1} = [1/3] ≠
[1 + 1/2] = A^{−1} + B^{−1}.
(b) Yes, for A nonsingular and r ≠ 0:
(rA)((1/r)A^{−1}) = r(1/r)(A A^{−1}) = 1 · I_n = I_n, so (rA)^{−1} = (1/r)A^{−1}.
30. Suppose that A is nonsingular. Then Ax = b has the solution x = A^{−1}b for every n × 1 matrix b.
Conversely, suppose that Ax = b is consistent for every n × 1 matrix b. Letting b be the matrices
e_1 = [1; 0; ...; 0], e_2 = [0; 1; ...; 0], ..., e_n = [0; 0; ...; 1],
we see that we have solutions x_1, x_2, ..., x_n to the linear systems
Ax_1 = e_1, Ax_2 = e_2, ..., Ax_n = e_n.   (∗)
Letting C be the matrix whose jth column is x_j, we can write the n systems in (∗) as AC = I_n, since
I_n = [e_1 e_2 ··· e_n]. Hence, A is nonsingular.
31. We consider the case that A is nonsingular and upper triangular. A similar argument can be given for
A lower triangular.
By Theorem 2.8, A is a product of elementary matrices which are the inverses of the elementary
matrices that reduce A to I_n. That is,
A = E_1^{−1} ··· E_k^{−1}.
The elementary matrix E_i will be upper triangular since it is used to introduce zeros into the upper
triangular part of A in the reduction process. The inverse of E_i is an elementary matrix of the same
type and also an upper triangular matrix. Since the product of upper triangular matrices is upper
triangular and we have A^{−1} = E_k ··· E_1, we conclude that A^{−1} is upper triangular.
Section 2.4, p. 129
1. See the answer to Exercise 4, Section 2.1. Where it mentions only row operations, now read row and
column operations.
2. (a) [I_4; O]. (b) I_3. (c) [I_2 O; O O]. (d) I_4.
4. Allowable equivalence operations (elementary row or elementary column operation) include in particular elementary row operations.
5. A and B are equivalent if and only if B = E_t ··· E_2 E_1 A F_1 F_2 ··· F_s. Let P = E_t E_{t−1} ··· E_2 E_1 and
Q = F_1 F_2 ··· F_s.
1
2
0
1
0 1
I0
6. B = 2
; a possible answer is: B = 1 1
0 A 0
1 1 .
00
1
1
1
0
0
1
8. Suppose A were nonzero but equivalent to O. Then some ultimate elementary row or column operation
must have transformed a nonzero matrix Ar into the zero matrix O. By considering the types of
elementary operations we see that this is impossible.
9. Replace row by column and vice versa in the elementary operations which transform A into B .
10. Possible answers are:
1 2
3
0
(a) 0 1
4
3 .
0
2 5 2
(b)
10
.
00
1
(c) 0
0
0
0
1 2
5
5
0
0
4
0
2 .
4
11. If A and B are equivalent then B = P AQ and A = P 1 BQ1 . If A is nonsingular then B is nonsingular,
and conversely.
Section 2.5, p. 136
0
2. x = 2 .
3
2
1
4. x = .
0
5
100
3
6. L = 4 1 0 , U = 0
5 3 1
0
1
6
8. L =
1
2
0
1
2
3
0
0
1
2
1 2
3
6
2 , x = 4 .
0 4
1
0
5
0
0
, U =
0
0
1
0
1
0
0
0.2
1
0
10. L =
0.4
0.8
1
2 1.2 0.4
4
0
1
1
2
3
2
1
, x = .
5
0 4
1
4
0
0 2
0
4
1
0.25 0.5
1.5
0
1.2 2.5
, U = 0 0.4
, x = 4.2 .
0
2.6
0
0 0.85
2
1
0
0
0 2.5
2
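The LU workflow of Section 2.5 can be sketched with SciPy (the matrix and right-hand side below are my own example, not one of the manual's exercises): factor A = PLU, then solve Ax = b by forward and back substitution.

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[2., 1., 1.],
              [4., -6., 0.],
              [-2., 7., 2.]])
b = np.array([5., -2., 9.])
P, L, U = lu(A)                    # permutation, unit lower, upper
assert np.allclose(P @ L @ U, A)
x = lu_solve(lu_factor(A), b)      # reuses the factorization to solve
assert np.allclose(A @ x, b)
print(x)
```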
Supplementary Exercises for Chapter 2, p. 137
2. (a) a = 4 or a = 2.
(b) The system has a solution for each value of a.
4. c + 2a − 3b = 0.
5. (a) Multiply the jth row of B by 1/k.
(b) Interchange the ith and jth rows of B.
(c) Add k times the jth row of B to its ith row.
6. (a) If we transform E1 to reduced row echelon form, we obtain In . Hence E1 is row equivalent to In
and thus is nonsingular.
(b) If we transform E2 to reduced row echelon form, we obtain In . Hence E2 is row equivalent to In
and thus is nonsingular.
(c) If we transform E3 to reduced row echelon form, we obtain In . Hence E3 is row equivalent to In
and thus is nonsingular.
8. [1 a a^2 a^3; 0 1 a a^2; 0 0 1 a; 0 0 0 1].
10. (a) [41; 47; 35]. (b) [83; 45; 62].
12. s = 0, 2.
13. For any angle θ, cos θ and sin θ are never simultaneously zero. Thus at least one element in column 1
is not zero. Assume cos θ ≠ 0. (If cos θ = 0, then interchange rows 1 and 2 and proceed in a similar
manner to that described below.) To show that the matrix is nonsingular and determine its inverse,
we put
[cos θ   sin θ | 1 0; −sin θ   cos θ | 0 1]
into reduced row echelon form. Apply the row operations (1/cos θ) times row 1, and sin θ times row 1 added to
row 2, to obtain
[1   sin θ/cos θ | 1/cos θ   0; 0   sin^2 θ/cos θ + cos θ | sin θ/cos θ   1].
Since
sin^2 θ/cos θ + cos θ = (sin^2 θ + cos^2 θ)/cos θ = 1/cos θ,
the (2, 2)-element is not zero. Applying the row operations cos θ times row 2, and −(sin θ/cos θ) times row 2
added to row 1, we obtain
[1 0 | cos θ   −sin θ; 0 1 | sin θ   cos θ].
It follows that the matrix is nonsingular and its inverse is
[cos θ   −sin θ; sin θ   cos θ].
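Numerically, the inverse found in Exercise 13 is just the transpose of the rotation-type matrix; a quick check:

```python
import numpy as np

# The inverse of [[cos t, sin t], [-sin t, cos t]] is
# [[cos t, -sin t], [sin t, cos t]], i.e. its transpose.
t = 0.7
R = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])
assert np.allclose(np.linalg.inv(R), R.T)
assert np.allclose(R @ R.T, np.eye(2))
print("R^(-1) = R^T")
```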
14. (a) A(u + v) = Au + Av = 0 + 0 = 0.
(b) A(u − v) = Au − Av = 0 − 0 = 0.
(c) A(ru) = r(Au) = r0 = 0.
(d) A(ru + sv) = r(Au) + s(Av) = r0 + s0 = 0.
15. If Au = b and Av = b, then A(u − v) = Au − Av = b − b = 0.
16. Suppose at some point in the process of reducing the augmented matrix to reduced row echelon form
we encounter a row whose first n entries are zero but whose (n + 1)st entry is some number c ≠ 0. The
corresponding linear equation is
0 · x_1 + ··· + 0 · x_n = c, or 0 = c.
This equation has no solution, thus the linear system is inconsistent.
17. Let u be one solution to Ax = b. Since A is singular, the homogeneous system Ax = 0 has a nontrivial
solution u_0. Then for any real number r, v = ru_0 is also a solution to the homogeneous system. Finally,
by Exercise 29, Sec. 2.2, for each of the infinitely many vectors v, the vector w = u + v is a solution
to the nonhomogeneous system Ax = b.
18. s = 1, t = 1.
20. If any of the diagonal entries of L or U is zero, there will not be a unique solution.
21. The outer product of X and Y can be written in the form
XY^T = [x_1 y_1  x_1 y_2  ···  x_1 y_n; x_2 y_1  x_2 y_2  ···  x_2 y_n; ...; x_n y_1  x_n y_2  ···  x_n y_n].
If either X = O or Y = O, then XY^T = O. Thus assume that there is at least one nonzero component
in X, say x_i, and at least one nonzero component in Y, say y_j. Then (1/x_i) Row_i(XY^T) makes the ith
row exactly Y^T. Since all the other rows are multiples of Y^T, row operations of the form −x_k R_i + R_p,
for p ≠ i, can be performed to zero out everything but the ith row. It follows that either XY^T is row
equivalent to O or to a matrix with n − 1 zero rows.
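Equivalently, a nonzero outer product has rank 1; a quick NumPy illustration (vectors are my own):

```python
import numpy as np

# Every row of X Y^T is a multiple of Y^T, so the rank is 1.
X = np.array([2., -1., 3.])
Y = np.array([4., 5., 0., 1.])
M = np.outer(X, Y)
assert np.linalg.matrix_rank(M) == 1
assert np.allclose(M[0], X[0] * Y)   # row i equals x_i * Y^T
print("rank(X Y^T) =", np.linalg.matrix_rank(M))
```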
Chapter Review for Chapter 2, p. 138
True or False
1. False. 2. True. 3. False. 4. True. 5. True. 6. True. 7. True. 8. True. 9. True. 10. False.
Quiz
1. [1 0 2; 0 1 3; 0 0 0].
2. (a) No. (b) Infinitely many. (c) No.
(d) x = [6 + 2r + 7s; r; 3s; s], where r and s are any real numbers.
3. k = 6.
4. [0; 0; 0; 1].
5. (matrix answer; entries garbled in the source.)
6. P = A1 , Q = B .
7. Possible answers: Diagonal, zero, or symmetric.
Chapter 3
Determinants
Section 3.1, p. 145
2. (a) 4. (b) 7. (c) 0.
4. (a) odd. (b) even. (c) even.
6. (a) −. (b) +. (c) +.
8. (a) 7. (b) 2.
10. det(A) = a_11 a_22 a_33 a_44 − a_11 a_22 a_34 a_43 − a_11 a_23 a_32 a_44 + a_11 a_23 a_34 a_42 + a_11 a_24 a_32 a_43 − a_11 a_24 a_33 a_42 + ··· (24 summands).
12. (a) 24. (b) 36. (c) 180.
14. (a) t^2 − 8t − 20. (b) t^3 − t.
16. (a) t = 10, t = −2. (b) t = 0, t = 1, t = −1.
Section 3.2, p. 154
2. (a) 4.
(b) 24. (c) 30.
(d) 72.
(e) 120.
(f) 0.
4. 2.
6. (a) det(A) = 7, det(B ) = 3.
(b) det(A) = 24, det(B ) = 30.
8. Yes, since det(AB ) = det(A) det(B ) and det(BA) = det(B ) det(A).
9. Yes, since det(AB ) = det(A) det(B ) implies that det(A) = 0 or det(B ) = 0.
10. det(cA) = Σ (±)(c a_{1j_1})(c a_{2j_2}) ··· (c a_{nj_n}) = c^n Σ (±)a_{1j_1} a_{2j_2} ··· a_{nj_n} = c^n det(A).
11. Since A is skew symmetric, A^T = −A. Therefore
det(A) = det(A^T)   (by Theorem 3.1)
= det(−A)   (since A is skew symmetric)
= (−1)^n det(A)   (by Exercise 10)
= −det(A)   (since n is odd).
The only number equal to its negative is zero, so det(A) = 0.
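Exercise 11 checks out numerically on the odd-order skew-symmetric example used earlier in Chapter 1:

```python
import numpy as np

# A skew-symmetric matrix of odd order has determinant zero.
A = np.array([[0., 1., 2.],
              [-1., 0., 3.],
              [-2., -3., 0.]])
assert np.allclose(A.T, -A)                  # skew symmetric
assert np.isclose(np.linalg.det(A), 0.0)     # n = 3 is odd
print("det(A) =", round(np.linalg.det(A), 12))
```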
12. This result follows from the observation that each term in det(A) is a product of n entries of A, each
with its appropriate sign, with exactly one entry from each row and exactly one entry from each column.
13. We have det(AB^{−1}) = (det A)(det B^{−1}) = (det A) · (1/det B).
14. If AB = I_n, then det(AB) = det(A) det(B) = det(I_n) = 1, so det(A) ≠ 0 and det(B) ≠ 0.
15. (a) By Corollary 3.3, det(A^{−1}) = 1/det(A). Since A = A^{−1}, we have
det(A) = 1/det(A)  ⟹  (det(A))^2 = 1.
Hence det(A) = ±1.
(b) If A^T = A^{−1}, then det(A^T) = det(A^{−1}). But
det(A) = det(A^T)  and  det(A^{−1}) = 1/det(A),
hence we have
det(A) = 1/det(A)  ⟹  (det(A))^2 = 1  ⟹  det(A) = ±1.
16. From Definition 3.2, the only time we get terms which do not contain a zero factor is when the factors
involved come from A and B alone. Each one of the column permutations of terms from A can be
associated with every one of the column permutations of B. Hence by factoring we have
det [A O; O B] = Σ (terms from A for any column permutation) · |B|
= |B| · Σ (terms from A for any column permutation)
= (det B)(det A) = (det A)(det B).
17. If A2 = A, then det(A2 ) = [det(A)]2 = det(A), so det(A) = 1. Alternate solution: If A2 = A and A is
nonsingular, then A1 A2 = A1 A = In , so A = In and det(A) = det(In ) = 1.
18. Since AA^{−1} = I_n, det(AA^{−1}) = det(I_n) = 1, so det(A) det(A^{−1}) = 1. Hence, det(A^{−1}) = 1/det(A).
19. From Definition 3.2, the only time we get terms which do not contain a zero factor is when the factors
involved come from A and B alone. Each one of the column permutations of terms from A can be
associated with every one of the column permutations of B. Hence by factoring we have
det [A O; C B] = Σ (terms from A for any column permutation) · |B|
= |B| · Σ (terms from A for any column permutation)
= |B||A|.
20. (a) det(AT B T ) = det(AT ) det(B T ) = det(A) det(B T ).
(b) det(AT B T ) = det(AT ) det(B T ) = det(AT ) det(B ).
22. det [1 a a^2; 1 b b^2; 1 c c^2] = det [1 a a^2; 0 b−a b^2−a^2; 0 c−a c^2−a^2]
= (b − a)(c^2 − a^2) − (c − a)(b^2 − a^2) = (b − a)(c − a)(c + a) − (c − a)(b − a)(b + a)
= (b − a)(c − a)[(c + a) − (b + a)] = (b − a)(c − a)(c − b).
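The Vandermonde identity of Exercise 22 is easy to spot-check (values of a, b, c are my own):

```python
import numpy as np

# det of the 3x3 Vandermonde matrix equals (b - a)(c - a)(c - b).
a, b, c = 2.0, 5.0, -3.0
V = np.array([[1, a, a**2],
              [1, b, b**2],
              [1, c, c**2]])
assert np.isclose(np.linalg.det(V), (b - a) * (c - a) * (c - b))
print("det =", (b - a) * (c - a) * (c - b))
```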
24. (a) and (b).
26. (a) t = 0.
(b) t = 1.
(c) t = 0, 1.
28. The system has only the trivial solution.
29. If A = [a_ij] is upper triangular, then det(A) = a_11 a_22 ··· a_nn, so det(A) ≠ 0 if and only if a_ii ≠ 0 for
i = 1, 2, ..., n.
30. (a) I3
(b) Only the trivial solution.
31. (a) A matrix having at least one row of zeros.
(b) Infinitely many.
32. If A^2 = A, then det(A^2) = det(A), so [det(A)]^2 = det(A). Thus, det(A)(det(A) − 1) = 0. This implies
that det(A) = 0 or det(A) = 1.
33. If A and B are similar, then there exists a nonsingular matrix P such that B = P^{−1}AP. Then
det(B) = det(P^{−1}AP) = det(P^{−1}) det(A) det(P) = (1/det(P)) det(A) det(P) = det(A).
34. If det(A) = 0, then A is nonsingular. Hence, A1 AB = A1 AC , so B = C .
36. In Matlab the command for the determinant actually invokes an LU-factorization, hence is closely
associated with the material in Section 2.5.
37. For t = 10^{−5}, Matlab gives the determinant as 3 × 10^{−5}, which agrees with the theory; for t = 10^{−14},
3.2026 × 10^{−14}; for t = 10^{−15}, 6.2800 × 10^{−15}; for t = 10^{−16}, zero.
Section 3.3, p. 164
2. (a) 23.
4. (a) 3.
6. (b) 2.
(b) 7.
(b) 0.
(c) 24.
8. (b) 24.
(d) 28.
(c) 15.
(c) 3.
(d) 6.
(f) 30.
(e) 120.
(d) 72.
9. We proceed by successive expansions along first columns:
det(A) = a_11 det [a_22 a_23 ··· a_2n; 0 a_33 ··· a_3n; ...; 0 0 ··· a_nn]
= a_11 a_22 det [a_33 a_34 ··· a_3n; 0 a_44 ··· a_4n; ...; 0 0 ··· a_nn]
= ··· = a_11 a_22 ··· a_nn.
12. t = 1, t = −1, t = 2.
13. (a) From Definition 3.2 each term in the expansion of the determinant of an n × n matrix is a product
of n entries of the matrix. Each of these products contains exactly one entry from each row and
exactly one entry from each column. Thus each such product from det(tI_n − A) contains at most
n factors of the form t − a_ii. Hence each of these products is at most a polynomial of degree n.
Since one of the products has the form (t − a_11)(t − a_22) ··· (t − a_nn), it follows that the sum of
the products is a polynomial of degree n in t.
(b) The coefficient of t^n is 1 since it only appears in the term (t − a_11)(t − a_22) ··· (t − a_nn), which
we discussed in part (a). (The permutation of the column indices is even here so a plus sign is
associated with this term.)
(c) Using part (a), suppose that
det(tI_n − A) = t^n + c_1 t^{n−1} + c_2 t^{n−2} + ··· + c_{n−1} t + c_n.
Set t = 0 and we have det(−A) = c_n, which implies that c_n = (−1)^n det(A). (See Exercise 10 in
Section 6.2.)
14. (a) f(t) = t^2 − 5t − 2, det(A) = −2.
(b) f(t) = t^3 − t^2 − 13t − 26, det(A) = 26.
(c) f(t) = t^2 − 2t, det(A) = 0.
16. 6.
18. Let P_1(x_1, y_1), P_2(x_2, y_2), P_3(x_3, y_3) be the vertices of a triangle T. Then from Equation (2), we have
area of T = (1/2)|det [x_1 y_1 1; x_2 y_2 1; x_3 y_3 1]| = (1/2)|x_1 y_2 + y_1 x_3 + x_2 y_3 − x_3 y_2 − y_3 x_1 − x_2 y_1|.
Let A be the matrix representing a counterclockwise rotation L through an angle θ. Thus
A = [cos θ  −sin θ; sin θ  cos θ]
and P_1′, P_2′, P_3′ are the vertices of L(T), the image of T. We have
L([x_1; y_1]) = [x_1 cos θ − y_1 sin θ; x_1 sin θ + y_1 cos θ],
L([x_2; y_2]) = [x_2 cos θ − y_2 sin θ; x_2 sin θ + y_2 cos θ],
L([x_3; y_3]) = [x_3 cos θ − y_3 sin θ; x_3 sin θ + y_3 cos θ].
Then
area of L(T) = (1/2)|det [x_1 cos θ − y_1 sin θ   x_1 sin θ + y_1 cos θ   1; x_2 cos θ − y_2 sin θ   x_2 sin θ + y_2 cos θ   1; x_3 cos θ − y_3 sin θ   x_3 sin θ + y_3 cos θ   1]|
= (1/2)|(x_1 cos θ − y_1 sin θ)[x_2 sin θ + y_2 cos θ − x_3 sin θ − y_3 cos θ]
+ (x_2 cos θ − y_2 sin θ)[x_3 sin θ + y_3 cos θ − x_1 sin θ − y_1 cos θ]
+ (x_3 cos θ − y_3 sin θ)[x_1 sin θ + y_1 cos θ − x_2 sin θ − y_2 cos θ]|
= (1/2)|x_1 y_2 + y_1 x_3 + x_2 y_3 − x_3 y_2 − x_1 y_3 − x_2 y_1|
= area of T.
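The determinant area formula, and its invariance under rotation, can be sketched in NumPy (the triangle is my own example):

```python
import numpy as np

# Area of a triangle from Equation (2), and invariance under rotation.
def tri_area(p1, p2, p3):
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return 0.5 * abs(np.linalg.det(M))

pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]      # right triangle, area 6
assert np.isclose(tri_area(*pts), 6.0)
t = 1.1
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
rot = [tuple(R @ np.array(p)) for p in pts]
assert np.isclose(tri_area(*rot), 6.0)
print("area preserved under rotation:", tri_area(*rot))
```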
19. Let T be the triangle with vertices (x_1, y_1), (x_2, y_2), and (x_3, y_3). Let
A = [a b; c d]
and define the linear operator L : R^2 → R^2 by L(v) = Av for v in R^2. The vertices of L(T) are
(ax_1 + by_1, cx_1 + dy_1), (ax_2 + by_2, cx_2 + dy_2), and (ax_3 + by_3, cx_3 + dy_3).
Then by Equation (2),
area of T = (1/2)|x_1 y_2 − x_1 y_3 − x_2 y_1 + x_2 y_3 + x_3 y_1 − x_3 y_2|
and
area of L(T) = (1/2)|ad x_1 y_2 − ad x_1 y_3 − ad x_2 y_1 + ad x_2 y_3 + ad x_3 y_1 − ad x_3 y_2
− bc x_1 y_2 + bc x_1 y_3 + bc x_2 y_1 − bc x_2 y_3 − bc x_3 y_1 + bc x_3 y_2|.
Now,
|det(A)| · area of T = |ad − bc| · (1/2)|x_1 y_2 − x_1 y_3 − x_2 y_1 + x_2 y_3 + x_3 y_1 − x_3 y_2|
= (1/2)|ad x_1 y_2 − ad x_1 y_3 − ad x_2 y_1 + ad x_2 y_3 + ad x_3 y_1 − ad x_3 y_2
− bc x_1 y_2 + bc x_1 y_3 + bc x_2 y_1 − bc x_2 y_3 − bc x_3 y_1 + bc x_3 y_2|
= area of L(T).
Section 3.4, p. 169
2 7 6
2. (a) 1 7 3 .
4
7
5
2
6
7
1
7
3
1
4.
.
1
7
7
4
1 5
7
7
(b) 7.
6. If A is symmetric, then for each i and j, M_ji is the transpose of M_ij. Thus A_ji = (−1)^{j+i}|M_ji| =
(−1)^{i+j}|M_ij| = A_ij.
8. The adjoint matrix is upper triangular if A is upper triangular, since aij = 0 if i > j which implies
that Aij = 0 if i > j .
10. (1/((b − a)(c − a)(c − b))) [bc(c − b)  ac(a − c)  ab(b − a); b^2 − c^2  c^2 − a^2  a^2 − b^2; c − b  a − c  b − a].
12. (1/24) [6 2 9; 0 8 12; 0 0 12].
13. We follow the hint. If A is singular then det(A) = 0. Hence A(adj A) = det(A) I_n = 0 I_n = O. If adj A
were nonsingular, then (adj A)^{−1} exists, and we would have
A = A(adj A)(adj A)^{−1} = O(adj A)^{−1} = O.
But the adjoint of the zero matrix is the zero matrix, so adj A = O, which is singular — contradicting
the assumption that adj A is nonsingular. Hence adj A is singular.
14. If A is singular, then adj A is also singular by Exercise 13, and det(adj A) = 0 = [det(A)]^{n−1}. If A is
nonsingular, then A(adj A) = det(A) I_n. Taking the determinant on each side,
det(A) det(adj A) = det(det(A) I_n) = [det(A)]^n.
Thus det(adj A) = [det(A)]^{n−1}.
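Exercise 14 can be confirmed numerically, using the identity adj A = det(A) A^{-1} for nonsingular A (the matrix is my own example):

```python
import numpy as np

# det(adj A) = det(A)^(n-1).
A = np.array([[2., 0., 1.],
              [1., 3., 0.],
              [0., 1., 4.]])
n = A.shape[0]
d = np.linalg.det(A)
adjA = d * np.linalg.inv(A)     # adjoint via A(adj A) = det(A) I
assert np.isclose(np.linalg.det(adjA), d ** (n - 1))
print("det(adj A) =", np.linalg.det(adjA))
```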
Section 3.5, p. 172
2. x1 = 1, x2 = 1, x3 = 0, x4 = 2.
4. x1 = 1, x2 = 2, x3 = 2.
6. x_1 = 1, x_2 = 2/3, x_3 = 2/3.
Supplementary Exercises for Chapter 3, p. 174
2. (a) t = 1, 4. (b) t = 3, 4, 1. (c) t = 1, 2, 3. (d) t = 3, 1, −1.
3. If A^n = O for some positive integer n, then
0 = det(O) = det(A^n) = det(A A ··· A) = det(A) det(A) ··· det(A) = (det(A))^n (n factors).
It follows that det(A) = 0.
a1b
4. (a) b 1 c
c1a
1c3 +c1 c1
ab 1 b
= bc 1 c
ca 1 a
2
1aa
(b) 1 b b2
1 c c2
r1 +r2 r2 ; r1 +r3 r3
1
0
0
= 0 b a (b + a)(b a)
0 c a (c + a)(c a)
1
a
bc
= 0 b a c(b a)
0 c a b(c a)
c1 +c3 c3
ab 1 a
= bc 1 b
ca 1 c
1
a
a2
= 0 b a (b + a)(b a)
0 c a (c + a)(c a)
(a+b+c)c2 +c3 c3
r1 +r2 r2 ; r1 +r3 r3
ac1 +c2 c2 ; a2 c1 +c3 c3
1
0
0
= 0 b a c(b a)
0 c a b(c a)
ac1 +c2 c2 ; bcc1 +c3 c3
1 a bc
= 1 b ca .
1 c ab
5. If A is an n × n matrix then
det(AA^T) = det(A) det(A^T) = det(A) det(A) = (det(A))^2.
(Here we used Theorems 3.9 and 3.1.) Since the square of any real number is ≥ 0, we have det(AA^T) ≥ 0.
6. The determinant is not a linear transformation from R^{n×n} to R^1 for n > 1 since for an arbitrary scalar
c, det(cA) = c^n det(A) ≠ c det(A).
7. Since A is nonsingular, Corollary 3.4 implies that
A^{−1} = (1/det(A)) (adj A).
Multiplying both sides on the left by A gives
AA^{−1} = I_n = (1/det(A)) A(adj A).
Hence we have that
(adj A)^{−1} = (1/det(A)) A.
From Corollary 3.4 it follows that for any nonsingular matrix B, adj B = det(B) B^{−1}. Let B = A^{−1}
and we have
adj(A^{−1}) = det(A^{−1}) (A^{−1})^{−1} = (1/det(A)) A = (adj A)^{−1}.
8. If rows i and j are proportional, with t a_ik = a_jk for k = 1, 2, ..., n, then the row operation
−t r_i + r_j → r_j leaves det(A) unchanged and makes row j all zeros, so det(A) = 0.
9. Matrix Q is n × n with each entry equal to 1. Adding row j to row 1 for j = 2, 3, ..., n makes each entry of
row 1 equal to (1 − n) + (n − 1) · 1 = 0, so
det(Q − nI_n) = det [1−n 1 ··· 1; 1 1−n ··· 1; ...; 1 1 ··· 1−n] = det [0 0 ··· 0; 1 1−n ··· 1; ...; 1 1 ··· 1−n] = 0
by Theorem 3.4.
10. If A has integer entries, then the cofactors of A are integers and adj A has only integer entries. If A is nonsingular and
A⁻¹ = (1/det(A)) adj A
has integer entries, then taking determinants shows that det(A⁻¹) = 1/det(A) is an integer; since det(A) is also an integer, det(A) = ±1. Conversely, if det(A) = ±1, then A is nonsingular and A⁻¹ = ± adj A implies that A⁻¹ has integer entries.
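Exercise 10's criterion can be checked numerically; the unimodular matrix below is a sample of my choosing:

```python
import numpy as np

# An integer matrix with det(A) = +-1 has an integer inverse.
A = np.array([[2, 1],
              [1, 1]])
det_A = round(np.linalg.det(A))
Ainv = np.linalg.inv(A)
print(det_A)   # 1
print(Ainv)    # integer entries: [[ 1. -1.], [-1.  2.]]
```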
11. If A and b have integer entries and det(A) = ±1, then using Cramer's rule to solve Ax = b, we find that the numerator in the fraction giving xi is an integer (the determinant of an integer matrix) and the denominator is ±1, so xi is an integer for i = 1, 2, . . . , n.
Chapter Review for Chapter 3, p. 174
True or False
1. False. 2. True. 3. False. 4. True. 5. True. 6. False.
7. False. 8. True. 9. True. 10. False. 11. True. 12. False.
Quiz
1. 54.
2. False.
3. 1.
4. 2.
5. Let the diagonal entries of A be d11, . . . , dnn. Then det(A) = d11 d22 · · · dnn. Since A is singular if and only if det(A) = 0, A is singular if and only if some diagonal entry dii is zero.
6. 19.
7. A1
5
1
1
2
2
= 1 3 1 .
1
4
1
8. det(A) = 14. Therefore x1 = 11/7, x2 = 4/7, x3 = 5/7.
Chapter 4
Real Vector Spaces
Section 4.1, p. 187
2. (5, 7).
[Figure: the points (3, 2) and (5, 7) plotted in the xy-plane.]
4. (1, 6, 3).
6. a = 2, b = 2, c = 5.
8. (a) [0; 4]. (b) [2; 3; 6].
10. (a) [2; 7]. (b) [4; 3; 3].
12. (a) u + v = (3, 2, 4), 2u − v = (0, 4, 5), 3u − 2v = (−1, 6, 7), 0 − 3v = (−6, 0, −3).
(b) u + v = (3, 1, 1), 2u − v = (3, −4, 11), 3u − 2v = (4, −7, 18), 0 − 3v = (−3, −6, 9).
(c) u + v = (0, 1, 3), 2u − v = (−3, −1, −6), 3u − 2v = (−5, −2, −11), 0 − 3v = (−3, −3, −12).
14. (a) r = 2.
(b) s = 8/3.
(c) r = 3, s = 2.
16. c1 = 1, c2 = 2.
18. Impossible.
20. c1 = r, c2 = s, c3 = t.
22. If u = [u1; u2; u3], then (−1)u = [−u1; −u2; −u3] = −u.
23. Parts 2–8 of Theorem 4.1 require that we show equality of certain vectors. Since the vectors are column
matrices, this is equivalent to showing that corresponding entries of the matrices involved are equal.
Hence instead of displaying the matrices we need only work with the matrix entries. Suppose u, v, w
are in R3 with c and d real scalars. It follows that all the components of matrices involved will be real
numbers, hence when appropriate we will use properties of real numbers.
(2)
(u + (v + w))i = ui + (vi + wi )
((u + v) + w)i = (ui + vi ) + wi
Since real numbers ui + (vi + wi ) and (ui + vi ) + wi are equal for i = 1, 2, 3 we have u + (v + w) =
(u + v) + w.
(3)
(u + 0)i = ui + 0
(0 + u)i = 0 + ui
(u)i = ui
Since real numbers ui + 0, 0 + ui , and ui are equal for i = 1, 2, 3 we have u + 0 = 0 + u = u.
(4)
(u + (−u))i = ui + (−ui)
(0)i = 0
Since real numbers ui + (−ui) and 0 are equal for i = 1, 2, 3 we have u + (−u) = 0.
(5)
(c(u + v))i = c(ui + vi )
(cu + cv)i = cui + cvi
Since real numbers c(ui + vi ) and cui + cvi are equal for i = 1, 2, 3 we have c(u + v) = cu + cv.
(6)
((c + d)u)i = (c + d)ui
(cu + du)i = cui + dui
Since real numbers (c + d)ui and cui + dui are equal for i = 1, 2, 3 we have (c + d)u = cu + du.
(7)
(c(du))i = c(dui )
((cd)u)i = (cd)ui
Since real numbers c(dui ) and (cd)ui are equal for i = 1, 2, 3 we have c(du) = (cd)u.
(8)
(1u)i = 1ui
(u)i = ui
Since real numbers 1ui and ui are equal for i = 1, 2, 3 we have 1u = u.
The proof for vectors in R2 is obtained by letting i be only 1 and 2.
Section 4.2, p. 196
1. (a) The polynomials t² + t and −t² − 1 are in P2, but their sum (t² + t) + (−t² − 1) = t − 1 is not in P2.
(b) No, since 0(t² + 1) = 0 is not in P2.
2. (a) No.
(b) Yes.
(c) O = [0 0; 0 0].
(d) Yes. If A = [a b; c d] is in V, then abcd = 0. Let −A = [−a −b; −c −d]. Then A + (−A) = [0 0; 0 0], and −A is in V since (−a)(−b)(−c)(−d) = abcd = 0.
(e) No. V is not closed under scalar multiplication.
4. No, since V is not closed under scalar multiplication. For example, v = [2; 4] is in V, but (1/2)v = [1; 2] is not in V.
5. Let u = [u1; u2; . . . ; un], v = [v1; v2; . . . ; vn], w = [w1; w2; . . . ; wn].
(1) For each i = 1, . . . , n, the ith component of u + v is ui + vi , which equals the ith component
vi + ui of v + u.
(2) For each i = 1, . . . , n, ui + (vi + wi ) = (ui + vi ) + wi .
(3) For each i = 1, . . . , n, ui + 0 = 0 + ui = ui .
(4) For each i = 1, . . . , n, ui + (−ui) = (−ui) + ui = 0.
(5) For each i = 1, . . . , n, c(ui + vi ) = cui + cvi .
(6) For each i = 1, . . . , n, (c + d)ui = cui + dui .
(7) For each i = 1, . . . , n, c(dui ) = (cd)ui .
(8) For each i = 1, . . . , n, 1 ui = ui .
6. P is a vector space.
(a) Let p(t) and q(t) be polynomials not both zero. Suppose the larger of their degrees is n. Then p(t) + q(t) and cp(t) are computed as in Example 5. The properties of Definition 4.4 are verified as in Example 5.
8. Property 6.
10. Properties 4 and (b).
12. The vector 0 is the real number 1, and if u is a vector (that is, a positive real number), then −u is 1/u.
13. The vector 0 in V is the constant zero function.
14. Verify the properties in Definition 4.4.
15. Verify the properties in Definition 4.4.
16. No.
17. No. The zero element for ⊕ would have to be the real number 1, but then u = 0 has no negative v such that u ⊕ v = 0 ⊕ v = 1. Thus (4) fails to hold. (5) fails since c ⊙ (u ⊕ v) = c + uv ≠ (c + u)(c + v) = c ⊙ u ⊕ c ⊙ v. Etc.
18. No. For example, (1) fails since u ⊕ v = 2u + v ≠ 2v + u = v ⊕ u.
19. Let 0₁ and 0₂ be zero vectors. Then 0₁ ⊕ 0₂ = 0₁ and 0₁ ⊕ 0₂ = 0₂. So 0₁ = 0₂.
20. Let u1 and u2 be negatives of u. Then u ⊕ u1 = 0 and u ⊕ u2 = 0. So u ⊕ u1 = u ⊕ u2. Then
u1 ⊕ (u ⊕ u1) = u1 ⊕ (u ⊕ u2)
(u1 ⊕ u) ⊕ u1 = (u1 ⊕ u) ⊕ u2
0 ⊕ u1 = 0 ⊕ u2
u1 = u2.
21. (b) c ⊙ 0 = c ⊙ (0 ⊕ 0) = c ⊙ 0 ⊕ c ⊙ 0, so c ⊙ 0 = 0.
(c) Let c ⊙ u = 0. If c ≠ 0, then (1/c) ⊙ (c ⊙ u) = (1/c) ⊙ 0 = 0. Now (1/c) ⊙ (c ⊙ u) = ((1/c)c) ⊙ u = 1 ⊙ u = u, so u = 0.
22. Verify as for Exercise 9. Also, each continuous function is a real valued function.
23. v ⊕ (−v) = 0, so −(−v) = v.
24. If u ⊕ v = u ⊕ w, add −u to both sides.
25. If a ⊙ u = b ⊙ u, then (a − b) ⊙ u = 0. Now use (c) of Theorem 4.2.
Section 4.3, p. 205
2. Yes.
4. No.
6. (a) and (c).
8. (a).
10. (c).
12. (a) Let
A = [a1 0 b1; 0 c1 0; d1 0 e1]  and  B = [a2 0 b2; 0 c2 0; d2 0 e2]
be any vectors in W. Then
A + B = [a1+a2 0 b1+b2; 0 c1+c2 0; d1+d2 0 e1+e2]
is in W. Moreover, if k is a scalar, then
kA = [ka1 0 kb1; 0 kc1 0; kd1 0 ke1]
is in W. Hence, W is a subspace of M33.
Alternate solution: Observe that every vector in W can be written as
[a 0 b; 0 c 0; d 0 e] = a[1 0 0; 0 0 0; 0 0 0] + b[0 0 1; 0 0 0; 0 0 0] + c[0 0 0; 0 1 0; 0 0 0] + d[0 0 0; 0 0 0; 1 0 0] + e[0 0 0; 0 0 0; 0 0 1],
so W consists of all linear combinations of five fixed vectors in M33. Hence, W is a subspace of M33.
14. We have
Az = [a b; c d][1; 1] = [a + b; c + d],
so A is in W if and only if a + b = 0 and c + d = 0. Thus, W consists of all matrices of the form [a −a; c −c]. Now if
A1 = [a1 −a1; c1 −c1]  and  A2 = [a2 −a2; c2 −c2]
are in W, then
A1 + A2 = [a1+a2 −(a1+a2); c1+c2 −(c1+c2)]
is in W. Moreover, if k is a scalar, then
kA1 = [ka1 −(ka1); kc1 −(kc1)]
is in W. Alternatively, we can observe that every vector in W can be written as
[a −a; c −c] = a[1 −1; 0 0] + c[0 0; 1 −1],
so W consists of all linear combinations of two fixed vectors in M22. Hence, W is a subspace of M22.
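Under the reading above (z = (1, 1)ᵀ, so W is the set of matrices whose rows sum to zero), the closure computations can be replayed numerically; the sample members of W are my own:

```python
import numpy as np

# z = (1, 1)^T: W is the set of 2 x 2 matrices A with A z = 0,
# i.e. matrices whose rows sum to zero.
z = np.array([1.0, 1.0])
A1 = np.array([[3.0, -3.0], [5.0, -5.0]])   # sample members of W
A2 = np.array([[-2.0, 2.0], [7.0, -7.0]])
sums = [(A1 + A2) @ z, (4.0 * A1) @ z]
print(sums)  # both [0. 0.]: W is closed under + and scalar multiples
```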
16. (a) and (b).
18. (b) and (c).
20. (a), (b), (c), and (d).
21. Use Theorem 4.3.
22. Use Theorem 4.3.
23. Let x1 and x2 be solutions to Ax = b. Then A(x1 + x2) = Ax1 + Ax2 = b + b = 2b, which equals b only if b = 0.
24. {0}.
25. Since
[1 2 1; 1 0 −1; 2 6 4][t; −t; t] = [0; 0; 0],
it follows that [t; −t; t] is in the null space of A.
26. We have cx0 + dx0 = (c + d)x0 is in W , and if r is a scalar then r(cx0 ) = (rc)x0 is in W .
27. No, it is not a subspace. Let x be in W, so Ax ≠ 0. Letting y = −x, we have that y is also in W since Ay = −Ax ≠ 0. However, A(x + y) = A0 = 0, so x + y does not belong to W.
28. Let V be a subspace of R¹ which is not the zero subspace and let v ≠ 0 be any vector in V. If u is any nonzero vector in R¹, then u = (u/v)v, so R¹ is a subset of V. Hence, V = R¹.
29. Certainly {0} and R2 are subspaces of R2 . If u is any nonzero vector then span {u} is a subspace of
R2 . To show this, observe that span {u} consists of all vectors in R2 that are scalar multiples of u. Let
v = cu and w = du be in span {u} where c and d are any real numbers. Then v +w = cu+du = (c+d)u
is in span {u} and if k is any real number, then k v = k (cu) = (kc)u is in span {u}. Then by Theorem
4.3, span {u} is a subspace of R2 .
To show that these are the only subspaces of R2 we proceed as follows. Let W be any subspace of R2 .
Since W is a vector space in its own right, it contains the zero vector 0. If W = {0}, then W contains a
nonzero vector u. But then by property (b) of Denition 4.4, W must contain every scalar multiple of
u. If every vector in W is a scalar multiple of u then W is span {u}. Otherwise, W contains span {u}
and another vector which is not a multiple of u. Call this other vector v. It follows that W contains
span {u, v}. But in fact span {u, v} = R2 . To show this, let y be any vector in R2 and let
u = [u1; u2], v = [v1; v2], and y = [y1; y2].
We must show there are scalars c1 and c2 such that c1u + c2v = y. This equation leads to the linear system
[u1 v1; u2 v2][c1; c2] = [y1; y2].
Consider the transpose of the coefficient matrix:
[u1 v1; u2 v2]ᵀ = [u1 u2; v1 v2].
This matrix is row equivalent to I2 since its rows are not multiples of each other. Therefore the matrix is nonsingular. It follows that the coefficient matrix is nonsingular and hence the linear system has a solution. Therefore span {u, v} = R2, as required, and hence the only subspaces of R2 are {0}, R2, or the sets of scalar multiples of a single nonzero vector.
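The rank argument in Exercise 29 can be illustrated numerically; u, v, and y below are sample values, not taken from the text:

```python
import numpy as np

# u and v are not multiples of each other, so [u v] is nonsingular
# and c1*u + c2*v = y is solvable for every y in R^2.
u = np.array([1.0, 3.0])
v = np.array([2.0, 1.0])
y = np.array([4.0, 7.0])
M = np.column_stack((u, v))   # coefficient matrix
c = np.linalg.solve(M, y)     # the scalars c1, c2
print(np.allclose(c[0] * u + c[1] * v, y))  # True
```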
30. (b) Use Exercise 25. The depicted set represents all scalar multiples of a nonzero vector, hence is a
subspace.
31. We have
abc
ab
=
a00
a0
32. Every vector in W is of the form
a+b
1
=a
0
1
0
0
1
0
+b
0
0
1
0
1
= aw1 + bw2 .
0
[a b; b c], which can be written as
[a b; b c] = a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1] = av1 + bv2 + cv3,
where
v1 = [1 0; 0 0], v2 = [0 1; 1 0], and v3 = [0 0; 0 1].
34. (a) and (c).
35. (a) The line l0 consists of all vectors of the form
[x; y; z] = t[u; v; w].
Use Theorem 4.3.
(b) The line l through the point P0(x0, y0, z0) consists of all vectors of the form
[x; y; z] = [x0; y0; z0] + t[u; v; w].
If P0 is not the origin, the conditions of Theorem 4.3 are not satisfied.
36. (d)
38. (a) x = 3 + 4t, y = 4 − 5t, z = 2 + 2t.
(b) x = 3 − 2t, y = 2 + 5t, z = 4 + t.
42. Use matrix multiplication cA where c is a row vector containing the coecients and matrix A has rows
that are the vectors from Rn .
Section 4.4, p. 215
2. (a) 1 does not belong to span S .
(b) Span S consists of all vectors of the form [a; 0], where a is any real number. Thus, the vector [0; 1] is not in span S.
(c) Span S consists of all vectors of M22 of the form [a b; b a], where a and b are any real numbers. Thus, the vector [1 2; 3 4] is not in span S.
4. (a) Yes.
(b) Yes.
(c) No.
(d) No.
6. (d).
8. (a) and (c).
10. Yes.
12. [0; 2; 1; 0], [1; 0; 0; 1].
13. Every vector A in W is of the form
A = [a b; c −a],
where a, b, and c are any real numbers. We have
[a b; c −a] = a[1 0; 0 −1] + b[0 1; 0 0] + c[0 0; 1 0],
so A is in span S. Thus, every vector in W is in span S. Hence, span S = W.
14. S = {[0 1 0; 0 0 0; 0 0 0], [0 0 1; 0 0 0; 0 0 0], [0 0 0; 1 0 0; 0 0 0], [0 0 0; 0 0 1; 0 0 0], [0 0 0; 0 0 0; 1 0 0], [0 0 0; 0 0 0; 0 1 0]}.
16. From Exercise 43 in Section 1.3, we have Tr(AB) = Tr(BA), so Tr(AB − BA) = Tr(AB) − Tr(BA) = 0. Hence, span T is a subset of the set S of all n × n matrices with trace 0. However, S is a proper subset of Mnn.
Section 4.5, p. 226
1. We form Equation (1):
c1[2; 1; 3] + c2[3; 1; 2] + c3[1; 0; −1] = [0; 0; 0],
which has nontrivial solutions. Hence, S is linearly dependent.
2. We form Equation (1):
c1[1; 2; 1] + c2[0; 1; 1] + c3[2; 0; 1] = [0; 0; 0],
which has only the trivial solution. Hence, S is linearly independent.
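Both conclusions can be verified by a rank computation (the vectors are as reconstructed above; the minus sign in (1, 0, −1) is part of that reconstruction):

```python
import numpy as np

# Columns are the vectors of Exercises 1 and 2.
S1 = np.column_stack(([2, 1, 3], [3, 1, 2], [1, 0, -1])).astype(float)
S2 = np.column_stack(([1, 2, 1], [0, 1, 1], [2, 0, 1])).astype(float)
r1 = int(np.linalg.matrix_rank(S1))  # 2 < 3: linearly dependent
r2 = int(np.linalg.matrix_rank(S2))  # 3: linearly independent
print(r1, r2)  # 2 3
```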
4. No.
6. Linearly dependent.
8. Linearly independent.
10. Yes.
12. (b) and (c) are linearly independent, (a) is linearly dependent.
46
11
10
03
=3
+1
+1
.
86
21
02
21
14. Only (d) is linearly dependent: cos 2t = cos² t − sin² t.
16. c = 1.
18. Suppose that {u, v} is linearly dependent. Then c1u + c2v = 0, where c1 and c2 are not both zero. Say c2 ≠ 0. Then v = −(c1/c2)u. Conversely, if v = ku, then ku − 1v = 0. Since the coefficient of v is nonzero, {u, v} is linearly dependent.
19. Let S = {v1, v2, . . . , vk} be linearly dependent. Then a1v1 + a2v2 + · · · + akvk = 0, where at least one of the coefficients a1, a2, . . . , ak is not zero. Say that aj ≠ 0. Then
vj = −(a1/aj)v1 − (a2/aj)v2 − · · · − (aj−1/aj)vj−1 − (aj+1/aj)vj+1 − · · · − (ak/aj)vk.
20. Suppose a1 w1 + a2 w2 + a3 w3 = a1 (v1 + v2 + v3 ) + a2 (v2 + v3 ) + a3 v3 = 0. Since {v1 , v2 , v3 } is
linearly independent, a1 = 0, a1 + a2 = 0 (and hence a2 = 0), and a1 + a2 + a3 = 0 (and hence a3 = 0).
Thus {w1 , w2 , w3 } is linearly independent.
21. Form the linear combination
c1 w1 + c2 w2 + c3 w3 = 0
which gives
c1 (v1 + v2 ) + c2 (v1 + v3 ) + c3 (v2 + v3 ) = (c1 + c2 )v1 + (c1 + c3 )v2 + (c2 + c3 )v3 = 0.
Since S is linearly independent we have
c1 + c2 = 0
c1 + c3 = 0
c2 + c3 = 0,
a linear system whose augmented matrix is [1 1 0 | 0; 1 0 1 | 0; 0 1 1 | 0]. The reduced row echelon form is [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0],
thus c1 = c2 = c3 = 0, which implies that {w1, w2, w3} is linearly independent.
22. Form the linear combination
c1 w1 + c2 w2 + c3 w3 = 0
which gives
c1 v1 + c2 (v1 + v3 ) + c3 (v1 + v2 + v3 ) = (c1 + c2 + c3 )v1 + (c2 + c3 )v2 + c3 v3 = 0.
Since S is linearly dependent, this last equation is satised with c1 + c2 + c3 , c3 , and c2 + c3 not all
being zero. This implies that c1 , c2 , and c3 are not all zero. Hence, {w1 , w2 , w3 } is linearly dependent.
23. Suppose {v1, v2, v3} is linearly dependent. Then one of the vj's is a linear combination of the preceding vectors in the list. It must be v3, since {v1, v2} is linearly independent. Thus v3 belongs to span {v1, v2}. Contradiction.
24. Form the linear combination
c1 Av1 + c2 Av2 + + cn Avn = A(c1 v1 + c2 v2 + + cn vn ) = 0.
Since A is nonsingular, Theorem 2.9 implies that
c1 v1 + c2 v2 + + cn vn = 0.
Since {v1 , v2 , . . . , vn } is linearly independent, we have c1 = c2 = = cn = 0. Hence, {Av1 , Av2 , . . . ,
Avn } is linearly independent.
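Exercise 24 can be illustrated numerically; the nonsingular A and the basis vectors below are sample choices:

```python
import numpy as np

# A nonsingular sample A applied to the standard basis of R^2;
# the images still have rank 2, i.e. remain linearly independent.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # det = 1, nonsingular
V = np.eye(2)                # columns v1, v2
r = int(np.linalg.matrix_rank(A @ V))
print(r)  # 2
```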
25. Let A have k nonzero rows, which we denote by v1, v2, . . . , vk, where
vi = [ai1 ai2 · · · ain].
Let c1 < c2 < · · · < ck be the columns in which the leading entries of the k nonzero rows occur. Thus
vi = [0 0 · · · 0 1 ai,ci+1 · · · ain],
that is, aij = 0 for j < ci and ai,ci = 1. If a1v1 + a2v2 + · · · + akvk = [0 0 · · · 0], examining the c1th entry on the left yields a1 = 0, examining the c2th entry yields a2 = 0, and so forth. Therefore v1, v2, . . . , vk are linearly independent.
26. Let vj = Σ_{i=1}^{k} aij ui. Then
w = Σ_{j=1}^{m} bj vj = Σ_{j=1}^{m} bj Σ_{i=1}^{k} aij ui = Σ_{i=1}^{k} (Σ_{j=1}^{m} aij bj) ui.
27. In R1 let S1 = {1} and S2 = {1, 0}. S1 is linearly independent and S2 is linearly dependent.
28. See Exercise 27 above.
29. In Matlab the command null(A) produces an orthonormal basis for the null space of A.
31. Each set of two vectors is linearly independent since they are not scalar multiples of one another. In
Matlab the reduced row echelon form command implies sets (a) and (b) are linearly independent
while (c) is linearly dependent.
Section 4.6, p. 242
2. (c).
4. (d).
6. If
c1[1 1; 0 0] + c2[0 0; 1 1] + c3[1 0; 0 1] + c4[0 1; 1 1] = [0 0; 0 0],
then
[c1+c3 c1+c4; c2+c4 c2+c3+c4] = [0 0; 0 0].
The first three entries imply c3 = −c1, c4 = −c1, and c2 = c1. The fourth entry then gives c2 + c3 + c4 = c2 − c2 − c2 = −c2 = 0. Thus ci = 0 for i = 1, 2, 3, 4. Hence the set of four matrices is linearly independent. By Theorem 4.12, it is a basis.
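Flattening each matrix of Exercise 6 into a 4-vector reduces the basis check to a rank computation:

```python
import numpy as np

# The four matrices (as reconstructed above), flattened to 4-vectors;
# rank 4 means they are independent, hence a basis of M22.
mats = [np.array([[1, 1], [0, 0]]),
        np.array([[0, 0], [1, 1]]),
        np.array([[1, 0], [0, 1]]),
        np.array([[0, 1], [1, 1]])]
M = np.column_stack([m.flatten() for m in mats]).astype(float)
r = int(np.linalg.matrix_rank(M))
print(r)  # 4
```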
8. (b) is a basis for R3, and the given vector equals 1·v1 + 2·v2 − 1·v3, where v1, v2, v3 are the vectors of (b).
10. (a) forms a basis: 5t² − 3t + 8 = −3(t² + t) + 0t² + 8(t² + 1).
12. A possible answer is
1
0
1
16. 0
0
14.
0
01
,
1
10
0
00
, 1
00
0
00
1 1 0 1 , 0 1 2 1 , 0 0 3 1 ; dim W = 3.
.
000
000
000
001
10
0 0 , 0 0 0 , 0 1 0 , 0 0 1 , 0 0 0 .
001
010
000
100
00
18. A possible answer: {cos² t, sin² t} is a basis for W; dim W = 2.
0
1
0
5
0
1
0
1
20. (a) 1 , 0 , (b) , , (c) 1 , 0 .
1 1
0
1
0
1
0
1
22. {t³ + 5t² + t, 3t² − 2t + 1}.
24. (a) 3.
(b) 2.
26. (a) 2.
(b) 3.
(c) 3.
(d) 3.
1
0
1
0 , 0 , 1 .
28. (a) A possible answer is
2
0
0
0
1
1
0 , 1 , 0 .
(b) A possible answer is
2
3
0
30. {[1 0 0; 0 0 0], [0 1 0; 0 0 0], [0 0 1; 0 0 0], [0 0 0; 1 0 0], [0 0 0; 0 1 0], [0 0 0; 0 0 1]};
dim M23 = 6. dim Mmn = mn.
32. 2.
34. The set of all polynomials of the form at³ + bt² + (b − a), where a and b are any real numbers.
35. We show that {cv1, v2, . . . , vk} is also a set of k = dim V vectors which spans V. If v = Σ_{i=1}^{k} ai vi is a vector in V, then
v = (a1/c)(cv1) + Σ_{i=2}^{k} ai vi.
36. Let d = max{d1, d2, . . . , dk}. The polynomial t^{d+1} + t^d + · · · + t + 1 cannot be written as a linear combination of polynomials of degrees at most d.
37. If dim V = n, then V has a basis consisting of n vectors. Theorem 4.10 then implies the result.
38. Let S = {v1 , v2 , . . . , vk } be a minimal spanning set for V . From Theorem 4.9, S contains a basis T
for V . Since T spans S and S is a spanning set for V , T = S . It follows from Corollary 4.1 that k = n.
39. Let T = {v1 , v2 , . . . , vm }, m > n be a set of vectors in V . Since m > n, Theorem 4.10 implies that T
is linearly dependent.
40. Let dim V = n and let S be a set of vectors in V containing m elements, m < n. Assume that S spans
V . By Theorem 4.9, S contains a basis T for V . Then T must contain n elements. This contradiction
implies that S cannot span V .
41. Let dim V = n. First observe that any set of vectors in W that is linearly independent in W is linearly
independent in V . If W = {0}, then dim W = 0 and we are done. Suppose now that W is a nonzero
subspace of V . Then W contains a nonzero vector v1 , so {v1 } is linearly independent in W (and in
V ). If span {v1 } = W , then dim W = 1 and we are done. If span {v1 } = W , then there exists a
vector v2 in W which is not in span {v1 }. Then {v1 , v2 } is linearly independent in W (and in V ).
Since dim V = n, no linearly independent set of vectors in V can have more than n vectors. Hence, no
linearly independent set of vectors in W can have more than n vectors. Continuing the above process
we find a basis for W containing at most n vectors. Hence dim W ≤ dim V.
42. Let dim V = dim W = n. Let S = {v1 , v2 , . . . , vn } be a basis for W . Then S is also a basis for V , by
Theorem 4.13. Hence, V = W .
43. Let V = R3 . The trivial subspaces of any vector space are {0} and V . Hence {0} and R3 are subspaces
of R3 . In Exercise 35 in Section 4.3 we showed that any line through the origin is a subspace of R3 .
Thus we need only show that any plane passing through the origin is a subspace of R3 . Any plane
in R3 through the origin has an equation of the form ax + by + cz = 0. Sums and scalar multiples of any point on the plane will also satisfy this equation, hence the plane is a subspace of R3. To show that {0}, V, lines,
and planes through the origin are the only subspaces of R3 we argue in a manner similar to that given
in Exercise 29 in Section 4.3 which considered a similar problem in R2 . Let W be any subspace of R3 .
T
Hence W contains the zero vector 0. If W = {0} then it contains a nonzero vector v = a b c
where at least one of a, b, or c is not zero. Since W is a subspace it contains span {v}. If W =
span {v} then W is a line in R3 through the origin. Otherwise, there exists a vector u in W which
is not in span {v}. Hence {v, u} is a linearly independent set. But then W contains span {v, u}. If
W = span {v, u} then W is a plane through the origin. Otherwise there is a vector x in W that is
not in span {v, u}. Hence {v, u, x} is a linearly independent set in W and W contains span {v, u, x}.
But {v, u, x} is a maximal linearly independent set in R3 , hence a basis for R3 . It follows in this case
that W = R3 .
44. Let S = {v1 , v2 , . . . , vn }. Since every vector in V can be written as a linear combination of the vectors
in S , it follows that S spans V . Suppose now that
a1 v1 + a2 v2 + + an vn = 0.
We also have
0v1 + 0v2 + + 0vn = 0.
From the hypothesis it then follows that a1 = 0, a2 = 0, . . . , an = 0. Hence, S is a basis for V .
45. (a) If span S ≠ V, then there exists a vector v in V that is not in span S. Vector v cannot be the zero vector, since the zero vector is in every subspace and hence in span S. Hence S1 = {v1, v2, . . . , vn, v} is a linearly independent set. This follows since vi, i = 1, . . . , n, are linearly independent and v is not a linear combination of the vi. But this contradicts Corollary 4.4. Hence our assumption that span S ≠ V is incorrect. Thus span S = V. Since S is linearly independent and spans V it is a basis for V.
(b) We want to show that S is linearly independent. Suppose S is linearly dependent. Then there is a subset of S consisting of at most n − 1 vectors which is a basis for V. (This follows from Theorem 4.9.) But this contradicts dim V = n. Hence our assumption is false and S is linearly independent. Since S spans V and is linearly independent it is a basis for V.
46. Let T = {v1 , v2 , . . . , vk } be a maximal independent subset of S , and let v be any vector in S . Since
T is a maximal independent subset then {v1 , v2 , . . . , vk , v} is linearly dependent, and from Theorem
4.7 it follows that v is a linear combination of {v1 , v2 , . . . , vk }, that is, of the vectors in T . Since S
spans V, we find that T also spans V and is thus a basis for V.
47. If A is nonsingular then the linear system Ax = 0 has only the trivial solution x = 0. Let
c1 Av1 + c2 Av2 + + cn Avn = 0.
Then A(c1 v1 + + cn vn ) = 0 and by the opening remark we must have
c1 v1 + c2 v2 + + cn vn = 0.
However since {v1 , v2 , . . . , vn } is linearly independent it follows that c1 = c2 = = cn = 0. Hence
{Av1 , Av2 , . . . , Avn } is linearly independent.
48. Since A is singular, Theorem 2.9 implies that the homogeneous system Ax = 0 has a nontrivial solution
x. Since {v1 , v2 , . . . , vn } is a linearly independent set of vectors in Rn , it is a basis for Rn , so
x = c1 v1 + c2 v2 + + cn vn .
Observe that x ≠ 0, so c1, c2, . . . , cn are not all zero. Then
0 = Ax = A(c1 v1 + c2 v2 + + cn vn ) = c1 (Av1 ) + c2 (Av2 ) + + cn (Avn ).
Hence, {Av1 , Av2 , . . . , Avn } is linearly dependent.
Section 4.7, p. 251
2. (a) x = r + 2s, y = r, z = s, where r, s are any real numbers.
(b) Let x1 = [1; 1; 0], x2 = [2; 0; 1]. Then
[r + 2s; r; s] = r[1; 1; 0] + s[2; 0; 1] = rx1 + sx2.
(c) [Figure: the vectors x1 and x2 drawn from the origin O in xyz-space.]
4. [1; 1; 0; 0; 0], [4; 0; 6; 1; 0], [0; 0; 1; 0; 1]; dimension = 3.
6. [1; 3; 2; 1; 1], [4; 1; 2; 0; 0]; dimension = 2.
8. No basis; dimension = 0.
17
2
1 0
0 5
10. , ; dimension = 2.
0 1
0 0
0
0
0
0
12. [3; 1; 1; 0], [6; 3; 0; 1].
14. [3; 1].
16. No basis.
18. λ = 3, 2.
20. λ = 1, 2, 2.
22. x = xp + xh, where xp = [0; 1; 0] and xh = r[2; 0; 1], r any number.
23. Since each vector in S is a solution to Ax = 0, we have Axi = 0 for i = 1, 2, . . . , n. The span of S
consists of all possible linear combinations of the vectors in S . Hence
y = c1 x1 + c2 x2 + + ck xk
represents an arbitrary member of span S . We have
Ay = c1 Ax1 + c2 Ax2 + + ck Axk = c1 0 + c2 0 + + ck 0 = 0.
24. If A has a row or column of zeros, then A is singular (Exercise 46 in Section 1.5), so by Theorem 2.9,
the homogeneous system Ax = 0 has a nontrivial solution.
25. (a) Let A = aij . Since the dimension of the null space of A is 3, the null space of A is R3 . Then the
natural basis {e1 , e2 , e3 } is a basis for the null space of A. Forming Ae1 = 0, Ae2 = 0, Ae3 = 0,
we nd that all the columns of A must be zero. Hence, A = O.
(b) Since Ax = 0 has a nontrivial solution, the null space of A contains a nonzero vector, so the
dimension of the null space of A is not zero. If this dimension is 3, then by part (a), A = O, a
contradiction. Hence, the dimension is either 1 or 2.
26. Since the reduced row echelon forms of matrices A and B are the same it follows that the solutions to
the linear systems Ax = 0 and B x = 0 are the same set of vectors. Hence the null spaces of A and B
are the same.
Section 4.8, p. 267
2. [3; 2; 1].
4. [1; 1; 3].
6. [1; 2; 2; 4].
8. (3, 1, 3).
10. t² − 3t + 2.
12. [1 1; 2 1].
13. (a) To show S is a basis for R2 we show that the set is linearly independent; since dim R2 = 2 we can then conclude it is a basis. The linear combination
c1[1; 1] + c2[1; 2] = [0; 0]
leads to the augmented matrix
[1 1 | 0; 1 2 | 0].
The reduced row echelon form of this homogeneous system is [I2 | 0], so the set S is linearly independent.
(b) Find c1 and c2 so that
c1[1; 1] + c2[1; 2] = [2; −6].
The corresponding linear system has augmented matrix
[1 1 | 2; 1 2 | −6].
The reduced row echelon form is
[1 0 | 10; 0 1 | −8],
so [v]_S = [10; −8].
(c) Av1 = [0.3; 0.3] = 0.3[1; 1] = 0.3v1, so λ1 = 0.3.
(d) Av2 = [0.25; 0.50] = 0.25[1; 2] = 0.25v2, so λ2 = 0.25.
(e) v = 10v1 − 8v2, so A^n v = 10A^n v1 − 8A^n v2 = 10(λ1)^n v1 − 8(λ2)^n v2.
(f) As n increases, the limit of the sequence is the zero vector.
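The matrix A itself did not survive extraction; the sketch below assumes only the stated eigenpairs (λ1 = 0.3 on v1 = (1, 1), λ2 = 0.25 on v2 = (1, 2)), builds such an A, and confirms that Aⁿv tends to the zero vector:

```python
import numpy as np

# Build an A with eigenvalue 0.3 on v1 = (1, 1) and 0.25 on v2 = (1, 2).
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])      # columns v1, v2
D = np.diag([0.3, 0.25])
A = P @ D @ np.linalg.inv(P)
v = np.array([2.0, -6.0])       # v = 10*v1 - 8*v2
w = v.copy()
for _ in range(50):             # w = A^50 v
    w = A @ w
print(np.linalg.norm(w))        # essentially 0
```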
14. (a) Since dim R2 = 2, we show that S is a linearly independent set. The augmented matrix corresponding to
1
2
0
c1
+ c2
=
1
3
0
is
1 2
1
3
0
. The reduced row echelon form is
0
I2
0
so S is a linearly independent set.
(b) Set
v=
4
1
2
= c1
+ c2
.
3
1
3
Solving for c1 and c2 we nd c1 = 18 and c2 = 7. Thus v
(c) Av1 =
1 2
3
4
1 2
3
4
=
18
.
7
1
1
=
, so 1 = 1.
1
1
(d) Av2 =
S
2
4
2
=
=2
, so 2 = 2.
3
6
3
(e) An v = An [18v1 + 7v2 ] = 18An v1 + 7An v2 = 18(1)n v1 + 7(2)n v2 = 18v1 + 7(2n )v2 .
(f) As n increases the sequence becomes unbounded since limn An v = 18v1 + 7v2 limn 2n .
9
1
2 5 2
16. (a) v T = 8 , w T = 2
(b) PS T = 1 6 2 .
28
13
1
2
1
2
18
1 , w = 17 .
(c) v S =
(d) Same as (c).
S
3
8
2
1 2
(e) QT S = 1
(f) Same as (a).
0 2 .
4 1
7
4
0
0
1
0
18. (a) v T = 2 , w T = 8 .
(b) PS T = 1 1
0 .
2
2
1
1
2
1
1
6
2
2
8
3 , w = 4 .
(c) v S =
S
2
2
120
(e) QT S = r 1 0 0 .
011
(d) Same as (c).
(f)
Same as (a).
20. [5; 3].
22. [4; 1; 3].
24. T = {[2; 2; 0], [3; 1; 0], [3; 1; 3]}.
26. T = {[2; 1], [5; 3]}.
28. (a) V is isomorphic to itself. Let L : V → V be defined by L(v) = v for v in V; that is, L is the identity map.
(b) If V is isomorphic to W, then there is an isomorphism L : V → W which is a one-to-one and onto mapping. Then L⁻¹ : W → V exists. Verify that L⁻¹ is one-to-one and onto and is also an isomorphism. This is all done in the proof of Theorem 6.7.
(c) If U is isomorphic to V, let L1 : U → V be an isomorphism. If V is isomorphic to W, let L2 : V → W be an isomorphism. Let L : U → W be defined by L(v) = L2(L1(v)) for v in U. Verify that L is an isomorphism.
29. (a) L(0V) = L(0V + 0V) = L(0V) + L(0V), so L(0V) = 0W.
(b) L(v − w) = L(v + (−1)w) = L(v) + L((−1)w) = L(v) + (−1)L(w) = L(v) − L(w).
30. By Theorem 3.15, Rn and Rm are isomorphic if and only if their dimensions are equal.
31. Let L : Rn → Rn be defined by
L([a1 a2 · · · an]) = [a1; a2; . . . ; an].
Verify that L is an isomorphism.
32. Let L : P2 → R3 be defined by L(at² + bt + c) = [a; b; c]. Verify that L is an isomorphism.
33. (a) Let L : M22 → R4 be defined by
L([a b; c d]) = [a; b; c; d].
Verify that L is an isomorphism.
(b) dim M22 = 4.
34. If v is any vector in V, then v = ae^t + be^{−t}, where a and b are scalars. Then let L : V → R2 be defined by L(v) = [a; b]. Verify that L is an isomorphism.
35. From Exercise 18 in Section 4.6, V = span S has a basis {sin2 t, cos2 t} hence dim V = 2. It follows
from Theorem 4.14 that V is isomorphic to R2 .
36. Let V and W be isomorphic under the isomorphism L. If V1 is a subspace of V, then W1 = L(V1) is a subspace of W which is isomorphic to V1.
37. Let v = w. The coordinates of a vector relative to basis S are the coefficients used to express the vector in terms of the members of S. A vector has a unique expression in terms of the vectors of a basis, hence it follows that [v]_S must equal [w]_S. Conversely, let
[v]_S = [w]_S = [a1; a2; . . . ; an];
then v = a1v1 + a2v2 + · · · + anvn and w = a1v1 + a2v2 + · · · + anvn. Hence v = w.
38. Let S = {v1, v2, . . . , vn} and v = a1v1 + a2v2 + · · · + anvn, w = b1v1 + b2v2 + · · · + bnvn. Then
[v]_S = [a1; a2; . . . ; an]  and  [w]_S = [b1; b2; . . . ; bn].
We also have
v + w = (a1 + b1)v1 + (a2 + b2)v2 + · · · + (an + bn)vn
cv = (ca1)v1 + (ca2)v2 + · · · + (can)vn,
so
[v + w]_S = [a1 + b1; a2 + b2; . . . ; an + bn] = [a1; . . . ; an] + [b1; . . . ; bn] = [v]_S + [w]_S
[cv]_S = [ca1; ca2; . . . ; can] = c[a1; . . . ; an] = c[v]_S.
39. Consider the homogeneous system MS x = 0, where x = [a1; a2; . . . ; an]. This system can then be written in terms of the columns of MS as
a1v1 + a2v2 + · · · + anvn = 0,
where vj is the jth column of MS. Since v1, v2, . . . , vn are linearly independent, we have a1 = a2 = · · · = an = 0. Thus, x = 0 is the only solution to MS x = 0, so by Theorem 2.9 we conclude that MS is nonsingular.
40. Let v be a vector in V. Then v = a1v1 + a2v2 + · · · + anvn. This last equation can be written in matrix form as
v = MS [v]_S,
where MS is the matrix whose jth column is vj. Similarly, v = MT [v]_T.
41. (a) From Exercise 40 we have
MS [v]_S = MT [v]_T.
From Exercise 39 we know that MS is nonsingular, so
[v]_S = MS⁻¹ MT [v]_T.
Equation (3) is [v]_S = P_{S←T} [v]_T, so
P_{S←T} = MS⁻¹ MT.
(b) Since MS and MT are nonsingular, MS⁻¹ is nonsingular, so P_{S←T}, as the product of two nonsingular matrices, is nonsingular.
(c) MS = [2 1 1; 0 2 1; 1 0 1], MS⁻¹ = (1/3)[2 −1 −1; 1 1 −2; −2 1 4], and P_{S←T} = MS⁻¹ MT.
42. Suppose that {[w1]_S, [w2]_S, . . . , [wk]_S} is linearly dependent. Then there exist scalars ai, i = 1, 2, . . . , k, not all zero, such that
a1[w1]_S + a2[w2]_S + · · · + ak[wk]_S = [0V]_S.
Using Exercise 38 we find that the preceding equation is equivalent to
[a1w1 + a2w2 + · · · + akwk]_S = [0V]_S.
By Exercise 37 we have
a1w1 + a2w2 + · · · + akwk = 0V.
Since the w's are linearly independent, the preceding equation is only true when all ai = 0. Hence we have a contradiction, and our assumption that the [wi]_S are linearly dependent must be false. It follows that {[w1]_S, [w2]_S, . . . , [wk]_S} is linearly independent.
43. From Exercise 42 we know that T = {[v1]_S, [v2]_S, . . . , [vn]_S} is a linearly independent set of n vectors in Rn. By Theorem 4.12, T spans Rn and is thus a basis for Rn.
Section 4.9, p. 282
2. A possible answer is {t3 , t2 , t, 1}.
4. A possible answer is {[1 0], [0 1]}.
6. (a) {(1, 0, 0, 33/7), (0, 1, 0, 23/7), (0, 0, 1, 8/7)}.
(b) {(1, 2, 1, 3), (3, 5, 2, 0), (0, 1, 2, 1)}.
0
0
1
0
1
0
8. (a) , 5 , .
4 2 0
0
0
1
10. (a) 2.
2
3
2
2
2
4
(b) , , .
3 3 2
4
2
1
(b) 2.
11. The result follows from the observation that the nonzero rows of A are linearly independent and span
the row space of A.
12. (a) 3.
(b) 2.
(c) 2.
14. (a) rank = 2, nullity = 2.
(b) rank = 4, nullity = 0.
16. (a) and (b) are consistent.
18. (b).
20. (a).
22. (a).
24. (a) 3.
(b) 3.
26. No.
28. Yes, linearly independent.
30. Yes.
32. Yes.
34. (a) 3.
(b) The six columns of A span a column space of dimension rank A, which is at most 4. Thus the six
columns are linearly dependent.
(c) The five rows of A span a row space of dimension rank A, which is at most 3. Thus the five rows
are linearly dependent.
36. (a) 0, 1, 2, 3.
(b) 3.
(c) 2.
37. S is linearly independent if and only if the n rows of A are linearly independent if and only if
rank A = n.
38. S is linearly independent if and only if the column rank of A = n if and only if rank A = n.
39. If Ax = 0 has a nontrivial solution then A is singular, rank A < n, and the columns of A are linearly
dependent, and conversely.
40. If rank A = n, then the dimension of the column space of A is n. Since the columns of A span its
column space, it follows by Theorem 4.12 that they form a basis for the column space and are thus
linearly independent. Conversely, if the columns of A are linearly independent, then the dimension of
the column space is n, so rank A = n.
41. If the rows of A are linearly independent, then rank A = n and the columns of A span Rn .
42. From the denition of reduced row echelon form, any column in which a leading one appears must be
a column of an identity matrix. Assuming that vi has its rst nonzero entry in position ji , for i =
1, 2, . . . , k, every other vector in S must have a zero in position ji . Hence if v = b1 v1 + b2 v2 + + bk vk ,
it follows that aji = bi as desired.
43. Let rank A = n. Then Corollary 4.7 implies that A is nonsingular, so x = A⁻¹b is a solution. If x1 and x2 are solutions, then Ax1 = Ax2, and multiplying both sides by A⁻¹ we have x1 = x2. Thus Ax = b has a unique solution.
Conversely, suppose that Ax = b has a unique solution for every n × 1 matrix b. Then the n linear systems Ax = e1, Ax = e2, . . . , Ax = en, where e1, e2, . . . , en are the columns of In, have solutions x1, x2, . . . , xn. Let B be the matrix whose jth column is xj. Then the n linear systems above can be written as AB = In. Hence B = A⁻¹, so A is nonsingular, and Corollary 4.7 implies that rank A = n.
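The construction in Exercise 43 can be spot-checked numerically. The sketch below (plain Python with exact rational arithmetic; the matrix A is a hypothetical example, not taken from the text) solves Ax = e1 and Ax = e2 and assembles the solutions as the columns of B, confirming AB = I2.

```python
from fractions import Fraction

def solve(A, b):
    # Gauss-Jordan elimination on the augmented matrix [A | b],
    # using exact rational arithmetic.
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

A = [[2, 1], [5, 3]]                                    # hypothetical nonsingular matrix
cols = [solve(A, e) for e in ([1, 0], [0, 1])]          # solutions of Ax = e1, Ax = e2
B = [[cols[j][i] for j in range(2)] for i in range(2)]  # solutions placed as columns
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert AB == [[1, 0], [0, 1]]                           # AB = I2, so B = A inverse
```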
44. Let Ax = b have a solution for every m × 1 matrix b. Then the columns of A span Rm. Thus there is a subset of m columns of A that is a basis for Rm, and rank A = m. Conversely, if rank A = m, then column rank A = m. Thus m columns of A are a basis for Rm, and hence all the columns of A span Rm. Since b is in Rm, it is a linear combination of the columns of A; that is, Ax = b has a solution for every m × 1 matrix b.
45. Since the rank of a matrix is the same as its row rank and column rank, the number of linearly independent rows of a matrix is the same as the number of linearly independent columns. It follows that the largest the rank can be is min{m, n}. Since m ≠ n, it must be that either the rows or the columns are linearly dependent.
46. Suppose that Ax = b is consistent. Assume that there are at least two different solutions x1 and x2. Then Ax1 = b and Ax2 = b, so A(x1 − x2) = Ax1 − Ax2 = b − b = 0. That is, Ax = 0 has a nontrivial solution, so nullity A > 0. By Theorem 4.19, rank A < n. Conversely, if rank A < n, then by Corollary 4.8, Ax = 0 has a nontrivial solution y. Suppose that x0 is a solution to Ax = b. Thus Ay = 0 and Ax0 = b. Then x0 + y is a solution to Ax = b, since A(x0 + y) = Ax0 + Ay = b + 0 = b. Since y ≠ 0, x0 + y ≠ x0, so Ax = b has more than one solution.
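The converse direction of Exercise 46 is easy to illustrate numerically (a hypothetical singular system, not from the text): given one solution x0 of Ax = b and a nontrivial solution y of Ax = 0, every vector x0 + ty also solves Ax = b.

```python
def matvec(A, x):
    # multiply matrix A (list of rows) by vector x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [2, 4]]      # hypothetical: rank 1 < n = 2, so nullity > 0
b = [3, 6]                # a consistent right-hand side
x0 = [1, 1]               # one particular solution: A x0 = (3, 6)
y = [-2, 1]               # nontrivial solution of Ax = 0
assert matvec(A, y) == [0, 0]
for t in range(5):        # a whole line of distinct solutions x0 + t*y
    x = [x0i + t * yi for x0i, yi in zip(x0, y)]
    assert matvec(A, x) == b
```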
47. The solution space is a vector space of dimension d, d 2.
48. No. If all the nontrivial solutions of the homogeneous system are multiples of each other, then the dimension of the solution space is 1. The rank of the coefficient matrix is at most 5. Since nullity = 7 − rank, nullity ≥ 7 − 5 = 2.
49. Suppose that S = {v1 , v2 , . . . , vn } spans Rn (Rn ). Then by Theorem 4.11, S is linearly independent
and hence the dimension of the column space of A is n. Thus, rank A = n. Conversely, if rank A = n,
then the set S consisting of the columns (rows) of A is linearly independent. By Theorem 4.12, S spans
Rn .
Supplementary Exercises for Chapter 4, p. 285
1. (a) The verification of Definition 4.4 follows from the properties of continuous functions and real numbers. In particular, in calculus it is shown that the sum of continuous functions is continuous and that a real number times a continuous function is again a continuous function. This verifies (a) and (b) of Definition 4.4. We demonstrate that (1) and (5) hold; (2), (3), (4), (6), (7), (8) are shown in a similar way. To show (1), let f and g belong to C[a, b]; for t in [a, b],
(f ⊕ g)(t) = f(t) + g(t) = g(t) + f(t) = (g ⊕ f)(t)
since f(t) and g(t) are real numbers and the addition of real numbers is commutative. To show (5), let c be any real number. Then
c ⊙ (f ⊕ g)(t) = c(f(t) + g(t)) = cf(t) + cg(t) = c ⊙ f(t) + c ⊙ g(t) = (c ⊙ f ⊕ c ⊙ g)(t)
since c, f(t), and g(t) are real numbers and multiplication of real numbers distributes over addition of real numbers.
(b) k = 0.
(c) Let f and g have roots at ti, i = 1, 2, . . . , n; that is, f(ti) = g(ti) = 0. It follows that f ⊕ g has roots at ti, since (f ⊕ g)(ti) = f(ti) + g(ti) = 0 + 0 = 0. Similarly, k ⊙ f has roots at ti, since (k ⊙ f)(ti) = kf(ti) = k · 0 = 0.
2. (a) Let v = (a1, a2, a3, a4) and w = (b1, b2, b3, b4) be in W. Then a4 − a3 = a2 − a1 and b4 − b3 = b2 − b1. It follows that
v + w = (a1 + b1, a2 + b2, a3 + b3, a4 + b4)
and
(a4 + b4) − (a3 + b3) = (a4 − a3) + (b4 − b3) = (a2 − a1) + (b2 − b1) = (a2 + b2) − (a1 + b1),
so v + w is in W. Similarly, if c is any real number, cv = (ca1, ca2, ca3, ca4) and
ca4 − ca3 = c(a4 − a3) = c(a2 − a1) = ca2 − ca1,
so cv is in W.
(b) Let v = (a1, a2, a3, a4) with a4 − a3 = a2 − a1 be any vector in W. We seek constants c1, c2, c3, c4 such that c1v1 + c2v2 + c3v3 + c4v4 = v, where v1, v2, v3, v4 are the vectors in S. This leads to a linear system of four equations in c1, c2, c3, c4 whose augmented matrix has right-hand side (a1, a2, a3, a4). When the augmented matrix is transformed to reduced row echelon form, the last row becomes
0 0 0 0 | a4 + a1 − a2 − a3.
Since a4 + a1 − a2 − a3 = 0, the system is consistent for any v in W. Thus W = span S.
(c) A possible answer: three of the four vectors in S.
(d) v = 2v1 + 2v2 + 2v3, where {v1, v2, v3} is the basis chosen in (c).
4. Yes.
5. (a) Let V = R², W = span {(1, 0)}, and U = span {(0, 1)}. It follows that (1, 0) + (0, 1) = (1, 1) is not in W ∪ U, and hence W ∪ U is not a subspace of V.
(b) When W is contained in U or U is contained in W.
(c) Let u and v be in W ∩ U and let c be a scalar. Since u and v are in both W and U, so is u + v. Thus u + v is in W ∩ U. Similarly, cu is in W and in U, so it is in W ∩ U.
6. If W = R³, then W contains the vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1). Conversely, if W contains the vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1), then W contains the span of these vectors, which is R³. It follows that W = R³.
7. (a) Yes.
(b) They are identical.
8. (a) m arbitrary and b = 0.
(b) r = 0.
9. Suppose that W is a subspace of V . Let u and v be in W and let r and s be scalars. Then ru and sv
are in W , so ru + sv is in W . Conversely, if ru + sv is in W for any u and v in W and any scalars r
and s, then for r = s = 1 we have u + v is in W . Also, for s = 0 we have ru is in W . Hence, W is a
subspace of V .
10. Let x and y be in W, so that Ax = λx and Ay = λy. Then
A(x + y) = Ax + Ay = λx + λy = λ(x + y).
Hence x + y is in W. Also, if r is a scalar, then A(rx) = r(Ax) = r(λx) = λ(rx), so rx is in W. Hence W is a subspace of Rⁿ.
12. a = 1.
3
1
1
1 2 2
14. (a) One possible answer: , , .
1
1
3
1
1
1
1
0
0
0 1 0
(b) One possible answer: , , .
1
2
0
0
0
1
1
3
(c) v S = 3 , v T = 1.
2
5
0
2
15. Since S is a linearly independent set, just follow the steps given in the proof of Theorem 3.10.
16. Possible answer: {(1, 0, 2), (1, 1, 1), (1, 0, 0)}.
18. (a) Possible answer: {(1, 2, 1)}.
(b) There is no basis.
19. rank AT = row rank AT = column rank A = rank A.
20. (a) Theorem 3.16 implies that row space A = row space B . Thus,
rank A = row rank A = row rank B = rank B .
(b) This follows immediately since A and B have the same reduced row echelon form.
21. (a) From the definition of a matrix product, the rows of AB are linear combinations of the rows of B. Hence the row space of AB is a subspace of the row space of B, and it follows that rank(AB) ≤ rank B. From Exercise 19 above, rank(AB) = rank((AB)ᵀ) = rank(BᵀAᵀ). A similar argument shows that rank(BᵀAᵀ) ≤ rank Aᵀ = rank A. It follows that rank(AB) ≤ min{rank A, rank B}.
(b) One such pair of matrices is A = [1 0; 0 0] and B = [0 0; 0 1].
(c) Since A = (AB)B⁻¹, by (a), rank A ≤ rank(AB). But (a) also implies that rank(AB) ≤ rank A, so rank(AB) = rank A.
(d) Since B = A⁻¹(AB), by (a), rank B ≤ rank(AB). But (a) also implies that rank(AB) ≤ rank B, so rank(AB) = rank B.
(e) rank (P AQ) = rank (P A), by part (c), which is rank A, by part (d).
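The inequality rank(AB) ≤ min{rank A, rank B} of part (a) can be illustrated numerically. The sketch below (plain Python; the rank routine and the matrices A, B are illustrative assumptions, not from the text) computes ranks by exact row reduction.

```python
from fractions import Fraction

def rank(M):
    # row-reduce a copy of M and count the pivot rows (exact arithmetic)
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 0], [0, 1, 1]]        # hypothetical 2 x 3 matrix, rank 2
B = [[1, 1], [0, 0], [2, 2]]      # hypothetical 3 x 2 matrix, rank 1
AB = matmul(A, B)
assert rank(AB) <= min(rank(A), rank(B))   # here rank(AB) = 1
```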
22. (a) Let q = dim NS(A) and let S = {v1, v2, . . . , vq} be a basis for NS(A). We can extend S to a basis for Rⁿ: let T = {w1, w2, . . . , wr} be a linearly independent subset of Rⁿ such that {v1, . . . , vq, w1, . . . , wr} is a basis for Rⁿ. Then r + q = n. We need only show that r = rank A. Every vector v in Rⁿ can be written as
v = c1v1 + ··· + cqvq + b1w1 + ··· + brwr,
and since Avi = 0, Av = b1Aw1 + ··· + brAwr. Since v is an arbitrary vector in Rⁿ, this implies that column space A = span {Aw1, Aw2, . . . , Awr}. These vectors are also linearly independent, because if
k1Aw1 + k2Aw2 + ··· + krAwr = 0,
then w = k1w1 + ··· + krwr belongs to NS(A). As such it can be expressed as a linear combination of v1, v2, . . . , vq. But since span S and span T have only the zero vector in common, kj = 0 for j = 1, 2, . . . , r. Thus rank A = r.
(b) If A is nonsingular, then A⁻¹(Ax) = A⁻¹0, which implies that x = 0 and thus dim NS(A) = 0. If dim NS(A) = 0, then NS(A) = {0} and Ax = 0 has only the trivial solution, so A is nonsingular.
23. From Exercise 22, NS(BA) is the set of all vectors x such that BAx = 0. We first show that if x is in NS(BA), then x is in NS(A): if BAx = 0, then B⁻¹(BAx) = B⁻¹0 = 0, so Ax = 0, which implies that x is in NS(A). We next show that if x is in NS(A), then x is in NS(BA): if Ax = 0, then B(Ax) = B0 = 0, so (BA)x = 0. Hence x is in NS(BA). We conclude that NS(BA) = NS(A).
24. (a) 1.
(b) 2.
26. XYᵀ is the n × n matrix whose (i, j) entry is xiyj:
x1y1  x1y2  ···  x1yn
x2y1  x2y2  ···  x2yn
 ⋮
xny1  xny2  ···  xnyn
Each row of XYᵀ is a multiple of Yᵀ, hence rank XYᵀ = 1.
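A concrete check of Exercise 26 (the vectors X and Y below are hypothetical examples): every row of the outer product XYᵀ is a multiple of Yᵀ, so every 2 × 2 minor vanishes and the rank is 1.

```python
X = [2, -1, 3]          # hypothetical nonzero vectors in R^3
Y = [1, 4, 5]
M = [[x * y for y in Y] for x in X]   # the outer product X Y^T
# every row of M is the corresponding x_i times Y^T:
for x, row in zip(X, M):
    assert row == [x * y for y in Y]
# a sample 2x2 minor vanishes: (x_i y_j)(x_k y_l) - (x_i y_l)(x_k y_j) = 0
assert M[0][0] * M[1][1] - M[0][1] * M[1][0] == 0
```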
27. Let x be nonzero. Then Ax ≠ λx, so Ax − λx = (A − λIn)x ≠ 0. That is, there is no nonzero solution to the homogeneous system with square coefficient matrix A − λIn. Hence the only solution to the homogeneous system with coefficient matrix A − λIn is the zero solution, which implies that A − λIn is nonsingular.
28. Assume rank A < n. Then the columns of A are linearly dependent, hence there exists x in Rⁿ such that x ≠ 0 and Ax = 0. But then AᵀAx = 0, which implies that the homogeneous linear system with coefficient matrix AᵀA has a nontrivial solution. This contradicts the fact that AᵀA is nonsingular; hence the columns of A must be linearly independent, that is, rank A = n.
29. (a) Counterexample: A = [1 0; 0 0], B = [0 0; 0 1]. Then rank A = rank B = 1, but A + B = I2, so rank(A + B) = 2.
(b) Counterexample: A = [1 9; 7 1], B = −A. Then rank A = rank B = 2, but A + B = O, so rank(A + B) = 0.
(c) For A and B as in part (b), rank(A + B) = 0 ≠ rank A + rank B = 2 + 2 = 4.
30. Linearly dependent. Since v1, v2, . . . , vk are linearly dependent in Rⁿ, we have
c1v1 + c2v2 + ··· + ckvk = 0,
where c1, c2, . . . , ck are not all zero. Then
A(c1v1 + c2v2 + ··· + ckvk) = A0 = 0, so
c1(Av1) + c2(Av2) + ··· + ck(Avk) = 0,
so Av1, Av2, . . . , Avk are linearly dependent.
31. Suppose that the linear system Ax = b has at most one solution for every m × 1 matrix b. Since Ax = 0 always has the trivial solution, Ax = 0 has only the trivial solution. Conversely, suppose that Ax = 0 has only the trivial solution. Then nullity A = 0, so by Theorem 4.19, rank A = n. Thus dim column space A = n, so the n columns of A, which span its column space, form a basis for the column space. If b is an m × 1 matrix, then b is a vector in Rᵐ. If b is in the column space of A, then b can be written as a linear combination of the columns of A in one and only one way; that is, Ax = b has exactly one solution. If b is not in the column space of A, then Ax = b has no solution. Thus Ax = b has at most one solution.
32. Suppose Ax = b has at most one solution for every m × 1 matrix b. Then by Exercise 31, the associated homogeneous system Ax = 0 has only the trivial solution; that is, nullity A = 0. Then rank A = n − nullity A = n, so the columns of A are linearly independent. Conversely, if the columns of A are linearly independent, then rank A = n, so nullity A = 0. This implies that the associated homogeneous system Ax = 0 has only the trivial solution. Hence, by Exercise 31, Ax = b has at most one solution for every m × 1 matrix b.
33. Let A be an m × n matrix whose rank is k. Then the dimension of the solution space of the associated homogeneous system Ax = 0 is n − k, so the general solution to the homogeneous system has n − k arbitrary parameters. As we noted at the end of Section 4.7, every solution x to the nonhomogeneous system Ax = b can be written as xp + xh, where xp is a particular solution to the given nonhomogeneous system and xh is a solution to the associated homogeneous system Ax = 0. Hence the general solution to the given nonhomogeneous system has n − k arbitrary parameters.
34. Let u = w1 + w2 and v = w1′ + w2′ be in W, where w1 and w1′ are in W1 and w2 and w2′ are in W2. Then u + v = w1 + w2 + w1′ + w2′ = (w1 + w1′) + (w2 + w2′). Since w1 + w1′ is in W1 and w2 + w2′ is in W2, we conclude that u + v is in W. Also, if c is a scalar, then cu = cw1 + cw2, and since cw1 is in W1 and cw2 is in W2, we conclude that cu is in W.
35. Since V = W1 + W2, every vector v in V can be written as w1 + w2, w1 in W1 and w2 in W2. Suppose now that v = w1 + w2 and v = w1′ + w2′. Then w1 + w2 = w1′ + w2′, so
w1 − w1′ = w2′ − w2.   (∗)
Since w1 − w1′ is in W1 and w2′ − w2 is in W2, w1 − w1′ is in W1 ∩ W2 = {0}. Hence w1 = w1′. Similarly, or from (∗), we conclude that w2 = w2′.
36. W must be closed under vector addition and under multiplication of a vector by an arbitrary scalar. Thus, along with v1, v2, . . . , vk, W must contain a1v1 + a2v2 + ··· + akvk for any set of coefficients a1, a2, . . . , ak. Thus W contains span S.
Chapter Review for Chapter 4, p. 288
True or False
1. True. 2. True. 3. False. 4. False. 5. True.
7. True. 8. True. 9. True. 10. False. 11. False.
13. False. 14. True. 15. True. 16. True. 17. True.
19. False. 20. False. 21. True. 22. True.
Quiz
1. No. Property 1 in Definition 4.4 is not satisfied.
2. No. Properties 5–8 in Definition 4.4 are not satisfied.
3. Yes.
4. No. Property (b) in Theorem 4.3 is not satisfied.
5. If p(t) and q(t) are in W and c is any scalar, then
(p + q)(0) = p(0) + q(0) = 0 + 0 = 0 and (cp)(0) = cp(0) = c · 0 = 0.
Hence p + q and cp are in W. Therefore W is a subspace of P2. Basis = {t², t}.
6. No. S is linearly dependent.
7. {(1, 2, 0), (3, 0, 1), (1, 0, 0)}.
6. False. 12. True. 18. True.
8. {(1, 0, 1, 0), (1, 1, 0, 1)}.
9. {[1 0 2], [0 1 2]}.
10. Dimension of null space = n − rank A = 3 − 2 = 1.
1
1
3
7
1
6
4
and xh = r
11. xp =
3 , where r is any number.
0
2
0
1
12. c = 2.
Chapter 5
Inner Product Spaces
Section 5.1, p. 297
2. (a) 2.
(b) 26.
(c) 21.
4. (a) 3 3.
(b) 3 3.
6. (a) 155.
(b) 3.
8. c = 3.
10. (a)
12.
32
.
14 77
2
(b) .
25
35.
13. (a) If u = (a1, a2, a3), then u · u = a1² + a2² + a3² > 0 if not all a1, a2, a3 = 0. u · u = 0 if and only if u = 0.
(b) If u = (a1, a2, a3) and v = (b1, b2, b3), then u · v = v · u = a1b1 + a2b2 + a3b3.
(c) We have u + v = (a1 + b1, a2 + b2, a3 + b3). Then if w = (c1, c2, c3),
(u + v) · w = (a1 + b1)c1 + (a2 + b2)c2 + (a3 + b3)c3
= (a1c1 + b1c1) + (a2c2 + b2c2) + (a3c3 + b3c3)
= (a1c1 + a2c2 + a3c3) + (b1c1 + b2c2 + b3c3)
= u · w + v · w.
(d) cu · v = (ca1)b1 + (ca2)b2 + (ca3)b3 = c(a1b1 + a2b2 + a3b3) = c(u · v).
14. u · u = 14, u · v = v · u = 15, (u + v) · w = 6, u · w = 0, v · w = 6.
15. (a) ‖(1, 0)‖ = 1; ‖(0, 1)‖ = 1.
(b) (1, 0) · (0, 1) = 0.
16. (a) ‖(1, 0, 0, 0)‖ = 1, etc.
(b) (1, 0, 0, 0) · (0, 1, 0, 0) = 0, etc.
18. (a) v1 and v2 ; v1 and v3 ; v1 and v4 ; v1 and v6 ; v2 and v3 ; v2 and v5 ; v2 and v6 ; v3 and v5 ; v4 and
v5 ; v5 and v6 .
(b) v1 and v5 .
(c) v3 and v6 .
20. x = 3 + 0t, y = 1 + t, z = 3 − 5t.
22. The plane heads at 260 km/hr against a 100 km/hr wind; the resultant speed is 240 km/hr.
24. c = 2.
26. Possible answer: a = 1, b = 0, c = 1.
28. c = 4/5.
29. If u and v are parallel, then v = ku, so
cos θ = (u · v)/(‖u‖ ‖v‖) = (u · ku)/(‖u‖ ‖ku‖) = k‖u‖²/(|k| ‖u‖²) = ±1.
30. Let v = (a, b, c) be a vector in R³ that is orthogonal to every vector in R³. Then v · i = 0 gives a = 0. Similarly, v · j = 0 and v · k = 0 imply that b = c = 0.
31. Every vector in span {w, x} is of the form aw + bx. Then v · (aw + bx) = a(v · w) + b(v · x) = a(0) + b(0) = 0.
32. Let v1 and v2 be in V, so that u · v1 = 0 and u · v2 = 0. Let c be a scalar. Then u · (v1 + v2) = u · v1 + u · v2 = 0 + 0 = 0, so v1 + v2 is in V. Also, u · (cv1) = c(u · v1) = c(0) = 0, so cv1 is in V.
33. ‖cx‖ = √((cx)² + (cy)²) = |c| √(x² + y²) = |c| ‖x‖.
34. u = (1/‖x‖) x, so ‖u‖ = (1/‖x‖) ‖x‖ = 1.
35. Let a1v1 + a2v2 + a3v3 = 0. Then (a1v1 + a2v2 + a3v3) · vi = 0 · vi = 0 for i = 1, 2, 3. Thus ai(vi · vi) = 0. Since vi · vi ≠ 0, we can conclude that ai = 0 for i = 1, 2, 3.
36. We have by Theorem 5.1,
u (v + w) = (v + w) u = v u + w u = u v + u w.
37. (a) (u + cv) · w = u · w + (cv) · w = u · w + c(v · w).
(b) u · (cv) = cv · u = c(v · u) = c(u · v).
(c) (u + v) · cw = u · (cw) + v · (cw) = c(u · w) + c(v · w).
38. Taking the rectangle as suggested, the length of each diagonal is √(a² + b²).
39. Let the vertices of an isosceles triangle be denoted by A, B, C. We show that the cosines of the angle between sides CA and AB and of the angle between sides AC and CB are the same. To simplify the expressions involved, let A(0, 0), B(c/2, b), and C(c, 0). (The perpendicular from B to side AC bisects it; hence we have the form of a general isosceles triangle.) Let
v = vector from A to B = (c/2, b)
w = vector from A to C = (c, 0)
u = vector from C to B = (−c/2, b).
Let θ1 be the angle between v and w; then
cos θ1 = (v · w)/(‖v‖ ‖w‖) = (c²/2) / (√(c²/4 + b²) · c).
Let θ2 be the angle between −w and u; then
cos θ2 = (−w · u)/(‖w‖ ‖u‖) = (c²/2) / (√(c²/4 + b²) · c).
Hence cos θ1 = cos θ2 implies that θ1 = θ2, since an angle between vectors lies between 0 and π radians.
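The equality of the two base-angle cosines can be verified numerically for a particular isosceles triangle (the values of b and c below are arbitrary assumptions):

```python
from math import sqrt

b, c = 3.0, 4.0                 # hypothetical triangle: A(0,0), B(c/2,b), C(c,0)
v = (c / 2, b)                  # vector from A to B
w = (c, 0.0)                    # vector from A to C
u = (-c / 2, b)                 # vector from C to B

def cos_between(p, q):
    dot = p[0] * q[0] + p[1] * q[1]
    return dot / (sqrt(p[0] ** 2 + p[1] ** 2) * sqrt(q[0] ** 2 + q[1] ** 2))

cos1 = cos_between(v, w)                  # angle at vertex A
cos2 = cos_between((-w[0], -w[1]), u)     # angle at vertex C
assert abs(cos1 - cos2) < 1e-12           # the base angles are equal
```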
40. Let the vertices of a parallelogram be denoted A, B, C, D as shown in the figure. We assign coordinates to the vertices so that the lengths of opposite sides are equal: let A(0, 0), B(t, h), C(s + t, h), D(s, 0). Then the vectors corresponding to the diagonals are
v = vector from A to C = (s + t, h)
w = vector from B to D = (s − t, −h).
The parallelogram is a rhombus provided all sides are equal; hence we have length(AB) = length(AD). It follows that length(AD) = s and length(AB) = √(t² + h²), thus s = √(t² + h²). To show that the diagonals are orthogonal we show v · w = 0:
v · w = (s + t)(s − t) − h² = s² − t² − h² = s² − (t² + h²) = s² − s² = 0 (since s = √(t² + h²)).
Conversely, we next show that if the diagonals of a parallelogram are orthogonal then the parallelogram is a rhombus. Since the diagonals are orthogonal we have v · w = s² − (t² + h²) = 0. But then it follows that s = √(t² + h²), so length(AB) = √(t² + h²) = s = length(AD).
Section 5.2, p. 306
2. (a) 4i + 4j + 4k (b) 3i 8j k
(c) 0i + 0j + 0k (d) 4i + 4j + 8k.
4. (a) u × v = (u2v3 − u3v2)i + (u3v1 − u1v3)j + (u1v2 − u2v1)k
v × u = (u3v2 − u2v3)i + (v3u1 − v1u3)j + (v1u2 − v2u1)k = −(u × v)
(b) u × (v + w) = [u2(v3 + w3) − u3(v2 + w2)]i
+ [u3(v1 + w1) − u1(v3 + w3)]j
+ [u1(v2 + w2) − u2(v1 + w1)]k
= (u2v3 − u3v2)i + (u3v1 − u1v3)j + (u1v2 − u2v1)k
+ (u2w3 − u3w2)i + (u3w1 − u1w3)j + (u1w2 − u2w1)k
= u × v + u × w
(c) Similar to the proof for (b).
(d) c(u × v) = c[(u2v3 − u3v2)i + (u3v1 − u1v3)j + (u1v2 − u2v1)k]
= (cu2v3 − cu3v2)i + (cu3v1 − cu1v3)j + (cu1v2 − cu2v1)k
= (cu) × v.
Similarly, c(u × v) = u × (cv).
(e) u × u = (u2u3 − u3u2)i + (u3u1 − u1u3)j + (u1u2 − u2u1)k = 0.
(f) 0 × u = (0u3 − u3 · 0)i + (0u1 − u1 · 0)j + (0u2 − u2 · 0)k = 0.
(g) u × (v × w) = [u1i + u2j + u3k] × [(v2w3 − v3w2)i + (v3w1 − v1w3)j + (v1w2 − v2w1)k]
= [u2(v1w2 − v2w1) − u3(v3w1 − v1w3)]i
+ [u3(v2w3 − v3w2) − u1(v1w2 − v2w1)]j
+ [u1(v3w1 − v1w3) − u2(v2w3 − v3w2)]k.
On the other hand,
(u · w)v − (u · v)w = (u1w1 + u2w2 + u3w3)[v1i + v2j + v3k] − (u1v1 + u2v2 + u3v3)[w1i + w2j + w3k].
Expanding and simplifying the expression for u × (v × w) shows that it is equal to that for (u · w)v − (u · v)w.
(h) Similar to the proof for (g).
6. (a) (15i − 2j + 9k) · u = 0; (15i − 2j + 9k) · v = 0.
(b) (3i + 3j + 3k) · u = 0; (3i + 3j + 3k) · v = 0.
(c) (7i + 5j − k) · u = 0; (7i + 5j − k) · v = 0.
(d) 0 · u = 0; 0 · v = 0.
7. Let u = u1i + u2j + u3k, v = v1i + v2j + v3k, and w = w1i + w2j + w3k. Then
(u × v) · w = [(u2v3 − u3v2)i + (u3v1 − u1v3)j + (u1v2 − u2v1)k] · w
= (u2v3 − u3v2)w1 + (u3v1 − u1v3)w2 + (u1v2 − u2v1)w3
(expand and collect terms containing ui):
= u1(v2w3 − v3w2) + u2(v3w1 − v1w3) + u3(v1w2 − v2w1)
= u · (v × w).
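The resulting identity (u × v) · w = u · (v × w) is easy to spot-check numerically (hypothetical integer vectors, plain Python):

```python
def cross(u, v):
    # component formula for the cross product in R^3
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v, w = (1, 2, 3), (4, 5, 6), (7, 8, 10)   # hypothetical vectors
assert dot(cross(u, v), w) == dot(u, cross(v, w))
```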
8. (a) u · v = 3 = ‖u‖ ‖v‖ cos θ = √29 √11 cos θ, so cos θ = 3/√319 and sin θ = √310/√319. So
‖u‖ ‖v‖ sin θ = √29 √11 · √310/√319 = √310 = ‖u × v‖.
(b) u · v = 1 = ‖u‖ ‖v‖ cos θ = √2 √14 cos θ, so cos θ = 1/√28 and sin θ = √27/√28. So
‖u‖ ‖v‖ sin θ = √2 √14 · √27/√28 = √27 = ‖u × v‖.
(c) u · v = 9 = ‖u‖ ‖v‖ cos θ = √6 √26 cos θ, so cos θ = 9/√156 and sin θ = √75/√156. So
‖u‖ ‖v‖ sin θ = √6 √26 · √75/√156 = √75 = ‖u × v‖.
(d) u · v = 12 = ‖u‖ ‖v‖ cos θ = √6 √24 cos θ, so cos θ = 1 and sin θ = 0. So u × v = 0 and
‖u‖ ‖v‖ sin θ = 0 = ‖u × v‖.
9. If v = cu for some c, then u × v = c(u × u) = 0. Conversely, if u × v = 0, the area of the parallelogram with adjacent sides u and v is 0, and hence that parallelogram is degenerate: u and v are parallel.
10. ‖u × v‖² + (u · v)² = ‖u‖² ‖v‖² (sin² θ + cos² θ) = ‖u‖² ‖v‖².
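This is the Lagrange identity; a quick numeric check (hypothetical vectors, plain Python):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (1, 2, 2), (3, 0, 4)                      # hypothetical vectors
lhs = dot(cross(u, v), cross(u, v)) + dot(u, v) ** 2   # ||u x v||^2 + (u . v)^2
rhs = dot(u, u) * dot(v, v)                             # ||u||^2 ||v||^2
assert lhs == rhs
```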
11. Using property (h) of the cross product,
(u × v) × w + (v × w) × u + (w × u) × v
= [(w · u)v − (w · v)u] + [(u · v)w − (u · w)v] + [(v · w)u − (v · u)w] = 0.
12. ½ √478.
14. √150.
16. 39.
18. (a) 3x − 2y + 4z + 16 = 0;
20. (a) x =
8
13
(b) y − 3z + 3 = 0.
+ 23t, y = 27 + 2t, z = 0 + 13t;
16
22. 17 , 38 , 6 .
5
5
(b) x = 0 + 7t, y = 8 + 22t, z = 4 + 13t.
24. (a) Not all of a, b and c are zero. Assume that a ≠ 0. Then write the given equation ax + by + cz + d = 0 as a(x + d/a) + by + cz = 0. This is the equation of the plane passing through the point (−d/a, 0, 0) and having the vector v = ai + bj + ck as normal. If a = 0, then either b ≠ 0 or c ≠ 0, and the above argument can be readily modified to handle this case.
(b) Let u = (x1 , y1 , z1 ) and v = (x2 , y2 , z2 ) satisfy the equation of the plane. Then show that u + v
and cu satisfy the equation of the plane for any scalar c.
1
0
(c) Possible answer: 0 , 1 .
3
1
2
4
26. u × v = (u2v3 − u3v2)i + (u3v1 − u1v3)j + (u1v2 − u2v1)k. Then
(u × v) · w = (u2v3 − u3v2)w1 + (u3v1 − u1v3)w2 + (u1v2 − u2v1)w3,
which is the cofactor expansion of the determinant
| u1 u2 u3 |
| v1 v2 v3 |
| w1 w2 w3 |.
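A numeric spot-check that the scalar triple product equals the 3 × 3 determinant (hypothetical vectors):

```python
def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v, w = (2, 1, 0), (1, 3, 4), (5, 0, 6)   # hypothetical vectors
uv = cross(u, v)
triple = uv[0] * w[0] + uv[1] * w[1] + uv[2] * w[2]   # (u x v) . w
assert triple == det3([u, v, w])
```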
28. Computing the determinant we have
xy1 + yx2 + x1y2 − x2y1 − y2x − x1y = 0.
Collecting terms and factoring we obtain
x(y1 − y2) − y(x1 − x2) + (x1y2 − x2y1) = 0.
Solving for y we have
y = ((y2 − y1)/(x2 − x1)) x − (x1y2 − x2y1)/(x2 − x1)
  = ((y2 − y1)/(x2 − x1)) x − (x1y2 − y2x2 + y2x2 − x2y1)/(x2 − x1)
  = ((y2 − y1)/(x2 − x1)) x − (y2(x1 − x2) + x2(y2 − y1))/(x2 − x1)
  = ((y2 − y1)/(x2 − x1)) (x − x2) + y2,
which is the two-point form of the equation of a straight line that goes through the points (x1, y1) and (x2, y2). Now, three points are collinear provided that they are on the same line. Hence a point (x0, y0) is collinear with (x1, y1) and (x2, y2) if it satisfies the equation in (6.1); that is equivalent to saying that (x0, y0) is collinear with (x1, y1) and (x2, y2) provided
| x0 y0 1 |
| x1 y1 1 |
| x2 y2 1 | = 0.
29. Using the row operations −r1 + r2 → r2, −r1 + r3 → r3, and −r1 + r4 → r4, we have
0 = | x  y  z  1 ; x1 y1 z1 1 ; x2 y2 z2 1 ; x3 y3 z3 1 |
  = | x  y  z  1 ; x1 − x  y1 − y  z1 − z  0 ; x2 − x  y2 − y  z2 − z  0 ; x3 − x  y3 − y  z3 − z  0 |
  = (−1) | x1 − x  y1 − y  z1 − z ; x2 − x  y2 − y  z2 − z ; x3 − x  y3 − y  z3 − z |.
Using the row operations −r1 → r1, r1 + r2 → r2, and r1 + r3 → r3, we have
0 = | x − x1  y − y1  z − z1 ; x2 − x1  y2 − y1  z2 − z1 ; x3 − x1  y3 − y1  z3 − z1 |
  = (x − x1)[(y2 − y1)(z3 − z1) − (y3 − y1)(z2 − z1)]
  + (y − y1)[(z2 − z1)(x3 − x1) − (z3 − z1)(x2 − x1)]
  + (z − z1)[(x2 − x1)(y3 − y1) − (x3 − x1)(y2 − y1)].
This is a linear equation of the form Ax + By + Cz + D = 0 and hence represents a plane. If we replace (x, y, z) in the original expression by (xi, yi, zi), i = 1, 2, or 3, the determinant is zero; hence the plane passes through Pi, i = 1, 2, 3.
Section 5.3, p. 317
1. Similar to the proof of Theorem 5.1 (Exercise 13, Section 5.1).
2. (b) (v, u) = a1b1 − a2b1 − a1b2 + 3a2b2 = (u, v).
(c) (u + v, w) = (a1 + b1)c1 − (a2 + b2)c1 − (a1 + b1)c2 + 3(a2 + b2)c2
= (a1c1 − a2c1 − a1c2 + 3a2c2) + (b1c1 − b2c1 − b1c2 + 3b2c2)
= (u, w) + (v, w).
(d) (cu, v) = (ca1)b1 − (ca2)b1 − (ca1)b2 + 3(ca2)b2 = c(a1b1 − a2b1 − a1b2 + 3a2b2) = c(u, v).
3. (a) If A = [aij], then (A, A) = Tr(AᵀA) = Σⱼ Σᵢ aij² ≥ 0. Also, (A, A) = 0 if and only if every aij = 0, that is, if and only if A = O.
(b) If B = [bij], then (A, B) = Tr(BᵀA) and (B, A) = Tr(AᵀB). Now
Tr(BᵀA) = Σᵢ Σₖ (Bᵀ)ik aki = Σᵢ Σₖ bki aki
and
Tr(AᵀB) = Σᵢ Σₖ (Aᵀ)ik bki = Σᵢ Σₖ aki bki,
so (A, B) = (B, A).
(c) If C = [cij], then (A + B, C) = Tr[Cᵀ(A + B)] = Tr[CᵀA + CᵀB] = Tr(CᵀA) + Tr(CᵀB) = (A, C) + (B, C).
(d) (cA, B) = Tr(Bᵀ(cA)) = c Tr(BᵀA) = c(A, B).
5. Let u = (u1, u2), v = (v1, v2), and w = (w1, w2) be vectors in R² and let c be a scalar. We define
(u, v) = u1v1 − u2v1 − u1v2 + 5u2v2.
(a) Suppose u is not the zero vector. Then one of u1 and u2 is not zero. Hence
(u, u) = u1u1 − u2u1 − u1u2 + 5u2u2 = (u1 − u2)² + 4u2² > 0.
If (u, u) = 0, then (u1 − u2)² + 4u2² = 0, which implies that u1 = u2 = 0, hence u = 0. If u = 0, then u1 = u2 = 0 and (u, u) = 0.
(b) (u, v) = u1v1 − u2v1 − u1v2 + 5u2v2 = v1u1 − v2u1 − v1u2 + 5v2u2 = (v, u).
(c) (u + v, w) = (u1 + v1)w1 − (u2 + v2)w1 − (u1 + v1)w2 + 5(u2 + v2)w2
= (u1w1 − u2w1 − u1w2 + 5u2w2) + (v1w1 − v2w1 − v1w2 + 5v2w2)
= (u, w) + (v, w).
(d) (cu, v) = (cu1)v1 − (cu2)v1 − (cu1)v2 + 5(cu2)v2 = c(u1v1 − u2v1 − u1v2 + 5u2v2) = c(u, v).
6. (a) (p(t), p(t)) = ∫₀¹ p(t)² dt ≥ 0. Since p(t) is continuous, ∫₀¹ p(t)² dt = 0 if and only if p(t) = 0.
(b) (p(t), q(t)) = ∫₀¹ p(t)q(t) dt = ∫₀¹ q(t)p(t) dt = (q(t), p(t)).
(c) (p(t) + q(t), r(t)) = ∫₀¹ (p(t) + q(t))r(t) dt = ∫₀¹ p(t)r(t) dt + ∫₀¹ q(t)r(t) dt = (p(t), r(t)) + (q(t), r(t)).
(d) (cp(t), q(t)) = ∫₀¹ (cp(t))q(t) dt = c ∫₀¹ p(t)q(t) dt = c(p(t), q(t)).
7. (a) 0 + 0 = 0, so (0, 0) = (0, 0 + 0) = (0, 0) + (0, 0), and then (0, 0) = 0. Hence ‖0‖ = √(0, 0) = √0 = 0.
(b) (u, 0) = (u, 0 + 0) = (u, 0) + (u, 0), so (u, 0) = 0.
(c) If (u, v) = 0 for all v in V, then (u, u) = 0, so u = 0.
(d) If (u, w) = (v, w) for all w in V, then (u − v, w) = 0 for all w; taking w = u − v gives u = v.
(e) If (w, u) = (w, v) for all w in V, then (w, u − v) = 0, or (u − v, w) = 0, for all w in V. Then u = v.
8. (a) 7.
10. (a)
12. (a)
13
6.
(c) 9.
(b) 0.
22.
14. (a) 1 .
2
(b) 3.
(b)
(c) 4.
(b) 1.
18.
(c) 1.
(c) 4 3.
7
16. ‖u + v‖² = (u + v, u + v) = (u, u) + 2(u, v) + (v, v) = ‖u‖² + 2(u, v) + ‖v‖², and ‖u − v‖² = (u, u) − 2(u, v) + (v, v) = ‖u‖² − 2(u, v) + ‖v‖². Hence
‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖².
17. ‖cu‖ = √(cu, cu) = √(c²(u, u)) = |c| ‖u‖.
18. For Example 3: [a1b1 − a2b1 − a1b2 + 3a2b2]² ≤ [(a1 − a2)² + 2a2²][(b1 − b2)² + 2b2²].
For Exercise 3: [Tr(BᵀA)]² ≤ Tr(AᵀA) Tr(BᵀB).
For Example 5: [a1b1 − a2b1 − a1b2 + 5a2b2]² ≤ [a1² − 2a1a2 + 5a2²][b1² − 2b1b2 + 5b2²].
19. ‖u + v‖² = (u + v, u + v) = (u, u) + 2(u, v) + (v, v) = ‖u‖² + 2(u, v) + ‖v‖². Thus ‖u + v‖² = ‖u‖² + ‖v‖² if and only if (u, v) = 0.
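Exercises 16 and 19 are both instances of expanding (u ± v, u ± v); a quick check of both with the standard inner product on R³ (the vectors below are hypothetical examples):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (1, 2, -1), (3, 0, 5)    # hypothetical vectors
add = [a + b for a, b in zip(u, v)]
sub = [a - b for a, b in zip(u, v)]
# parallelogram law (Exercise 16): ||u+v||^2 + ||u-v||^2 = 2||u||^2 + 2||v||^2
assert dot(add, add) + dot(sub, sub) == 2 * dot(u, u) + 2 * dot(v, v)
# Pythagorean case (Exercise 19): equality holds exactly when (u, v) = 0
p, q = (1, 0, 0), (0, 2, 3)
pq = [a + b for a, b in zip(p, q)]
assert dot(p, q) == 0 and dot(pq, pq) == dot(p, p) + dot(q, q)
```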
20. 3.
21. ¼‖u + v‖² − ¼‖u − v‖² = ¼(u + v, u + v) − ¼(u − v, u − v)
= ¼[(u, u) + 2(u, v) + (v, v)] − ¼[(u, u) − 2(u, v) + (v, v)] = (u, v).
22. The vectors in (b) are orthogonal.
23. Let W be the set of all vectors in V orthogonal to u. Let v and w be vectors in W so that (u, v) = 0
and (u, w) = 0. Then (u, rv + sw) = r(u, v) + s(u, w) = r(0) + s(0) = 0 for any scalars r and s.
24. Example 3: Let S be the natural basis for R²; C = [1 −1; −1 3].
Example 5: Let S be the natural basis for R²; C = [1 −1; −1 5].
26. (a) d(u, v) = ‖v − u‖ ≥ 0.
(b) d(u, v) = ‖v − u‖ = √(v − u, v − u) = 0 if and only if v − u = 0.
(c) d(u, v) = ‖v − u‖ = ‖u − v‖ = d(v, u).
(d) We have v − u = (w − u) + (v − w) and ‖v − u‖ ≤ ‖w − u‖ + ‖v − w‖, so d(u, v) ≤ d(u, w) + d(w, v).
23
(b) 3.
28. (a) 110 .
30. Orthogonal: (a).
Orthonormal: (c).
32. 3a = 5b.
34. a = b = 0.
36. (a) 5a = 3b.
(b) b = 2a(cos 1 − 1) / (e(sin 1 − cos 1 + 1)).
37. We must verify Definition 5.2 for
(v, w) = Σᵢ Σⱼ ai cij bj = [v]Sᵀ C [w]S.
We choose to use the matrix formulation of this inner product, which appears in Equation (1), since we can then use matrix algebra to verify the parts of Definition 5.2.
(a) (v, v) = [v]Sᵀ C [v]S > 0 whenever [v]S ≠ 0, since C is positive definite. (v, v) = 0 if and only if [v]S = 0, since C is positive definite. But [v]S = 0 is true if and only if v = 0.
(b) (v, w) = [v]Sᵀ C [w]S is a real number, so it is equal to its transpose. That is,
(v, w) = [v]Sᵀ C [w]S = ([v]Sᵀ C [w]S)ᵀ = [w]Sᵀ Cᵀ [v]S = [w]Sᵀ C [v]S (since C is symmetric) = (w, v).
(c) (u + v, w) = ([u]S + [v]S)ᵀ C [w]S = [u]Sᵀ C [w]S + [v]Sᵀ C [w]S = (u, w) + (v, w).
(d) (kv, w) = (k[v]S)ᵀ C [w]S = k([v]Sᵀ C [w]S) = k(v, w) (by properties of matrix algebra).
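The matrix form (v, w) = [v]Sᵀ C [w]S can be spot-checked with C = [1 −1; −1 3], the matrix found for Example 3 in Exercise 24 (the coordinate vectors u and v below are arbitrary assumptions):

```python
C = [[1, -1], [-1, 3]]     # matrix of Example 3's inner product (from Exercise 24)

def ip(v, w):
    # (v, w) = v^T C w
    return sum(v[i] * C[i][j] * w[j] for i in range(2) for j in range(2))

u, v = (2, 5), (-1, 4)     # hypothetical coordinate vectors
# matches the formula (u, v) = a1 b1 - a2 b1 - a1 b2 + 3 a2 b2
assert ip(u, v) == u[0]*v[0] - u[1]*v[0] - u[0]*v[1] + 3*u[1]*v[1]
assert ip(u, v) == ip(v, u)                           # symmetry, since C = C^T
assert ip(u, u) == (u[0] - u[1])**2 + 2 * u[1]**2     # positivity: (a1 - a2)^2 + 2 a2^2
assert ip(u, u) > 0
```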
38. From Equation (3) it follows that (Au, B v) = (u, AT B v).
39. If u = (a1, a2, . . . , an) and v = (b1, b2, . . . , bn) are in Rⁿ, then
(u, v) = a1b1 + a2b2 + ··· + anbn = [a1 a2 ··· an][b1 b2 ··· bn]ᵀ = uᵀv,
the product of the row matrix uᵀ and the column matrix v.
40. (a) If v1 and v2 lie in W and c is a real number, then ((v1 + v2), ui) = (v1, ui) + (v2, ui) = 0 + 0 = 0 for i = 1, 2. Thus v1 + v2 lies in W. Also, (cv1, ui) = c(v1, ui) = c · 0 = 0 for i = 1, 2. Thus cv1 lies in W.
0
1
0 1
(b) Possible answer: , .
0
1
1
0
41. Let S = {w1, w2, . . . , wk}. If u is in span S, then
u = c1w1 + c2w2 + ··· + ckwk.
Let v be orthogonal to w1, w2, . . . , wk. Then
(v, u) = (v, c1w1 + c2w2 + ··· + ckwk) = c1(v, w1) + c2(v, w2) + ··· + ck(v, wk) = c1(0) + c2(0) + ··· + ck(0) = 0.
42. Since {v1, v2, . . . , vn} is an orthonormal set, by Theorem 5.4 it is linearly independent. Hence A is nonsingular. Since S is orthonormal,
(vi, vj) = 1 if i = j and 0 if i ≠ j.
This can be written in terms of matrices as
viᵀvj = 1 if i = j and 0 if i ≠ j,
or as AAᵀ = In. Then A⁻¹ = Aᵀ. Examples of such matrices:
A = [1/√2  1/√2; 1/√2  −1/√2],
A = [1 0 0; 0 1 0; 0 0 1],
A = [1/√3  1/√2  1/√6; 1/√3  −1/√2  1/√6; 1/√3  0  −2/√6].
43. Since some of the vectors vj can be zero, A can be singular.
44. Suppose that A is nonsingular. Let x be a nonzero vector in Rⁿ. Consider xᵀ(AᵀA)x. We have xᵀ(AᵀA)x = (Ax)ᵀ(Ax). Let y = Ax. Then we note that xᵀ(AᵀA)x = yᵀy, which is positive if y ≠ 0. If y = 0, then Ax = 0, and since A is nonsingular we must have x = 0, a contradiction. Hence y ≠ 0.
45. Since C is positive definite, for any nonzero vector x in Rⁿ we have xᵀCx > 0. Multiply both sides of Cx = kx on the left by xᵀ to obtain xᵀCx = k xᵀx > 0. Since x ≠ 0, xᵀx > 0, so k > 0.
46. Let C be positive definite. Using the natural basis {e1, e2, . . . , en} for Rⁿ, we find that eiᵀCei = cii, which must be positive, since C is positive definite.
47. Let C be positive definite. Then if x is any nonzero vector in Rⁿ, we have xᵀCx > 0. Now let r = −5. Then xᵀ(rC)x < 0. Hence rC need not be positive definite.
48. Let B and C be positive denite matrices. Then if x is any nonzero vector in Rn , we have xT B x > 0
and xT C x > 0. Now xT (B + C )x = xT B x + xT C x > 0, so B + C is positive denite.
49. By Exercise 48, S is closed under addition, but by Exercise 47 it is not closed under scalar multiplication. Hence S is not a subspace of Mnn.
Section 5.4, p. 329
1
5
54
2
2.
(b) 0 ,
54
1
5
5
1
2. 0 , 2 .
1
5
0
1
4. 1 , 1 .
1
1
2
54
1
(t + 1), 7 (9t 5) .
et 3t
8.
.
3 t,
e2
7
2
2
3
7
6.
1
2
0
1
1
1 , 1 , 2 1 .
10.
2
6
3
1
1
1
12. Possible answer:
1
3
1
3
14.
1
2
1
2
1
0 0 , 6
16.
1
2
1
2
0 0,
1
3
1
6
1
3
1
3
0
1
3
1
6
,
2
6
,
2
6
1
6
1
12
1
12
0 , 1
42
3
12
1
12
.
1
42
2
42
6
42
.
4
1 5 .
18.
42
1
19. Let v = c1u1 + c2u2 + ··· + cnun. Then
(v, ui) = (c1u1 + c2u2 + ··· + cnun, ui) = c1(u1, ui) + c2(u2, ui) + ··· + cn(un, ui) = ci,
since (uj, ui) = 1 if j = i and 0 otherwise.
1
2
20. (a)
(b) u =
7
2
1
2 0 ,
1
6
1
2 0 +
1
2
2
6
1
6
9
6
1
6
.
1
6
2
6 .
21. Let T = {u1, u2, . . . , un} be an orthonormal basis for an inner product space V. If [v]T = (a1, a2, . . . , an), then v = a1u1 + a2u2 + ··· + anun. Since (ui, uj) = 0 if i ≠ j and 1 if i = j, we conclude that
‖v‖ = √(v, v) = √(a1² + a2² + ··· + an²).
22. (a) √14.
(b) {(1/√5)(0, 1, 2), (1/3)(2, 2, −1)}.
(c) [v]T = (√5, 3), so ‖v‖ = √(5 + 9) = √14.
24. {1, √12 (t − 1/2), √180 (t² − t + 1/6)}.
25. (a) Verify that (ui, uj) = 1 if i = j and 0 if i ≠ j. Thus, if A = [0 0; 1 0] and B = [0 0; 0 1], then (A, B) = Tr(BᵀA) = Tr([0 0; 1 0]) = 0, and if A = [0 0; 1 0], then (A, A) = Tr(AᵀA) = Tr([1 0; 0 0]) = 1.
(b) [v]S = (1, 2, 3, 4).
00
11
1 1
1
1
, 2
, 2
.
01
00
0
0
1
2
5
2
0
5
4
3 = 0 3
1
28.
0 3
5
5
1
0
2
1
26.
5
5
2
1
0.8944 0.4082
5
6
5
5
5
1
2
2.2361 2.2361 .
30. (a) Q = 0.4472 0.8165 , R = 5
5
6
6
0
2.4495
0
0 0.4082
6
1
0 6
1
2
3
0 6
0.5774
0 0.8165
1
1
1
(b) Q =
0.5774 0.7071 0.4082 .
3
2
6
0.5774
0.7071 0.4082
1
1
1
2
6
3
3
1.7321
0
0
0 0
R=
0 8 2
0 2.8284
1.4142 .
0
0
6
0
0 2.4495
00
,
10
(c) Q =
2
5
1
5
0
5
R= 0
0
0.8944 0.4082 0.1826
2
2 0.4472
0.8165
0.3651
6
30
0
0.4082 0.9129
1
5
6
30
0
0
2.2361
0
0
7
0 2.4495 2.8577 .
6 6
0
0 0.9129
0 5
30
1
6 1
30
31. We have (u, cv) = c(u, v) = c(0) = 0.
32. If v is in span {u1, u2, . . . , un}, then v is a linear combination of u1, u2, . . . , un. Let v = a1u1 + a2u2 + ··· + anun. Then (u, v) = a1(u, u1) + a2(u, u2) + ··· + an(u, un) = 0, since (u, ui) = 0 for i = 1, 2, . . . , n.
33. Let W be the subset of vectors in Rn that are orthogonal to u. If v and w are in W then (u, v) =
(u, w) = 0. It follows that (u, v + w) = (u, v) + (u, w) = 0, and for any scalar c, (u, cv) = c(u, v) = 0,
so v + w and cv are in W . Hence, W is a subspace of Rn .
34. Let T = {v1 , v2 , . . . , vn } be a basis for Euclidean space V . Form the set Q = {u1 , . . . , uk , v1 , . . . , vn }.
None of the vectors in Q is the zero vector. Since Q contains more than n vectors, Q is a linearly
dependent set. Thus one of the vectors is not orthogonal to the preceding ones. (See Theorem 5.4.)
It cannot be one of the u's, so at least one of the v's is not orthogonal to the u's. Check v1 · uj ,
j = 1, . . . , k . If all these dot products are zero, then {u1 , . . . , uk , v1 } is an orthonormal set; otherwise
delete v1 . Proceed in a similar fashion with vi , i = 2, . . . , n, using the largest subset of Q that has been
found to be orthogonal so far. What remains will be a set of n orthogonal vectors since Q originally
contained a basis for V . In fact, the set will be orthonormal since each of the u's and v's originally
had length 1.
35. S = {v1 , v2 , . . . , vk } is an orthonormal basis for V . Hence dim V = k and

        (vi , vj ) = 1 if i = j , and 0 if i ≠ j .
    Let T = {a1 v1 , a2 v2 , . . . , ak vk } where each aj ≠ 0. To show that T is a basis we need only show that it
spans V and then use Theorem 4.12(b). Let v belong to V . Then there exist scalars ci , i = 1, 2, . . . , k
such that
v = c1 v1 + c2 v2 + + ck vk .
    Since each aj ≠ 0, we have

        v = (c1 /a1 )(a1 v1 ) + (c2 /a2 )(a2 v2 ) + · · · + (ck /ak )(ak vk ),

    so span T = V . Next we show that the members of T are orthogonal. Since S is orthogonal we have
        (ai vi , aj vj ) = ai aj (vi , vj ) = 0 if i ≠ j , and ai aj if i = j .
    Hence T is an orthogonal set. In order for T to be an orthonormal set we must have (ai vi , ai vi ) = ai^2 = 1
    for all i. This is only possible if each ai = ±1.
36. We have

        ui = vi + ((ui , v1 )/(v1 , v1 )) v1 + ((ui , v2 )/(v2 , v2 )) v2 + · · · + ((ui , vi−1 )/(vi−1 , vi−1 )) vi−1 .

    Then

        rii = (ui , wi ) = (vi , wi ) + ((ui , v1 )/(v1 , v1 )) (v1 , wi ) + ((ui , v2 )/(v2 , v2 )) (v2 , wi ) + · · ·
              + ((ui , vi−1 )/(vi−1 , vi−1 )) (vi−1 , wi ) = (vi , wi )

    because (vi , wj ) = 0 for i ≠ j . Moreover, wi = (1/∥vi ∥) vi , so (vi , wi ) = (1/∥vi ∥)(vi , vi ) = ∥vi ∥.
37. If A is an n × n nonsingular matrix, then the columns of A are linearly independent, so by Theorem
    5.8, A has a QR-factorization.
Section 5.5, p. 348
7
5
2. (a) 1
5
1
(b) W⊥ is the normal to the plane represented by W .
1 3
2
2
5 13
4 4
4.
,
1
0
0
1
54
2t
6.
10 3
3t
+ t2 , 10t4 10t3 + t, 45t4 40t3 + 1
4 2
5 4
3
3
3
3
,
10
01
8.
10. Basis for
Basis for
Basis for
Basis for
12. (a)
7
5
14. (a)
1 7
3
3
7 2
3 3
null space of A:
1 , 0
0
1
row space of A:
1 0 1 7 , 0 1 7
33
3
1 1
2
2
1 3
null space of AT : 2 , 2 .
1 0
0
1
0
1
0 , 1 .
column space of A: 1 1
2
2
1
3
2
2
11
5
2
3
4
3
.
9
5
.
3 .
5
11
(b)
1
3
3
5
3
8
3
1
0
0
0
16. w = , u = .
2
0
3
0
1
1
18. w = 0, u = 1 .
1
1
20. 2
22.
2
4 cos t + cos 2t.
3
.
(b)
2
5
1
5
1
2
(c) 1 .
2
0
1
5
2 .
5
2
3
.
(c)
1
10
9
5
1
5
31
10
.
24. The zero vector is orthogonal to every vector in W .
25. If v is in V⊥ , then (v, v) = 0. By Definition 5.2, v must be the zero vector. If W = {0}, then every
    vector v in V is in W⊥ because (v, 0) = 0. Thus W⊥ = V .
26. Let W = span S , where S = {v1 , v2 , . . . , vm }. If u is in W⊥ , then (u, w) = 0 for any w in W .
    Hence, (u, vi ) = 0 for i = 1, 2, . . . , m. Conversely, suppose that (u, vi ) = 0 for i = 1, 2, . . . , m. Let
    w = Σ_{i=1}^{m} ci vi be any vector in W . Then (u, w) = Σ_{i=1}^{m} ci (u, vi ) = 0. Hence u is in W⊥ .
27. Let v be a vector in Rn . By Theorem 5.12(a), the column space of AT is the orthogonal complement
    of the null space of A. This means that Rn = (null space of A) ⊕ (column space of AT ). Hence, there
    exist unique vectors w in the null space of A and u in the column space of AT so that v = w + u.
28. Let V be a Euclidean space and W a subspace of V . By Theorem 5.10, we have V = W ⊕ W⊥ .
    Let {w1 , w2 , . . . , wr } be a basis for W , so dim W = r, and {u1 , u2 , . . . , us } be a basis for W⊥ , so
    dim W⊥ = s. If v is in V , then v = w + u, where w is in W and u is in W⊥ . Moreover, w and u are
    unique. Then

        v = Σ_{i=1}^{r} ai wi + Σ_{j=1}^{s} bj uj

    so S = {w1 , w2 , . . . , wr , u1 , u2 , . . . , us } spans V . We now show that S is linearly independent. Suppose

        Σ_{i=1}^{r} ai wi + Σ_{j=1}^{s} bj uj = 0.

    Then

        Σ_{i=1}^{r} ai wi = − Σ_{j=1}^{s} bj uj ,

    so Σ_{i=1}^{r} ai wi lies in W ∩ W⊥ = {0}. Hence Σ_{i=1}^{r} ai wi = 0, and since
    w1 , w2 , . . . , wr are linearly independent, a1 = a2 = · · · = ar = 0. Similarly, b1 = b2 = · · · = bs = 0.
    Thus, S is also linearly independent and is then a basis for V . This means that dim V = r + s =
    dim W + dim W⊥ , and w1 , w2 , . . . , wr , u1 , u2 , . . . , us is a basis for V .
29. If {w1 , w2 , . . . , wm } is an orthogonal basis for W , then

        { (1/∥w1 ∥) w1 , (1/∥w2 ∥) w2 , . . . , (1/∥wm ∥) wm }

    is an orthonormal basis for W , so

        projW v = (v, (1/∥w1 ∥) w1 ) (1/∥w1 ∥) w1 + (v, (1/∥w2 ∥) w2 ) (1/∥w2 ∥) w2 + · · · + (v, (1/∥wm ∥) wm ) (1/∥wm ∥) wm
                = ((v, w1 )/(w1 , w1 )) w1 + ((v, w2 )/(w2 , w2 )) w2 + · · · + ((v, wm )/(wm , wm )) wm .
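Exercise 29's formula translates directly to code. A minimal sketch, assuming an orthogonal (not necessarily orthonormal) basis and the standard dot product standing in for the inner product:

```python
def proj_w(v, basis):
    # Orthogonal projection of v onto W = span(basis), where `basis`
    # is a list of mutually orthogonal vectors:
    #   proj_W v = sum_i ((v, w_i)/(w_i, w_i)) w_i.
    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))
    out = [0.0] * len(v)
    for w in basis:
        c = dot(v, w) / dot(w, w)        # Fourier coefficient of v along w
        out = [o + c * wk for o, wk in zip(out, w)]
    return out
```

For an orthonormal basis the denominators are all 1 and the formula reduces to the first displayed line above.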
Section 5.6, p. 356
1. From Equation (1), the normal system of equations is AT Ax = AT b. Since A is nonsingular, so is AT ,
   and hence so is AT A. It follows from matrix algebra that (AT A)−1 = A−1 (AT )−1 , and multiplying
   both sides of the preceding equation by (AT A)−1 gives

       x = (AT A)−1 AT b = A−1 (AT )−1 AT b = A−1 b.
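The normal-equations route of Exercise 1 can be sketched for the common case of fitting a line y = x0 + x1 t. The data in the test are invented for illustration; the 2×2 normal system is solved by Cramer's rule:

```python
def least_squares_line(ts, ys):
    # Solve the normal equations A^T A x = A^T y for a best-fit line
    # y = x0 + x1 * t, where A has rows [1, t_i].
    n = len(ts)
    s_t = sum(ts)
    s_tt = sum(t * t for t in ts)
    s_y = sum(ys)
    s_ty = sum(t * y for t, y in zip(ts, ys))
    # A^T A = [[n, s_t], [s_t, s_tt]], A^T y = [s_y, s_ty].
    det = n * s_tt - s_t * s_t
    x0 = (s_y * s_tt - s_t * s_ty) / det
    x1 = (n * s_ty - s_t * s_y) / det
    return x0, x1
```

When A has full column rank the normal system is nonsingular, matching the (AT A)−1 manipulation above.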
2. x = [ 24/17  8/17 ]^T ≈ [ 1.4118  0.4706 ]^T .
4. Using MATLAB, we obtain
0.8165
0.3961
0.4022 0.1213
0.4082 0.0990 0.5037
0.7549
,
Q=
0 0.5941
0.7029
0.3911
0.4082
0.6931
0.3007
0.5124
2.4495
0
R=
0
0
0.4082
1.6833
,
0
0
x=
1.4118
.
0.4706
6. y = 1.87 + 1.345t, e = 1.712.
7. Minimizing E2 amounts to searching over the vector space P2 of all quadratics in order to determine
   the one whose coefficients give the smallest value in the expression E2 . Since P1 is a subspace of P2 ,
the minimization of E2 has already searched over P1 and thus the minimum of E1 cannot be smaller
than the minimum of E2 .
8. y (t) = 4.9345 − 0.0674t + 0.9970 cos t.
9. x1 ≈ 4.9345, x2 ≈ −6.7426 × 10^−2 , x3 ≈ 9.9700 × 10^−1 .
10. Let x be the number of years since 1960 (x = 0 is 1960).
    (a) y = 127.871022x − 251292.9948
(b) In 2008, expenditure prediction = 5484 in whole dollars.
In 2010, expenditure prediction = 5740 in whole dollars.
In 2015, expenditure prediction = 6379 in whole dollars.
12. Let x be the number of years since 1996 (x = 0 is 1996).
    (a) y = 147.186x^2 − 572.67x + 20698.4
    (b) Compare with the linear regression: y = 752x + 18932.2. E1 ≈ 1.4199 × 10^7 , E2 ≈ 2.7606 × 10^6 .
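The comparison in 12(b) reflects a general fact: enlarging the fitting space can only decrease the least-squares error. A sketch with invented data (a general polynomial fit via the normal equations, solved by Gaussian elimination):

```python
def polyfit_ls(xs, ys, deg):
    # Least-squares polynomial fit via the normal equations
    # (V^T V) c = V^T y for the Vandermonde matrix V.
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                           # elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c                                       # c[0] + c[1] x + c[2] x^2 + ...

def sq_error(xs, ys, c):
    return sum((y - sum(ck * x ** k for k, ck in enumerate(c))) ** 2
               for x, y in zip(xs, ys))
```

Fitting the same data with deg = 1 and deg = 2 and comparing sq_error reproduces the E1 ≥ E2 relationship.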
Supplementary Exercises for Chapter 5, p. 358
1. (u, v) = x1 − 2x2 + x3 = 0; choose x2 = s, x3 = t. Then x1 = 2s − t and any vector of the form

       | 2s − t |       | 2 |       | −1 |
       |   s    |  =  s | 1 |  +  t |  0 |
       |   t    |       | 0 |       |  1 |

   is orthogonal to u. Hence,

       | 2 |   | −1 |
       | 1 | , |  0 |
       | 0 |   |  1 |

   is a basis for the subspace of vectors orthogonal to u.
1
1
1
0 1 2 1 1
1
, .
2. Possible answer: 2 , 6 3
0
0
0
1
1
1
1
1
1
0 1 2 1 1
1
, , .
4. Possible answer:
2 1 6 1 3 1
0
0
0
1
6.
14
7. If n ≠ m, then

       ∫₀^π sin(mt) sin(nt) dt = [ sin((m − n)t)/(2(m − n)) − sin((m + n)t)/(2(m + n)) ]₀^π = 0.

   This follows since m − n and m + n are integers and sine is zero at integer multiples of π.
8
1 + cos 2t
sin t.
(c)
.
3
2
1
4
1
0.4082 0.2673
0.8729
6
14
21
1
3
2 0.4082
10. (a) Q =
0.8018
0.4364
6
14
21
0.8165
0.5345 0.2182
2
2
1
6
14
21
3
6
3
6
2.4495
1.2247 1.2247
6
6
3
R = 0 7
0
1.8708
0.8018 .
14
14
0
0
1.9640
0
0 9
21
2
5
3
0.6667
0.5270
90
1
(b) Q = 3 4 0.3333 0.4216
90
0.6667
0.7379
2
7
3
90
8. (a) 4 cos t + 2 sin t.
R=
3
0
1
10
10
(b)
3.0000 1.0000
.
0
3.1623
0
1
12. (a) The subspace of R3 with basis 0 , 1 .
10
2
7
7
(b) The subspace of R4 with basis
14. (a) 3 t.
5
16. 2 .
(b)
3
t.
10
(c) 2152 (3t2 1).
5
3
0 , 0 1 4 2
3
.
17. Let u = coli (In ). Then 1 = (u, u) = (u, Au) = aii , and thus the diagonal entries of A are equal to 1.
    Now let u = coli (In ) + colj (In ) with i ≠ j . Then

        (u, u) = (coli (In ), coli (In )) + (colj (In ), colj (In )) = 2

    and

        (u, Au) = (coli (In ) + colj (In ), coli (A) + colj (A)) = aii + ajj + aij + aji = 2 + 2aij

    since A is symmetric. It then follows that aij = 0 for i ≠ j . Thus, A = In .
18. (a) This follows directly from the definition of positive definite matrices.
    (b) This follows from the discussion in Section 5.3 following Equation (5), where it is shown that every
        positive definite matrix is nonsingular.
    (c) Let ei be the ith column of In . Then if A is diagonal we have ei^T A ei = aii . It follows immediately
        that A is positive semidefinite if and only if aii ≥ 0, i = 1, 2, . . . , n.
19. (a) ∥P x∥ = √(P x, P x) = √((P x)T P x) = √(xT P T P x) = √(xT In x) = √(xT x) = ∥x∥.
    (b) Let θ be the angle between P x and P y. Then, using part (a), we have

        cos θ = (P x, P y) / (∥P x∥ ∥P y∥) = ((P x)T P y) / (∥x∥ ∥y∥) = (xT P T P y) / (∥x∥ ∥y∥) = (xT y) / (∥x∥ ∥y∥).

    But this last expression is the cosine of the angle between x and y. Since the angle is restricted
    to be between 0 and π we have that the two angles are equal.
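A numerical spot-check of 19(a), using a 2×2 rotation as the orthogonal matrix P; the angle and test vector below are arbitrary choices:

```python
import math

def mat_vec(P, x):
    return [sum(p * xk for p, xk in zip(row, x)) for row in P]

def norm(x):
    return math.sqrt(sum(xk * xk for xk in x))

# A rotation matrix is orthogonal (P^T P = I), so it preserves length.
theta = 0.7
P = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
x = [3.0, 4.0]
```

Here ∥P x∥ agrees with ∥x∥ = 5 up to floating-point rounding, as part (a) predicts.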
20. If A is skew symmetric then AT = −A. Note that xT Ax is a scalar, thus (xT Ax)T = xT Ax. That
    is, xT Ax = (xT Ax)T = xT AT x = −(xT Ax). The only scalar equal to its negative is zero. Hence
    xT Ax = 0 for all x.
21. (a) The columns bj are in Rm . Since the columns are orthonormal they are linearly independent.
        There can be at most m linearly independent vectors in Rm . Thus n ≤ m.
    (b) We have

            bi^T bj = 1 for i = j , and 0 for i ≠ j .

        It follows that B T B = In , since the (i, j ) element of B T B is computed by taking row i of B T
        times column j of B . But row i of B T is just bi^T and column j of B is bj .
22. Let x be in S . Then we can write x = Σ_{j=1}^{k} cj uj . Similarly if y is in T , we have y = Σ_{i=k+1}^{n} ci ui . Then

        (x, y) = ( Σ_{j=1}^{k} cj uj , y ) = Σ_{j=1}^{k} cj (uj , y) = Σ_{j=1}^{k} cj ( uj , Σ_{i=k+1}^{n} ci ui ) = Σ_{j=1}^{k} cj Σ_{i=k+1}^{n} ci (uj , ui ).

    Since j ≠ i throughout, (uj , ui ) = 0, hence (x, y) = 0.
23. Let dim V = n and dim W = r. Since V = W ⊕ W⊥ by Exercise 28, Section 5.5, dim W⊥ = n − r.
    First, observe that if w is in W , then w is orthogonal to every vector in W⊥ , so w is in (W⊥)⊥ . Thus,
    W is a subspace of (W⊥)⊥ . Now again by Exercise 28, dim(W⊥)⊥ = n − (n − r) = r = dim W . Hence
    (W⊥)⊥ = W .
24. If u is orthogonal to every vector in S , then u is orthogonal to every vector in V , so u is in V⊥ = {0}.
    Hence, u = 0.
25. We must show that the rows v1 , v2 , . . . , vm of A are linearly independent. Consider

        a1 v1 + a2 v2 + · · · + am vm = 0,

    which can be written in matrix form as xA = 0 where x = [ a1  a2  · · ·  am ]. Multiplying this equation
    on the right by AT we have xAAT = 0. Since AAT is nonsingular, Theorem 2.9 implies that x = 0,
    so a1 = a2 = · · · = am = 0. Hence rank A = m.
26. We have

        0 = ((u − v), (u + v)) = (u, u) + (u, v) − (v, u) − (v, v) = (u, u) − (v, v).

    Therefore (u, u) = (v, v) and hence ∥u∥ = ∥v∥.
27. Let v = a1 v1 + a2 v2 + · · · + an vn and w = b1 v1 + b2 v2 + · · · + bn vn . By Exercise 26 in Section 4.3,
    d(v, w) = ∥v − w∥. Then

        d(v, w) = ∥v − w∥ = √(v − w, v − w)
                = √((a1 − b1 )v1 + · · · + (an − bn )vn , (a1 − b1 )v1 + · · · + (an − bn )vn )
                = √((a1 − b1 )^2 + (a2 − b2 )^2 + · · · + (an − bn )^2 )

    since (vi , vj ) = 0 if i ≠ j and 1 if i = j .
2
3
28. (a)
= 2;
1
4
1
(c)
cx
1
0
2
(b)
30. x
= 5;
= 5;
1
2
3
=
13;
2
0
2
4
1
2
3
0
2
= 2;
2
=
= 3.
= 2.
4
1
17;
2
= 4.
    ∥x∥1 = |x1 | + |x2 | + · · · + |xn | ≥ 0; ∥x∥1 = 0 if and only if |xi | = 0 for i = 1, 2, . . . , n, if and only if x = 0.
    ∥cx∥1 = |cx1 | + |cx2 | + · · · + |cxn | = |c| |x1 | + |c| |x2 | + · · · + |c| |xn | = |c|(|x1 | + |x2 | + · · · + |xn |) = |c| ∥x∥1 .
    Let x and y be in Rn . By the Triangle Inequality, |xi + yi | ≤ |xi | + |yi | for i = 1, 2, . . . , n. Therefore

        ∥x + y∥1 = |x1 + y1 | + · · · + |xn + yn |
                 ≤ |x1 | + |y1 | + · · · + |xn | + |yn |
                 = (|x1 | + · · · + |xn |) + (|y1 | + · · · + |yn |)
                 = ∥x∥1 + ∥y∥1 .

    Thus ∥·∥1 is a norm.
31. (a) ∥x∥∞ = max{|x1 |, . . . , |xn |} ≥ 0 since each of |x1 |, . . . , |xn | is ≥ 0. Clearly, ∥x∥∞ = 0 if and only if
        x = 0.
    (b) If c is any real scalar,

            ∥cx∥∞ = max{|cx1 |, . . . , |cxn |} = max{|c| |x1 |, . . . , |c| |xn |} = |c| max{|x1 |, . . . , |xn |} = |c| ∥x∥∞ .

    (c) Let y = [ y1  y2  · · ·  yn ]^T and let

            ∥x∥∞ = max{|x1 |, . . . , |xn |} = |xs | ,    ∥y∥∞ = max{|y1 |, . . . , |yn |} = |yt |

        for some s, t, where 1 ≤ s ≤ n and 1 ≤ t ≤ n. Then for i = 1, . . . , n, we have, using the triangle
        inequality,

            |xi + yi | ≤ |xi | + |yi | ≤ |xs | + |yt |.

        Thus

            ∥x + y∥∞ = max{|x1 + y1 |, . . . , |xn + yn |} ≤ |xs | + |yt | = ∥x∥∞ + ∥y∥∞ .
32. (a) Let x be in Rn . Then

            ∥x∥2^2 = x1^2 + · · · + xn^2 ≤ x1^2 + · · · + xn^2 + 2|x1 | |x2 | + · · · + 2|xn−1 | |xn |
                   = (|x1 | + · · · + |xn |)^2
                   = ∥x∥1^2 .

    (b) Let |xi | = max{|x1 |, . . . , |xn |}. Then

            ∥x∥∞ = |xi | ≤ |x1 | + · · · + |xn | = ∥x∥1 .

        Now ∥x∥1 = |x1 | + · · · + |xn | ≤ |xi | + · · · + |xi | = n|xi |. Hence

            (1/n) ∥x∥1 ≤ |xi | = ∥x∥∞ .

        Therefore

            (1/n) ∥x∥1 ≤ ∥x∥∞ ≤ ∥x∥1 .
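The norm inequalities of Exercises 30–32 are easy to spot-check on random vectors; the chain ∥x∥∞ ≤ ∥x∥2 ≤ ∥x∥1 ≤ n∥x∥∞ below combines 32(a) and 32(b) with the standard fact ∥x∥∞ ≤ ∥x∥2:

```python
import random

def norm1(x):
    return sum(abs(v) for v in x)

def norm2(x):
    return sum(v * v for v in x) ** 0.5

def norm_inf(x):
    return max(abs(v) for v in x)

random.seed(0)
for _ in range(100):
    x = [random.uniform(-10, 10) for _ in range(5)]
    assert norm_inf(x) <= norm2(x) + 1e-12
    assert norm2(x) <= norm1(x) + 1e-12
    assert norm1(x) <= 5 * norm_inf(x) + 1e-12
```

A failed assertion here would contradict the hand proofs above, so the loop is a cheap sanity check rather than a proof.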
Chapter Review for Chapter 5, p. 360
True or False
 1. True.    2. False.   3. False.   4. False.   5. True.    6. True.
 7. False.   8. False.   9. False.  10. False.  11. True.   12. True.
Quiz
1. b =
2
2,
c=
2
2.
2. x = | r − 4s  |       | 1 |       | −4 |
       | 3r + 6s |  =  r | 3 |  +  s |  6 | , where r and s are any numbers.
       |    r    |       | 1 |       |  0 |
       |    s    |       | 0 |       |  1 |
3. p(t) = a + bt, where a = 5b/9 and b is any number.
4. (a) The inner product of u and v is bounded by the product of the lengths of u and v.
(b) The cosine of the angle between u and v lies between 1 and 1.
5. (a) v1 · v2 = 0, v1 · v3 = 0, v2 · v3 = 0.
   (b) Normalize the vectors in S : (1/√2) v1 , (1/√6) v2 , (1/√12) v3 .
   (c) Possible answer: v4 = [ 0  0  1  1 ]^T .
6. (b) w = (5/3) u1 + (1/3) u2 + (1/3) u3 .
   (c) projW w = [ 1/3  4/3  1/3 ]^T . Distance from V to w = √26 / 3.
2
1
2 3 0
0 1 0
7. , 3 , .
2 0
1 3
2
0
1
0
1
0
8. .
0
1
9. Form the matrix A whose columns are the vectors in S . Find the row reduced echelon form of A. The
   columns of this matrix can be used to obtain a basis for W . The rows of this matrix give the solution
   to the homogeneous system Ax = 0 and from this we can find a basis for W⊥ .
10. We have

        projW (u + v) = (u + v, w1 )w1 + (u + v, w2 )w2 + (u + v, w3 )w3
                      = [(u, w1 ) + (v, w1 )]w1 + [(u, w2 ) + (v, w2 )]w2 + [(u, w3 ) + (v, w3 )]w3
                      = (u, w1 )w1 + (u, w2 )w2 + (u, w3 )w3 + (v, w1 )w1 + (v, w2 )w2 + (v, w3 )w3
                      = projW u + projW v.
Chapter 6
Linear Transformations and Matrices
Section 6.1, p. 372
2. Only (c) is a linear transformation.
4. (a)
6. If L is a linear transformation then L(au + bv) = L(au) + L(bv) = aL(u) + bL(v). Conversely, if the
condition holds let a = b = 1; then L(u + v) = L(u) + L(v), and if we let b = 0 then L(au) = aL(u).
8. (a)
0 1
.
1
0
(b)
k00
(c) 0 k 0 .
00k
1k
.
01
r00
(b) 0 r 0.
00r
100
10. (a)
.
010
(c)
5
12. (a) 4 .
7
x2 + 2x3
(b) 2x1 + x2 + 3x3 .
x1 + 2x2 3x3
14. (a) 8 5 .
(b)
1
0
.
0 1
a1 + 3a2
2
5a1 + a2
.
2
16. We have

        L(X + Y ) = A(X + Y ) − (X + Y )A = AX + AY − XA − Y A
                  = (AX − XA) + (AY − Y A)
                  = L(X ) + L(Y ).

    Also, L(aX ) = A(aX ) − (aX )A = a(AX − XA) = aL(X ).
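Exercise 16's map L(X) = AX − XA can be spot-checked on small matrices; the particular A, X, and Y below are arbitrary integer examples, so the equality checks are exact:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]

def L(X):
    # The commutator map of Exercise 16: L(X) = AX - XA.
    return mat_sub(mat_mul(A, X), mat_mul(X, A))
```

Additivity L(X + Y) = L(X) + L(Y) and homogeneity L(cX) = cL(X) then hold identically, matching the algebra above.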
18. We have
L(v1 + v2 ) = (v1 + v2 , w) = (v1 , w) + (v2 , w) = L(v1 ) + L(v2 ).
Also, L(cv) = (cv, w) = c(v, w) = cL(v).
20. (a) 17t 7.
(b)
5a b
2
t+
a + 5b
.
2
21. We have
L(u + v) = 0W = 0W + 0W = L(u) + L(v)
and
L(cu) = 0W = c0W = cL(u).
22. We have
L(u + v) = u + v = L(u) + L(v)
and
L(cu) = cu = cL(u).
23. Yes:

        L( | a1 b1 |   | a2 b2 | )     | a1 + a2  b1 + b2 |
           | c1 d1 | + | c2 d2 |   = L | c1 + c2  d1 + d2 |
                                     = (a1 + a2 ) + (d1 + d2 )
                                     = (a1 + d1 ) + (a2 + d2 )
                                     = L | a1 b1 | + L | a2 b2 | .
                                         | c1 d1 |     | c2 d2 |

    Also, if k is any real number,

        L( k | a b | ) = L | ka kb | = ka + kd = k(a + d) = kL | a b | .
             | c d |       | kc kd |                           | c d |
24. We have
L(f + g ) = (f + g ) = f + g = L(f ) + L(g )
and
L(af ) = (af ) = af = aL(f ).
25. We have

        L(f + g ) = ∫_a^b (f (x) + g (x)) dx = ∫_a^b f (x) dx + ∫_a^b g (x) dx = L(f ) + L(g )

    and

        L(cf ) = ∫_a^b cf (x) dx = c ∫_a^b f (x) dx = cL(f ).
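The linearity in Exercise 25 survives discretization; a sketch with a left-endpoint Riemann sum, where the interval, grid size, and integrands are arbitrary choices:

```python
def riemann(f, a, b, n=1000):
    # Left-endpoint Riemann sum approximating the integral of f on [a, b].
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: x * x
g = lambda x: 3 * x + 1
a, b = 0.0, 2.0

# Additivity: L(f + g) = L(f) + L(g) holds for the sum as well.
lhs = riemann(lambda x: f(x) + g(x), a, b)
rhs = riemann(f, a, b) + riemann(g, a, b)
```

Both sides differ only by floating-point rounding, since the discrete sum is itself a linear functional of f.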
26. Let X , Y be in Mnn and let c be any scalar. Then
L(X + Y ) = A(X + Y ) = AX + AY = L(X ) + L(Y )
L(cX ) = A(cX ) = c(AX ) = cL(X )
Therefore, L is a linear transformation.
27. No.
28. No.
29. We have, by the properties of coordinate vectors discussed in Section 4.8, L(u + v) = [u + v]S =
    [u]S + [v]S = L(u) + L(v) and L(cu) = [cu]S = c [u]S = cL(u).
30. Let v = a b c d and we write v as a linear combination of the vectors in S :
a1 v1 + a2 v2 + a3 v3 + a4 v4 = v = a b c d .
The resulting linear system has the solution
a1 = 4a + 5b 3c 4d
a3 = a b + c + d
Then L
= 2a 5b + 3c + 4d
abcd
a2 = 2a + 3b 2c 2d
a4 = 3a 5b + 3c + 4d.
14a + 19b 12c 14d .
31. Let L(vi ) = wi . Then for any v in V , express v in terms of the basis vectors of S :

        v = a1 v1 + a2 v2 + · · · + an vn

    and define L(v) = Σ_{i=1}^{n} ai wi . If v = Σ_{i=1}^{n} ai vi and w = Σ_{i=1}^{n} bi vi are any vectors in V and c is any scalar,
    then

        L(v + w) = L( Σ_{i=1}^{n} (ai + bi ) vi ) = Σ_{i=1}^{n} (ai + bi )wi = Σ_{i=1}^{n} ai wi + Σ_{i=1}^{n} bi wi = L(v) + L(w)

    and in a similar fashion

        L(cv) = Σ_{i=1}^{n} cai wi = c Σ_{i=1}^{n} ai wi = cL(v)

    for any scalar c, so L is a linear transformation.
32. Let w1 and w2 be in L(V1 ) and let c be a scalar. Then w1 = L(v1 ) and w2 = L(v2 ), where v1 and v2
are in V1 . Then w1 + w2 = L(v1 ) + L(v2 ) = L(v1 + v2 ) and cw1 = cL(v1 ) = L(cv1 ). Since v1 + v2
and cv1 are in V1 , we conclude that w1 + w2 and cw1 lie in L(V1 ). Hence L(V1 ) is a subspace of V .
33. Let v be any vector in V . Then

        v = c1 v1 + c2 v2 + · · · + cn vn .

    We now have

        L1 (v) = L1 (c1 v1 + c2 v2 + · · · + cn vn )
               = c1 L1 (v1 ) + c2 L1 (v2 ) + · · · + cn L1 (vn )
               = c1 L2 (v1 ) + c2 L2 (v2 ) + · · · + cn L2 (vn )
               = L2 (c1 v1 + c2 v2 + · · · + cn vn )
               = L2 (v).
34. Let v1 and v2 be in L−1 (W1 ) and let c be a scalar. Then L(v1 + v2 ) = L(v1 ) + L(v2 ) is in W1 since
    L(v1 ) and L(v2 ) are in W1 and W1 is a subspace of V . Hence v1 + v2 is in L−1 (W1 ). Similarly,
    L(cv1 ) = cL(v1 ) is in W1 so cv1 is in L−1 (W1 ). Hence, L−1 (W1 ) is a subspace of V .
35. Let {e1 , . . . , en } be the natural basis for Rn . Then O(ei ) = 0 for i = 1, . . . , n. Hence the standard
    matrix representing O is the n × n zero matrix O.
36. Let {e1 , . . . , en } be the natural basis for Rn . Then I (ei ) = ei for i = 1, . . . , n. Hence the standard
    matrix representing I is the n × n identity matrix In .
37. Suppose there is another matrix B such that L(x) = B x for all x in Rn . Then L(ej ) = B ej = Colj (B )
    for j = 1, . . . , n. But by definition, L(ej ) is the j th column of A. Hence Colj (B ) = Colj (A) for
    j = 1, . . . , n and therefore B = A. Thus the matrix A is unique.
38. (a) 71
52
33
47
30
26
84
56
43
99
69
55.
(b) CERTAINLY NOT.
Section 6.2, p. 387
2. (a) No.
(b) Yes.
(c) Yes.
(d) No.
2a
(e) All vectors of the form
, where a is any real number.
a
1
2
(f) A possible answer is
,
.
2
4
4. (a)
.
00
(b) Yes.
(c) No.
6. (a) A possible basis for ker L is {1} and dim ker L = 1.
(b) A possible basis for range L is {2t3 , t2 } and dim range L = 2.
8. (a) {t2 + t + 1}.
10. (a)
01
10
,1
01
0
2
(b) {t, 1}.
.
(b)
0 2
1 0
,
1
0
01
.
12. (a) Follows at once from Theorem 6.6.
(b) If L is onto, then range L = W and the result follows from part (a).
14. (a) If L is one-to-one then dim ker L = 0, so from Theorem 6.6, dim V = dim range L. Hence range L =
W.
(b) If L is onto, then W = range L, and since dim W = dim V , then dim ker L = 0.
15. If y is in range L, then y = L(x) = Ax for some x in Rm . This means that y is a linear combination
of the columns of A, so y is in the column space of A. Conversely, if y is in the column space of A,
then y = Ax, so y = L(x) and y is in range L.
2
0
0 1
16. (a) A possible basis for ker L is 1 , 0 ; dim ker L = 2.
1 0
0
0
0
0
1
0 1 0
(b) A possible basis for range L is , , ; dim range L = 3.
0
0
1
1
1
0
18. Let S = {v1 , v2 , . . . , vn } be a basis for V . If L is invertible then L is one-to-one: from Theorem 6.7
    it follows that T = {L(v1 ), L(v2 ), . . . , L(vn )} is linearly independent. Since dim W = dim V = n, T
    is a basis for W . Conversely, let the image of a basis for V under L be a basis for W . Let v ≠ 0V
    be any vector in V . Then there exists a basis for V including v (Theorem 4.11). From the hypothesis
    we conclude that L(v) ≠ 0W . Hence, ker L = {0V } and L is one-to-one. From Corollary 6.2 it follows
    that L is onto. Hence, L is invertible.
1
0
1
2 , 1 , 1 . Since this set of vectors is linearly independent, it is
19. (a) Range L is spanned by
0
1
3
    a basis for range L. Hence L : R3 → R3 is one-to-one and onto.
4
3
(b) 1 .
3
2
3
20. If S is linearly dependent then a1 v1 + a2 v2 + · · · + an vn = 0V , where a1 , a2 , . . . , an are not all 0. Then

        a1 L(v1 ) + a2 L(v2 ) + · · · + an L(vn ) = L(0V ) = 0W ,

    which gives the contradiction that T is linearly dependent. The converse is false: let L : V → W be
    defined by L(v) = 0W .
22. A possible answer is L
u1 u2
23. (a) L is one-to-one and onto.
= u1 + 3u2 u1 + u2 2u1 u2 .
2u1 u3
(b) 2u1 u2 + 2u3 .
u1 + u2 u3
24. If L is one-to-one, then dim V = dim ker L + dim range L = dim range L. Conversely, if dim range L =
dim V , then dim ker L = 0.
26. (a) 7;
(b) 5.
28. (a) Let a = 0, b = 1. Let

            f (x) = 0 for x ≠ 1/2,  and  f (x) = 1 for x = 1/2.

        Then L(f ) = ∫_0^1 f (x) dx = 0 = L(0), so L is not one-to-one.
    (b) Let a = 0, b = 1. For any real number c, let f (x) = c (constant). Then L(f ) = ∫_0^1 c dx = c. Thus
        L is onto.
29. Suppose that x1 and x2 are solutions to L(x) = b. We show that x1 − x2 is in ker L:

        L(x1 − x2 ) = L(x1 ) − L(x2 ) = b − b = 0.
30. Let L : Rn → Rm be defined by L(x) = Ax, where A is m × n. Suppose that L is onto. Then
    dim range L = m. By Theorem 6.6, dim ker L = n − m. Recall that ker L = null space of A, so nullity
    of A = n − m. By Theorem 4.19, rank A = n − nullity of A = n − (n − m) = m. Conversely, suppose
    rank A = m. Then nullity A = n − m, so dim ker L = n − m. Then dim range L = n − dim ker L =
    n − (n − m) = m. Hence L is onto.
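The rank–nullity bookkeeping in Exercise 30 can be checked by row reduction; below is a small Gaussian-elimination rank routine, with an arbitrary example matrix:

```python
def rank(M, tol=1e-9):
    # Rank of a matrix (list of rows) via Gaussian elimination
    # with partial pivoting; entries below `tol` are treated as zero.
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if piv is None or abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0]]          # rank 1, so nullity = 3 - 1 = 2
```

For this A, rank + nullity = 1 + 2 = 3 = n, exactly the identity the proof manipulates.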
31. From Theorem 6.6, we have dim ker L + dim range L = dim V .
(a) If L is one-to-one, then ker L = {0}, so dim ker L = 0. Hence dim range L = dim V = dim W so L
is onto.
(b) If L is onto, then range L = W , so dim range L = dim W = dim V . Hence dim ker L = 0 and L is
one-to-one.
Section 6.3, p. 397
1000
2. (a) 0 1 1 0.
0011
cos
sin
1
0
6. (a)
0
4.
0 1
(b) 0
1
1
1
sin
.
cos
00
1
.
2.
(b)
10
01
3
1 1
0
3 .
0
1
(c) 2 0 2 .
1
0
8. (a)
3
0
0
1
0
3
2
0
4
0
0
4
3
0
3
2
. (b) 6 5 4 3 .
3
0
3
7
0
4
8
6
4
4
10
10. (a) 0 1.
01
1
1
(b) 1
2
1
2
1
2
0
3
0
4
2 3 2 4
.
(c)
3
0
4
0
2
4
2
6
1
2
(d)
3
4
1
1
3
3
3
0
7
0
0
1
.
0
3
.
(c) 3t2 + 3t + 3.
3
2
12. Let S = {v1 , v2 , . . . , vm } be an ordered basis for U and T = {v1 , v2 , . . . , vm , vm+1 , . . . , vn } an ordered
    basis for V (Theorem 4.11). Now L(vj ) for j = 1, 2, . . . , m is a vector in U , so L(vj ) is a linear
    combination of v1 , v2 , . . . , vm . Thus L(vj ) = a1 v1 + a2 v2 + · · · + am vm + 0vm+1 + · · · + 0vn . Hence,

        [L(vj )]T = [ a1  a2  · · ·  am  0  · · ·  0 ]^T .

14. (a) [ 5  13 ]^T .   (b) [ 5  3 ]^T .   (c) [ 3  7 ]^T .   (d) [ 1  1 ]^T .   (e) [ 2  3 ]^T .
15. Let S = {v1 , v2 , . . . , vn } be an ordered basis for V and T = {w1 , w2 , . . . , wm } an ordered basis for
    W . Now O(vj ) = 0W for j = 1, 2, . . . , n, so

        [O(vj )]T = [ 0  0  · · ·  0 ]^T .

16. Let S = {v1 , v2 , . . . , vn } be an ordered basis for V . Then I (vj ) = vj for j = 1, 2, . . . , n, so [I (vj )]S
    is the column vector with 1 in the j th row and 0 elsewhere.

18. | 1 0 |
    | 0 1 |
20. (a)
0 1
0 1
1 1
2
2
. (b)
. (c)
.
1
1
0
1
0
1
2
2
(d)
1 1
.
1
1
21. Let {v1 , v2 , . . . , vn } be an ordered basis for V . Then L(vi ) = cvi . Hence
L(vi )
c
0
Thus, the matrix .
.
.
0
22. (a) L(v1 )
T
(b) L(v1 ) =
(c)
=
S
0
.
.
.
c ith row.
=
0
.
.
.
0
0
0
. = cIn represents L with respect to S .
.
.
c
0
c
1
, L(v2 )
1
T
=
2
, and L(v3 )
1
T
=
1
.
0
0
3
1
, L(v2 ) =
, and L(v3 ) =
.
3
3
2
1
.
11
23. Let I : V → V be the identity operator defined by I (v) = v for v in V . The matrix A of I with respect
    to S and T is obtained as follows. The j th column of A is [I (vj )]T = [vj ]T , so as defined in Section
    3.7, A is the transition matrix PT←S from the S -basis to the T -basis.
Section 6.4, p. 405
1. (a) Let u and v be vectors in V and c1 and c2 scalars. Then

       (L1 ⊕ L2 )(c1 u + c2 v) = L1 (c1 u + c2 v) + L2 (c1 u + c2 v)
           (from Definition 6.5)
           = c1 L1 (u) + c2 L1 (v) + c1 L2 (u) + c2 L2 (v)
           (since L1 and L2 are linear transformations)
           = c1 (L1 (u) + L2 (u)) + c2 (L1 (v) + L2 (v))
           (using properties of vector operations since the images are in W )
           = c1 (L1 ⊕ L2 )(u) + c2 (L1 ⊕ L2 )(v)
           (from Definition 6.5)

   Thus by Exercise 4 in Section 6.1, L1 ⊕ L2 is a linear transformation.
   (b) Let u and v be vectors in V and k1 and k2 be scalars. Then

       (c ⊙ L)(k1 u + k2 v) = cL(k1 u + k2 v)
           (from Definition 6.5)
           = c(k1 L(u) + k2 L(v))
           (since L is a linear transformation)
           = ck1 L(u) + ck2 L(v)
           (using properties of vector operations since the images are in W )
           = k1 cL(u) + k2 cL(v)
           (using properties of vector operations)
           = k1 (c ⊙ L)(u) + k2 (c ⊙ L)(v)
           (by Definition 6.5)
   (c) Let S = {v1 , v2 , . . . , vn }. Then

           A = [ [L(v1 )]T  [L(v2 )]T  · · ·  [L(vn )]T ] .

       The matrix representing c ⊙ L is given by

           [ [(c ⊙ L)(v1 )]T  [(c ⊙ L)(v2 )]T  · · ·  [(c ⊙ L)(vn )]T ]
               = [ [cL(v1 )]T  [cL(v2 )]T  · · ·  [cL(vn )]T ]
               (by Definition 6.5)
               = [ c[L(v1 )]T  c[L(v2 )]T  · · ·  c[L(vn )]T ]
               (by properties of coordinates)
               = c [ [L(v1 )]T  [L(v2 )]T  · · ·  [L(vn )]T ] = cA
               (by matrix algebra)
2. (a) (O ⊕ L)(u) = O(u) + L(u) = L(u) for any u in V .
   (b) For any u in V , we have

           [L ⊕ ((−1) ⊙ L)](u) = L(u) + (−1)L(u) = 0 = O(u).
4. Let L1 and L2 be linear transformations of V into W . Then L1 ⊕ L2 and c ⊙ L1 are linear transformations
   by Exercise 1 (a) and (b). We must now verify that the eight properties of Definition 4.4 are satisfied.
   For example, if v is any vector in V , then

       (L1 ⊕ L2 )(v) = L1 (v) + L2 (v) = L2 (v) + L1 (v) = (L2 ⊕ L1 )(v).

   Therefore, L1 ⊕ L2 = L2 ⊕ L1 . The remaining seven properties are verified in a similar manner.
6. (L2 ∘ L1 )(au + bv) = L2 (L1 (au + bv)) = L2 (aL1 (u) + bL1 (v))
   = aL2 (L1 (u)) + bL2 (L1 (v)) = a(L2 ∘ L1 )(u) + b(L2 ∘ L1 )(v).
8. (a) 3u1 5u2 2u3 4u1 + 7u2 + 4u3 11u1 + 3u2 + 10u3 .
(b) 8u1 + 4u2 + 4u3 3u1 + 2u2 + 3u3 u1 + 5u2 + 4u3 .
3 5 2
(c) 4
7
4 .
11
3 10
844
(d) 3 2 3 .
154
10. Consider u1 L1 + u2 L2 + u3 L3 = O. Then
(u1 L1 + u2 L2 + u3 L3 )
=O
100
100
=00
= u1 1 1 + u2 1 0 + u3 1 0
= u1 + u2 + u3 u1 .
Thus, u1 = 0. Also,
(u1 L1 + u2 L2 + u3 L3 )
010
=O
= 0 0 = u1 u2 u3 .
010
Thus u2 = u3 = 0.
12. (a) 4.
(b) 16.
(c) 6.
13. (a) Verify that L(au + bv) = aL(u) + bL(v).
    (b) L(vj ) = a1j w1 + a2j w2 + · · · + amj wm , so

            [L(vj )]T = [ a1j  a2j  · · ·  amj ]^T = the j th column of A.

        Hence A represents L with respect to S and T .
1
2
2
, L(e2 ) =
, L(e3 ) =
.
3
4
1
14. (a) L(e1 ) =
(b)
u1 + 2u2 2u3
.
3u1 + 4u2 u3
(c)
1
.
8
16. Possible answer: L1
u1
u2
=
u1
u2
18. Possible answers: L
u1
u2
=
u1 u2
;L
u1 u2
1
2
20. 3
2
1
2
22.
9
2
1
2
5
2
1
2
1
2
and L2
u1
u2
u1
u2
=
=
u2
.
u1
0
.
u1
1 .
2
1
1
2
2
6
2
1
0 .
3 1
3
2
23. From Theorem 6.11, it follows directly that A2 represents L2 = L ∘ L. Now Theorem 6.11 implies that
    A3 represents L3 = L ∘ L2 . We continue this argument as long as necessary. A more formal proof can
    be given using induction.
24.
1
10
1
5
3
10
2
5
.
Section 6.5, p. 413
1. (a) A = In−1 AIn .
   (b) If B = P −1 AP then A = P BP −1 . Let P −1 = Q, so A = Q−1 BQ.
   (c) If B = P −1 AP and C = Q−1 BQ, then C = Q−1 P −1 AP Q and, letting M = P Q, we get C =
       M −1 AM .
1010
1 1
1
0 0 1 1
1
1 1
2. (a)
0 0 0 1. (b) 0
1 1
1100
0
0
1
1
0
4. P =
0
1
1
1
0
0
1
0
1
0
0
0
1 1
1
,P =
0
0
0
1
then C = Q1 P 1 AP Q and letting M = P Q we get C =
0
0
.
0
0
101
0 1
(c) 1 1 0. (d) 0
1
001
1
1
1 1
0
3 .
0
1
(e) 3.
0
0
1
0 1 1
.
0
1
0
1
1
1
0
0
0
1
1
1
0 1 1 0
P 1 AP =
0
0
1
0 3
0
1
1
1
1
0
3
0
4
1
2 3 2 4 0
=
3
0
4
0 0
2
4
2
6
1
0
1
0
3
2
0
4
0
1
1
0
0
1
0
1
0
0 1110
2 0 1 0 1
0 0 0 1 0
4 1000
0
4
3
0
3
1 6 5 4 3
=
.
0 3
3
7
0
1
8
6
4
4
6. If B = P −1 AP , then B 2 = (P −1 AP )(P −1 AP ) = P −1 A2 P . Thus, A2 and B 2 are similar, etc.
7. If B = P −1 AP , then B T = P T AT (P −1 )T . Let Q = (P −1 )T , so B T = Q−1 AT Q.
8. If B = P −1 AP , then Tr(B ) = Tr(P −1 AP ) = Tr(AP P −1 ) = Tr(AIn ) = Tr(A).
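Exercise 8's invariant, Tr(P⁻¹AP) = Tr(A), can be checked with an explicit 2×2 inverse; A and P below are arbitrary (P nonsingular):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    # Inverse of a 2x2 matrix via the adjugate formula.
    a, b = P[0]
    c, d = P[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def trace(M):
    return M[0][0] + M[1][1]

A = [[1.0, 2.0], [3.0, 4.0]]
P = [[2.0, 1.0], [1.0, 1.0]]            # det = 1, nonsingular
B = mat_mul(inv2(P), mat_mul(A, P))     # B = P^{-1} A P
```

The same cyclic-trace argument shows the determinant is also a similarity invariant (Exercise 17 below).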
1
1
0
10. Possible answer: 1 , 0 , 1 .
1
1
0
11. (a) If B = P −1 AP and A is nonsingular then B is nonsingular.
    (b) If B = P −1 AP then B −1 = P −1 A−1 P .
12.
0 1
.
1
0
1
0
1
1
14. P =
, Q = 0
1
1 1
1 1
0
1
1 , Q1 = 1
2
1
1
2
1
1
2
B = Q1 AP =
0
1
2
1
2
1
2
0
0
1
2
1
2
1
2
1
2
10
1
1
1
1
1
=
01
2
2
1 1
1
01
1
2
2
0
0
1
1
1
1
1
.
=1
0
2
2
1 1
1
1
3
2
2
16. A and O are similar if and only if A = P −1 OP = O for a nonsingular matrix P .
17. Let B = P −1 AP . Then det(B ) = det(P −1 AP ) = det(P )−1 det(A) det(P ) = det(A).
Section 6.6, p. 425
4
1
3
2
1
O
1
2
3
4
4
1
0
1
(c) 1 , 1 , 1 .
2
1
1
1
3
1
1
2
1 ,
1
1
2
3
4
3
3
2
3
2 .
1
0 1
2
4
1
2
(e) 1 ,
2
1
1
2
(d) Q = 0 1 1 .
2
2
00
1
2
O1
1
(b) M = 0 1 1 .
2
00
1
2
2. (a)
0 1
2
1
1
O1
1
2
3
4
(f) No. The
1
4. (a) M = 0
0
(b) Yes,
1
6. A = 0
0
    images are not the same since the matrices M and Q are different.
02
1 2 .
01
1 0 2
compute P 1 ; P 1 = 0 1 2 .
00
1
0 3
101
1 2 and B = 0 1 3 . The images will be the same since AB = BA.
0
1
001
8. The original triangle is reflected about the x-axis and then dilated (scaled) by a factor of 2. Thus the
   matrix M that performs these operations is given by

       M = | 2 0 0 | | 1  0 0 |   | 2  0 0 |
           | 0 2 0 | | 0 −1 0 | = | 0 −2 0 | .
           | 0 0 1 | | 0  0 1 |   | 0  0 1 |

   Note that the two matrices are diagonal and diagonal matrices commute under multiplication, hence
   the order of the operations is not relevant.
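The homogeneous-coordinate composition in Exercise 8 can be sketched directly, with 3×3 matrices acting on points (x, y, 1):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(M, point):
    # Apply a homogeneous 3x3 transform to a 2D point (x, y).
    x, y = point
    v = [x, y, 1.0]
    out = [sum(m * vk for m, vk in zip(row, v)) for row in M]
    return (out[0], out[1])          # homogeneous coordinate stays 1 here

scale2  = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
reflect = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]   # reflect about the x-axis
M = mat_mul(scale2, reflect)                   # reflect first, then scale
```

Because both factors are diagonal, mat_mul(reflect, scale2) gives the same M, which is the commutativity remark in the solution.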
10. Here there are various ways to proceed depending on how one views the mapping.
    Solution #1: The original semicircle is dilated by a factor of 2. The point at (−1, 1) now corresponds
    to a point at (−2, 2). Next we translate the point (−2, 2) to the point (6, 2). In order to translate point
    (−2, 2) to (6, 2) we add 8 to the x-coordinate and 0 to the y-coordinate. Thus the matrix M that
    performs these operations is given by

        M = | 1 0 8 | | 2 0 0 |   | 2 0 8 |
            | 0 1 0 | | 0 2 0 | = | 0 2 0 | .
            | 0 0 1 | | 0 0 1 |   | 0 0 1 |

    Solution #2: The original semicircle is translated so that the point (−1, 1) corresponds to point (3, 1).
    In order to translate point (−1, 1) to (3, 1) we add 4 to the x-coordinate and 0 to the y-coordinate.
    Next we perform a scaling by a factor of 2. Thus the matrix M that performs these operations is given
    by

        M = | 2 0 0 | | 1 0 4 |   | 2 0 8 |
            | 0 2 0 | | 0 1 0 | = | 0 2 0 | .
            | 0 0 1 | | 0 0 1 |   | 0 0 1 |

    Note that the matrix of the composite transformation is the same, yet the matrices for the individual
    steps differ.
12. The image can be obtained by first translating the semicircle to the origin and then rotating it 45°.
    Using this procedure the corresponding matrix is

        M = | √2/2 −√2/2 0 | | 1 0 −1 |   | √2/2 −√2/2   0  |
            | √2/2  √2/2 0 | | 0 1 −1 | = | √2/2  √2/2 −√2 | .
            |  0     0   1 | | 0 0  1 |   |  0     0     1  |
14. (a) Since we are translating down the y -axis, only the y coordinates of the vertices of the triangle
change. The matrix for this sweep is
100
0
0 1 0 sj +1 10
.
0 0 1
0
000
1
(b) If we translate and then
matrix product
10
0 1
0 0
00
rotate for each step the composition of the operations is given by the
0
0
cos(sj +1 /4) 0 sin(sj +1 /4) 0
0 sj +1 10
0
1
0
0
sin(sj +1 /4) 0 cos(sj +1 /4) 0
1
0
0
1
0
0
0
1
cos(sj +1 /4) 0 sin(sj +1 /4)
0
0
1
0
sj +1 10
.
=
sin(sj +1 /4) 0 cos(sj +1 /4)
0
0
0
0
1
(c) Take the composition of the sweep matrix from part (a) with a scaling by 1 in the z -direction. In
2
the scaling matrix we must write the parameterization so it decreases from 1 to 1 , hence we use
2
1 sj +1 1 . We obtain the matrix
2
100
0
10
0
0
10
0
0
0 1 0 sj +1 10 0 1
0
sj +1 10
0
0 0 1
=
.
0 0 1
0 0 0 1 sj +1 1
0
0 0 0 1 sj +1 1
2
2
00
0
1
00
0
1
000
1
Supplementary Exercises for Chapter 6, p. 430
1. Let A and B belong to Mnn and let c be a scalar. From Exercise 43 in Section 1.3 we have that
   Tr(A + B ) = Tr(A) + Tr(B ) and Tr(cA) = c Tr(A). Thus Definition 6.1 is satisfied and it follows that
   Tr is a linear transformation.
2. Let A and B belong to Mnm and let c be a scalar. Then L(A +B ) = (A +B )T = AT + B T = L(A)+L(B )
and L(cA) = (cA)T = cAT = cL(A), so L is a linear transformation.
4. (a)
3 4 8.
(b)
0 0 0.
(c)
1
3
6. (a) No.
2u1 + u2 + u3 4u1 4u2 + 2u3 7u1 4u2 + 2u3 .
(b) Yes.
8. (a) ker L =
00
00
(c) Yes.
(d) No.
(e) t2 t + 1
; it has no basis.
(b)
(f) t2 , t.
10
01
00
00
,
,
,
00
00
10
01
.
10. A possible basis consists of any nonzero constant function.
12. (a) A possible basis is t
1
2
.
(b) A possible basis is {1}.
(c) dim ker L + dim range L = 1 + 1 = 2 = dim P1 .
14. (a) L(p1 (t)) = 3t 3, L(p2 (t)) = t + 8.
(b) L(p1 (t))
(c)
7
3 (t
S
=
2
, L(p2 (t))
1
S
=
3
.
2
+ 5).
16. Let u be any vector in Rn and assume that ∥L(u)∥ = ∥u∥. From Theorem 6.9, if we let S be the
    standard basis for Rn then there exists an n × n matrix A such that L(u) = Au. Then

        ∥L(u)∥^2 = (L(u), L(u)) = (Au, Au) = (u, AT Au)

    by Equation (3) of Section 5.3, and it then follows that (u, u) = (u, AT Au). Since AT A is symmetric,
    Supplementary Exercise 17 of Chapter 5 implies that AT A = In . It follows that for v, w any vectors
    in Rn ,

        (L(u), L(v)) = (Au, Av) = (u, AT Av) = (u, v).

    Conversely, assume that (L(u), L(v)) = (u, v) for all u, v in Rn . Then ∥L(u)∥^2 = (L(u), L(u)) =
    (u, u) = ∥u∥^2 , so ∥L(u)∥ = ∥u∥.
17. Assume that (L1 + L2 )^2 = L1^2 + 2L1 L2 + L2^2 . Then

        L1^2 + L1 L2 + L2 L1 + L2^2 = L1^2 + 2L1 L2 + L2^2 ,

    and simplifying gives L1 L2 = L2 L1 . The steps are reversible.
18. If (L(u), L(v)) = (u, v) then

        cos θ = (L(u), L(v)) / (∥L(u)∥ ∥L(v)∥) = (u, v) / (∥u∥ ∥v∥),

    where θ is the angle between L(u) and L(v). Thus θ is the angle between u and v.
19. (a) Suppose that L(v) = 0. Then 0 = (0, 0) = (L(v), L(v)) = (v, v). But then from the definition of an inner product, v = 0. Hence ker L = {0}.
(b) See the proof of Exercise 16.
20. Let w be any vector in range L. Then there exists a vector v in V such that L(v) = w. Next there exist scalars c1, . . . , ck such that v = c1 v1 + ... + ck vk. Thus
w = L(c1 v1 + ... + ck vk) = c1 L(v1) + ... + ck L(vk).
Hence {L(v1 ), L(v2 ), . . . , L(vk )} spans range L.
21. (a) We use Exercise 4 in Section 6.1 to show that L is a linear transformation. Let u = (u1, u2, . . . , un) and v = (v1, v2, . . . , vn) be vectors in R^n and let r and s be scalars. Then
L(ru + sv) = L(ru1 + sv1, ru2 + sv2, . . . , run + svn)
= (ru1 + sv1)v1 + (ru2 + sv2)v2 + ... + (run + svn)vn
= r(u1 v1 + u2 v2 + ... + un vn) + s(v1 v1 + v2 v2 + ... + vn vn)
= rL(u) + sL(v).
Therefore L is a linear transformation.
(b) We show that ker L = {0V}. Let v be in the kernel of L. Then L(v) = a1 v1 + a2 v2 + ... + an vn = 0. Since the vectors v1, v2, . . . , vn form a basis for V, they are linearly independent. Therefore a1 = 0, a2 = 0, . . . , an = 0. Hence v = 0. Therefore ker L = {0} and hence L is one-to-one by Theorem 6.4.
(c) Since both Rn and V have dimension n, it follows from Corollary 6.2 that L is onto.
22. By Theorem 6.10, dim V* = n · 1 = n, so dim V* = dim V. This implies that V and V* are isomorphic vector spaces.
23. We have BA = A^{-1}(AB)A, so AB and BA are similar.
Chapter Review for Chapter 6, p. 432
True or False
1. True. 2. False. 3. True. 4. False. 5. False.
6. True. 7. True. 8. True. 9. True. 10. False.
11. True. 12. False.
Quiz
1. Yes.
2. (b) [1 0; k 1].
3. (a) Possible answer: [1 1 1; 1 2 1; 1 3 1]. (b) No.
4. [4; 3; 4].
5.
1 1
6. (a) 1
1 .
2
0
0 1
.
3
5
10
(b)
.
21
(c) [1 1 2; 1 2 0; 0 1 0].
1
1
(d) 2
0 .
1 1
Chapter 7
Eigenvalues and Eigenvectors
Section 7.1, p. 450
2. The characteristic polynomial is λ^2 - 1, so the eigenvalues are λ1 = 1 and λ2 = -1. Associated eigenvectors are x1 = [1; 1] and x2 = [1; -1].
4. The eigenvalues of L are λ1 = 2, λ2 = 1, and λ3 = 3. Associated eigenvectors are x1 = [1 0 0], x2 = [1 1 0], and x3 = [3 1 1].
6. (a) p(λ) = λ^2 - 2λ = λ(λ - 2). The eigenvalues and associated eigenvectors are λ1 = 0 with x1 = [1; 1] and λ2 = 2 with x2 = [1; -1].
(b) p(λ) = λ^3 - 2λ^2 - 5λ + 6 = (λ + 2)(λ - 1)(λ - 3). The eigenvalues and associated eigenvectors are λ1 = -2 with x1 = [0; 0; 1], λ2 = 1 with x2 = [6; 3; 8], and λ3 = 3 with x3 = [0; 5; 2].
(c) p(λ) = λ^3. The eigenvalues and associated eigenvectors are λ1 = λ2 = λ3 = 0 with x1 = [1; 0; 0].
(d) p(λ) = λ^3 - 5λ^2 + 2λ + 8 = (λ + 1)(λ - 2)(λ - 4). The eigenvalues and associated eigenvectors are λ1 = -1 with x1 = [8; 10; 7], λ2 = 2 with x2 = [1; 2; 1], and λ3 = 4 with x3 = [1; 0; 1].
8. (a) p(λ) = λ^2 + λ - 6 = (λ - 2)(λ + 3). The eigenvalues and associated eigenvectors are λ1 = 2 with x1 = [4; 1] and λ2 = -3 with x2 = [1; -1].
(b) p(λ) = λ^2 + 9. No eigenvalues or eigenvectors.
(c) p(λ) = λ^3 - 15λ^2 + 72λ - 108 = (λ - 3)(λ - 6)^2. The eigenvalues and associated eigenvectors are λ1 = 3 with x1 = [2; 1; 0] and λ2 = λ3 = 6 with x2 = x3 = [1; 1; 0].
(d) p(λ) = λ^3 + λ = λ(λ^2 + 1). The eigenvalues and associated eigenvectors are λ1 = 0 with x1 = [0; 0; 1].
10. (a) p(λ) = λ^2 + λ + 1 - i = (λ - i)(λ + 1 + i). The eigenvalues and associated eigenvectors are λ1 = i with x1 = [i; 1] and λ2 = -1 - i with x2 = [1 - i; 1].
(b) p(λ) = (λ - 1)(λ^2 - 2iλ - 2) = (λ - 1)[λ - (1 + i)][λ - (-1 + i)]. The eigenvalues and associated eigenvectors are λ1 = 1 + i with x1 = [1; 1; 0], λ2 = -1 + i with x2 = [1; -1; 0], and λ3 = 1 with x3 = [0; 0; 1].
(c) p(λ) = λ^3 + λ = λ(λ + i)(λ - i). The eigenvalues and associated eigenvectors are λ1 = 0 with x1 = [0; 0; 1], λ2 = i with x2 = [1; i; 1], and λ3 = -i with x3 = [1; -i; 1].
(d) p(λ) = λ^2(λ - 1) + 9(λ - 1) = (λ - 1)(λ - 3i)(λ + 3i). The eigenvalues and associated eigenvectors are λ1 = 1 with x1 = [0; 1; 0], λ2 = 3i with x2 = [3i; 0; 1], and λ3 = -3i with x3 = [-3i; 0; 1].
11. Let A = [aij] be an n x n upper triangular matrix, that is, aij = 0 for i > j. Then the characteristic polynomial of A is
p(λ) = det(λIn - A) = (λ - a11)(λ - a22) ... (λ - ann),
which we obtain by repeatedly expanding by cofactors along the first column. Thus the eigenvalues of A are a11, . . . , ann, which are the elements on the main diagonal of A. A similar proof shows the same result if A is lower triangular.
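A small numerical illustration of this fact (the matrix below is my own example): each diagonal entry a_ii of an upper triangular A makes a_ii I - A singular.

```python
# For an upper triangular A, det(a_ii*I - A) = 0 for every diagonal
# entry a_ii, so the diagonal entries are exactly the eigenvalues.
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion on row 1."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[2, 5, -1],
     [0, 3,  4],
     [0, 0, -7]]          # upper triangular; diagonal entries 2, 3, -7

for lam in (2, 3, -7):
    M = [[lam * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    assert det3(M) == 0   # lam*I - A is singular, so lam is an eigenvalue
```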
12. We prove that A and A^T have the same characteristic polynomial:
pA(λ) = det(λIn - A) = det((λIn - A)^T) = det((λIn)^T - A^T) = det(λIn - A^T) = p_{A^T}(λ).
Associated eigenvectors need not be the same for A and A^T. As a counterexample, consider the matrix in Exercise 7(c) for λ2 = 2.
14. Let V be an n-dimensional vector space and L : V → V a linear operator. Let λ be an eigenvalue of L and W the subset of V consisting of the zero vector 0V together with all the eigenvectors of L associated with λ. To show that W is a subspace of V, let u and v be eigenvectors of L corresponding to λ and let c1 and c2 be scalars. Then L(u) = λu and L(v) = λv. Therefore
L(c1 u + c2 v) = c1 L(u) + c2 L(v) = c1 λu + c2 λv = λ(c1 u + c2 v).
Thus c1 u + c2 v is an eigenvector of L with eigenvalue λ, or the zero vector. Hence W is closed with respect to addition and scalar multiplication. Since an eigenvector is by definition never zero, we had to state explicitly that 0V is in W: the scalars c1 and c2 could be zero, or we could have c1 = -c2 and u = v, making the linear combination c1 u + c2 v = 0V. It follows that W is a subspace of V.
15. We use Exercise 14 as follows. Let L : R^n → R^n be defined by L(x) = Ax. Then we saw in Chapter 4 that L is a linear transformation and the matrix A represents this transformation. Hence Exercise 14 implies that all the eigenvectors of A with associated eigenvalue λ, together with the zero vector, form a subspace of R^n.
16. To be a subspace, the subset must be closed under scalar multiplication. Thus, if x is any eigenvector,
then 0x = 0 must be in the subset. Since the zero vector is not an eigenvector, we must include it in
the subset of eigenvectors so that the subset is a subspace.
18. (a) [0; 0; 1], [1; 1; 0].
(b) [0; 0; 1; 0].
3
1
0
3
20. (a) Possible answer: .
(b) Possible answer: .
0
1
0
0
21. If λ is an eigenvalue of A with associated eigenvector x, then Ax = λx. This implies that A(Ax) = A(λx), so that A^2 x = λAx = λ(λx) = λ^2 x. Thus λ^2 is an eigenvalue of A^2 with associated eigenvector x. Repeating the argument k times shows that λ^k is an eigenvalue of A^k.
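A numerical check using the matrix from Exercise 22 below, whose eigenvalue λ = 2 has eigenvector [4; 1]:

```python
# If Ax = lam*x, then A^2 x = lam^2 x.
def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

A = [[1, 4],
     [1, -2]]
x = [4, 1]            # eigenvector of A for lam = 2: Ax = [8, 2] = 2x
lam = 2

assert matvec(A, x) == [lam * c for c in x]
A2x = matvec(A, matvec(A, x))
assert A2x == [lam ** 2 * c for c in x]   # eigenvalue of A^2 is lam^2 = 4
```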
22. Let A = [1 4; 1 -2]. Then A^2 = [5 -4; -1 8]. The characteristic polynomial of A^2 is
det(λI2 - A^2) = (λ - 5)(λ - 8) - 4 = λ^2 - 13λ + 36 = (λ - 4)(λ - 9).
Thus the eigenvalues of A^2 are λ1 = 9 and λ2 = 4, which are the squares of the eigenvalues of the matrix A. (See Exercise 8(a).) To find an eigenvector corresponding to λ1 = 9 we solve the homogeneous linear system
(9I2 - A^2)x = [4 4; 1 1][x1; x2] = [0; 0].
Row reducing the coefficient matrix we obtain the equivalent linear system
[1 1; 0 0][x1; x2] = [0; 0],
whose solution is x1 = -r, x2 = r, or in matrix form
x = r [-1; 1].
Thus λ1 = 9 has eigenvector x1 = [-1; 1].
To find eigenvectors corresponding to λ2 = 4 we solve the homogeneous linear system
(4I2 - A^2)x = [-1 4; 1 -4][x1; x2] = [0; 0].
Row reducing the coefficient matrix we obtain the equivalent linear system
[1 -4; 0 0][x1; x2] = [0; 0],
whose solution is x1 = 4r, x2 = r, or in matrix form
x = r [4; 1].
Thus λ2 = 4 has eigenvector x2 = [4; 1].
We note that the eigenvectors of A^2 are eigenvectors of A, now corresponding to the squares of the eigenvalues of A.
23. If A is nilpotent then A^k = O for some positive integer k. If λ is an eigenvalue of A with associated eigenvector x, then by Exercise 21 we have 0 = A^k x = λ^k x. Since x ≠ 0, λ^k = 0, so λ = 0.
24. (a) The characteristic polynomial of A is
f(λ) = det(λIn - A).
Let λ1, λ2, . . . , λn be the roots of the characteristic polynomial. Then
f(λ) = (λ - λ1)(λ - λ2) ... (λ - λn).
Setting λ = 0 in each of the preceding expressions for f(λ) we have
f(0) = det(-A) = (-1)^n det(A)
and
f(0) = (-λ1)(-λ2) ... (-λn) = (-1)^n λ1 λ2 ... λn.
Equating the expressions for f(0) gives det(A) = λ1 λ2 ... λn. That is, det(A) is the product of the roots of the characteristic polynomial of A.
(b) We use part (a). A is singular if and only if det(A) = 0. Hence λ1 λ2 ... λn = 0, which is true if and only if some λj = 0. That is, if and only if some eigenvalue of A is zero.
(c) Assume that L is not one-to-one. Then ker L contains a nonzero vector, say x. Then L(x) =
0V = (0)x. Hence 0 is an eigenvalue of L. Conversely, assume that 0 is an eigenvalue of L.
Then there exists a nonzero vector x such that L(x) = 0x. But 0x = 0V , hence ker L contains a
nonzero vector so L is not one-to-one.
(d) From Exercise 23, if A is nilpotent then zero is an eigenvalue of A. It follows from part (b) that
such a matrix is singular.
25. (a) Since L(x) = λx and L is invertible, we have x = L^{-1}(λx) = λL^{-1}(x). Therefore L^{-1}(x) = (1/λ)x. Hence 1/λ is an eigenvalue of L^{-1} with associated eigenvector x.
(b) Let A be a nonsingular matrix with eigenvalue λ and associated eigenvector x. Then 1/λ is an eigenvalue of A^{-1} with associated eigenvector x. For if Ax = λx, then A^{-1}x = (1/λ)x.
26. Suppose there is a vector x ≠ 0 in both S1 and S2. Then Ax = λ1 x and Ax = λ2 x. So (λ2 - λ1)x = 0. Hence λ1 = λ2 since x ≠ 0, a contradiction. Thus the zero vector is the only vector in both S1 and S2.
27. If Ax = λx, then, for any scalar r,
(A + rIn)x = Ax + rx = λx + rx = (λ + r)x.
Thus λ + r is an eigenvalue of A + rIn with associated eigenvector x.
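A numerical check of the shift property (reusing the matrix from Exercise 22 and its eigenpair λ = 2, x = [4; 1]):

```python
# If Ax = lam*x, then (A + r*I)x = (lam + r)x.
def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

A = [[1, 4],
     [1, -2]]
x = [4, 1]        # Ax = [8, 2] = 2x, so lam = 2
lam, r = 2, 5

shifted = [[A[i][j] + r * (i == j) for j in range(2)] for i in range(2)]
assert matvec(shifted, x) == [(lam + r) * c for c in x]   # eigenvalue 7
```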
28. Let W be the eigenspace of A associated with the eigenvalue λ. Let w be in W. Then L(w) = Aw = λw. Therefore L(w) is in W, since W is closed under scalar multiplication.
29. (a) (A + B)x = Ax + Bx = λx + μx = (λ + μ)x.
(b) (AB)x = A(Bx) = A(μx) = μ(Ax) = μλx = (λμ)x.
30. (a) The characteristic polynomial is p(λ) = λ^3 - λ^2 - 24λ - 36. Then
p(A) = A^3 - A^2 - 24A - 36I3 = O.
(b) The characteristic polynomial is p(λ) = λ^3 - 7λ + 6. Then
p(A) = A^3 - 7A + 6I3 = O.
(c) The characteristic polynomial is p(λ) = λ^2 - 7λ + 6. Then
p(A) = A^2 - 7A + 6I2 = O.
31. Let A be an n x n nonsingular matrix with characteristic polynomial
p(λ) = λ^n + a1 λ^{n-1} + ... + a_{n-1} λ + a_n.
By the Cayley-Hamilton Theorem (see Exercise 30),
p(A) = A^n + a1 A^{n-1} + ... + a_{n-1} A + a_n In = O.
Multiply the preceding expression by A^{-1} to obtain
A^{n-1} + a1 A^{n-2} + ... + a_{n-1} In + a_n A^{-1} = O.
Rearranging terms we have
a_n A^{-1} = -A^{n-1} - a1 A^{n-2} - ... - a_{n-1} In.
Since A is nonsingular, det(A) ≠ 0. From the discussion prior to Example 11, a_n = (-1)^n det(A), so a_n ≠ 0. Hence we have
A^{-1} = -(1/a_n) [A^{n-1} + a1 A^{n-2} + ... + a_{n-1} In].
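A worked sketch of this formula for the 2 x 2 case (the example matrix is my own): here p(λ) = λ^2 - Tr(A)λ + det(A), so the formula reduces to A^{-1} = (1/det(A))(Tr(A) I - A).

```python
# Compute the inverse of a 2x2 matrix via Cayley-Hamilton and verify
# that A * Ainv = I, using exact rational arithmetic.
from fractions import Fraction

A = [[Fraction(1), Fraction(4)],
     [Fraction(1), Fraction(-2)]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

Ainv = [[(tr * (i == j) - A[i][j]) / det for j in range(2)] for i in range(2)]

prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```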
32. The characteristic polynomial of A = [a b; c d] is
p(λ) = (λ - a)(λ - d) - bc = λ^2 - (a + d)λ + (ad - bc) = λ^2 - Tr(A)λ + det(A).
33. Let A be an n x n matrix all of whose columns add up to 1, and let x be the n x 1 matrix
x = [1; 1; . . . ; 1].
Then
A^T x = x = 1 · x.
Therefore λ = 1 is an eigenvalue of A^T. By Exercise 12, λ = 1 is an eigenvalue of A.
34. Let A = [aij]. Then akj = 0 if k ≠ j and akk = 1. We now form λIn - A and compute the characteristic polynomial of A as det(λIn - A) by expanding about the k-th row. We obtain (λ - 1) times a polynomial of degree n - 1. Hence λ = 1 is a root of the characteristic polynomial and is thus an eigenvalue of A.
35. (a) Since Au = 0 = 0u, it follows that 0 is an eigenvalue of A with associated eigenvector u.
(b) Since Av = 0v = 0, it follows that Ax = 0 has a nontrivial solution, namely x = v.
Section 7.2, p. 461
2. The characteristic polynomial of A is p() = 2 1. The eigenvalues are 1 = 1 and 2 = 1.
Associated eigenvectors are
1
1
x1 =
and x2 =
.
1
1
The corresponding vectors in P1 are
x1 : p(t) = t 1;
x2 : p2 (t) = t + 1.
Since the set of eigenvectors {t 1, t + 1} is linearly independent, it is a basis for P1 . Thus P1 has a
basis of eigenvectors of L and hence L is diagonalizable.
4. Yes. Let S = {sin t, cos t}. We first find a matrix A representing L; we use the basis S. We have L(sin t) = cos t and L(cos t) = -sin t. Hence
A = [ [L(sin t)]_S  [L(cos t)]_S ] = [0 -1; 1 0].
We find the eigenvalues and associated eigenvectors of A. The characteristic polynomial of A is
det(λI2 - A) = λ^2 + 1.
This polynomial has roots λ = ±i; hence, according to Theorem 7.5, L is diagonalizable.
6. (a) Diagonalizable. The eigenvalues are λ1 = 3 and λ2 = 2. The result follows by Theorem 7.5.
(b) Not diagonalizable. The eigenvalues are λ1 = λ2 = 1. Associated eigenvectors are x1 = x2 = r[0; 1], where r is any nonzero real number.
(c) Diagonalizable. The eigenvalues are λ1 = 0, λ2 = 2, and λ3 = 3. The result follows by Theorem 7.5.
(d) Diagonalizable. The eigenvalues are λ1 = 1, λ2 = -1, and λ3 = 2. The result follows by Theorem 7.5.
(e) Not diagonalizable. The eigenvalues are λ1 = λ2 = λ3 = 3. Associated eigenvectors are x1 = x2 = x3 = r[1; 0; 0], where r is any nonzero real number.
8. Let
D = [2 0; 0 -3]  and  P = [1 -1; 2 1].
Then P^{-1}AP = D, so
A = PDP^{-1} = (1/3)[-4 5; 10 1]
is a matrix whose eigenvalues and associated eigenvectors are as given.
10. (a) There is no such P. The eigenvalues of A are λ1 = λ2 = 1 and λ3 = 3. Associated eigenvectors are
x1 = x2 = r[1; 0; 1],
where r is any nonzero real number, and
x3 = [5; 2; 3].
1
0
1
(b) P = 0 2
0 . The eigenvalues of A are 1 = 1, 2 = 1, and 3 = 3. Associated eigenvectors
0
1
1
are the columns of P .
1 3
1
(c) P = 0
0 6 . The eigenvalues of A are 1 = 4, 2 = 1, and 3 = 1. Associated
1
2
4
eigenvectors are the columns of P .
(d) P = [1 1; 1 2]. The eigenvalues of A are λ1 = 1 and λ2 = 2. Associated eigenvectors are the columns of P.
12. P is the matrix whose columns are the given eigenvectors:
P = [1 -2; 1 1],  D = P^{-1}AP = [3 0; 0 4].
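The same construction can be checked numerically with a different 2 x 2 example of my own, where the eigenpairs are easy to verify by hand:

```python
# Columns of P are eigenvectors of A; then P^{-1} A P is the diagonal
# matrix of the corresponding eigenvalues.
from fractions import Fraction

A = [[2, 1],
     [1, 2]]                      # eigenvalues 1 and 3
P = [[1, 1],
     [-1, 1]]                     # eigenvectors [1; -1] and [1; 1]

det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[Fraction(P[1][1], det), Fraction(-P[0][1], det)],
        [Fraction(-P[1][0], det), Fraction(P[0][0], det)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = matmul(Pinv, matmul(A, P))
assert D == [[1, 0], [0, 3]]      # diagonal matrix of eigenvalues
```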
14. Let A be the given matrix.
(a) Since A is upper triangular its eigenvalues are its diagonal entries. Since λ = 2 is an eigenvalue of multiplicity 2 we must show, by Theorem 7.4, that it has two linearly independent eigenvectors. Row reducing the coefficient matrix of (2I3 - A)x = 0, we obtain the equivalent linear system
[0 1 0; 0 0 0; 0 0 0][x1; x2; x3] = [0; 0; 0].
It follows that there are two arbitrary constants in the general solution, so there are two linearly independent eigenvectors. Hence the matrix is diagonalizable.
(b) Since A is upper triangular its eigenvalues are its diagonal entries. Since λ = 2 is an eigenvalue of multiplicity 2 we must show it has two linearly independent eigenvectors. (We are using Theorem 7.4.) Row reducing the coefficient matrix of (2I3 - A)x = 0, we obtain the equivalent linear system
[0 1 0; 0 0 1; 0 0 0][x1; x2; x3] = [0; 0; 0].
It follows that there is only one arbitrary constant in the general solution, so there is only one linearly independent eigenvector. Hence the matrix is not diagonalizable.
(c) The matrix is lower triangular, hence its eigenvalues are its diagonal entries. Since they are distinct, the matrix is diagonalizable.
(d) The eigenvalues of A are λ1 = 0, with associated eigenvector x1 = [1; 1; 0], and λ2 = λ3 = 3, with associated eigenvector [0; 0; 1]. Since there are not two linearly independent eigenvectors associated with λ2 = λ3 = 3, A is not similar to a diagonal matrix.
16. Each of the given matrices A has a multiple eigenvalue whose associated eigenspace has dimension 1,
so the matrix is not diagonalizable.
(a) A is upper triangular with multiple eigenvalue λ1 = λ2 = 1 and associated eigenvector [1; 0; 0].
(b) A is upper triangular with multiple eigenvalue λ1 = λ2 = 2 and associated eigenvector [0; 1; 0].
(c) A has the multiple eigenvalue λ1 = λ2 = 1 with associated eigenvector [1; 1; 0].
(d) A has the multiple eigenvalue λ1 = λ2 = 1 with associated eigenvector [3; 7; 8].
18. [2^9 0; 0 (-2)^9] = [512 0; 0 -512].
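The computation in Exercise 18 uses the fact that powers of a diagonalizable matrix reduce to powers of its diagonal form; a sketch with example matrices of my own:

```python
# If A = P D P^{-1} with D diagonal, then A^9 = P D^9 P^{-1}, and D^9
# just raises each diagonal entry to the 9th power:
# diag(2, -2)^9 = diag(512, -512).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P    = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]           # inverse of P
D    = [[2, 0], [0, -2]]
A    = matmul(P, matmul(D, Pinv))  # A is similar to D

D9 = [[2 ** 9, 0], [0, (-2) ** 9]]
assert D9 == [[512, 0], [0, -512]]

via_diag = matmul(P, matmul(D9, Pinv))
direct = A
for _ in range(8):
    direct = matmul(direct, A)     # A^9 by repeated multiplication
assert via_diag == direct
```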
20. Necessary and sufficient conditions are: (a - d)^2 + 4bc > 0, or b = c = 0 with a = d.
Using Theorem 7.4, A is diagonalizable if and only if R^2 has a basis consisting of eigenvectors of A. Thus we must find conditions on the entries of A that guarantee a pair of linearly independent eigenvectors. The characteristic polynomial of A is
det(λI2 - A) = (λ - a)(λ - d) - bc = λ^2 - (a + d)λ + (ad - bc).
Using the quadratic formula the roots are
λ = [(a + d) ± sqrt((a + d)^2 - 4(ad - bc))] / 2.
Since eigenvalues are required to be real, we require that
(a + d)^2 - 4(ad - bc) = a^2 + 2ad + d^2 - 4ad + 4bc = (a - d)^2 + 4bc ≥ 0.
Suppose first that (a - d)^2 + 4bc = 0. Then
λ = (a + d)/2
is a root of multiplicity 2, and the linear system
[(a + d)/2 · I2 - A] x = [(d - a)/2  -b; -c  (a - d)/2][x1; x2] = [0; 0]
must have two linearly independent solutions. A 2 x 2 homogeneous linear system can have two linearly independent solutions only if the coefficient matrix is the zero matrix. Hence it must follow that b = c = 0 and a = d. That is, matrix A is a multiple of I2.
Now suppose (a - d)^2 + 4bc > 0. Then the eigenvalues are real and distinct, and by Theorem 7.5 A is diagonalizable. Thus, in summary, for A to be diagonalizable it is necessary and sufficient that (a - d)^2 + 4bc > 0 or that b = c = 0 with a = d.
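The criterion just derived is easy to turn into a test function; the sample matrices below are my own:

```python
# A real 2x2 matrix [[a, b], [c, d]] is diagonalizable over R iff
# (a - d)^2 + 4bc > 0, or b = c = 0 with a = d (Exercise 20).
def is_diagonalizable_2x2(a, b, c, d):
    disc = (a - d) ** 2 + 4 * b * c
    return disc > 0 or (b == 0 and c == 0 and a == d)

assert is_diagonalizable_2x2(2, 1, 1, 2)       # symmetric, distinct eigenvalues
assert is_diagonalizable_2x2(3, 0, 0, 3)       # multiple of I2
assert not is_diagonalizable_2x2(1, 1, 0, 1)   # Jordan block
assert not is_diagonalizable_2x2(0, -1, 1, 0)  # rotation: no real eigenvalues
```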
21. Since A and B are nonsingular, A^{-1} and B^{-1} exist. Then BA = A^{-1}(AB)A. Therefore AB and BA are similar, and hence by Theorem 7.2 they have the same characteristic polynomial. Thus they have the same eigenvalues.
22. The representation of L with respect to the given basis is A = [1 0; 0 -1]. The eigenvalues of L are λ1 = 1 and λ2 = -1. Associated eigenvectors are e^t and e^{-t}.
23. Let A be diagonalizable with A = PDP^{-1}, where D is diagonal.
(a) A^T = (PDP^{-1})^T = (P^{-1})^T D^T P^T = QDQ^{-1}, where Q = (P^{-1})^T. Thus A^T is similar to a diagonal matrix and hence is diagonalizable.
(b) A^k = (PDP^{-1})^k = PD^kP^{-1}. Since D^k is diagonal, A^k is similar to a diagonal matrix and hence diagonalizable.
24. If A is diagonalizable, then there is a nonsingular matrix P so that P^{-1}AP = D, a diagonal matrix. Then A = PDP^{-1}, so A^{-1} = PD^{-1}P^{-1}. Since D^{-1} is a diagonal matrix, we conclude that A^{-1} is diagonalizable.
25. First observe the difference between this result and Theorem 7.5. Theorem 7.5 shows that if all the eigenvalues of A are distinct, then the associated eigenvectors are linearly independent. In the present exercise, we are asked to show that if any subset of k eigenvalues is distinct, then the associated eigenvectors are linearly independent. To prove this result, we basically imitate the proof of Theorem 7.5.
Suppose that S = {x1, . . . , xk} is linearly dependent. Then Theorem 4.7 implies that some vector xj is a linear combination of the preceding vectors in S. We can assume that S1 = {x1, x2, . . . , x_{j-1}} is linearly independent, for otherwise one of the vectors in S1 is a linear combination of the preceding ones, and we can choose a new set S2, and so on. We thus have that S1 is linearly independent and that
xj = a1 x1 + a2 x2 + ... + a_{j-1} x_{j-1},   (1)
where a1, a2, . . . , a_{j-1} are real numbers. This means that
A xj = A(a1 x1 + a2 x2 + ... + a_{j-1} x_{j-1}) = a1 A x1 + a2 A x2 + ... + a_{j-1} A x_{j-1}.   (2)
Since λ1, λ2, . . . , λj are eigenvalues and x1, x2, . . . , xj are associated eigenvectors, we know that A xi = λi xi for i = 1, 2, . . . , j. Substituting in (2), we have
λj xj = a1 λ1 x1 + a2 λ2 x2 + ... + a_{j-1} λ_{j-1} x_{j-1}.   (3)
Multiplying (1) by λj, we get
λj xj = λj a1 x1 + λj a2 x2 + ... + λj a_{j-1} x_{j-1}.   (4)
Subtracting (4) from (3), we have
0 = λj xj - λj xj = a1(λ1 - λj)x1 + a2(λ2 - λj)x2 + ... + a_{j-1}(λ_{j-1} - λj)x_{j-1}.
Since S1 is linearly independent, we must have
a1(λ1 - λj) = 0,  a2(λ2 - λj) = 0,  . . . ,  a_{j-1}(λ_{j-1} - λj) = 0.
Now λ1 - λj ≠ 0, λ2 - λj ≠ 0, . . . , λ_{j-1} - λj ≠ 0, since the λ's are distinct, which implies that
a1 = a2 = ... = a_{j-1} = 0.
This means that xj = 0, which is impossible if xj is an eigenvector. Hence S is linearly independent.
26. Since B is nonsingular, B^{-1} is nonsingular. It now follows from Exercise 21 that AB^{-1} and B^{-1}A have the same eigenvalues.
27. Let P be a nonsingular matrix such that P^{-1}AP = D. Then
Tr(D) = Tr(P^{-1}AP) = Tr(P^{-1}(AP)) = Tr((AP)P^{-1}) = Tr(APP^{-1}) = Tr(AIn) = Tr(A).
Section 7.3, p. 475
2. (a) A^T. (b) B^T.
3. If AA^T = In and BB^T = In, then
(AB)(AB)^T = (AB)(B^T A^T) = A(BB^T)A^T = (AIn)A^T = AA^T = In.
4. Since AA^T = In, we have A^{-1} = A^T, so (A^{-1})(A^{-1})^T = (A^{-1})(A^T)^T = (A^{-1})A = In.
5. If A is orthogonal then A^T A = In, so if u1, u2, . . . , un are the columns of A, then the (i, j) entry of A^T A is ui^T uj. Thus ui^T uj = 0 if i ≠ j and 1 if i = j. Since ui^T uj = (ui, uj), the columns of A form an orthonormal set. Conversely, if the columns of A form an orthonormal set, then (ui, uj) = 0 if i ≠ j and 1 if i = j. Since (ui, uj) = ui^T uj, we conclude that A^T A = In.
6. AA^T = [1 0 0; 0 cos θ sin θ; 0 -sin θ cos θ][1 0 0; 0 cos θ -sin θ; 0 sin θ cos θ] = [1 0 0; 0 1 0; 0 0 1].
BB^T = [1 0 0; 0 1/√2 1/√2; 0 -1/√2 1/√2][1 0 0; 0 1/√2 -1/√2; 0 1/√2 1/√2] = [1 0 0; 0 1 0; 0 0 1].
7. P is orthogonal since P P T = I3 .
8. If A is orthogonal then AA^T = In, so det(AA^T) = det(In) = 1 and det(A) det(A^T) = [det(A)]^2 = 1, so det(A) = ±1.
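Both signs actually occur, as a quick numerical check shows (example matrices are my own): a plane rotation has determinant +1, and composing it with a reflection flips the sign to -1.

```python
# A rotation matrix is orthogonal with determinant +1; reflecting one
# column gives an orthogonal matrix with determinant -1.
import math

t = 0.7
R = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

det = R[0][0] * R[1][1] - R[0][1] * R[1][0]     # cos^2 t + sin^2 t
assert abs(det - 1.0) < 1e-12

F = [[R[0][0], -R[0][1]], [R[1][0], -R[1][1]]]  # negate second column
detF = F[0][0] * F[1][1] - F[0][1] * F[1][0]
assert abs(detF + 1.0) < 1e-12
```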
9. (a) If A = [cos θ -sin θ; sin θ cos θ], then AA^T = I2.
(b) Let A = [a b; c d]. Then we must have
a^2 + b^2 = 1   (1)
c^2 + d^2 = 1   (2)
ac + bd = 0    (3)
ad - bc = 1    (4)
Let a = cos θ1, b = sin θ1, c = cos θ2, and d = sin θ2. Then (1) and (2) hold. From (3) and (4) we obtain
cos(θ2 - θ1) = 0,  sin(θ2 - θ1) = 1.
Thus θ2 - θ1 = π/2, or θ2 = θ1 + π/2. Then cos θ2 = -sin θ1 and sin θ2 = cos θ1.
10. If x = [a1; a2] and y = [b1; b2], then
(Ax, Ay) = ([a1/√2 - a2/√2; a1/√2 + a2/√2], [b1/√2 - b2/√2; b1/√2 + b2/√2])
= (1/2)(a1 b1 + a2 b2 - a1 b2 - a2 b1) + (1/2)(a1 b1 + a2 b2 + a1 b2 + a2 b1)
= a1 b1 + a2 b2
= (x, y).
11. We have
cos θ = (L(x), L(y)) / (||L(x)|| ||L(y)||) = (Ax, Ay) / sqrt((Ax, Ax)(Ay, Ay)) = (x, y) / sqrt((x, x)(y, y)) = (x, y) / (||x|| ||y||).
12. Let S = {u1, u2, . . . , un}. Recall from Section 5.4 that if S is orthonormal then (u, v) = ([u]_S, [v]_S), where the latter is the standard inner product on R^n. Now the i-th column of A is [L(ui)]_S. Then
([L(ui)]_S, [L(uj)]_S) = (L(ui), L(uj)) = (ui, uj) = ([ui]_S, [uj]_S) = 0
if i ≠ j and 1 if i = j. Hence, A is orthogonal.
13. The representation of L with respect to the natural basis for R^2 is
A = [1/√2 -1/√2; 1/√2 1/√2],
which is orthogonal.
14. If Ax = λx, then (P^{-1}AP)P^{-1}x = P^{-1}(λx) = λ(P^{-1}x), so that B(P^{-1}x) = λ(P^{-1}x).
1
1
0
0
0
0
2
2
0
16. A is similar to D =
1
0 and P = 1
0
0 .
0
0 1
1
1
0
2
2
0
0
18. A is similar to D =
0
3
0
20. A is similar to D = 0 3
0
0
0
0
22. A is similar to D =
0
0
0
0
0
0
0
0
4
0
0
0
.
0
4
100
24. A is similar to D = 0 1 0 .
005
0
0
26. A is similar to D =
0
0
0
0
0
0
0
0 and P =
3
1
0
0
0
0
0
and P =
1
0
0
0 1
0
0
0
0
0
0
0
0
0
0
.
1
0
0 1
0
0
1
0
0
1
2
1
2
0
1
1
2 6
1
2
1
6
0
2
6
0
0
.
1
2
1
2
1
3
1
3
1
3
.
,
122
Chapter 7
2
0
0
28. A is similar to D = 0 2
0 .
0
0 4
29. Let A = [a b; b c]. The characteristic polynomial of A is p(λ) = λ^2 - (a + c)λ + (ac - b^2). The roots of p(λ) = 0 are
λ = [(a + c) + sqrt((a + c)^2 - 4(ac - b^2))]/2  and  λ = [(a + c) - sqrt((a + c)^2 - 4(ac - b^2))]/2.
Case 1. p(λ) = 0 has distinct real roots, and A can then be diagonalized.
Case 2. p(λ) = 0 has two equal real roots. Then (a + c)^2 - 4(ac - b^2) = 0. Since we can write (a + c)^2 - 4(ac - b^2) = (a - c)^2 + 4b^2, this expression is zero if and only if a = c and b = 0. In this case A is already diagonal.
30. If L is orthogonal, then ||L(v)|| = ||v|| for any v in V. If λ is an eigenvalue of L then L(x) = λx, so ||L(x)|| = ||x||, which implies that ||λx|| = ||x||. By Exercise 17 of Section 5.3 we then have |λ| ||x|| = ||x||. Since x is an eigenvector, it cannot be the zero vector, so |λ| = 1.
31. Let L : R^2 → R^2 be defined by
L([x; y]) = A[x; y] = [x/√2 - y/√2; x/√2 + y/√2].
To show that L is an isometry we verify Equation (7). First note that matrix A satisfies A^T A = I2. (Just perform the multiplication.) Then
(L(u), L(v)) = (Au, Av) = (u, A^T Av) = (u, v),
so L is an isometry.
32. (a) By Exercise 9(b), if A is an orthogonal matrix and det(A) = 1, then
A = [cos θ -sin θ; sin θ cos θ].
As discussed in Example 8 in Section 1.6, L is then a counterclockwise rotation through the angle θ.
(b) If det(A) = -1, then
A = [cos θ sin θ; sin θ -cos θ].
Let L1 : R^2 → R^2 be reflection about the x-axis. Then with respect to the natural basis for R^2, L1 is represented by the matrix
A1 = [1 0; 0 -1].
As we have just seen in part (a), the linear operator L2 giving a counterclockwise rotation through the angle θ is represented with respect to the natural basis for R^2 by the matrix
A2 = [cos θ -sin θ; sin θ cos θ].
We have A = A2 A1. Then L = L2 ∘ L1.
33. (a) Let L be an isometry. Then (L(x), L(x)) = (x, x), so ||L(x)|| = ||x||.
(b) Let L be an isometry. Then the angle θ between L(x) and L(y) is determined by
cos θ = (L(x), L(y)) / (||L(x)|| ||L(y)||) = (x, y) / (||x|| ||y||),
which is the cosine of the angle between x and y.
34. Let L(x) = Ax. It follows from the discussion preceding Theorem 7.9 that if L is an isometry, then L is nonsingular. Thus, L^{-1}(x) = A^{-1}x. Now
(L^{-1}(x), L^{-1}(y)) = (A^{-1}x, A^{-1}y) = (x, (A^{-1})^T A^{-1}y).
Since A is orthogonal, A^T = A^{-1}, so (A^{-1})^T A^{-1} = In. Thus, (x, (A^{-1})^T A^{-1}y) = (x, y). That is, (A^{-1}x, A^{-1}y) = (x, y), which implies that (L^{-1}(x), L^{-1}(y)) = (x, y), so L^{-1} is an isometry.
35. Suppose that L is an isometry. Then (L(vi), L(vj)) = (vi, vj), so (L(vi), L(vj)) = 1 if i = j and 0 if i ≠ j. Hence, T = {L(v1), L(v2), . . . , L(vn)} is an orthonormal basis for R^n. Conversely, suppose that T is an orthonormal basis for R^n. Then (L(vi), L(vj)) = 1 if i = j and 0 if i ≠ j. Thus, (L(vi), L(vj)) = (vi, vj), so L is an isometry.
36. Choose y = ei , for i = 1, 2, . . . , n. Then AT Aei = Coli (AT A) = ei for i = 1, 2, . . . , n. Hence AT A = In .
37. If A is orthogonal, then AT = A1 . Since
(AT )T = (A1 )T = (AT )1 ,
we have that AT is orthogonal.
38. (cA)^T = (cA)^{-1} if and only if cA^T = (1/c)A^{-1} = (1/c)A^T. That is, c = 1/c, so c^2 = 1. Hence c = ±1.
Supplementary Exercises for Chapter 7, p. 477
2. (a) The eigenvalues are 1 = 3, 2 = 3, 3 = 9. Associated eigenvectors are
2
x1 = 2 ,
1
2
x2 = 1 ,
2
1
and x3 = 2 .
2
2 2
1
(b) Yes; P = 2
1 2 . P is not unique, since eigenvectors are not unique.
1 2 2
(c) 1 = 1 , 2 = 1 , 3 = 1 .
3
3
9
(d) The eigenvalues are 1 = 9, 2 = 9, 3 = 81. Eigenvectors associated with 1 and 2 are
2
2
1
and
1
An eigenvector associated with 3 = 81 is 2 .
2
2
1 .
2
3. (a) The characteristic polynomial of A is det(λIn - A), where the (i, j) entry of λIn - A is λ - aii when i = j and -aij when i ≠ j. Any product in det(λIn - A), other than the product of the diagonal entries, can contain at most n - 2 of the diagonal entries of λIn - A. This follows because at least two of the column indices must be out of natural order in every other product appearing in det(λIn - A). This implies that the coefficient of λ^{n-1} is formed by the expansion of the product of the diagonal entries. The coefficient of λ^{n-1} is the sum of the coefficients of λ^{n-1} from each of the products
-aii (λ - a11) ... (λ - a_{i-1,i-1})(λ - a_{i+1,i+1}) ... (λ - ann),
i = 1, 2, . . . , n. The coefficient of λ^{n-1} in each such term is -aii, and so the coefficient of λ^{n-1} in the characteristic polynomial is
-a11 - a22 - ... - ann = -Tr(A).
(b) If λ1, λ2, . . . , λn are the eigenvalues of A then λ - λi, i = 1, 2, . . . , n, are factors of the characteristic polynomial det(λIn - A). It follows that
det(λIn - A) = (λ - λ1)(λ - λ2) ... (λ - λn).
Proceeding as in (a), the coefficient of λ^{n-1} is the sum of the coefficients of λ^{n-1} from each of the products
-λi (λ - λ1) ... (λ - λ_{i-1})(λ - λ_{i+1}) ... (λ - λn)
for i = 1, 2, . . . , n. The coefficient of λ^{n-1} in each such term is -λi, so the coefficient of λ^{n-1} in the characteristic polynomial is -λ1 - λ2 - ... - λn = -Tr(A) by (a). Thus, Tr(A) is the sum of the eigenvalues of A.
(c) We have
det(λIn - A) = (λ - λ1)(λ - λ2) ... (λ - λn),
so the constant term is (-1)^n λ1 λ2 ... λn.
4. A = [3 -2; 2 -1] has eigenvalues λ1 = λ2 = 1, but all the eigenvectors are of the form r[1; 1]. Clearly A has only one linearly independent eigenvector and is not diagonalizable. However, det(A) ≠ 0, so A is nonsingular.
5. In Exercise 21 of Section 7.1 we showed that if λ is an eigenvalue of A with associated eigenvector x, then λ^k is an eigenvalue of A^k, k a positive integer. For any positive integers j and k and any scalars a and b, the eigenvalues of aA^j + bA^k are aλ^j + bλ^k. This follows since
(aA^j + bA^k)x = aA^j x + bA^k x = aλ^j x + bλ^k x = (aλ^j + bλ^k)x.
This result generalizes to finite linear combinations of powers of A and to scalar multiples of the identity matrix. Thus,
p(A)x = (a0 In + a1 A + ... + ak A^k)x = a0 In x + a1 Ax + ... + ak A^k x = a0 x + a1 λx + ... + ak λ^k x = (a0 + a1 λ + ... + ak λ^k)x = p(λ)x.
6. (a) p1(λ)p2(λ). (b) p1(λ)p2(λ).
8. (a) [L(A1)]_S = [1; 0; 0; 0], [L(A2)]_S = [0; 0; 1; 0], [L(A3)]_S = [0; 1; 0; 0], [L(A4)]_S = [0; 0; 0; 1].
(b) B = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1].
(c) The eigenvalues of L are λ1 = -1 and λ2 = 1 (of multiplicity 3). An eigenvector associated with λ1 = -1 is x1 = [0; 1; -1; 0]. Eigenvectors associated with λ2 = 1 are
x2 = [1; 0; 0; 0],  x3 = [0; 1; 1; 0],  and  x4 = [0; 0; 0; 1].
(d) The eigenvalues of L are λ1 = -1 and λ2 = 1 (of multiplicity 3). An eigenvector associated with λ1 = -1 is [0 1; -1 0]. Eigenvectors associated with λ2 = 1 are
[1 0; 0 0],  [0 1; 1 0],  and  [0 0; 0 1].
(e) The eigenspace associated with λ1 = -1 consists of all matrices of the form
[0 k; -k 0],
where k is any real number; that is, it consists of the set of all 2 x 2 skew-symmetric real matrices. The eigenspace associated with λ2 = 1 consists of all matrices of the form
a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1],
where a, b, and c are any real numbers; that is, it consists of all 2 x 2 real symmetric matrices.
10. The eigenvalues of L are λ1 = 0, λ2 = i, λ3 = -i. Associated eigenvectors are x1 = 1, x2 = i sin x + cos x, x3 = -i sin x + cos x.
11. If A is similar to a diagonal matrix D, then there exists a nonsingular matrix P such that P^{-1}AP = D. It follows that
D = D^T = (P^{-1}AP)^T = P^T A^T (P^{-1})^T = ((P^T)^{-1})^{-1} A^T (P^T)^{-1},
so if we let Q = (P^T)^{-1}, then Q^{-1}A^T Q = D. Hence, A^T is also similar to D and thus A is similar to A^T.
Chapter Review for Chapter 7, p. 478
True or False
1. True. 2. False. 3. True. 4. True. 5. False.
6. True. 7. True. 8. True. 9. True. 10. True.
11. False. 12. True. 13. True. 14. True. 15. True.
16. True. 17. True. 18. True. 19. True. 20. True.
Quiz
1. λ1 = 1, x1 = [1; 1]; λ2 = 3, x2 = [1; 2].
2. (a) [0 0 0; 0 5/4 3/4; 0 3/4 5/4].
(b) λ1 = 0, x1 = [1; 0; 0]; λ2 = 1, x2 = [0; 3; 1]; λ3 = -1, x3 = [0; 1; 3].
3. λ1 = 1, λ2 = 2, and λ3 = -2.
4. λ = 9, with eigenvector x.
0
1
5. 0 , 1 .
0
1
3 0 0
6. 0 2 0 .
003
7. No.
8. No.
9. (a) Possible answer: [2; 2; 2].
(b) A = [1 3 1; 1 -3 1; -2 0 1]. Thus
A^T A = [1 1 -2; 3 -3 0; 1 1 1][1 3 1; 1 -3 1; -2 0 1] = [6 0 0; 0 18 0; 0 0 3].
Since z is orthogonal to x and y, and x and y are orthogonal, all entries not on the diagonal of this matrix are zero. The diagonal entries are the squares of the magnitudes of the vectors: ||x||^2 = 6, ||y||^2 = 18, and ||z||^2 = 3.
(c) Normalize each vector from part (b).
(d) diagonal
(e) Since
A^T A = [x^T; y^T; z^T][x y z] = [x^T x  x^T y  x^T z; y^T x  y^T y  y^T z; z^T x  z^T y  z^T z],
it follows that if the columns of A are mutually orthogonal, then all entries of A^T A not on the diagonal are zero. Thus, A^T A is a diagonal matrix.
10. False.
11. Let
A = [k 0 0; a21 a22 a23; a31 a32 a33].
Then kI3 - A has its first row all zero and hence det(kI3 - A) = 0. Therefore, λ = k is an eigenvalue of A.
12. (a) det(4I3 - A) = det [-5 1 2; 1 -5 2; 2 2 -2] = 0.
det(10I3 - A) = det [1 1 2; 1 1 2; 2 2 4] = 0.
Basis for the eigenspace associated with λ = 4: {[1; 1; 2]}.
Basis for the eigenspace associated with λ = 10: {[-2; 0; 1], [-1; 1; 0]}.
1
1
2 3
6
1
1
1
.
(b) P =
3
2
6
2
1
0
6
3
Chapter 8
Applications of Eigenvalues and
Eigenvectors (Optional)
Section 8.1, p. 486
8
2. 2.
1
4. (b) and (c)
6. (a) x(1) = [0.2; 0.3; 0.5], x(2) = [0.06; 0.24; 0.70], x(3) = [0.048; 0.282; 0.67], x(4) = [0.0564; 0.2856; 0.658].
(b) T^3 = [0.06 0.048 0.06; 0.3 0.282 0.282; 0.64 0.67 0.66]. Since all entries in T^3 are positive, T is regular. The steady state vector is
u = [3/53; 15/53; 35/53] ≈ [0.057; 0.283; 0.660].
8. (a) T^2 = [3/4 1/2; 1/4 1/2]. Since all entries of T^2 are positive, T reaches a state of equilibrium.
(b) Since all entries of T are positive, it reaches a state of equilibrium.
(c) All entries of T^2 are positive, so T reaches a state of equilibrium.
(d) T^2 = [0.2 0.05 0.1; 0.3 0.4 0.35; 0.5 0.55 0.55]. Since all entries of T^2 are positive, it reaches a state of equilibrium.
10. (a) T = [0.3 0.4; 0.7 0.6], with rows and columns labeled A, B in that order.
(b) Compute x(3) = T x(2), where x(0) = [1/2; 1/2]:
T x(0) = x(1) = [0.35; 0.65],  T x(1) = x(2) = [0.365; 0.635],  T x(2) = x(3) = [0.364; 0.636].
The probability of the rat going through door A on the third day is p1(3) = 0.364.
(c) u = [4/11; 7/11] ≈ [0.364; 0.636].
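The steady state in part (c) can be checked numerically: iterating the transition matrix from part (a) drives any probability vector toward [4/11; 7/11].

```python
# Power iteration of the Markov transition matrix T from Exercise 10(a);
# the iterates converge to the steady state vector u = [4/11, 7/11].
T = [[0.3, 0.4],
     [0.7, 0.6]]
x = [0.5, 0.5]                       # x(0): equal probability for doors A, B

for _ in range(50):
    x = [T[0][0] * x[0] + T[0][1] * x[1],
         T[1][0] * x[0] + T[1][1] * x[1]]

assert abs(x[0] - 4 / 11) < 1e-9
assert abs(x[1] - 7 / 11) < 1e-9
```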
12. red, 25%; pink, 50%; white, 25%.
Section 8.2, p. 500
2. A = U SV
T
=
2
3
1
3
2
3
1
3
2
3
2
3
60
01
0 3
10
1
00
3
T
T
3 0
0
0
1 1
100
3 0
1 1
0 0
0 1 0
0
1
0 1 0
3
001
1
1
1
0
0
0
4. A = U SV T
2
3
2
3
1
11
=
3 1
0
6. (a) The matrix has rank 3. Its distance from the class of matrices of rank 2 is smin = 0.2018.
(b) Since smin = 0 and the other two singular values are not zero, the matrix belongs to the class of
matrices of rank 2.
(c) Since smin = 0 and the other three singular values are not zero, the matrix belongs to the class of
matrices of rank 3.
7. The singular value decomposition of A is given by A = U SV T . From Theorem 8.1 we have
rank A = rank U SV T = rank U (SV T ) = rank SV T = rank S.
Based on the form of matrix S , its rank is the number of nonzero rows, which is the same as the
number of nonzero singular values. Thus rank A = the number of nonzero singular values of A.
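Exercise 7's conclusion — rank A equals the number of nonzero singular values — is easy to confirm numerically. A Python/NumPy sketch, using a small matrix chosen here for illustration (its third column is the sum of the first two, so its rank is 2):

```python
import numpy as np

# Exercise 7's fact: rank A = number of nonzero singular values of A.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])   # third column = col1 + col2, so rank 2

s = np.linalg.svd(A, compute_uv=False)       # singular values, descending
rank = int(np.sum(s > 1e-10 * s[0]))         # count the nonzero ones

print(rank)  # 2, agreeing with np.linalg.matrix_rank(A)
```

In floating point the "zero" singular values are only approximately zero, hence the relative tolerance.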
Section 8.3, p. 514
2. (a) The characteristic polynomial was obtained in Exercise 5(d) of Section 7.1: λ² − 7λ + 6 = (λ − 1)(λ − 6). So the eigenvalues are λ1 = 1, λ2 = 6. Hence the dominant eigenvalue is 6.
(b) The eigenvalues were obtained in Exercise 6(d) of Section 7.1: λ1 = 1, λ2 = 2, λ3 = 4. Hence the dominant eigenvalue is 4.
4. (a) 5. (b) 7. (c) 10.
6. (a) max{7, 5} = 7. (b) max{7, 4, 5} = 7.
7. This is immediate, since A = AT .
Section 8.4
131
8. Possible answer: A = [ 1 2 ; 3 4 ].
9. We have ||A^k x||_1 <= ||A^k||_1 ||x||_1 <= ||A||_1^k ||x||_1 -> 0, since ||A||_1 < 1.
10. The eigenvalues of A can all be < 1 in magnitude.
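The dominant eigenvalues found in Exercise 2 are what the power method of this section computes. A Python/NumPy sketch; the 2×2 matrix here is a hypothetical one built to have eigenvalues 1 and 6, matching Exercise 2(a):

```python
import numpy as np

# Hypothetical matrix with eigenvalues 1 and 6 (trace 7, det 6),
# so the dominant eigenvalue the power method should find is 6.
A = np.array([[4., 2.],
              [3., 3.]])

x = np.array([1., 1.])
for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)      # renormalize to avoid overflow

dominant = x @ A @ x               # Rayleigh quotient estimate
print(round(dominant, 6))  # 6.0
```

Convergence is geometric with ratio |λ2/λ1|, here 1/6, so a few iterations already give full accuracy.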
12. Sample mean = 5825; sample variance = 506,875; standard deviation = 711.95.
14. Sample means = (791.8, 826.0)ᵀ; covariance matrix = [ 95,996.56 76,203.00 ; 76,203.00 73,999.20 ].
16. S = [ 1,262,200 128,904 ; 128,904 32,225.8 ]. Eigenvalues and associated eigenvectors:
λ1 = 1,275,560, u1 = (0.9947, 0.1031)ᵀ; λ2 = 18,861.6, u2 = (-0.1031, 0.9947)ᵀ.
First principal component = 0.9947 col1(X) + 0.1031 col2(X) = (1107.025, 3240.89, 4530.264, 3688.985, 3173.37)ᵀ.
17. Let x be an eigenvector of C associated with the eigenvalue λ. Then Cx = λx and xᵀCx = λxᵀx. Hence,
λ = (xᵀCx)/(xᵀx).
We have xᵀCx > 0 since C is positive definite, and xᵀx > 0 since x ≠ 0. Hence λ > 0.
18. (a) The diagonal entries of Sn are the sample variances of the n variables, and the total variance is the sum of the sample variances. Since Tr(Sn) is the sum of the diagonal entries, it follows that Tr(Sn) = total variance.
(b) Sn is symmetric, so it can be diagonalized by an orthogonal matrix P .
(c) Tr(D) = Tr(P T Sn P ) = Tr(P T P Sn ) = Tr(In Sn ) = Tr(Sn ).
(d) Total variance = Tr(Sn ) = Tr(D), where the diagonal entries of D are the eigenvalues of Sn , so
the result follows.
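Exercise 18's identity — total variance = Tr(Sn) = sum of the eigenvalues of the covariance matrix — can be verified directly. A Python/NumPy sketch with made-up data:

```python
import numpy as np

# Exercise 18 in code: total variance = Tr(S) = sum of eigenvalues of the
# (symmetric) sample covariance matrix S.  The data below are invented.
X = np.array([[1., 2.],
              [3., 3.],
              [5., 4.],
              [7., 9.]])

S = np.cov(X, rowvar=False)        # sample covariance matrix
evals = np.linalg.eigvalsh(S)      # eigenvalues of the symmetric S

print(np.isclose(np.trace(S), evals.sum()))  # True
```

This works because the trace is invariant under the orthogonal diagonalization Tr(D) = Tr(PᵀSnP) = Tr(Sn) used in part (c).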
Section 8.4, p. 524
2. (a) x(t) = b1 (1, 0, 0)ᵀ e^t + b2 (0, 1, 0)ᵀ e^(2t) + b3 (0, 1, 5)ᵀ e^(3t).
(b) x(t) = 2 (1, 0, 0)ᵀ e^t + 3 (0, 1, 0)ᵀ e^(2t) + 4 (0, 1, 5)ᵀ e^(3t).
4. Let x1 and x2 be solutions to the equation x′ = Ax, and let a and b be scalars. Then
d/dt (ax1 + bx2) = ax1′ + bx2′ = aAx1 + bAx2 = A(ax1 + bx2).
Thus ax1 + bx2 is also a solution to the given equation.
6. x(t) = b1 (1, 1)ᵀ e^(5t) + b2 (1, -1)ᵀ e^t.
8. x(t) = b1 (0, 2, 1)ᵀ e^t + b2 (1, 0, 0)ᵀ e^t + b3 (1, 0, 1)ᵀ e^(3t).
10. The system of differential equations is
[ x′(t) ; y′(t) ] = [ -1/10 2/30 ; 1/10 -2/30 ] [ x(t) ; y(t) ].
The characteristic polynomial of the coefficient matrix is p(λ) = λ² + (1/6)λ. Eigenvalues and associated eigenvectors are:
λ1 = 0, x1 = (2/3, 1)ᵀ; λ2 = -1/6, x2 = (-1, 1)ᵀ.
Hence the general solution is given by
( x(t), y(t) )ᵀ = b1 (-1, 1)ᵀ e^(-t/6) + b2 (2/3, 1)ᵀ.
Using the initial conditions x(0) = 10 and y(0) = 40, we find that b1 = 10 and b2 = 30. Thus, the particular solution, which gives the amount of salt in each tank at time t, is
x(t) = -10e^(-t/6) + 20
y(t) = 10e^(-t/6) + 30.
Section 8.5, p. 534
2. The eigenvalues of the coefficient matrix are λ1 = 2 and λ2 = 1 with associated eigenvectors p1 = (1, 0)ᵀ and p2 = (0, 1)ᵀ. Thus the origin is an unstable equilibrium. The phase portrait shows all trajectories tending away from the origin.
4. The eigenvalues of the coefficient matrix are λ1 = 1 and λ2 = -2 with associated eigenvectors p1 = (0, 1)ᵀ and p2 = (1, 1)ᵀ. Thus the origin is a saddle point. The phase portrait shows trajectories not in the direction of an eigenvector heading towards the origin, but bending away as t → ∞.
6. The eigenvalues of the coefficient matrix are λ1 = -1 + i and λ2 = -1 - i with associated eigenvectors p1 = (1, i)ᵀ and p2 = (1, -i)ᵀ. Since the real part of the eigenvalues is negative, the origin is a stable equilibrium with trajectories spiraling in towards it.
8. The eigenvalues of the coefficient matrix are λ1 = -2 + i and λ2 = -2 - i with associated eigenvectors p1 = (1, i)ᵀ and p2 = (1, -i)ᵀ. Since the real part of the eigenvalues is negative, the origin is a stable equilibrium with trajectories spiraling in towards it.
10. The eigenvalues of the coefficient matrix are λ1 = 1 and λ2 = 5 with associated eigenvectors p1 = (1, -1)ᵀ and p2 = (1, 1)ᵀ. Thus the origin is an unstable equilibrium. The phase portrait shows all trajectories tending away from the origin.
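The classifications above follow from the signs of the real parts of the eigenvalues alone. A Python/NumPy sketch; the matrix is hypothetical, chosen to have eigenvalues -1 ± i like Exercise 6:

```python
import numpy as np

# Classify the origin for x' = Ax from the eigenvalues, as in
# Exercises 2-10.  This hypothetical A has eigenvalues -1 ± i,
# so the origin should be a stable (spiral) equilibrium.
A = np.array([[-1.,  1.],
              [-1., -1.]])

w = np.linalg.eigvals(A)
if np.all(w.real < 0):
    kind = "stable"        # all trajectories approach the origin
elif np.all(w.real > 0):
    kind = "unstable"      # all trajectories move away
else:
    kind = "saddle"        # mixed signs (for real eigenvalues)
print(kind)  # stable
```

A nonzero imaginary part, as here, adds the spiraling behavior seen in the phase portraits of Exercises 6 and 8.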
Section 8.6, p. 542
2. (a)
(b)
(c)
x1 x2 x3
xy
x1
5
4. (a) 0
0
1 2
2 3
0
3
4 3
3
2
0
x1
3 x2 .
4
x3
x
.
y
0 1
2
x1
0
3 x2 .
x2 x3 1
2
3
0
x3
0
0
4
0
0
(b) 0
5
0 .
1
0 .
0 5
0
0
1
1
(c) 0
0
0
0
1
0 .
0 1
6. y1² + 2y2².
8. 4y3².
10. 5y1² − 5y2².
12. y1² + y2².
14. y1² + y2² + y3².
16. y1² + y2² − y3².
18. y1² − y2² − y3²; rank = 3; signature = −1.
20. y1² = 1, which represents the two lines y1 = 1 and y1 = −1. The equation y1² = −1 represents no conic at all.
22. g1, g2, and g4 are equivalent. The eigenvalues of the matrices associated with the quadratic forms are: for g1: 1, 1, −1; for g2: 9, 3, −1; for g3: 2, 1, 1; for g4: 5, 5, −5. The rank r and signature s of g1, g2, and g4 are r = 3 and s = 2p − r = 1.
24. (d)
25. (P T AP )T = P T AT P = P T AP since AT = A.
134
Chapter 8
26. (a) A = P T AP for P = In .
(b) If B = P T AP with nonsingular P , then A = (P 1 )T BP 1 and B is congruent to A.
(c) If B = P T AP and C = QT BQ with P , Q nonsingular, then C = QT P T AP Q = (P Q)T A(P Q)
with P Q nonsingular.
27. If A is symmetric, there exists an orthogonal matrix P such that P 1 AP = D is diagonal. Since P is
orthogonal, P 1 = P T . Thus A is congruent to D.
28. Let
A = [ a b ; b d ]
and let the eigenvalues of A be λ1 and λ2. The characteristic polynomial of A is
f(λ) = λ² − (a + d)λ + ad − b².
If A is positive definite then both λ1 and λ2 are > 0, so λ1 λ2 = det(A) > 0. Also,
[ 1 0 ] [ a b ; b d ] [ 1 ; 0 ] = a > 0.
Conversely, let det(A) > 0 and a > 0. Then λ1 λ2 = det(A) > 0, so λ1 and λ2 are of the same sign. If λ1 and λ2 are both < 0, then λ1 + λ2 = a + d < 0, so d < −a. Since a > 0, we have d < 0 and ad < 0. Now det(A) = ad − b² > 0, which means that ad > b² ≥ 0, so ad > 0, a contradiction. Hence λ1 and λ2 are both positive.
29. Let A be positive definite and g(x) = xᵀAx. By Theorem 8.10, g(x) is a quadratic form which is equivalent to
h(y) = y1² + y2² + ··· + yp² − y(p+1)² − ··· − yr².
If g and h are equivalent then h(y) > 0 for each y ≠ 0. However, this can happen if and only if all terms in h(y) are positive; that is, if and only if A is congruent to In, or if and only if A = PᵀInP = PᵀP.
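Exercise 27's congruence can be computed directly: for a symmetric A, an orthogonal P of eigenvectors gives PᵀAP diagonal. A Python/NumPy sketch with a made-up symmetric matrix:

```python
import numpy as np

# Exercise 27 in code: a symmetric A is congruent to a diagonal matrix
# via an orthogonal P (so P^{-1} = P^T).  Example matrix is invented.
A = np.array([[2., 1.],
              [1., 2.]])

w, P = np.linalg.eigh(A)   # orthogonal eigenvector matrix for symmetric A
D = P.T @ A @ P            # congruence transformation

print(np.round(D, 10))     # diag(1, 3): the eigenvalues of A
```

Since both eigenvalues are positive, this A is positive definite, illustrating the criterion of Exercise 28 as well (a = 2 > 0 and det(A) = 3 > 0).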
Section 8.7, p. 551
2. Parabola.
4. Two parallel lines.
6. Straight line.
8. Hyperbola.
10. None.
12. Hyperbola; x²/4 − y²/4 = 1.
14. Parabola; x² + 4y = 0.
16. Ellipse; 4x² + 5y² = 20.
18. None; 2x² + y² = −2.
20. Possible answer: hyperbola; x²/2 − y²/2 = 1.
22. Possible answer: parabola; x² = 4y.
24. Possible answer: ellipse; x²/2 + y²/(1/2) = 1.
26. Possible answer: ellipse; x²/4 + y²/2 = 1.
28. Possible answer: ellipse; x² + y²/(1/2) = 1.
30. Possible answer: parabola; y² = (1/8)x.
Section 8.8, p. 560
2. Ellipsoid.
4. Elliptic paraboloid.
6. Hyperbolic paraboloid.
8. Hyperboloid of one sheet.
10. Hyperbolic paraboloid.
12. Hyperboloid of one sheet.
14. Ellipsoid.
16. Hyperboloid of one sheet; x²/(1/2) + y²/(1/2) − z²/(1/4) = 1.
18. Ellipsoid; x²/(25/2) + y²/(25/4) + z²/(25/10) = 1.
20. Hyperboloid of two sheets; x² − y²/2 − z²/2 = 1.
22. Ellipsoid; x²/8 + y²/4 + z²/8 = 1.
24. Hyperbolic paraboloid; x²/(1/2) − y²/(1/2) = z.
26. Ellipsoid; x²/9 + y²/9 + z²/(9/5) = 1.
28. Hyperboloid of one sheet; x²/4 + y²/2 − z² = 1.
Chapter 10
MATLAB Exercises
Section 10.1, p. 597
Basic Matrix Properties, p. 598
ML.2. (a) Use command size(H)
(b) Just type H
(c) Type H(:,1:3)
(d) Type H(4:5,:)
Matrix Operations, p. 598
ML.2. aug =
2 2 3 4
3 4 6 4
5 12 15 8
ML.4. (a) R = A(2,:)
R=
324
C = B(:,3)
C=
1
3
5
V = R*C
V=
11
V is the (2,3)-entry of the product A B.
(b) C = B(:,2)
C=
0
3
2
V = A*C
V=
1
14
0
13
V is column 2 of the product A B.
(c) R = A(3,:)
R =
4 2 3
V = R*B
V =
10 0 17 3
V is row 3 of the product A*B.
ML.6. (a) Entry-by-entry multiplication.
(b) Entry-by-entry division.
(c) Each entry is squared.
Powers of a Matrix, p. 599
ML.2. (a) A = tril(ones(5),-1)
A =
0 0 0 0 0
1 0 0 0 0
1 1 0 0 0
1 1 1 0 0
1 1 1 1 0
A^2 =
0 0 0 0 0
0 0 0 0 0
1 0 0 0 0
2 1 0 0 0
3 2 1 0 0
A^3 =
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
1 0 0 0 0
3 1 0 0 0
A^4 has a single nonzero entry, a 1 in position (5,1), and A^5 is the zero matrix. Thus k = 5.
(b) This exercise uses the random number generator rand. The matrix A and the value of k may vary.
A = triu(fix(10*rand(7)),2)
A =
0 0 0 0 0 2 8
0 0 0 6 7 9 2
0 0 0 0 3 7 4
0 0 0 0 0 7 7
0 0 0 0 0 0 4
0 0 0 0 0 0 0
0 0 0 0 0 0 0
Here A^3 is all zeros, so k = 3.
ML.4. (a) (A^2 - 7*A)*(A + 3*eye(size(A)))
ans =
2.8770 7.1070 14.0160
4.9360 5.0480 14.0160
6.9090 7.1070 9.9840
(b) (A - eye(size(A)))^2 + (A^3 + A)
ans =
1.3730 0.2430 0.3840
0.2640 1.3520 0.3840
0.1410 0.2430 1.6160
(c) Computing the powers of A as A2 , A3 , . . . soon gives the impression that the sequence is converging to
0.2273
0.2273
0.2273
0.2727
0.2727
0.2727
0.5000
0.5000
0.5000
Typing format rat, and displaying the preceding matrix gives
ans =
5/22 3/11 1/2
5/22 3/11 1/2
5/22 3/11 1/2
ML.6. The sequence is converging to the zero matrix.
Row Operations and Echelon Forms, p. 600
ML.2. Enter the matrix A into Matlab and use the following Matlab commands. We use the format
rat command to display the matrix A in rational form at each stage.
A = [1/2 1/3 1/4 1/5;1/3 1/4 1/5 1/6;1 1/2 1/3 1/4]
A=
0.5000 0.3333 0.2500 0.2000
0.3333 0.2500 0.2000 0.1667
1.0000 0.5000 0.3333 0.2500
format rat, A
A=
1/2 1/3 1/4 1/5
1/3 1/4 1/5 1/6
1 1/2 1/3 1/4
format
(a) A(1,:) = 2*A(1,:)
A =
1.0000 0.6667 0.5000 0.4000
0.3333 0.2500 0.2000 0.1667
1.0000 0.5000 0.3333 0.2500
format rat, A
A =
1 2/3 1/2 2/5
1/3 1/4 1/5 1/6
1 1/2 1/3 1/4
format
(b) A(2,:) = (-1/3)*A(1,:) + A(2,:)
A=
1.0000 0.6667 0.5000 0.4000
0 0.0278 0.0333 0.0333
1.0000 0.5000 0.3333 0.2500
format rat, A
A =
1 2/3 1/2 2/5
0 1/36 1/30 1/30
1 1/2 1/3 1/4
format
(c) A(3,:) = -1*A(1,:) + A(3,:)
A =
1.0000 0.6667 0.5000 0.4000
0 0.0278 0.0333 0.0333
0 -0.1667 -0.1667 -0.1500
format rat, A
A =
1 2/3 1/2 2/5
0 1/36 1/30 1/30
0 -1/6 -1/6 -3/20
format
(d) temp = A(2,:)
temp =
0 0.0278 0.0333 0.0333
A(2,:) = A(3,:)
A =
1.0000 0.6667 0.5000 0.4000
0 -0.1667 -0.1667 -0.1500
0 -0.1667 -0.1667 -0.1500
A(3,:) = temp
A =
1.0000 0.6667 0.5000 0.4000
0 -0.1667 -0.1667 -0.1500
0 0.0278 0.0333 0.0333
format rat, A
A =
1 2/3 1/2 2/5
0 -1/6 -1/6 -3/20
0 1/36 1/30 1/30
format
ML.4. Enter A into Matlab, then type reduce(A). Use the menu to select row operations. There are
many dierent sequences of row operations that can be used to obtain the reduced row echelon form.
However, the reduced row echelon form is unique and is
ans =
1.0000
0
0
0.0500
0 1.0000
0 0.6000
0
0 1.0000
1.5000
format rat, ans
ans =
1 0 0 1/20
0 1 0 3/5
001
3/2
format
ML.6. Enter the augmented matrix aug into Matlab. Then use command reduce(aug) to construct row
operations to obtain the reduced row echelon form. We obtain
ans =
10100
01200
00001
The last row is equivalent to the equation 0x + 0y + 0z + 0w = 1, which is clearly impossible. Thus
the system is inconsistent.
ML.8. Enter the augmented matrix aug into Matlab. Then use command reduce(aug) to construct row
operations to obtain the reduced row echelon form. We obtain
ans =
1 0 1 0
01
20
00
00
The second row corresponds to the equation y + 2z = 0. Hence we can choose z arbitrarily. Set z = r, any real number. Then y = -2r. The first row corresponds to the equation x - z = 0, which is the same as x = z = r. Hence the solution to this system is
x = r
y = -2r
z = r
ML.10. After entering A into Matlab, use command reduce( 4 eye(size(A)) A). Selecting row
operations, we can show that the reduced row echelon form of 4I2 A is
1 -1
0 0
Thus the solution to the homogeneous system is
x = ( r, r )ᵀ.
Hence for any real number r, not zero, we obtain a nontrivial solution.
ML.12. (a) A = [1 1 1;1 1 0;0 1 1];
b = [0 3 1] ;
x = A\ b
x=
1
4
3
(b) A = [1 1 1;1 1 2;2 1 1];
b = [1 3 2] ;
x = A\ b
x=
1.0000
0.6667
0.0667
LU -Factorization, p. 601
ML.2. We show the rst few steps of the LU-factorization using routine lupr and then display the matrices
L and U .
[L,U] = lupr(A)
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Find an LU-FACTORIZATION by Row Reduction
L =
1 0 0
0 1 0
0 0 1
U =
8 -1 2
3 7 2
1 1 5
OPTIONS
<-1> Undo previous operation.
<1> Insert element into L.
<0> Quit.
ENTER your choice ===> 1
Enter multiplier. -3/8
Enter first row number. 1
Enter number of row that changes. 2
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Replacement by Linear Combination Complete
L =
1 0 0
0 1 0
0 0 1
U =
8 -1 2
0 7.375 1.25
1 1 5
You just performed operation -0.375 * Row(1) + Row(2)
OPTIONS
<-1> Undo previous operation.
<1> Insert element into L.
<0> Quit.
ENTER your choice ===> 1
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Insert a value in L in the position you just eliminated in U. Let the multiplier you just used be called num. It has the value -0.375.
Enter row number of L to change. 2
Enter column number of L to change. 1
Value of L(2,1) = -num
Correct: L(2,1) = 0.375
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Continuing the factorization process we obtain
L =
1 0 0
0.375 1 0
0.125 0.1525 1
U =
8 -1 2
0 7.375 1.25
0 0 4.559
Warning: It is recommended that the row multipliers be written in terms of the entries of matrix U when the entries are decimal expressions. For example, U(3,2)/U(2,2). This assures that the exact numerical values are used rather than the decimal approximations shown on the screen. The preceding display of L and U appears in the routine lupr, but the following displays, which are shown upon exit from the routine, show the decimal values in the entries more accurately.
L =
1.0000 0 0
0.3750 1.0000 0
0.1250 0.1525 1.0000
U =
8.0000 -1.0000 2.0000
0 7.3750 1.2500
0 0 4.5593
ML.4. The detailed steps of the solution of Exercises 7 and 8 are omitted. The solution to Exercise 7 is (2, 2, 1)ᵀ and the solution to Exercise 8 is (1, 2, 5, 4)ᵀ.
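The elimination that lupr walks through interactively is just Doolittle LU factorization without pivoting. A minimal Python/NumPy sketch with a hypothetical matrix:

```python
import numpy as np

# Minimal Doolittle LU factorization without pivoting, the same
# elimination the lupr routine performs step by step.
def lu(A):
    A = A.astype(float)
    n = A.shape[0]
    L, U = np.eye(n), A.copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]      # multiplier stored in L
            U[i, :] -= L[i, j] * U[j, :]     # eliminate entry (i, j)
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])   # hypothetical example matrix
L, U = lu(A)
print(np.allclose(L @ U, A))   # True
```

As the manual's warning notes, the multipliers are computed from the current entries of U, not from rounded screen values.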
Matrix Inverses, p. 601
ML.2. We use the fact that A is nonsingular if rref(A) is the identity matrix.
(a) A = [1 2;2 4];
rref(A)
ans =
1 2
0 0
Thus A is singular.
(b) A = [1 0 0;0 1 0;1 1 1];
rref(A)
ans =
1 0 0
0 1 0
0 0 1
Thus A is nonsingular.
(c) A = [1 2 1;0 1 2;1 0 0];
rref(A)
ans =
1 0 0
0 1 0
0 0 1
Thus A is nonsingular.
ML.4. (a) A = [2 1;2 3];
rref([A eye(size(A))])
ans =
1.0000 0 0.7500 -0.2500
0 1.0000 -0.5000 0.5000
format rat, ans
ans =
1 0 3/4 -1/4
0 1 -1/2 1/2
format
(b) A = [1 -1 2;0 2 1;1 0 0];
rref([A eye(size(A))])
ans =
1.0000 0 0 0 0 1.0000
0 1.0000 0 -0.2000 0.4000 0.2000
0 0 1.0000 0.4000 0.2000 -0.4000
format rat, ans
ans =
1 0 0 0 0 1
0 1 0 -1/5 2/5 1/5
0 0 1 2/5 1/5 -2/5
format
Determinants by Row Reduction, p. 601
ML.2. There are many sequences of row operations that can be used. Here we record the value of the
determinant so you may check your result.
(a) det(A) = 9.
(b) det(A) = 5.
ML.4. (a) A = [2 3 0;4 1 0;0 0 5];
det(5 eye(size(A)) A)
ans =
0
(b) A = [1 1;5 2];
det(3 eye(size(A)) A) 2
ans =
9
(c) A = [1 1 0;0 1 0;1 0 1];
det(inverse(A) A)
ans =
1
Determinants by Cofactor Expansion, p. 602
ML.2. A = [1 5 0;2 1 3;3 2 1];
cofactor(2,1,A)
cofactor(2,2,A)
ans =
ans =
5
1
cofactor(2,3,A)
ans =
13
ML.4. A = [ 1 2 0 0;2 1 2 0; 0 2 1 2;0 0 2 1];
(Use expansion about the rst column.)
detA = 1 cofactor(1,1,A) + 2 cofactor(2,1,A)
detA =
5
Vector Spaces, p. 603
ML.2. p = [2 5 1 -2], q = [1 0 3 5]
p =
2 5 1 -2
q =
1 0 3 5
(a) p + q
ans =
3 5 4 3
which is 3t^3 + 5t^2 + 4t + 3.
(b) 5*p
ans =
10 25 5 -10
which is 10t^3 + 25t^2 + 5t - 10.
(c) 3*p - 4*q
ans =
2 15 -9 -26
which is 2t^3 + 15t^2 - 9t - 26.
Subspaces, p. 603
ML.4. (a) Apply the procedure in ML.3(a).
v1 = [1 2 1];v2 = [3 0 1];v3 = [1 8 3];v = [-2 14 4];
rref([v1' v2' v3' v'])
ans =
1 0 4 7
0 1 -1 -3
0 0 0 0
This system is consistent, so v is a linear combination of {v1, v2, v3}. In the general solution, if we set c3 = 0, then c1 = 7 and c2 = -3. Hence 7v1 - 3v2 = v. There are many other linear combinations that work.
(b) After entering the 22 matrices into Matlab we associate a column with each one by reshaping
it into a 41 matrix. The linear system obtained from the linear combination of reshaped vectors
is the same as that obtained using the 2 2 matrices in c1 v1 + c2 v2 + c3 v3 = v.
v1 = [1 2;1 0];v2 = [2 1;1 2];v3 = [ 3 1;0 1];v = eye(2);
rref([reshape(v1,4,1) reshape(v2,4,1) reshape(v3,4,1) reshape(v,4,1)])
ans =
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
1
The system is inconsistent, hence v is not a linear combination of {v1 , v2 , v3 }.
ML.6. Follow the method in ML.4(a).
v1 = [1 1 0 1]; v2 = [1 1 0 1]; v3 = [0 1 2 1];
(a) v = [2 3 2 3];
rref([v1' v2' v3' v'])
ans =
1 0 0 2
0 1 0 0
0 0 1 1
0 0 0 0
Since the system is consistent, v is in span S. In fact, v = 2v1 + v3.
(b) v = [2 3 2 3];
rref([v1 v2 v3 v ])
146
Chapter 10
ans =
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
The system is inconsistent, hence v is not in span S .
(c) v = [0 1 2 3];
rref([v1 v2 v3 v ])
ans =
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
The system is inconsistent, hence v is not in span S .
Linear Independence/Dependence, p. 604
ML.2. Form the augmented matrix A
0 and row reduce it.
A = [1 2 0 1;1 1 1 2;2 1 5 7;0 2 2 2];
rref([A zeros(4,1)])
ans =
1 0 2 3 0
0 1 -1 -1 0
0 0 0 0 0
0 0 0 0 0
The general solution is x4 = s, x3 = t, x2 = t + s, x1 = -2t - 3s. Hence
x = ( -2t - 3s, t + s, t, s ) = t ( -2, 1, 1, 0 ) + s ( -3, 1, 0, 1 ),
and it follows that ( -2, 1, 1, 0 ) and ( -3, 1, 0, 1 ) span the solution space.
Bases and Dimension, p. 604
ML.2. Follow the procedure in Exercise ML.5(b) in Section 5.2.
v1 = [0 2 2] ;v2 = [1 3 1] ;v3 = [2 8 4] ;
rref([v1 v2 v3 zeros(size(v1))])
ans =
1 0 1 0
0 1 2 0
0 0 0 0
It follows that there is a nontrivial solution so S is linearly dependent and cannot be a basis for V .
ML.4. Here we do not know dim(span S ), but dim(span S ) = the number of linearly independent vectors
in S . We proceed as we did in ML.1.
v1 = [1 2 1 0] ;v2 = [2 1 3 1] ;v3 = [2 2 4 2] ;
rref([v1 v2 v3 zeros(size(v1))])
ans =
1 0 2 0
0 1 2 0
0 0 0 0
0 0 0 0
The leading 1s imply that v1 and v2 are a linearly independent subset of S , hence dim(span S ) = 2
and S is not a basis for V .
ML.6. Any vector in V has the form
a b c = a 2a c c = a 1 2 0 + c 0 1 1 .
It follows that T = 1 2 0 , 0 1 1 spans V and since the members of T are not multiples
of one another, T is a linearly independent subset of V . Thus dim V = 2. We need only determine
if S is a linearly independent subset of V . Let
v1 = [0 1 1] ;v2 = [1 1 1] ;
then
rref([v1 v2 zeros(size(v1))])
ans =
1 0 0
0 1 0
0 0 0
It follows that S is linearly independent and so Theorem 4.9 implies that S is a basis for V .
In Exercises ML.7 through ML.9 we use the technique involving leading 1s as in Example 5.
ML.8. Associate a column with each 2 2 matrix as in Exercise ML.4(b) in Section 5.2.
v1 = [1 2;1 2] ;v2 = [1 0;1 1] ;v3 = [0 2;0 1] ;v4 = [2 4;2 4] ;v5 = [1 0;0 1] ;
rref([reshape(v1,4,1) reshape(v2,4,1) reshape(v3,4,1) reshape(v4,4,1) reshape(v5,4,1)
zeros(4,1)])
ans =
1 0 1 2 0 0
0 1 -1 0 0 0
0 0 0 0 1 0
0 0 0 0 0 0
The leading 1s point to v1, v2, and v5, which form a basis for span S. We have dim(span S) = 3, so span S ≠ M22.
ML.10. v1 = [1 1 0 0] ;v2 = [1 0 1 0] ;
rref([v1 v2 eye(4) zeros(size(v1))])
ans =
1 0 0 1 0 0 0
0 1 0 0 1 0 0
0 0 1 -1 -1 0 0
0 0 0 0 0 1 0
It follows that { v1, v2, e1 = (1, 0, 0, 0)ᵀ, e4 = (0, 0, 0, 1)ᵀ } is a basis for V which contains S.
ML.12. Any vector in V has the form (a, -2d + e, a, d, e). It follows that
(a, -2d + e, a, d, e) = a (1, 0, 1, 0, 0) + d (0, -2, 0, 1, 0) + e (0, 1, 0, 0, 1),
and T = { (1, 0, 1, 0, 0), (0, -2, 0, 1, 0), (0, 1, 0, 0, 1) } is a basis for V. Hence let
v1 = [0 -3 0 2 1]';w1 = [1 0 1 0 0]';w2 = [0 -2 0 1 0]';w3 = [0 1 0 0 1]';
then
rref([v1 w1 w2 w3 zeros(size(v1))])
ans =
1 0 0 1 0
0 1 0 0 0
0 0 1 -2 0
0 0 0 0 0
0 0 0 0 0
Thus {v1, w1, w2} is a basis for V containing S.
Thus {v1 , w1 , w2 } is a basis for V containing S .
Coordinates and Change of Basis, p. 605
ML.2. Proceed as in ML.1 by making each of the vectors in S a column in matrix A.
A = [1 0 1 1;1 2 1 3;0 2 1 1;0 1 0 0] ;
rref(A)
ans =
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
1
To nd the coordinates of v we solve a linear system. We can do all three parts simultaneously as
follows. Associate with each vector v a column. Form a matrix B from these columns.
B = [4 12 8 14;1/2 0 0 0;1 1 1 7/3] ;
rref([A B])
ans =
1.0000
0
0
0
0
1.0000
0
0
0
0
1.0000
0
0
0
0
1.0000
1.0000
3.0000
4.0000
2.0000
0.5000
0
0.5000
1.0000
0.3333
0.6667
0
0.3333
The coordinates are the last three columns of the preceding matrix.
ML.4. A = [1 0 1;1 1 0;0 1 1];
B = [2 1 1;1 2 1;1 1 2];
rref([A B])
ans =
1
0
0
0
1
0
0
0
1
1
0
1
1
1
0
0
1
1
The transition matrix from the T -basis to the S -basis is P = ans(:,4:6).
P=
1
0
1
1
1
0
0
1
1
ML.6. A = [1 2 3 0;0 1 2 3;3 0 1 2;2 3 0 1] ;
B = eye(4);
rref([A B])
ans =
1.0000
0
0
0
0
1.0000
0
0
0
0
1.0000
0
0
0
0
1.0000
0.0417
0.2083
0.2917
0.0417
0.0417
0.0417
0.2083
0.2917
0.2917 0.2083
0.0417 0.2917
0.0417 0.0417
0.2083 0.0417
The transition matrix P is found in columns 5 through 8 of the preceding matrix.
Homogeneous Linear Systems, p. 606
ML.2. Enter A into Matlab and we nd that
rref(A)
ans =
1 0 0
0 1 0
0 0 1
0 0 0
0 0 0
The homogeneous system Ax = 0 has only the trivial solution.
ML.4. Form the matrix 3I2 − A in Matlab as follows.
C = 3*eye(2) - [1 2;2 1]
C =
2 -2
-2 2
rref(C)
ans =
1 -1
0 0
The solution is x = ( t, t )ᵀ, for t any real number. Just choose t ≠ 0 to obtain a nontrivial solution.
Rank of a Matrix, p. 606
ML.2. (a) One basis for the row space of A consists of the nonzero rows of rref(A).
A = [1 3 1;2 5 0;4 11 2;6 9 1];
rref(A)
ans =
1 0 0
0 1 0
0 0 1
0 0 0
Another basis is found using the leading 1s of rref(AT ) to point to rows of A that form a basis
for the row space of A.
rref(A )
ans =
1 0 2 0
0 1 1 0
0 0 0 1
It follows that rows 1, 2, and 4 of A are a basis for the row space of A.
(b) Follow the same procedure as in part (a).
A = [2 1 2 0;0 0 0 0;1 2 2 1;4 5 6 2;3 3 4 1];
ans =
1.0000
0 0.6667
0 1.0000 0.6667
0
0
0
0
0
0
format rat, ans
ans =
1 0 2/3 1/3
0 1 2/3
2/3
00
0
0
00
0
0
format
rref(A )
0.3333
0.6667
0
0
ans =
1
0
0
0
It follows
001
012
000
000
that rows
1
1
0
0
1 and 2 of A are a basis for the row space of A.
ML.4. (a) A = [3 2 1;1 2 1;2 1 3];
rank(A)
ans =
3
The nullity of A is 0.
(b) A = [1 2 1 2 1;2 1 0 0 2;1 1 1 2 1;3 0 1 2 3];
rank(A)
ans =
2
The nullity of A = 5 − rank(A) = 3.
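The rank–nullity bookkeeping of ML.4 carries over directly to Python/NumPy. A sketch with a hypothetical matrix (its second row is twice its first, so the rank is 2):

```python
import numpy as np

# Rank and nullity as in ML.4: nullity = (number of columns) - rank.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # = 2 * row 1, so rank drops to 2
              [1., 1., 1.]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity)  # 2 1
```

matrix_rank counts the singular values above a tolerance, which is the same criterion as Section 8.2, Exercise 7.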
Standard Inner Product, p. 607
ML.2. (a) u = [2 2 1] ;norm(u)
ans =
3
(b) v = [0 4 3 0] ;norm(v)
ans =
5
(c) w = [1 0 1 0 3] ;norm(w)
ans =
3.3166
ML.4. Enter A, B , and C as points and construct vectors vAB , vBC , and vCA. Then determine the
lengths of the vectors.
A = [1 3 2];B = [4 1 0];C = [1 1 2];
vAB = B - A
vAB =
3
2
2
norm(vAB)
ans =
4.1231
vBC = C - B
vBC =
3
2
norm(vBC)
ans =
4.1231
vCA = A - C
vCA =
0
2
norm(vCA)
ans =
4.4721
2
4
ML.8. (a) u = [3 -2 4 0];v = [0 2 1 0];
ang = dot(u,v)/(norm(u)*norm(v))
ang =
0
(b) u = [2 2 -1];v = [2 0 1];
ang = dot(u,v)/(norm(u)*norm(v))
ang =
0.4472
degrees = ang*(180/pi)
degrees =
25.6235
(c) u = [1 0 0 2];v = [0 3 4 0];
ang = dot(u,v)/(norm(u)*norm(v))
ang =
0
Cross Product, p. 608
ML.2. (a) u = [2 3 -1];v = [2 3 1];cross(u,v)
ans =
6 -4 0
(b) u = [3 1 1];v = 2 u;cross(u,v)
ans =
0
0
0
(c) u = [1 -2 1];v = [3 1 -1];cross(u,v)
ans =
1 4 7
ML.4. Following Example 6 we proceed as follows in Matlab.
u = [3 2 1];v = [1 2 3];w = [2 1 2];
vol = abs(dot(u,cross(v,w)))
vol =
8
The Gram-Schmidt Process, p. 608
ML.2. Use the following Matlab commands.
A = [1 0 1 1;1 2 1 3;0 2 1 1;0 1 0 0] ;
gschmidt(A)
ans =
0.5774 -0.2582 -0.1690 0.7559
0 0.7746 0.5071 0.3780
0.5774 -0.2582 0.6761 -0.3780
0.5774 0.5164 -0.5071 -0.3780
ML.4. We have that all vectors of the form (a, 0, a + b, b + c) can be expressed as follows:
(a, 0, a + b, b + c) = a (1, 0, 1, 0) + b (0, 0, 1, 1) + c (0, 0, 0, 1).
By the same type of argument used in Exercises 16–19 we show that
S = {v1, v2, v3} = { (1, 0, 1, 0), (0, 0, 1, 1), (0, 0, 0, 1) }
is a basis for the subspace. Apply routine gschmidt to the vectors of S.
A = [1 0 1 0;0 0 1 1;0 0 0 1]';
gschmidt(A,1)
ans =
1.0000 -0.5000 0.3333
0 0 0
1.0000 0.5000 -0.3333
0 1.0000 0.3333
The columns are an orthogonal basis for the subspace.
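The gschmidt routine implements the classical Gram–Schmidt process. A Python/NumPy sketch of the same idea, orthonormalizing the columns of a small hypothetical matrix:

```python
import numpy as np

# Classical Gram-Schmidt on the columns of A, like the gschmidt routine.
def gram_schmidt(A):
    Q = []
    for a in A.T.astype(float):
        v = a - sum((q @ a) * q for q in Q)   # remove existing components
        Q.append(v / np.linalg.norm(v))       # normalize the remainder
    return np.column_stack(Q)

A = np.array([[1., 1.],
              [0., 1.],
              [1., 0.]])        # hypothetical input columns
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: columns are orthonormal
```

Passing a second flag, as in gschmidt(A,1), would correspond to skipping the normalization step.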
Projections, p. 609
ML.2. w1 = [1 0 1 1]', w2 = [1 1 -1 0]'
w1 =
1
0
1
1
w2 =
1
1
-1
0
(a) We show the dot product of w1 and w2 is zero and since nonzero orthogonal vectors are linearly
independent they form a basis for W .
dot(w1,w2)
ans =
0
(b) v = [2 1 2 1]
v=
2
1
2
1
proj = dot(v,w1)/norm(w1) 2 w1
proj =
1.6667
0
1.6667
1.6667
format rat
proj
proj =
5/3
0
5/3
5/3
format
(c) proj = dot(v,w1)/norm(w1) 2 w1 + dot(v,w2)/norm(w2) 2 w2
proj =
2.0000
0.3333
1.3333
1.6667
format rat
proj
proj =
2
1/3
4/3
5/3
format
ML.4. Note that the vectors in S are not an orthogonal basis for W = span S . We rst use the Gram
Schmidt process to nd an orthonormal basis.
x = [[1 1 0 1] [2 1 0 0] [0 1 0 1] ]
x=
1
1
0
1
2
1
0
0
0
1
0
1
b = gschmidt(x)
b =
0.5774
0.5774
0
0.5774
0.7715
0.6172
0
0.1543
0.2673
0.5345
0
0.8018
Name these columns w1, w2, w3, respectively.
w1 = b(:,1);w2 = b(:,2);w3 = b(:,3);
Then w1, w2, w3 is an orthonormal basis for W .
v = [0 0 1 1]
v=
0
0
1
1
(a) proj = dot(v,w1) w1 + dot(v,w2) w2 + dot(v,w3) w3
proj =
0.0000
0
0
1.0000
(b) The distance from v to P is the length of vector proj + v.
norm( proj + v)
ans =
1
Least Squares, p. 609
ML.2. (a) y = 331.44x + 18704.83.
(b) 24007.58.
ML.4. Data for quadratic least squares (a sample of cos on [0, 1.5π]):
t: 0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5
yy: 1.00, 0.88, 0.54, 0.07, -0.42, -0.80, -0.99, -0.94, -0.65, -0.21
v = polyt(t,yy,2)
v =
0.2006
-1.2974
1.3378
Thus y = 0.2006t² − 1.2974t + 1.3378.
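The same quadratic fit can be reproduced in Python/NumPy with polyfit in place of the text's polyt routine, using the cosine samples from ML.4:

```python
import numpy as np

# Quadratic least squares as in ML.4; polyfit plays the role of polyt.
t = np.arange(0, 5, 0.5)
yy = np.array([1.00, 0.88, 0.54, 0.07, -0.42,
               -0.80, -0.99, -0.94, -0.65, -0.21])

c = np.polyfit(t, yy, 2)   # coefficients, highest power first
print(np.round(c, 4))      # approximately [0.2006, -1.2974, 1.3378]
```

The least-squares quadratic is unique, so any correct solver returns the same three coefficients.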
Kernel and Range of Linear Transformations, p. 611
ML.2. A = [ 3 2 7;2 1
4;2 2 6];
rref(A)
ans =
1 0 1
0 1 -2
0 0 0
It follows that the general solution to Ax = 0 is obtained from
x1 + x3 = 0
x2 - 2x3 = 0.
Let x3 = r; then x2 = 2r and x1 = -r. Thus
x = ( -r, 2r, r )ᵀ = r ( -1, 2, 1 )ᵀ
and { (-1, 2, 1)ᵀ } is a basis for ker L. To find a basis for range L proceed as follows.
rref(A')
ans =
1 0 -2
0 1 2
0 0 0
Then { (1, 0, -2)ᵀ, (0, 1, 2)ᵀ } is a basis for range L.
Matrix of a Linear Transformation, p. 611
ML.2. Enter C and the vectors from the S and T bases into Matlab. Then compute the images of vi as
L(vi ) = C vi .
C = [1 2 0;2 1 1;3 1 0; 1 0 2]
C=
1
2
3
1
2
1
1
0
0
1
0
2
v1 = [1 0 1] ; v2 = [2 0 1] ; v3 = [0 1 2] ;
w1 = [1 1 1 2] ; w2 = [1 1 1 0] ; w3 = [0 1 1 1] ; w4 = [0 0 1 0] ;
Lv1 = C v1; Lv2 = C v2; Lv3 = C v3;
rref([w1 w2 w3 w4 Lv1 Lv2 Lv3])
ans =
1.0000
0
0
0
0
1.0000
0
0
0
0
1.0000
0
0
0
0
1.0000
0.5000
0.5000
0
2.0000
0.5000
1.5000
1.0000
3.0000
0.5000
1.5000
3.0000
2.0000
It follows that A consists of the last three columns, A = ans(:,5:7)
A=
0.5000
0.5000
0
2.0000
0.5000
1.5000
1.0000
3.0000
0.5000
1.5000
3.0000
2.0000
Eigenvalues and Eigenvectors, p. 612
ML.2. The eigenvalues of matrix A will be computed using Matlab command roots(poly(A)).
(a) A = [-1 3;-3 5];
r = roots(poly(A))
r =
2
2
(b) A = [3 1 4; 1 0 1;4 1 2];
r = roots(poly(A))
r=
6.5324
2.3715
0.8392
(c) A = [2 2 0;1 1 0;1 1 0];
r = roots(poly(A))
r=
0
0
1
(d) A = [2 4;3 6];
r = roots(poly(A))
r=
0
8
ML.4. (a) A = [0 2;-1 3];
r = roots(poly(A))
r =
2
1
The eigenvalues are distinct, so A is diagonalizable. We find the corresponding eigenvectors.
M = (2*eye(size(A)) - A)
rref([M [0 0]'])
ans =
1 -1 0
0 0 0
The general solution is x2 = r, x1 = x2 = r. Let r = 1 and we have that (1, 1)ᵀ is an eigenvector.
M = (1*eye(size(A)) - A)
rref([M [0 0]'])
ans =
1 -2 0
0 0 0
The general solution is x2 = r, x1 = 2x2 = 2r. Let r = 1 and we have that (2, 1)ᵀ is an eigenvector.
P = [1 2;1 1]
P =
1 2
1 1
invert(P)*A*P
ans =
2 0
0 1
(b) A = [-1 3;-3 5];
r = roots(poly(A))
r =
2
2
M = (2*eye(size(A)) - A)
rref([M [0 0]'])
ans =
1 -1 0
0 0 0
The general solution is x2 = r, x1 = x2 = r. Let r = 1 and it follows that (1, 1)ᵀ is an eigenvector, but there is only one linearly independent eigenvector. Hence A is not diagonalizable.
(c) A = [0 0 4;5 3 6;6 0 5];
r = roots(poly(A))
r=
8.0000
3.0000
3.0000
The eigenvalues are distinct, thus A is diagonalizable. We nd the corresponding eigenvectors.
M = (8*eye(size(A)) - A)
rref([M [0 0 0]'])
ans =
1.0000 0 -0.5000 0
0 1.0000 -1.7000 0
0 0 0 0
The general solution is x3 = r, x2 = 1.7x3 = 1.7r, x1 = .5x3 = .5r. Let r = 1 and we have that (.5, 1.7, 1)ᵀ is an eigenvector.
M = (3*eye(size(A)) - A)
rref([M [0 0 0]'])
ans =
1 0 0 0
0 0 1 0
0 0 0 0
Thus (0, 1, 0)ᵀ is an eigenvector.
M = (-3*eye(size(A)) - A)
rref([M [0 0 0]'])
ans =
1.0000 0 1.3333 0
0 1.0000 -0.1111 0
0 0 0 0
The general solution is x3 = r, x2 = (1/9)x3 = (1/9)r, x1 = -(4/3)x3 = -(4/3)r. Let r = 1 and we have that (-4/3, 1/9, 1)ᵀ is an eigenvector. Thus P is
P = [.5 0 -4/3;1.7 1 1/9;1 0 1]
invert(P)*A*P
ans =
8 0 0
0 3 0
0 0 -3
ML.6. A = [ 1 1.5 1.5; 2 2.5 1.5; 2 2.0 1.0]
r = roots(poly(A))
r=
1.0000
1.0000
0.5000
The eigenvalues are distinct, hence A is diagonalizable.
M = (1 eye(size(A)) A)
rref([M [0 0 0]'])
ans =
1
0
0
0
1
0
0
1
0
0
0
0
The general solution is x3 = r, x2 = r, x1 = 0. Let r = 1 and we have that 0 1 1
eigenvector.
is an
M = ( 1 eye(size(A)) A)
rref([M [0 0 0]'])
ans =
1
0
0
1
1
0
0
1
0
0
0
0
The general solution is x3 = r, x2 = r, x1 = r. Let r = 1 and we have that 1 1 1
eigenvector.
is an
M = (.5 eye(size(A)) A)
rref([M [0 0 0]'])
ans =
1
0
0
1
0
0
0
1
0
0
0
0
The general solution is x3 = 0, x2 = r, x1 = r. Let r = 1 and we have that 1 1 0
eigenvector. Hence let
P = [0 1 1;1 1 1;1 1 0]
P=
0
1
1
1
1
1
1
1
0
then we have
A30 = P (diag([1 1 .5]) 30 invert(P))
A30 =
1.0000
0
0
1.0000
0.0000
0
1.0000
1.0000
1.0000
is an
Since all the entries are not displayed as integers we set the format to long and redisplay the matrix
to view its contents for more detail.
format long
A30
A30 =
1.0000000000000
0
0
0.99999999906868
0.00000000093132
0
0.99999999906868
0.99999999906868
1.00000000000000
Note that this is not the same as the matrix A30 in Exercise ML.5.
Diagonalization, p. 613
ML.2. (a) A = [1 2; 1 4];
[V,D] = eig(A)
V=
0.8944
0.4472
0.7071
0.7071
D=
2
0
0
3
V'*V
ans =
1.0000 0.9487
0.9487 1.0000
Hence V is not orthogonal. However, since the eigenvalues are distinct A is diagonalizable, so
V can be replaced by an orthogonal matrix.
(b) A = [2 1 2;2 2 2;3 1 1];
[V,D] = eig(A)
V=
0.5482
0.6852
0.4796
0.7071
0.0000
0.7071
1.0000
0
0
0
4.0000
0
0.4082
0.8165
0.4082
D=
0
0
2.0000
V'*V
ans =
1.0000 0.0485 0.5874
0.0485
1.0000
0.5774
0.5874
0.5774
1.0000
Hence V is not orthogonal. However, since the eigenvalues are distinct A is diagonalizable, so
V can be replaced by an orthogonal matrix.
(c) A = [1 3;3 5];
[V,D] = eig(A)
V=
0.7071
0.7071
0.7071
0.7071
D=
2
0
0 2
Inspecting V , we see that there is only one linearly independent eigenvector, so A is not diagonalizable.
(d) A = [1 0 0;0 1 1;0 1 1];
[V,D] = eig(A)
V=
1.0000
0
0
0
0.7071
0.7071
0
0.7071
0.7071
1.0000
0
0
0
2.0000
0
0
0
0.0000
D=
V'*V
ans =
1.0000
0
0
0 1.0000
0
0
0 1.0000
Hence V is orthogonal. We should have expected this since A is symmetric.
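The observation in (d) — that a symmetric matrix always yields an orthogonal eigenvector matrix — holds in Python/NumPy as well, where eigh is the analogue of eig for symmetric input:

```python
import numpy as np

# For a symmetric A, the eigenvector matrix V returned by eigh is
# orthogonal: V' * V = I, as observed in part (d) of ML.2.
A = np.array([[1., 0., 0.],
              [0., 1., 1.],
              [0., 1., 1.]])   # the symmetric matrix from ML.2(d)

w, V = np.linalg.eigh(A)
print(np.allclose(V.T @ V, np.eye(3)))  # True
```

For the nonsymmetric matrices of parts (a)–(c), V'V ≠ I in general, which is exactly what the manual's displays show.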
Complex Numbers
Appendix B.1, p. A-11
2. (a) 1 + 2 i.
(b)
5
5
4. 20.
(b) 10.
9
10
(c)
7
10 i.
13.
(c) 4 3i.
(d) 17.
(d)
1
26
5
26 i.
5. (a) Re(c1 + c2 ) = Re((a1 + a2 ) + (b1 + b2 )i) = a1 + a2 = Re(c1 ) + Re(c2 )
Im(c1 + c2 ) = Im((a1 + a2 ) + (b1 + b2 )i) = b1 + b2 = Im(c1 ) + Im(c2 )
(b) Re(kc) = Re(ka + kbi) = ka = k Re(c)
Im(kc) = Im(ka + kbi) = kb = k Im(c)
(c) No.
(d) Re(c1 c2) = Re((a1 + b1 i)(a2 + b2 i)) = Re((a1 a2 − b1 b2) + (a1 b2 + a2 b1)i) = a1 a2 − b1 b2 ≠ Re(c1) Re(c2) in general.
6.
c = 1 + 4i
c = 2 + 3i
2
2
2
2
2
8. (a)
2
c
A+B
(b)
kA
(c)
CC 1
ij
ij
= aij + bij = aij + bij = A
= kaij = k aij = k A
ij
2
2
c
+B
ij
.
ij
= C 1 C = In = In ; thus (C )1 = C 1 .
10. (a) Hermitian, normal. (b) None. (c) Unitary, normal. (d) Normal. (e) Hermitian, normal. (f) None. (g) Normal. (h) Unitary, normal. (i) Unitary, normal. (j) Normal.
11. (a) aii = aii , hence aii is real. (See Property 4 in Section B1.)
(b) First, A^T = \overline{A} implies that \overline{A}^T = A. Let

    B = (A + \overline{A})/2.

Then

    \overline{B} = (\overline{A} + A)/2 = B,

so B is a real matrix. Also,

    B^T = ((A + \overline{A})/2)^T = (A^T + \overline{A}^T)/2 = (\overline{A} + A)/2 = B,

so B is symmetric.
Next, let

    C = (A - \overline{A})/(2i).

Then

    \overline{C} = (\overline{A} - A)/\overline{(2i)} = (\overline{A} - A)/(-2i) = (A - \overline{A})/(2i) = C,

so C is a real matrix. Also,

    C^T = ((A - \overline{A})/(2i))^T = (A^T - \overline{A}^T)/(2i) = (\overline{A} - A)/(2i) = -C,

so C is also skew symmetric. Moreover, A = B + iC.
(c) If A = A^T and A = \overline{A}, then \overline{A}^T = A^T = A. Hence, A is Hermitian.
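The decomposition in 11(b) can be verified numerically; the sample Hermitian matrix below is ours, not from the exercise:

```python
import numpy as np

# For Hermitian A: B = (A + conj(A))/2 is real symmetric,
# C = (A - conj(A))/(2i) is real skew symmetric, and A = B + iC.
A = np.array([[2 + 0j, 1 + 1j], [1 - 1j, 3 + 0j]])
assert np.allclose(A, np.conj(A).T)          # A is Hermitian

B = (A + np.conj(A)) / 2
C = (A - np.conj(A)) / 2j

assert np.allclose(B.imag, 0) and np.allclose(B, B.T)    # real symmetric
assert np.allclose(C.imag, 0) and np.allclose(C, -C.T)   # real skew symmetric
assert np.allclose(A, B + 1j * C)
```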
12. (a) If A is real and orthogonal, then A^{-1} = A^T, or AA^T = I_n. Since A is real, \overline{A}^T = A^T, hence A is unitary.
(b) \overline{(A^T)}^T A^T = (\overline{A}^T)^T A^T = \overline{A} A^T = \overline{A \overline{A}^T} = \overline{I_n} = I_n. Note: \overline{(A^T)} = (\overline{A})^T.
Similarly, A^T \overline{(A^T)}^T = I_n, so A^T is unitary.
(c) \overline{(A^{-1})}^T A^{-1} = ((\overline{A})^{-1})^T A^{-1} = ((\overline{A})^T)^{-1} A^{-1} = (A (\overline{A})^T)^{-1} = I_n^{-1} = I_n.
Note: \overline{(A^{-1})} = (\overline{A})^{-1} and ((\overline{A})^{-1})^T = ((\overline{A})^T)^{-1}. Similarly, A^{-1} \overline{(A^{-1})}^T = I_n, so A^{-1} is unitary.
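All three parts of Exercise 12 can be checked on concrete matrices (the sample matrices below are ours):

```python
import numpy as np

def is_unitary(M):
    # M is unitary when conj(M)^T M = I.
    return np.allclose(np.conj(M).T @ M, np.eye(M.shape[0]))

R = np.array([[0.6, -0.8], [0.8, 0.6]])        # real orthogonal
assert is_unitary(R)                           # part (a)

A = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # a sample unitary matrix
assert is_unitary(A)
assert is_unitary(A.T)                         # part (b)
assert is_unitary(np.linalg.inv(A))            # part (c)
```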
13. (a) Let

    B = (A + \overline{A}^T)/2 and C = (A - \overline{A}^T)/(2i).

Then

    \overline{B}^T = ((\overline{A} + A^T)/2)^T = (\overline{A}^T + A)/2 = (A + \overline{A}^T)/2 = B,

so B is Hermitian. Also,

    \overline{C}^T = ((\overline{A} - A^T)/(-2i))^T = (\overline{A}^T - A)/(-2i) = (A - \overline{A}^T)/(2i) = C,

so C is Hermitian. Moreover, A = B + iC.
(b) We have

    \overline{A}^T A = (\overline{B + iC})^T (B + iC) = (\overline{B}^T - i\overline{C}^T)(B + iC)
        = (B - iC)(B + iC)
        = B^2 - iCB + iBC - i^2 C^2
        = (B^2 + C^2) + i(BC - CB).

Similarly,

    A \overline{A}^T = (B + iC)(\overline{B + iC})^T = (B + iC)(\overline{B}^T - i\overline{C}^T)
        = (B + iC)(B - iC)
        = B^2 - iBC + iCB - i^2 C^2
        = (B^2 + C^2) + i(CB - BC).

Since \overline{A}^T A = A \overline{A}^T, we equate imaginary parts, obtaining BC - CB = CB - BC, which implies
that BC = CB. The steps are reversible, establishing the converse.
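The equivalence in 13(b) can be observed numerically; the (non-normal) example matrix below is ours:

```python
import numpy as np

# Write A = B + iC with Hermitian B = (A + A*)/2 and C = (A - A*)/(2i);
# then A is normal exactly when BC = CB.
A = np.array([[1 + 0j, 1j], [0j, 2 + 0j]])   # upper triangular, not normal
Astar = np.conj(A).T
B = (A + Astar) / 2
C = (A - Astar) / 2j

assert np.allclose(B, np.conj(B).T) and np.allclose(C, np.conj(C).T)
normal  = np.allclose(Astar @ A, A @ Astar)
commute = np.allclose(B @ C, C @ B)
assert normal == commute      # here both are False
```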
14. (a) If \overline{A}^T = A, then \overline{A}^T A = A^2 = A \overline{A}^T, so A is normal.
(b) If \overline{A}^T = A^{-1}, then \overline{A}^T A = A^{-1} A = A A^{-1} = A \overline{A}^T, so A is normal.
(c) One example is [i i; i i]. Note that this matrix is symmetric, but it is not Hermitian since it is not a real matrix.
15. Let A = B + iC be skew Hermitian. Then \overline{A}^T = -A, so B^T - iC^T = -B - iC. Then B^T = -B and
C^T = C. Thus, B is skew symmetric and C is symmetric. Conversely, if B is skew symmetric and C
is symmetric, then B^T = -B and C^T = C, so B^T - iC^T = -B - iC, or \overline{A}^T = -A. Hence, A is skew
Hermitian.
16. (a) x = (1 ± i√3)/2. (b) 2, ±i. (c) 1, ±i, -1 (1 is a double root).
18. (a) Possible answers: A1 = [i 0; 0 i], A2 = [-i 0; 0 -i].
20. (a) Possible answers: [i i; i i], [-i -i; -i -i].
(b) Possible answers: [i 0; 0 0], [-i 0; 0 0].
Appendix B.2, p. A-20
2. (a) x = 7/30 - 4/30 i, y = (11/15)(1 + 2i), z = 3/5 - 4/5 i.
(b) x = 1 + 4i, y = 1/2 + 3/2 i, z = 2 - i.
4. (a) 4i. (b) 0. (c) 9 - 8i. (d) 10.
6. (a) Yes. (b) No. (c) Yes.
7. (a) Let A and B be Hermitian and let k be a complex scalar. Then

    \overline{(A + B)}^T = \overline{A}^T + \overline{B}^T = A + B,

so the sum of Hermitian matrices is again Hermitian. Next,

    \overline{(kA)}^T = \overline{k} \overline{A}^T = \overline{k} A,

which equals kA only when k is real, so the set of Hermitian matrices is not closed under scalar multiplication and hence is not a
complex subspace of C^{n×n}.
(b) From (a), we have closure under addition, and since the scalars are real here, \overline{k} = k, hence \overline{(kA)}^T = kA.
Thus, W is a real subspace of the real vector space of n × n complex matrices.
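The closure facts in Exercise 7 are easy to confirm on sample matrices (ours, not from the exercise):

```python
import numpy as np

# Hermitian matrices are closed under addition and real scaling,
# but multiplying by a non-real scalar destroys Hermitian-ness.
def is_hermitian(M):
    return np.allclose(M, np.conj(M).T)

A = np.array([[1 + 0j, 2 - 1j], [2 + 1j, 3 + 0j]])
B = np.array([[0j, 1j], [-1j, 5 + 0j]])
assert is_hermitian(A) and is_hermitian(B)

assert is_hermitian(A + B)       # closure under addition
assert is_hermitian(2.5 * A)     # closure under real scalars
assert not is_hermitian(1j * A)  # fails for k = i, as in part (a)
```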
8. The zero vector 0 is not unitary, so W cannot be a subspace.
10. (a) No.
(b) No.
12. (a) P = [1 1; i -i].
(b) P = [1 1; -i i].
(c) P1 = [1 0 0; 0 1 1; 0 i -i], P2 = [0 1 1; 1 0 0; i 0 -i], P3 = [1 1 0; i -i 0; 0 0 1].
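A matrix of the form P = [1 1; i -i] arises as the eigenvector matrix of a real matrix with eigenvalues ±i. The illustration below uses A = [0 1; -1 0], which is our example (the exercise's matrix is not reproduced here):

```python
import numpy as np

# P = [1 1; i -i] diagonalizes A = [0 1; -1 0] (eigenvalues i and -i):
# the columns of P are eigenvectors of A.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
P = np.array([[1, 1], [1j, -1j]])

D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([1j, -1j]))
```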
13. (a) Let A be Hermitian and suppose that Ax = λx, x ≠ 0. We show that λ = \overline{λ}. We have

    \overline{(Ax)}^T = \overline{x}^T \overline{A}^T = \overline{x}^T A.

Also, \overline{(λx)}^T = \overline{λ} \overline{x}^T, so \overline{x}^T A = \overline{λ} \overline{x}^T. Multiplying both sides by x on the right, we obtain
\overline{x}^T A x = \overline{λ} \overline{x}^T x. However, \overline{x}^T A x = \overline{x}^T λ x = λ \overline{x}^T x. Thus, λ \overline{x}^T x = \overline{λ} \overline{x}^T x. Then (λ - \overline{λ}) \overline{x}^T x = 0,
and since \overline{x}^T x > 0, we have λ = \overline{λ}.
(b) \overline{A}^T = \overline{[2 0 0; 0 2 i; 0 -i 2]}^T = [2 0 0; 0 2 i; 0 -i 2] = A.
(c) No, see 11(b). An eigenvector x associated with a real eigenvalue λ of a complex matrix A is in
general complex, because Ax is in general complex. Thus x must also be complex.
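A NumPy check of the Hermitian matrix in part (b) confirms that its eigenvalues are all real:

```python
import numpy as np

# The Hermitian matrix from 13(b) has only real eigenvalues.
A = np.array([[2, 0, 0], [0, 2, 1j], [0, -1j, 2]])
assert np.allclose(A, np.conj(A).T)             # Hermitian

evals = np.linalg.eigvals(A)
assert np.allclose(evals.imag, 0, atol=1e-10)   # all eigenvalues real
```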
14. If A is unitary, then \overline{A}^T = A^{-1}. Let A = [u1 u2 ... un]. Since

    I_n = \overline{A}^T A = [\overline{u}_1^T; \overline{u}_2^T; ...; \overline{u}_n^T][u1 u2 ... un],

the (j, k) entry of the product gives

    \overline{u}_j^T u_k = 1 if j = k, and 0 if j ≠ k.

It follows that the columns u1, u2, . . . , un form an orthonormal set. The steps are reversible, establishing
the converse.
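The column-orthonormality criterion of Exercise 14 can be seen on a sample unitary matrix (ours, not from the exercise):

```python
import numpy as np

# The columns of a unitary matrix are orthonormal under the complex
# inner product conj(u_j)^T u_k.
A = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)
assert np.allclose(np.conj(A).T @ A, np.eye(2))      # A is unitary

for j in range(2):
    for k in range(2):
        ip = np.vdot(A[:, j], A[:, k])               # vdot conjugates its first argument
        assert np.isclose(ip, 1.0 if j == k else 0.0)
```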
15. Let A be a skew symmetric matrix, so that A^T = -A, and let λ be an eigenvalue of A with corresponding
eigenvector x. We show that \overline{λ} = -λ. We have Ax = λx. Multiplying both sides of this equation by
\overline{x}^T on the left, we have \overline{x}^T A x = λ \overline{x}^T x. Taking the conjugate transpose of both sides yields

    \overline{x}^T \overline{A}^T x = \overline{λ} \overline{x}^T x.

Therefore -\overline{x}^T A x = \overline{λ} \overline{x}^T x, or -λ \overline{x}^T x = \overline{λ} \overline{x}^T x, so (λ + \overline{λ})(\overline{x}^T x) = 0. Since x ≠ 0, \overline{x}^T x ≠ 0, so
\overline{λ} = -λ. Hence, the real part of λ is zero.
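The conclusion of Exercise 15 is easy to check on a sample real skew symmetric matrix (ours, not from the exercise):

```python
import numpy as np

# Every eigenvalue of a real skew symmetric matrix is purely imaginary.
A = np.array([[0.0, 2.0, -1.0],
              [-2.0, 0.0, 3.0],
              [1.0, -3.0, 0.0]])
assert np.allclose(A.T, -A)

evals = np.linalg.eigvals(A)
assert np.allclose(evals.real, 0.0, atol=1e-10)
```

(For this 3 × 3 example the eigenvalues are 0 and ±i√14.)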