ISM: Linear Algebra
Chapter 7, Section 7.1
1. If v is an eigenvector of A with eigenvalue λ, then Av = λv. Hence A³v = A²(Av) = A²(λv) = λA(Av) = λA(λv) = λ²Av = λ³v, so v is an eigenvector of A³ with eigenvalue λ³.
2. We know Av = λv, so v = A⁻¹Av = A⁻¹(λv) = λA⁻¹v, so A⁻¹v = (1/λ)v. Hence v is an eigenvector of A⁻¹ with eigenvalue 1/λ.
3. We know Av = λv, so (A + 2In)v = Av + 2In v = λv + 2v = (λ + 2)v; hence v is an eigenvector of A + 2In with eigenvalue λ + 2.

4. We know Av = λv, so 7Av = 7λv; hence v is an eigenvector of 7A with eigenvalue 7λ.

5. Assume Av = λv and Bv = βv for some eigenvalues λ, β. Then (A + B)v = Av + Bv = λv + βv = (λ + β)v, so v is an eigenvector of A + B with eigenvalue λ + β.

6. Yes. If Av = λv and Bv = βv, then ABv = A(βv) = β(Av) = βλv, so v is an eigenvector of AB with eigenvalue λβ.

7. We know Av = λv, so (A − λIn)v = Av − λIn v = λv − λv = 0; thus the nonzero vector v is in the kernel of A − λIn, so ker(A − λIn) ≠ {0} and A − λIn is not invertible.

8. We want all matrices [a b; c d] such that [a b; c d][1; 0] = 5[1; 0]. Hence [a; c] = [5; 0], i.e. the desired matrices must have the form [5 b; 0 d].

9. We want [a b; c d][1; 0] = λ[1; 0] for some λ. Hence [a; c] = [λ; 0], i.e. the desired matrices must have the form [λ b; 0 d]: they must be upper triangular.

10. We want [a b; c d][1; 2] = 5[1; 2], i.e. the desired matrices must have the form [5 − 2b, b; 10 − 2d, d].

11. We want [a b; c d][2; 3] = [2; 3]. So 2a + 3b = 2 and 2c + 3d = 3. Thus b = (2 − 2a)/3 and d = (3 − 2c)/3, so all matrices of the form [a, (2 − 2a)/3; c, (3 − 2c)/3] will fit.

12. Solving [2 0; 3 4][v1; v2] = 2[v1; v2] we get [v1; v2] = [t; −(3/2)t] (with t ≠ 0), and solving [2 0; 3 4][v1; v2] = 4[v1; v2] we get [v1; v2] = [0; t] (with t ≠ 0).

13. Solving [−6 6; −15 13][v1; v2] = 4[v1; v2], we get [v1; v2] = [(3/5)t; t] (with t ≠ 0).
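Eigenvector computations like the ones in Problem 12 are easy to spot-check numerically; a minimal sketch with NumPy, using the matrix of Problem 12:

```python
import numpy as np

# Matrix from Problem 12; since it is triangular, its eigenvalues are 2 and 4.
A = np.array([[2.0, 0.0],
              [3.0, 4.0]])

eigenvalues, _ = np.linalg.eig(A)

# Candidate eigenvectors found by hand: [2, -3] for lambda = 2
# (from 3*v1 + 2*v2 = 0) and [0, 1] for lambda = 4.
v2 = np.array([2.0, -3.0])
v4 = np.array([0.0, 1.0])

check2 = np.allclose(A @ v2, 2 * v2)
check4 = np.allclose(A @ v4, 4 * v4)
print(sorted(eigenvalues.real), bool(check2), bool(check4))
```

Any vector proportional to these (the t-multiples in the solution) passes the same check.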
14. We want to find all 4×4 matrices A such that Ae2 = λe2, i.e. the second column of A must be of the form [0; λ; 0; 0], so A = [a 0 c d; e λ f g; h 0 i j; k 0 l m].

15. Any vector on L is unaffected by the reflection, so a nonzero vector on L is an eigenvector with eigenvalue 1. Any vector on L⊥ is flipped about L, so a nonzero vector on L⊥ is an eigenvector with eigenvalue −1. Picking a nonzero vector from L and one from L⊥, we obtain a basis consisting of eigenvectors.

16. Rotation by 180° is a flip about the origin, so every nonzero vector is an eigenvector with eigenvalue −1. Any basis of R² consists of eigenvectors.

17. No (real) eigenvalues.

18. Any nonzero vector in the plane V is unchanged, hence is an eigenvector with eigenvalue 1. Any nonzero vector in V⊥ is flipped about the origin, so it is an eigenvector with eigenvalue −1. Pick any two noncollinear vectors from V and one from V⊥ to form a basis consisting of eigenvectors.

19. Any nonzero vector in L is an eigenvector with eigenvalue 1, and any nonzero vector in the plane L⊥ is an eigenvector with eigenvalue 0. Form a basis consisting of eigenvectors by picking any nonzero vector in L and any two nonparallel vectors in L⊥.

20. Any nonzero vector along the e3 axis is unchanged, hence is an eigenvector with eigenvalue 1. No other (real) eigenvalues can be found.

21. Any nonzero vector in R³ is an eigenvector with eigenvalue 5. Any basis of R³ consists of eigenvectors.

22. Any nonzero scalar multiple of v is an eigenvector with eigenvalue 1.
23. a. Since S = [v1 … vn], we have S⁻¹vi = S⁻¹(Sei) = ei.

b. The ith column of S⁻¹AS is S⁻¹ASei = S⁻¹Avi (by definition of S) = S⁻¹(λi vi) (since vi is an eigenvector) = λi S⁻¹vi = λi ei (by part a), hence S⁻¹AS = diag(λ1, λ2, …, λn).
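The conclusion of Problem 23, that S⁻¹AS is diagonal with the eigenvalues on the diagonal, can be checked numerically. A minimal sketch with NumPy, using a 2×2 matrix chosen for illustration (eigenvalues 5 and 10, eigenvectors [3; 1] and [1; 2]):

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [-2.0, 11.0]])
S = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # columns are the eigenvectors v1, v2

# By Problem 23, this product is diag(5, 10).
D = np.linalg.inv(S) @ A @ S
print(np.round(D, 10))
```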
24. See Figure 7.1.
Figure 7.1: for Problem 7.1.24.

25. See Figure 7.2.
Figure 7.2: for Problem 7.1.25.

26. See Figure 7.3.
Figure 7.3: for Problem 7.1.26.

27. See Figure 7.4.
Figure 7.4: for Problem 7.1.27.

28. See Figure 7.5.
Figure 7.5: for Problem 7.1.28.

29. See Figure 7.6.
Figure 7.6: for Problem 7.1.29.

30. Since the matrix is diagonal, e1 and e2 are eigenvectors. See Figure 7.7.
Figure 7.7: for Problem 7.1.30.

31. See Figure 7.8.
Figure 7.8: for Problem 7.1.31.
32. Since the matrix is diagonal, e1 and e2 are eigenvectors. See Figure 7.9.
Figure 7.9: for Problem 7.1.32.

33. We are given that x(t) = 2^t [1; 1] + 6^t [1; −1], hence we know that the eigenvalues are 2 and 6, with corresponding eigenvectors [1; 1] and [1; −1] (see Fact 7.1.3). So we want a matrix A such that A[1 1; 1 −1] = [2 6; 2 −6]. Multiplying on the right by [1 1; 1 −1]⁻¹, we get A = [4 −2; −2 4].
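The reconstruction in Problem 33 can be verified by rebuilding A from its eigen-data; a short sketch:

```python
import numpy as np

# Eigenvalues 2 and 6, eigenvectors [1, 1] and [1, -1], as read off from x(t).
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])
D = np.diag([2.0, 6.0])

# A [v1 v2] = [2 v1, 6 v2]  is equivalent to  A = S D S^{-1}
A = S @ D @ np.linalg.inv(S)
print(A)
```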
34. (A² + 2A + 3In)v = A²v + 2Av + 3In v = 4²v + 2·4v + 3v = (16 + 8 + 3)v = 27v, so v is an eigenvector of A² + 2A + 3In with eigenvalue 27.

35. Let λ be an eigenvalue of S⁻¹AS. Then for some nonzero vector v, S⁻¹ASv = λv, i.e. ASv = Sλv = λSv, so λ is an eigenvalue of A with eigenvector Sv. Conversely, if α is an eigenvalue of A with eigenvector w, then Aw = αw for some nonzero w. Therefore S⁻¹AS(S⁻¹w) = S⁻¹Aw = S⁻¹αw = αS⁻¹w, so S⁻¹w is an eigenvector of S⁻¹AS with eigenvalue α.

36. We want A such that A[3; 1] = [15; 5] and A[1; 2] = [10; 20], i.e. A[3 1; 1 2] = [15 10; 5 20], so A = [15 10; 5 20][3 1; 1 2]⁻¹ = [4 3; −2 11].
37. a. A = 5[0.6 0.8; 0.8 −0.6] is a scalar multiple of an orthogonal matrix. By Fact 7.1.2, the possible eigenvalues of the orthogonal matrix are ±1, so that the possible eigenvalues of A are ±5. In part b we see that both are indeed eigenvalues.

b. Solve Av = ±5v to get v1 = [2; 1] (for λ = 5) and v2 = [−1; 2] (for λ = −5).
38. For the given matrix A and the vector v = [1; 1; 1], we compute Av = [2; 2; 2] = 2[1; 1; 1] = 2v. The associated eigenvalue is 2.

39. We want [a b; c d][0; 1] = λ[0; 1] = [0; λ]. So b = 0, and d = λ (for any λ). Thus we need matrices of the form [a 0; c d] = a[1 0; 0 0] + c[0 0; 1 0] + d[0 0; 0 1].

So {[1 0; 0 0], [0 0; 1 0], [0 0; 0 1]} is a basis of V, and dim(V) = 3.
40. We need all matrices A such that [a b; c d][1; −3] = λ[1; −3] = [λ; −3λ].

Thus a − 3b = λ and c − 3d = −3λ. Thus c − 3d = −3(a − 3b) = −3a + 9b, or c = −3a + 9b + 3d. So A must be of the form [a, b; −3a + 9b + 3d, d] = a[1 0; −3 0] + b[0 1; 9 0] + d[0 0; 3 1].

Thus a basis of V is {[1 0; −3 0], [0 1; 9 0], [0 0; 3 1]}, and the dimension of V is 3.
41. We want [a b; c d][1; 1] = λ1[1; 1] and [a b; c d][1; 2] = λ2[1; 2]. So a + b = λ1 = c + d, and a + 2b = λ2, c + 2d = 2λ2.

So (a + 2b) − (a + b) = λ2 − λ1 = b, and a = λ1 − b = 2λ1 − λ2. Also, (c + 2d) − (c + d) = 2λ2 − λ1 = d, and c = λ1 − d = 2λ1 − 2λ2. So A must be of the form [2λ1 − λ2, λ2 − λ1; 2λ1 − 2λ2, 2λ2 − λ1] = λ1[2 −1; 2 −1] + λ2[−1 1; −2 2].

So a basis of V is {[2 −1; 2 −1], [−1 1; −2 2]}, and dim(V) = 2.
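The two basis matrices found in Problem 41 can be checked numerically: the first should fix [1; 1] and annihilate [1; 2], and the reverse for the second. A sketch:

```python
import numpy as np

B1 = np.array([[2.0, -1.0],
               [2.0, -1.0]])    # the lambda1-coefficient matrix
B2 = np.array([[-1.0, 1.0],
               [-2.0, 2.0]])    # the lambda2-coefficient matrix
u = np.array([1.0, 1.0])
w = np.array([1.0, 2.0])

# B1 u = u, B1 w = 0;  B2 u = 0, B2 w = w
ok = (np.allclose(B1 @ u, u) and np.allclose(B1 @ w, 0 * w)
      and np.allclose(B2 @ u, 0 * u) and np.allclose(B2 @ w, w))
print(bool(ok))
```

So any combination λ1·B1 + λ2·B2 has [1; 1] and [1; 2] as eigenvectors, with eigenvalues λ1 and λ2.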
42. We will do this in a slightly simpler manner than Exercise 40. Since A[1; 0; 0] is simply the first column of A, the first column must be a multiple of e1. Similarly, the third column must be a multiple of e3. There are no other restrictions on the form of A, meaning it can be any matrix of the form [a b 0; 0 c 0; 0 d e] = a[1 0 0; 0 0 0; 0 0 0] + b[0 1 0; 0 0 0; 0 0 0] + c[0 0 0; 0 1 0; 0 0 0] + d[0 0 0; 0 0 0; 0 1 0] + e[0 0 0; 0 0 0; 0 0 1].

Thus these five matrices form a basis of V, and the dimension of V is 5.
43. A = A In = A[e1 … en] = [λ1 e1 … λn en], where the eigenvalues λ1, …, λn are arbitrary. Thus A can be any diagonal matrix, and dim(V) = n.

44. Each of the columns 1 through m of A must be a multiple of its respective vector ei, so there is one free variable in each of the first m columns. The remaining n − m columns each have n free variables. Thus, in total, the dimension of V is m + (n − m)n = m + n² − nm.

45. Consider a vector w that is not parallel to v. We want A[v w] = [λv, av + bw], where λ, a and b are arbitrary constants. Thus the matrices A in V are of the form A = [λv, av + bw][v w]⁻¹. Using Summary 4.1.6, we see that [v 0][v w]⁻¹, [0 v][v w]⁻¹, [0 w][v w]⁻¹ is a basis of V, so that dim(V) = 3.
46. a. We need all matrices A such that [a b; c d][1; 2] = k[1; 2] = [k; 2k].

Thus a + 2b = k and c + 2d = 2k. So c + 2d = 2a + 4b, or c = 2a + 4b − 2d. So A must be of the form [a, b; 2a + 4b − 2d, d] = a[1 0; 2 0] + b[0 1; 4 0] + d[0 0; −2 1]. So a basis of V is {[1 0; 2 0], [0 1; 4 0], [0 0; −2 1]}, and the dimension of V is 3.

b. Clearly [1; 2] is a basis of the image of T, by definition of V, so that the rank of T is 1.

The kernel of T consists of all matrices [a b; c d] such that [a b; c d][1; 2] = 0, or a + 2b = 0, c + 2d = 0. These are the matrices of the form [−2b, b; −2d, d] = b[−2 1; 0 0] + d[0 0; −2 1].

Thus a basis of the kernel of T is {[−2 1; 0 0], [0 0; −2 1]}.
c. Let's find the kernel of L first. In part (a) we saw that the matrices in V are of the form A = [a, b; 2a + 4b − 2d, d]. A matrix A in V is in the kernel of L if [a, b; 2a + 4b − 2d, d][1; 3] = 0, or a + 3b = 0, 2a + 4b + d = 0. This system simplifies to a = −3b and d = 2b, so that the matrices in the kernel of L are of the form [−3b, b; −6b, 2b] = b[−3 1; −6 2]. The matrix [−3 1; −6 2] forms a basis of the kernel of L. By the rank-nullity theorem, the rank of L is dim(V) − dim(ker L) = 3 − 1 = 2, and the image of L is all of R².
47. Suppose V is a one-dimensional A-invariant subspace of Rⁿ, and v is a nonzero vector in V. Then Av will be in V, so that Av = λv for some λ, and v is an eigenvector of A. Conversely, if v is any eigenvector of A, then V = span(v) will be a one-dimensional A-invariant subspace. Thus the one-dimensional A-invariant subspaces V are exactly those of the form V = span(v), where v is an eigenvector of A.
48. a. Since span(e1) is an A-invariant subspace of R³, e1 must be an eigenvector of A, as shown in Exercise 47. Thus the first column of A must be of the form [a; 0; 0]. Since span(e1, e2) is also an A-invariant subspace, Ae2 must be in span(e1, e2), so the second column of A must have the form [b; c; 0]. The third column may be any vector in R³. Thus we can choose A = [1 1 1; 0 1 1; 0 0 1] to maximize the number of nonzero entries.

b. We see from our construction above that upper triangular matrices fit this description. This space V consists of all matrices of the form [a b d; 0 c e; 0 0 f] and has dimension 6.
49. The eigenvalues of the system are λ1 = 1.1 and λ2 = 0.9, with corresponding eigenvectors v1 = [100; 300] and v2 = [200; 100], respectively. So if x0 = [100; 800], we can see that x0 = 3v1 − v2. Therefore, by Fact 7.1.3, x(t) = 3(1.1)^t [100; 300] − (0.9)^t [200; 100], i.e. c(t) = 300(1.1)^t − 200(0.9)^t and r(t) = 900(1.1)^t − 100(0.9)^t.

50. Let v(t) = [c(t); r(t)], with Av(t) = v(t + 1), where A = [4 2; −1 1]. Now we will proceed as in the example worked on pages 292 through 295.
a. v(0) = [100; −100], and we see that Av(0) = [4 2; −1 1][100; −100] = [200; −200] = 2[100; −100]. So v(t) = A^t v(0) = A^t [100; −100] = 2^t [100; −100]. So c(t) = 100(2)^t and r(t) = −100(2)^t.

b. v(0) = [200; −100], and we see that Av(0) = [4 2; −1 1][200; −100] = [600; −300] = 3[200; −100]. So v(t) = A^t v(0) = A^t [200; −100] = 3^t [200; −100]. So c(t) = 200(3)^t and r(t) = −100(3)^t.

c. v(0) = [600; −500]. We can write this in terms of the previous eigenvectors as v(0) = 4[100; −100] + [200; −100]. So v(t) = A^t v(0) = 4A^t [100; −100] + A^t [200; −100] = 4(2)^t [100; −100] + (3)^t [200; −100].

So c(t) = 400(2)^t + 200(3)^t and r(t) = −400(2)^t − 100(3)^t.
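The closed form in part (c) can be compared against direct iteration of v(t + 1) = Av(t); a sketch:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [-1.0, 1.0]])
v = np.array([600.0, -500.0])   # initial state from part (c)

# Closed form: v(t) = 4 * 2^t [100, -100] + 3^t [200, -100]
def v_closed(t):
    return (4 * 2.0**t * np.array([100.0, -100.0])
            + 3.0**t * np.array([200.0, -100.0]))

for t in range(1, 9):
    v = A @ v                   # iterate one step
    assert np.allclose(v, v_closed(t))
print(v)
```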
51. Let v(t) = [c(t); r(t)], with Av(t) = v(t + 1), where A = [0 0.75; −1.5 2.25]. Now we will proceed as in the example worked on pages 292 through 295.

a. v(0) = [100; 200], and we see that Av(0) = [0 0.75; −1.5 2.25][100; 200] = [150; 300] = 1.5[100; 200]. So v(t) = A^t v(0) = (1.5)^t [100; 200]. So c(t) = 100(1.5)^t and r(t) = 200(1.5)^t.

b. v(0) = [100; 100], and we see that Av(0) = [0 0.75; −1.5 2.25][100; 100] = [75; 75] = 0.75[100; 100]. So v(t) = A^t v(0) = (0.75)^t [100; 100]. So c(t) = 100(0.75)^t and r(t) = 100(0.75)^t.

c. v(0) = [500; 700]. We can write this in terms of the previous eigenvectors as v(0) = 3[100; 100] + 2[100; 200]. So v(t) = A^t v(0) = 3(0.75)^t [100; 100] + 2(1.5)^t [100; 200].

So c(t) = 300(0.75)^t + 200(1.5)^t and r(t) = 300(0.75)^t + 400(1.5)^t.
52. a. [0.978 −0.006; 0.004 0.992][1; −2] = [0.99; −1.98] = 0.99[1; −2], and [0.978 −0.006; 0.004 0.992][3; −1] = [2.94; −0.98] = 0.98[3; −1]. The eigenvalues are λ1 = 0.99 and λ2 = 0.98.

b. x0 = [g0; h0] = [100; 0] = −20[1; −2] + 40[3; −1], hence x(t) = −20(0.99)^t [1; −2] + 40(0.98)^t [3; −1], so

g(t) = −20(0.99)^t + 120(0.98)^t and h(t) = 40(0.99)^t − 40(0.98)^t.

Figure 7.10: for Problem 7.1.52b.

h(t) first rises, then falls back to zero. g(t) falls a little below zero, then goes back up to zero. See Figure 7.10.

c. We set g(t) = −20(0.99)^t + 120(0.98)^t = 0.
Solving for t, we get that g(t) = 0 for t ≈ 176 minutes. (After t = 176, g(t) < 0.)

53. Let v(t) = [a(t); b(t); c(t)] be the amount of gold each has after t days, with Av(t) = v(t + 1). Since a(t + 1) = (1/2)b(t) + (1/2)c(t), etc., we have A = (1/2)[0 1 1; 1 0 1; 1 1 0]. A[1; 1; 1] = [1; 1; 1], so [1; 1; 1] has eigenvalue λ1 = 1. A[1; −1; 0] = [−1/2; 1/2; 0], so [1; −1; 0] has eigenvalue λ2 = −1/2. Also, A[1; 0; −1] = [−1/2; 0; 1/2], so [1; 0; −1] has eigenvalue λ3 = −1/2.

a. v(0) = [6; 1; 2] = 3[1; 1; 1] + 2[1; −1; 0] + [1; 0; −1].

So v(t) = A^t v(0) = 3A^t [1; 1; 1] + 2A^t [1; −1; 0] + A^t [1; 0; −1] = 3λ1^t [1; 1; 1] + 2λ2^t [1; −1; 0] + λ3^t [1; 0; −1] = 3[1; 1; 1] + 2(−1/2)^t [1; −1; 0] + (−1/2)^t [1; 0; −1].

So a(t) = 3 + 3(−1/2)^t, b(t) = 3 − 2(−1/2)^t and c(t) = 3 − (−1/2)^t.

b. a(365) = 3 + 3(−1/2)^365 = 3 − 3/2^365, b(365) = 3 − 2(−1/2)^365 = 3 + 1/2^364, and c(365) = 3 − (−1/2)^365 = 3 + 1/2^365.

So, Benjamin will have the most gold.
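Because the amounts of gold are exact rationals, the dynamics of Problem 53 can be simulated exactly with Python's fractions module; a sketch confirming the closed form and that Benjamin (b) ends up with the most gold:

```python
from fractions import Fraction

# Each day, every player's gold becomes the average of the other two players'.
a, b, c = Fraction(6), Fraction(1), Fraction(2)
for _ in range(365):
    a, b, c = (b + c) / 2, (a + c) / 2, (a + b) / 2

# Closed form: a = 3 + 3s, b = 3 - 2s, c = 3 - s with s = (-1/2)^t
s = Fraction(-1, 2) ** 365
print(b > c > a, a == 3 + 3 * s, b == 3 - 2 * s, c == 3 - s)
```

The total a + b + c = 9 is conserved, and on every odd day the ordering is b > c > a.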
54. a. We are given that n(t + 1) = 2a(t) and a(t + 1) = n(t) + a(t), so that the matrix is A = [0 2; 1 1].

b. A[1; 1] = [0 2; 1 1][1; 1] = [2; 2] = 2[1; 1] and A[2; −1] = [0 2; 1 1][2; −1] = [−2; 1] = −[2; −1], hence 2 and −1 are the eigenvalues associated with [1; 1] and [2; −1], respectively.

c. We are given x0 = [n0; a0] = [1; 0], so x0 = (1/3)[1; 1] + (1/3)[2; −1], and x(t) = (1/3)2^t [1; 1] + (1/3)(−1)^t [2; −1] (by Fact 7.1.3), hence n(t) = (1/3)2^t + (2/3)(−1)^t and a(t) = (1/3)2^t − (1/3)(−1)^t.
7.2
1. λ1 = 1, λ2 = 3, by Fact 7.2.2.

2. λ1 = 2 (algebraic multiplicity 2), λ2 = 1 (algebraic multiplicity 2), by Fact 7.2.2.

3. det(A − λI2) = det [5 − λ, 2; −4, −1 − λ] = λ² − 4λ + 3 = (λ − 1)(λ − 3) = 0, so λ1 = 1, λ2 = 3.

4. det(A − λI2) = det [−λ, 4; −1, 4 − λ] = −λ(4 − λ) + 4 = (λ − 2)² = 0, so λ = 2 with algebraic multiplicity 2.

5. det(A − λI2) = λ² − 4λ + 13, so det(A − λI2) = 0 for no real λ.

6. det(A − λI2) = λ² − 5λ − 2 = 0, so λ1,2 = (5 ± √33)/2.
7. λ = 1 with algebraic multiplicity 3, by Fact 7.2.2.

8. fA(λ) = −λ²(λ + 3), so λ1 = 0 (algebraic multiplicity 2), λ2 = −3.

9. fA(λ) = (2 − λ)²(1 − λ), so λ1 = 2 (algebraic multiplicity 2), λ2 = 1.

10. fA(λ) = (1 + λ)²(1 − λ), so λ1 = −1 (algebraic multiplicity 2), λ2 = 1.

11. fA(λ) = −λ³ − λ² − λ − 1 = −(λ + 1)(λ² + 1) = 0, so λ = −1 (algebraic multiplicity 1).

12. fA(λ) = λ(λ + 1)(λ − 1)², so λ1 = 0, λ2 = −1, λ3 = 1 (algebraic multiplicity 2).

13. fA(λ) = −λ³ + 1 = −(λ − 1)(λ² + λ + 1), so λ = 1 (algebraic multiplicity 1).

14. fA(λ) = det(B − λI2) det(D − λI2) (see Fact 6.1.8). The eigenvalues of A are the eigenvalues of B and D. The eigenvalues of C are irrelevant.

15. fA(λ) = λ² − 2λ + (1 − k) = 0 when λ1,2 = (2 ± √(4 − 4(1 − k)))/2 = 1 ± √k. The matrix A has 2 distinct real eigenvalues when k > 0, and no real eigenvalues when k < 0.

16. fA(λ) = λ² − (a + c)λ + (ac − b²). The discriminant of this quadratic equation is (a + c)² − 4(ac − b²) = a² + 2ac + c² − 4ac + 4b² = (a − c)² + 4b²; this quantity is always positive (since b ≠ 0). There will always be two distinct real eigenvalues.

17. fA(λ) = λ² − a² − b² = 0, so λ1,2 = ±√(a² + b²). The matrix A represents a reflection about a line followed by a scaling by √(a² + b²), hence the eigenvalues.

18. fA(λ) = λ² − 2aλ + a² − b², so λ1,2 = (2a ± √(4a² − 4(a² − b²)))/2 = a ± b. Hence the eigenvalues are a ± b.

19. True, since fA(λ) = λ² − tr(A)λ + det(A), and the discriminant [tr(A)]² − 4 det(A) is positive if det(A) is negative.

20. The characteristic polynomial of A is fA(λ) = (λ1 − λ)(λ2 − λ) = λ² − (λ1 + λ2)λ + λ1λ2. But from Fact 7.2.4 we know that fA(λ) = λ² − tr(A)λ + det(A). Comparing the coefficients of λ, we see that λ1 + λ2 = tr(A), as claimed.

21. If A has n eigenvalues, then fA(λ) = (λ1 − λ)(λ2 − λ) ⋯ (λn − λ) = (−λ)ⁿ + (λ1 + λ2 + ⋯ + λn)(−λ)ⁿ⁻¹ + ⋯ + (λ1 λ2 ⋯ λn). But, by Fact 7.2.5, the coefficient of (−λ)ⁿ⁻¹ is tr(A). So tr(A) = λ1 + ⋯ + λn.

22. By Fact 6.2.7, fA(λ) = det(A − λIn) = det((A − λIn)ᵀ) = det(Aᵀ − λIn) = fAᵀ(λ). Since the characteristic polynomials of A and Aᵀ are identical, the two matrices have the same eigenvalues, with the same algebraic multiplicities.
23. fB(λ) = det(B − λIn) = det(S⁻¹AS − λIn) = det(S⁻¹AS − S⁻¹(λIn)S) = det(S⁻¹(A − λIn)S) = det(S⁻¹) det(A − λIn) det(S) = (det S)⁻¹ det(A − λIn) det(S) = det(A − λIn) = fA(λ). Hence, since fA(λ) = fB(λ), A and B have the same eigenvalues.

24. λ1 = 0.25, λ2 = 1.

25. A[b; c] = [ab + bc; cb + cd] = [(a + c)b; (b + d)c] = [b; c], since a + c = b + d = 1; therefore [b; c] is an eigenvector with eigenvalue λ1 = 1.

Also, A[1; −1] = [a − b; c − d] = (a − b)[1; −1], since a − b = −(c − d); therefore [1; −1] is an eigenvector with eigenvalue λ2 = a − b. Note that |a − b| < 1; a possible phase portrait is shown in Figure 7.11.
Figure 7.11: for Problem 7.2.25.

26. Here [b; c] = [0.25; 0.5] is an eigenvector with λ1 = 1, and [1; −1] is an eigenvector with λ2 = a − b = 0.25. See Figure 7.12.
Figure 7.12: for Problem 7.2.26.

27. a. We know v1 = [1; 2], λ1 = 1 and v2 = [1; −1], λ2 = 1/4. If x0 = [1; 0], then x0 = (1/3)v1 + (2/3)v2, so by Fact 7.1.3,

x1(t) = 1/3 + (2/3)(1/4)^t
x2(t) = 2/3 − (2/3)(1/4)^t.

If x0 = [0; 1], then x0 = (1/3)v1 − (1/3)v2, so by Fact 7.1.3,

x1(t) = 1/3 − (1/3)(1/4)^t
x2(t) = 2/3 + (1/3)(1/4)^t.

See Figure 7.13.

b. A^t approaches (1/3)[1 1; 2 2] as t → ∞. See part c for a justification.

c. Let us think about the first column of A^t, which is A^t e1. We can use Fact 7.1.3 to compute A^t e1. Start by writing e1 = c1[b; c] + c2[1; −1]; a straightforward computation shows that c1 = 1/(b + c) and c2 = c/(b + c).

Figure 7.13: for Problem 7.2.27a.

Now A^t e1 = (1/(b + c))[b; c] + (c/(b + c))(λ2)^t [1; −1], where λ2 = a − b.

Since |λ2| < 1, the second summand goes to zero, so that lim (A^t e1) as t → ∞ is (1/(b + c))[b; c].

Likewise, lim (A^t e2) as t → ∞ is (1/(b + c))[b; c], so that lim A^t = (1/(b + c))[b b; c c].
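The limit in part c can be observed numerically; a sketch using the entries of Problem 26 (b = 0.25, c = 0.5, and, as an assumption to make the columns sum to 1, a = 0.5 and d = 0.75):

```python
import numpy as np

A = np.array([[0.5, 0.25],
              [0.5, 0.75]])      # columns sum to 1; lambda2 = a - b = 0.25

At = np.linalg.matrix_power(A, 50)

# Predicted limit: (1/(b+c)) [b b; c c] with b = 0.25, c = 0.5
limit = np.array([[1/3, 1/3],
                  [2/3, 2/3]])
print(np.round(At, 12))
```

Since (1/4)^50 is far below machine precision, A^50 already agrees with the limit to full accuracy.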
28. a. w(t + 1) = 0.8w(t) + 0.1m(t) and m(t + 1) = 0.2w(t) + 0.9m(t), so A = [0.8 0.1; 0.2 0.9], which is a regular transition matrix, since its columns sum to 1 and its entries are positive.

b. The eigenvectors of A are [0.1; 0.2], or [1; 2], with λ1 = 1, and [1; −1] with λ2 = 0.7.

x0 = [1200; 0] = 400[1; 2] + 800[1; −1], so x(t) = 400[1; 2] + 800(0.7)^t [1; −1], or

w(t) = 400 + 800(0.7)^t
m(t) = 800 − 800(0.7)^t.

c. As t → ∞, w(t) → 400, so Wipfs won't have to close the store.
29. The ith entry of Ae is [ai1 ai2 … ain]e = Σ_{j=1}^n aij = 1, so Ae = e and λ = 1 is an eigenvalue of A, corresponding to the eigenvector e.
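The observation in Problem 29 holds for any matrix with row sums equal to 1; a sketch with a made-up 3×3 example:

```python
import numpy as np

# Rows sum to 1 (entries chosen arbitrarily for illustration).
A = np.array([[0.2, 0.3, 0.5],
              [0.6, 0.1, 0.3],
              [0.25, 0.25, 0.5]])
e = np.ones(3)

print(A @ e)                      # equals e, so lambda = 1, eigenvector e
eigenvalues = np.linalg.eigvals(A)
print(bool(np.isclose(eigenvalues, 1).any()))
```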
30. a. Let vi be the largest component of the vector v, that is, vi ≥ vj for j = 1, …, n. Then the ith component of Av is

λvi = Σ_{j=1}^n aij vj ≤ Σ_{j=1}^n aij vi = vi Σ_{j=1}^n aij = vi.

We can conclude that λvi ≤ vi, and therefore λ ≤ 1, as claimed. Also note that if v is not a multiple of the eigenvector e discussed in Exercise 29, then vj < vi for some index j, so that Σ_{j=1}^n aij vj < Σ_{j=1}^n aij vi, and therefore λ < 1.

b. Let vi be the component of v with the largest absolute value, that is, |vi| ≥ |vj| for j = 1, 2, …, n. Then the absolute value of the ith component of Av is

|λ| |vi| = |Σ_{j=1}^n aij vj| ≤ Σ_{j=1}^n aij |vj| ≤ Σ_{j=1}^n aij |vi| = |vi|,

so that |λ| |vi| ≤ |vi| and |λ| ≤ 1, as claimed.
31. Since A and Aᵀ have the same eigenvalues (by Exercise 22), Exercise 29 states that λ = 1 is an eigenvalue of A, and Exercise 30 says that |λ| ≤ 1 for all eigenvalues λ. The vector e need not be an eigenvector of A; consider A = [0.9 0.9; 0.1 0.1].

32. fA(λ) = −λ³ + 3λ + k. The eigenvalues of A are the solutions of the equation −λ³ + 3λ + k = 0, or λ³ − 3λ = k. Following the hint, we graph the function g(λ) = λ³ − 3λ, as shown in Figure 7.14. We use the derivative g′(λ) = 3λ² − 3 to see that g has a local minimum at (1, −2) and a local maximum at (−1, 2). To count the eigenvalues of A, we need to find out how many times the horizontal line y = k intersects the graph of g. In Figure 7.14, we see that there are three solutions if −2 < k < 2, two solutions if k = −2 or k = 2, and one solution if |k| > 2.

33. a. fA(λ) = det(A − λI3) = −λ³ + cλ² + bλ + a
Figure 7.14: for Problem 7.2.32 (graph of g(λ) = λ³ − 3λ, with local maximum (−1, 2) and local minimum (1, −2)).

b. By part a, we have c = 17, b = −5 and a = π, so M = [0 1 0; 0 0 1; π −5 17].
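The companion-matrix construction of Problem 33 can be checked numerically. A sketch (note that NumPy's poly returns the monic polynomial λ³ − cλ² − bλ − a, i.e. −fA(λ) in the manual's sign convention):

```python
import numpy as np

# M = [0 1 0; 0 0 1; a b c] has fA(lambda) = -lambda^3 + c lambda^2 + b lambda + a
a, b, c = np.pi, -5.0, 17.0
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [a,   b,   c]])

coefficients = np.poly(M)   # coefficients of det(lambda I - M)
print(np.round(coefficients, 10))
```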
34. Consider the possible graphs of fA(λ), assuming that it has two distinct real roots.

Figure 7.15: for Problem 7.2.34 (fA(λ) = (1 − λ)(λ − 2)(λ² + 1)).

Algebraic multiplicity of each eigenvalue is 1. Example: [1 0 0 0; 0 2 0 0; 0 0 0 1; 0 0 −1 0]. See Figure 7.15.

Figure 7.16: for Problem 7.2.34 (fA(λ) = (λ − 2)²(1 − λ)²).

Algebraic multiplicity of each eigenvalue is 2. Example: [2 0 0 0; 0 2 0 0; 0 0 1 0; 0 0 0 1]. See Figure 7.16.

Figure 7.17: for Problem 7.2.34 (fA(λ) = (λ − 2)(1 − λ)³).

Algebraic multiplicity of 2 is 1, and of 1 is 3. Example: [2 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]. See Figure 7.17.

35. A = [0 1 0 0; −1 0 0 0; 0 0 0 1; 0 0 −1 0], with fA(λ) = (λ² + 1)².

36. Let A be the block-diagonal 2n×2n matrix with n copies of B = [0 1; −1 0] along the diagonal; then fA(λ) = (λ² + 1)ⁿ.
37. We can write fA(λ) = (λ − λ0)² g(λ) for some polynomial g. The product rule for derivatives tells us that fA′(λ) = 2(λ − λ0)g(λ) + (λ − λ0)² g′(λ), so that fA′(λ0) = 0, as claimed.

38. By Fact 7.2.4, the characteristic polynomial of A is fA(λ) = λ² − 5λ − 14 = (λ − 7)(λ + 2), so that the eigenvalues are 7 and −2.

39. tr(AB) = tr([a b; c d][e f; g h]) = tr [ae + bg, af + bh; ce + dg, cf + dh] = ae + bg + cf + dh.

tr(BA) = tr([e f; g h][a b; c d]) = tr [ea + fc, eb + fd; ga + hb, gc + hd] = ea + fc + gb + hd.

So they are equal.
40. Let the entries of A be aij and the entries of B be bij. Now tr(AB) = (a11 b11 + a12 b21 + ⋯ + a1n bn1) + (a21 b12 + ⋯ + a2n bn2) + ⋯ + (an1 b1n + ⋯ + ann bnn). This is the sum of all products of the form aij bji. We see that tr(BA) = (b11 a11 + ⋯ + b1n an1) + ⋯ + (bn1 a1n + ⋯ + bnn ann), which also is the sum of all products of the form bji aij = aij bji. Thus tr(AB) = tr(BA).

41. There exists an invertible S such that B = S⁻¹AS, and tr(B) = tr(S⁻¹AS) = tr((S⁻¹A)S). By Exercise 40, this equals tr(S(S⁻¹A)) = tr(A).

42. tr((A + B)²) = tr(A² + AB + BA + B²) = tr(A²) + tr(AB) + tr(BA) + tr(B²). By Exercise 40, tr(AB) = tr(BA). Thus tr((A + B)²) = tr(A²) + 2 tr(BA) + tr(B²) = tr(A²) + tr(B²), since BA = 0.

43. tr(AB − BA) = tr(AB) − tr(BA) = tr(AB) − tr(AB) = 0, but tr(In) = n, so no such A, B exist. We have used Exercise 40.

44. No, there are no such matrices A and B. We will argue indirectly, assuming that invertible matrices A and B with AB − BA = A do exist. Then AB = BA + A = (B + In)A, and ABA⁻¹ = B + In. Using Exercise 41, we see that tr(B) = tr(ABA⁻¹) = tr(B + In) = tr(B) + n, a contradiction.

45. fA(λ) = λ² − tr(A)λ + det(A) = λ² − 2λ − 3 − 4k. We want fA(5) = 25 − 10 − 3 − 4k = 0, or 12 − 4k = 0, or k = 3.
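The trace identities of Exercises 40 and 41 are easy to confirm on random matrices; a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# tr(AB) = tr(BA), hence tr(S^{-1} A S) = tr(A) for invertible S.
t1 = np.trace(A @ B)
t2 = np.trace(B @ A)
S = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # generically invertible
t3 = np.trace(np.linalg.inv(S) @ A @ S)
print(bool(np.isclose(t1, t2)), bool(np.isclose(t3, np.trace(A))))
```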
46. a. λ1² + λ2² = (λ1 + λ2)² − 2λ1λ2 = (tr A)² − 2 det(A) = (a + d)² − 2(ad − bc) = a² + d² + 2bc.

b. Based on part (a), we need to show that a² + d² + 2bc ≤ a² + b² + c² + d², or 2bc ≤ b² + c², or 0 ≤ (b − c)². But the last inequality is obvious.

c. By parts (a) and (b), the equality λ1² + λ2² = a² + b² + c² + d² holds if (and only if) 0 = (b − c)², or b = c. Thus equality holds for symmetric matrices A.

47. Let M = [v1 v2]. We want AM = M[2 0; 0 3], or [Av1 Av2] = [2v1 3v2]. Since v1 or v2 must be nonzero, 2 or 3 must be an eigenvalue of A.

48. Let S = [v1 v2]. Then AS = [Av1 Av2] and SD = [2v1 3v2], so that v1 must be an eigenvector with eigenvalue 2, and v2 must be an eigenvector with eigenvalue 3. Thus both 2 and 3 must be eigenvalues of A.

49. As in Problem 47, such an M will exist if A has an eigenvalue 2, 3 or 4.
50. a. If f(x) = x³ + 6x − 20, then f′(x) = 3x² + 6, so f′(x) > 0 for all x, i.e. f is always increasing, hence has only one real root.

b. If v³ − u³ = 20 and vu = 2, then

(v − u)³ + 6(v − u) = v³ − 3v²u + 3vu² − u³ + 6(v − u) = v³ − u³ − 3vu(v − u) + 6(v − u) = 20 − 6(v − u) + 6(v − u) = 20.

Hence x = v − u satisfies the equation x³ + 6x = 20.

c. The second equation tells us that u = 2/v, or u³ = 8/v³. Substituting into the first equation, we find that v³ − 8/v³ = 20, or (v³)² − 8 = 20v³, or (v³)² − 20v³ − 8 = 0, with solutions v³ = (20 ± √(400 + 32))/2 = 10 ± √108 = 10 ± 6√3. Take v³ = 10 + √108, so v = ∛(10 + √108). Now u³ = v³ − 20 = −10 + √108, and u = ∛(√108 − 10).

d. Let v = ∛(q/2 + √((q/2)² + (p/3)³)) and u = ∛(−q/2 + √((q/2)² + (p/3)³)).

Then v³ − u³ = q, and vu = ∛((q/2)² + (p/3)³ − (q/2)²) = p/3.

Since x = v − u, we have

x³ + px = v³ − 3v²u + 3vu² − u³ + p(v − u) = v³ − u³ − 3vu(v − u) + p(v − u) = q − p(v − u) + p(v − u) = q, as claimed.

If p is negative, the expression (q/2)² + (p/3)³ may be negative. Also, the equation x³ + px = q may have more than one solution in this case.

e. Setting x = t − a/3, we get (t − a/3)³ + a(t − a/3)² + b(t − a/3) + c = 0, or t³ − at² + at² + (linear and constant terms) = 0, or t³ + (linear and constant terms) = 0, as claimed (bring the constant terms to the right-hand side).
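Cardano's substitution from parts (c) and (d) can be coded directly; a sketch, valid when the radicand (q/2)² + (p/3)³ is nonnegative:

```python
import math

def cardano_root(p, q):
    # Real root of x^3 + p x = q via x = v - u, with
    # v = cbrt(q/2 + sqrt(R)), u = cbrt(-q/2 + sqrt(R)), R = (q/2)^2 + (p/3)^3.
    radicand = (q / 2) ** 2 + (p / 3) ** 3
    if radicand < 0:
        raise ValueError("formula requires (q/2)^2 + (p/3)^3 >= 0")
    s = math.sqrt(radicand)
    v = math.copysign(abs(q / 2 + s) ** (1 / 3), q / 2 + s)
    u = math.copysign(abs(-q / 2 + s) ** (1 / 3), -q / 2 + s)
    return v - u

x = cardano_root(6, 20)   # x^3 + 6x = 20 has the unique real root x = 2
print(x)
```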
7.3
1. λ1 = 7, λ2 = 9. E7 = ker [0 8; 0 2] = span([1; 0]), E9 = ker [−2 8; 0 0] = span([4; 1]).
Eigenbasis: [1; 0], [4; 1].

2. λ1 = 2, λ2 = 0. E2 = span([1; 1]), E0 = span([1; −1]).
Eigenbasis: [1; 1], [1; −1].

3. λ1 = 4, λ2 = 9. E4 = span([3; −2]), E9 = span([1; 1]).
Eigenbasis: [3; −2], [1; 1].
4. λ1 = λ2 = 1, E1 = span([1; 1]). No eigenbasis.

5. No real eigenvalues, as fA(λ) = λ² − 2λ + 2.

6. λ1,2 = (7 ± √57)/2.
Eigenbasis: [3; (3 + √57)/2] ≈ [3; 5.27] and [3; (3 − √57)/2] ≈ [3; −2.27].

7. λ1 = 1, λ2 = 2, λ3 = 3; eigenbasis: e1, e2, e3.
8. λ1 = 1, λ2 = 2, λ3 = 3; eigenbasis: [1; 0; 0], [1; 1; 0], [1; 2; 1].

9. λ1 = λ2 = 1, λ3 = 0; an eigenbasis is obtained by picking two independent eigenvectors from E1 and one from E0.

10. λ1 = λ2 = 1, λ3 = 0; here E1 and E0 are each only one-dimensional, so there is no eigenbasis.

11. λ1 = λ2 = 0, λ3 = 3; eigenbasis: [1; −1; 0], [1; 0; −1], [1; 1; 1].

12. λ1 = λ2 = λ3 = 1, E1 = span(e1); no eigenbasis.

13. λ1 = 0, λ2 = 1, λ3 = −1; an eigenbasis is obtained by picking one eigenvector for each of the three eigenvalues.

14. λ1 = 0, λ2 = λ3 = 1; here the eigenvalue 1 has geometric multiplicity 2, so one eigenvector for 0 together with two independent eigenvectors for 1 forms an eigenbasis.

15. λ1 = 0, λ2 = λ3 = 1; E0 is one-dimensional, and using Kyle Numbers we see that E1 is one-dimensional as well.

There is no eigenbasis, since the eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity only 1.

16. λ1 = 0 (no other real eigenvalues), with a one-dimensional eigenspace; no real eigenbasis.

17. λ1 = λ2 = 0, λ3 = λ4 = 1, with an eigenbasis consisting of two eigenvectors from E0 and two from E1.

18. λ1 = λ2 = 0, λ3 = λ4 = 1, E0 = span(e1, e3), E1 = span(e2); no eigenbasis.
19. Since 1 is the only eigenvalue, with algebraic multiplicity 3, there exists an eigenbasis for A if (and only if) the geometric multiplicity of the eigenvalue 1 is 3 as well, that is, if E1 = R³. Now E1 = ker [0 a b; 0 0 c; 0 0 0] is R³ if (and only if) a = b = c = 0.

If a = b = c = 0, then E1 is 3-dimensional, with eigenbasis e1, e2, e3. If a ≠ 0 and c ≠ 0, then E1 is 1-dimensional; otherwise E1 is 2-dimensional. The geometric multiplicity of the eigenvalue 1 is dim(E1).

20. For λ1 = 1, E1 = ker [0 a b; 0 0 c; 0 0 1] = ker [0 a 0; 0 0 1; 0 0 0], so if a = 0 then E1 is 2-dimensional; otherwise it is 1-dimensional. For λ2 = 2, E2 = ker [−1 a b; 0 −1 c; 0 0 0], so E2 is 1-dimensional. Hence there is an eigenbasis if a = 0.

21. We want A such that A[1; 2] = [1; 2] and A[2; 3] = 2[2; 3] = [4; 6], i.e. A[1 2; 2 3] = [1 4; 2 6], so A = [1 4; 2 6][1 2; 2 3]⁻¹ = [5 −2; 6 −2].

The answer is unique.

22. We want A such that Ae1 = 7e1 and Ae2 = 7e2, hence A = [7 0; 0 7].
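The construction in Problem 21 can be checked numerically; a sketch:

```python
import numpy as np

# A must satisfy A [1, 2]^T = [1, 2]^T and A [2, 3]^T = 2 [2, 3]^T.
S = np.array([[1.0, 2.0],
              [2.0, 3.0]])
target = np.array([[1.0, 4.0],
                   [2.0, 6.0]])

A = target @ np.linalg.inv(S)
print(A)
```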
23. λ1 = λ2 = 1 and E1 = span(e1), hence there is no eigenbasis. The matrix represents a shear parallel to the x-axis.
24. Let A = [a b; c d]. First we want [a b; c d][2; 1] = [2; 1], or 2a + b = 2, 2c + d = 1. This condition is satisfied by all matrices of the form A = [a, 2 − 2a; c, 1 − 2c]. Next, we want there to be no other eigenvalue besides 1, so that 1 must have algebraic multiplicity 2: the characteristic polynomial must be (λ − 1)² = λ² − 2λ + 1, so the trace must be 2, and a + (1 − 2c) = 2, or a = 1 + 2c. Thus we want a matrix of the form A = [1 + 2c, −4c; c, 1 − 2c].

Finally, we have to make sure that E1 = span([2; 1]) instead of E1 = R². This means that we must exclude the case A = I2. In order to ensure this, we state simply that A = [1 + 2c, −4c; c, 1 − 2c], where c is any nonzero constant.

25. If λ is an eigenvalue of A, then Eλ = ker(A − λI3) = ker [−λ, 1, 0; 0, −λ, 1; a, b, c − λ].

The second and third columns of the above matrix aren't parallel, hence Eλ is always 1-dimensional, i.e. the geometric multiplicity of λ is 1.
26. Note that fA(0) = det(A − 0·I6) = det(A) is negative. Since fA(λ) → +∞ as λ → ∞ (the characteristic polynomial of a 6×6 matrix has leading term λ⁶), there must be a positive root, by the Intermediate Value Theorem (see Exercise 2.2.47c). Therefore, the matrix A has a positive eigenvalue. See Figure 7.18.
Figure 7.18: for Problem 7.3.26.

27. By Fact 7.2.4, we have fA(λ) = λ² − 5λ + 6 = (λ − 3)(λ − 2), so λ1 = 2, λ2 = 3.

28. Since Jn(k) is triangular, its eigenvalues are its diagonal entries, hence its only eigenvalue is k. Moreover, Ek = ker(Jn(k) − kIn) is the kernel of the n×n matrix with 1's on the superdiagonal and 0's elsewhere, which is span(e1).

The geometric multiplicity of k is 1, while its algebraic multiplicity is n.

29. Note that r is the number of nonzero diagonal entries of A, since the nonzero columns of A form a basis of im(A). Therefore, there are n − r zeros on the diagonal, so that the algebraic multiplicity of the eigenvalue 0 is n − r. It is true for any n×n matrix A that the geometric multiplicity of the eigenvalue 0 is dim(ker(A)) = n − rank(A) = n − r.

30. Since A is triangular, fA(λ) = (a11 − λ)(a22 − λ) ⋯ (amm − λ)(0 − λ)^(n−m). Hence the algebraic multiplicity of λ = 0 is n − m. Also note that the rank of A is at least m, since the first m columns of A are linearly independent. Therefore, the geometric multiplicity of the eigenvalue 0 is dim(ker(A)) = n − rank(A) ≤ n − m.

31. They must be the same. For if they were not, by Fact 7.3.7, the geometric multiplicities would not add up to n.

32. Recall that a matrix and its transpose have the same rank (Fact 5.3.9c). The geometric multiplicity of λ as an eigenvalue of A is dim(ker(A − λIn)) = n − rank(A − λIn). The geometric multiplicity of λ as an eigenvalue of Aᵀ is dim(ker(Aᵀ − λIn)) = dim(ker((A − λIn)ᵀ)) = n − rank((A − λIn)ᵀ) = n − rank(A − λIn). We can see that the two multiplicities are the same.

33. If S⁻¹AS = B, then S⁻¹(A − λIn)S = S⁻¹(AS − λS) = S⁻¹AS − λS⁻¹S = B − λIn.

34. Note that SB = AS.
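The multiplicity count of Problem 28 can be confirmed with a rank computation; a sketch:

```python
import numpy as np

def jordan_block(n, k):
    # k on the diagonal, 1 on the superdiagonal
    return k * np.eye(n) + np.diag(np.ones(n - 1), 1)

n, k = 5, 3.0
J = jordan_block(n, k)

# geometric multiplicity of k = dim ker(J - kI) = n - rank(J - kI)
geometric = n - np.linalg.matrix_rank(J - k * np.eye(n))
print(geometric)   # 1, while the algebraic multiplicity is n = 5
```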
a. If x is in the kernel of B, then ASx = SBx = S·0 = 0, so that Sx is in ker(A).

b. T is clearly linear, and the transformation R(x) = S⁻¹x is the inverse of T (if x is in the kernel of A, then S⁻¹x is in the kernel of B, by an argument analogous to part (a)).

c. The equation nullity(A) = nullity(B) follows from part (b); the equation rank(A) = rank(B) then follows from the rank-nullity theorem (Fact 3.3.7).
35. No, since the two matrices have different eigenvalues (see Fact 7.3.6c).

36. No, since the two matrices have different traces (see Fact 7.3.6d).
37. a. (Av)·w = (Av)ᵀw = (vᵀAᵀ)w = (vᵀA)w = vᵀ(Aw) = v·(Aw), since A is symmetric.

b. Assume Av = λv and Aw = μw for λ ≠ μ. Then (Av)·w = (λv)·w = λ(v·w), and v·(Aw) = v·(μw) = μ(v·w). By part a, λ(v·w) = μ(v·w), i.e. (λ − μ)(v·w) = 0. Since λ ≠ μ, it must be that v·w = 0, i.e. v and w are perpendicular.
38. Note that fA(0) = det(A − 0·I3) = det(A) = 1. Since fA(λ) → −∞ as λ → ∞, the polynomial fA(λ) must have a positive root λ0, by the Intermediate Value Theorem. In other words, the matrix A will have a positive eigenvalue λ0. Since A is orthogonal, this eigenvalue λ0 will be 1, by Fact 7.1.2. This means that there is a nonzero vector v in R³ such that Av = 1·v = v, as claimed. See Figure 7.19.

Figure 7.19: for Problem 7.3.38.
39. a. There are two eigenvalues, λ1 = 1 (with E1 = V) and λ2 = 0 (with E0 = V⊥).

Now geometric multiplicity(1) = dim(E1) = dim(V) = m, and geometric multiplicity(0) = dim(E0) = dim(V⊥) = n − dim(V) = n − m.

Since geometric multiplicity(λ) ≤ algebraic multiplicity(λ), by Fact 7.3.7, and the algebraic multiplicities cannot add up to more than n, the geometric and algebraic multiplicities of the eigenvalues are the same here.

b. Analogous to part a: E1 = V and E−1 = V⊥. geometric multiplicity(1) = algebraic multiplicity(1) = dim(V) = m, and geometric multiplicity(−1) = algebraic multiplicity(−1) = dim(V⊥) = n − m.
40. The matrix of the dynamical system is A =
so fA () = (a  )2  b2 . 1 1 and 1 . 1
Hence, 1,2 = a b, and the respective eigenvectors are Since x(0) = 3
1 2
=
7 4
1 1 1 5 1 7 5 4 , by Fact 7.1.3, x(t) = 4 (a+b)t + 4 (ab)t . 1 1 1 1
Note that ab is between 0 and 1, so that the second summand in the formula above goes to 0 as t goes to infinity. Qualitatively different outcomes occur depending on whether a + b exceeds 1, equals 1, or is less than 1. See Figure 7.20.
Figure 7.20: for Problem 7.3.40.
41. The eigenvalues of A are 1.2, 0.8, 0.4, with eigenvectors v1 = [9, 6, 2], v2 = [2, 2, 1], v3 = [1, 2, 2]. Since x0 = 50v1 + 50v2 + 50v3, we have x(t) = 50(1.2)^t v1 + 50(0.8)^t v2 + 50(0.4)^t v3, so, as t goes to infinity, j(t) : n(t) : a(t) approaches the proportion 9 : 6 : 2.
42. C(t + 1) = 0.8C(t) + 10, so if we set A = [[0.8, 10],[0, 1]], then [C(t + 1), 1] = A [C(t), 1]. A has eigenvectors [1, 0] and [50, 1], corresponding to λ1 = 0.8 and λ2 = 1. Since [C(0), 1] = [0, 1] = [50, 1] − 50[1, 0], we have C(t) = 50 − 50(0.8)^t; hence in the long run, there will be 50 spectators. The graph of C(t) looks similar to the graph in Figure 7.21.
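The affine recursion in Problem 42 can be checked directly; this sketch iterates C(t+1) = 0.8 C(t) + 10 from C(0) = 0 and compares it with the closed form C(t) = 50 − 50(0.8)^t (the fixed point is 10/(1 − 0.8) = 50):

```python
# Iterate C(t+1) = 0.8*C(t) + 10 and compare with the closed form at each step.
C = 0.0
for t in range(1, 41):
    C = 0.8 * C + 10
    closed = 50 - 50 * 0.8 ** t
    assert abs(C - closed) < 1e-9   # recursion and closed form agree

print(round(C, 4))   # after 40 weeks, close to the long-run value of 50
```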
Figure 7.21: for Problem 7.3.42.
43. a. A = (1/2) [[0, 1, 1],[1, 0, 1],[1, 1, 0]].
b. After 10 rounds, we have A^10 [7, 11, 5] ≈ [7.6660156, 7.6699219, 7.6640625]. After 50 rounds, we have A^50 [7, 11, 5] ≈ [7.66666666667, 7.66666666667, 7.66666666667].
c. The eigenvalues of A are 1 and −1/2, with E1 = span([1, 1, 1]) and E−1/2 two-dimensional (the plane of vectors whose components add up to 0), so x(t) = m[1, 1, 1] + (−1/2)^t (x(0) − m[1, 1, 1]), where m is the average of the three initial numbers. After 1001 rounds, Alberich will be ahead of Brunnhilde by (1/2)^1001, so that Carl needs to beat Alberich to win the game. A straightforward computation shows that c(1001) − a(1001) = (1/2)^1001 (1 − c0); Carl wins if this quantity is positive, which is the case if c0 is less than 1. Alternatively, observe that the ranking of the players is reversed in each round: whoever is first will be last after the next round. Since the total number of rounds is odd (1001), Carl wants to be last initially to win the game; he wants to choose a smaller number than both Alberich and Brunnhilde.
44. a. a11 = 0.7 means that only 70% of the pollutant present in Lake Silvaplana at a given time is still there a week later; some is carried down to Lake Sils by the river Inn, and some is absorbed or evaporates. The other diagonal entries can be interpreted analogously. a21 = 0.1 means that 10% of the pollutant present in Lake Silvaplana at any given time can be found in Lake Sils a week later, carried down by the river Inn. The significance of the coefficient a32 = 0.2 is analogous; a31 = 0 means that no pollutant is carried down from Lake Silvaplana to Lake St. Moritz in just one week. The matrix is lower triangular since no pollutant is carried from Lake Sils to Lake Silvaplana, for example (the river Inn flows the other way).
b. The eigenvalues of A are 0.8, 0.6, 0.7, with corresponding eigenvectors [0, 0, 1], [0, 1, −1], [1, 1, −2]. Since x(0) = [100, 0, 0] = 100[0, 0, 1] − 100[0, 1, −1] + 100[1, 1, −2], we have x(t) = 100(0.8)^t [0, 0, 1] − 100(0.6)^t [0, 1, −1] + 100(0.7)^t [1, 1, −2], or x1(t) = 100(0.7)^t, x2(t) = 100(0.7)^t − 100(0.6)^t, x3(t) = 100(0.8)^t + 100(0.6)^t − 200(0.7)^t. See Figure 7.22.
Figure 7.22: for Problem 7.3.44b.
Using calculus, we find that the function x2(t) = 100(0.7)^t − 100(0.6)^t reaches its maximum at t ≈ 2.33. Keep in mind, however, that our model holds for integer t only.
45. a. A = [[0.1, 0.2],[0.4, 0.3]], b = [1, 2].
b. B = [[A, b],[0, 1]].
c. The eigenvalues of A are 0.5 and −0.1, with associated eigenvectors [1, 2] and [1, −1]. The eigenvalues of B are 0.5, −0.1, and 1. If Av = λv, then B[v, 0] = [Av, 0] = λ[v, 0], so [v, 0] is an eigenvector of B. Furthermore, [2, 4, 1] is an eigenvector of B corresponding to the eigenvalue 1. Note that this vector is [−(A − I2)⁻¹b, 1].
d. Write y(0) = [x1(0), x2(0), 1] = c1[1, 2, 0] + c2[1, −1, 0] + c3[2, 4, 1]. Note that c3 = 1. Now y(t) = c1(0.5)^t [1, 2, 0] + c2(−0.1)^t [1, −1, 0] + [2, 4, 1], so that x(t) → [2, 4] as t → ∞.
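Problem 44 claims that x2(t) = 100(0.7)^t − 100(0.6)^t peaks near t ≈ 2.33; setting the derivative 100(0.7^t ln 0.7 − 0.6^t ln 0.6) to zero gives a closed form for the peak, checked below (pure Python):

```python
import math

# Solve 0.7**t * ln(0.7) = 0.6**t * ln(0.6) for t:
#   (0.7/0.6)**t = ln(0.6)/ln(0.7), so
#   t* = ln(ln(0.6)/ln(0.7)) / ln(0.7/0.6)
t_star = math.log(math.log(0.6) / math.log(0.7)) / math.log(0.7 / 0.6)
print(round(t_star, 2))   # about 2.33

# Sanity check: x2 is smaller slightly to either side of t*
x2 = lambda t: 100 * (0.7 ** t - 0.6 ** t)
assert x2(t_star) > x2(t_star - 0.1) and x2(t_star) > x2(t_star + 0.1)
```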
46. a. T1(t + 1) = 0.6T1(t) + 0.1T2(t) + 20, T2(t + 1) = 0.1T1(t) + 0.6T2(t) + 0.1T3(t) + 20, T3(t + 1) = 0.1T2(t) + 0.6T3(t) + 40, so A = [[0.6, 0.1, 0],[0.1, 0.6, 0.1],[0, 0.1, 0.6]] and b = [20, 20, 40].
b. B = [[A, b],[0, 1]].
c. y(10) = B^10 [0, 0, 0, 1] ≈ [70.86, 93.95, 120.56, 1] and y(30) = B^30 [0, 0, 0, 1] ≈ [74.989, 99.985, 124.989, 1]; y(t) seems to approach [75, 100, 125, 1] as t → ∞.
d. The eigenvalues of A are λ1 ≈ 0.45858, λ2 = 0.6, λ3 ≈ 0.74142, so the eigenvalues of B are λ1 ≈ 0.45858, λ2 = 0.6, λ3 ≈ 0.74142, λ4 = 1. If v1, v2, v3 are eigenvectors of A (with Avi = λivi), then [v1, 0], [v2, 0], [v3, 0] are corresponding eigenvectors of B. Furthermore, [75, 100, 125, 1] is an eigenvector of B with eigenvalue 1. Since λ1, λ2, λ3 are all less than 1 in absolute value, lim x(t) = [75, 100, 125] as t → ∞, as in Exercise 45.
47. a. If x(t) = [r(t), p(t), w(t)], then x(t + 1) = Ax(t) with A = [[1/2, 1/4, 0],[1/2, 1/2, 1/2],[0, 1/4, 1/2]]. The eigenvalues of A are 0, 1/2, 1, with eigenvectors [1, −2, 1], [1, 0, −1], [1, 2, 1]. Since x(0) = [1, 0, 0] = (1/4)[1, −2, 1] + (1/2)[1, 0, −1] + (1/4)[1, 2, 1], we have x(t) = (1/2)(1/2)^t [1, 0, −1] + (1/4)[1, 2, 1] for t > 0.
b. As t → ∞, the ratio approaches 1 : 2 : 1 (since the first summand of x(t) goes to zero).
48. a. We are told that a(t + 1) = a(t) + j(t) and j(t + 1) = a(t), so that A = [[1, 1],[1, 0]].
b. fA(λ) = −λ(1 − λ) − 1 = λ² − λ − 1, so λ1,2 = (1 ± √5)/2, with eigenvectors [λ1, 1] and [λ2, 1]. Since x(0) = [1, 0] = (1/√5)([λ1, 1] − [λ2, 1]), we have x(t) = (1/√5) λ1^t [λ1, 1] − (1/√5) λ2^t [λ2, 1], i.e. a(t) = (1/√5)(λ1^(t+1) − λ2^(t+1)) and j(t) = (1/√5)(λ1^t − λ2^t).
c. As t → ∞, a(t)/j(t) → λ1 = (1 + √5)/2, since |λ2| < 1.
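Problem 48's closed form is Binet's formula for the Fibonacci numbers; a short check in float arithmetic (pure Python):

```python
import math

# a(t+1) = a(t) + j(t), j(t+1) = a(t), with a(0) = 1, j(0) = 0.
# Closed form: j(t) = (l1**t - l2**t)/sqrt(5), a(t) = (l1**(t+1) - l2**(t+1))/sqrt(5)
s5 = math.sqrt(5)
l1, l2 = (1 + s5) / 2, (1 - s5) / 2

a, j = 1, 0
for t in range(25):
    assert round((l1 ** t - l2 ** t) / s5) == j          # Binet matches j(t)
    assert round((l1 ** (t + 1) - l2 ** (t + 1)) / s5) == a  # and a(t)
    a, j = a + j, a

print(a / j)   # the ratio a(t)/j(t) approaches the golden ratio l1 ~ 1.618
```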
49. This "random" matrix A = [0 v2 ... vn] is unlikely to have any zeros above the diagonal. In this case, the columns v2, ..., vn will be linearly independent (none of them is redundant), so that rank(A) = n − 1 and geometric multiplicity(0) = dim(ker(A)) = n − rank(A) = 1. Alternatively, you can argue in terms of rref(A).
50. a. fA(λ) = (2 − λ)² − 3, so λ1,2 = 2 ± √3 (approximately 3.73 and 0.27), with eigenvectors [1, √3] and [1, −√3]. See Figure 7.23.
Figure 7.23: for Problem 7.3.50a.
b. The trajectory starting at [0, 1] is above the line E_λ1, so that A^t [0, 1] (the second column of A^t) has a slope of more than √3, for all t. Applying this to t = 6 gives the estimate √3 < 1351/780. Likewise, the trajectory starting at [1, 1] is below E_λ1, so that A^t [1, 1] (the sum of the two columns of A^t) has a slope of less than √3. Applying this to t = 4 gives 265/153 < √3.
c. det(A^6) = (det A)^6 = 1, and det(A^6) = 1351² − 780·2340, so that 1351² − 780·2340 = 1. Dividing both sides by 780·1351, we obtain 1351/780 − 2340/1351 = 1/(780·1351) < 10⁻⁶. Now note that 2340/1351 is the slope of A^6 [1, 0] (the first column of A^6), which is less than √3. Therefore 0 < 1351/780 − √3 < 1351/780 − 2340/1351 < 10⁻⁶, as claimed.
d. The slope of A^6 [1, 1] = [2131, 3691] is less than √3, i.e. 3691/2131 < √3.
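The integer matrix powers behind Problem 50 are easy to verify exactly (a minimal sketch with a hand-rolled 2x2 multiply; the matrix A = [[2, 1], [3, 2]] is the one whose characteristic polynomial is (2 − λ)² − 3):

```python
# Exact integer arithmetic: compute A**6 and check the sqrt(3) bounds.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [3, 2]]
P = [[1, 0], [0, 1]]
for _ in range(6):
    P = matmul(P, A)

print(P)   # A**6 = [[1351, 780], [2340, 1351]]
assert P == [[1351, 780], [2340, 1351]]
assert 1351 ** 2 - 780 * 2340 == 1                   # det(A**6) = det(A)**6 = 1
assert (2340 / 1351) ** 2 < 3 < (1351 / 780) ** 2    # column slopes bracket sqrt(3)
```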
7.4
1. Matrix A is diagonal already, so it is certainly diagonalizable. Let S = I2.
2. Diagonalizable. The eigenvalues are 2 and 3, with associated eigenvectors [1, 0] and [1, 1]. If we let S = [[1, 1],[0, 1]], then S⁻¹AS = D = [[2, 0],[0, 3]].
3. Diagonalizable. The eigenvalues are 0 and 3, with associated eigenvectors [1, −1] and [1, 2]. If we let S = [[1, 1],[−1, 2]], then S⁻¹AS = D = [[0, 0],[0, 3]].
4. Diagonalizable. The eigenvalues are 0 and 7, with associated eigenvectors [2, −1] and [1, 3]. If we let S = [[2, 1],[−1, 3]], then S⁻¹AS = D = [[0, 0],[0, 7]].
5. Fails to be diagonalizable. There is only one eigenvalue, 1, with a one-dimensional eigenspace.
6. Fails to be diagonalizable. There is only one eigenvalue, 2, with a one-dimensional eigenspace.
7. Diagonalizable. The eigenvalues are 2 and 3; letting S have the two associated eigenvectors as its columns gives S⁻¹AS = D = [[2, 0],[0, 3]].
8. Diagonalizable. The eigenvalues are 4 and 2, with associated eigenvectors [1, 1] and [1, −1]. If we let S = [[1, 1],[1, −1]], then S⁻¹AS = D = [[4, 0],[0, 2]].
9. Fails to be diagonalizable. There is only one eigenvalue, 1, with a one-dimensional eigenspace.
10. Fails to be diagonalizable. There is only one eigenvalue, 1, with a one-dimensional eigenspace.
11. Fails to be diagonalizable. The eigenvalues are 1, 2, 1, and the eigenspace E1 = ker(A − I3) = span(e1) is only one-dimensional.
12. Diagonalizable. The eigenvalues are 2, 1, 1, with associated eigenvectors [1, 0, 0], [0, 1, 0], [1, 0, 1]. If we let S = [[1, 0, 1],[0, 1, 0],[0, 0, 1]], then S⁻¹AS = D = diag(2, 1, 1).
13. Diagonalizable. The eigenvalues are 1, 2, 3, with associated eigenvectors [1, 0, 0], [1, 1, 0], [1, 2, 1]. If we let S = [[1, 1, 1],[0, 1, 2],[0, 0, 1]], then S⁻¹AS = D = diag(1, 2, 3).
14. Diagonalizable. The eigenvalues are 3, 2, 1, with associated eigenvectors [1, 0, 0], [1, 1, 0], [0, 1, 1]. If we let S = [[1, 1, 0],[0, 1, 1],[0, 0, 1]], then S⁻¹AS = D = diag(3, 2, 1).
15. Diagonalizable, with associated eigenvectors [2, 1, 0], [1, 1, 0], [0, 0, 1]. If we let S = [[2, 1, 0],[1, 1, 0],[0, 0, 1]], then S⁻¹AS = D is diagonal, with the eigenvalues on the diagonal.
16. Diagonalizable. The eigenvalues are 3, 2, 1, with associated eigenvectors [2, 0, 1], [1, 0, 1], [0, 1, 0]. If we let S = [[2, 1, 0],[0, 0, 1],[1, 1, 0]], then S⁻¹AS = D = diag(3, 2, 1).
17. Diagonalizable. The eigenvalues are 0, 3, 0, with associated eigenvectors [1, −1, 0], [1, 1, 1], [1, 0, −1]. If we let S = [[1, 1, 1],[−1, 1, 0],[0, 1, −1]], then S⁻¹AS = D = diag(0, 3, 0).
18. Diagonalizable. The eigenvalues are 0, 2, 1; letting S have the three associated eigenvectors as its columns gives S⁻¹AS = D = diag(0, 2, 1).
19. Fails to be diagonalizable. The eigenvalues are 1, 0, 1, and the eigenspace E1 = ker(A − I3) = span(e1) is only one-dimensional.
20. Diagonalizable. The eigenvalues are 1, 2, 0; letting S have the three associated eigenvectors as its columns gives S⁻¹AS = D = diag(1, 2, 0).
21. Diagonalizable for all values of a, since there are always two distinct eigenvalues, 1 and 2. See Fact 7.4.3.
22. Diagonalizable except if b = 1 and a = 0. (In that case we have only one eigenvalue, 1, with a one-dimensional eigenspace.)
23. Diagonalizable for positive a. The characteristic polynomial is (λ − 1)² − a, so that the eigenvalues are λ = 1 ± √a. If a is positive, then we have two distinct real eigenvalues, so that the matrix is diagonalizable. If a is negative, then there are no real eigenvalues. If a is 0, then 1 is the only eigenvalue, with a one-dimensional eigenspace.
24. Diagonalizable for all values of a, b, and c. The characteristic polynomial is λ² − (a + c)λ + ac − b², so that the eigenvalues are λ = ((a + c) ± √((a + c)² − 4(ac − b²)))/2 = ((a + c) ± √((a − c)² + 4b²))/2. Note that the expression whose square root we take (the "discriminant") is always positive or 0, since it is the sum of two squares. If the discriminant is positive, then we have two distinct real eigenvalues, and everything is fine. The discriminant is 0 only if a = c and b = 0. In that case the matrix is diagonal already, and certainly diagonalizable as well.
25. Diagonalizable for all values of a, b, and c, since we have three distinct eigenvalues, 1, 2, and 3.
26. The eigenvalues are 1, 2, 1, and the matrix is diagonalizable if (and only if) the eigenspace E1 is two-dimensional. Now E1 = ker(A − I3) = ker [[0, a, b],[0, 1, c],[0, 0, 0]] = ker [[0, 1, c],[0, 0, b − ac],[0, 0, 0]], which is two-dimensional if (and only if) b − ac = 0. Thus the matrix is diagonalizable if and only if b = ac.
27. Diagonalizable only if a = b = c = 0. Since 1 is the only eigenvalue, it is required that E1 = R³, that is, the matrix must be the identity matrix.
28. Diagonalizable for positive values of a. The characteristic polynomial is −λ³ + aλ = −λ(λ² − a). If a is positive, then we have three distinct real eigenvalues, 0 and ±√a, so that the matrix will be diagonalizable. If a is negative or 0, then 0 is the only real eigenvalue, and the matrix fails to be diagonalizable.
29. Not diagonalizable for any a. The characteristic polynomial is −λ³ + a, so that there is only one real eigenvalue, λ = ∛a, for all a. Since the corresponding eigenspace isn't all of R³, the matrix fails to be diagonalizable.
30. First we observe that all the eigenspaces of A = [[0, 0, a],[1, 0, 3],[0, 1, 0]] are one-dimensional, regardless of the value of a, since rref(A − λI3) has leading ones in its first two columns for all λ. Thus A is diagonalizable if and only if there are three distinct real eigenvalues. The characteristic polynomial of A is −λ³ + 3λ + a, so the eigenvalues of A are the solutions of the equation λ³ − 3λ = a. Consider Figure 7.24 with the function f(λ) = λ³ − 3λ; using calculus, we find the local maximum f(−1) = 2 and the local minimum f(1) = −2. To count the distinct eigenvalues of A, we have to examine how many times the horizontal line y = a intersects the graph of f(λ). The answer is three if |a| < 2, two if |a| = 2, and one if |a| > 2. Thus A is diagonalizable if and only if |a| < 2, that is, −2 < a < 2.
Figure 7.24: for Problem 7.4.30.
31. In Example 2 of Section 7.3 we see that the eigenvalues of A = [[1, 2],[4, 3]] are −1 and 5, with associated eigenvectors [1, −1] and [1, 2]. If we let S = [[1, 1],[−1, 2]], then S⁻¹AS = D = [[−1, 0],[0, 5]]. Thus A = SDS⁻¹ and
A^t = SD^tS⁻¹ = (1/3) [[1, 1],[−1, 2]] [[(−1)^t, 0],[0, 5^t]] [[2, −1],[1, 1]] = (1/3) [[2(−1)^t + 5^t, (−1)^(t+1) + 5^t],[2(5^t) − 2(−1)^t, 2(5^t) + (−1)^t]].
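The closed form for A^t in Problem 31 can be verified against repeated multiplication (a pure-Python sketch, exact integer arithmetic):

```python
# Compare A**t for A = [[1, 2], [4, 3]] with the diagonalization formula
# 3 * A**t = [[2*(-1)**t + 5**t, (-1)**(t+1) + 5**t],
#             [2*5**t - 2*(-1)**t, 2*5**t + (-1)**t]]
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [4, 3]]
P = [[1, 0], [0, 1]]   # P holds A**t, starting at t = 0
for t in range(8):
    closed = [[2 * (-1) ** t + 5 ** t, (-1) ** (t + 1) + 5 ** t],
              [2 * 5 ** t - 2 * (-1) ** t, 2 * 5 ** t + (-1) ** t]]
    assert all(3 * P[i][j] == closed[i][j] for i in range(2) for j in range(2))
    P = matmul(P, A)

print("closed form matches A**t for t = 0..7")
```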
32. The eigenvalues of A = [[4, −2],[1, 1]] are 3 and 2, with associated eigenvectors [2, 1] and [1, 1]. If we let S = [[2, 1],[1, 1]], then S⁻¹AS = D = [[3, 0],[0, 2]]. Thus A = SDS⁻¹ and A^t = SD^tS⁻¹ = [[2, 1],[1, 1]] [[3^t, 0],[0, 2^t]] [[1, −1],[−1, 2]] = [[2(3^t) − 2^t, 2^(t+1) − 2(3^t)],[3^t − 2^t, 2^(t+1) − 3^t]].
33. The eigenvalues of A = [[1, 2],[3, 6]] are 0 and 7, with associated eigenvectors [2, −1] and [1, 3]. If we let S = [[2, 1],[−1, 3]], then S⁻¹AS = D = [[0, 0],[0, 7]]. Thus A = SDS⁻¹ and A^t = SD^tS⁻¹ = (1/7) [[7^t, 2(7^t)],[3(7^t), 6(7^t)]] = 7^(t−1) A. We can find the same result more directly by observing that A² = 7A.
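Problem 33's shortcut A² = 7A is a one-line check:

```python
# For A = [[1, 2], [3, 6]], verify A*A = 7*A (so A**t = 7**(t-1) * A for t >= 1).
A = [[1, 2], [3, 6]]
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert A2 == [[7 * x for x in row] for row in A]
print(A2)   # [[7, 14], [21, 42]]
```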
34. The eigenvalues of A are 1/4 and 1, with associated eigenvectors [1, −1] and [1, 2]. If we let S = [[1, 1],[−1, 2]], then S⁻¹AS = D = [[1/4, 0],[0, 1]]. Thus A = SDS⁻¹ and A^t = SD^tS⁻¹ = (1/3) [[1 + 2(1/4)^t, 1 − (1/4)^t],[2 − 2(1/4)^t, 2 + (1/4)^t]].
35. Matrix [[−1, 6],[−2, 6]] has the eigenvalues 3 and 2. If v and w are associated eigenvectors, and if we let S = [v w], then S⁻¹ [[−1, 6],[−2, 6]] S = [[3, 0],[0, 2]], so that the matrix [[−1, 6],[−2, 6]] is indeed similar to [[3, 0],[0, 2]].
36. Yes. The matrices [[−1, 6],[−2, 6]] and [[1, 2],[−1, 4]] both have the eigenvalues 3 and 2, so that each of them is similar to the diagonal matrix [[3, 0],[0, 2]], by Algorithm 7.4.4. Thus [[−1, 6],[−2, 6]] is similar to [[1, 2],[−1, 4]], by parts b and c of Fact 3.4.6.
37. Yes. Matrices A and B have the same characteristic polynomial, λ² − 7λ + 7, so that they have the same two distinct real eigenvalues λ1,2 = (7 ± √21)/2. Thus both A and B are similar to the diagonal matrix diag(λ1, λ2), by Algorithm 7.4.4. Therefore A is similar to B, by parts b and c of Fact 3.4.6.
38. No. As a counterexample, consider A = [[2, 0],[0, 2]] and B = [[2, 1],[0, 2]].
39. The eigenfunctions with eigenvalue λ are the nonzero functions f(x) such that T(f(x)) = f′(x) − f(x) = λf(x), or f′(x) = (λ + 1)f(x). From calculus we recall that those are the exponential functions of the form f(x) = Ce^((λ+1)x), where C is a nonzero constant. Thus all real numbers are eigenvalues of T, and the eigenspace Eλ is one-dimensional, spanned by e^((λ+1)x).
40. The eigenfunctions with eigenvalue λ are the nonzero functions f(x) such that T(f(x)) = 5f′(x) − 3f(x) = λf(x), or f′(x) = ((λ + 3)/5)f(x). From calculus we recall that those are the exponential functions of the form f(x) = Ce^((λ+3)x/5), where C is a nonzero constant. Thus all real numbers are eigenvalues of T, and the eigenspace Eλ is one-dimensional, spanned by e^((λ+3)x/5).
41. The nonzero symmetric matrices are eigenmatrices with eigenvalue 2, since L(A) = A + Aᵀ = 2A in this case. The nonzero skew-symmetric matrices have eigenvalue 0, since L(A) = A + Aᵀ = A − A = 0. Yes, L is diagonalizable, since we have the eigenbasis [[1, 0],[0, 0]], [[0, 1],[1, 0]], [[0, 0],[0, 1]], [[0, 1],[−1, 0]] (three symmetric matrices, and one skew-symmetric one).
42. The nonzero symmetric matrices are eigenmatrices with eigenvalue 0, since L(A) = A − Aᵀ = A − A = 0 in this case. The nonzero skew-symmetric matrices have eigenvalue 2, since L(A) = A − Aᵀ = A + A = 2A. Yes, L is diagonalizable, since we have the eigenbasis [[1, 0],[0, 0]], [[0, 1],[1, 0]], [[0, 0],[0, 1]], [[0, 1],[−1, 0]] (three symmetric matrices, and one skew-symmetric one).
43. The nonzero real numbers are "eigenvectors" with eigenvalue 1, and the nonzero imaginary numbers (of the form iy) are "eigenvectors" with eigenvalue −1. Yes, T is diagonalizable, since we have the eigenbasis 1, i.
44. The nonzero sequence (x0, x1, x2, ...) is an eigensequence with eigenvalue λ if T(x0, x1, x2, ...) = (x2, x3, x4, ...) = λ(x0, x1, x2, ...) = (λx0, λx1, λx2, ...). This means that x2 = λx0, x3 = λx1, ..., xn+2 = λxn, .... These are the sequences of the form (a, b, λa, λb, λ²a, λ²b, ...), where at least one of the first two terms, a and b, is nonzero. Thus all real numbers are eigenvalues of T, and the eigenspace Eλ is two-dimensional, with basis (1, 0, λ, 0, λ², 0, ...), (0, 1, 0, λ, 0, λ², ...).
45. The nonzero sequence (x0, x1, x2, ...) is an eigensequence with eigenvalue λ if T(x0, x1, x2, ...) = (0, x0, x1, x2, ...) = λ(x0, x1, x2, ...) = (λx0, λx1, λx2, ...). This means that 0 = λx0, x0 = λx1, x1 = λx2, ..., xn = λxn+1, .... If λ is nonzero, then these equations imply that x0 = (1/λ)·0 = 0, x1 = (1/λ)x0 = 0, x2 = (1/λ)x1 = 0, ..., so that there are no eigensequences in this case. If λ = 0, then we have x0 = λx1 = 0, x1 = λx2 =
0, x2 = λx3 = 0, ..., so that there aren't any eigensequences either. In summary: there are no eigenvalues and eigensequences for T.
46. The nonzero sequence (x0, x1, x2, ...) is an eigensequence with eigenvalue λ if T(x0, x1, x2, ...) = (x0, x2, x4, ...) = λ(x0, x1, x2, ...) = (λx0, λx1, λx2, ...). This means that x0 = λx0, x2 = λx1, x4 = λx2, ..., x2n = λxn, .... For each λ, there are lots of eigensequences: we can choose the terms xk for odd k freely and then fix the xk for even k according to the formula x2n = λxn. For example, the eigenspace E3 consists of the sequences of the form (x0 = 0, x1, x2 = 3x1, x3, x4 = 9x1, x5, x6 = 3x3, x7, x8 = 27x1, x9, ...), where x1, x3, x5, x7, x9, ... are arbitrary. Note that all the eigenspaces are infinite-dimensional. The condition x0 = λx0 implies that x0 = 0, except for λ = 1, in which case x0 is arbitrary.
47. The nonzero even functions, of the form f(x) = a + cx², are eigenfunctions with eigenvalue 1, and the nonzero odd functions, of the form f(x) = bx, have eigenvalue −1. Yes, T is diagonalizable, since the standard basis, 1, x, x², is an eigenbasis for T.
48. Apply T to the standard basis: T(1) = 1, T(x) = 2x, and T(x²) = (2x)² = 4x². This gives the eigenvalues 1, 2, and 4, with corresponding eigenfunctions 1, x, x². Yes, T is diagonalizable, since the standard basis is an eigenbasis for T.
49. The matrix of T with respect to the standard basis 1, x, x² is B = [[1, −1, 1],[0, 3, −6],[0, 0, 9]]. The eigenvalues of B are 1, 3, 9, with corresponding eigenvectors [1, 0, 0], [−1, 2, 0], [1, −4, 4]. The eigenvalues of T are 1, 3, 9, with corresponding eigenfunctions 1, 2x − 1, and 4x² − 4x + 1 = (2x − 1)². Yes, T is diagonalizable, since the functions 1, 2x − 1, (2x − 1)² form an eigenbasis.
50. The matrix of T with respect to the standard basis 1, x, x² is B = [[1, 3, 9],[0, 1, 6],[0, 0, 1]]. The only eigenvalue of B is 1, with corresponding eigenvector [1, 0, 0].
The only eigenvalue of T is 1 as well, with corresponding eigenfunction f(x) = 1. T fails to be diagonalizable, since there is only one eigenvalue, with a one-dimensional eigenspace.
51. The nonzero constant functions f(x) = b are the eigenfunctions with eigenvalue 0. If f(x) is a polynomial of degree ≥ 1, then the degree of f(x) exceeds the degree of f′(x) by 1 (by the power rule of calculus), so that f′(x) cannot be a scalar multiple of f(x). Thus 0 is the only eigenvalue of T, and the eigenspace E0 consists of the constant functions.
52. Let f(x) = a0 + a1x + a2x² + ··· + anxⁿ, with an ≠ 0, be an eigenfunction of T with eigenvalue λ. Then T(f(x)) = x(a1 + 2a2x + ··· + nanx^(n−1)) = a1x + 2a2x² + ··· + nanxⁿ = λ(a0 + a1x + a2x² + ··· + anxⁿ) = λa0 + λa1x + λa2x² + ··· + λanxⁿ. This means that λa0 = 0, a1 = λa1, 2a2 = λa2, ..., nan = λan. Since we assumed that an ≠ 0, we can conclude that λ = n. Now it follows that a0 = a1 = ··· = an−1 = 0, so that the eigenfunctions with eigenvalue n are the nonzero scalar multiples of xⁿ, of the form f(x) = anxⁿ. This makes good sense, since T(xⁿ) = x(nx^(n−1)) = nxⁿ. In summary: the eigenvalues are the integers n = 0, 1, 2, ..., and the eigenspace En is span(xⁿ).
53. Suppose basis D consists of f1, ..., fn. We are told that the D-matrix D of T is diagonal; let λ1, λ2, ..., λn be the diagonal entries of D. By Fact 4.3.3, we know that [T(fi)]D = (ith column of D) = λi·ei, for i = 1, 2, ..., n, so that T(fi) = λi·fi, by definition of coordinates. Thus f1, ..., fn is an eigenbasis for T, as claimed.
54. Note that A² = 0, but B² ≠ 0. Since A² fails to be similar to B², matrix A isn't similar to B (see Example 7 of Section 3.4).
55. Let A = [[0, 1],[0, 0]] and B = [[1, 0],[0, 0]], for example.
56. The hint shows that matrix M = [[AB, 0],[B, 0]] is similar to N = [[0, 0],[B, BA]]; thus matrices M and N have the same characteristic polynomial, by Fact 7.3.6a. Now fM(λ) = det [[AB − λIn, 0],[B, −λIn]] = (−λ)ⁿ det(AB − λIn) = (−λ)ⁿ fAB(λ). To understand the second equality, consider Fact 6.1.8. Likewise, fN(λ) = (−λ)ⁿ fBA(λ). It follows that (−λ)ⁿ fAB(λ) = (−λ)ⁿ fBA(λ), and therefore fAB(λ) = fBA(λ), as claimed.
57. Modifying the hint in Exercise 56 slightly, we can write [[AB, 0],[B, 0]] [[Im, A],[0, In]] = [[Im, A],[0, In]] [[0, 0],[B, BA]]. Thus matrix M = [[AB, 0],[B, 0]] is similar to N = [[0, 0],[B, BA]]. By Fact 7.3.6a, matrices M and N have the same characteristic polynomial.
Now fM(λ) = det [[AB − λIm, 0],[B, −λIn]] = (−λ)ⁿ det(AB − λIm) = (−λ)ⁿ fAB(λ). To understand the second equality, consider Fact 6.1.8. Likewise, fN(λ) = det [[−λIm, 0],[B, BA − λIn]] = (−λ)ᵐ fBA(λ).
It follows that (−λ)ⁿ fAB(λ) = (−λ)ᵐ fBA(λ). Thus matrices AB and BA have the same nonzero eigenvalues, with the same algebraic multiplicities. If mult(AB) and mult(BA) are the algebraic multiplicities of 0 as an eigenvalue of AB and BA, respectively, then the equation (−λ)ⁿ fAB(λ) = (−λ)ᵐ fBA(λ) implies that
n + mult(AB) = m + mult(BA).
58. Let Bi = A − λiIn; note that Bi and Bj commute for any two indices i and j. If v is an eigenvector of A with eigenvalue λi, then Biv = 0 and B1B2···Bi···Bm v = B1···Bi−1Bi+1···BmBi v = 0. Since A is diagonalizable, any vector x in Rⁿ can be written as a linear combination of eigenvectors, so that B1B2···Bm x = 0 and therefore B1B2···Bm = 0, as claimed.
59. If v is an eigenvector with eigenvalue λ, then fA(A)v = ((−A)ⁿ + an−1A^(n−1) + ··· + a1A + a0In)v = (−λ)ⁿv + an−1λ^(n−1)v + ··· + a1λv + a0v = ((−λ)ⁿ + an−1λ^(n−1) + ··· + a1λ + a0)v = fA(λ)v = 0v = 0. Since A is diagonalizable, any vector x in Rⁿ can be written as a linear combination of eigenvectors, so that fA(A)x = 0. Since this equation holds for all x in Rⁿ, we have fA(A) = 0, as claimed.
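Exercise 59 is the Cayley-Hamilton theorem (here proved for diagonalizable matrices). For a 2x2 matrix, fA(λ) = λ² − tr(A)λ + det(A), so fA(A) = A² − tr(A)A + det(A)I should vanish; a check on a hypothetical example:

```python
# Cayley-Hamilton check for a 2x2 matrix (the matrix is a hypothetical example).
A = [[1, 2], [4, 3]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
I = [[1, 0], [0, 1]]

# fA(A) = A**2 - tr(A)*A + det(A)*I
fA_of_A = [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)] for i in range(2)]
assert fA_of_A == [[0, 0], [0, 0]]
print(fA_of_A)
```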
60. a. For a diagonalizable n × n matrix A with only two distinct eigenvalues, λ1 and λ2, we have (A − λ1In)(A − λ2In) = 0, by Exercise 58. Thus the column vectors of A − λ2In are in the kernel of A − λ1In, that is, they are eigenvectors of A with eigenvalue λ1 (or else they are 0). Conversely, the column vectors of A − λ1In are eigenvectors of A with eigenvalue λ2 (or else they are 0).
b. If A is a 2 × 2 matrix with distinct eigenvalues λ1 and λ2, then the nonzero columns of A − λ1I2 are eigenvectors of A with eigenvalue λ2, as we observed in part (a). Since the matrices A − [[λ1, 0],[0, λ2]] and A − λ1I2 have the same first column, the first column of A − [[λ1, 0],[0, λ2]] will be an eigenvector of A with eigenvalue λ2 as well (or it is zero). Likewise, the second column of A − [[λ1, 0],[0, λ2]] will be an eigenvector of A with eigenvalue λ1 (or it is zero).
61. a. B is diagonalizable since it has three distinct eigenvalues, so that S⁻¹BS is diagonal for some invertible S. But S⁻¹AS = S⁻¹I3S = I3 is diagonal as well. Thus A and B are indeed simultaneously diagonalizable.
b. There is an invertible S such that S⁻¹AS = D1 and S⁻¹BS = D2 are both diagonal. Then A = SD1S⁻¹ and B = SD2S⁻¹, so that AB = (SD1S⁻¹)(SD2S⁻¹) = SD1D2S⁻¹ and BA = (SD2S⁻¹)(SD1S⁻¹) = SD2D1S⁻¹. These two results agree, since D1D2 = D2D1 for the diagonal matrices D1 and D2.
c. Let A be In and B a nondiagonalizable n × n matrix, for example, A = [[1, 0],[0, 1]] and B = [[1, 1],[0, 1]].
d. Suppose BD = DB for a diagonal D with distinct diagonal entries. The ijth entry of BD is bij·djj, and that of DB is dii·bij. For i ≠ j this implies that bij = 0. Thus B must be diagonal.
e. Since A has n distinct eigenvalues, A is diagonalizable, that is, there is an invertible S such that S⁻¹AS = D is a diagonal matrix with n distinct diagonal entries. We claim that S⁻¹BS is diagonal as well; by part d it suffices to show that S⁻¹BS commutes with D = S⁻¹AS. This is easy to verify: (S⁻¹BS)D = (S⁻¹BS)(S⁻¹AS) = S⁻¹BAS = S⁻¹ABS = (S⁻¹AS)(S⁻¹BS) = D(S⁻¹BS).
62. A nonzero function f is an eigenfunction of T with eigenvalue λ if T(f) = f″ + af′ + bf = λf, or f″ + af′ + (b − λ)f = 0. By Fact 4.1.7, this differential equation has a two-dimensional solution space. Thus all real numbers are eigenvalues of T, and all the eigenspaces are two-dimensional.
63. Recall from Exercise 62 that all the eigenspaces are two-dimensional.
a. We need to solve the differential equation f″(x) = f(x). As in Example 18 of Section 4.1, we will look for exponential solutions. The function f(x) = e^(kx) is a solution if k² = 1, or k = ±1. Thus the eigenspace E1 is the span of the functions eˣ and e^(−x).
b. We need to solve the differential equation f″(x) = 0. Integration gives f′(x) = C, a constant. If we integrate again, we find f(x) = Cx + c, where c is another arbitrary constant. Thus E0 = span(1, x).
c. The solutions of the differential equation f″(x) = −f(x) are the functions f(x) = a cos(x) + b sin(x), so that E−1 = span(cos x, sin x). See the introductory example of Section 4.1 and Exercise 4.1.58.
d. Modifying part c, we see that the solutions of the differential equation f″(x) = −4f(x) are the functions f(x) = a cos(2x) + b sin(2x), so that E−4 = span(cos(2x), sin(2x)).
64. The eigenvalues of A are 1 and 3, with associated eigenvectors [1, 0] and [1, 1]. Arguing as in Exercise 65, we find that S must be of the form [[a, b],[0, b]], giving the basis [[1, 0],[0, 0]], [[0, 1],[0, 1]] for V, so that dim(V) = 2.
65. Let's write S in terms of its columns, as S = [v w]. We want A[v w] = [v w] [[5, 0],[0, −1]], or [Av Aw] = [5v −w]; that is, we want v to be in the eigenspace E5, and w in E−1. We find that E5 = span([1, 2]) and E−1 = span([1, −1]), so that S must be of the form [[a, b],[2a, −b]] = a[[1, 0],[2, 0]] + b[[0, 1],[0, −1]]. Thus, a basis of the space V is [[1, 0],[2, 0]], [[0, 1],[0, −1]], and dim(V) = 2.
66. For A we find a two-dimensional eigenspace E1 and a one-dimensional eigenspace E2. If we write S = [u v w], then we want A[u v w] = [u v w] diag(1, 1, 2), or [Au Av Aw] = [u v 2w]; that is, u and v must be in E1, and w must be in E2. Since u and v each carry two free parameters and w carries one, a basis of V consists of five matrices, and the dimension of V is five.
0 ] , [ 0 v2
0 0 0
0 ] , [ 0 v3 0],[0 0
0 v1 0
0 ] , [ 0 0 v2 0],[0 0
0 ] , [ 0 0 v3 0 0
0 w1
0 w2
0 0 w1 ] , [ 0 373
0 w2 ] .
Thus, the dimension of the space of matrices S is 3 + 3 + 3 + 2 + 2 = 13.
68. Let v1, ..., vn be an eigenbasis for A, with Avi = λivi. Arguing as in Exercises 64 through 67, we see that the ith column of S must be in Eλi, so that it must be of the form civi for some scalar ci. The matrices S we seek are of the form S = [c1v1 ... cnvn], involving the n arbitrary constants c1, ..., cn, so that the dimension of V is n.
7.5
1. z = 3 − 3i, so |z| = √(3² + (−3)²) = √18 and arg(z) = −π/4, so z = √18 (cos(−π/4) + i sin(−π/4)).
k 2 k 2 ,
+ i sin
k 2
, k = 0, 1, 2, 3. Thus z = 1, i, 1, i. See Figure 7.25.
Figure 7.25: for Problem 7.5.2.
3. If z = r(cos θ + i sin θ), then zⁿ = rⁿ(cos(nθ) + i sin(nθ)). zⁿ = 1 if r = 1, cos(nθ) = 1, and sin(nθ) = 0, so nθ = 2πk for an integer k, and θ = 2πk/n; i.e. z = cos(2πk/n) + i sin(2πk/n), k = 0, 1, 2, ..., n − 1. See Figure 7.26.
4. Let z = r(cos θ + i sin θ); then w = √r (cos((θ + 2πk)/2) + i sin((θ + 2πk)/2)), k = 0, 1.
5. Let z = r(cos θ + i sin θ); then w = r^(1/n) (cos((θ + 2πk)/n) + i sin((θ + 2πk)/n)), k = 0, 1, 2, ..., n − 1.
6. If z = r(cos θ + i sin θ), then 1/z must have the property that z · (1/z) = 1 = cos 0 + i sin 0, i.e. |z| · |1/z| = 1 and arg(z) + arg(1/z) = 0, so 1/z = (1/r)(cos(−θ) + i sin(−θ)) = (1/r)(cos θ − i sin θ) (since cosine is even, sine odd). Hence 1/z is a real scalar multiple of z̄. See Figure 7.27.
7. |T(z)| = √2 |z| and arg(T(z)) = arg(1 − i) + arg(z) = −π/4 + arg(z), so T is a clockwise rotation by π/4 followed by a scaling of √2.
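Problems 2 and 3 can be reproduced with the standard library's complex math (a sketch; cmath.exp(2πik/n) is just the cos/sin form in disguise):

```python
import cmath

# The nth roots of unity: z = cos(2*pi*k/n) + i*sin(2*pi*k/n), k = 0..n-1
def roots_of_unity(n):
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

quartic = roots_of_unity(4)          # Problem 2: 1, i, -1, -i
for z in quartic:
    assert abs(z ** 4 - 1) < 1e-12   # each really satisfies z**4 = 1
print([complex(round(z.real), round(z.imag)) for z in quartic])
```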
Figure 7.26: for Problem 7.5.3.
Figure 7.27: for Problem 7.5.6.
8. By Fact 7.5.1, cos 3θ + i sin 3θ = (cos θ + i sin θ)³ = cos³θ + 3i cos²θ sin θ − 3 cos θ sin²θ − i sin³θ = (cos³θ − 3 cos θ sin²θ) + i(3 cos²θ sin θ − sin³θ). Equating real and imaginary parts, we get cos 3θ = cos³θ − 3 cos θ sin²θ and sin 3θ = 3 cos²θ sin θ − sin³θ.
9. |z| = √(0.8² + 0.7²) = √1.13 ≈ 1.06 and arg(z) = arctan(−0.7/0.8) ≈ −0.72. See Figure 7.28. The trajectory spirals outward, in the clockwise direction.
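The triple-angle identities from Problem 8 can be spot-checked numerically at an arbitrary angle (0.7 is a hypothetical choice):

```python
import math

theta = 0.7
c, s = math.cos(theta), math.sin(theta)

# cos(3t) = cos^3(t) - 3*cos(t)*sin^2(t), sin(3t) = 3*cos^2(t)*sin(t) - sin^3(t)
assert abs(math.cos(3 * theta) - (c ** 3 - 3 * c * s ** 2)) < 1e-12
assert abs(math.sin(3 * theta) - (3 * c ** 2 * s - s ** 3)) < 1e-12
print("triple-angle identities hold at theta =", theta)
```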
Figure 7.28: for Problem 7.5.9.
10. Let p(x) = ax³ + bx² + cx + d, where a ≠ 0. Since p must have a real root, say λ1, we can write p(x) = a(x − λ1)g(x), where g(x) is of the form g(x) = x² + px + q. On page 346 we see that g(x) = (x − λ2)(x − λ3), so that p(x) = a(x − λ1)(x − λ2)(x − λ3), as claimed.
11. Notice that f(1) = 0, so λ = 1 is a root of f(λ). Hence f(λ) = (λ − 1)g(λ), where g(λ) = f(λ)/(λ − 1) = λ² − 2λ + 5. Setting g(λ) = 0 we get λ = 1 ± 2i, so that f(λ) = (λ − 1)(λ − 1 − 2i)(λ − 1 + 2i).
n1 Taking the conjugate of both sides we get an n + an1 0 + + a1 0 + a0 = so by 0 0 n1 n+a + + a1 0 + a0 = 0. fact i), and factoring the real constants we get an 0 n1 0
Now, by fact ii), an (0 )n + an1 (0 )n1 + + a1 0 + a0 = 0, i.e. 0 is also a root of f , as claimed. 13. Yes, Q is a field. Check the axioms on Page 347. 14. No, Z is not a field since multiplicative inverses do not exist, i.e. division within Z is not possible (Axiom 8 does not hold). 15. Yes, check the axioms on Page 347. (additive identity 0 and multiplicative identity 1) 16. Yes, check the axioms on p 347. 376
(additive identity [[0, 0],[0, 0]] and multiplicative identity [[1, 0],[0, 1]]; also notice that rotation-scaling matrices commute when multiplied).
17. No, since multiplication is not commutative; Axiom 5 does not hold.
18. Let v1, v2 be two eigenvectors of A. They define a parallelogram of area S = |det[v1 v2]|. Now Av1 = λ1v1 and Av2 = λ2v2 define a parallelogram of area S1 = |det[λ1v1 λ2v2]| = |λ1λ2| · |det[v1 v2]|, so S1/S = |λ1λ2| = |det(A)|, by Fact 6.3.8. Hence the expansion factor is |det(A)| = |λ1λ2|, as claimed. In R³, a similar argument holds if we replace areas by volumes. See Figure 7.29.
Figure 7.29: for Problem 7.5.18.
19. a. Since A has eigenvalues 1 and 0 associated with V and V⊥ respectively, and since V is the eigenspace of λ = 1, by Fact 7.5.5, tr(A) = m and det(A) = 0.
b. Since B has eigenvalues 1 and −1 associated with V and V⊥ respectively, and since V is the eigenspace associated with λ = 1, tr(B) = m − (n − m) = 2m − n and det(B) = (−1)^(n−m).
20. fA(λ) = (3 − λ)(−3 − λ) + 10 = λ² + 1, so λ₁,₂ = ±i.

21. fA(λ) = (11 − λ)(−7 − λ) + 90 = λ² − 4λ + 13, so λ₁,₂ = 2 ± 3i.

22. fA(λ) = (1 − λ)(10 − λ) + 12 = λ² − 11λ + 22, so λ₁,₂ = (11 ± √33)/2.

23. fA(λ) = −λ³ + 1 = −(λ − 1)(λ² + λ + 1), so λ₁ = 1 and λ₂,₃ = (−1 ± √3 i)/2.

24. fA(λ) = −λ³ + 3λ² − 7λ + 5, so λ₁ = 1 and λ₂,₃ = 1 ± 2i. (See Exercise 11.)

25. fA(λ) = λ⁴ − 1 = (λ² − 1)(λ² + 1) = (λ − 1)(λ + 1)(λ − i)(λ + i), so λ₁,₂ = ±1 and λ₃,₄ = ±i.

26. fA(λ) = (λ² − 2λ + 2)(λ² − 2λ) = (λ² − 2λ + 2)λ(λ − 2) = 0, so λ₁,₂ = 1 ± i, λ₃ = 2, λ₄ = 0.

27. By Fact 7.5.5, tr(A) = λ₁ + λ₂ + λ₃ and det(A) = λ₁λ₂λ₃, but λ₁ = λ₂ by assumption, so tr(A) = 1 = 2λ₂ + λ₃ and det(A) = 3 = λ₂²λ₃. Solving for λ₂, λ₃ we get λ₂ = −1 and λ₃ = 3; hence λ₁ = λ₂ = −1 and λ₃ = 3. (Note that the eigenvalues must be real; why?)

28. Suppose the complex eigenvalues are z = a + ib and conj(z) = a − ib. By Fact 7.5.5, tr(A) = 2 + z + conj(z) = 2 + 2a = 8, so that a = 3. Furthermore, det(A) = 2 z conj(z) = 2(a² + b²) = 2(9 + b²) = 50, so that b = ±4. Hence the complex eigenvalues are 3 ± 4i.

29. tr(A) = 0, so λ₁ + λ₂ + λ₃ = 0. Also, we can compute det(A) = bcd > 0 since b, c, d > 0; therefore λ₁λ₂λ₃ > 0. Hence two of the eigenvalues must be negative, and the largest one (in absolute value) must be positive.
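These root computations are easy to spot-check numerically. A minimal sketch, working directly from the characteristic polynomials above (the exercise matrices themselves are not needed), using Problems 21 and 24:

```python
import numpy as np

# Problem 21: roots of lambda^2 - 4*lambda + 13
print(np.roots([1, -4, 13]))      # 2 + 3i and 2 - 3i

# Problem 24: fA(lambda) = -lambda^3 + 3*lambda^2 - 7*lambda + 5;
# flipping the overall sign leaves the roots unchanged
print(np.roots([1, -3, 7, -5]))   # 1, and 1 +/- 2i
```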
30. a. The ith entry of Ax is Σₖ₌₁ⁿ aᵢₖxₖ, so that the sum of all the entries of Ax is

Σᵢ₌₁ⁿ Σₖ₌₁ⁿ aᵢₖxₖ = Σₖ₌₁ⁿ (Σᵢ₌₁ⁿ aᵢₖ) xₖ = Σₖ₌₁ⁿ 1 · xₖ = Σₖ₌₁ⁿ xₖ = 1.

b. As we do some computer experiments, Aᵗ appears to approach a matrix with identical columns, with column sum 1. Let v₁, v₂, ..., vₙ be an eigenbasis with λ₁ = 1 and |λⱼ| < 1 for j = 2, ..., n. For a fixed i, write eᵢ = c₁v₁ + c₂v₂ + ⋯ + cₙvₙ, so that

(ith column of Aᵗ) = Aᵗeᵢ = c₁v₁ + [c₂λ₂ᵗv₂ + ⋯ + cₙλₙᵗvₙ].

(The term in square brackets goes to zero as t goes to infinity.) Therefore, lim_{t→∞} (ith column of Aᵗ) = lim_{t→∞} (Aᵗeᵢ) = c₁v₁.

Furthermore, the entries of Aᵗeᵢ add up to 1, for all t, by part a. Therefore, the same is true for the limit (since the limit of a sum is the sum of the limits).

It follows that lim_{t→∞} Aᵗ exists and has identical columns, with column sum 1, as claimed.
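This convergence is easy to watch numerically. A minimal sketch, using the regular transition matrix of Exercise 32 below as a stand-in for A:

```python
import numpy as np

# Regular transition matrix (each column sums to 1), taken from Exercise 32
A = np.array([[0.6, 0.1, 0.5],
              [0.2, 0.7, 0.1],
              [0.2, 0.2, 0.4]])

P = np.linalg.matrix_power(A, 50)   # A^t for a large t

# The columns of A^t become identical, each still with column sum 1
print(P)
print(P.sum(axis=0))                # [1. 1. 1.]
```

Each column of P agrees with the equilibrium vector (0.4, 0.35, 0.25) found in Exercise 32.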
31. No matter how we choose A, (1/15)A is a regular transition matrix, so that lim_{t→∞} ((1/15)A)ᵗ is a matrix with identical columns by Exercise 30. Therefore, the columns of Aᵗ "become more and more alike" as t approaches infinity, in the sense that

lim_{t→∞} (ijth entry of Aᵗ)/(ikth entry of Aᵗ) = 1

for all i, j, k.
32. a. With x(t) = [a(t); m(t); s(t)], the rules a(t + 1) = 0.6a(t) + 0.1m(t) + 0.5s(t), m(t + 1) = 0.2a(t) + 0.7m(t) + 0.1s(t), and s(t + 1) = 0.2a(t) + 0.2m(t) + 0.4s(t) give x(t + 1) = Ax(t) with

A = [0.6 0.1 0.5; 0.2 0.7 0.1; 0.2 0.2 0.4].

Note that A is a regular transition matrix.

b. By Exercise 30, lim_{t→∞} Aᵗ = [v v v], where v is the unique eigenvector of A with eigenvalue 1 and column sum 1. We find that v = [0.4; 0.35; 0.25]. Now lim_{t→∞} x(t) = lim_{t→∞} (Aᵗx₀) = (lim_{t→∞} Aᵗ)x₀ = [v v v]x₀ = v, since the components of x₀ add up to 1. The market shares approach 40%, 35%, and 25%, respectively, regardless of the initial shares.

33. a. C is obtained from B by dividing each column of B by its first component. Thus, the first row of C will consist of 1's.

b. We observe that the columns of C are almost identical, so that the columns of B are "almost parallel" (that is, almost scalar multiples of each other).

c. Let λ₁, λ₂, ..., λ₅ be the eigenvalues. Assume λ₁ is real and positive and λ₁ > |λⱼ| for 2 ≤ j ≤ 5. Let v₁, ..., v₅ be corresponding eigenvectors. For a fixed i, write eᵢ = Σⱼ₌₁⁵ cⱼvⱼ; then

(ith column of Aᵗ) = Aᵗeᵢ = c₁λ₁ᵗv₁ + ⋯ + c₅λ₅ᵗv₅.

But in the last expression, for large t, the first term is dominant, so the ith column of Aᵗ is almost parallel to v₁, the eigenvector corresponding to the dominant eigenvalue.
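This observation is exactly the power method for estimating a dominant eigenvalue. A minimal sketch on a hypothetical 2 × 2 matrix (the 5 × 5 matrix of this exercise is not reproduced in the text):

```python
import numpy as np

A = np.array([[4.0, 1.0],     # hypothetical matrix with eigenvalues 5 and 2
              [2.0, 3.0]])

x = np.array([1.0, 0.0])      # a column of A^t is A^t e_1
for _ in range(50):
    x = A @ x
    x /= np.linalg.norm(x)    # rescale so the entries stay bounded

lam = x @ (A @ x)             # Rayleigh quotient once x is almost an eigenvector
print(lam)                    # close to 5, the dominant eigenvalue
```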
d. By part c, the columns of B and C are almost eigenvectors of A associated with the largest eigenvalue, λ₁. Since the first row of C consists of 1's, the entries in the first row of AC will be close to λ₁.

34. a. The eigenvalues of A − μIₙ are λ₁ − μ, λ₂ − μ, ..., λₙ − μ, and we were told that |λ₁ − μ| < |λᵢ − μ| for i = 2, ..., n. We may assume that λ₁ ≠ μ (otherwise we are done). The eigenvalues of (A − μIₙ)⁻¹ are (λ₁ − μ)⁻¹, (λ₂ − μ)⁻¹, ..., (λₙ − μ)⁻¹, and (λ₁ − μ)⁻¹ has the largest modulus. The matrices A, A − μIₙ, and (A − μIₙ)⁻¹ have the same eigenvectors. For large t, the columns of the tth power of (A − μIₙ)⁻¹ will be almost eigenvectors of A. If v is such a column, compare v and Av to find an approximation of λ₁.

b. See Figure 7.30 (not to scale).

Figure 7.30: for Problem 7.5.34b.

Let μ = 1, and set N = (A − I₃)⁻¹ and B = N²⁰. Obtain C from B as in Exercise 33; its columns agree to the digits shown:

C ≈ [1 1 1; 0.098922005729 0.098922005729 0.098922005729; 0.569298722688 0.569298722688 0.569298722688; 0.905740179522 0.905740179522 0.905740179522]

The entries in the first row of AC give us a good approximation for λ₁, and the columns of C give us a good approximation for a corresponding eigenvector.
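Part (a) is the classical shifted inverse iteration. A small sketch under assumed data (a diagonal matrix with eigenvalues 1, 2, 5 and shift μ = 4.5, chosen so that λ = 5 is the eigenvalue closest to μ):

```python
import numpy as np

A = np.diag([1.0, 2.0, 5.0])            # assumed example; eigenvalues 1, 2, 5
mu = 4.5                                # shift close to the wanted eigenvalue

M = np.linalg.inv(A - mu * np.eye(3))   # iterate with (A - mu*I)^(-1)
x = np.ones(3)
for _ in range(30):
    x = M @ x
    x /= np.linalg.norm(x)

lam = x @ (A @ x)                       # compare v and Av to estimate lambda_1
print(lam)                              # close to 5
```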
35. We have fA(λ) = (λ₁ − λ)(λ₂ − λ) ⋯ (λₙ − λ) = (−λ)ⁿ + (λ₁ + λ₂ + ⋯ + λₙ)(−λ)ⁿ⁻¹ + ⋯ + (λ₁λ₂ ⋯ λₙ). But, by Fact 7.2.5, the coefficient of (−λ)ⁿ⁻¹ is tr(A). So tr(A) = λ₁ + ⋯ + λₙ.
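This identity (and the companion fact det(A) = λ₁ ⋯ λₙ from the same expansion) can be spot-checked on any matrix; a quick sketch with an arbitrary 3 × 3 example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 0.0]])

eigs = np.linalg.eigvals(A)
print(np.trace(A), eigs.sum().real)        # both approximately 6
print(np.linalg.det(A), eigs.prod().real)  # both approximately 27
```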
36. a. The entries in the first row are age-specific birth rates, and the entries just below the diagonal are age-specific survival rates. For example, the entry 1.6 in the first row tells us that during the next 15 years the people who are 15–30 years old today will on average have 1.6 children (3.2 per couple) who will survive to the next census. The entry 0.53 tells us that 53% of those in the age group 45–60 today will still be alive in 15 years (they will then be in the age group 60–75).

b. Using technology, we find the largest eigenvalue λ₁ ≈ 1.908 with associated eigenvector v₁ ≈ [0.574; 0.247; 0.115; 0.047; 0.014; 0.002]. The components of v₁ give the distribution of the population among the age groups in the long run, assuming that current trends continue. λ₁ gives the factor by which the population will grow in the long run in a period of 15 years; this translates to an annual growth factor of ¹⁵√1.908 ≈ 1.044, or an annual growth of about 4.4%.

37. a. Use that conj(w + z) = conj(w) + conj(z) and conj(wz) = conj(w) conj(z). For two matrices [w₁ z₁; −conj(z₁) conj(w₁)] and [w₂ z₂; −conj(z₂) conj(w₂)] in H, the sum is

[w₁ + w₂, z₁ + z₂; −conj(z₁ + z₂), conj(w₁ + w₂)], which is in H,

and the product is

[w₁w₂ − z₁ conj(z₂), w₁z₂ + z₁ conj(w₂); −conj(w₁z₂ + z₁ conj(w₂)), conj(w₁w₂ − z₁ conj(z₂))], which is in H.

b. If A in H is nonzero, then det(A) = w conj(w) + z conj(z) = |w|² + |z|² > 0, so that A is invertible.

c. Yes; if A = [w z; −conj(z) conj(w)], then A⁻¹ = (1/(|w|² + |z|²)) [conj(w) −z; conj(z) w] is in H.

d. For example, if A = [i 0; 0 −i] and B = [0 1; −1 0], then AB = [0 i; i 0] and BA = [0 −i; −i 0], so AB ≠ BA.
38. a. C₄ = [0 0 0 1; 1 0 0 0; 0 1 0 0; 0 0 1 0], C₄² = [0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0], C₄³ = [0 1 0 0; 0 0 1 0; 0 0 0 1; 1 0 0 0], and C₄⁴ = I₄, so that C₄⁴⁺ᵏ = C₄ᵏ. Figure 7.31 illustrates how C₄ acts on the basis vectors eᵢ.

Figure 7.31: for Problem 7.5.38a.

b. The eigenvalues are λ₁ = 1, λ₂ = −1, λ₃ = i, and λ₄ = −i, and for each eigenvalue λₖ, vₖ = [λₖ³; λₖ²; λₖ; 1] is an associated eigenvector.

c. M = aI₄ + bC₄ + cC₄² + dC₄³. If v is an eigenvector of C₄ with eigenvalue λ, then Mv = av + bλv + cλ²v + dλ³v = (a + bλ + cλ² + dλ³)v, so that v is an eigenvector of M as well, with eigenvalue a + bλ + cλ² + dλ³. The eigenbasis for C₄ we found in part b is an eigenbasis for all circulant 4 × 4 matrices.
39. Figure 7.32 illustrates how Cₙ acts on the standard basis vectors e₁, e₂, ..., eₙ of Rⁿ.

Figure 7.32: for Problem 7.5.39.

a. Based on Figure 7.32, we see that Cₙᵏ takes eᵢ to eᵢ₊ₖ "modulo n," that is, if i + k exceeds n then Cₙᵏ takes eᵢ to eᵢ₊ₖ₋ₙ (for k = 1, ..., n − 1). To put it differently: Cₙᵏ is the matrix whose ith column is eᵢ₊ₖ if i + k ≤ n and eᵢ₊ₖ₋ₙ if i + k > n (for k = 1, ..., n − 1).

b. The characteristic polynomial is 1 − λⁿ, so that the eigenvalues are the n distinct solutions of the equation λⁿ = 1 (the so-called nth roots of unity), equally spaced points along the unit circle: λₖ = cos(2πk/n) + i sin(2πk/n), for k = 0, 1, ..., n − 1 (compare with Exercise 5 and Figure 7.7). For each eigenvalue λₖ, vₖ = [λₖⁿ⁻¹; ...; λₖ²; λₖ; 1] is an associated eigenvector.

c. The eigenbasis v₀, v₁, ..., vₙ₋₁ for Cₙ we found in part b is in fact an eigenbasis for all circulant n × n matrices.
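A short numerical check of parts a and b, building Cₙ for n = 4 and verifying that its eigenvalues are the nth roots of unity and that vₖ is an eigenvector:

```python
import numpy as np

n = 4
C = np.zeros((n, n))
for i in range(n):
    C[(i + 1) % n, i] = 1.0              # C_n takes e_i to e_{i+1} "modulo n"

eigs = np.linalg.eigvals(C)
print(np.sort_complex(np.round(eigs, 6)))  # -1, -i, i, 1: the 4th roots of unity

lam = np.exp(2j * np.pi / n)             # k = 1, i.e. lambda = i
v = lam ** np.arange(n - 1, -1, -1)      # v = (lam^3, lam^2, lam, 1)
print(np.allclose(C @ v, lam * v))       # True
```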
40. In Exercise 7.2.50 we derived the formula

x = ∛(q/2 + √((q/2)² + (p/3)³)) + ∛(q/2 − √((q/2)² + (p/3)³))

for the solution of the equation x³ + px = q. Here (q/2)² + (p/3)³ is negative, and we can write

x = ∛(q/2 + i√(−(q/2)² − (p/3)³)) + ∛(q/2 − i√(−(q/2)² − (p/3)³)).

Let us write this solution in polar coordinates:

x = ∛((−p/3)^(3/2) (cos θ + i sin θ)) + ∛((−p/3)^(3/2) (cos θ − i sin θ))
  = √(−p/3) (cos((θ + 2πk)/3) + i sin((θ + 2πk)/3)) + √(−p/3) (cos((θ + 2πk)/3) − i sin((θ + 2πk)/3))
  = 2√(−p/3) cos((θ + 2πk)/3), k = 0, 1, 2. See Figure 7.33.

Figure 7.33: for Problem 7.5.40.

Answer: x₁,₂,₃ = 2√(−p/3) cos((θ + 2πk)/3), k = 0, 1, 2, where θ = arccos((q/2)/(−p/3)^(3/2)).

Note that x is on the interval (√(−p/3), 2√(−p/3)) when k = 0, on (−2√(−p/3), −√(−p/3)) when k = 1, and on (−√(−p/3), √(−p/3)) when k = 2. (Think about it!)

41. Substitute λ = 1/x into 14λ² + 12λ³ − 1 = 0:

14/x² + 12/x³ − 1 = 0
14x + 12 − x³ = 0
x³ − 14x = 12.

Now use the formula derived in Exercise 40 to find x, with p = −14 and q = 12. There is only one positive solution, x ≈ 4.114, so that λ = 1/x ≈ 0.243.
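Both the trigonometric formula of Exercise 40 and the resulting value of λ can be checked numerically; a small sketch with p = −14 and q = 12:

```python
import numpy as np

p, q = -14.0, 12.0                   # x^3 + px = q, i.e. x^3 - 14x - 12 = 0

# Trigonometric solution from Exercise 40
theta = np.arccos((q / 2) / (-p / 3) ** 1.5)
xs = [2 * np.sqrt(-p / 3) * np.cos((theta + 2 * np.pi * k) / 3)
      for k in (0, 1, 2)]
print(sorted(xs))                    # the k = 0 root is the positive one

# Cross-check with numpy's root finder, then recover lambda = 1/x
x = max(np.roots([1, 0, -14, -12]).real)
print(x, 1 / x)                      # approximately 4.114 and 0.243
```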
42. a. We will use the fact that for any two complex numbers z and w, conj(z + w) = conj(z) + conj(w) and conj(zw) = conj(z) conj(w). The ijth entry of conj(A) conj(B) is Σₖ₌₁ᵖ conj(aᵢₖ) conj(bₖⱼ) = Σₖ₌₁ᵖ conj(aᵢₖbₖⱼ) = conj(Σₖ₌₁ᵖ aᵢₖbₖⱼ), which is the ijth entry of conj(AB), as claimed.

b. Use part a, where B is the n × 1 matrix v + iw. We are told that AB = λB, where λ = p + iq. Then conj(AB) = conj(A) conj(B) = A conj(B), since A has real entries, while also conj(AB) = conj(λB) = conj(λ) conj(B). Hence A conj(B) = conj(λ) conj(B), or A(v − iw) = (p − iq)(v − iw).
43. Note that f(z) is not the zero polynomial, since f(i) = det(S₁ + iS₂) = det(S) ≠ 0, as S is invertible. A nonzero polynomial has only finitely many zeros, so that there is a real number x such that f(x) = det(S₁ + xS₂) ≠ 0, that is, S₁ + xS₂ is invertible. Now SB = AS, or (S₁ + iS₂)B = A(S₁ + iS₂). Considering the real and the imaginary part, we can conclude that S₁B = AS₁ and S₂B = AS₂, and therefore (S₁ + xS₂)B = A(S₁ + xS₂). Since S₁ + xS₂ is invertible, we have B = (S₁ + xS₂)⁻¹A(S₁ + xS₂), as claimed.

44. Let A be a complex 2 × 2 matrix. Let λ be a complex eigenvalue of A, and consider an associated eigenvector v, so that Av = λv. Now let P be an invertible 2 × 2 matrix of the form P = [v w] (the first column of P is our eigenvector v). Then P⁻¹AP will be of the form [λ *; 0 *], so that we have found an upper triangular matrix similar to A (compare with the proof of Fact 7.4.1).

Yes, any complex square matrix is similar to an upper triangular matrix, although the proof is challenging at this stage of the course. Following the hint, we will assume that the claim holds for n × n matrices, and we will prove it for an (n + 1) × (n + 1) matrix A. As in the case of a 2 × 2 matrix discussed above, we can find an invertible P such that P⁻¹AP is of the form [λ w; 0 B] for some scalar λ, a row vector w with n components, and an n × n matrix B (just make the first column of P an eigenvector of A). By the induction hypothesis, B is similar to some upper triangular matrix T, that is, R⁻¹BR = T for some invertible R. Now let S = P [1 0; 0 R], an invertible (n + 1) × (n + 1) matrix. Then

S⁻¹AS = [1 0; 0 R⁻¹] P⁻¹AP [1 0; 0 R] = [1 0; 0 R⁻¹] [λ w; 0 B] [1 0; 0 R] = [λ wR; 0 T],

an upper triangular matrix, showing that A is indeed similar to an upper triangular matrix. You will see an analogous proof in Section 8.1 (proof of Fact 8.1.1, page 368).

45. If a ≠ 0, then there are two distinct eigenvalues, 1 ± √a, so that the matrix is diagonalizable. If a = 0, then [1 1; 0 1] fails to be diagonalizable.

46. If a ≠ 0, then there are two distinct eigenvalues, ±ia, so that the matrix is diagonalizable. If a = 0, then [0 −a; a 0] = [0 0; 0 0] is diagonalizable as well. Thus the matrix is diagonalizable for all a.

47. If a ≠ 0, then there are three distinct eigenvalues, 0 and ±√a, so that the matrix is diagonalizable. If a = 0, then [0 0 0; 1 0 a; 0 1 0] = [0 0 0; 1 0 0; 0 1 0] fails to be diagonalizable.

48. The characteristic polynomial is f(λ) = −λ³ + 3λ + a. We need to find the values a such that this polynomial has multiple roots. Now λ is a multiple root if (and only if) f(λ) = f′(λ) = 0 (see Exercise 7.2.37). Since f′(λ) = −3λ² + 3 = −3(λ − 1)(λ + 1), the only possible multiple roots are 1 and −1. Now 1 is a multiple root if f(1) = 2 + a = 0, or a = −2, and −1 is a multiple root if a = 2. Thus, if a is neither 2 nor −2, then the matrix is diagonalizable. Conversely, if a = 2 or a = −2, then the matrix fails to be diagonalizable, since all the eigenspaces will be one-dimensional (verify this!).

49. The eigenvalues are 0, 1, and a − 1. If a is neither 1 nor 2, then there are three distinct eigenvalues, so that the matrix is diagonalizable. Conversely, if a = 1 or a = 2, then the matrix fails to be diagonalizable, since all the eigenspaces will be one-dimensional (verify this!).

50. The eigenvalues are 0, 0, 1. Since the kernel is always two-dimensional (a basis can be chosen independently of a), the matrix is diagonalizable for all values of the constant a.
7.6
1. λ₁ = 0.9, λ₂ = 0.8, so, by Fact 7.6.2, 0 is a stable equilibrium.

2. λ₁ = 1.1, λ₂ = 0.9, so by Fact 7.6.2, 0 is not a stable equilibrium (|λ₁| > 1).

3. λ₁,₂ = 0.8 ± 0.7i, so |λ₁| = |λ₂| = √(0.64 + 0.49) > 1, so 0 is not a stable equilibrium.

4. λ₁,₂ = 0.9 ± 0.4i, so |λ₁| = |λ₂| = √(0.81 + 0.16) < 1, so 0 is a stable equilibrium.

5. λ₁ = 0.8, λ₂ = 1.1, so 0 is not a stable equilibrium.

6. λ₁,₂ = 0.8 ± 0.6i, so |λ₁| = |λ₂| = √(0.64 + 0.36) = 1, and 0 is not a stable equilibrium.

7. λ₁,₂ = 0.9 ± 0.5i, so |λ₁| = |λ₂| = √(0.81 + 0.25) > 1, and 0 is not a stable equilibrium.

8. λ₁ = 0.9, λ₂ = 0.8, so 0 is a stable equilibrium.

9. λ₁,₂ = 0.8 ± 0.6i, λ₃ = 0.7, so |λ₁| = |λ₂| = 1 and 0 is not a stable equilibrium.

10. λ₁,₂ = 0, λ₃ = 0.9, so 0 is a stable equilibrium.

11. λ₁ = k, λ₂ = 0.9, so 0 is a stable equilibrium if |k| < 1.

12. λ₁,₂ = 0.6 ± ik, so 0 is a stable equilibrium if |λ₁| = |λ₂| = √(0.36 + k²) < 1, i.e. if k² < 0.64, or |k| < 0.8.

13. Since λ₁ = 0.7 and λ₂ = 0.9, 0 is a stable equilibrium regardless of the value of k.

14. λ₁ = 0, λ₂ = 2k, so 0 is a stable equilibrium if |2k| < 1, or |k| < 1/2.

15. λ₁,₂ = 1 ± (1/10)√k. If k ≥ 0, then λ₁ = 1 + (1/10)√k ≥ 1. If k < 0, then |λ₁| = |λ₂| > 1. Thus, the zero state isn't a stable equilibrium for any real k.

16. λ₁,₂ = (−2 ± √(1 + 30k))/10.
λ₁,₂ are real if k ≥ −1/30. In this case it is required that −2 + √(1 + 30k) < 10 and −10 < −2 − √(1 + 30k), which means that √(1 + 30k) < 8, or k < 21/10.
λ₁,₂ are complex if k < −1/30. Here it is required that |λ₁,₂|² = (4 − (1 + 30k))/100 < 1, or k > −97/30.
Overall, 0 is a stable equilibrium if −97/30 < k < 21/10.
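All of these checks follow the same recipe (Fact 7.6.2: the zero state is stable exactly when every eigenvalue satisfies |λ| < 1), which is easy to automate. A sketch using stand-in matrices that realize the eigenvalues of Problems 1 and 3 (the exercise matrices themselves are not reproduced here):

```python
import numpy as np

def is_stable(A):
    """True iff all eigenvalues of A satisfy |lambda| < 1 (Fact 7.6.2)."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

print(is_stable(np.diag([0.9, 0.8])))        # True: Problem 1's eigenvalues
print(is_stable(np.array([[0.8, -0.7],       # False: eigenvalues 0.8 +/- 0.7i,
                          [0.7,  0.8]])))    # modulus sqrt(1.13) > 1 (Problem 3)
```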
17. λ₁,₂ = 0.6 ± 0.8i = 1 · (cos θ ± i sin θ), where θ = arctan(0.8/0.6) = arctan(4/3) ≈ 0.927.

Eλ₁ = ker(A − λ₁I₂) = span [1; i], so w = [0; 1] and v = [1; 0].

x₀ = [0; 1] = 1w + 0v, so a = 1 and b = 0. Now we use Fact 7.6.3:

x(t) = [w v] [cos θt, −sin θt; sin θt, cos θt] [1; 0] = [0 1; 1 0] [cos θt; sin θt] = [sin θt; cos θt], where θ = arctan(4/3) ≈ 0.927.

The trajectory is the circle shown in Figure 7.34.

Figure 7.34: for Problem 7.6.17.

18. λ₁,₂ = (−4 ± 2√3 i)/5 = r(cos θ ± i sin θ), where r = √28/5 ≈ 1.058 and θ = π − arctan(√3/2) ≈ 2.428 (second quadrant).

Eλ₁ = span [√3/2; i], so w = [0; 1] and v = [√3/2; 0]. x₀ = 1w + 0v, so a = 1, b = 0.

x(t) = rᵗ [w v] [cos θt, −sin θt; sin θt, cos θt] [1; 0] ≈ (1.058)ᵗ [0.866 sin(2.428t); cos(2.428t)].

Spirals slowly outwards (plot the first few points). See Figure 7.35.

Figure 7.35: for Problem 7.6.18.

19. λ₁,₂ = 2 ± 3i, r = √13, and θ = arctan(3/2) ≈ 0.98, so λ₁ = √13 (cos(0.98) + i sin(0.98)) and

x(t) ≈ √13ᵗ [−sin(0.98t); cos(0.98t)].

The trajectory spirals outwards; see Figure 7.36.

20. λ₁,₂ = 4 ± 3i, r = 5, θ = arctan(3/4) ≈ 0.64, so λ₁ = 5 (cos(0.64) + i sin(0.64)) and

x(t) ≈ 5ᵗ [−sin(0.64t); cos(0.64t)]. See Figure 7.37.

Figure 7.36: for Problem 7.6.19. Spirals outwards (rotation-dilation).
Figure 7.37: for Problem 7.6.20.

21. λ₁,₂ = 4 ± i, r = √17, θ = arctan(1/4) ≈ 0.245, so λ₁ = √17 (cos(0.245) + i sin(0.245)), [w v] = [0 5; 1 3], [a; b] = [1; 0], and

x(t) ≈ √17ᵗ [5 sin(0.245t); cos(0.245t) + 3 sin(0.245t)].

The trajectory spirals outwards; see Figure 7.38.

Figure 7.38: for Problem 7.6.21.

22. λ₁,₂ = −2 ± 3i, r = √13, θ ≈ 2.16 (second quadrant), [w v] = [0 5; 1 −3], [a; b] = [1; 0], so

x(t) = √13ᵗ [5 sin(θt); cos(θt) − 3 sin(θt)], where θ ≈ 2.16.

Spirals outwards, as in Figure 7.39.

Figure 7.39: for Problem 7.6.22.

23. λ₁,₂ = 0.4 ± 0.3i, r = 1/2, θ = arctan(0.3/0.4) ≈ 0.643, [w v] = [0 5; 1 3], [a; b] = [1; 0], so

x(t) = (1/2)ᵗ [5 sin(θt); cos(θt) + 3 sin(θt)].

The trajectory spirals inwards as shown in Figure 7.40.

Figure 7.40: for Problem 7.6.23.

24. λ₁,₂ = −0.8 ± 0.6i, r = 1, θ = π − arctan(0.6/0.8) ≈ 2.5 (second quadrant), [w v] = [0 5; 1 −3], [a; b] = [1; 0], so

x(t) = [5 sin(θt); cos(θt) − 3 sin(θt)], an ellipse, as shown in Figure 7.41.

Figure 7.41: for Problem 7.6.24.

25. Not stable, since if λ is an eigenvalue of A, then 1/λ is an eigenvalue of A⁻¹, and |1/λ| = 1/|λ| > 1.

26. Stable, since A and Aᵀ have the same eigenvalues.

27. Stable, since if λ is an eigenvalue of A, then −λ is an eigenvalue of −A and |−λ| = |λ|.

28. Not stable, since if λ is an eigenvalue of A, then λ − 2 is an eigenvalue of A − 2Iₙ and |λ − 2| > 1.
29. Cannot tell; for example, if A = [1/2 0; 0 1/2], then A + I₂ = [3/2 0; 0 3/2] and the zero state is not stable, but if A = [−1/2 0; 0 −1/2], then A + I₂ = [1/2 0; 0 1/2] and the zero state is stable.

30. Consider the dynamical systems x(t + 1) = A²x(t) and y(t + 1) = Ay(t) with equal initial values, x(0) = y(0). Then x(t) = y(2t) for all positive integers t. We know that lim_{t→∞} y(t) = 0; thus lim_{t→∞} x(t) = 0, proving that the zero state is a stable equilibrium of the system x(t + 1) = A²x(t).

31. We need to determine for which values of det(A) and tr(A) the modulus of both eigenvalues is less than 1. We will first think about the borderline case and examine when one of the moduli is exactly 1:

If one of the eigenvalues is 1 and the other is λ, then tr(A) = λ + 1 and det(A) = λ, so that det(A) = tr(A) − 1.

If one of the eigenvalues is −1 and the other is λ, then tr(A) = λ − 1 and det(A) = −λ, so that det(A) = −tr(A) − 1.

If the eigenvalues are complex conjugates with modulus 1, then det(A) = 1 and |tr(A)| < 2 (think about it!).

It is convenient to represent these conditions in the tr–det plane, where each 2 × 2 matrix A is represented by the point (tr A, det A), as shown in Figure 7.42.

Figure 7.42: for Problem 7.6.31.

If tr(A) = det(A) = 0, then both eigenvalues of A are zero. We can conclude that throughout the shaded triangle in Figure 7.42 the modulus of both eigenvalues will be less than 1, since the modulus of the eigenvalues changes continuously with tr(A) and det(A) (consider the quadratic formula!).

Conversely, we can choose sample points to show that in all the other four regions in Figure 7.42 the modulus of at least one of the eigenvalues exceeds one; consider the matrices [2 0; 0 0] in (I), [−2 0; 0 0] in (II), [2 0; 0 2] in (III), and [0 2; 2 0] in (IV). It follows that throughout these four regions, (I), (II), (III), and (IV), at least one of the eigenvalues will have a modulus exceeding one.

The point (tr A, det A) is in the shaded triangle if det(A) < 1, det(A) > tr(A) − 1, and det(A) > −tr(A) − 1. This means that |tr A| − 1 < det(A) < 1, as claimed.

32. Take conjugates of both sides of the equation x₀ = c₁(v + iw) + c₂(v − iw): x₀ = conj(x₀) = conj(c₁)(v − iw) + conj(c₂)(v + iw) = conj(c₂)(v + iw) + conj(c₁)(v − iw). The claim that c₂ = conj(c₁) now follows from the fact that the representation of x₀ as a linear combination of the linearly independent vectors v + iw and v − iw is unique.

33. Take conjugates of both sides of the equation x₀ = c₁(v + iw) + c₂(v − iw): x₀ = conj(x₀) = conj(c₁)(v − iw) + conj(c₂)(v + iw) = conj(c₂)(v + iw) + conj(c₁)(v − iw). The claim that c₂ = conj(c₁) now follows from the fact that the representation of x₀ as a linear combination of the linearly independent vectors v + iw and v − iw is unique.
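The equivalence just derived — both eigenvalue moduli below 1 exactly when |tr A| − 1 < det A < 1 — can be tested against random samples; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.uniform(-2, 2, size=(2, 2))
    tr, det = np.trace(A), np.linalg.det(A)
    in_triangle = abs(tr) - 1 < det < 1
    stable = bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))
    assert in_triangle == stable   # the two criteria always agree
print("criteria agree on all 1000 samples")
```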
34. a. If |det A| = |λ₁λ₂ ⋯ λₙ| = |λ₁||λ₂| ⋯ |λₙ| > 1, then at least one eigenvalue is greater than one in modulus, and the zero state fails to be stable.

b. If |det A| = |λ₁||λ₂| ⋯ |λₙ| < 1, we cannot conclude anything about the stability of 0: both |2 · 0.1| < 1 and |0.2 · 0.1| < 1, but in the first case (eigenvalues 2 and 0.1) we would not have stability, while in the second case (eigenvalues 0.2 and 0.1) we would.
35. a. Let v₁, ..., vₙ be an eigenbasis for A. Then x(t) = Σᵢ₌₁ⁿ cᵢλᵢᵗvᵢ and

‖x(t)‖ = ‖Σᵢ₌₁ⁿ cᵢλᵢᵗvᵢ‖ ≤ Σᵢ₌₁ⁿ |cᵢ||λᵢ|ᵗ‖vᵢ‖ ≤ Σᵢ₌₁ⁿ |cᵢ|‖vᵢ‖.

The last quantity, Σᵢ₌₁ⁿ |cᵢ|‖vᵢ‖, gives the desired bound M.

b. A = [1 1; 0 1] represents a shear parallel to the x-axis, with A[k; 1] = [k + 1; 1], so that x(t) = Aᵗ[0; 1] = [t; 1] is not bounded. This does not contradict part a, since there is no eigenbasis for A.

36. If the zero state is stable, then lim_{t→∞} (ith column of Aᵗ) = lim_{t→∞} (Aᵗeᵢ) = 0, so that all columns and therefore all entries of Aᵗ approach 0. Conversely, if lim_{t→∞} Aᵗ = 0, then lim_{t→∞} (Aᵗx₀) = (lim_{t→∞} Aᵗ)x₀ = 0 for all x₀ (check the details).
37. a. Write Y(t + 1) = Y(t) = Y, C(t + 1) = C(t) = C, I(t + 1) = I(t) = I. Then

Y = C + I + G₀, C = γY, I = 0,

so Y = γY + G₀, and Y = G₀/(1 − γ), C = γG₀/(1 − γ), I = 0.

b. y(t) = Y(t) − G₀/(1 − γ), c(t) = C(t) − γG₀/(1 − γ), i(t) = I(t). Substitute to verify the equations.

c. A = [0.2 0.2; −4 1], with eigenvalues 0.6 ± 0.8i; not stable.

d. A = [γ γ; α(γ − 1) αγ], so tr A = γ(1 + α) > 0 and det A = αγ.

e. Use Exercise 31: the zero state is stable if det(A) = αγ < 1 and tr A − 1 = γ(1 + α) − 1 < det A = αγ. The second condition is satisfied, since γ < 1. Stable if α < 1/γ. (The eigenvalues are real if γ ≥ 4α/(1 + α)².)
38. a. T(v) = Av + b = v if v − Av = b, or (Iₙ − A)v = b. Iₙ − A is invertible, since 1 is not an eigenvalue of A. Therefore, v = (Iₙ − A)⁻¹b is the only solution.

b. Let y(t) = x(t) − v be the deviation of x(t) from the equilibrium v. Then y(t + 1) = x(t + 1) − v = Ax(t) + b − v = A(y(t) + v) + b − v = Ay(t) + Av + b − v = Ay(t), so that y(t) = Aᵗy(0), or x(t) = v + Aᵗ(x₀ − v).

lim_{t→∞} x(t) = v for all x₀ if lim_{t→∞} Aᵗ(x₀ − v) = 0. This is the case if the modulus of all the eigenvalues of A is less than 1.
39. Use Exercise 38: v = (I₂ − A)⁻¹b = [0.9 −0.2; −0.4 0.7]⁻¹ [1; 2] = [2; 4].

This is a stable equilibrium, since the eigenvalues of A are 0.5 and −0.1.
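With the data of Exercise 39 read as A = [0.1 0.2; 0.4 0.3] and b = [1; 2] (so that I₂ − A = [0.9 −0.2; −0.4 0.7]), the equilibrium and its stability can be confirmed numerically:

```python
import numpy as np

A = np.array([[0.1, 0.2],
              [0.4, 0.3]])
b = np.array([1.0, 2.0])

# Exercise 38: the equilibrium of x(t+1) = A x(t) + b is v = (I - A)^(-1) b
v = np.linalg.solve(np.eye(2) - A, b)
print(v)                          # [2. 4.]

# v attracts every trajectory, since both eigenvalues satisfy |lambda| < 1
print(np.linalg.eigvals(A))       # 0.5 and -0.1
```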
40. Note that A can be partitioned as A = [B −Cᵀ; C Bᵀ], where B and C are rotation-scaling matrices. Also note that BC = CB, BᵀB = (p² + q²)I₂, and CᵀC = (r² + s²)I₂.

a. AᵀA = [Bᵀ Cᵀ; −C B] [B −Cᵀ; C Bᵀ] = (p² + q² + r² + s²)I₄.

b. By part a, A⁻¹ = (1/(p² + q² + r² + s²)) Aᵀ if A ≠ 0.

c. (det A)² = det(AᵀA) = (p² + q² + r² + s²)⁴, by part a, so that det A = ±(p² + q² + r² + s²)². Laplace expansion along the first row produces the term +p⁴, so that det(A) = (p² + q² + r² + s²)².

d. Consider det(A − λI₄). Note that the matrix A − λI₄ has the same "format" as A, with p replaced by p − λ and q, r, s remaining unchanged. By part c, det(A − λI₄) = ((p − λ)² + q² + r² + s²)² = 0 when (p − λ)² = −q² − r² − s², i.e. p − λ = ±i√(q² + r² + s²), so λ = p ± i√(q² + r² + s²). Each of these eigenvalues has algebraic multiplicity 2 (if q = r = s = 0, then λ = p has algebraic multiplicity 4).

e. By part a we can write A = √(p² + q² + r² + s²) S, where S = (1/√(p² + q² + r² + s²)) A is orthogonal. Therefore, ‖Ax‖ = √(p² + q² + r² + s²) ‖Sx‖ = √(p² + q² + r² + s²) ‖x‖.

f. Let A be the matrix of this form with p = 3, q = 3, r = 4, s = 5, and let x = [1; 2; 4; 4]; then Ax = [39; 13; 18; 13]. By part e, ‖Ax‖² = (3² + 3² + 4² + 5²) ‖x‖², or

39² + 13² + 18² + 13² = (3² + 3² + 4² + 5²)(1² + 2² + 4² + 4²), as desired.

g. Any positive integer m can be written as m = p₁p₂ ⋯ pₙ, a product of primes. Using part f repeatedly, we see that the numbers p₁, p₁p₂, p₁p₂p₃, ..., p₁p₂ ⋯ pₙ₋₁, and finally m = p₁ ⋯ pₙ, can be expressed as sums of four squares.

41. Find the 2 × 2 matrix A that transforms [8; 6] into [3; −4] and [3; −4] into [−8; −6]:

A [8 3; 6 −4] = [3 −8; −4 −6], so A = [3 −8; −4 −6] [8 3; 6 −4]⁻¹ = (1/50) [−36 73; −52 36].

There are many other correct answers.
42. a. x(t + 1) = x(t) − ky(t) and y(t + 1) = kx(t + 1) + y(t) = k(x(t) − ky(t)) + y(t) = kx(t) + (1 − k²)y(t), so

[x(t + 1); y(t + 1)] = [1 −k; k 1 − k²] [x(t); y(t)].

b. fA(λ) = λ² − (2 − k²)λ + 1 = 0. The discriminant is (2 − k²)² − 4 = −4k² + k⁴ = k²(k² − 4), which is negative if k is a small positive number (k < 2). Therefore, the eigenvalues are complex. By Fact 7.6.4 the trajectory will be an ellipse, since det(A) = 1.
True or False
1. T, by Fact 7.2.2

2. T, by Definition 7.2.3

3. F; for [1 1; 0 1], the eigenvalue 1 has geometric multiplicity 1 and algebraic multiplicity 2.

4. T, by Fact 7.4.3

5. T; A = AIₙ = A[e₁ ... eₙ] = [λ₁e₁ ... λₙeₙ] is diagonal.

6. T; If Av = λv, then A³v = λ³v.

7. T; Consider a diagonal 5 × 5 matrix with only two distinct diagonal entries.

8. F, by Fact 7.2.7

9. T, by Summary 7.1.5

10. T, by Fact 7.2.4

11. F; Consider A = [1 1; 0 1].

12. F; Let A = [2 0; 0 3] with λ = 2 and B = [4 0; 0 5] with μ = 5, for example. Then λμ = 10 isn't an eigenvalue of AB = [8 0; 0 15].

13. T; If Av = 3v, then A²v = 9v.

14. T; Construct an eigenbasis by concatenating a basis of V with a basis of V⊥.

15. T, by Fact 7.5.5

16. F; Let A = [1 1; 0 1], for example.

17. T, by Example 6 of Section 7.5

18. T; The geometric multiplicity of eigenvalue 0 is dim(ker A) = n − rank(A).

19. T; If S⁻¹AS = D, then SᵀAᵀ(Sᵀ)⁻¹ = D.
20. F; Let A = [2 0 0; 0 3 0; 0 0 0] and B = [1 0 0; 0 4 0; 0 0 0], for example.

21. F; Consider A = [1 0; 1 1], for example.

22. T, by Fact 7.5.5

23. F; Let A = [0 1; 0 0], with A² = [0 0; 0 0].

24. F; Let A = [1 1; 0 0] and B = [0 1; 0 1], with AB = [0 2; 0 0], for example.

25. T; If S⁻¹AS = D, then S⁻¹A⁻¹S = D⁻¹ is diagonal.

26. F; the equation det(A) = det(Aᵀ) holds for all square matrices, by Fact 6.2.7.

27. T; The sole eigenvalue, 7, must have geometric multiplicity 3.

28. F; Let A = [1 1; 0 0] and B = [0 1; 0 1], with A + B = [1 2; 0 1], for example.

29. F; Consider the zero matrix.

30. T; If Av = λv and Bv = μv, then (A + B)v = Av + Bv = λv + μv = (λ + μ)v.

31. F; Consider the identity matrix.

32. T; Both A and B are similar to [1 0 0; 0 2 0; 0 0 3], by Fact 7.4.1.

33. F; Let A = [1 1; 0 1] and v = [1; 0], for example.

34. F; Consider [1 1; 0 1], for example.

35. F; Let A = [2 0; 0 3], v = [1; 0], and w = [0; 1], for example.

36. T; A nonzero vector on L and a nonzero vector on L⊥ form an eigenbasis.

37. T; The eigenvalues are 3 and 2.

38. T; We will use Fact 7.3.7 throughout: the geometric multiplicity of an eigenvalue is at most its algebraic multiplicity.
Now let's show the contrapositive of the given statement: if the geometric multiplicity of some eigenvalue is less than its algebraic multiplicity, then the matrix A fails to be diagonalizable. Indeed, in this case the sum of the geometric multiplicities of all the eigenvalues is less than the sum of their algebraic multiplicities, which in turn is n (where A is an n × n matrix). Thus the geometric multiplicities do not add up to n, so that A fails to be diagonalizable, by Fact 7.3.4b.

39. T; Consider the proof of Fact 7.3.4a.

40. T; An eigenbasis for A is an eigenbasis for A + 4I₄ as well.

41. F; Consider a rotation through π/2.

42. T; Suppose [A A; 0 A] [v; w] = [A(v + w); Aw] = λ [v; w] for a nonzero vector [v; w]. If w is nonzero, then it is an eigenvector of A with eigenvalue λ; otherwise v is such an eigenvector.

43. F; Consider [1 0; 0 1] and [1 1; 0 1].
44. T; Note that S⁻¹AS = D, so that D⁴ = S⁻¹A⁴S = S⁻¹0S = 0, and therefore D = 0 (since D is diagonal) and A = SDS⁻¹ = 0.

45. T; There is an eigenbasis v₁, ..., vₙ, and we can write v = c₁v₁ + ⋯ + cₙvₙ. The vectors cᵢvᵢ are either eigenvectors or zero.

46. T; If Av = λv and Bv = μv, then ABv = λμv.

47. T, by Fact 7.3.6a

48. F; Let A = [1 0; 0 0], for example.

49. T; Recall that the rank is the dimension of the image. If v is in the image of A, then Av is in the image of A as well, so that Av is parallel to v.

50. F; Consider [0 1; 0 0].

51. T; If Av = λv for a nonzero v, then A⁴v = λ⁴v = 0, so that λ⁴ = 0 and λ = 0.

52. F; Let A = [1 1; 0 0] and B = [0 1; 0 1], for example.

53. T; If the eigenvalue λ associated with v is 0, then Av = 0, so that v is in the kernel of A; otherwise v = A(λ⁻¹v), so that v is in the image of A.
54. T; Either there are two distinct real eigenvalues, or the matrix is of the form kI₂.

55. T; Either Au = 3u or Au = 4u.

56. T; Note that (uuᵀ)u = ‖u‖²u.

57. T; Suppose Avᵢ = λᵢvᵢ and Bvᵢ = μᵢvᵢ, and let S = [v₁ ... vₙ]. Then ABS = BAS = [λ₁μ₁v₁ ... λₙμₙvₙ], so that AB = BA.

58. T; Note that a nonzero vector v = [p; q] is an eigenvector of A = [a b; c d] if (and only if) Av = [ap + bq; cp + dq] is parallel to v = [p; q], that is, if det[p, ap + bq; q, cp + dq] = 0. Check that this is the case if (and only if) v is an eigenvector of adj(A) (use the same criterion).
Northeastern  CRJU  360
W HAT D O THE P OLICE D O C ONTROL D ELINQUENCY ?TOD ISCUSSION Q UESTIONS 1. Describe the major characteristics of preventative patrol. 2. Discuss the general effectiveness of preventative patrol in controlling crime. 3. Describe the three m
UT Arlington  ENGL  1302
Tarrant County College District District Master Syllabus At Tarrant County College the District master syllabus documents the content of the course. A District master syllabus is required for every course offered. District master syllabi are prepared
Northeastern  CRJU  360
D OESTHE SYSTEM DISCRIMINATE ?Agnew (22)I. C HARGES OF D ISCRIMINATION IN THE JJSDiscriminates against certain groups 1. race and ethnic groups 2. also class and gender groupsDiscrimination in terms of conflict and labeling theories
Clemson  IE  456
ARTICLE IN PRESSInt. J. Production Economics 89 (2004) 119129A methodology to support decision making across the supply chain of an industrial distributorIsmail Erola, William G. Ferrell Jr.b,*b a Department of Business Administration, Abant Iz
Clemson  IE  456
Sloan Management ReviewSpring 1992 v33 n3 p65(9)Page 1Managing supply chain inventory: pitfalls and opportunities.by Hau L. Lee and Corey BillingtonDo you consider distribution and inventory costs when you design products? Can you keep your c
Clemson  IE  456
Copyright 2000 All Rights ReservedCopyright 2000 All Rights ReservedCopyright 2000 All Rights ReservedCopyright 2000 All Rights ReservedCopyright 2000 All Rights ReservedCopyright 2000 All Rights ReservedCopyright 2000 All Rights R
Clemson  IE  456
Optimal inventory policies for singlevendor, singlebuyer systems with quality considerations Apichai Ritvirool Department of Industrial Engineering Faculty of Engineering Naresuan University Phitsanulok, 65000 Thailand William G. Ferrell, Jr.* Depa
Georgia Tech  MATH  1501
Georgia Tech  MATH  1501
Georgia Tech  MATH  1501
Georgia Tech  MATH  1501
MATH 1501 Sample Quiz Questions for Test 1, Fall 2007 WTTNote 1: There are approximately two to three times as many problems listed here as you can expect on an hour exam, but this more comprehensive version should be of greater assistance to studen
Hobart and William Smith Colleges  EURO  101
The Egyptians: The Gift of the Nile1. Ancient Egypt: a ribbon of territories along the Nile, some 750 miles long, from the last cataracts to the Mediterranean sea. It flows from South to north and is 5 to 15 miles wide. Herodotus called Egypt the g
Drexel  MEM  423
Drexel  MEM  330
1Tension, Compression, and ShearNormal Stress and StrainProblem 1.21 A solid circular post ABC (see figure) supports a load P1 2500 lb acting at the top. A second load P2 is uniformly distributed around the shelf at B. The diameters of the upper
Yale  PLCS  118
Study GuidePrepared by Rosanna Forrest, DramaturgAbout The PlaywrightKate has received a Jeff Citation, an After Dark Award, the Kennedy Center's Roger L. Stevens Award and a finalist position for the international Susan Smith Blackburn Prize for
Des Moines CC  ECN  131
Elasticity of Demand and Supplyuppose you're the owner of a popular pizzeria. You're considering raising the price of your doublecheese deluxe by $1but how will your customers react? You know that, according to the law of demand, when a good's pr
Des Moines CC  ECN  131
Consumer Choice and the Theory of DemandAs a consumer, you make daily decisions about how to spend your limited income. Whatmotivates your choices? Why do you buy another Tshirt? What do you sacrifice by doing so? In analyzing consumer choices, e
Des Moines CC  ECN  131
The Business Firm:A Prologue to the Theory of Market SupplyDo you work part time or have a summer job? If so, you know you don't work just for thefun of it. You may enjoy your job, and it may be enabling you to acquire skills that will be valuabl
Des Moines CC  ECN  131
Production and CostHave you ever had a brilliant idea for a product or service that would make you amillionaire? Chances are, if you took your idea beyond the fantasy stage, you would find that when it comes to producing an item, there's no free l
Thomas Edison State  ACT  421
Directed Independent Adult LearningCOURSE SYLLABUSFEDERAL INCOME TAXATION ACC421GSCourse Syllabus FEDERAL INCOME TAXATION ACC421GS Thomas Edison State College January 2008Course EssentialsFederal Income Taxation is a onesemester course
Cornell  CS  101
Answers to selfhelp exercise on drawing objects of subclasses1. Note, when writing a method in an object, it is best to include its complete signature, that is, its name followed by the types of its parameters, in parentheses and separated by comma
Cornell  CS  101
Answers to selfhelp exercise including a partition for class Object when drawing an objectb1 equals(Object) toString() Objectb2 equals(Object) toString() ObjectEx h _3_ Ex(int) getH() toString()Ex h _3_ Ex(int) getH() toString() Subk_6_ Su
Cornell  CS  101
Constrained ArrayLists a0 0 a1 1 a4 2 a2 ArrayListObjecta0 String 0 a1 1 a4Constrained ArrayLists ArrayListInteger Integer Integera1 CS100M a4Object Objecta1 0 a4 7 a2 Integer Integer5 a2 .Integer2 a2JFrame b a05IntegerEach
Cornell  CS  101
Declare a local variable where it logically belongs Generally, close to its first use/* Sort array segment b[0.n] */ public void selectionSort(int[] b, int n) { int temp; int j; / inv: b[0.k1 is sorted & b[0.k1] <= b[k.n] for (int k= 0; k < n; k=
UCLA  MGMT  1A
Management 1A Summer 2004 Danny S. Litt EXAM 1 SOLUTION Name: _Student ID No. _I agree to have my grade posted by Student ID Number._(Signature)PROBLEM 1 2 3 4 5 6 7 8 9 10 TOTALPOINTS 20 20 20 20 20 20 20 20 20 20 200SCOREMANAGEMENT 1A
UCLA  MGMT  1A
Management 1A Summer 2004 Danny S. Litt EXAM 2 SOLUTION Name: _Student ID No. _I agree to have my grade posted by Student ID Number._(Signature)PROBLEM 1 2 3 4 5 6 7 8 TOTALPOINTS 20 25 20 20 30 30 30 25 200SCOREMANAGEMENT 1A Problem 1
UCLA  MGMT  1A
Management 1A Spring 2004 Danny S. Litt EXAM 3 SOLUTION Name: _Student ID No. _I agree to have my grade posted by Student ID Number._(Signature)PROBLEM 1 2 3 4 5 6 7 8 9 10 TOTALPOINTS 20 20 20 20 20 20 20 20 20 20 200SCOREMANAGEMENT 1A
Cornell  MATH  1120
Pacific  BIOL  51
Chapter 19: Eukaryotic Genomes: Organization, Regulation, and EvolutionMULTIPLE CHOICE 1) The condensed chromosomes observed in mitosis include all of the following structures except A) nucleosomes. B) 30nm fibers. C) 300nm fibers. D) looped domai
WVU  IENG  213
INSTRUCTOR'S SOLUTION MANUALKEYING YE AND SHARON MYERSfor PROBABILITY & STATISTICSFOR ENGINEERS & SCIENTISTSEIGHTH EDITIONWALPOLE, MYERS, MYERS, YEContents1 Introduction to Statistics and Data Analysis 2 Probability 3 Random Variables and
WVU  MATH  261
 CHAPTER 1. Chapter OneSection 1.1 1.For C "& , the slopes are negative, and hence the solutions decrease. For C "& , the slopes are positive, and hence the solutions increase. The equilibrium solution appears to be Ca>b oe "& , to which all
WVU  MATH  261
 CHAPTER 2. Chapter TwoSection 2.1 1a+ba,b Based on the direction field, all solutions seem to converge to a specific increasing function. a b The integrating factor is .a>b oe /$> , and hence Ca>b oe >$ "* /#>  /$> It follows that all s
WVU  MAE  241
Engineering Mechanics  StaticsChapter 2Problem 21 Determine the magnitude of the resultant force FR = F1 + F 2 and its direction, measured counterclockwise from the positive x axis. Given: F 1 = 600 N F 2 = 800 N F 3 = 450 N = 45 deg = 60 de
WVU  MAE  241
Engineering Mechanics  StaticsChapter 1Problem 11 Represent each of the following combinations of units in the correct SI form using an appropriate prefix: (a) m/ms (b) km (c) ks/mg (d) km N Units Used: N = 106N kmkm = 1096Gs = 10 s
WVU  MAE  241
Engineering Mechanics  StaticsChapter 3Problem 31 Determine the magnitudes of F1 and F2 so that the particle is in equilibrium. Given: F = 500 N 1 = 45 deg 2 = 30degSolution: Initial Guesses F 1 = 1N Given+ Fx = 0; +F 2 = 1NF 1 cos (
WVU  MATH  261
 CHAPTER 5. Chapter FiveSection 5.1 1. Apply the ratio test : lim aB $b8" k a B $b 8 kHence the series converges absolutely for kB $k " . The radius of convergence is 3 oe " . The series diverges for B oe # and B oe % , since the nth ter