CHAPTER 9. Linear Systems with Constant Coefficients

Section 1. Overview of the Technique
1.1. If
        A = [ 12  14 ]
            [ -7  -9 ],
then the characteristic polynomial is
        p(λ) = det(A - λI)
             = det [ 12 - λ    14    ]
                   [  -7     -9 - λ  ]
             = (12 - λ)(-9 - λ) + 98
             = λ² - 3λ - 10
             = (λ - 5)(λ + 2).
Thus, the eigenvalues are λ1 = 5 and λ2 = -2.
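Hand computations like the one above are easy to spot-check numerically. The sketch below (assuming NumPy is available) recovers both the characteristic polynomial and the eigenvalues of the matrix in Exercise 1.1:

```python
import numpy as np

# Matrix from Exercise 1.1
A = np.array([[12.0, 14.0],
              [-7.0, -9.0]])

# np.poly returns the characteristic polynomial coefficients,
# highest power first: lambda^2 - 3*lambda - 10
p = np.poly(A)
print(np.round(p, 6))

# The roots of p are the eigenvalues, -2 and 5
eigvals = np.sort(np.linalg.eigvals(A))
print(np.round(eigvals, 6))
```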
1.2. If
        A = [ 2  0 ]
            [ 0  2 ],
then the characteristic polynomial is
        p(λ) = det(A - λI)
             = det [ 2 - λ    0    ]
                   [   0    2 - λ  ]
             = (2 - λ)(2 - λ).
Thus, λ = 2 is a repeated eigenvalue of algebraic multiplicity 2.
1.3. If
        A = [ -2   3 ]
            [  0  -5 ],
then the characteristic polynomial is
        p(λ) = det(A - λI)
             = det [ -2 - λ    3     ]
                   [   0     -5 - λ  ]
             = (-2 - λ)(-5 - λ)
             = (λ + 2)(λ + 5).
Thus, the eigenvalues are λ1 = -2 and λ2 = -5.
1.4. If
        A = [ -4  1 ]
            [ -2  1 ],
then the characteristic polynomial is
        p(λ) = det(A - λI)
             = det [ -4 - λ    1    ]
                   [  -2     1 - λ  ]
             = (-4 - λ)(1 - λ) + 2
             = λ² + 3λ - 2.
The quadratic formula provides λ1 = (-3 - √17)/2 and λ2 = (-3 + √17)/2.
1.5. If
        A = [  5   3 ]
            [ -6  -4 ],
then the characteristic polynomial is
        p(λ) = det(A - λI)
             = det [ 5 - λ     3    ]
                   [  -6    -4 - λ  ]
             = (5 - λ)(-4 - λ) + 18
             = λ² - λ - 2
             = (λ - 2)(λ + 1).
Thus, the eigenvalues are λ1 = 2 and λ2 = -1.
1.6. If
        A = [ -2  5 ]
            [  0  2 ],
then the characteristic polynomial is
        p(λ) = det(A - λI)
             = det [ -2 - λ    5    ]
                   [   0     2 - λ  ]
             = (-2 - λ)(2 - λ).
Thus, the eigenvalues are λ1 = -2 and λ2 = 2.
1.7. If
        A = [ -3   0 ]
            [  0  -3 ],
then the characteristic polynomial is
        p(λ) = det(A - λI)
             = det [ -3 - λ    0     ]
                   [   0     -3 - λ  ]
             = (-3 - λ)(-3 - λ)
             = (λ + 3)².
Thus, λ = -3 is a repeated eigenvalue of algebraic multiplicity 2.
1.8. If
        A = [  6  10 ]
            [ -5  -9 ],
then the characteristic polynomial is
        p(λ) = det(A - λI)
             = det [ 6 - λ    10    ]
                   [  -5    -9 - λ  ]
             = (6 - λ)(-9 - λ) + 50
             = λ² + 3λ - 4
             = (λ + 4)(λ - 1).
Thus, the eigenvalues are λ1 = -4 and λ2 = 1.
1.9. If
        A = [ 1  2  3 ]
            [ 0  0  2 ]
            [ 0  3  1 ],
then
        p(λ) = det [ 1 - λ    2      3    ]
                   [   0     -λ      2    ]
                   [   0      3    1 - λ  ].
Expanding down the first column,
        p(λ) = (1 - λ) det [ -λ     2    ]
                           [  3   1 - λ  ]
             = (1 - λ)(-λ(1 - λ) - 6)
             = (1 - λ)(λ² - λ - 6)
             = (1 - λ)(λ - 3)(λ + 2).
Thus, the eigenvalues are λ1 = 1, λ2 = 3, and λ3 = -2.
1.10. If
        A = [  1   0   0 ]
            [  4   3   2 ]
            [ -8  -4  -3 ],
then
        p(λ) = det [ 1 - λ    0      0     ]
                   [   4    3 - λ    2     ]
                   [  -8     -4    -3 - λ  ].
Expanding across the first row,
        p(λ) = (1 - λ) det [ 3 - λ    2     ]
                           [  -4    -3 - λ  ]
             = (1 - λ)((3 - λ)(-3 - λ) + 8)
             = (1 - λ)(λ² - 1)
             = -(λ - 1)²(λ + 1).
Thus, the eigenvalues are -1 and 1, the latter a repeated eigenvalue of algebraic multiplicity 2.
1.11. If
        A = [ -1   -4  -2 ]
            [  0    1   1 ]
            [ -6  -12   2 ],
then
        p(λ) = det [ -1 - λ    -4     -2    ]
                   [   0      1 - λ    1    ]
                   [  -6      -12    2 - λ  ].
Expanding down the first column,
        p(λ) = (-1 - λ) det [ 1 - λ    1    ]  - 6 det [  -4     -2 ]
                            [  -12   2 - λ  ]          [ 1 - λ    1 ]
             = (-1 - λ)((1 - λ)(2 - λ) + 12) - 6(-4 + 2(1 - λ))
             = (-1 - λ)(λ² - 3λ + 14) - 6(-2 - 2λ)
             = (-1 - λ)(λ² - 3λ + 14) - 12(-1 - λ)
             = (-1 - λ)(λ² - 3λ + 14 - 12)
             = -(λ + 1)(λ² - 3λ + 2)
             = -(λ + 1)(λ - 1)(λ - 2).
Thus, the eigenvalues are λ1 = -1, λ2 = 1, and λ3 = 2.
1.12. If
        A = [  1   0  -1 ]
            [ -2  -1   3 ]
            [ -4   0   4 ],
then
        p(λ) = det [ 1 - λ     0      -1    ]
                   [  -2    -1 - λ     3    ]
                   [  -4       0     4 - λ  ].
Expanding down the second column,
        p(λ) = (-1 - λ) det [ 1 - λ   -1    ]
                            [  -4    4 - λ  ]
             = (-1 - λ)((1 - λ)(4 - λ) - 4)
             = -(λ + 1)(λ² - 5λ)
             = -λ(λ + 1)(λ - 5).
Thus, the eigenvalues are λ1 = 0, λ2 = -1, and λ3 = 5.
1.13. We used a computer to calculate the characteristic polynomial of matrix A,
        pA(λ) = -λ³ + 3λ² + 13λ - 15.
A computer was used to calculate the eigenvalues: λ1 = -3, λ2 = 1, and λ3 = 5. Next, a computer was used to draw the plot of pA.
        [Figure: graph of pA(λ) for -4 ≤ λ ≤ 6]
The graph of the characteristic polynomial appears to cross the horizontal axis at -3, 1, and 5. Thus, the zeros of the characteristic polynomial pA are the eigenvalues of the matrix A. In a similar manner, the characteristic polynomial of matrix B is
        pB(λ) = -λ³ - 3λ² + 13λ + 15.
A computer was used to calculate the eigenvalues: λ1 = -5, λ2 = -1, and λ3 = 3. A computer-drawn graph of pB follows.
        [Figure: graph of pB(λ) for -6 ≤ λ ≤ 4]
The graph of the characteristic polynomial pB crosses the horizontal axis at -5, -1, and 3. Again, the zeros of the polynomial are the eigenvalues.
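The observation that the zeros of pA and pB are the eigenvalues can be replayed numerically; a small sketch, assuming NumPy is available:

```python
import numpy as np

# Characteristic polynomials from the text, coefficients highest power first
pA = [-1.0, 3.0, 13.0, -15.0]    # -x^3 + 3x^2 + 13x - 15
pB = [-1.0, -3.0, 13.0, 15.0]    # -x^3 - 3x^2 + 13x + 15

# The zeros of each polynomial are the eigenvalues found by the computer
print(np.sort(np.roots(pA).real))   # eigenvalues of A: -3, 1, 5
print(np.sort(np.roots(pB).real))   # eigenvalues of B: -5, -1, 3
```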
1.14. The matrix
        A = [  5   4 ]
            [ -8  -7 ]
has characteristic polynomial p(λ) = λ² + 2λ - 3. Note that
        p(A) = A² + 2A - 3I
             = [ -7  -8 ]  +  [  10    8 ]  +  [ -3   0 ]
               [ 16  17 ]     [ -16  -14 ]     [  0  -3 ]
             = [ 0  0 ]
               [ 0  0 ].
1.15. Using MATLAB, for example, you would execute the commands
        >> A=[12,14;-7,-9]; p=poly(A); polyvalm(p,A)
for the matrix in Exercise 9.1.1. This will result in the zero matrix. A similar command works for the matrices in the other problems.
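A rough NumPy equivalent of the MATLAB commands, for readers without MATLAB (np.poly plays the role of poly, and the short Horner loop below plays the role of polyvalm):

```python
import numpy as np

# Matrix from Exercise 9.1.1
A = np.array([[12.0, 14.0],
              [-7.0, -9.0]])

p = np.poly(A)                  # characteristic polynomial coefficients

# Evaluate p with the matrix A substituted for the variable (Horner's method)
pA = np.zeros_like(A)
for c in p:
    pA = pA @ A + c * np.eye(2)

print(np.round(pA, 10))         # the zero matrix, as Cayley-Hamilton predicts
```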
1.16. If
        A = [  2   0 ]
            [ -4  -2 ],
then
        p(λ) = det [ 2 - λ     0    ]
                   [  -4    -2 - λ  ]
             = (2 - λ)(-2 - λ).
Thus, the eigenvalues are λ1 = 2 and λ2 = -2. For λ1 = 2,
        A - 2I = [  0   0 ]
                 [ -4  -4 ]
and v1 = (1, -1)^T is an eigenvector. Thus,
        y1(t) = e^{2t} (1, -1)^T
is a solution. For λ2 = -2,
        A + 2I = [  4  0 ]
                 [ -4  0 ],
and v2 = (0, 1)^T is an eigenvector. Thus,
        y2(t) = e^{-2t} (0, 1)^T
is a solution. Because y1(0) = (1, -1)^T and y2(0) = (0, 1)^T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
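The two eigenpairs, and the independence of y1(0) and y2(0), can be confirmed with a few lines of NumPy:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [-4.0, -2.0]])
v1 = np.array([1.0, -1.0])   # eigenvector for lambda = 2
v2 = np.array([0.0, 1.0])    # eigenvector for lambda = -2

print(np.allclose(A @ v1, 2 * v1))    # True
print(np.allclose(A @ v2, -2 * v2))   # True

# Nonzero determinant of [v1 v2] means y1(0) and y2(0) are independent
print(np.linalg.det(np.column_stack([v1, v2])))
```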
1.17. If
        A = [ 6  -8 ]
            [ 0  -2 ],
then
        p(λ) = det [ 6 - λ    -8    ]
                   [   0    -2 - λ  ]
             = -12 - 6λ + 2λ + λ²
             = λ² - 4λ - 12
             = (λ - 6)(λ + 2).
Thus, the eigenvalues are λ1 = 6 and λ2 = -2. For λ1 = 6,
        A - 6I = [ 0  -8 ]
                 [ 0  -8 ].
It is easily seen that the nullspace of A - 6I is generated by the vector (1, 0)^T. Thus,
        y1(t) = e^{6t} (1, 0)^T
is a solution. For λ2 = -2,
        A + 2I = [ 8  -8 ]
                 [ 0   0 ].
It is easily seen that the nullspace of A + 2I is generated by the vector (1, 1)^T. Thus,
        y2(t) = e^{-2t} (1, 1)^T
is a solution. Because y1(0) = (1, 0)^T and y2(0) = (1, 1)^T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.18. If
        A = [ -3  -4 ]
            [  2   3 ],
then
        p(λ) = det [ -3 - λ   -4    ]
                   [   2     3 - λ  ]
             = (-3 - λ)(3 - λ) + 8
             = λ² - 1
             = (λ + 1)(λ - 1).
Thus, λ1 = -1 and λ2 = 1 are eigenvalues. For λ1 = -1,
        A + I = [ -2  -4 ]
                [  2   4 ]
and v1 = (-2, 1)^T is an eigenvector. Thus,
        y1(t) = e^{-t} (-2, 1)^T
is a solution. For λ2 = 1,
        A - I = [ -4  -4 ]
                [  2   2 ]
and v2 = (1, -1)^T is an eigenvector. Thus,
        y2(t) = e^{t} (1, -1)^T
is a solution. Because y1(0) = (-2, 1)^T and y2(0) = (1, -1)^T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.19. If
        A = [ -1   0 ]
            [  0  -1 ],
then
        p(λ) = det [ -1 - λ     0    ]
                   [   0     -1 - λ  ]
             = (-1 - λ)²
             = (1 + λ)².
Thus, λ = -1 is an eigenvalue. For λ = -1,
        A + I = [ 0  0 ]
                [ 0  0 ].
It is easily seen that both (1, 0)^T and (0, 1)^T are elements of the nullspace of A + I. Thus,
        y1(t) = e^{-t} (1, 0)^T   and   y2(t) = e^{-t} (0, 1)^T
are solutions. Because y1(0) = (1, 0)^T and y2(0) = (0, 1)^T are independent, y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.20. If
        A = [ 3  -2 ]
            [ 4  -3 ],
then
        p(λ) = det [ 3 - λ   -2    ]
                   [  4    -3 - λ  ]
             = (3 - λ)(-3 - λ) + 8
             = λ² - 1
             = (λ + 1)(λ - 1).
Thus, λ1 = -1 and λ2 = 1 are eigenvalues. For λ1 = -1,
        A + I = [ 4  -2 ]
                [ 4  -2 ]
and v1 = (1, 2)^T is an eigenvector. Thus,
        y1(t) = e^{-t} (1, 2)^T
is a solution. For λ2 = 1,
        A - I = [ 2  -2 ]
                [ 4  -4 ]
and v2 = (1, 1)^T is an eigenvector. Thus,
        y2(t) = e^{t} (1, 1)^T
is a solution. Because y1(0) = (1, 2)^T and y2(0) = (1, 1)^T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.21. If
        A = [  7  10 ]
            [ -5  -8 ],
then
        p(λ) = det [ 7 - λ    10    ]
                   [  -5    -8 - λ  ]
             = -56 - 7λ + 8λ + λ² + 50
             = λ² + λ - 6
             = (λ + 3)(λ - 2).
Thus, λ1 = -3 and λ2 = 2 are eigenvalues. For λ1 = -3,
        A + 3I = [ 10  10 ]
                 [ -5  -5 ].
It is easily seen that the nullspace of A + 3I is generated by (1, -1)^T. Thus,
        y1(t) = e^{-3t} (1, -1)^T
is a solution. For λ2 = 2,
        A - 2I = [  5   10 ]
                 [ -5  -10 ].
It is easily seen that the nullspace of A - 2I is generated by (2, -1)^T. Thus,
        y2(t) = e^{2t} (2, -1)^T
is a solution. Because y1(0) = (1, -1)^T and y2(0) = (2, -1)^T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.22. If
        A = [ -3  14 ]
            [  0   4 ],
then
        p(λ) = det [ -3 - λ   14    ]
                   [   0     4 - λ  ]
             = (-3 - λ)(4 - λ).
Thus, λ1 = -3 and λ2 = 4 are eigenvalues. For λ1 = -3,
        A + 3I = [ 0  14 ]
                 [ 0   7 ]
and v1 = (1, 0)^T is an eigenvector. Thus,
        y1(t) = e^{-3t} (1, 0)^T
is a solution. For λ2 = 4,
        A - 4I = [ -7  14 ]
                 [  0   0 ]
and v2 = (2, 1)^T is an eigenvector. Thus,
        y2(t) = e^{4t} (2, 1)^T
is a solution. Because y1(0) = (1, 0)^T and y2(0) = (2, 1)^T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.23. If
        A = [ 5  -4 ]
            [ 8  -7 ],
then
        p(λ) = det [ 5 - λ   -4    ]
                   [  8    -7 - λ  ]
             = -35 - 5λ + 7λ + λ² + 32
             = λ² + 2λ - 3
             = (λ + 3)(λ - 1).
Thus, λ1 = -3 and λ2 = 1 are eigenvalues. For λ1 = -3,
        A + 3I = [ 8  -4 ]
                 [ 8  -4 ].
It is easily seen that the nullspace of A + 3I is generated by (1, 2)^T. Thus,
        y1(t) = e^{-3t} (1, 2)^T
is a solution. For λ2 = 1,
        A - I = [ 4  -4 ]
                [ 8  -8 ].
It is easily seen that the nullspace of A - I is generated by (1, 1)^T. Thus,
        y2(t) = e^{t} (1, 1)^T
is a solution. Because y1(0) = (1, 2)^T and y2(0) = (1, 1)^T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.24. If
        A = [ -5   0  -6 ]
            [ 26  -3  38 ]
            [  4   0   5 ],
then, expanding down the second column,
        p(λ) = det [ -5 - λ     0     -6    ]
                   [  26     -3 - λ   38    ]
                   [   4        0    5 - λ  ]
             = (-3 - λ) det [ -5 - λ   -6    ]
                            [   4     5 - λ  ]
             = (-3 - λ)((-5 - λ)(5 - λ) + 24)
             = (-3 - λ)(λ² - 1)
             = -(λ + 3)(λ + 1)(λ - 1).
Thus, λ1 = -3, λ2 = -1, and λ3 = 1 are eigenvalues. For λ1 = -3,
        A + 3I = [ -2  0  -6 ]
                 [ 26  0  38 ]
                 [  4  0   8 ],
which has reduced row echelon form
        [ 1  0  0 ]
        [ 0  0  1 ]
        [ 0  0  0 ].
Thus, v1 = (0, 1, 0)^T is an eigenvector and
        y1(t) = e^{-3t} (0, 1, 0)^T
is a solution. For λ2 = -1,
        A + I = [ -4   0  -6 ]
                [ 26  -2  38 ]
                [  4   0   6 ],
which has reduced row echelon form
        [ 1  0  3/2 ]
        [ 0  1  1/2 ]
        [ 0  0   0  ].
Thus, v2 = (-3, -1, 2)^T is an eigenvector and
        y2(t) = e^{-t} (-3, -1, 2)^T
is a solution. For λ3 = 1,
        A - I = [ -6   0  -6 ]
                [ 26  -4  38 ]
                [  4   0   4 ],
which has reduced row echelon form
        [ 1  0   1 ]
        [ 0  1  -3 ]
        [ 0  0   0 ].
Thus, v3 = (-1, 3, 1)^T is an eigenvector and
        y3(t) = e^{t} (-1, 3, 1)^T
is a solution. Because
        det[y1(0), y2(0), y3(0)] = det [ 0  -3  -1 ]
                                       [ 1  -1   3 ]
                                       [ 0   2   1 ]  = 1,
the solutions y1(t), y2(t), and y3(t) are independent for all t and form a fundamental set of solutions.
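For 3 × 3 problems like this one, a quick numerical cross-check (assuming NumPy is available):

```python
import numpy as np

A = np.array([[-5.0, 0.0, -6.0],
              [26.0, -3.0, 38.0],
              [4.0, 0.0, 5.0]])

# Eigenvalues found by hand: -3, -1, 1
print(np.round(np.sort(np.linalg.eigvals(A).real), 6))

# v2 = (-3, -1, 2)^T should be an eigenvector for lambda = -1
v2 = np.array([-3.0, -1.0, 2.0])
print(np.allclose(A @ v2, -v2))   # True
```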
1.25. If
        A = [ -1   0   0 ]
            [  2  -5  -6 ]
            [ -2   3   4 ],
then, expanding across the first row,
        p(λ) = det [ -1 - λ     0      0    ]
                   [   2     -5 - λ   -6    ]
                   [  -2       3     4 - λ  ]
             = (-1 - λ) det [ -5 - λ   -6    ]
                            [   3     4 - λ  ]
             = (-1 - λ)(-20 + 5λ - 4λ + λ² + 18)
             = -(λ + 1)(λ² + λ - 2)
             = -(λ + 1)(λ + 2)(λ - 1).
Thus, λ1 = -1, λ2 = -2, and λ3 = 1 are eigenvalues. For λ1 = -1,
        A + I = [  0   0   0 ]
                [  2  -4  -6 ]
                [ -2   3   5 ],
which has reduced row echelon form
        [ 1  0  -1 ]
        [ 0  1   1 ]
        [ 0  0   0 ].
It is easily seen that the nullspace of A + I is generated by (1, -1, 1)^T. Thus,
        y1(t) = e^{-t} (1, -1, 1)^T
is a solution. For λ2 = -2,
        A + 2I = [  1   0   0 ]
                 [  2  -3  -6 ]
                 [ -2   3   6 ],
which has reduced row echelon form
        [ 1  0  0 ]
        [ 0  1  2 ]
        [ 0  0  0 ].
It is easily seen that the nullspace of A + 2I is generated by (0, -2, 1)^T. Thus,
        y2(t) = e^{-2t} (0, -2, 1)^T
is a solution. For λ3 = 1,
        A - I = [ -2   0   0 ]
                [  2  -6  -6 ]
                [ -2   3   3 ],
which has reduced row echelon form
        [ 1  0  0 ]
        [ 0  1  1 ]
        [ 0  0  0 ].
It is easily seen that the nullspace of A - I is generated by (0, -1, 1)^T. Thus,
        y3(t) = e^{t} (0, -1, 1)^T
is a solution. Because
        det[y1(0), y2(0), y3(0)] = det [  1   0   0 ]
                                       [ -1  -2  -1 ]
                                       [  1   1   1 ]  = -1,
the solutions y1(t), y2(t), and y3(t) are independent for all t and form a fundamental set of solutions.
1.26. If
        A = [  -1    2    0 ]
            [ -19   14   18 ]
            [  17  -11  -17 ],
then, expanding across the first row,
        p(λ) = det [ -1 - λ     2       0     ]
                   [ -19     14 - λ    18     ]
                   [  17      -11    -17 - λ  ]
             = (-1 - λ) det [ 14 - λ    18     ]  - 2 det [ -19    18     ]
                            [  -11   -17 - λ   ]          [  17  -17 - λ  ]
             = -(λ + 1)(λ² + 3λ - 40) - 2(19λ + 17)
             = -λ³ - 4λ² - λ + 6
             = -(λ - 1)(λ + 3)(λ + 2).
Thus, λ1 = 1, λ2 = -3, and λ3 = -2 are eigenvalues. For λ1 = 1,
        A - I = [  -2    2    0 ]
                [ -19   13   18 ]
                [  17  -11  -18 ],
which has reduced row echelon form
        [ 1  0  -3 ]
        [ 0  1  -3 ]
        [ 0  0   0 ].
Thus, v1 = (3, 3, 1)^T is an eigenvector and
        y1(t) = e^{t} (3, 3, 1)^T
is a solution. For λ2 = -3,
        A + 3I = [   2    2    0 ]
                 [ -19   17   18 ]
                 [  17  -11  -14 ],
which has reduced row echelon form
        [ 1  0  -1/2 ]
        [ 0  1   1/2 ]
        [ 0  0    0  ].
Thus, v2 = (1, -1, 2)^T is an eigenvector and
        y2(t) = e^{-3t} (1, -1, 2)^T
is a solution. For λ3 = -2,
        A + 2I = [   1    2    0 ]
                 [ -19   16   18 ]
                 [  17  -11  -15 ],
which has reduced row echelon form
        [ 1  0  -2/3 ]
        [ 0  1   1/3 ]
        [ 0  0    0  ].
Thus, v3 = (2, -1, 3)^T is an eigenvector and
        y3(t) = e^{-2t} (2, -1, 3)^T
is a solution. Because
        det[y1(0), y2(0), y3(0)] = det [ 3   1   2 ]
                                       [ 3  -1  -1 ]
                                       [ 1   2   3 ]  = 1,
the solutions y1(t), y2(t), and y3(t) are independent for all t and form a fundamental set of solutions.
1.27. If
        A = [ -3  0    2 ]
            [  6  3  -12 ]
            [  2  2   -6 ],
then
        p(λ) = det [ -3 - λ    0      2     ]
                   [   6     3 - λ  -12     ]
                   [   2       2    -6 - λ  ].
Expanding down the first column (the second and third cofactors combine to give 2(6 + 2λ)),
        p(λ) = (-3 - λ)((3 - λ)(-6 - λ) + 24) + 2(6 + 2λ)
             = (-3 - λ)(λ² + 3λ + 6) + 4(λ + 3)
             = (λ + 3)(-λ² - 3λ - 6 + 4)
             = -(λ + 3)(λ² + 3λ + 2)
             = -(λ + 3)(λ + 2)(λ + 1).
Thus, λ1 = -3, λ2 = -2, and λ3 = -1 are eigenvalues. For λ1 = -3,
        A + 3I = [ 0  0    2 ]
                 [ 6  6  -12 ]
                 [ 2  2   -3 ],
which has reduced row echelon form
        [ 1  1  0 ]
        [ 0  0  1 ]
        [ 0  0  0 ].
It is easily seen that the nullspace of A + 3I is generated by (-1, 1, 0)^T. Thus,
        y1(t) = e^{-3t} (-1, 1, 0)^T
is a solution. For λ2 = -2,
        A + 2I = [ -1  0    2 ]
                 [  6  5  -12 ]
                 [  2  2   -4 ],
which has reduced row echelon form
        [ 1  0  -2 ]
        [ 0  1   0 ]
        [ 0  0   0 ].
It is easily seen that the nullspace of A + 2I is generated by (2, 0, 1)^T. Thus,
        y2(t) = e^{-2t} (2, 0, 1)^T
is a solution. For λ3 = -1,
        A + I = [ -2  0    2 ]
                [  6  4  -12 ]
                [  2  2   -5 ],
which has reduced row echelon form
        [ 1  0   -1  ]
        [ 0  1  -3/2 ]
        [ 0  0    0  ].
It is easily seen that the nullspace of A + I is generated by (1, 3/2, 1)^T. Thus,
        y3(t) = e^{-t} (1, 3/2, 1)^T
is a solution. Because
β1
1
0 det[y1 (0), y2 (0), y3 (0) = det 1.28. 1
1
1 2 ββ 1
0
1 , 1 ββ 1
β1
1 , 2
0
1 β2 ββ 0
2
1 , β2
1
2 β2 ββ , β1 ββ , 2 ββ 1
β2
β1 , β1 ββ 1
β2
2 , 1
1
β2 3 ββ , β4 ββ β1
β1
1 , β1
1
2 β3 ββ , β1 ββ Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
2
1
1
0
β3 ββ ,
β2 1 0
β2 ββ ,
β1 1 Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
1
0
0
1
3 ββ ,
2
2 1.36. , 1
1
0
1
β1
3
1
β1
1 Using a computer, we ο¬nd the following eigenvalueeigenvector pairs. β2 1 ββ ,
1
β1
1.35. β1
β2
1 β2 ββ Using a computer, we ο¬nd the following eigenvalueeigenvector pairs. 1 ββ
1.34. , 1
β2
0 β2 ββ
1.33. 2
2
1 Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
3 ββ 1.32. 1
,
2 Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
β3 ββ 1.31. = Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
3 ββ 1.30. 1
3/2
1 the solutions y1 (t), y2 (t), and y3 (t) are independent for all t and form a fundamental set of solutions.
Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
0 ββ 1.29. 2
0
1 1
β1 ββ ,
3/2 1 1
β2 ββ ,
1
1 Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
0
1 β1 0
,
2 ββ 2
1 β1 ,
β1 ββ 0
1 β2 ,
1 ββ 0
1 β2
1
3 β3 1
β1 ββ 2
β2 β1 2
β4 ββ 1
2
1
1
0 ββ 1
1 556
1.37. Chapter 9. Linear Systems with Constant Coefο¬cients
Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
1
0 β1 1
4 ββ ,
1
0 1.38. 0
β2 ββ ,
β1 1
2
1
1 1
0
0 2
1
0 y2 (t) = eβ4t , β1
β1
1 , y3 (t) = eβ2t 2
1
1 2
0
β3 , y2 (t) = e2t 0
1
1 y3 (t) = eβ2t 1
β1
β2 , y3 (t) = e2t β2
1
0 , y3 (t) = eβt β1
β2
2 y3 (t) = e4t β1
1
β1 , 0
0
1 , y2 (t) = eβ3t β1
0
1 Using a computer, a fundamental set of solutions is found.
y1 (t) = e4t 1.43. y3 (t) = eβ2t , Using a computer, a fundamental set of solutions is found.
y1 (t) = eβt 1.42. 0
2
1 Using a computer, a fundamental set of solutions is found.
y1 (t) = eβ3t 1.41. y2 (t) = e2t , Using a computer, a fundamental set of solutions is found.
y1 (t) = e3t 1.40. 0
β1 ββ 1
0 Using a computer, a fundamental set of solutions is found.
y1 (t) = et 1.39. β1 2 ββ ,
β1 1 β1 2
3
0 , y2 (t) = eβ5t 1
2
β3 Using a computer, a fundamental set of solutions is found.
y1 (t) = et 1
β2
2 , y2 (t) = eβ3t 1
β2
1 1.44. Using a computer, a fundamental set of solutions is found.
0 1.45. Using a computer, a fundamental set of solutions is found.
1 , 1
1
0
y1 (t) = eβ2t , y2 (t) = e3t ,
β2 β2 β2
β1
0
0
2 β1 y3 (t) = et , y4 (t) = eβt 1
1
0
1 β1 β1 1
y1 (t) = eβ4t , y2 (t) = eβ2t ,
1
0
0
0 β1 β3/2 β1 1
y3 (t) = e2t , y4 (t) = eβt β1 β1 1
0 9.1. Overview of the Technique
1.46. 557 Using a computer, a fundamental set of solutions is found. β1 β2 Using a computer, a fundamental set of solutions is found.
0 1 1
1
y1 (t) = et , y2 (t) = eβ4t ,
1
1
1
0
2
1
1
0
y3 (t) = e3t , y4 (t) = eβ2t 0
0
2
2 1.47. 2
0
y1 (t) = eβ5t , y2 (t) = eβ2t ,
1
1
2
2
0
1
2
1
y3 (t) = e4t , y4 (t) = e2t 1
1
1
2
1.48. If v and w are eigenvectors associated with the eigenvalue λ, then
        Av = λv   and   Aw = λw.
Thus, if y = av + bw, then
        Ay = A(av + bw)
           = A(av) + A(bw)
           = a(Av) + b(Aw)
           = a(λv) + b(λw)
           = λ(av + bw)
           = λy.
Thus, y = av + bw (provided y ≠ 0) is also an eigenvector associated with λ.
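A tiny numerical illustration of this fact (a hypothetical example, assuming NumPy): the matrix 2I has a two-dimensional eigenspace for λ = 2, and any combination of the standard basis eigenvectors is again an eigenvector.

```python
import numpy as np

A = 2 * np.eye(2)            # repeated eigenvalue 2, eigenspace all of R^2
v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

y = 3 * v + 4 * w            # a linear combination of eigenvectors
print(np.allclose(A @ y, 2 * y))   # True: y is also an eigenvector for lambda = 2
```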
1.49. If
        A = [ 6  -8 ]
            [ 4  -6 ],
then A has eigenvalues 2 and -2, and determinant D = -4. Note that the product of the eigenvalues equals the determinant. If
        B = [ -11  -16 ]
            [   8   13 ],
then B has eigenvalues -3 and 5, and determinant D = -15. Note that the product of the eigenvalues equals the determinant. If
        C = [  7  -21  -11 ]
            [  5  -13   -5 ]
            [ -5    9    1 ],
then C has eigenvalues 2, -3, and -4, and determinant D = 24. Note that the product of the eigenvalues equals the determinant.
1.50. In the case
        B = [ -11  -16 ]
            [   8   13 ],
the eigenvalues are λ1 = 5 and λ2 = -3. Thus,
        λ1 + λ2 = 2.
The trace of B is also
        tr(B) = -11 + 13 = 2.
Thus, the trace of matrix B equals the sum of its eigenvalues. This statement is also true when applied to the matrices A and C.
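Both observations (product of the eigenvalues equals the determinant, sum equals the trace) are easy to confirm numerically for B:

```python
import numpy as np

B = np.array([[-11.0, -16.0],
              [8.0, 13.0]])
eigs = np.linalg.eigvals(B)

print(np.round(np.prod(eigs), 6), np.round(np.linalg.det(B), 6))   # both -15
print(np.round(np.sum(eigs), 6), np.trace(B))                      # both 2
```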
1.51. If
        A = [ 2   3 ]
            [ 0  -4 ],
then the eigenvalues of A are 2 and -4. Note that the eigenvalues lie on the main diagonal. If
        B = [ 1   2  3 ]
            [ 0  -1  4 ]
            [ 0   0  5 ],
then the eigenvalues of B are 1, -1, and 5. Note that the eigenvalues lie on the main diagonal. If C is the 4 × 4 upper triangular matrix of the text, with diagonal entries 2, 3, -4, and 2, then the eigenvalues of C are 2, 3, -4, and 2. Note that the eigenvalues lie on the main diagonal. Here is an example of a lower triangular matrix.
        [ 1   0  0 ]
        [ 2  -2  0 ]
        [ 3   1  4 ]
A computer shows that the eigenvalues are 1, -2, and 4. Again, note that the main diagonal contains the eigenvalues.
1.52. Consider an n × n matrix A that is upper triangular (aij = 0 for i > j). Then
        p(λ) = det(A - λI)
             = det [ a11 - λ    a12     ···    a1n    ]
                   [    0     a22 - λ   ···    a2n    ]
                   [    ⋮        ⋮               ⋮     ]
                   [    0        0      ···  ann - λ  ].
Expanding down the first column,
        p(λ) = (a11 - λ) det [ a22 - λ    a23     ···    a2n    ]
                             [    0     a33 - λ   ···    a3n    ]
                             [    ⋮        ⋮               ⋮     ]
                             [    0        0      ···  ann - λ  ].
Expanding down the first column again,
        p(λ) = (a11 - λ)(a22 - λ) det [ a33 - λ    a34     ···    a3n    ]
                                      [    0     a44 - λ   ···    a4n    ]
                                      [    ⋮        ⋮               ⋮     ]
                                      [    0        0      ···  ann - λ  ].
Continuing in this manner,
        p(λ) = (a11 - λ)(a22 - λ)(a33 - λ) ··· (ann - λ),
and the eigenvalues are λ1 = a11, λ2 = a22, λ3 = a33, ..., and λn = ann.
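The conclusion is easy to check on any triangular example (a hypothetical matrix, assuming NumPy):

```python
import numpy as np

# Upper triangular: the eigenvalues should be exactly the diagonal entries
A = np.array([[2.0, -1.0, 7.0],
              [0.0, 3.0, 0.5],
              [0.0, 0.0, -4.0]])

print(np.sort(np.linalg.eigvals(A).real))   # [-4.  2.  3.]
print(np.sort(np.diag(A)))                  # [-4.  2.  3.]
```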
1.53. If
        V = [ -2  1 ]    and    D = [ -2  0 ]
            [  1  0 ]               [  0  3 ],
then
        V D V^{-1} = [ -2  1 ] [ -2  0 ] [ 0  1 ]
                     [  1  0 ] [  0  3 ] [ 1  2 ]
                   = [  4  3 ] [ 0  1 ]
                     [ -2  0 ] [ 1  2 ]
                   = [ 3  10 ]
                     [ 0  -2 ]
                   = A.
1.54. If
        A = [ 6   0 ]
            [ 8  -2 ],
then a computer reveals the following eigenvalue-eigenvector pairs.
        -2 ↔ (0, 1)^T   and   6 ↔ (1, 1)^T
Thus, the matrices
        V = [ 0  1 ]    and    D = [ -2  0 ]
            [ 1  1 ]               [  0  6 ]
diagonalize matrix A. That is, A = V D V^{-1}.
1.55. If
        A = [ -1  -2 ]
            [  4  -7 ],
then a computer reveals the following eigenvalue-eigenvector pairs.
        -5 ↔ (1, 2)^T   and   -3 ↔ (1, 1)^T
Thus, the matrices
        V = [ 1  1 ]    and    D = [ -5   0 ]
            [ 2  1 ]               [  0  -3 ]
diagonalize matrix A. That is, A = V D V^{-1}.
1.56. The matrix
        A = [  5  1 ]
            [ -1  3 ]
has a repeated eigenvalue λ = 4 but only 1 independent eigenvector v = (1, -1)^T.

Section 2. Planar Systems
2.1. The matrix
        A = [ 2  -6 ]
            [ 0  -1 ]
has the following eigenvalue-eigenvector pairs.
        λ1 = 2 ↔ (1, 0)^T   and   λ2 = -1 ↔ (2, 1)^T
Thus, the general solution is
        y(t) = C1 e^{2t} (1, 0)^T + C2 e^{-t} (2, 1)^T.
2.2. The matrix
        A = [ -1  6 ]
            [ -3  8 ]
has the following eigenvalue-eigenvector pairs.
        λ1 = 2 ↔ (2, 1)^T   and   λ2 = 5 ↔ (1, 1)^T
Thus, the general solution is
        y(t) = C1 e^{2t} (2, 1)^T + C2 e^{5t} (1, 1)^T.
2.3. The matrix
        A = [ -5   1 ]
            [ -2  -2 ]
has the following eigenvalue-eigenvector pairs.
        λ1 = -4 ↔ (1, 1)^T   and   λ2 = -3 ↔ (1, 2)^T
Thus, the general solution is
        y(t) = C1 e^{-4t} (1, 1)^T + C2 e^{-3t} (1, 2)^T.
2.4. The matrix
        A = [ -3  -6 ]
            [  0  -1 ]
has the following eigenvalue-eigenvector pairs.
        λ1 = -3 ↔ (1, 0)^T   and   λ2 = -1 ↔ (-3, 1)^T
Thus, the general solution is
        y(t) = C1 e^{-3t} (1, 0)^T + C2 e^{-t} (-3, 1)^T.
2.5. The matrix
        A = [  1  2 ]
            [ -1  4 ]
has the following eigenvalue-eigenvector pairs.
        λ1 = 2 ↔ (2, 1)^T   and   λ2 = 3 ↔ (1, 1)^T
Thus, the general solution is
        y(t) = C1 e^{2t} (2, 1)^T + C2 e^{3t} (1, 1)^T.
2.6. The matrix
        A = [ -1   1 ]
            [  1  -1 ]
has the following eigenvalue-eigenvector pairs.
        λ1 = 0 ↔ (1, 1)^T   and   λ2 = -2 ↔ (-1, 1)^T
Thus, the general solution is
        y(t) = C1 (1, 1)^T + C2 e^{-2t} (-1, 1)^T.
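Any of these general solutions can be verified by differentiating and comparing with Ay; here is a sketch for Exercise 2.5, with arbitrarily chosen constants (assuming NumPy is available):

```python
import numpy as np

# Exercise 2.5: y' = Ay with general solution C1 e^{2t}(2,1)^T + C2 e^{3t}(1,1)^T
A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])
v1, v2 = np.array([2.0, 1.0]), np.array([1.0, 1.0])
C1, C2, t = 1.3, -0.7, 0.9          # arbitrary constants and sample time

y = C1 * np.exp(2 * t) * v1 + C2 * np.exp(3 * t) * v2
yprime = 2 * C1 * np.exp(2 * t) * v1 + 3 * C2 * np.exp(3 * t) * v2

print(np.allclose(yprime, A @ y))   # True
```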
2.7. The system in Exercise 1 had general solution
        y(t) = C1 e^{2t} (1, 0)^T + C2 e^{-t} (2, 1)^T.
Thus, if y(0) = (0, 1)^T, then
        C1 (1, 0)^T + C2 (2, 1)^T = (0, 1)^T,   i.e.   [ 1  2 ] (C1, C2)^T = (0, 1)^T.
                                                       [ 0  1 ]
The augmented matrix reduces.
        [ 1  2 | 0 ]  →  [ 1  0 | -2 ]
        [ 0  1 | 1 ]     [ 0  1 |  1 ]
Therefore, C1 = -2 and C2 = 1, giving particular solution
        y(t) = -2 e^{2t} (1, 0)^T + e^{-t} (2, 1)^T.
2.8. The system in Exercise 2 had the general solution
        y(t) = C1 e^{2t} (2, 1)^T + C2 e^{5t} (1, 1)^T.
Thus, if y(0) = (1, -2)^T, then
        [ 2  1 ] (C1, C2)^T = (1, -2)^T.
        [ 1  1 ]
The augmented matrix reduces.
        [ 2  1 |  1 ]  →  [ 1  0 |  3 ]
        [ 1  1 | -2 ]     [ 0  1 | -5 ]
Thus, C1 = 3 and C2 = -5, giving particular solution
        y(t) = 3 e^{2t} (2, 1)^T - 5 e^{5t} (1, 1)^T.
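Each of these initial-value computations is a 2 × 2 linear solve; for instance, the constants in Exercise 2.7 (a NumPy sketch):

```python
import numpy as np

# Columns are y1(0) and y2(0) from Exercise 2.7; right side is y(0)
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])
y0 = np.array([0.0, 1.0])

C = np.linalg.solve(M, y0)
print(np.round(C, 6))   # C1 = -2, C2 = 1
```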
2.9. The system in Exercise 3 had general solution
        y(t) = C1 e^{-4t} (1, 1)^T + C2 e^{-3t} (1, 2)^T.
Thus, if y(0) = (0, -1)^T, then
        [ 1  1 ] (C1, C2)^T = (0, -1)^T.
        [ 1  2 ]
The augmented matrix reduces.
        [ 1  1 |  0 ]  →  [ 1  0 |  1 ]
        [ 1  2 | -1 ]     [ 0  1 | -1 ]
Therefore, C1 = 1 and C2 = -1, giving particular solution
        y(t) = e^{-4t} (1, 1)^T - e^{-3t} (1, 2)^T.
2.10. The system in Exercise 4 had the general solution
        y(t) = C1 e^{-3t} (1, 0)^T + C2 e^{-t} (-3, 1)^T.
Thus, if y(0) = (1, 1)^T, then
        [ 1  -3 ] (C1, C2)^T = (1, 1)^T.
        [ 0   1 ]
The augmented matrix reduces.
        [ 1  -3 | 1 ]  →  [ 1  0 | 4 ]
        [ 0   1 | 1 ]     [ 0  1 | 1 ]
Thus, C1 = 4 and C2 = 1, giving particular solution
        y(t) = 4 e^{-3t} (1, 0)^T + e^{-t} (-3, 1)^T.
2.11. The system in Exercise 5 had general solution
        y(t) = C1 e^{2t} (2, 1)^T + C2 e^{3t} (1, 1)^T.
Thus, if y(0) = (3, 2)^T, then
        [ 2  1 ] (C1, C2)^T = (3, 2)^T.
        [ 1  1 ]
The augmented matrix reduces.
        [ 2  1 | 3 ]  →  [ 1  0 | 1 ]
        [ 1  1 | 2 ]     [ 0  1 | 1 ]
Therefore, C1 = 1 and C2 = 1, giving particular solution
        y(t) = e^{2t} (2, 1)^T + e^{3t} (1, 1)^T.
2.12. The system in Exercise 6 had the general solution
        y(t) = C1 (1, 1)^T + C2 e^{-2t} (-1, 1)^T.
Thus, if y(0) = (1, 5)^T, then
        [ 1  -1 ] (C1, C2)^T = (1, 5)^T.
        [ 1   1 ]
The augmented matrix reduces.
        [ 1  -1 | 1 ]  →  [ 1  0 | 3 ]
        [ 1   1 | 5 ]     [ 0  1 | 2 ]
Thus, C1 = 3 and C2 = 2, giving particular solution
        y(t) = 3 (1, 1)^T + 2 e^{-2t} (-1, 1)^T.
2.13. If
        A = [ 1   1 - i ]      B = [ i   -3    ]      z = ( 1, 1 - i )^T,
            [ 2i  1 + i ],         [ 1   2 - i ],
then
        Az = ( 1 - 2i, 2 + 2i )^T,   so   conj(Az) = ( 1 + 2i, 2 - 2i )^T.
On the other hand,
        Ā z̄ = [  1    1 + i ] (   1   )  =  ( 1 + 2i )
              [ -2i   1 - i ] ( 1 + i )     ( 2 - 2i ).
Therefore, conj(Az) = Ā z̄. Next,
        AB = [   1      -2 - 3i ],   so   conj(AB) = [   1      -2 + 3i ]
             [ -1 + i    3 - 5i ]                    [ -1 - i    3 + 5i ].
On the other hand,
        Ā B̄ = [  1    1 + i ] [ -i   -3    ]  =  [   1      -2 + 3i ]
              [ -2i   1 - i ] [  1   2 + i ]     [ -1 - i    3 + 5i ].
Therefore, conj(AB) = Ā B̄.
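The conjugation identities used here (and proved in general in the next few exercises) can be checked directly with NumPy's complex arrays:

```python
import numpy as np

A = np.array([[1, 1 - 1j],
              [2j, 1 + 1j]])
B = np.array([[1j, -3],
              [1, 2 - 1j]])
z = np.array([1, 1 - 1j])

# conj(Az) = conj(A) conj(z)  and  conj(AB) = conj(A) conj(B)
print(np.allclose(np.conj(A @ z), np.conj(A) @ np.conj(z)))   # True
print(np.allclose(np.conj(A @ B), np.conj(A) @ np.conj(B)))   # True
```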
2.14. If z = (z1, z2, ..., zn)^T and w = (w1, w2, ..., wn)^T, then
        conj(z + w) = conj((z1 + w1, z2 + w2, ..., zn + wn)^T)
                    = (conj(z1 + w1), conj(z2 + w2), ..., conj(zn + wn))^T
                    = (z̄1 + w̄1, z̄2 + w̄2, ..., z̄n + w̄n)^T
                    = (z̄1, z̄2, ..., z̄n)^T + (w̄1, w̄2, ..., w̄n)^T
                    = z̄ + w̄.
2.15. Let α be a complex number and let z = (z1, z2, ..., zn)^T. Then
        conj(αz) = conj((αz1, αz2, ..., αzn)^T)
                 = (conj(αz1), conj(αz2), ..., conj(αzn))^T
                 = (ᾱ z̄1, ᾱ z̄2, ..., ᾱ z̄n)^T
                 = ᾱ (z̄1, z̄2, ..., z̄n)^T
                 = ᾱ z̄.
2.16. If A is n × n with real entries, with columns a1, a2, ..., an, and z = (z1, z2, ..., zn)^T, then
        conj(Az) = conj(z1 a1 + z2 a2 + ··· + zn an)
                 = z̄1 ā1 + z̄2 ā2 + ··· + z̄n ān
                 = z̄1 a1 + z̄2 a2 + ··· + z̄n an      (the columns aj are real)
                 = A z̄.
2.17. If A and B are m × n and n × p matrices, with possibly complex entries, and B = [b1, b2, ..., bp], then
        conj(AB) = conj([Ab1, Ab2, ..., Abp])
                 = [conj(Ab1), conj(Ab2), ..., conj(Abp)]
                 = [Ā b̄1, Ā b̄2, ..., Ā b̄p]
                 = Ā [b̄1, b̄2, ..., b̄p]
                 = Ā B̄.
2.18. If z(t) = x(t) + iy(t) with x and y real-valued, then
        conj(z′(t)) = conj(x′(t) + iy′(t))
                    = x′(t) - iy′(t)
                    = (x(t) - iy(t))′
                    = (z̄)′(t).
Thus conjugation and differentiation commute.
2.19. If z = x + iy with x and y real, then
        (1/2)(z + z̄) = (1/2)(x + iy + x - iy) = (1/2)(2x) = x.
Secondly,
        (1/(2i))(z - z̄) = (1/(2i))(x + iy - (x - iy)) = (1/(2i))(2iy) = y.
2.20. If z(t) = e^{2it} (1, 1 + i)^T, then
1
1+i
0
1
+i
= (cos 2t + i sin 2t)
1
1
1
0
0
1
+ sin 2t
+ i cos 2t
β sin 2t
= cos 2t
1
1
1
1 z(t) = (cos 2t + i sin 2t) 2.21. 2.22. . Therefore, Re(x(t)) = (cos 2t, cos 2t β sin 2t)T and Im(z(t)) = (sin 2t, cos 2t + sin 2t)T .
If z(t) = e(1+i)t (β1 + i, 2)T , then
β1 + i
z(t) = et eit
2
1
β1
t
= e (cos t + i sin t)
+i
0
2
β1
1
1
β1
+ i sin t
= et cos t
+ i cos t
β sin t
2
0
0
2
t β cos t β sin t
t cos t β sin t
=e
+ ie
.
2 cos t
2 sin t
Therefore, Re (z(t)) = et (β cos t β sin t, 2 cos t)T and Im (z(t)) = et (cos t β sin t, 2 sin t)T .
If z(t) = e3it (β1 β i, 2)T , then
β1
β1
+i
z(t) = (cos 3t + i sin 3t)
0
2
β1
β1
β1
β1
+ cos 3t
+ i sin 3t
β sin 3t
= cos 3t
0
2
0
2 9.2. Planar Systems
The real part of z(t) is β cos 3t + sin 3t
2 cos 3t y1 (t) =
and 3 sin 3t + 3 cos 3t
.
β6 sin 3t y1 (t) =
However,
3
β6 β cos 3t + sin 3t
2 cos 3t 3
β3 565 3 sin 3t + 3 cos 3t
β6 sin 3t = as well, so y1 is a solution of y = Ay. The imaginary part of z(t) is
β sin 3t β cos 3t
2 sin 3t y2 (t) =
and
y2 (t) =
However,
3
3
β6 β3 β3 cos 3t + 3 sin 3t
.
6 cos 3t β sin 3t β cos 3t
2 sin 3t = β3 cos 3t + 3 sin 3t
6 cos 3t as well, so y2 is a solution of y = Ay. Finally, because
        y1(0) = (-1, 2)^T   and   y2(0) = (-1, 0)^T
are independent, y1(t) and y2(t) are independent for all values of t and form a fundamental set of solutions.
2.23. If
        A = [ -4  -8 ]
            [  4   4 ],
then the characteristic polynomial of A is p(λ) = λ² + 16 and the eigenvalues are λ1 = 4i and λ2 = -4i. Trusting that
        A - (4i)I = [ -4 - 4i    -8     ]
                    [    4     4 - 4i   ]
is singular, examination of the second row shows that (-1 + i, 1)^T generates the nullspace of A - (4i)I. Thus, we have a complex solution which we must break into real and imaginary parts.
        z(t) = e^{4it} (-1 + i, 1)^T
             = (cos 4t + i sin 4t) [ (-1, 1)^T + i (1, 0)^T ]
             = ( -cos 4t - sin 4t, cos 4t )^T + i ( cos 4t - sin 4t, sin 4t )^T.
Therefore,
        y1(t) = ( -cos 4t - sin 4t, cos 4t )^T   and   y2(t) = ( cos 4t - sin 4t, sin 4t )^T
form a fundamental set of real solutions.
2.24.
If
β1
A=
4 y2 (t) = cos 4t β sin 4t
sin 4t β2
,
3 then the characteristic polynomial is p(Ξ») = Ξ»2 β 2Ξ» + 5 and the eigenvalues are 1 Β± 2i . Trusting that
A β (1 + 2i)I = β2 β 2i
4 β2
2 β 2i 566 Chapter 9. Linear Systems with Constant Coefο¬cients
is singular, examination of the ο¬rst row reveals the eigenvector v = (1, β1 β i)T , Thus,
z(t) = e(1+2i)t 1
β1 β i 1
0
+i
β1
β1
1
0
0
1
= et cos 2t
β sin 2t
+ iet cos 2t
+ sin 2t
β1
β1
β1
β1 = et (cos 2t + i sin 2t) . Therefore,
y1 (t) = et 2.25. cos 2t
β cos 2t + sin 2t y2 (t) = et and form a fundamental set of solutions.
If
A= β1
β5 sin 2t
β cos 2t β sin 2t 1
,
β5 then the characteristic polynomial of A is p(Ξ») = Ξ»2 + 6Ξ» + 10 and the eigenvalues are Ξ»1 = β3 + i and
Ξ»2 = β3 β i . Trusting that
2βi
β5 A β (β3 + i)I = 1
β2 β i is singular, examination of the ο¬rst row shows that (1, β2 + i)T generates the nullspace of A β (β3 + i)I .
Thus, we have a complex solution which we must break into real and imaginary parts.
z(t) = e(β3+i)t 1
β2 + i 1
0
+i
β2
1
1
0
0
1
cos t
β sin t
+ i cos t
+ i sin t
β2
1
1
β2
cos t
sin t
+ ieβ3t
β2 cos t β sin t
cos t β 2 sin t = eβ3t (cos t + i sin t)
= eβ3t
= eβ3t
Therefore,
y1 (t) = eβ3t 2.26. cos t
β2 cos t β sin t and y2 (t) = eβ3t sin t
cos t β 2 sin t form a fundamental set of real solutions.
The characteristic polynomial of
A= 0
β2 4
β4 is p(Ξ») = Ξ»2 β 4Ξ» + 8, which has complex roots Ξ» = β2 Β± 2i. For the eigenvalue Ξ» = 2 + 2i , we have the
eigenvector w = (β1 β i, 1)T . The corresponding exponential solution is
z(t) = e(β2+2i)t β1 β i
1 β1
β1
+i
0
1
β1
β1
β2 t
=e
β sin 2t Β·
cos 2t Β·
0
1
β1
β1
β2 t
+ ie
+ sin 2t Β·
cos 2t Β·
1
0 = eβ2t [cos 2t + i sin 2t ] . 9.2. Planar Systems 567 The real and imaginary parts of z,
y1 (t) = eβ2t
y2 (t) = eβ2t
2.27. are a fundamental set of solutions.
If
A= β cos 2t + sin 2t
cos 2t
β cos 2t β sin 2t
sin 2t
β1
β3 3
,
β1 then the characteristic polynomial of A is p(Ξ») = Ξ»2 + 2Ξ» + 10 and the eigenvalues are Ξ»1 = β1 + 3i and
Ξ»2 = β1 β 3i . Trusting that
β3i
3
A β (β1 + 3i)I =
β3 β3i 2.28. is singular, examination of the ο¬rst row shows that (1, i)T generates the nullspace of A β (β1 + 3i)I . Thus,
we have a complex solution which we must break into real and imaginary parts.
1
z(t) = e(β1+3i)t
i
1
0
βt
= e (cos 3t + i sin 3t)
+i
0
1
1
0
0
1
βt
=e
cos 3t
β sin 3t
+ i cos 3t
+ i sin 3t
0
1
1
0
cos 3t
sin 3t
+ ieβt
= e βt
β sin 3t
cos 3t
Therefore,
cos 3t
sin 3t
and y2 (t) = eβt
y1 (t) = eβt
β sin 3t
cos 3t
form a fundamental set of real solutions.
If
3 β6
,
A=
35
β
then the characteristic polynomial is p(Ξ») = Ξ»2 β 8Ξ» + 33 and the eigenvalues are 4 Β± 17i . Trusting that
3 β Ξ» β6
A β Ξ»I =
3
5βΞ»
β
is singular, examination of the ο¬rst row reveals the eigenvector v = (6, 3 β Ξ»)T . Substituting Ξ» = 4 + 17i
β
give v = (6, β1 β 17i)T . Thus,
β
6β
z(t) = e(4+ 17i)t
β1 β 17i
β
β
6
0
β
= e4t (cos 17t + i sin 17t)
+i
β1
β 17
β
β
6
0
β
β sin 17t
= e4t cos 17t
β1
β 17
β
β
0
6
β
+ ie4t cos 17t
+ sin 17t
β 17
β1
Therefore,
β
4t
β 6 cos β17t β
and
y1 (t) = e
β cos 17t +β 17 sin 17t
6 sin 17t β
β
β
y2 (t) = e4t
β 17 cos 17t β sin 17t
form a fundamental set of solutions. 568
2.29. Chapter 9. Linear Systems with Constant Coefο¬cients
The fundamental solutions found in Exercise 23 allows the formation of the general solution
y(t) = C1
If y(0) = (0, 2)T , then β cos 4t β sin 4t
cos 4t + C2 cos 4t β sin 4t
.
sin 4t 0
β1
1
= C1
+ C2
.
2
1
0 The augmented matrix reduces.
β1
1 1
0 0
1
β
2
0 0
1 2
.
2 Thus, C1 = 2 and C2 = 2 and
β cos 4t β sin 4t
cos 4t β sin 4t
+2
cos 4t
sin 4t
β4 sin 4t
=
.
\[ y(t) = \begin{pmatrix} 2\cos 4t + 2\sin 4t \\ \dots \end{pmatrix} \]
2.30. The fundamental solutions found in Exercise 24 allow the formation of the general solution
\[ y(t) = C_1 e^{t}\begin{pmatrix} \cos 2t \\ -\cos 2t + \sin 2t \end{pmatrix} + C_2 e^{t}\begin{pmatrix} \sin 2t \\ -\cos 2t - \sin 2t \end{pmatrix}. \]
If \( y(0) = (0, 1)^T \), then
\[ \begin{pmatrix} 0 \\ 1 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ -1 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ -1 \end{pmatrix}. \]
The augmented matrix reduces,
\[ \begin{pmatrix} 1 & 0 & 0 \\ -1 & -1 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \end{pmatrix}. \]
Thus, \( C_1 = 0 \) and \( C_2 = -1 \), and
\[ y(t) = -e^{t}\begin{pmatrix} \sin 2t \\ -\cos 2t - \sin 2t \end{pmatrix}. \]
2.31. The fundamental solutions found in Exercise 25 allow the formation of the general solution
\[ y(t) = C_1 e^{-3t}\begin{pmatrix} \cos t \\ -2\cos t - \sin t \end{pmatrix} + C_2 e^{-3t}\begin{pmatrix} \sin t \\ \cos t - 2\sin t \end{pmatrix}. \]
If \( y(0) = (1, -5)^T \), then
\[ \begin{pmatrix} 1 \\ -5 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
The augmented matrix reduces,
\[ \begin{pmatrix} 1 & 0 & 1 \\ -2 & 1 & -5 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & -3 \end{pmatrix}. \]
Thus, \( C_1 = 1 \) and \( C_2 = -3 \), and
\[ y(t) = e^{-3t}\begin{pmatrix} \cos t \\ -2\cos t - \sin t \end{pmatrix} - 3e^{-3t}\begin{pmatrix} \sin t \\ \cos t - 2\sin t \end{pmatrix} = e^{-3t}\begin{pmatrix} \cos t - 3\sin t \\ -5\cos t + 5\sin t \end{pmatrix}. \]
2.32. A fundamental set of solutions was found in Exercise 26, so the solution has the form \( y(t) = C_1 y_1(t) + C_2 y_2(t) \), where
\[ y_1(t) = e^{-2t}\begin{pmatrix} -\cos 2t + \sin 2t \\ \cos 2t \end{pmatrix}, \qquad y_2(t) = e^{-2t}\begin{pmatrix} -\cos 2t - \sin 2t \\ \sin 2t \end{pmatrix}. \]
9.2. Planar Systems
At \( t = 0 \) we have
\[ \begin{pmatrix} -1 \\ 2 \end{pmatrix} = y(0) = C_1\begin{pmatrix} -1 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} -1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}. \]
This system is readily solved, giving \( C_1 = 2 \) and \( C_2 = -1 \). Hence the solution is
\[ y(t) = 2y_1(t) - y_2(t) = e^{-2t}\begin{pmatrix} -\cos 2t + 3\sin 2t \\ 2\cos 2t - \sin 2t \end{pmatrix}. \]
2.33. The fundamental solutions found in Exercise 27 allow the formation of the general solution
\[ y(t) = C_1 e^{-t}\begin{pmatrix} \cos 3t \\ -\sin 3t \end{pmatrix} + C_2 e^{-t}\begin{pmatrix} \sin 3t \\ \cos 3t \end{pmatrix}. \]
If \( y(0) = (3, 2)^T \), then
\[ \begin{pmatrix} 3 \\ 2 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
Thus, \( C_1 = 3 \) and \( C_2 = 2 \), and
\[ y(t) = 3e^{-t}\begin{pmatrix} \cos 3t \\ -\sin 3t \end{pmatrix} + 2e^{-t}\begin{pmatrix} \sin 3t \\ \cos 3t \end{pmatrix} = e^{-t}\begin{pmatrix} 3\cos 3t + 2\sin 3t \\ -3\sin 3t + 2\cos 3t \end{pmatrix}. \]
2.34. The fundamental solutions found in Exercise 28 allow the formation of the general solution
\[ y(t) = C_1 e^{4t}\begin{pmatrix} 6\cos\sqrt{17}\,t \\ -\cos\sqrt{17}\,t + \sqrt{17}\sin\sqrt{17}\,t \end{pmatrix} + C_2 e^{4t}\begin{pmatrix} 6\sin\sqrt{17}\,t \\ -\sqrt{17}\cos\sqrt{17}\,t - \sin\sqrt{17}\,t \end{pmatrix}. \]
If \( y(0) = (1, 3)^T \), then
\[ \begin{pmatrix} 1 \\ 3 \end{pmatrix} = C_1\begin{pmatrix} 6 \\ -1 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ -\sqrt{17} \end{pmatrix}. \]
The augmented matrix reduces,
\[ \begin{pmatrix} 6 & 0 & 1 \\ -1 & -\sqrt{17} & 3 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1/6 \\ 0 & 1 & -19\sqrt{17}/102 \end{pmatrix}. \]
Thus, \( C_1 = 1/6 \) and \( C_2 = -19\sqrt{17}/102 \), and
\[ y(t) = \frac{1}{6}e^{4t}\begin{pmatrix} 6\cos\sqrt{17}\,t \\ -\cos\sqrt{17}\,t + \sqrt{17}\sin\sqrt{17}\,t \end{pmatrix} - \frac{19\sqrt{17}}{102}e^{4t}\begin{pmatrix} 6\sin\sqrt{17}\,t \\ -\sqrt{17}\cos\sqrt{17}\,t - \sin\sqrt{17}\,t \end{pmatrix}. \]
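In each of the exercises above, matching the constants to an initial condition is a 2 × 2 linear solve whose coefficient matrix has columns \( y_1(0) \) and \( y_2(0) \). A minimal sketch, assuming NumPy is available and reusing the numbers from Exercise 2.30 (the variable names are mine, not the text's):

```python
import numpy as np

# Columns are y1(0) and y2(0) for Exercise 2.30; the right-hand side is y(0).
V = np.array([[1.0, 0.0],
              [-1.0, -1.0]])
y0 = np.array([0.0, 1.0])

# Row reduction of the augmented matrix [V | y0] and np.linalg.solve agree:
# C1 = 0, C2 = -1.
C = np.linalg.solve(V, y0)
```

The same two lines reproduce every "the augmented matrix reduces" step in this section once the columns are swapped in.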
2.35. (a) Let \( A \) be a \( 2 \times 2 \) matrix with one eigenvalue \( \lambda \) of multiplicity two. If the eigenspace of \( \lambda \) has dimension two, then there are two independent eigenvectors \( v_1 \) and \( v_2 \) that must span all of \( \mathbb{R}^2 \). If \( (x, y)^T \) is a vector in \( \mathbb{R}^2 \), then
\[ \begin{pmatrix} x \\ y \end{pmatrix} = a v_1 + b v_2 \]
for some scalars \( a \) and \( b \). Because the eigenspace is a subspace, it must be closed under addition and scalar multiplication. That is, any linear combination of two eigenvectors must also be an eigenvector. Therefore, \( (x, y)^T \) is an eigenvector.
(b) Suppose that
\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. \]
Chapter 9. Linear Systems with Constant Coefficients
Because all vectors in \( \mathbb{R}^2 \) are eigenvectors, \( e_1 \) is an eigenvector, so
\[ Ae_1 = \lambda e_1, \qquad \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \lambda\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} a \\ c \end{pmatrix} = \begin{pmatrix} \lambda \\ 0 \end{pmatrix}. \]
Secondly, \( e_2 \) is an eigenvector, so
\[ Ae_2 = \lambda e_2, \qquad \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \lambda\begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} b \\ d \end{pmatrix} = \begin{pmatrix} 0 \\ \lambda \end{pmatrix}. \]
Thus,
\[ A = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}. \]
2.36. \( A \) is a real \( 2 \times 2 \) matrix with one eigenvalue \( \lambda_1 \) of multiplicity 2. If
\[ y(t) = e^{\lambda_1 t}\left[ v + t(A - \lambda_1 I)v \right], \]
then
\[ y(0) = e^{0}\left[ v + 0(A - \lambda_1 I)v \right] = v. \]
Moreover, using the product rule,
\[ y'(t) = \lambda_1 e^{\lambda_1 t}\left[ v + t(A - \lambda_1 I)v \right] + e^{\lambda_1 t}(A - \lambda_1 I)v = e^{\lambda_1 t}\left[ \lambda_1 v + t\lambda_1(A - \lambda_1 I)v + (A - \lambda_1 I)v \right] = e^{\lambda_1 t}\left[ Av + t\lambda_1(A - \lambda_1 I)v \right]. \]
However, because \( A \) has a repeated eigenvalue \( \lambda_1 \), its characteristic polynomial has the form \( p(\lambda) = (\lambda - \lambda_1)^2 \). By the Cayley–Hamilton Theorem, \( (A - \lambda_1 I)^2 = 0 \), and
\[ (A - \lambda_1 I)(A - \lambda_1 I) = 0, \qquad A(A - \lambda_1 I) - \lambda_1(A - \lambda_1 I) = 0, \qquad A(A - \lambda_1 I) = \lambda_1(A - \lambda_1 I). \]
Thus, we can write
\[ Ay(t) = Ae^{\lambda_1 t}\left[ v + t(A - \lambda_1 I)v \right] = e^{\lambda_1 t}\left[ Av + tA(A - \lambda_1 I)v \right] = e^{\lambda_1 t}\left[ Av + t\lambda_1(A - \lambda_1 I)v \right]. \]
Therefore, \( y'(t) = Ay(t) \), and \( y(t) = e^{\lambda_1 t}[v + t(A - \lambda_1 I)v] \) is a solution of \( y' = Ay \) with \( y(0) = v \).
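The Cayley–Hamilton identity used in the argument above is easy to confirm numerically. A small check, assuming NumPy is available, using the matrix of Exercise 38 (which has the single eigenvalue \( \lambda = -2 \) of multiplicity 2) as a concrete instance:

```python
import numpy as np

# For a 2x2 matrix with one eigenvalue lam of multiplicity 2, Cayley-Hamilton
# gives (A - lam*I)^2 = 0, hence A(A - lam*I) = lam*(A - lam*I).
A = np.array([[-3.0, 1.0],
              [-1.0, -1.0]])   # matrix of Exercise 38
lam = -2.0

N = A - lam * np.eye(2)        # the nilpotent part A - lam*I
square = N @ N                 # should be the zero matrix
```

The second identity, \( A(A - \lambda_1 I) = \lambda_1(A - \lambda_1 I) \), follows immediately from `square` being zero.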
2.37. The matrix
\[ A = \begin{pmatrix} -2 & 0 \\ 0 & -2 \end{pmatrix} \]
has a single eigenvalue \( \lambda = -2 \). However,
\[ A - (-2)I = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \]
so all nonzero vectors are eigenvectors. Choose \( e_1 = (1, 0)^T \) and \( e_2 = (0, 1)^T \) as eigenvectors. Then
\[ y(t) = C_1 e^{-2t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2 e^{-2t}\begin{pmatrix} 0 \\ 1 \end{pmatrix} \]
is the general solution.
2.38. The matrix
\[ A = \begin{pmatrix} -3 & 1 \\ -1 & -1 \end{pmatrix} \]
has one eigenvalue, \( \lambda = -2 \). However, the nullspace of
\[ A + 2I = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix} \]
is generated by a single eigenvector, \( v_1 = (1, 1)^T \), with corresponding solution
\[ y_1(t) = e^{-2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}. \]
To find another solution, we need to find a vector \( v_2 \) which satisfies \( (A + 2I)v_2 = v_1 \). Choose \( w = (1, 0)^T \), which is independent of \( v_1 \), and note that
\[ (A + 2I)w = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1 \\ -1 \end{pmatrix} = -v_1. \]
Thus, choose \( v_2 = -w = (-1, 0)^T \). Our second solution is
\[ y_2(t) = e^{-2t}(v_2 + t v_1) = e^{-2t}\left[\begin{pmatrix} -1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ 1 \end{pmatrix}\right]. \]
Thus, the general solution can be written
\[ y(t) = C_1 e^{-2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2 e^{-2t}\left[\begin{pmatrix} -1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ 1 \end{pmatrix}\right] = e^{-2t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} -1 \\ 0 \end{pmatrix}\right]. \]
2.39. The matrix
\[ A = \begin{pmatrix} 3 & -1 \\ 1 & 1 \end{pmatrix} \]
has one eigenvalue, \( \lambda = 2 \). However, the nullspace of
\[ A - 2I = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} \]
is generated by the single eigenvector, \( v_1 = (1, 1)^T \), with corresponding solution
\[ y_1(t) = e^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}. \]
To find another solution, we need to find a vector \( v_2 \) which satisfies \( (A - 2I)v_2 = v_1 \). Choose \( w = (1, 0)^T \), which is independent of \( v_1 \), and note that
\[ (A - 2I)w = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} = v_1. \]
Thus, choose \( v_2 = w = (1, 0)^T \). Our second solution is
\[ y_2(t) = e^{2t}(v_2 + t v_1) = e^{2t}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ 1 \end{pmatrix}\right]. \]
Thus, the general solution can be written
\[ y(t) = C_1 e^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2 e^{2t}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ 1 \end{pmatrix}\right] = e^{2t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right]. \]
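Because \( A - \lambda I \) is singular, the generalized-eigenvector equation \( (A - \lambda I)v_2 = v_1 \) cannot be handed to an ordinary linear solver; least squares returns one of its infinitely many solutions. A sketch assuming NumPy, with the data of Exercise 39 (\( \lambda = 2 \), \( v_1 = (1, 1)^T \)):

```python
import numpy as np

# (A - lam*I) is singular, so use least squares to pick a particular solution
# of (A - lam*I) v2 = v1.  Data from Exercise 39.
A = np.array([[3.0, -1.0],
              [1.0, 1.0]])
lam = 2.0
v1 = np.array([1.0, 1.0])

N = A - lam * np.eye(2)
v2, *_ = np.linalg.lstsq(N, v1, rcond=None)   # minimum-norm particular solution
residual = N @ v2 - v1                        # should vanish
```

The least-squares \( v_2 \) generally differs from the text's choice \( (1, 0)^T \), but only by a multiple of the eigenvector \( v_1 \), and any such choice produces a valid second solution.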
2.40. The matrix
\[ A = \begin{pmatrix} -2 & -1 \\ 4 & 2 \end{pmatrix} \]
has one eigenvalue, \( \lambda = 0 \). However, the nullspace of
\[ A + 0I = \begin{pmatrix} -2 & -1 \\ 4 & 2 \end{pmatrix} \]
is generated by a single eigenvector, \( v_1 = (1, -2)^T \), with corresponding solution
\[ y_1(t) = e^{0t}\begin{pmatrix} 1 \\ -2 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}. \]
To find another solution, we need to find a vector \( v_2 \) which satisfies \( (A + 0I)v_2 = v_1 \). Choose \( w = (1, 0)^T \), which is independent of \( v_1 \), and note that
\[ (A + 0I)w = \begin{pmatrix} -2 & -1 \\ 4 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -2 \\ 4 \end{pmatrix} = -2v_1. \]
Thus, choose \( v_2 = -(1/2)w = (-1/2, 0)^T \). Our second solution is
\[ y_2(t) = e^{0t}(v_2 + t v_1) = \begin{pmatrix} -1/2 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ -2 \end{pmatrix}. \]
Thus, the general solution can be written
\[ y(t) = C_1\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2\left[\begin{pmatrix} -1/2 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ -2 \end{pmatrix}\right] = (C_1 + C_2 t)\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2\begin{pmatrix} -1/2 \\ 0 \end{pmatrix}. \]
2.41. The matrix
\[ A = \begin{pmatrix} -2 & 1 \\ -9 & 4 \end{pmatrix} \]
has one eigenvalue, \( \lambda = 1 \). However, the nullspace of
\[ A - I = \begin{pmatrix} -3 & 1 \\ -9 & 3 \end{pmatrix} \]
is generated by the single eigenvector, \( v_1 = (1, 3)^T \), with corresponding solution
\[ y_1(t) = e^{t}\begin{pmatrix} 1 \\ 3 \end{pmatrix}. \]
To find another solution, we need to find a vector \( v_2 \) which satisfies \( (A - I)v_2 = v_1 \). Choose \( w = (1, 0)^T \), which is independent of \( v_1 \), and note that
\[ (A - I)w = \begin{pmatrix} -3 & 1 \\ -9 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -3 \\ -9 \end{pmatrix} = -3\begin{pmatrix} 1 \\ 3 \end{pmatrix} = -3v_1. \]
Thus, choose \( v_2 = -(1/3)w = (-1/3, 0)^T \). Our second solution is
\[ y_2(t) = e^{t}(v_2 + t v_1) = e^{t}\left[\begin{pmatrix} -1/3 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ 3 \end{pmatrix}\right]. \]
Thus, the general solution can be written
\[ y(t) = C_1 e^{t}\begin{pmatrix} 1 \\ 3 \end{pmatrix} + C_2 e^{t}\left[\begin{pmatrix} -1/3 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ 3 \end{pmatrix}\right] = e^{t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ 3 \end{pmatrix} + C_2\begin{pmatrix} -1/3 \\ 0 \end{pmatrix}\right]. \]
2.42. The matrix
\[ A = \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix} \]
has one eigenvalue, \( \lambda = 3 \). However, the nullspace of
\[ A - 3I = \begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix} \]
is generated by a single eigenvector, \( v_1 = (1, -2)^T \), with corresponding solution
\[ y_1(t) = e^{3t}\begin{pmatrix} 1 \\ -2 \end{pmatrix}. \]
To find another solution, we need to find a vector \( v_2 \) which satisfies \( (A - 3I)v_2 = v_1 \). Choose \( w = (1, 0)^T \), which is independent of \( v_1 \), and note that
\[ (A - 3I)w = \begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ -4 \end{pmatrix} = 2v_1. \]
Thus, choose \( v_2 = (1/2)w = (1/2, 0)^T \). Our second solution is
\[ y_2(t) = e^{3t}(v_2 + t v_1) = e^{3t}\left[\begin{pmatrix} 1/2 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ -2 \end{pmatrix}\right]. \]
Thus, the general solution can be written
\[ y(t) = C_1 e^{3t}\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2 e^{3t}\left[\begin{pmatrix} 1/2 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ -2 \end{pmatrix}\right] = e^{3t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2\begin{pmatrix} 1/2 \\ 0 \end{pmatrix}\right]. \]
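The second solution built this way can be checked directly against the differential equation. A sketch assuming NumPy, with the data of Exercise 42 (\( \lambda = 3 \), \( v_1 = (1, -2)^T \), \( v_2 = (1/2, 0)^T \)):

```python
import numpy as np

# Check that y(t) = e^(lam*t) (v2 + t*v1) satisfies y' = A y, where
# (A - lam*I) v1 = 0 and (A - lam*I) v2 = v1.  Data from Exercise 42.
A = np.array([[5.0, 1.0],
              [-4.0, 1.0]])
lam = 3.0
v1 = np.array([1.0, -2.0])
v2 = np.array([0.5, 0.0])

def y(t):
    return np.exp(lam * t) * (v2 + t * v1)

def yprime(t):
    # product rule: lam * y(t) + e^(lam*t) * v1
    return lam * y(t) + np.exp(lam * t) * v1

ok = all(np.allclose(yprime(t), A @ y(t)) for t in (0.0, 0.3, 1.0))
```

Agreement at several sample times is exactly what the two identities \( (A - 3I)v_1 = 0 \) and \( (A - 3I)v_2 = v_1 \) guarantee.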
2.43. From Exercise 37,
\[ y(t) = C_1 e^{-2t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2 e^{-2t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
If \( y(0) = (3, -2)^T \), then
\[ \begin{pmatrix} 3 \\ -2 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}, \]
and \( C_1 = 3 \) and \( C_2 = -2 \). Thus, the particular solution is
\[ y(t) = 3e^{-2t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} - 2e^{-2t}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = e^{-2t}\begin{pmatrix} 3 \\ -2 \end{pmatrix}. \]
2.44. From Exercise 38,
\[ y(t) = e^{-2t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} -1 \\ 0 \end{pmatrix}\right]. \]
If \( y(0) = (0, -3)^T \), then
\[ \begin{pmatrix} 0 \\ -3 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} -1 \\ 0 \end{pmatrix}. \]
The augmented matrix reduces,
\[ \begin{pmatrix} 1 & -1 & 0 \\ 1 & 0 & -3 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -3 \\ 0 & 1 & -3 \end{pmatrix}, \]
and \( C_1 = -3 \) and \( C_2 = -3 \). Thus, the particular solution is
\[ y(t) = e^{-2t}\left[(-3 - 3t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} - 3\begin{pmatrix} -1 \\ 0 \end{pmatrix}\right] = e^{-2t}\begin{pmatrix} -3t \\ -3 - 3t \end{pmatrix}. \]
2.45. From Exercise 39,
\[ y(t) = e^{2t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right]. \]
If \( y(0) = (2, -1)^T \), then
\[ \begin{pmatrix} 2 \\ -1 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} 1 \\ 0 \end{pmatrix}. \]
The augmented matrix reduces,
\[ \begin{pmatrix} 1 & 1 & 2 \\ 1 & 0 & -1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 3 \end{pmatrix}, \]
and \( C_1 = -1 \) and \( C_2 = 3 \). Thus, the particular solution is
\[ y(t) = e^{2t}\left[(-1 + 3t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} + 3\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right] = e^{2t}\begin{pmatrix} 2 + 3t \\ -1 + 3t \end{pmatrix}. \]
2.46. From Exercise 40,
\[ y(t) = (C_1 + C_2 t)\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2\begin{pmatrix} -1/2 \\ 0 \end{pmatrix}. \]
If \( y(0) = (1, 1)^T \), then
\[ \begin{pmatrix} 1 \\ 1 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2\begin{pmatrix} -1/2 \\ 0 \end{pmatrix}. \]
The augmented matrix reduces,
\[ \begin{pmatrix} 1 & -1/2 & 1 \\ -2 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -1/2 \\ 0 & 1 & -3 \end{pmatrix}, \]
and \( C_1 = -1/2 \) and \( C_2 = -3 \). Thus, the particular solution is
\[ y(t) = \left(-\frac{1}{2} - 3t\right)\begin{pmatrix} 1 \\ -2 \end{pmatrix} - 3\begin{pmatrix} -1/2 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 - 3t \\ 1 + 6t \end{pmatrix}. \]
2.47. From Exercise 41,
\[ y(t) = e^{t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ 3 \end{pmatrix} + C_2\begin{pmatrix} -1/3 \\ 0 \end{pmatrix}\right]. \]
If \( y(0) = (5, 3)^T \), then
\[ \begin{pmatrix} 5 \\ 3 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ 3 \end{pmatrix} + C_2\begin{pmatrix} -1/3 \\ 0 \end{pmatrix}. \]
The augmented matrix reduces,
\[ \begin{pmatrix} 1 & -1/3 & 5 \\ 3 & 0 & 3 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & -12 \end{pmatrix}, \]
and \( C_1 = 1 \) and \( C_2 = -12 \). Thus, the particular solution is
\[ y(t) = e^{t}\left[(1 - 12t)\begin{pmatrix} 1 \\ 3 \end{pmatrix} - 12\begin{pmatrix} -1/3 \\ 0 \end{pmatrix}\right] = e^{t}\begin{pmatrix} 5 - 12t \\ 3 - 36t \end{pmatrix}. \]
2.48. From Exercise 42,
\[ y(t) = e^{3t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2\begin{pmatrix} 1/2 \\ 0 \end{pmatrix}\right]. \]
If \( y(0) = (0, 2)^T \), then
\[ \begin{pmatrix} 0 \\ 2 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2\begin{pmatrix} 1/2 \\ 0 \end{pmatrix}. \]
The augmented matrix reduces,
\[ \begin{pmatrix} 1 & 1/2 & 0 \\ -2 & 0 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \end{pmatrix}, \]
and \( C_1 = -1 \) and \( C_2 = 2 \). Thus, the particular solution is
\[ y(t) = e^{3t}\left[(-1 + 2t)\begin{pmatrix} 1 \\ -2 \end{pmatrix} + 2\begin{pmatrix} 1/2 \\ 0 \end{pmatrix}\right] = e^{3t}\begin{pmatrix} 2t \\ 2 - 4t \end{pmatrix}. \]
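A closed-form answer like that of Exercise 2.45 can be cross-checked by integrating \( y' = Ay \) numerically. A sketch assuming NumPy, with a hand-rolled RK4 stepper (the helper names are mine) and the matrix of Exercise 39:

```python
import numpy as np

# Compare the particular solution of Exercise 45, y(t) = e^(2t)(2+3t, -1+3t)^T,
# against a fourth-order Runge-Kutta integration of y' = A y from y(0) = (2, -1).
A = np.array([[3.0, -1.0],
              [1.0, 1.0]])

def exact(t):
    return np.exp(2.0 * t) * np.array([2.0 + 3.0 * t, -1.0 + 3.0 * t])

def rk4(y, t_end, h=1.0e-3):
    for _ in range(int(round(t_end / h))):
        k1 = A @ y
        k2 = A @ (y + 0.5 * h * k1)
        k3 = A @ (y + 0.5 * h * k2)
        k4 = A @ (y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

y_num = rk4(np.array([2.0, -1.0]), 1.0)
```

With a step of \( 10^{-3} \) the numerical and closed-form values at \( t = 1 \) agree to many digits, which is a useful sanity check on both the constants and the algebra.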
2.49. The matrix
\[ A = \begin{pmatrix} 2 & 4 \\ -1 & 6 \end{pmatrix} \]
has characteristic polynomial \( p(\lambda) = \lambda^2 - 8\lambda + 16 \) and one eigenvalue, \( \lambda = 4 \). Moreover, the nullspace of
\[ A - 4I = \begin{pmatrix} -2 & 4 \\ -1 & 2 \end{pmatrix} \]
is generated by the single eigenvector, \( v_1 = (2, 1)^T \), with corresponding solution
\[ y_1(t) = e^{4t}\begin{pmatrix} 2 \\ 1 \end{pmatrix}. \]
To find another solution, we need to find a vector \( v_2 \) which satisfies \( (A - 4I)v_2 = v_1 \). Choose \( w = (1, 0)^T \), which is independent of \( v_1 \), and note that
\[ (A - 4I)w = \begin{pmatrix} -2 & 4 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = -\begin{pmatrix} 2 \\ 1 \end{pmatrix} = -v_1. \]
Thus, choose \( v_2 = -w = (-1, 0)^T \). Our second solution is
\[ y_2(t) = e^{4t}(v_2 + t v_1) = e^{4t}\left[\begin{pmatrix} -1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 2 \\ 1 \end{pmatrix}\right]. \]
Thus, the general solution can be written
\[ y(t) = C_1 e^{4t}\begin{pmatrix} 2 \\ 1 \end{pmatrix} + C_2 e^{4t}\left[\begin{pmatrix} -1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 2 \\ 1 \end{pmatrix}\right] = e^{4t}\left[(C_1 + C_2 t)\begin{pmatrix} 2 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} -1 \\ 0 \end{pmatrix}\right]. \]
2.50. The matrix
\[ A = \begin{pmatrix} -8 & -10 \\ 5 & 7 \end{pmatrix} \]
has characteristic polynomial \( p(\lambda) = \lambda^2 + \lambda - 6 \) with eigenvalues \( \lambda_1 = -3 \) and \( \lambda_2 = 2 \). The nullspace of
\[ A + 3I = \begin{pmatrix} -5 & -10 \\ 5 & 10 \end{pmatrix} \]
is generated by the single eigenvector, \( v_1 = (-2, 1)^T \), with corresponding solution
\[ y_1(t) = e^{-3t}\begin{pmatrix} -2 \\ 1 \end{pmatrix}. \]
The nullspace of
\[ A - 2I = \begin{pmatrix} -10 & -10 \\ 5 & 5 \end{pmatrix} \]
is generated by the single eigenvector, \( v_2 = (1, -1)^T \), with corresponding solution
\[ y_2(t) = e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Thus, the general solution can be written
\[ y(t) = C_1 e^{-3t}\begin{pmatrix} -2 \\ 1 \end{pmatrix} + C_2 e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
2.51. The matrix
\[ A = \begin{pmatrix} 5 & 12 \\ -4 & -9 \end{pmatrix} \]
has characteristic polynomial \( p(\lambda) = \lambda^2 + 4\lambda + 3 \) and eigenvalues \( \lambda_1 = -1 \) and \( \lambda_2 = -3 \). The nullspace of
\[ A - (-1)I = \begin{pmatrix} 6 & 12 \\ -4 & -8 \end{pmatrix} \]
is generated by the single eigenvector, \( v_1 = (-2, 1)^T \), with corresponding solution
\[ y_1(t) = e^{-t}\begin{pmatrix} -2 \\ 1 \end{pmatrix}. \]
The nullspace of
\[ A - (-3)I = \begin{pmatrix} 8 & 12 \\ -4 & -6 \end{pmatrix} \]
is generated by the single eigenvector, \( v_2 = (-3/2, 1)^T \), with corresponding solution
\[ y_2(t) = e^{-3t}\begin{pmatrix} -3/2 \\ 1 \end{pmatrix}. \]
Thus, the general solution can be written
\[ y(t) = C_1 e^{-t}\begin{pmatrix} -2 \\ 1 \end{pmatrix} + C_2 e^{-3t}\begin{pmatrix} -3/2 \\ 1 \end{pmatrix}. \]
2.52. The matrix
\[ A = \begin{pmatrix} -6 & 1 \\ 0 & -6 \end{pmatrix} \]
has repeated eigenvalue \( \lambda = -6 \), but the nullspace of
\[ A + 6I = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \]
is generated by the single eigenvector, \( v_1 = (1, 0)^T \), with corresponding solution
\[ y_1(t) = e^{-6t}\begin{pmatrix} 1 \\ 0 \end{pmatrix}. \]
We need a vector \( v_2 \) satisfying \( (A + 6I)v_2 = v_1 \). Choose \( w = (0, 1)^T \), which is independent of \( v_1 \), and note that
\[ (A + 6I)w = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = v_1. \]
Thus, choose \( v_2 = (0, 1)^T \), giving a second solution
\[ y_2(t) = e^{-6t}(v_2 + t v_1) = e^{-6t}\left[\begin{pmatrix} 0 \\ 1 \end{pmatrix} + t\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right]. \]
Thus, the general solution can be written
\[ y(t) = C_1 e^{-6t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2 e^{-6t}\left[\begin{pmatrix} 0 \\ 1 \end{pmatrix} + t\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right] = e^{-6t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right]. \]
2.53. The matrix
\[ A = \begin{pmatrix} -4 & -5 \\ 2 & 2 \end{pmatrix} \]
has characteristic polynomial \( p(\lambda) = \lambda^2 + 2\lambda + 2 \) and eigenvalues \( \lambda_1 = -1 + i \) and \( \lambda_2 = -1 - i \). The nullspace of
\[ A - (-1 + i)I = \begin{pmatrix} -3 - i & -5 \\ 2 & 3 - i \end{pmatrix} \]
is generated by the single eigenvector, \( v_1 = (5, -3 - i)^T \), with corresponding solution
\[ z(t) = e^{(-1+i)t}\begin{pmatrix} 5 \\ -3 - i \end{pmatrix}. \]
Breaking this solution into real and imaginary parts,
\[ z(t) = e^{-t}(\cos t + i\sin t)\left[\begin{pmatrix} 5 \\ -3 \end{pmatrix} + i\begin{pmatrix} 0 \\ -1 \end{pmatrix}\right] = e^{-t}\begin{pmatrix} 5\cos t \\ -3\cos t + \sin t \end{pmatrix} + ie^{-t}\begin{pmatrix} 5\sin t \\ -\cos t - 3\sin t \end{pmatrix}. \]
Thus, the general solution is
\[ y(t) = C_1 e^{-t}\begin{pmatrix} 5\cos t \\ -3\cos t + \sin t \end{pmatrix} + C_2 e^{-t}\begin{pmatrix} 5\sin t \\ -\cos t - 3\sin t \end{pmatrix}. \]
2.54. The matrix
\[ A = \begin{pmatrix} -6 & 4 \\ -8 & 2 \end{pmatrix} \]
has characteristic polynomial \( p(\lambda) = \lambda^2 + 4\lambda + 20 \) with eigenvalues \( -2 \pm 4i \). Because
\[ A - (-2 + 4i)I = \begin{pmatrix} -4 - 4i & 4 \\ -8 & 4 - 4i \end{pmatrix} \]
is singular, examination of the first row shows that \( v = (1, 1 + i)^T \) is an eigenvector. Thus,
\[ z(t) = e^{(-2+4i)t}\begin{pmatrix} 1 \\ 1 + i \end{pmatrix} = e^{-2t}(\cos 4t + i\sin 4t)\left[\begin{pmatrix} 1 \\ 1 \end{pmatrix} + i\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right] = e^{-2t}\begin{pmatrix} \cos 4t \\ \cos 4t - \sin 4t \end{pmatrix} + ie^{-2t}\begin{pmatrix} \sin 4t \\ \cos 4t + \sin 4t \end{pmatrix} \]
is a complex solution. The real and imaginary parts of \( z \) form a fundamental set of solutions that lead to the general solution
\[ y(t) = e^{-2t}\left[C_1\begin{pmatrix} \cos 4t \\ \cos 4t - \sin 4t \end{pmatrix} + C_2\begin{pmatrix} \sin 4t \\ \cos 4t + \sin 4t \end{pmatrix}\right]. \]
2.55. The matrix
\[ A = \begin{pmatrix} -10 & 4 \\ -12 & 4 \end{pmatrix} \]
has characteristic polynomial \( p(\lambda) = \lambda^2 + 6\lambda + 8 \) and eigenvalues \( \lambda_1 = -4 \) and \( \lambda_2 = -2 \). The nullspace of
\[ A - (-4)I = \begin{pmatrix} -6 & 4 \\ -12 & 8 \end{pmatrix} \]
is generated by the single eigenvector, \( v_1 = (2, 3)^T \), with corresponding solution
\[ y_1(t) = e^{-4t}\begin{pmatrix} 2 \\ 3 \end{pmatrix}. \]
The nullspace of
\[ A - (-2)I = \begin{pmatrix} -8 & 4 \\ -12 & 6 \end{pmatrix} \]
is generated by a single eigenvector, \( v_2 = (1, 2)^T \), with corresponding solution
\[ y_2(t) = e^{-2t}\begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
Thus, the general solution is
\[ y(t) = C_1 e^{-4t}\begin{pmatrix} 2 \\ 3 \end{pmatrix} + C_2 e^{-2t}\begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
2.56. The matrix
\[ A = \begin{pmatrix} -1 & 5 \\ -5 & -1 \end{pmatrix} \]
has characteristic polynomial \( p(\lambda) = \lambda^2 + 2\lambda + 26 \) with eigenvalues \( -1 \pm 5i \). Because
\[ A - (-1 + 5i)I = \begin{pmatrix} -5i & 5 \\ -5 & -5i \end{pmatrix} \]
is singular, examination of the first row shows that \( v = (1, i)^T \) is an eigenvector. Thus,
\[ z(t) = e^{(-1+5i)t}\begin{pmatrix} 1 \\ i \end{pmatrix} = e^{-t}(\cos 5t + i\sin 5t)\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix} + i\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right] = e^{-t}\begin{pmatrix} \cos 5t \\ -\sin 5t \end{pmatrix} + ie^{-t}\begin{pmatrix} \sin 5t \\ \cos 5t \end{pmatrix} \]
is a complex solution. The real and imaginary parts of \( z \) form a fundamental set of solutions that lead to the general solution
\[ y(t) = e^{-t}\left[C_1\begin{pmatrix} \cos 5t \\ -\sin 5t \end{pmatrix} + C_2\begin{pmatrix} \sin 5t \\ \cos 5t \end{pmatrix}\right]. \]
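The claim that the real and imaginary parts of a complex solution are themselves real solutions can be confirmed numerically. A sketch assuming NumPy, with the data of Exercise 56 (\( \lambda = -1 + 5i \), \( v = (1, i)^T \)):

```python
import numpy as np

# z(t) = e^(lam*t) v satisfies z' = lam*z = A z, so taking real parts gives
# (Re z)' = Re(lam*z) = A (Re z), and similarly for imaginary parts.
A = np.array([[-1.0, 5.0],
              [-5.0, -1.0]])
lam = -1.0 + 5.0j
v = np.array([1.0, 1.0j])

def z(t):
    return np.exp(lam * t) * v

samples = (0.0, 0.4, 1.0)
ok_re = all(np.allclose((lam * z(t)).real, A @ z(t).real) for t in samples)
ok_im = all(np.allclose((lam * z(t)).imag, A @ z(t).imag) for t in samples)
```

Since \( A \) is real, \( A \) commutes with taking real and imaginary parts, which is the entire content of the check.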
2.57. From Exercise 49, the general solution is
\[ y(t) = e^{4t}\left[(C_1 + C_2 t)\begin{pmatrix} 2 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} -1 \\ 0 \end{pmatrix}\right]. \]
Because \( y(0) = (3, 1)^T \),
\[ \begin{pmatrix} 3 \\ 1 \end{pmatrix} = C_1\begin{pmatrix} 2 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} -1 \\ 0 \end{pmatrix}. \]
Reduce the augmented matrix,
\[ \begin{pmatrix} 2 & -1 & 3 \\ 1 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \end{pmatrix}. \]
Thus, \( C_1 = 1 \) and \( C_2 = -1 \), and the particular solution is
\[ y(t) = e^{4t}\left[(1 - t)\begin{pmatrix} 2 \\ 1 \end{pmatrix} - \begin{pmatrix} -1 \\ 0 \end{pmatrix}\right] = e^{4t}\begin{pmatrix} 3 - 2t \\ 1 - t \end{pmatrix}. \]
2.58. From Exercise 50, the general solution is
\[ y(t) = C_1 e^{-3t}\begin{pmatrix} -2 \\ 1 \end{pmatrix} + C_2 e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Because \( y(0) = (3, 1)^T \),
\[ \begin{pmatrix} 3 \\ 1 \end{pmatrix} = C_1\begin{pmatrix} -2 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Reduce the augmented matrix,
\[ \begin{pmatrix} -2 & 1 & 3 \\ 1 & -1 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -4 \\ 0 & 1 & -5 \end{pmatrix}. \]
Thus, \( C_1 = -4 \) and \( C_2 = -5 \), and the particular solution is
\[ y(t) = -4e^{-3t}\begin{pmatrix} -2 \\ 1 \end{pmatrix} - 5e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 8e^{-3t} - 5e^{2t} \\ -4e^{-3t} + 5e^{2t} \end{pmatrix}. \]
2.59. From Exercise 51, the general solution is
\[ y(t) = C_1 e^{-t}\begin{pmatrix} -2 \\ 1 \end{pmatrix} + C_2 e^{-3t}\begin{pmatrix} -3/2 \\ 1 \end{pmatrix}. \]
Because \( y(0) = (1, 0)^T \),
\[ \begin{pmatrix} 1 \\ 0 \end{pmatrix} = C_1\begin{pmatrix} -2 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} -3/2 \\ 1 \end{pmatrix}. \]
Reduce the augmented matrix,
\[ \begin{pmatrix} -2 & -3/2 & 1 \\ 1 & 1 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -2 \\ 0 & 1 & 2 \end{pmatrix}. \]
Thus, \( C_1 = -2 \) and \( C_2 = 2 \), and the particular solution is
\[ y(t) = -2e^{-t}\begin{pmatrix} -2 \\ 1 \end{pmatrix} + 2e^{-3t}\begin{pmatrix} -3/2 \\ 1 \end{pmatrix} = \begin{pmatrix} 4e^{-t} - 3e^{-3t} \\ -2e^{-t} + 2e^{-3t} \end{pmatrix}. \]
2.60. From Exercise 52, the general solution is
\[ y(t) = e^{-6t}\left[(C_1 + C_2 t)\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right]. \]
Because \( y(0) = (1, 0)^T \),
\[ \begin{pmatrix} 1 \\ 0 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
It is easy to see that \( C_1 = 1 \) and \( C_2 = 0 \), and the particular solution is
\[ y(t) = e^{-6t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} e^{-6t} \\ 0 \end{pmatrix}. \]
2.61. From Exercise 53, the general solution is
\[ y(t) = C_1 e^{-t}\begin{pmatrix} 5\cos t \\ -3\cos t + \sin t \end{pmatrix} + C_2 e^{-t}\begin{pmatrix} 5\sin t \\ -\cos t - 3\sin t \end{pmatrix}. \]
Because \( y(0) = (-3, 2)^T \),
\[ \begin{pmatrix} -3 \\ 2 \end{pmatrix} = C_1\begin{pmatrix} 5 \\ -3 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ -1 \end{pmatrix}. \]
Reduce the augmented matrix,
\[ \begin{pmatrix} 5 & 0 & -3 \\ -3 & -1 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -3/5 \\ 0 & 1 & -1/5 \end{pmatrix}. \]
Thus, \( C_1 = -3/5 \) and \( C_2 = -1/5 \), and the particular solution is
\[ y(t) = -\frac{3}{5}e^{-t}\begin{pmatrix} 5\cos t \\ -3\cos t + \sin t \end{pmatrix} - \frac{1}{5}e^{-t}\begin{pmatrix} 5\sin t \\ -\cos t - 3\sin t \end{pmatrix} = e^{-t}\begin{pmatrix} -3\cos t - \sin t \\ 2\cos t \end{pmatrix}. \]
2.62. From Exercise 54, the general solution is
\[ y(t) = e^{-2t}\left[C_1\begin{pmatrix} \cos 4t \\ \cos 4t - \sin 4t \end{pmatrix} + C_2\begin{pmatrix} \sin 4t \\ \cos 4t + \sin 4t \end{pmatrix}\right]. \]
Because \( y(0) = (4, 0)^T \),
\[ \begin{pmatrix} 4 \\ 0 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
Reduce the augmented matrix,
\[ \begin{pmatrix} 1 & 0 & 4 \\ 1 & 1 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 4 \\ 0 & 1 & -4 \end{pmatrix}. \]
Thus, \( C_1 = 4 \) and \( C_2 = -4 \), and the particular solution is
\[ y(t) = e^{-2t}\left[4\begin{pmatrix} \cos 4t \\ \cos 4t - \sin 4t \end{pmatrix} - 4\begin{pmatrix} \sin 4t \\ \cos 4t + \sin 4t \end{pmatrix}\right] = e^{-2t}\begin{pmatrix} 4\cos 4t - 4\sin 4t \\ -8\sin 4t \end{pmatrix}. \]
2.63. From Exercise 55, the general solution is
\[ y(t) = C_1 e^{-4t}\begin{pmatrix} 2 \\ 3 \end{pmatrix} + C_2 e^{-2t}\begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
Because \( y(0) = (2, 1)^T \),
\[ \begin{pmatrix} 2 \\ 1 \end{pmatrix} = C_1\begin{pmatrix} 2 \\ 3 \end{pmatrix} + C_2\begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
Reduce the augmented matrix,
\[ \begin{pmatrix} 2 & 1 & 2 \\ 3 & 2 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 3 \\ 0 & 1 & -4 \end{pmatrix}. \]
Thus, \( C_1 = 3 \) and \( C_2 = -4 \), and the particular solution is
\[ y(t) = 3e^{-4t}\begin{pmatrix} 2 \\ 3 \end{pmatrix} - 4e^{-2t}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 6e^{-4t} - 4e^{-2t} \\ 9e^{-4t} - 8e^{-2t} \end{pmatrix}. \]
2.64. From Exercise 56, the general solution is
\[ y(t) = e^{-t}\left[C_1\begin{pmatrix} \cos 5t \\ -\sin 5t \end{pmatrix} + C_2\begin{pmatrix} \sin 5t \\ \cos 5t \end{pmatrix}\right]. \]
Because \( y(0) = (5, 5)^T \),
\[ \begin{pmatrix} 5 \\ 5 \end{pmatrix} = C_1\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
Thus, \( C_1 = 5 \) and \( C_2 = 5 \), and the particular solution is
\[ y(t) = e^{-t}\left[5\begin{pmatrix} \cos 5t \\ -\sin 5t \end{pmatrix} + 5\begin{pmatrix} \sin 5t \\ \cos 5t \end{pmatrix}\right] = 5e^{-t}\begin{pmatrix} \cos 5t + \sin 5t \\ \cos 5t - \sin 5t \end{pmatrix}. \]
2.65. (a) Let
\[ (A - \lambda I)^2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \]
and assume that \( (A - \lambda I)^2 v = 0 \) for all \( v \) in \( \mathbb{R}^2 \). Then
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} a \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} b \\ d \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]
Thus,
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \]
so \( (A - \lambda I)^2 = 0 \).
(b) Let \( v_1 \) be an eigenvector of \( A \) associated with the eigenvalue \( \lambda \). Note that this means that \( Av_1 - \lambda v_1 = 0 \). Let \( v = \alpha v_1 \) be a multiple of the eigenvector \( v_1 \). Then
\[ (A - \lambda I)^2 v = (A - \lambda I)^2(\alpha v_1) = \alpha(A - \lambda I)^2 v_1 = \alpha(A - \lambda I)(Av_1 - \lambda v_1) = \alpha(A - \lambda I)0 = \alpha 0 = 0. \]
(c) Now choose \( v \) in \( \mathbb{R}^2 \) such that \( v \) is not a multiple of the eigenvector \( v_1 \). Note that this means that \( v \) is not an eigenvector associated with the eigenvalue \( \lambda \). The set \( B = \{v, v_1\} \) is independent with dimension two. Therefore, it must span all of \( \mathbb{R}^2 \) and is a basis for \( \mathbb{R}^2 \).
(d) Set \( w = (A - \lambda I)v \). Note that this means that \( w \) is nonzero, for otherwise \( v \) would be an eigenvector associated with the eigenvalue \( \lambda \). In part (c), we saw that \( B = \{v, v_1\} \) was a basis for \( \mathbb{R}^2 \). Thus, \( B \) spans \( \mathbb{R}^2 \) and we can find \( a \) and \( b \) such that
\[ w = a v_1 + b v. \]
(e) From (d), \( w = (A - \lambda I)v \) and \( w = a v_1 + b v \). Thus,
\[ (A - \lambda I)w = (A - \lambda I)(a v_1 + b v) = a(A - \lambda I)v_1 + b(A - \lambda I)v = 0 + b w = b w. \]
Hence,
\[ (A - \lambda I)w = bw, \qquad Aw - \lambda w = bw, \qquad Aw = (\lambda + b)w. \]
Thus, \( w \), being nonzero, is an eigenvector of \( A \) with eigenvalue \( \lambda + b \). But \( \lambda \) is the only eigenvalue, so \( b \) must equal zero and \( w \) must be a multiple of \( v_1 \).
(f) Finally, because \( b = 0 \) and \( (A - \lambda I)v = w \),
\[ (A - \lambda I)^2 v = (A - \lambda I)(A - \lambda I)v = (A - \lambda I)w = bw = 0. \]
Consequently, whether \( v \) is a multiple of \( v_1 \) or not, \( (A - \lambda I)^2 v = 0 \). Since this is true for any arbitrary \( v \) in \( \mathbb{R}^2 \), by part (a), \( (A - \lambda I)^2 = 0 \).
Section 3. Phase Plane Portraits
3.1. If
\[ A = \begin{pmatrix} -10 & -25 \\ 5 & 10 \end{pmatrix}, \]
then \( T = 0 \) and \( D = 25 \), leading to
\[ p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 25. \]
On the other hand,
\[ p(\lambda) = \det(A - \lambda I) = \det\begin{pmatrix} -10 - \lambda & -25 \\ 5 & 10 - \lambda \end{pmatrix} = (-10 - \lambda)(10 - \lambda) + 125 = \lambda^2 + 25. \]
3.2. If
\[ A = \begin{pmatrix} 0 & 5 \\ -1 & 4 \end{pmatrix}, \]
then \( T = 4 \) and \( D = 5 \), leading to
\[ p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 - 4\lambda + 5. \]
On the other hand,
\[ p(\lambda) = \det(A - \lambda I) = \det\begin{pmatrix} -\lambda & 5 \\ -1 & 4 - \lambda \end{pmatrix} = -\lambda(4 - \lambda) + 5 = \lambda^2 - 4\lambda + 5. \]
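The shortcut used in these two exercises is easy to automate: for a 2 × 2 matrix, \( p(\lambda) = \lambda^2 - T\lambda + D \), so the roots of that quadratic must coincide with the eigenvalues. A sketch assuming NumPy, with the matrix of Exercise 3.2:

```python
import numpy as np

# p(lam) = lam^2 - T*lam + D with T = tr(A) and D = det(A); its roots should
# match np.linalg.eigvals.  Instance: Exercise 3.2, eigenvalues 2 +/- i.
A = np.array([[0.0, 5.0],
              [-1.0, 4.0]])
T = np.trace(A)
D = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # 2x2 determinant, computed exactly

roots = np.roots([1.0, -T, D])              # roots of lam^2 - T*lam + D
eigs = np.linalg.eigvals(A)
```

Sorting both arrays and comparing confirms that the trace-determinant form of the characteristic polynomial loses nothing.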
3.3. A hand sketch shows the solution along the half-line through \( (1, 1)^T \), with the points \( 1.2(1, 1)^T \), \( 1.2e^{0.6}(1, 1)^T \), and \( 1.2e^{1.2}(1, 1)^T \) marked as the solution moves away from the origin.
3.4. A hand sketch shows the solution along the half-line through \( (-2, 1)^T \), with the points \( -0.8(-2, 1)^T \), \( -0.8e^{0.6}(-2, 1)^T \), and \( -0.8e^{1.2}(-2, 1)^T \) marked.
3.5. A hand sketch shows the solution along the half-line through \( (4, 4)^T \), with the points \( 0.8(4, 4)^T \), \( 0.8e^{-0.6}(4, 4)^T \), and \( 0.8e^{-1.2}(4, 4)^T \) marked as the solution decays toward the origin.
3.6. A hand sketch shows the solution along the half-line through \( (4, -4)^T \), with the points \( -1.2(4, -4)^T \), \( -1.2e^{-0.6}(4, -4)^T \), and \( -1.2e^{-1.2}(4, -4)^T \) marked.
3.7. A hand sketch of the phase plane.
3.8. A hand sketch of the phase plane.
3.9. A hand sketch of the phase plane.
3.10. Both eigenvalues are negative, so the equilibrium point at the origin is a sink. Solutions dive toward the origin tangent to the slow exponential solution, \( e^{-t}(2, 1)^T \). As solutions move backward in time, they eventually parallel the fast exponential solution, \( e^{-2t}(-1, 1)^T \).
3.11. Both eigenvalues are positive, so the equilibrium point at the origin is a source. Solutions emanate from the origin tangent to the slow exponential solution, \( e^{t}(-1, 2)^T \), eventually paralleling the fast exponential solution, \( e^{2t}(3, -1)^T \).
3.12. One eigenvalue is positive, the other negative, so the equilibrium point at the origin is a saddle. As \( t \to +\infty \), solutions parallel the exponential solution \( e^{t}(1, 1)^T \). As \( t \to -\infty \), solutions parallel the exponential solution \( e^{-2t}(1, -1)^T \).
3.13. Both eigenvalues are negative, so the equilibrium point at the origin is a sink. Solutions dive toward the origin tangent to the slow exponential solution, \( e^{-t}(1, 2)^T \). As solutions move backward in time, they eventually parallel the fast exponential solution, \( e^{-3t}(-4, 1)^T \).
3.14. One eigenvalue is positive, the other negative, so the equilibrium point at the origin is a saddle. As \( t \to +\infty \), solutions parallel the exponential solution \( e^{2t}(-1, 4)^T \). As \( t \to -\infty \), solutions parallel the exponential solution \( e^{-t}(-5, 2)^T \).
3.15. Both eigenvalues are positive, so the equilibrium point at the origin is a source. Solutions emanate from the origin tangent to the slow exponential solution, \( e^{t}(1, 5)^T \), eventually paralleling the fast exponential solution, \( e^{3t}(4, 1)^T \).
3.16. The matrix
\[ A = \begin{pmatrix} -4 & 8 \\ -4 & 4 \end{pmatrix} \]
has trace \( T = 0 \) and determinant \( D = 16 \). Thus, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 16 \), which produces eigenvalues \( \lambda_1 = -4i \) and \( \lambda_2 = 4i \). Because the real part of these eigenvalues is zero, the equilibrium point at the origin is a center. At \( (1, 0) \),
\[ \begin{pmatrix} -4 & 8 \\ -4 & 4 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -4 \\ -4 \end{pmatrix}. \]
Thus, the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.17. The matrix
\[ A = \begin{pmatrix} 0 & 3 \\ -3 & 0 \end{pmatrix} \]
has trace \( T = 0 \) and determinant \( D = 9 \). Thus, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 9 \), which produces eigenvalues \( \lambda_1 = 3i \) and \( \lambda_2 = -3i \). Because the real part of these eigenvalues is zero, the equilibrium point at the origin is a center. At \( (1, 0) \),
\[ \begin{pmatrix} 0 & 3 \\ -3 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ -3 \end{pmatrix}. \]
Thus, the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.18. The matrix
\[ A = \begin{pmatrix} 2 & 2 \\ -4 & -2 \end{pmatrix} \]
has trace \( T = 0 \) and determinant \( D = 4 \). Thus, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 4 \), which produces eigenvalues \( \lambda_1 = 2i \) and \( \lambda_2 = -2i \). Because the real part of these eigenvalues is zero, the equilibrium point at the origin is a center. At \( (1, 0) \),
\[ \begin{pmatrix} 2 & 2 \\ -4 & -2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ -4 \end{pmatrix}. \]
Thus, the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.19. The matrix
\[ A = \begin{pmatrix} 0 & 1 \\ -4 & 0 \end{pmatrix} \]
has trace \( T = 0 \) and determinant \( D = 4 \). Thus, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 4 \), which produces eigenvalues \( \lambda_1 = 2i \) and \( \lambda_2 = -2i \). Because the real part of these eigenvalues is zero, the equilibrium point at the origin is a center. At \( (1, 0) \),
\[ \begin{pmatrix} 0 & 1 \\ -4 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ -4 \end{pmatrix}. \]
Thus, the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.20. The matrix
\[ A = \begin{pmatrix} -2 & 2 \\ -1 & 0 \end{pmatrix} \]
has trace \( T = -2 \) and determinant \( D = 2 \). Thus, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 2\lambda + 2 \), which produces eigenvalues \( \lambda_1 = -1 + i \) and \( \lambda_2 = -1 - i \). Because the real part of the eigenvalues is negative, the equilibrium point at the origin is a spiral sink. At \( (1, 0) \),
\[ \begin{pmatrix} -2 & 2 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -2 \\ -1 \end{pmatrix}, \]
so the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.21. The matrix
\[ A = \begin{pmatrix} -1 & 1 \\ -5 & 3 \end{pmatrix} \]
has trace \( T = 2 \) and determinant \( D = 2 \). Thus, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 - 2\lambda + 2 \), which produces eigenvalues \( \lambda_1 = 1 + i \) and \( \lambda_2 = 1 - i \). Because the real part of the eigenvalues is positive, the equilibrium point at the origin is a spiral source. At \( (1, 0) \),
\[ \begin{pmatrix} -1 & 1 \\ -5 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1 \\ -5 \end{pmatrix}, \]
so the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.22. The matrix
\[ A = \begin{pmatrix} 7 & -10 \\ 4 & -5 \end{pmatrix} \]
has trace \( T = 2 \) and determinant \( D = 5 \). Thus, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 - 2\lambda + 5 \), which produces eigenvalues \( \lambda_1 = 1 + 2i \) and \( \lambda_2 = 1 - 2i \). Because the real part of the eigenvalues is positive, the equilibrium point at the origin is a spiral source. At \( (1, 0) \),
\[ \begin{pmatrix} 7 & -10 \\ 4 & -5 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 7 \\ 4 \end{pmatrix}, \]
so the motion is counterclockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.23. The matrix
\[ A = \begin{pmatrix} -3 & 2 \\ -4 & 1 \end{pmatrix} \]
has trace \( T = -2 \) and determinant \( D = 5 \). Thus, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 2\lambda + 5 \), which produces eigenvalues \( \lambda_1 = -1 + 2i \) and \( \lambda_2 = -1 - 2i \). Because the real part of the eigenvalues is negative, the equilibrium point at the origin is a spiral sink. At \( (1, 0) \),
\[ \begin{pmatrix} -3 & 2 \\ -4 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -3 \\ -4 \end{pmatrix}, \]
so the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
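The rotation test used in every center and spiral exercise above reduces to one sign. A sketch assuming NumPy (the helper name is mine): the second component of \( A(1, 0)^T \) tells whether the flow crosses the positive \( x \)-axis moving down (clockwise) or up (counterclockwise).

```python
import numpy as np

# Rotation sense of a center/spiral from the velocity at (1, 0).
def rotation(A):
    return "clockwise" if (A @ np.array([1.0, 0.0]))[1] < 0 else "counterclockwise"

A_3_23 = np.array([[-3.0, 2.0], [-4.0, 1.0]])    # Exercise 3.23, a spiral sink
A_3_22 = np.array([[7.0, -10.0], [4.0, -5.0]])   # Exercise 3.22, a spiral source
```

This mirrors the hand computations: \( A(1, 0)^T = (-3, -4)^T \) gives clockwise motion for Exercise 3.23, while \( (7, 4)^T \) gives counterclockwise motion for Exercise 3.22.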
3.24. If
\[ A = \begin{pmatrix} 8 & 20 \\ -4 & -8 \end{pmatrix}, \]
then the trace is \( T = 0 \) and the determinant is \( D = 16 \). Further, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 16 \), which produces eigenvalues \( \lambda_1 = 4i \) and \( \lambda_2 = -4i \). Therefore, the equilibrium point at the origin is a center. At \( (1, 0) \),
\[ \begin{pmatrix} 8 & 20 \\ -4 & -8 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 8 \\ -4 \end{pmatrix}, \]
so the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.25. If
\[ A = \begin{pmatrix} -16 & 9 \\ -18 & 11 \end{pmatrix}, \]
then the trace is \( T = -5 \) and the determinant is \( D = -14 < 0 \). Hence, the equilibrium point at the origin is a saddle. Further, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 5\lambda - 14 \), which produces eigenvalues \( \lambda_1 = -7 \) and \( \lambda_2 = 2 \). Because
\[ A + 7I = \begin{pmatrix} -9 & 9 \\ -18 & 18 \end{pmatrix}, \]
\( v_1 = (1, 1)^T \), leading to the exponential solution \( e^{-7t}(1, 1)^T \). Because
\[ A - 2I = \begin{pmatrix} -18 & 9 \\ -18 & 9 \end{pmatrix}, \]
\( v_2 = (1, 2)^T \), leading to the exponential solution \( e^{2t}(1, 2)^T \). Thus, the general solution is
\[ y(t) = C_1 e^{-7t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2 e^{2t}\begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
Solutions approach the half-line generated by \( C_2(1, 2)^T \) as they move forward in time, but they approach the half-line generated by \( C_1(1, 1)^T \) as they move backward in time. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.26. If
\[ A = \begin{pmatrix} 2 & -4 \\ 8 & -6 \end{pmatrix}, \]
then the trace is \( T = -4 \) and the determinant is \( D = 20 \). Further, \( T^2 - 4D = (-4)^2 - 4(20) = -64 < 0 \), so the equilibrium point at the origin is a spiral sink. At \( (1, 0) \),
\[ \begin{pmatrix} 2 & -4 \\ 8 & -6 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ 8 \end{pmatrix}, \]
so the motion is counterclockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.27. If
\[ A = \begin{pmatrix} 8 & 3 \\ -6 & -1 \end{pmatrix}, \]
then the trace is \( T = 7 \) and the determinant is \( D = 10 > 0 \). Further, \( T^2 - 4D = (7)^2 - 4(10) = 9 > 0 \), so the equilibrium point at the origin is a nodal source. The characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 - 7\lambda + 10 \), which produces eigenvalues \( \lambda_1 = 2 \) and \( \lambda_2 = 5 \). Because
\[ A - 2I = \begin{pmatrix} 6 & 3 \\ -6 & -3 \end{pmatrix}, \]
\( v_1 = (1, -2)^T \), leading to the exponential solution \( e^{2t}(1, -2)^T \). Because
\[ A - 5I = \begin{pmatrix} 3 & 3 \\ -6 & -6 \end{pmatrix}, \]
\( v_2 = (1, -1)^T \), leading to the exponential solution \( e^{5t}(1, -1)^T \). Thus, the general solution is
\[ y(t) = C_1 e^{2t}\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2 e^{5t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Solutions emanate from the source tangent to the "slow" half-line solution generated by \( C_1(1, -2)^T \) and eventually parallel the "fast" half-line generated by \( C_2(1, -1)^T \) as they move forward in time. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.28. If
\[ A = \begin{pmatrix} -11 & -5 \\ 10 & 4 \end{pmatrix}, \]
then the trace is \( T = -7 \) and the determinant is \( D = 6 \). Further, \( T^2 - 4D = (-7)^2 - 4(6) = 25 > 0 \), so the equilibrium point at the origin is a nodal sink. The characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 7\lambda + 6 \), which produces eigenvalues \( \lambda_1 = -1 \) and \( \lambda_2 = -6 \). Because
\[ A + I = \begin{pmatrix} -10 & -5 \\ 10 & 5 \end{pmatrix}, \]
\( v_1 = (1, -2)^T \), leading to the exponential solution \( e^{-t}(1, -2)^T \). Because
\[ A + 6I = \begin{pmatrix} -5 & -5 \\ 10 & 10 \end{pmatrix}, \]
\( v_2 = (1, -1)^T \), leading to the exponential solution \( e^{-6t}(1, -1)^T \). Thus, the general solution is
\[ y(t) = C_1 e^{-t}\begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2 e^{-6t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Solutions approach the origin tangent to the "slow" half-line solution generated by \( C_1(1, -2)^T \). As time moves backward, solutions eventually parallel the half-line generated by \( C_2(1, -1)^T \), the "fast" solution. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.29. If
\[ A = \begin{pmatrix} 6 & -5 \\ 10 & -4 \end{pmatrix}, \]
then the trace is \( T = 2 \) and the determinant is \( D = 26 > 0 \). Further, \( T^2 - 4D = (2)^2 - 4(26) = -100 < 0 \), so the equilibrium point at the origin is a spiral source. Further,
\[ \begin{pmatrix} 6 & -5 \\ 10 & -4 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 6 \\ 10 \end{pmatrix}, \]
so the motion is counterclockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.30. If
\[ A = \begin{pmatrix} -7 & 10 \\ -5 & 8 \end{pmatrix}, \]
then the trace is \( T = 1 \) and the determinant is \( D = -6 \), so the origin is a saddle point. Further, the characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 - \lambda - 6 \), which produces eigenvalues \( \lambda_1 = -2 \) and \( \lambda_2 = 3 \). Because
\[ A + 2I = \begin{pmatrix} -5 & 10 \\ -5 & 10 \end{pmatrix}, \]
\( v_1 = (2, 1)^T \), leading to the exponential solution \( e^{-2t}(2, 1)^T \). Because
\[ A - 3I = \begin{pmatrix} -10 & 10 \\ -5 & 5 \end{pmatrix}, \]
\( v_2 = (1, 1)^T \), leading to the exponential solution \( e^{3t}(1, 1)^T \). Thus, the general solution is
\[ y(t) = C_1 e^{-2t}\begin{pmatrix} 2 \\ 1 \end{pmatrix} + C_2 e^{3t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}. \]
Solutions approach the half-line generated by \( C_2(1, 1)^T \) as they move forward in time, but they approach the half-line generated by \( C_1(2, 1)^T \) as they move backward in time. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.31. If
\[ A = \begin{pmatrix} 4 & 3 \\ -15 & -8 \end{pmatrix}, \]
then the trace is \( T = -4 \) and the determinant is \( D = 13 > 0 \). Further, \( T^2 - 4D = (-4)^2 - 4(13) = -36 < 0 \), so the equilibrium point at the origin is a spiral sink. Further,
\[ \begin{pmatrix} 4 & 3 \\ -15 & -8 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 4 \\ -15 \end{pmatrix}, \]
so the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.32. If
\[ A = \begin{pmatrix} 3 & 2 \\ -4 & -1 \end{pmatrix}, \]
then the trace is \( T = 2 \), the determinant is \( D = 5 \), and the discriminant is \( T^2 - 4D = (2)^2 - 4(5) = -16 < 0 \). Thus, the origin is a spiral source. At \( (1, 0) \),
\[ \begin{pmatrix} 3 & 2 \\ -4 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 3 \\ -4 \end{pmatrix}, \]
so the motion is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.33. If
\[ A = \begin{pmatrix} -5 & 2 \\ -6 & 2 \end{pmatrix}, \]
then the trace is \( T = -3 \) and the determinant is \( D = 2 > 0 \). Further, \( T^2 - 4D = (-3)^2 - 4(2) = 1 > 0 \), so the equilibrium point at the origin is a nodal sink. The characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 + 3\lambda + 2 \), which produces eigenvalues \( \lambda_1 = -1 \) and \( \lambda_2 = -2 \). Because
\[ A + I = \begin{pmatrix} -4 & 2 \\ -6 & 3 \end{pmatrix}, \]
\( v_1 = (1, 2)^T \), leading to the exponential solution \( e^{-t}(1, 2)^T \). Because
\[ A + 2I = \begin{pmatrix} -3 & 2 \\ -6 & 4 \end{pmatrix}, \]
\( v_2 = (2, 3)^T \), leading to the exponential solution \( e^{-2t}(2, 3)^T \). Thus, the general solution is
\[ y(t) = C_1 e^{-t}\begin{pmatrix} 1 \\ 2 \end{pmatrix} + C_2 e^{-2t}\begin{pmatrix} 2 \\ 3 \end{pmatrix}. \]
Solutions sink into the origin tangent to the "slow" half-line solution generated by \( C_1(1, 2)^T \) and eventually parallel the "fast" half-line generated by \( C_2(2, 3)^T \) as they move backward in time. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.34. If
\[ A = \begin{pmatrix} -4 & 10 \\ -2 & 4 \end{pmatrix}, \]
then the trace is \( T = 0 \) and the determinant is \( D = 4 \), so the origin is a center. At \( (1, 0) \),
\[ \begin{pmatrix} -4 & 10 \\ -2 & 4 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -4 \\ -2 \end{pmatrix}, \]
so the rotation is clockwise. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.35. If
\[ A = \begin{pmatrix} -2 & -6 \\ 4 & 8 \end{pmatrix}, \]
then the trace is \( T = 6 \) and the determinant is \( D = 8 > 0 \). Further, \( T^2 - 4D = (6)^2 - 4(8) = 4 > 0 \), so the equilibrium point at the origin is a nodal source. The characteristic polynomial is \( p(\lambda) = \lambda^2 - T\lambda + D = \lambda^2 - 6\lambda + 8 \), which produces eigenvalues \( \lambda_1 = 2 \) and \( \lambda_2 = 4 \). Because
\[ A - 2I = \begin{pmatrix} -4 & -6 \\ 4 & 6 \end{pmatrix}, \]
\( v_1 = (3, -2)^T \), leading to the exponential solution \( e^{2t}(3, -2)^T \). Because
\[ A - 4I = \begin{pmatrix} -6 & -6 \\ 4 & 4 \end{pmatrix}, \]
\( v_2 = (1, -1)^T \), leading to the exponential solution \( e^{4t}(1, -1)^T \). Thus, the general solution is
\[ y(t) = C_1 e^{2t}\begin{pmatrix} 3 \\ -2 \end{pmatrix} + C_2 e^{4t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Solutions emanate from the source tangent to the "slow" half-line solution generated by \( C_1(3, -2)^T \) and eventually parallel the "fast" half-line generated by \( C_2(1, -1)^T \) as they move forward in time. A hand sketch follows.
The phase portrait, drawn with a numerical solver, follows.
3.36. (a) For
\[ A = \begin{pmatrix} 1 & 4 \\ -1 & -3 \end{pmatrix} \]
we have \( T = \operatorname{tr}(A) = -2 \) and \( D = \det(A) = 1 \). Since the discriminant \( T^2 - 4D = 0 \), the point \( (T, D) \) lies on the parabola that divides nodal sinks from spiral sinks in the trace-determinant plane.
(b) The general solution can be written
\[ y(t) = e^{-t}\left[(C_1 + C_2 t)\begin{pmatrix} 2 \\ -1 \end{pmatrix} + C_2\begin{pmatrix} 0 \\ 1/2 \end{pmatrix}\right]. \]
Because \( te^{-t} \to 0 \) as \( t \to \infty \) (use l'Hôpital's rule), both \( e^{-t}(C_1 + C_2 t)(2, -1)^T \to 0 \) and \( C_2 e^{-t}(0, 1/2)^T \to 0 \) as \( t \to \infty \). However, the first term is larger for large values of \( t \). Thus, as \( t \to \infty \), \( y(t) \approx e^{-t}(C_1 + C_2 t)(2, -1)^T \), which implies that solutions approach the origin tangent to the half-line generated by \( (2, -1)^T \). In a similar manner, as \( t \to -\infty \), the term \( e^{-t}(C_1 + C_2 t)(2, -1)^T \) is larger than the term \( C_2 e^{-t}(0, 1/2)^T \), so solutions eventually parallel the half-line generated by \( (2, -1)^T \) as time moves backward.
(c) The following figure shows the half-line solutions and one other solution in each sector. The solutions clearly exhibit the behavior predicted in part (b).
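The trace-determinant reasoning used throughout this section can be collected into one small classifier. This is a sketch of my own, in plain Python; it ignores the borderline cases with \( D = 0 \) (such as Exercise 2.40) and treats exact equality in the discriminant test, which only makes sense for exact integer entries:

```python
# Classify the origin for y' = A y, A = [[a, b], [c, d]], from the
# trace-determinant plane: D < 0 saddle; D > 0 with T = 0 center; otherwise
# the sign of the discriminant T^2 - 4D separates nodal from spiral, with
# zero on the degenerate parabola.
def classify(a, b, c, d):
    T = a + d                  # trace
    D = a * d - b * c          # determinant
    if D < 0:
        return "saddle"
    if T == 0:
        return "center"
    kind = "source" if T > 0 else "sink"
    disc = T * T - 4 * D       # discriminant
    if disc > 0:
        return "nodal " + kind
    if disc == 0:
        return "degenerate nodal " + kind
    return "spiral " + kind
```

On the matrices of Exercises 3.36, 3.25, and 3.17 this reproduces "degenerate nodal sink", "saddle", and "center", matching the hand classifications.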
3.37. (a) There is one exponential solution, e^(λt) v1. Because λ < 0, this solution decays to the equilibrium point at
the origin along the halfline generated by C v1.
(b) The general solution is
          y(t) = e^(λt) [(C1 + C2 t) v1 + C2 v2].
Because λ < 0, the terms e^(λt)(C1 + C2 t) v1 and C2 e^(λt) v2 both decay to zero. However, the first term is
larger for large values of t. Thus, as t → ∞, y(t) ≈ e^(λt)(C1 + C2 t) v1, which implies that the solution
approaches zero tangent to the halfline generated by C2 v1.
(c) Because λ < 0, the terms e^(λt)(C1 + C2 t) v1 and C2 e^(λt) v2 get infinitely large in magnitude as t → −∞.
However, the first term is larger in magnitude for negative values of t that are large in magnitude. Thus,
as t → −∞, y(t) ≈ e^(λt)(C1 + C2 t) v1, which implies that the solution eventually parallels the halfline
generated by −C2 v1.
(d) Degenerate nodal sink.
3.38. In general everything moves in the opposite direction in comparison to the situation in Exercise 37.
(a) As t → ∞ the exponential solution tends to ∞ along the halfline generated by C1 v1.
(b) As t → ∞ the general solution tends to ∞ and becomes parallel to the halfline generated by C2 v1.
(c) As t → −∞ the general solution tends to 0 tangent to the halfline generated by −C2 v1.
3.39. The origin is a degenerate nodal source.
3.40. Because the linear degenerate nodal sources and sinks have only one eigenvalue, and because the eigenvalues
are given by
          λ1, λ2 = (T ± √(T^2 − 4D)) / 2,
we must have T^2 − 4D = 0. Therefore, the degenerate nodal sources and sinks lie on the parabola T^2 − 4D = 0
in the trace-determinant plane. This positioning on the boundary between the nodal sinks and sources and the
spiral sinks and sources is significant. The solutions attempt to spiral, but they cannot. The presence of the
halfline solutions prevents them from spiralling (solutions cannot cross).
If y′ = Ay, where
          A = [  6  4 ]
              [ −1  2 ],
then the trace is T = 8 and the determinant is D = 16. Further, T^2 − 4D = 8^2 − 4(16) = 0, so this system
lies on the parabola T^2 − 4D = 0 that separates spiral sources and sinks from nodal sources and sinks in the
trace-determinant plane. Thus, the equilibrium point at the origin is a degenerate nodal source (T = 8).
The characteristic equation is
          p(λ) = λ^2 − Tλ + D = λ^2 − 8λ + 16,
which produces a single eigenvalue λ = 4. Because
          A − 4I = [  2  4 ]
                   [ −1 −2 ],
v1 = (2, −1)^T and we have the exponential solution e^(4t)(2, −1)^T. To find another solution, we must solve
(A − λI)v2 = v1. Start with any vector that is not a multiple of v1, say w = (1, 0)^T. Then
          (A − 4I)w = [  2  4 ] [ 1 ] = [  2 ] = v1.
                      [ −1 −2 ] [ 0 ]   [ −1 ]
Thus, let v2 = w = (1, 0)^T. Then a second, independent solution is
          e^(4t)(v2 + t v1) = e^(4t) ( [ 1 ] + t [  2 ] ),
                                      [ 0 ]     [ −1 ]
and the general solution is
          y(t) = C1 e^(4t) [  2 ] + C2 e^(4t) ( [ 1 ] + t [  2 ] )
                           [ −1 ]              [ 0 ]     [ −1 ]
               = e^(4t) ( (C1 + C2 t) [  2 ] + C2 [ 1 ] ).
                                      [ −1 ]      [ 0 ]
We know that solutions must emanate from the origin parallel to the halflines generated by C1(2, −1)^T. Not
only that, the solutions must also turn parallel to the halflines as time marches forward. At (1, 0),
          [  6  4 ] [ 1 ] = [  6 ],
          [ −1  2 ] [ 0 ]   [ −1 ]
so the rotation is clockwise. A hand sketch follows. The phase portrait, drawn in a numerical solver, follows.
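The classification above can be double-checked numerically. The following sketch is an addition (assuming Python with numpy is available; it is not part of the printed solution): it recomputes the trace, determinant, and discriminant for this matrix, and verifies the repeated eigenvalue and its eigenvector.

```python
import numpy as np

A = np.array([[6.0, 4.0],
              [-1.0, 2.0]])

T = np.trace(A)                 # trace, expected 8
D = float(np.linalg.det(A))     # determinant, expected 16
disc = T**2 - 4.0 * D           # discriminant T^2 - 4D, expected 0

# The repeated eigenvalue is lambda = T/2 = 4, with eigenvector (2, -1)^T.
lam = T / 2.0
v1 = np.array([2.0, -1.0])
residual = A @ v1 - lam * v1    # should be the zero vector
```

Since the discriminant vanishes and T > 0, the point (T, D) sits on the right half of the parabola T^2 = 4D, consistent with a degenerate nodal source.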
3.41. If y′ = Ay, where
          A = [ −4 −4 ]
              [  1  0 ],
then the trace is T = −4 and the determinant is D = 4. Further, T^2 − 4D = (−4)^2 − 4(4) = 0, so this system
lies on the parabola T^2 − 4D = 0 that separates the spiral sources and sinks from the nodal sources and sinks
in the trace-determinant plane. Thus, the equilibrium point at the origin is a degenerate nodal sink (T = −4).
The characteristic equation is
          p(λ) = λ^2 − Tλ + D = λ^2 + 4λ + 4,
which produces the single eigenvalue λ = −2. Because
          A + 2I = [ −2 −4 ]
                   [  1  2 ],
v1 = (2, −1)^T and we have the exponential solution e^(−2t)(2, −1)^T. To find another solution, we must solve
(A − λI)v2 = v1. Start with any vector that is not a multiple of v1, say w = (1, 0)^T. Then
          (A + 2I)w = [ −2 −4 ] [ 1 ] = [ −2 ] = −v1.
                      [  1  2 ] [ 0 ]   [  1 ]
Thus, let v2 = −w = (−1, 0)^T. Then a second, independent solution is
          e^(−2t)(v2 + t v1) = e^(−2t) ( [ −1 ] + t [  2 ] ),
                                        [  0 ]     [ −1 ]
and the general solution is
          y(t) = C1 e^(−2t) [  2 ] + C2 e^(−2t) ( [ −1 ] + t [  2 ] )
                            [ −1 ]               [  0 ]     [ −1 ]
               = e^(−2t) ( (C1 + C2 t) [  2 ] + C2 [ −1 ] ).
                                       [ −1 ]      [  0 ]
We know that solutions must decay to the origin in a manner parallel to the halflines generated by C1(2, −1)^T.
Not only that, the solutions must also turn parallel to the halflines as time marches backward. We need only
find whether the rotation is clockwise or counterclockwise. But, at (1, 0),
          [ −4 −4 ] [ 1 ] = [ −4 ],
          [  1  0 ] [ 0 ]   [  1 ]
so the rotation is counterclockwise. A hand sketch follows. The phase portrait, drawn in a numerical solver, follows.
3.42. (a) In matrix form, the system
          x′ = x + ay
          y′ = x + y
is written
          [ x ]′    [ 1  a ] [ x ]
          [ y ]  =  [ 1  1 ] [ y ].
The trace of the coefficient matrix is T = 2 and the determinant is D = 1 − a. The discriminant is
          T^2 − 4D = (2)^2 − 4(1 − a) = 4a.
If the origin is a nodal source, then we must have D > 0 and T^2 − 4D > 0. Thus,
          1 − a > 0   and   4a > 0.
This leads to the requirement 0 < a < 1.
(b) Let
          A = [ 1  a ]
              [ 1  1 ].
In the case that 0 < a < 1,
          p(λ) = λ^2 − Tλ + D = λ^2 − 2λ + (1 − a).
The quadratic formula reveals the eigenvalues, λ1 = 1 + √a and λ2 = 1 − √a. Because
          A − λI = [ 1 − λ    a   ]
                   [   1    1 − λ ],
v = (λ − 1, 1)^T is the eigenvector associated with λ. If λ1 = 1 + √a, then v1 = (√a, 1)^T is its associated
eigenvector. If λ2 = 1 − √a, then v2 = (−√a, 1)^T is its associated eigenvector. Thus, the equations
of the halfline solutions are y = ±x/√a. As a → 0, the halfline solutions coalesce into one halfline
solution, which lies on the y-axis with equation x = 0.
(c) When a = 0, T = 2 and D = 1. Moreover, T^2 − 4D = (2)^2 − 4(1) = 0, and we lie on the parabola
T^2 − 4D = 0 in the trace-determinant plane. By part (b), the eigenvalues and eigenvectors coalesce, and
we have a degenerate nodal source. If a < 0, then T^2 − 4D = 4a < 0, and we move above the parabola
T^2 − 4D = 0 into the land of spiral sources.
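The identity T^2 − 4D = 4a derived in part (a) is easy to spot-check numerically for a few sample values of a. This sketch is an illustrative addition (assuming numpy), not part of the printed solution.

```python
import numpy as np

def discriminant(a):
    """Discriminant T^2 - 4D of the coefficient matrix [[1, a], [1, 1]]."""
    A = np.array([[1.0, a],
                  [1.0, 1.0]])
    T = np.trace(A)
    D = float(np.linalg.det(A))
    return T**2 - 4.0 * D

# T = 2 and D = 1 - a for every a, so the discriminant should always equal 4a.
samples = [-1.0, 0.0, 0.25, 1.0]
results = [discriminant(a) for a in samples]
```

Negative samples land above the parabola (spiral region), a = 0 lands on it, and 0 < a < 1 lands below it with D > 0, matching part (c).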
3.43. (a) If y′ = By, where
          B = [ 2  0 ]
              [ 0  2 ],
then B has a single eigenvalue λ = 2 and all vectors in R^2 are eigenvectors. Thus, e^(2t)(a, b)^T is an
exponential solution for all (a, b)^T in R^2. Moreover, these solutions will have to increase to infinity along
the halflines generated by C(a, b)^T. If y′ = Cy, where
          C = [ −2  0 ]
              [  0 −2 ],
then the eigenvalue is −2, making e^(−2t)(a, b)^T an exponential solution for all (a, b)^T in R^2. Thus, the
phase portrait is identical to the first graph, only with time reversed. Solutions now decay to zero along
the halflines generated by C(a, b)^T.
(b) The system y′ = By has a star source at the origin, but the system y′ = Cy has a star sink.
(c) Because
          B = [ 2  0 ]
              [ 0  2 ],
the trace is T = 4 and the determinant is D = 4. Further, T^2 − 4D = (4)^2 − 4(4) = 0, so this case lives
on the parabola T^2 − 4D = 0 in the trace-determinant plane. Moreover, because T = 4, it lives on the
right half of the parabola, nestled in the land of sources. In the case of
          C = [ −2  0 ]
              [  0 −2 ],
this case also lives on the parabola T^2 − 4D = 0, but because T = −4, it lives on the left half, in the
land of the sinks.
(d) If y′ = Ay, where
          A = [ a  0 ]
              [ 0  a ],
then the trace is T = 2a and the determinant is D = a^2. Further, T^2 − 4D = (2a)^2 − 4a^2 = 0, placing
the star sinks and sources on the parabola T^2 − 4D = 0 in the trace-determinant plane. If a > 0, then
T = 2a > 0, placing it on the right half of the parabola, making the equilibrium point at the origin a star
source. A similar argument shows that if a < 0, then the equilibrium point is a star sink.
3.44. Let A be a 2 × 2 matrix with real entries. If D = det(A) = 0, then the characteristic polynomial becomes
          p(λ) = λ^2 − Tλ + D
               = λ^2 − Tλ
               = λ(λ − T).
Thus, λ = 0 is an eigenvalue. On the other hand, if one eigenvalue is λ = 0, then λ must be a factor of the
characteristic polynomial λ^2 − Tλ + D. This can only happen if D = 0.
3.45. (1) If
          A = [   2   1 ]
              [ −10  −5 ],
then the trace is T = −3 and the determinant is D = 0. Thus, this degenerate case lies on the horizontal
axis, separating the saddles from the nodal sinks.
(2) To find the equilibrium points, we set the right-hand side of y′ = Ay equal to zero, as in Ay = 0.
Consequently, the equilibrium points are simply the nullspace of A, which is generated by a single
vector, v1 = (1, −2)^T. Thus, we have a whole line of equilibrium points. Everything on the line
y = −2x is an equilibrium point.
(3) The characteristic polynomial is
          p(λ) = λ^2 − Tλ + D = λ^2 + 3λ,
which produces eigenvalues λ1 = 0 and λ2 = −3. Because
          A − 0I = A = [   2   1 ]
                       [ −10  −5 ],
the eigenvector is v1 = (1, −2)^T, the same vector that produces a line of equilibrium points. Because
          A + 3I = [   5   1 ]
                   [ −10  −2 ],
v2 = (1, −5)^T. Thus, the general solution is
          y(t) = C1 e^(0t) [  1 ] + C2 e^(−3t) [  1 ],
                           [ −2 ]              [ −5 ]
or
          y(t) = C1 [  1 ] + C2 e^(−3t) [  1 ].
                    [ −2 ]              [ −5 ]
Note that each solution in this family is the sum of a fixed multiple of (1, −2)^T and a decaying multiple
of (1, −5)^T. Thus, as t → ∞, solutions move in lines parallel to (1, −5)^T, decaying into the line of
equilibrium points as shown in the following figure. Our numerical solver provides further evidence of this behavior.
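Both claims used above, that (1, −2)^T spans the line of equilibrium points and that (1, −5)^T is an eigenvector for λ = −3, can be verified numerically. The sketch below is an addition (assuming numpy), not part of the printed solution.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [-10.0, -5.0]])

# Every point on the line y = -2x is an equilibrium point:
# A annihilates v1 = (1, -2)^T, so A(c * v1) = 0 for every scalar c.
v1 = np.array([1.0, -2.0])
residual1 = A @ v1              # should be the zero vector

# The other eigenpair: A v2 = -3 v2 with v2 = (1, -5)^T.
v2 = np.array([1.0, -5.0])
residual2 = A @ v2 + 3.0 * v2   # should also vanish
```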
3.46. (1) If
          A = [   8   4 ]
              [ −10  −5 ],
then the trace is T = 3 and the determinant is D = 0. Thus, this degenerate case lies on the horizontal
axis, separating saddles from the nodal sources.
(2) To find the equilibrium points, we set the right-hand side of y′ = Ay equal to zero, as in Ay = 0.
Consequently, the equilibrium points are simply the nullspace of A, which is generated by a single
vector, v1 = (1, −2)^T. Thus, everything on the line y = −2x is an equilibrium point.
(3) The characteristic polynomial is
          p(λ) = λ^2 − Tλ + D = λ^2 − 3λ,
which produces eigenvalues λ1 = 0 and λ2 = 3. Because
          A + 0I = [   8   4 ]
                   [ −10  −5 ],
the eigenvector is v1 = (1, −2)^T, the same vector that produces a line of equilibrium points. Because
          A − 3I = [   5   4 ]
                   [ −10  −8 ],
v2 = (4, −5)^T. Thus, the general solution is
          y(t) = C1 e^(0t) [  1 ] + C2 e^(3t) [  4 ],
                           [ −2 ]             [ −5 ]
or
          y(t) = C1 [  1 ] + C2 e^(3t) [  4 ].
                    [ −2 ]             [ −5 ]
Note that each solution in this family is the sum of a fixed multiple of (1, −2)^T and an increasing
multiple of (4, −5)^T. Thus, as t → ∞, solutions move away from the line of equilibrium points along
lines parallel to (4, −5)^T, as shown in the following figure. Our numerical solver provides further evidence of this behavior.
3.47. The solutions emanate from a line of equilibrium points, rather than decaying into the line of equilibrium
points.

Section 4. Higher Dimensional Systems
4.1. If
          A = [ 2   1   0 ]
              [ 0   1   0 ],
              [ 6  10  −1 ]
then
          p(λ) = det(A − λI)
                     [ 2 − λ    1      0    ]
               = det [   0    1 − λ    0    ].
                     [   6     10   −1 − λ  ]
Expanding down the third column,
          p(λ) = (−1 − λ) det [ 2 − λ    1   ]
                              [   0    1 − λ ]
               = (−1 − λ)(2 − λ)(1 − λ)
               = −(λ + 1)(λ − 2)(λ − 1).
Thus, the eigenvalues are −1, 2, and 1, respectively. The graph of the characteristic polynomial follows. Note
that the graph crosses the horizontal axis at the eigenvalues −1, 2, and 1.
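The hand computation can be confirmed with a numerical eigenvalue routine. The sketch below is an addition (assuming numpy), not part of the printed solution.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [6.0, 10.0, -1.0]])

# numpy returns the eigenvalues in no particular order; sorting makes
# the comparison with the hand computation (-1, 1, 2) straightforward.
eigenvalues = np.sort(np.linalg.eigvals(A).real)
```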
4.2. If
          A = [ −1   6  2 ]
              [  0  −1  0 ],
              [ −1  11  2 ]
then
          p(λ) = det(A − λI)
                     [ −1 − λ    6      2   ]
               = det [    0    −1 − λ   0   ].
                     [   −1     11    2 − λ ]
Expanding across the second row,
          p(λ) = (−1 − λ) det [ −1 − λ    2   ]
                              [   −1    2 − λ ]
               = −(λ + 1)((−1 − λ)(2 − λ) + 2)
               = −(λ + 1)(λ^2 − λ)
               = −λ(λ + 1)(λ − 1).
Thus, the eigenvalues are 0, −1, and 1, respectively. The graph of the characteristic polynomial follows. Note
that the graph crosses the horizontal axis at the eigenvalues 0, −1, and 1.
4.3. If
          A = [  2  0   0 ]
              [ −6  1  −4 ],
              [ −3  0  −1 ]
then
          p(λ) = det(A − λI)
                     [ 2 − λ   0     0    ]
               = det [  −6   1 − λ  −4    ].
                     [  −3     0   −1 − λ ]
Expanding across the first row,
          p(λ) = (2 − λ) det [ 1 − λ   −4   ]
                             [   0   −1 − λ ]
               = (2 − λ)(1 − λ)(−1 − λ)
               = −(λ − 2)(λ − 1)(λ + 1).
Thus, the eigenvalues are 2, 1, and −1, respectively. Because A − 2I reduces,
          A − 2I = [  0   0   0 ]     [ 1  0   1 ]
                   [ −6  −1  −4 ]  →  [ 0  1  −2 ],
                   [ −3   0  −3 ]     [ 0  0   0 ]
it is easily seen that the nullspace of A − 2I is generated by the eigenvector v1 = (−1, 2, 1)^T. In a similar
manner, we arrive at the following eigenvalue-eigenvector pairs.
          1 ↔ [ 0 ]        −1 ↔ [ 0 ]
              [ 1 ]             [ 2 ]
              [ 0 ]             [ 1 ]
Because
              [ −1  0  0 ]
          det [  2  1  2 ] = −1,
              [  1  0  1 ]
the eigenvectors are independent.
4.4. If
          A = [ 1   0  0 ]
              [ 3  −2  1 ],
              [ 5  −5  2 ]
then
          p(λ) = det(A − λI)
                     [ 1 − λ    0      0    ]
               = det [   3    −2 − λ   1    ].
                     [   5     −5    2 − λ  ]
Expanding across the first row,
          p(λ) = (1 − λ) det [ −2 − λ    1   ]
                             [   −5    2 − λ ]
               = (1 − λ)((−2 − λ)(2 − λ) + 5)
               = (1 − λ)(λ^2 + 1).
Thus, the eigenvalues are 1, i, and −i, respectively. Because A − iI reduces,
          A − iI = [ 1 − i    0      0   ]     [ 1  0       0       ]
                   [   3    −2 − i   1   ]  →  [ 0  1  −2/5 + 1/5 i ],
                   [   5     −5    2 − i ]     [ 0  0       0       ]
it is easily seen that the nullspace of A − iI is generated by the eigenvector v = (0, 2 − i, 5)^T. In a similar
manner, we arrive at the following eigenvalue-eigenvector pairs.
          −i ↔ [   0   ]        1 ↔ [ 1 ]
               [ 2 + i ]            [ 1 ]
               [   5   ]            [ 0 ]
Because
              [   0      0    1 ]
          det [ 2 − i  2 + i  1 ] = −10i,
              [   5      5    0 ]
the eigenvectors are independent.
4.5. If
          A = [ −4  0   2 ]
              [ 12  2  −6 ],
              [ −6  0   3 ]
then
β4 β Ξ»
0
2
12
2 β Ξ» β6
=
β6
0
3βΞ» Expanding down the second column,
β4 β Ξ»
2
β6
3βΞ»
= β(Ξ» β 2)(Ξ»2 + Ξ»)
= βΞ»(Ξ» β 2)(Ξ» + 1). p(Ξ») = (2 β Ξ») Thus, the eigenvalues are 0, 2, and β1, respectively. Because A β 0I reduces,
          A − 0I = [ −4  0   2 ]     [ 1  0  −1/2 ]
                   [ 12  2  −6 ]  →  [ 0  1    0  ],
                   [ −6  0   3 ]     [ 0  0    0  ]
it is easily seen that the nullspace of A − 0I is generated by the eigenvector v1 = (1, 0, 2)^T. In a similar
manner, we arrive at the following eigenvalue-eigenvector pairs.
          2 ↔ [ 0 ]        −1 ↔ [ −2 ]
              [ 1 ]             [  2 ]
              [ 0 ]             [ −3 ]
Because
              [ 1  0  −2 ]
          det [ 0  1   2 ] = 1,
              [ 2  0  −3 ]
the eigenvectors are independent.
4.6. If
          A = [ −5  −2   0 ]
              [  4   1   0 ],
              [ −3  −1  −2 ]
then
          p(λ) = det(A − λI)
                     [ −5 − λ   −2      0    ]
               = det [   4     1 − λ    0    ].
                     [  −3      −1   −2 − λ  ]
Expanding down the third column,
          p(λ) = (−2 − λ) det [ −5 − λ   −2   ]
                              [   4     1 − λ ]
               = −(λ + 2)((−5 − λ)(1 − λ) + 8)
               = −(λ + 2)(λ^2 + 4λ + 3)
               = −(λ + 2)(λ + 1)(λ + 3).
Thus, the eigenvalues are −2, −1, and −3, respectively. Because
          A + 2I = [ −3  −2  0 ]     [ 1  0  0 ]
                   [  4   3  0 ]  →  [ 0  1  0 ],
                   [ −3  −1  0 ]     [ 0  0  0 ]
it is easily seen that the nullspace of A + 2I is generated by the eigenvector v = (0, 0, 1)^T. In a similar
manner, we arrive at the following eigenvalue-eigenvector pairs.
          −1 ↔ [ −1 ]        −3 ↔ [ −1 ]
               [  2 ]             [  1 ]
               [  1 ]             [ −2 ]
Because
              [ 0  −1  −1 ]
          det [ 0   2   1 ] = 1,
              [ 1   1  −2 ]
the eigenvectors are independent.
4.7. The system in matrix form,
          [ x ]′    [ 4  −5  4 ] [ x ]
          [ y ]  =  [ 0  −1  4 ] [ y ],
          [ z ]     [ 0   0  1 ] [ z ]
reveals that the matrix
          A = [ 4  −5  4 ]
              [ 0  −1  4 ]
              [ 0   0  1 ]
is upper triangular. Thus, the eigenvalues are located on the main diagonal and are −1, 4, and 1. Because
          A + I = [ 5  −5  4 ]     [ 1  −1  0 ]
                  [ 0   0  4 ]  →  [ 0   0  1 ],
                  [ 0   0  2 ]     [ 0   0  0 ]
it is easily seen that −1 ↔ (1, 1, 0)^T is an eigenvalue-eigenvector pair. Similarly,
          4 ↔ [ 1 ]        1 ↔ [ 2 ]
              [ 0 ]            [ 2 ]
              [ 0 ]            [ 1 ]
are the remaining eigenvalue-eigenvector pairs. These lead to the general solution
          [ x ]             [ 1 ]             [ 1 ]            [ 2 ]
          [ y ]  = C1 e^(−t) [ 1 ] + C2 e^(4t) [ 0 ] + C3 e^(t) [ 2 ].
          [ z ]             [ 0 ]             [ 0 ]            [ 1 ]
4.8. For
          A = [ −3  0  −1 ]
              [  3  2   3 ],
              [  2  0   0 ]
we have
          A − λI = [ −3 − λ    0     −1 ]
                   [    3    2 − λ    3 ].
                   [    2      0     −λ ]
βΞ» We can compute the characteristic polynomial p(Ξ») = det (A β Ξ»I ) by expanding along the second column
to get
β3 β Ξ» β1
p(Ξ») = (2 β Ξ») det
2
βΞ»
= β(Ξ» β 2)(Ξ»2 + 3Ξ» + 2)
= β(Ξ» β 2)(Ξ» + 1)(Ξ» + 2).
Hence the eigenvalues are Ξ»1 = β2, Ξ»2 = β1, and Ξ»3 = 2.
For Ξ»1 = β2 we have
β1 0
34
A β Ξ»1 I = A + 2 I =
20 β1
3
2 The nullspace is generated by the vector v1 = (β1, 0, 1)T .
For Ξ»2 = β1 we have
β2 0
33
A β Ξ»2 I = A + I =
20 β1
3
1 The nullspace is generated by the vector v2 = (1, 1, β2)T .
For Ξ»3 = 2 we have
β5 0
30
A β Ξ»3 I = A β 2 I =
20 β1
3
β2 The nullspace is generated by the vector v3 = (0, 1, 0)T .
Thus we have three exponential solutions:
y1 (t) = eΞ»1 t v1 = eβ2t
y2 (t) = eΞ»2 t v2 = eβt
y3 (t) = eΞ»3 t v3 = e2t β1
0
1
1
1
β2
0
1
0 Since the three eigenvalues are distinct, these solutions are linearly independent and form a fundamental set
of solutions. The general solution is
y(t) = C1 y1 (t) + C2 y2 (t) + C3 y3 (t). 622
4.9. In matrix form,
          [ x ]′    [ −3  0   0 ] [ x ]
          [ y ]  =  [ −5  6  −4 ] [ y ].
          [ z ]     [ −5  2   0 ] [ z ]
The characteristic polynomial of the matrix
          A = [ −3  0   0 ]
              [ −5  6  −4 ]
              [ −5  2   0 ]
is found by calculating
          p(λ) = det(A − λI)
                     [ −3 − λ   0    0  ]
               = det [  −5    6 − λ  −4 ].
                     [  −5      2    −λ ]
Expanding across the first row,
          p(λ) = (−3 − λ) det [ 6 − λ  −4 ]
                              [   2    −λ ]
               = −(λ + 3)(λ^2 − 6λ + 8)
               = −(λ + 3)(λ − 4)(λ − 2).
Thus, the eigenvalues are 4, −3, and 2. Because A − 4I reduces,
          A − 4I = [ −7  0   0 ]     [ 1  0   0 ]
                   [ −5  2  −4 ]  →  [ 0  1  −2 ],
                   [ −5  2  −4 ]     [ 0  0   0 ]
it is easily seen that the nullspace of A − 4I is generated by the eigenvector v1 = (0, 2, 1)^T. In a similar
manner, we arrive at the following eigenvalue-eigenvector pairs.
          −3 ↔ [ 1 ]        2 ↔ [ 0 ]
               [ 1 ]            [ 1 ]
               [ 1 ]            [ 1 ]
Thus, the general solution is
          [ x ]             [ 0 ]              [ 1 ]             [ 0 ]
          [ y ]  = C1 e^(4t) [ 2 ] + C2 e^(−3t) [ 1 ] + C3 e^(2t) [ 1 ].
          [ z ]             [ 1 ]              [ 1 ]             [ 1 ]
4.10. For
          A = [ −3  −6  −2 ]
              [  0   1   0 ],
              [  0  −2  −1 ]
we have
          A − λI = [ −3 − λ   −6     −2   ]
                   [    0    1 − λ    0   ].
                   [    0     −2   −1 − λ ]
We can compute the characteristic polynomial p(λ) = det(A − λI) by expanding across the second row to
get
          p(λ) = (1 − λ) det [ −3 − λ   −2   ]
                             [    0   −1 − λ ]
               = (1 − λ)(−3 − λ)(−1 − λ).
Hence, the eigenvalues are λ1 = 1, λ2 = −3, and λ3 = −1, respectively. For λ1 = 1, we have
          A − I = [ −4  −6  −2 ]     [ 1  0  −1 ]
                  [  0   0   0 ]  →  [ 0  1   1 ].
                  [  0  −2  −2 ]     [ 0  0   0 ]
The nullspace is generated by the vector v1 = (1, −1, 1)^T. For λ2 = −3,
          A + 3I = [ 0  −6  −2 ]     [ 0  1  0 ]
                   [ 0   4   0 ]  →  [ 0  0  1 ].
                   [ 0  −2   2 ]     [ 0  0  0 ]
The nullspace is generated by the vector v2 = (1, 0, 0)^T. For λ3 = −1,
          A + I = [ −2  −6  −2 ]     [ 1  0  1 ]
                  [  0   2   0 ]  →  [ 0  1  0 ].
                  [  0  −2   0 ]     [ 0  0  0 ]
The nullspace is generated by v3 = (−1, 0, 1)^T. Thus, we have three exponential solutions.
          y1(t) = e^t [  1 ],   y2(t) = e^(−3t) [ 1 ],   y3(t) = e^(−t) [ −1 ]
                      [ −1 ]                    [ 0 ]                  [  0 ]
                      [  1 ]                    [ 0 ]                  [  1 ]
Since the three eigenvalues are distinct, these solutions are linearly independent and form a fundamental set
of solutions. The general solution is
          y(t) = C1 e^t [  1 ] + C2 e^(−3t) [ 1 ] + C3 e^(−t) [ −1 ].
                        [ −1 ]              [ 0 ]             [  0 ]
                        [  1 ]              [ 0 ]             [  1 ]
4.11. The characteristic polynomial of the matrix
          A = [ −3  4  8 ]
              [ −2  3  2 ]
              [  0  0  2 ]
is found by calculating
          p(λ) = det(A − λI)
                     [ −3 − λ    4      8   ]
               = det [  −2     3 − λ    2   ].
                     [   0       0    2 − λ ]
Expanding across the third row,
          p(λ) = (2 − λ) det [ −3 − λ    4   ]
                             [  −2     3 − λ ]
               = −(λ − 2)(λ^2 − 1)
               = −(λ − 2)(λ + 1)(λ − 1).
Thus, the eigenvalues are −1, 1, and 2. Because A + I reduces,
          A + I = [ −2  4  8 ]     [ 1  −2  0 ]
                  [ −2  4  2 ]  →  [ 0   0  1 ],
                  [  0  0  3 ]     [ 0   0  0 ]
it is easily seen that the nullspace of A + I is generated by the eigenvector v1 = (2, 1, 0)^T. In a similar
manner, we arrive at the following eigenvalue-eigenvector pairs.
          1 ↔ [ 1 ]        2 ↔ [  0 ]
              [ 1 ]            [ −2 ]
              [ 0 ]            [  1 ]
Thus, the general solution is
          y(t) = C1 e^(−t) [ 2 ] + C2 e^t [ 1 ] + C3 e^(2t) [  0 ].
                           [ 1 ]          [ 1 ]             [ −2 ]
                           [ 0 ]          [ 0 ]             [  1 ]
4.12.
In matrix form, x 2 1 4
0
0
2 x2 3
x = 0
3 x4 3 β4 β1 .
0
β3 0
β2
1
β2 Using a computer, we ο¬nd the following eigenvalueeigenvector pairs.
1
1
0 β2 β2 ββ ,
0
β1 1
2 ββ ,
0
1 1
β1 ββ ,
0
1 0
1
1 ββ β1 1 Because the eigenvalues are distinct, the eigenvectors are independent and the exponential solutions
1
1 β2 1
, y2 (t) = e2t ,
y1 (t) = eβ2t 0
0
β1
1
0
0
1
1
y3 (t) = eβt , and y4 (t) = et 0
β1 1
1
form a fundamental set of solutions. Thus,
1 x (t) 1 0 x2 (t) β 2 t β2 2t 1 βt 1 t 1 x (t) = C1 e 0 + C2 e 0 + C3 e 0 + C4 e β1 3
β1
1
1
1
x4 (t)
is the general solution.
4.13. The general solution in Exercise 7 was
          [ x ]             [ 1 ]             [ 1 ]            [ 2 ]
          [ y ]  = C1 e^(−t) [ 1 ] + C2 e^(4t) [ 0 ] + C3 e^(t) [ 2 ].
          [ z ]             [ 0 ]             [ 0 ]            [ 1 ]
If x(0) = 1, y(0) = −1, and z(0) = 2, then
          [  1 ]       [ 1 ]      [ 1 ]      [ 2 ]
          [ −1 ]  = C1 [ 1 ] + C2 [ 0 ] + C3 [ 2 ].
          [  2 ]       [ 0 ]      [ 0 ]      [ 1 ]
The augmented matrix reduces.
          [ 1  1  2 |  1 ]     [ 1  0  0 | −5 ]
          [ 1  0  2 | −1 ]  →  [ 0  1  0 |  2 ]
          [ 0  0  1 |  2 ]     [ 0  0  1 |  2 ]
Thus, C1 = −5, C2 = 2, and C3 = 2, and the particular solution is
          [ x(t) ]              [ 1 ]           [ 1 ]          [ 2 ]
          [ y(t) ]  = −5e^(−t)  [ 1 ] + 2e^(4t) [ 0 ] + 2e^(t) [ 2 ]
          [ z(t) ]              [ 0 ]           [ 0 ]          [ 1 ]
                    [ −5e^(−t) + 2e^(4t) + 4e^(t) ]
                  = [ −5e^(−t) + 4e^(t)           ].
                    [ 2e^(t)                      ]
4.14. The solution has the form
          y(t) = C1 y1(t) + C2 y2(t) + C3 y3(t),
where y1, y2, and y3 are the fundamental set of solutions found in Exercise 9.4.8. Hence we must have
          [  1 ]
          [ −1 ]  = y(0)
          [  2 ]
                  = C1 y1(0) + C2 y2(0) + C3 y3(0)
                  = C1 v1 + C2 v2 + C3 v3
                                  [ C1 ]
                  = [v1, v2, v3]  [ C2 ],
                                  [ C3 ]
where v1, v2, and v3 are the eigenvectors of A found in Exercise 9.4.8. To solve the system we form the
augmented matrix
                                [ −1   1  0 |  1 ]
          [v1, v2, v3, y(0)] =  [  0   1  1 | −1 ].
                                [  1  −2  0 |  2 ]
This is reduced to the row echelon form
          [ 1  0  0 | −4 ]
          [ 0  1  0 | −3 ].
          [ 0  0  1 |  2 ]
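The row reduction is equivalent to solving the linear system [v1, v2, v3] C = y(0) for the constants. As a numerical cross-check (a sketch assuming numpy; not part of the printed solution):

```python
import numpy as np

# Columns are the eigenvectors v1, v2, v3 found in Exercise 9.4.8.
V = np.array([[-1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, -2.0, 0.0]])
y0 = np.array([1.0, -1.0, 2.0])

# Solve [v1 v2 v3] (C1, C2, C3)^T = y(0).
C = np.linalg.solve(V, y0)
```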
Backsolving, we find that C1 = −4, C2 = −3, and C3 = 2. Hence the solution is
          y(t) = −4e^(−2t) [ −1 ] − 3e^(−t) [  1 ] + 2e^(2t) [ 0 ]
                           [  0 ]           [  1 ]           [ 1 ]
                           [  1 ]           [ −2 ]           [ 0 ]
                 [ 4e^(−2t) − 3e^(−t)  ]
               = [ −3e^(−t) + 2e^(2t)  ].
                 [ −4e^(−2t) + 6e^(−t) ]
4.15. The general solution in Exercise 9 was
          [ x ]             [ 0 ]              [ 1 ]             [ 0 ]
          [ y ]  = C1 e^(4t) [ 2 ] + C2 e^(−3t) [ 1 ] + C3 e^(2t) [ 1 ].
          [ z ]             [ 1 ]              [ 1 ]             [ 1 ]
If x(0) = −2, y(0) = 0, and z(0) = 2, then
          [ −2 ]       [ 0 ]      [ 1 ]      [ 0 ]
          [  0 ]  = C1 [ 2 ] + C2 [ 1 ] + C3 [ 1 ].
          [  2 ]       [ 1 ]      [ 1 ]      [ 1 ]
The augmented matrix reduces.
          [ 0  1  0 | −2 ]     [ 1  0  0 | −2 ]
          [ 2  1  1 |  0 ]  →  [ 0  1  0 | −2 ]
          [ 1  1  1 |  2 ]     [ 0  0  1 |  6 ]
Thus, C1 = −2, C2 = −2, and C3 = 6, and the particular solution is
          [ x(t) ]             [ 0 ]             [ 1 ]            [ 0 ]
          [ y(t) ]  = −2e^(4t) [ 2 ] − 2e^(−3t)  [ 1 ] + 6e^(2t)  [ 1 ]
          [ z(t) ]             [ 1 ]             [ 1 ]            [ 1 ]
                    [ −2e^(−3t)                     ]
                  = [ −4e^(4t) − 2e^(−3t) + 6e^(2t) ].
                    [ −2e^(4t) − 2e^(−3t) + 6e^(2t) ]
4.16. The general solution in Exercise 10 was
          y(t) = C1 e^t [  1 ] + C2 e^(−3t) [ 1 ] + C3 e^(−t) [ −1 ].
                        [ −1 ]              [ 0 ]             [  0 ]
                        [  1 ]              [ 0 ]             [  1 ]
Because y(0) = (−3, −3, 0)^T,
          [ −3 ]       [  1 ]      [ 1 ]      [ −1 ]
          [ −3 ]  = C1 [ −1 ] + C2 [ 0 ] + C3 [  0 ].
          [  0 ]       [  1 ]      [ 0 ]      [  1 ]
The augmented matrix reduces.
          [  1  1  −1 | −3 ]     [ 1  0  0 |  3 ]
          [ −1  0   0 | −3 ]  →  [ 0  1  0 | −9 ]
          [  1  0   1 |  0 ]     [ 0  0  1 | −3 ]
Thus, C1 = 3, C2 = −9, and C3 = −3, and the particular solution is
y(t) = 3et
4.17. 1
β1
1 β 9eβ3t 1
0
0 β 3e β t β1
0
1 . + C2 et 1
1
0 + C3 e2t 0
β2
1 . + C2 1
1
0 + C3 0
β2
1 β 1
0
0 0
1
0 1
β1
1 The general solution in Exercise 11 was
2
1
0 y(t) = C1 eβt
If y(0) = (1, β2, 1)T , then
1
β2
1 = C1 2
1
0 . The augmented matrix reduces.
2
1
0 1
1
0 0
β2
1 1
β2
1 0
0
1 Thus, C1 = 1, C2 = β1, and C3 = 1, and the particular solution is
2
1
1 β et 1
0
0
2 e βt β e t
e βt β e t β 2 e 2 t
e 2t y(t) = eβt
=
4.18. The general solution in Exercise 12 was x (t) 1 0
β2
1 + e 2t 0 x2 (t) β 2 t β2 2t 1 βt 1 t 1 x (t) = C1 e 0 + C2 e 0 + C3 e 0 + C4 e β1 .
3
x4 (t)
β1
1
1
1
1 1 0 Because x1 (0) = 1, x2 (0) = β1, x3 (0) = 0, and x4 (0) = 2,
1
1
1
0 0 β1 2 1
1
1 0 = C1 0 + C2 0 + C3 0 + C4 β1 .
2
β1
1
1
1 The augmented matrix reduces.
1 1 β2
0
β1 1
0
1 0
1
0
1 0
1
β1
1 1
1
β1 0
β
0
0
0
2 0
1
0
0 0
0
1
0 0
0
0
1 3
β2 7
0 9.4. Higher Dimensional Systems 627 Thus, C1 = 3, C2 = β2, C3 = 7, and C4 = 0, and the particular solution is x (t) 1
1
0
1 x2 (t) β2t β2 2t 1 βt 1 x (t) = 3e 0 β 2e 0 + 7e 0 .
3 β1 x4 (t) 4.19. 1 1 Using Eulerβs formula
y(t) = e2it 1
1 + 2i
β3i = (cos 2t + i sin 2t) 1
1
β3 0
2
β3 +i 1
0
0
2 + i cos 2t
2 + i sin 2t
1 β sin 2t
β3
β3
β3
cos 2t
sin 2t
cos 2t β 2 sin 2t
2 cos 2t + sin 2t
.
+i
β3 cos 2t β 3 sin 2t
β3 cos 2t + 3 sin 2t = cos 2t
= 1
1
β3 Thus, the real and imaginary parts of the complex solution y(t) = y1 (t) + i y2 (t) are
y1 (t) =
4.20. cos 2t
cos 2t β 2 sin 2t
β3 cos 2t + 3 sin 2t y2 (t) = and sin 2t
2 cos 2t + sin 2t
β3 cos 2t β 3 sin 2t . Using Eulerβs formula, 1
1 + i y(t) = e(1+i)t 1βi
0 0 1 1 = et (cos t + i sin t) + i β1 1
0
0 0 1 1 0 1 1 1 1
= et cos β sin t + iet cos t + sin t β1 1
1
β1 0
0
0
0 1 Thus, the real and imaginary parts of the complex solution y(t) = y1 (t) + i y2 (t) are cos t
sin t cos t β sin t y1 (t) = et cos t + sin t 0 4.21. In matrix form, x
y
z = Using a computer, matrix
A= cos t + sin t y2 (t) = et .
β cos t + sin t 0 and β4
β4
0
β4
β4
0 8
4
0 x
y
z 8
2
2
8
4
0 8
2
2 . 628 Chapter 9. Linear Systems with Constant Coefο¬cients
has eigenvalues 2, 4i , and β4i . For the eigenvalue 2, we look for an vector in the nullspace (eigenspace) of
β6
β4
0 A β 2I = 8
2
0 8
2
0 . The computer tells us that (0, β1, 1)T is in the nullspace of A β 2I . Thus, one solution is y1 (t) =
e2t (0, β1, 1)T . In a similar vein, our computer tells us that (1 β i, 1, 0)T is in the nullspace of A β (4i)I .
Thus, we have conjugate solutions
1βi
1
0 z(t) = e4it 1+i
1
0 z(t) = eβ4it and . Using Eulerβs formula, we ο¬nd the real and imaginary parts of the solution z(t).
z(t) = e4it 1βi
1
0 = (cos 4t + i sin 4t)
cos 4t + sin 4t
cos 4t
0 = 1
1
0
+i β1
+i 0
0
β cos 4t + sin 4t
sin 4t
0 . The real and imaginary parts of z are solutions and we can write the general solution
x (t)
y(t)
z(t)
4.22. = C1 e2t 0
β1
1 cos 4t + sin 4t
cos 4t
0 + C2 + C3 β cos 4t + sin 4t
sin 4t
0 . Using a computer, matrix
A= 2
1
β3 4
2
β4 4
3
β5 has the following eigenvalueeigenvector pairs.
β1 ββ 0
β1
1 , 2i ββ β2
β1 β i
2 , β2i ββ β2
β1 + i
2 . Using Eulerβs formula,
z(t) = e2it β2
β1 β i
2 = (cos 2t + i sin 2t)
= cos 2t β2
β1
2 β2
β1
2 β sin 2t +i
0
β1
0 0
β1
0
+ cos 2t 0
β1
0 + sin 2t β2
β1
2 The real and imaginary parts of z are solutions and we can write the general solution.
y(t) = C1 eβt 0
β1
1 + C2 β2 cos 2t
β cos 2t + sin 2t
2 cos 2t + C3 β2 sin 2t
β cos 2t β sin 2t
2 sin 2t . 9.4. Higher Dimensional Systems
4.23. 629 In matrix form,
x
y
z 6
8
8 = β4
0
β2 0
β2
0 x
y
z . Using a computer, matrix
A= 6
8
8 β4
0
β2 0
β2
0 has eigenvalues β2, 2 + 4i , and 2 β 4i . For the eigenvalue β2, we look for a vector in the nullspace (eigenspace)
of
8 0 β4
A + 2I = 8 0 0 .
80 0
The computer tells us that (0, 1, 0)T is in the nullspace of A + 2I . Thus, one solution is y1 (t) = eβ2t (0, 1, 0)T .
In a similar vein, our computer tells us that (1 + i, 2, 2)T is in the nullspace of A β (2 + 4i)I . Thus, we have
conjugate solutions
z(t) = e(2+4i)t 1+i
2
2 z(t) = e(2β4i)t and 1βi
2
2 . Using Eulerβs formula, we ο¬nd the real and imaginary parts of the solution z(t).
z(t) = e2t e4it 1+i
2
2 = e2t (cos 4t + i sin 4t)
= e 2t cos 4t β sin 4t
2 cos 4t
2 cos 4t 1
2
2 1
0
0
cos 4t + sin 4t
2 sin 4t
2 sin 4t +i + ie2t The real and imaginary parts of z are solutions and we can write the general solution
x (t)
y(t)
z(t)
4.24. = C1 eβ2t 0
1
0 + C2 e2t cos 4t β sin 4t
2 cos 4t
2 cos 4t + C3 e 2 t cos 4t + sin 4t
2 sin 4t
2 sin 4t Using a computer, matrix
A= β1
β52
β20 0
β11
β4 0
26
9 has the following eigenvalueeigenvector pairs.
β1 + 2i ββ 0
5βi
2 , β1 β 2i ββ 0
5+i
2 , β1 ββ 1
0
2 . 630 Chapter 9. Linear Systems with Constant Coefο¬cients
Using Eulerβs formula,
0
5βi
2 z(t) = e(β1+2i)t 0
5
2 = eβt (cos 2t + i sin 2t)
= eβt cos 2t 0
5
2 β sin 2t 0
β1
0 +i
0
β1
0 0
β1
0 + ieβt cos 2t 0
5
2 + sin 2t . The real and imaginary parts of z are solutions and we can write the general solution.
y(t) = C1 eβt
4.25. 0
5 cos 2t + sin 2t
2 cos 2t In system y = Ay, where
A=
we have
A β Ξ»I = 0
β cos 2t + 5 sin 2t
2 sin 2t + C 2 e βt β7
2
3 β13
3
8 β7 β Ξ»
2
3 0
0
β2 + C 3 e βt 1
0
2 , β13
0
3βΞ»
0
8
β2 β Ξ» . We can compute the characteristic polynomial by expanding along the third column. We get
p(Ξ») = det (A β Ξ»I )
β7 β Ξ» β13
2
3βΞ»
= β(Ξ» + 2)(Ξ»2 + 4Ξ» + 5).
= (β2 β Ξ») det Hence we have one real eigenvalue Ξ»1 = β2, and the quadratic Ξ»2 + 4Ξ» + 5 has complex roots Ξ»2 = β2 + i ,
and Ξ»2 = β2 β i . For the eigenvalue Ξ»1 = β2, we look for a vector in the nullspace (eigenspace) of
A β Ξ»1 I = A + 2 I = β5
2
3 β13
5
8 0
0
0 . The eigenspace is generated by v1 = (0, 0, 1)T . Thus, one solution is
y1 (t) = eβ2t 0
0
1 . For the eigenvalue Ξ»2 = β2 + i , we look for an vector in the nullspace (eigenspace) of
A β Ξ»1 I = A + (2 β i)I = β5 β i
2
3 β13
5βi
8 0
0
βi . The eigenspace is generated by (β5 + i, 2, 3 β i)T . Thus, we have the complex conjugate solutions
z(t) = e(β2+i)t β5 + i
2
3βi and z(t) = e(β2βi)t β5 β i
2
3+i . 9.4. Higher Dimensional Systems 631 Using Eulerβs formula, we ο¬nd the real and imaginary parts of the solution z(t).
β5 + i
2
3βi z(t) = eβ2t eit β5
2
3 = eβ2t (cos t + i sin t)
= e β2 t β5 cos t β sin t
2 cos t
3 cos t + sin t 1
0
β1
cos t β 5 sin t
2 sin t
β cos t + 3 sin t +i + ieβ2t Thus we have the solutions
β5 cos t β sin t
2 cos t
3 cos t + sin t
cos t β 5 sin t
2 sin t
β cos t + 3 sin t y2 (t) = Re(z(t)) = eβ2t
y3 (t) = Im(z(t)) = eβ2t and The general solution is
y(t) = C1 y1 (t) + C2 y2 (t) + C3 y3 (t).
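The eigenvalues −2 and −2 ± i found above can be confirmed numerically; numpy returns the complex conjugate pair directly. The sketch below is an addition (assuming numpy), not part of the printed solution.

```python
import numpy as np

A = np.array([[-7.0, -13.0, 0.0],
              [2.0, 3.0, 0.0],
              [3.0, 8.0, -2.0]])

eigenvalues = np.linalg.eigvals(A)

# Expect the real eigenvalue -2 and the conjugate pair -2 +/- i.
# sort_complex orders by real part, then imaginary part, so the
# computed and expected lists can be compared elementwise.
expected = np.sort_complex(np.array([-2.0, -2.0 + 1j, -2.0 - 1j]))
computed = np.sort_complex(eigenvalues)
```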
4.26. If β2
10
1 A= 0
5
2 0
β10
β3 , then the characteristic polynomial is
p(Ξ») = det (A β Ξ»I ) = β2 β Ξ»
0
0
10
5βΞ»
β10 .
1
2
β3 β Ξ» Expanding across the ο¬rst row,
5βΞ»
β10
2
β3 β Ξ»
= (β2 β Ξ»)(Ξ»2 β 2Ξ» + 5), p(Ξ») = (β2 β Ξ») which has roots β2 and 1 Β± 2i . For Ξ»1 = β2,
A + 2I = 0
10
1 0
7
2 0
β10
β1 β 1
0
0 0
1
0 β1
0
0 β 1
0
0 and v1 = (1, 0, 1)T is its associated eigenvector and
y1 (t) = eβ2t 1
0
1 is an exponential solution. For Ξ»2 = 1 + 2i ,
A β (1 + 2i)I = β3 β 2i
10
1 0
4 β 2i
2 0
β10
β4 β 2i 0
1
0 0
β2 β i
0 , 632 Chapter 9. Linear Systems with Constant Coefο¬cients
and v2 = (0, 2 + i, 1)T is its associated eigenvector. Using Eulerβs formula,
0
2+i
1 z(t) = e(1+2i)t 0
2
1 = et (cos 2t + i sin 2t)
0
2
1 = et cos 2t 0
1
0 +i
0
1
0 β sin 2t 0
1
0 + iet cos 2t + sin 2t 0
2
1 . The real and imaginary parts of z(t) give two more independent solutions
y2 (t) = e t 0
2 cos 2t β sin 2t
cos 2t and y2 (t) = e 0
cos 2t + 2 sin 2t
sin 2t t and the general solution is
y(t) = C1 y1 (t) + C2 y2 (t) + C3 y3 (t).
4.27. In Exercise 21, the general solution was
x (t)
y(t)
z(t) 0
β1
1 = C1 e2t + C2 cos 4t + sin 4t
cos 4t
0 + C3 β cos 4t + sin 4t
sin 4t
0 . If x(0) = 1, y(0) = 0, and z(0) = 0, then
1
0
0 + C2 0
β1
1 = C1 1
1
0 + C3 β1
0
0 β 1
0
0 0
1
0 0
0
β1 . The augmented matrix reduces.
0
β1
1 β1
0
0 1
1
0 1
0
0 0
0
1 Thus, C1 = C2 = 0 and C3 = β1, giving the particular solution
x (t)
y(t)
z(t)
4.28. = cos 4t β sin 4t
β sin 4t
0 . In Exercise 22, the general solution was
y(t) = C1 eβt 0
β1
1 + C2 β2 cos 2t
β cos 2t + sin 2t
2 cos 2t + C3 β2 sin 2t
β cos 2t β sin 2t
2 sin 2t = C1 0
β1
1 β2
β1
2 + C2 0
β1
0 . 0
0
1 1
β1/2
1/2 . If y(0) = (1, β1, 0)T , then
1
β1
0 + C2 The augmented matrix reduces.
0
β1
1 β2
β1
2 0
β1
0 1
β1
0 β 1
0
0 0
1
0 . 9.4. Higher Dimensional Systems 633 Thus, C1 = 1, C2 = β1/2, C3 = 1/2 and the solution is
β2 cos 2t
0
1
β cos 2t + sin 2t
β1 β
2
2 cos 2t
1
cos 2t β sin 2t
βeβt β sin 2t
.
βt
e β cos 2t + sin 2t y(t) = eβt
=
4.29. + β2 sin 2t
β cos 2t β sin 2t
2 sin 2t 1
2 In Exercise 23, the general solution was
x (t)
y(t)
z(t) 0
1
0 = C1 eβ2t cos 4t β sin 4t
2 cos 4t
2 cos 4t + C2 e2t cos 4t + sin 4t
2 sin 4t
2 sin 4t + C3 e 2 t . If x(0) = β2, y(0) = β1, and z(0) = 0, then
β2
β1
0 = C1 0
1
0 + C2 β2
β1
0 β 1
2
2 1
0
0 + C3 . The augmented matrix reduces.
0
1
0 1
2
2 1
0
0 1
0
0 0
1
0 0
0
1 β1
0
β2 Thus, C1 = β1, C2 = 0 and C3 = β2, giving the particular solution
x (t)
y(t)
z(t)
4.30. = e2t (β2 cos 4t β 2 sin 4t)
βeβ2t β 4e2t sin 4t
β4e2t sin 4t . In Exercise 24, the general solution was
y(t) = C1 eβt 0
5 cos 2t + sin 2t
2 cos 2t + C 2 e βt 0
β cos 2t + 5 sin 2t
2 sin 2t 0
5
2 0
β1
0 1
0
2 + C 3 e βt If y(0) = (β2, 4, β2)T , then
β2
4
β2 = C1 + C2 + C3 1
0
2 . The augmented matrix reduces.
0
5
2 0
β1
0 1
0
2 β2
4
β2 β 1
0
0 0
1
0 0
0
1 1
1
β2 Thus, C1 = 1, C2 = 1, C3 = β2, and the solution is
y(t) = eβt
= e βt 0
0
5 cos 2t + sin 2t + eβt β cos 2t + 5 sin 2t
2 cos 2t
2 sin 2t
β2
4 cos 2t + 6 sin 2t
.
2 cos 2t + 2 sin 2t β 4 + β2eβt 1
0
2 . 634
4.31. Chapter 9. Linear Systems with Constant Coefο¬cients
In Exercise 25, the general solution was
x (t)
y(t) =C1 eβ2t
z(t) 0
0
1 β5 cos t β sin t
2 cos t
3 cos t + sin t
cos t β 5 sin t
2 sin t
.
β cos t + 3 sin t + C2 eβ2t + C 3 e β2 t
If y(0) = (β1, 1, 1)T , then 4.32. β1
0
β5
1
1
0 + C2 2 + C3 0 .
= C1
1
1
3
β1
The augmented matrix reduces.
100 1
0 β5 1 β1
02
0
1
β 0 1 0 1/2
0 0 1 3 /2
1 3 β1 1
Thus, C1 = 1, C2 = 1/2 and C3 = 3/2, giving the particular solution
β cos t β sin t
y(t) = eβ2t cos t + 3 sin t .
1 + 5 sin t
In Exercise 26, the general solution was
1
0
0
y(t) = C1 eβ2t 0 + C2 et 2 cos 2t β sin 2t + C3 et cos 2t + 2 sin 2t
1
cos 2t
sin 2t
If y(0) = (β1, 1, β1)T , then 4.33. β1
1
0
0
1
0 + C2 2 + C3 1 .
= C1
β1
1
1
0
The augmented matrix reduces.
1 0 0 β1
1 0 0 β1
021 1
β010 0
001 1
1 1 0 β1
Thus, C1 = β1, C2 = 0, C3 = 1, and the solution is
1
0
y(t) = βeβ2t 0 + et cos 2t + 2 sin 2t
1
sin 2t
βeβ2t
= et (cos 2t + 2 sin 2t) .
βeβ2t + et sin 2t
In matrix form,
x
1
00
x
y.
1
10
y=
z
β10 8 5
z
Using a computer, matrix
1
00
1
10
A=
β10 8 5
has characteristic polynomial
p(Ξ») = (Ξ» β 1)2 (Ξ» β 5). 9.4. Higher Dimensional Systems 635 Thus, A has eigenvalues 1 and 5 with algebraic multiplicities 2 and 1, respectively. For the eigenvalue 1, we
look for a vector in the nullspace (eigenspace) of
0
1
β10 AβI = 0
0
8 0
0
4 1
0
0 β 0
1
0 0
1/2
0 . Note that there is one free variable and the eigenspace is generated by the single eigenvector (0, 1, β2)T .
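The eigenvector deficiency can also be seen by computing ranks numerically: the geometric multiplicity of λ = 1 equals 3 − rank(A − I). A sketch (assuming numpy; an addition, not part of the printed solution):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [-10.0, 8.0, 5.0]])

# Geometric multiplicity of lambda = 1 is dim null(A - I) = 3 - rank(A - I).
geo_mult = 3 - np.linalg.matrix_rank(A - np.eye(3))

# The single eigenvector (0, 1, -2)^T spans that one-dimensional eigenspace.
v = np.array([0.0, 1.0, -2.0])
residual = (A - np.eye(3)) @ v   # should be the zero vector
```

Since the algebraic multiplicity of λ = 1 is 2 but the geometric multiplicity is 1, eigenvectors alone cannot furnish a fundamental set of solutions.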
Therefore, the eigenvalue 1 has geometric multiplicity 1. For the eigenvalue 5, we look for a vector in the
nullspace (eigenspace) of
β4
1
β10 A β 5I = 4.34. 0
β4
8 0
0
0 1
0
0 β 0
1
0 0
0
0 . Note again that there is only one free variable and the eigenspace is generated by the single eigenvector
(0, 0, 1)T . Therefore, the eigenvalue 5 has geometric multiplicity 1. Consequently, there are not enough
independent eigenvectors to form a fundamental solution set.
4.34. Using a computer, the matrix

    A = [2 0 0; −6 2 3; 6 0 −1]

has characteristic polynomial

    p(λ) = −(λ − 2)^2 (λ + 1).

Thus, A has eigenvalues 2 and −1 with algebraic multiplicities 2 and 1, respectively. For λ1 = 2,

    A − 2I = [0 0 0; −6 0 3; 6 0 −3] → [1 0 −1/2; 0 0 0; 0 0 0].

Note that there are two free variables, so λ1 = 2 has geometric multiplicity 2. Thus, there are two independent eigenvectors and independent exponential solutions

    y1(t) = e^{2t} (1, 0, 2)^T   and   y2(t) = e^{2t} (0, 1, 0)^T.

For λ2 = −1,

    A + I = [3 0 0; −6 3 3; 6 0 0] → [1 0 0; 0 1 1; 0 0 0].

Note that there is one free variable, so λ2 = −1 has geometric multiplicity 1. Thus, the nullspace is generated by the single eigenvector v = (0, −1, 1)^T, and

    y3(t) = e^{−t} (0, −1, 1)^T

is another independent solution. Thus, the general solution is

    y(t) = C1 e^{2t} (1, 0, 2)^T + C2 e^{2t} (0, 1, 0)^T + C3 e^{−t} (0, −1, 1)^T.
4.35. In matrix form,

    (x, y, z)' = [4 0 0; −6 −2 0; 7 1 −2] (x, y, z)^T.

636 Chapter 9. Linear Systems with Constant Coefficients

Using a computer, the matrix

    A = [4 0 0; −6 −2 0; 7 1 −2]

has characteristic polynomial

    p(λ) = (λ − 4)(λ + 2)^2.

Thus, A has eigenvalues 4 and −2 with algebraic multiplicities 1 and 2, respectively. For the eigenvalue 4, we look for a vector in the nullspace (eigenspace) of

    A − 4I = [0 0 0; −6 −6 0; 7 1 −6] → [1 0 −1; 0 1 1; 0 0 0].

Note that there is one free variable and the eigenspace is generated by the single eigenvector (1, −1, 1)^T. Therefore, the eigenvalue 4 has geometric multiplicity 1. For the eigenvalue −2, we look for a vector in the nullspace (eigenspace) of

    A + 2I = [6 0 0; −6 0 0; 7 1 0] → [1 0 0; 0 1 0; 0 0 0].

Note again that there is only one free variable and the eigenspace is generated by the single eigenvector (0, 0, 1)^T. Therefore, the eigenvalue −2 has geometric multiplicity 1. Consequently, there are not enough independent eigenvectors to form a fundamental solution set.
4.36. Using a computer, the matrix

    A = [6 −5 10; −1 2 −2; −1 1 −1]

has characteristic polynomial

    p(λ) = −(λ − 5)(λ − 1)^2.

Thus, A has eigenvalues 5 and 1, with algebraic multiplicities 1 and 2, respectively. For λ1 = 5,

    A − 5I = [1 −5 10; −1 −3 −2; −1 1 −6] → [1 0 5; 0 1 −1; 0 0 0].

Note that there is one free variable, so λ1 = 5 has geometric multiplicity 1. The eigenvector v = (−5, 1, 1)^T gives the exponential solution

    y1(t) = e^{5t} (−5, 1, 1)^T.

For λ2 = 1,

    A − I = [5 −5 10; −1 1 −2; −1 1 −2] → [1 −1 2; 0 0 0; 0 0 0].

Note that there are two free variables, so λ2 = 1 has geometric multiplicity 2. The eigenvectors (1, 1, 0)^T and (−2, 0, 1)^T produce two independent exponential solutions

    y2(t) = e^t (1, 1, 0)^T   and   y3(t) = e^t (−2, 0, 1)^T.

Thus, the general solution is

    y(t) = C1 e^{5t} (−5, 1, 1)^T + C2 e^t (1, 1, 0)^T + C3 e^t (−2, 0, 1)^T.
4.37. Using a computer, the matrix

    A = [−6 2 −3; −1 −1 −1; 4 −2 1]

has eigenvalue-eigenvector pairs

    −2 ↔ (1, −1, −2)^T,   −3 ↔ (−1, 0, 1)^T,   and   −1 ↔ (−1, −1, 1)^T.

Therefore,

    y1(t) = e^{−2t} (1, −1, −2)^T,   y2(t) = e^{−3t} (−1, 0, 1)^T,   and   y3(t) = e^{−t} (−1, −1, 1)^T

form a fundamental set of solutions.
4.38. Using a computer we find that the eigenvalues of

    A = [−7 −4 2; 42 18 −11; 38 18 −10]

are −2 and the complex conjugate pair 1 ± 2i. For the eigenvalue −2 we look for a basis of the eigenspace, which is the nullspace of A − λI = A + 2I. Using a computer we find that the eigenspace has dimension 1 and is spanned by the vector v = (0, 2, 1)^T. Hence we have the solution

    y1(t) = e^{−2t} (0, 2, 1)^T.

Next we look at the eigenspace for the eigenvalue 1 + 2i. The computer tells us that it has dimension 1 and is spanned by w = (−1 + i, 3 + 3i, 4)^T. Therefore we have the complex-valued solution

    z(t) = e^{(1+2i)t} (−1 + i, 3 + 3i, 4)^T.

Expanding using Euler's formula, we get

    z(t) = e^t [cos 2t + i sin 2t] [(−1, 3, 4)^T + i (1, 3, 0)^T]
         = e^t [cos 2t · (−1, 3, 4)^T − sin 2t · (1, 3, 0)^T]
           + i e^t [cos 2t · (1, 3, 0)^T + sin 2t · (−1, 3, 4)^T].

Since the real and imaginary parts of z(t) are solutions, we get two real solutions

    y2(t) = Re(z(t)) = e^t (−cos 2t − sin 2t, 3 cos 2t − 3 sin 2t, 4 cos 2t)^T,
    y3(t) = Im(z(t)) = e^t (cos 2t − sin 2t, 3 cos 2t + 3 sin 2t, 4 sin 2t)^T.

The functions y1, y2, and y3 form a fundamental set of solutions.
4.39. Using a computer, the matrix

    A = [8 12 −4; −9 −13 4; −1 −3 0]

has eigenvalue-eigenvector pairs

    −1 ↔ (0, 1, 3)^T,   −2 + 2i ↔ (−2, 2, 1 + i)^T,   and   −2 − 2i ↔ (−2, 2, 1 − i)^T.

Therefore,

    y1(t) = e^{−t} (0, 1, 3)^T

is a solution. Because

    z(t) = e^{(−2+2i)t} (−2, 2, 1 + i)^T
         = e^{−2t} (cos 2t + i sin 2t) [(−2, 2, 1)^T + i (0, 0, 1)^T]
         = e^{−2t} (−2 cos 2t, 2 cos 2t, cos 2t − sin 2t)^T
           + i e^{−2t} (−2 sin 2t, 2 sin 2t, cos 2t + sin 2t)^T,

the set

    y1(t) = e^{−t} (0, 1, 3)^T,
    y2(t) = e^{−2t} (−2 cos 2t, 2 cos 2t, cos 2t − sin 2t)^T,
    y3(t) = e^{−2t} (−2 sin 2t, 2 sin 2t, cos 2t + sin 2t)^T

forms a fundamental set of solutions.
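The complex eigenpair driving the trigonometric solutions above can be spot-checked numerically; a small sketch assuming numpy (the sample time t is arbitrary):

```python
import numpy as np

A = np.array([[8.0, 12.0, -4.0],
              [-9.0, -13.0, 4.0],
              [-1.0, -3.0, 0.0]])

# Complex eigenpair claimed in the solution: lambda = -2 + 2i, w = (-2, 2, 1+i).
lam = -2 + 2j
w = np.array([-2.0, 2.0, 1.0 + 1.0j])
assert np.allclose(A @ w, lam * w)

# z(t) = e^{lam t} w satisfies z' = lam z = A z; check at a sample time.
t = 0.7
z = np.exp(lam * t) * w
assert np.allclose(A @ z, lam * z)
print("complex eigenpair verified")
```

Because A is real, the real and imaginary parts of z(t) are then automatically real solutions, which is exactly how y2 and y3 were produced.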
4.40. Using a computer, the matrix

    A = [−1 −2 4; −1 0 −4; −1 2 −6]

has eigenvalue-eigenvector pairs

    −2 ↔ (2, 1, 0)^T,   −2 ↔ (−4, 0, 1)^T,   and   −3 ↔ (−1, 1, 1)^T.

Note that the eigenvalue −2 has algebraic multiplicity 2 and geometric multiplicity 2, so we have enough eigenvectors to form a fundamental set of solutions:

    y1(t) = e^{−2t} (2, 1, 0)^T,   y2(t) = e^{−2t} (−4, 0, 1)^T,   y3(t) = e^{−3t} (−1, 1, 1)^T.
4.41. Using a computer, the matrix

    A = [−18 −18 10; 18 17 −10; 10 10 −7]

has eigenvalue-eigenvector pairs

    −2 ↔ (1, −2, −2)^T,   −3 + 2i ↔ (−6 + 2i, 8 − i, 5)^T,   and   −3 − 2i ↔ (−6 − 2i, 8 + i, 5)^T.

Therefore,

    y1(t) = e^{−2t} (1, −2, −2)^T

is a solution. Because

    z(t) = e^{(−3+2i)t} (−6 + 2i, 8 − i, 5)^T
         = e^{−3t} (cos 2t + i sin 2t) [(−6, 8, 5)^T + i (2, −1, 0)^T]
         = e^{−3t} (−6 cos 2t − 2 sin 2t, 8 cos 2t + sin 2t, 5 cos 2t)^T
           + i e^{−3t} (2 cos 2t − 6 sin 2t, −cos 2t + 8 sin 2t, 5 sin 2t)^T,

the set

    y1(t) = e^{−2t} (1, −2, −2)^T,
    y2(t) = e^{−3t} (−6 cos 2t − 2 sin 2t, 8 cos 2t + sin 2t, 5 cos 2t)^T,
    y3(t) = e^{−3t} (2 cos 2t − 6 sin 2t, −cos 2t + 8 sin 2t, 5 sin 2t)^T

forms a fundamental set of solutions.
4.42. The matrix

    A = [−6 6 8; −12 16 24; 8 −12 −18]

has eigenvalue-eigenvector pairs

    −2 ↔ (2, 0, 1)^T,   −2 ↔ (3, 2, 0)^T,   and   −4 ↔ (−1, −3, 2)^T.

Note that the eigenvalue −2 has algebraic multiplicity 2 and geometric multiplicity 2, so we have enough eigenvectors to form a fundamental set of solutions:

    y1(t) = e^{−2t} (2, 0, 1)^T,   y2(t) = e^{−2t} (3, 2, 0)^T,   y3(t) = e^{−4t} (−1, −3, 2)^T.
4.43. The matrix

    A = [1 4 1 −5; −6 −10 −2 10; 3 4 −1 −5; −3 −4 −1 3]

has characteristic polynomial

    p(λ) = (λ + 1)(λ + 2)^3,

indicating eigenvalues −1 and −2, with algebraic multiplicities 1 and 3, respectively. The matrix

    A + I = [2 4 1 −5; −6 −9 −2 10; 3 4 0 −5; −3 −4 −1 4] → [1 0 0 1; 0 1 0 −2; 0 0 1 1; 0 0 0 0]

has one free variable, generating a single eigenvector and the solution

    y1(t) = e^{−t} (−1, 2, −1, 1)^T.

The matrix

    A + 2I = [3 4 1 −5; −6 −8 −2 10; 3 4 1 −5; −3 −4 −1 5] → [1 4/3 1/3 −5/3; 0 0 0 0; 0 0 0 0; 0 0 0 0]

has three free variables. A basis for the nullspace (eigenspace) of A + 2I contains the vectors

    (4, −3, 0, 0)^T,   (1, 0, −3, 0)^T,   and   (5, 0, 0, 3)^T,

so, together with y1(t) = e^{−t} (−1, 2, −1, 1)^T,

    y2(t) = e^{−2t} (4, −3, 0, 0)^T,   y3(t) = e^{−2t} (1, 0, −3, 0)^T,   y4(t) = e^{−2t} (5, 0, 0, 3)^T

complete a fundamental set of solutions.
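The free-variable counts above (one for λ = −1, three for λ = −2) can be confirmed by computing ranks; a minimal sketch assuming numpy, with the matrix entries as printed above:

```python
import numpy as np

# Matrix from this exercise (entries as printed above).
A = np.array([[1.0, 4.0, 1.0, -5.0],
              [-6.0, -10.0, -2.0, 10.0],
              [3.0, 4.0, -1.0, -5.0],
              [-3.0, -4.0, -1.0, 3.0]])
I = np.eye(4)

# Number of free variables = geometric multiplicity = 4 - rank.
print(4 - np.linalg.matrix_rank(A + I))      # one free variable at lambda = -1
print(4 - np.linalg.matrix_rank(A + 2 * I))  # three free variables at lambda = -2
```

One plus three independent eigenvectors gives the four solutions needed for a fundamental set.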
4.44. The matrix

    A = [6 −6 −6 −8; 8 −8 −6 −8; −1 7 −10 −9; −8 6 6 6]

has eigenvalue-eigenvector pairs

    −2 ↔ (11, 8, 0, 5)^T,   −2 ↔ (9, 7, 5, 0)^T,   and   −1 + 3i ↔ (3 + i, 3 + i, 5, −3 − i)^T.

Note that the eigenvalue −2 has algebraic multiplicity 2 and geometric multiplicity 2, so there are sufficient independent eigenvectors to form the independent solutions

    y1(t) = e^{−2t} (11, 8, 0, 5)^T   and   y2(t) = e^{−2t} (9, 7, 5, 0)^T.

Further, the eigenvalue-eigenvector pair −1 + 3i ↔ (3 + i, 3 + i, 5, −3 − i)^T allows us to form the complex solution

    z(t) = e^{(−1+3i)t} (3 + i, 3 + i, 5, −3 − i)^T
         = e^{−t} (cos 3t + i sin 3t) [(3, 3, 5, −3)^T + i (1, 1, 0, −1)^T]
         = e^{−t} [cos 3t · (3, 3, 5, −3)^T − sin 3t · (1, 1, 0, −1)^T]
           + i e^{−t} [cos 3t · (1, 1, 0, −1)^T + sin 3t · (3, 3, 5, −3)^T].

The real and imaginary parts provide two additional independent solutions

    y3(t) = e^{−t} (3 cos 3t − sin 3t, 3 cos 3t − sin 3t, 5 cos 3t, −3 cos 3t + sin 3t)^T,
    y4(t) = e^{−t} (cos 3t + 3 sin 3t, cos 3t + 3 sin 3t, 5 sin 3t, −cos 3t − 3 sin 3t)^T.

Thus, y1(t), y2(t), y3(t), and y4(t) form a fundamental set of solutions.
4.45. In Exercise 37, the fundamental set of solutions found there leads to the general solution

    y(t) = C1 e^{−2t} (1, −1, −2)^T + C2 e^{−3t} (−1, 0, 1)^T + C3 e^{−t} (−1, −1, 1)^T.

The initial condition y(0) = (−6, 2, 9)^T provides

    (−6, 2, 9)^T = C1 (1, −1, −2)^T + C2 (−1, 0, 1)^T + C3 (−1, −1, 1)^T.

The augmented matrix reduces:

    [ 1 −1 −1 | −6]      [1 0 0 | −3]
    [−1  0 −1 |  2]  →  [0 1 0 |  2]
    [−2  1  1 |  9]      [0 0 1 |  1]

Thus, C1 = −3, C2 = 2, and C3 = 1, leading to

    y(t) = (−3e^{−2t} − 2e^{−3t} − e^{−t}, 3e^{−2t} − e^{−t}, 6e^{−2t} + 2e^{−3t} + e^{−t})^T.
4.46. In Exercise 38 we found the fundamental set of solutions

    y1(t) = Re(z(t)) = e^t (−cos 2t − sin 2t, 3 cos 2t − 3 sin 2t, 4 cos 2t)^T,
    y2(t) = Im(z(t)) = e^t (cos 2t − sin 2t, 3 cos 2t + 3 sin 2t, 4 sin 2t)^T,
    y3(t) = e^{−2t} (0, 2, 1)^T.

Our solution has the form y(t) = C1 y1(t) + C2 y2(t) + C3 y3(t). At t = 0 we have

    (−2, 2, 5)^T = y(0) = C1 (−1, 3, 4)^T + C2 (1, 3, 0)^T + C3 (0, 2, 1)^T
                 = [−1 1 0; 3 3 2; 4 0 1] (C1, C2, C3)^T.

We can allow our computer to solve this system of equations, obtaining C1 = 1, C2 = −1, and C3 = 1. Thus our solution is

    y(t) = y1(t) − y2(t) + y3(t)
         = (−2e^t cos 2t, −6e^t sin 2t + 2e^{−2t}, 4e^t cos 2t − 4e^t sin 2t + e^{−2t})^T.
4.47. In Exercise 39, the fundamental set of solutions found there leads to the general solution

    y(t) = C1 e^{−t} (0, 1, 3)^T + C2 e^{−2t} (−2 cos 2t, 2 cos 2t, cos 2t − sin 2t)^T
         + C3 e^{−2t} (−2 sin 2t, 2 sin 2t, cos 2t + sin 2t)^T.

The initial condition y(0) = (0, 8, 5)^T provides

    (0, 8, 5)^T = C1 (0, 1, 3)^T + C2 (−2, 2, 1)^T + C3 (0, 0, 1)^T.

The augmented matrix reduces:

    [0 −2 0 | 0]      [1 0 0 |   8]
    [1  2 0 | 8]  →  [0 1 0 |   0]
    [3  1 1 | 5]      [0 0 1 | −19]

Thus, C1 = 8, C2 = 0, and C3 = −19, leading to

    y(t) = (38e^{−2t} sin 2t, 8e^{−t} − 38e^{−2t} sin 2t,
            24e^{−t} − 19e^{−2t} cos 2t − 19e^{−2t} sin 2t)^T.
βt 24eβt . In Exercise 40, the fundamental set of solutions found there lead to the general solution
y(t) = C1 eβ2t 2
1
0 + C2 eβ2t β4
0
1 + C3 eβ3t β1
1
1 The initial condition y(0) = (1, 0, 0)T provides
1
0
0 = C1 2
1
0 + C2 β4
0
1 + C3 β1
1
1 . 0
1
0 β1
β1
1 . The augmented matrix reduces
2
1
0 β4
0
1 β1
1
1 1
0
0 β 1
0
0 Thus, C1 = β1, C2 = β1, and C3 = 1, leading to
y(t) = 2 e β 2 t β e β 3t
βeβ2t + eβ3t
βeβ2t + eβ3t 0
0
1 . . 9.4. Higher Dimensional Systems
4.49. 643 In Exercise 41, the fundamental set of solutions found there lead to the general solution
y(t) =C1 eβ2t 1
β2
β2 β6 cos 2t β 2 sin 2t
8 cos 2t + sin 2t
5 cos 2t
2 cos 2t β 6 sin 2t
β cos 2t + 8 sin 2t .
5 sin 2t + C 2 e β 3t + C3 eβ3t The initial condition y(0) = (β1, 7, 3)T provides
β1
7
3 1
β2
β2 = C1 + C2 β6
8
5 2
β1
0 + C3 . The augmented matrix reduces.
1
β2
β2 β6
8
5 β1
7
3 2
β1
0 1
0
0 β 0
1
0 0
0
1 7
17/5
31/5 Thus, C1 = 7, C2 = 17/5 and C3 = 31/5, leading to
y(t) =
4.50. 7eβ2t β eβ3t (8 cos 2t β 44 sin 2t)
β14eβ2t + eβ3t (21 cos 2t + 53 sin 2t)
β14eβ2t + eβ3t (17 cos 2t + 31 sin 2t) . In Exercise 42, the fundamental set of solutions found there lead to the general solution
y(t) = C1 eβ2t 2
0
1 3
2
0 + C3 eβ4t + C2 3
2
0 + C3 β 1
0
0 0
1
0 + C2 eβ2t β1
β3
2 . The initial condition y(0) = (β1, β4, 1)T provides
β1
β4
1 = C1 2
0
1 β1
β3
2 . The augmented matrix reduces.
2
0
1 3
2
0 β1
β3
2 β1
β4
1 0
0
1 13
β11
β6 Thus, C1 = 13, C2 = β11, and C3 = β6, leading to
y(t) =
4.51. β7eβ2t + 6eβ4t
β22eβ2t + 18eβ4t
13eβ2t β 12eβ4t . In Exercise 43, the fundamental set of solutions found there lead to the general solution β1 4
1
5 2 β3 0
0
+ C2 eβ2t + C3 e β 2 t + C4 eβ2t .
y(t) = C1 eβt β1 0
β3 0
1
0
0
3 The initial condition y(0) = (β1, 5, 2, 4)T provides β1 β1 4
1
5
5
2 β3 0
0 2 = C1 β1 + C2 0 + C3 β3 + C4 0 .
4
1
0
0
3 644 Chapter 9. Linear Systems with Constant Coefο¬cients
The augmented matrix reduces. β1 4
2 β1
1 β3
0
0 1
0
β3
0 5
0
0
3 1
β1 5
0
β
0
2
0
4 0
1
0
0 0
0
1
0 0
0
0
1 1
β1 β1 1 Thus, C1 = 1, C2 = β1, C3 = β1 and C4 = 1, leading to βeβt 2 e β t + 3e β 2 t y(t) = βt
.
βe + 3eβ2t e β t + 3e β 2 t 4.52. In Exercise 44, the fundamental set of solutions found there lead to the general solution 11 9 3 cos 3t β sin 3t 8
7 3 cos 3t β sin 3t y(t) = C1 eβ2t + C2 eβ2t + C3 eβt 5
5 cos 3t
0
0
β3 cos 3t + sin 3t
5 cos 3t + 3 sin 3t cos 3t + 3 sin 3t + C 4 e βt .
5 sin 3t
β cos 3t β 3 sin 3t The initial condition y(0) = (β2, β1, 6, β5) provides β2 11 9 3
1 β1 8
7
3
1 6 = C1 0 + C2 5 + C3 5 + C4 0 .
β5
5
0
β3
β1 The augmented matrix reduces 11 9 3
8 7 3
0 5 5
5 0 β3 1
β2 β1 0
β
6
0
β5
0 1
1
0
β1 0
1
0
0 0
0
1
0 0
0
0
1 β1 1
1/5 β3/5 Thus, C1 = β1, C2 = 1, C3 = 1/5, and C3 = β3/5, leading to 11 9 3 cos 3t β sin 3t 8
7 1 3 cos 3t β sin 3t y(t) = βeβ2t + eβ2t + eβt 5 cos 3t
0
5
5
β3 cos 3t + sin 3t
5
0 cos 3t + 3 sin 3t 3 cos 3t + 3 sin 3t β e βt 5 sin 3t
5
β cos 3t β 3 sin 3t β2eβ2t β 2eβt sin 3t
βt
β2 t
βe β 2e sin 3t y(t) = β2t
.
5e + eβt cos 3t β 3eβt sin 3t β2 t
βt
β5e + 2e sin 3t Section 5. The Exponential of a Matrix
5.1. It is easily checked that
A2 = 0
0 0
.
0 9.5. The Exponential of a Matrix 645 Therefore, the series
12
A + Β·Β·Β·
2! eA = I + A +
truncates and 5.2. 0
β2
+
1
1 1
0 eA = I + A = β4
β1
=
2
1 β4
.
3 It is easily checked that
1
β1 A2 = 1
β1 1
β1 1
0
=
β1
0 0
.
0 Therefore, the series
12
A + Β·Β·Β·
2! eA = I + A +
truncates and 1
0 eA = I + A =
5.3. 0
1
+
1
β1 1
2
=
β1
β1 1
.
0 It is easily checked that
0
0
0 A2 = 0
0
0 0
0
0 . Therefore, the series
12
A + Β·Β·Β·
2! A=I +A+
truncates and
eA = I + A =
5.4. 1
0
0 0
1
0 0
0
1 If
A= +
β2
β1
1 1
1
0
1
1
β1 β1
β1
0 0
0
0 2
1
0 = β1
0
0 0
0
1 . β3
β1
1 use a computer to check that
A3 = AA2 = β2
β1
1 1
1
β1 β3
β1
1 0
0
0 2
1
β1 2
1
β1 = 0
0
0 0
0
0 2
1
β1 0
0
0 0
0
0 Therefore, the series
eA = I + A + 12
A + Β·Β·Β·
2! truncates and
1
e A = I + A + A2
2
β2 1
100
= 0 1 0 + β1 1
1 β1
001
β1
2
β2
= β1 5/2 β1/2 .
1 β3/2 3/2
5.5. β3
β1
1 + 1
2 (a) If A2 = Ξ±A, Ξ± = 0, then
A3 = AA2 = A(Ξ±A) = Ξ±A2 = Ξ±(Ξ±A) = Ξ± 2 A. 2
1
β1 . 646 Chapter 9. Linear Systems with Constant Coefο¬cients
Similarly,
A4 = AA3 = A(Ξ± 2 A) = Ξ± 2 A2 = Ξ± 2 (Ξ±A) = Ξ± 3 A.
Proceeding inductively, Ak = Ξ± kβ1 A. Now, t2 2 t3 3
A + A + Β·Β·Β·
2!
3!
t2
t3
+ tA + (Ξ±A) + (Ξ± 2 A) + Β· Β· Β·
2!
3!
Ξ±t 2
Ξ±2 t 3
+ t+
+
+ Β·Β·Β· A
2!
3!
Ξ±t + Ξ± 2 t 2 /2! + Ξ± 3 t 3 /3! + Β· Β· Β·
+
A
Ξ±
(1 + Ξ±t + Ξ± 2 t 2 /2! + Ξ± 3 t 3 /3! + Β· Β· Β· ) β 1
+
A
Ξ±
eΞ±t β 1
+
A.
Ξ± etA = I + tA +
=I
=I
=I
=I
=I
(b) One can easily show that 1112
333
A= 1 1 1
= 3 3 3 = 3A.
111
333
Thus, we can apply the formula developed in part (a). With Ξ± = 3,
2 e3t β 1
A
3
100
111
e3t β 1
010+
111
3
001
111
(e3t + 2)/3 (e3t β 1)/3 (e3t β 1)/3
(e3t β 1)/3 (e3t + 2/3 (e3t β 1)/3
(e3t β 1)/3 (e3t β 1)/3 (e3t + 2/3 etA = I +
=
=
5.6. First, if
A=
note that
A2 = β1
0 0
,
β1 A3 = . β1
,
0 0
1 0
β1 1
,
0 1
0 A4 = 0
,
1 after which A5 = A and the sequence repeats with period 4. Thus,
t2 2 t3 3 t4 4
A + A + A + Β·Β·Β·
2!
3!
4!
t 2 β1 0
t3 0
0 β1
10
+
+t
+
=
10
01
2! 0 β1
3! β1
1 β t 2 /2! + t 4 /4! Β· Β· Β·
βt + t 3 /3! β Β· Β· Β·
=
t β t 3 /3! + Β· Β· Β·
1 β t 2 /2! + t 4 /4! β Β· Β· Β·
cos t β sin t
.
=
sin t
cos t eAt = I + At + 5.7. Note that
A= a
b βb
a = a
0 0
0
+
b
a βb
0 t4 1
1
+
0
4! 0 = aI + b 0
1 β1
.
0 0
+ Β·Β·Β·
1 9.5. The Exponential of a Matrix 647 Thus, by the result shown in Exercise 6,
etA = e atI +bt 0 β1
10
bt 0 β1 = eat I e 1 0
cos bt β sin bt
.
= eat
sin bt
cos bt
5.8. We can write
A= a
0 b
a where
B= =a
0
0 1
0 0
0
+b
1
0 1
0 and B2 = 1
= aI + bB,
0
0
0 0
.
0 Note that B commutes with I , so
etA = et (aI +bB)
= eat I ebtB
= eat I + btB + b2 t 2 2 b3 t 3 3
B+
B + Β·Β·Β·
2!
3! = eat (I + btB)
10
0
+
= eat
01
0
at 1 bt
=e
.
01
5.9. (a) On the one hand
AB = bt
0 β4
0 0
,
0 but 00
.
0 β4
(b) Note that if t = 1, the result from Exercise 7 becomes
BA = e a βb
ba = ea cos b
sin b β sin b
.
cos b Thus,
e A+B = e
=e
= 0 β2 + 0 0
00
20
0 β2
20 cos 2
sin 2 β sin 2
.
cos 2 (c) Both A2 and B 2 equal the zero matrix, so the series expansions for eA and eB truncate.
1 β2
0 β2
10
=
+
eA = I + A =
01
00
01
10
00
10
eB = I + B =
+
=
01
20
21
Thus,
1 β2
10
β3 β2
=
,
eA eB =
01
21
2
1
which is not the same as eA+B calculated in part (b). The problem arises because AB = BA, as was
shown in part (a). 648
5.10. Chapter 9. Linear Systems with Constant Coefο¬cients
If A = P DP β1 , then
etA = I + tA + t2 2 t3 3
A + A + Β·Β·Β· .
2!
3! However, note that
A2 = (P DP β1 )2 = P DP β1 P DP β1 = P D 2 P β1 .
In a similar manner,
Ak = P D k P β1 ,
for k = 3, 4, 5, . . . . Thus,
etA = I + P (tD)P β1 + P (t 2 D 2 /2!)P β1 + P (t 3 D 3 /3!)P β1 + Β· Β· Β·
= P I + tD + t 2 D 2 /2! + t 3 D 3 /3! + Β· Β· Β· P β1
= P etD P β1 .
5.11. If
A= 2
0 6
,
β1 then the characteristic polynomial is p(Ξ») = (Ξ» β 2)(Ξ» + 1), giving eigenvalues Ξ»1 = 2 and Ξ»2 = β1. Set
D= 2
0 0
.
β1 The nullspace (eigenspace) of
0
0 A β 2I = 6
β3 is generated by the single eigenvector v1 = (1, 0)T . The nullspace of
3
0 A+I = 6
0 is generated by the single eigenvector v1 = (2, β1)T . Set
P= 1
0 2
.
β1 It is easily checked that
P β1 =
Now, 1
0 and A = P DP β1 . etA = P etD P β1
1
0
1
=
0
1
=
0
e 2t
=
0
= 5.12. 2
β1 2t 0 12
2
e 0 βt
0 β1
β1
e 2t
2
12
0
β1
0 β1
0 e βt
e 2t 2 e 2t
2
β1
0 βeβt
2t
2 e β 2 e βt
.
e βt Matrix
A= β2
β3 0
β3 is lower triangular, so β2 and β3 (diagonal elements) are eigenvalues. For Ξ» = β2,
A + 2I 0
β3 0
,
β1 9.5. The Exponential of a Matrix 649 so v = (1, β3)T is its eigenvector. For Ξ» = β3,
1
β3 A + 3I = 0
,
0 so v = (0, 1)T is its eigenvector. Set
P= 1
β3 0
1 and D= β2
0 0
.
β3 It is easily checked that
P β1 = 1
3 0
1 and A = P DP β1 . Thus,
etA = P etD P β1
β2t
0
10
10
β3t
e0
β3 1
31
10
e β2 t
10
0
=
β3 1
31
0
e β 3t
β2 t
10
e
0
=
β3 1
3eβ3t eβ3t
β2 t
0
e
=
.
β3eβ2t + 3eβ3t eβ3t = 5.13. Matrix
A= β2
β1 1
,
0 has characteristic polynomial p(Ξ») = (Ξ» + 1)2 and repeated eigenvalue Ξ» = β1. We can write
etA = et (βI +(A+I ))
= eβtI et (A+I )
= eβt I + t (A + I ) + t2
(A + I )2 + Β· Β· Β·
2! . Matrix A must satisfy its characteristic polynomial, so (A + I )2 = 0 and (A + I )k = 0 for k β₯ 2. Thus, the
series truncates.
etA = eβt (I + t (A + I ))
β1 1
10
+t
= e βt
β1 1
01
t
βt 1 β t
=e
βt
1+t
5.14. Matrix
A= β1
1 0
β1 has characteristic polynomial p(Ξ») = (Ξ» + 1)2 and repeated eigenvalue Ξ» = β1. We can write
etA = et (βI +(A+I ))
= eβtI et (A+I )
= eβt I + t (A + I ) + t2
(A + I )2 + Β· Β· Β· .
2! 650 Chapter 9. Linear Systems with Constant Coefο¬cients
Matrix A must satisfy its characteristic polynomial, so (A + I )2 = 0 and (A + I )k = 0 for k β₯ 2. Thus, the
series truncates.
etA = eβt (I + t (A + I ))
10
00
= e βt
+t
01
10
10
= e βt
.
t1 5.15. Matrix
A= 0
β1 1
,
2 has characteristic polynomial p(Ξ») = (Ξ» β 1)2 and repeated eigenvalue Ξ» = 1. We can write
etA = et (I +(AβI ))
= etI et (AβI )
t2
(A β I )2 + Β· Β· Β·
2! = et I + t (A β I ) + Matrix A must satisfy its characteristic polynomial, so (A β I )2 = 0 and (A β I )k = 0 for k β₯ 2. Thus, the
series truncates.
etA = et (I + t (A β I ))
10
β1 1
+t
= et
01
β1 1
t
t 1βt
=e
βt
1+t
5.16. Matrix
A= β3
4 β1
1 has characteristic polynomial p(Ξ») = (Ξ» + 1)2 and repeated eigenvalue Ξ» = β1. We can write
etA = et (βI +(A+I ))
= eβtI et (A+I )
= eβt I + t (A + I ) + t2
(A + I )2 + Β· Β· Β· .
2! Matrix A must satisfy its characteristic polynomial, so (A + I )2 = 0 and (A + I )k = 0 for k β₯ 2. Thus the
series truncates.
etA = eβt (I + t (A + I ))
β2 β1
10
+t
= e βt
4
2
01
1 β 2t
βt
= e βt
.
4t
1 + 2t
5.17. Using a computer, matrix
A= β1
β1
β2 0
1
4 0
β1
β3 has characteristic polynomial p(Ξ») = β(Ξ» + 1)3 and repeated eigenvalue Ξ» = β1. We can write
etA = et (βI +(A+I ))
= eβtI et (A+I )
= eβt I + t (A + I ) + t2
(A + I )2 + Β· Β· Β·
2! 9.5. The Exponential of a Matrix 651 Matrix A must satisfy its characteristic polynomial, so (A + I )3 = 0 and (A + I )k = 0 for k β₯ 3. But,
0
β1
β2 A+I = 0
2
4 0
β1
β2 0
0
0 (A + I )2 = and 0
0
0 0
0
0 , so (A + I )k = 0 for k β₯ 2 and the series will truncate earlier.
etA = eβt (I + t (A + I ))
100
0
0 1 0 + t β1
= e βt
001
β2
1
0
0
βt
= eβt βt 1 + 2t
β2t
4t
1 β 2t
5.18. 0
2
4 0
β1
β2 Using a computer we ο¬nd that A has eigenvalue β1 with algebraic multiplicity 3. We also ο¬nd that
A+I = 0
β1
β1 β1
1
2 0
β1
β1 , β1
0
1 1
0
β1 (A + I )2 = 1
0
β1 , and
(A + I )3 =
Thus 0
0
0 0
0
0 0
0
0 . etA = eΞ»t et (AβΞ»I )
= eβt et (A+I )
t2
(A + I )2
2
0
0 β1 0
t2
0 + t β1 1 β1 +
2
1
β1 2 β1
2
2
βt β t /2
t /2
1+t
βt
.
2t + t 2 /2 1 β t β t 2 /2 = eβt I + t (A + I ) +
10
01
00
1 + t 2 /2
βt
βt β t 2 /2 = e βt
=
5.19. 1
0
β1 β1
0
1 1
0
β1 Using a computer, matrix
A= β2
0
0 β1
0
β4 0
1
β4 has characteristic polynomial p(Ξ») = β(Ξ» + 2)3 and repeated eigenvalue Ξ» = β2. We can write
etA = et (β2I +(A+2I ))
= eβ2tI et (A+2I )
= eβ2t I + t (A + 2I ) + t2
(A + 2I )2 + Β· Β· Β·
2! Matrix A must satisfy its characteristic polynomial, so (A + 2I )3 = 0, but
A + 2I = 0
0
0 β1
2
β4 0
1
β2 and (A + 2I )2 = 0
0
0 β2
0
0 β1
0
0 , 652 Chapter 9. Linear Systems with Constant Coefο¬cients
so (A + I )k = 0 for k β₯ 3 and the series truncates at this point.
t2
(A + 2I )2
2!
100
0 β1 0
0 1 0 +t 0 2
1
001
0 β4 β2
1 βt β t 2 βt 2 /2
0 1 + 2t
t
0
β4t
1 β 2t etA = eβ2t I + t (A + 2I ) +
= e β2 t
= e β2 t
5.20. Using a computer, matrix β2
0
β1 A= 0
β2
1 + t2
2 0
0
0 β2
0
0 β1
0
0 0
0
β2 has characteristic polynomial p(Ξ») = β(Ξ» + 2)3 and repeated eigenvalue β2. We can write
etA = et (β2I +(A+2I ))
= eβ2tI et (A+2I )
= eβ2t I + t (A + 2I ) + t2
(A + 2I )2 + Β· Β· Β·
2! Matrix A must satisfy its characteristic polynomial, so (A + 2I )3 = 0, but
A + 2I = 00
00
β1 1 0
0
0 and 0
0
0 (A + 2I )2 = 0
0
0 0
0
0 , so (A + 2I )k = 0 for k β₯ 2 and the series truncates at this point.
etA = eβ2t (I + t (A + 2I ))
100
0 1 0 +t
= e β2 t
001
1 00
= e β2 t 0 1 0 .
βt t 1
5.21. Using a computer, matrix 1
0
A=
0
0 β1
1
0
β1 2
0
1
2 0
0
β1 0
0
1 0
0
0 0
0
0
1 has characteristic polynomial p(Ξ») = (Ξ» β 1)4 and repeated eigenvalue Ξ» = 1. We can write
etA = et (I +(AβI ))
= etI et (AβI )
= et I + t (A β I ) + t2
(A β I )2 + Β· Β· Β·
2! Matrix A must satisfy its characteristic polynomial, so (A β I )4 = 0, but 0 β1 2 0 0 0
0 0 0 0
0 0
and (A β I )2 = AβI =
0 0 0 0
00
0 β1 2 0
00 0
0
0
0 0
0
0
0 9.5. The Exponential of a Matrix 653 so (A β I )k = 0 for k β₯ 2 and the series truncates at this point.
etA = et (I + t (A β I )) 1 0 0 0 0 0 0 1 0 0 +t
= et 0
0 0 1 0
0
0001 1 βt 2t 0 0 0
0 1
= et 00
1 0
0 βt 2 t 1
5.22. Using a computer, matrix β5 β4
A=
4
0 β1
1
β5
β1 0
0
β4
β1 β1
0
0
β1 2
0
0
2 0 0 0 0 4
5
β4 β2 has characteristic polynomial p(Ξ») = (Ξ» + 3)4 and repeated eigenvalue Ξ» = β3. We can write
etA = et (β3I +(A+3I ))
= eβ3tI et (A+3I )
= eβ3t I + t (A + 3I ) + t2
(A + 3I )2 + Β· Β· Β·
2! Matrix A must satisfy its characteristic polynomial, so (A + 3I )4 = 0, but β2 0 β1 4 0
1
5 β4 3
0
and (A + 3I )2 = A + 3I = 4 β4 β2 β4 0
0 β1 β1 1
0 0
0
0
0 0
0
0
0 0
0
,
0
0 So (A + 3I )k = 0 for k β₯ 2 and the series truncates at this point.
etA = eβ3t (I + t (A + 3I )) 1 0 0 0 β2 0 β1
1 β4 3
β3t 0 1 0 0 = e +t
4 β4 β2
0 0 1 0
0 β1 β1
0001 1 β 2t
0
βt
4t 1 + 3t
t
5t β4t
= eβ3t .
4t
β4t
1 β 2t β4t 0
βt
βt
1+t 5.23. Using a computer, matrix 0
1
A=
0
3 4
β5
2
β10 5
β7
3
β13 4 5 β4 1 β2 3
β1 6 has characteristic polynomial p(Ξ») = (Ξ» β 1)4 and repeated eigenvalue Ξ» = 1. We can write
etA = et (I +(AβI ))
= etI et (AβI )
= et I + t (A β I ) + t2
(A β I )2 + Β· Β· Β·
2! 654 Chapter 9. Linear Systems with Constant Coefο¬cients
Matrix A must satisfy its characteristic polynomial, so (A β I )4 = 0, but β1 1
AβI =
0
3 4
β6
2
β10 5
β2 β7
3
,
2
β1 β13 5 and β1 2
(A β I )2 = β1
2 0
0
(A β I )3 = 0
0 0
0
0
0 0
0
0
0 2
β4
2
β4 3
β6
3
β6 β1 2
,
β1 2 0
0
,
0
0 so (A β I )k = 0 for k β₯ 3 and the series truncates at this point.
t2
etA = et I + t (A β I ) + (A β I )2
2 1 0 0 0 β1 0 1 0 0 1
= et +t
0 0 1 0
0
0001
3 2 β 2t β t 2
8t + 2t 2
1 2t + 2t 2
2 β 12t β 4t 2
= et βt 2
4t + 2 t 2
2
6t + 2t 2
β20t β 4t 2 5.24. Using a computer, matrix β1 2
5
β2 2
β7
3 t 2 β4
+
2
β1 2 β1 2
β13 5
2 β4
10t + 3t 2
β4t β t 2 β14t β 6t 2
6t + 2t 2 2 + 4t + 3t 2
β2t β t 2 β26t β 6t 2 2 + 10t + 2t 2 4
β6
2
β10 1 β9
A=
13
2 0
4
β3
β1 3
β6
3
β6 β1 2 β1 2 0
4
β5 0 0
1
β1
0 has characteristic polynomial p(Ξ») = (Ξ» β 1)4 and repeated eigenvalue Ξ» = 1. We can write
etA et (I +(AβI ))
= etI et (AβI )
= et A + t (A β I ) + t2
(A β I )2 + Β· Β· Β· .
2! Matrix A must satisfy its characteristic polynomial, so (A β I )4 = 0. But
0
0 β9 3
AβI =
13 β3
2 β1
and 0
1
β2
0 0
4
,
β5 β1 0 β6
(A β I )2 = β9
7 0
1
(A β I )3 = 1
β1 0
0
0
0 0
0
0
0 0
0
,
0
0 0
2
2
β2 0
1
1
β1 0
3
,
3
β3 9.5. The Exponential of a Matrix 655 so (A β I )k = 0 for k β₯ 4 and the series truncates at this point.
t2
t3
etA = I + t (A β I ) + (A β I )2 + (A β I )3
2!
3! 1 0 0 0 0
0
0
0
1
4 0 1 0 0 β9 3
= et +t
0 0 1 0
13 β3 β2 β5 0001
2 β1 0 β1
0 0 0 0 0 0
0
0
2
3
t β6 2
1
3 t 1 0 0 0 +
+ 1 0 0 0 1
3
2 β9 2
6
7 β2 β1 β3
β1 0 0 0 1
0
0
0
2
3
2
2
2
t + t /2
4t + 3t /2 β9t β 3t + t /6 1 + 3t + t
.
= et 1 β 2t + t 2 /2 β5t + 3t 2 /2 13t β 9t 2 /2 + t 3 /6 β3t + t 2
2
3
2
2
2
2t + 7t /2 β t /6
βt β t
βt /2
1 β t β 3t /2
5.25. If
A= β2
1
3 1
β3
β5 β1
0
0 , then p(Ξ») = det (A β Ξ»I )
β2 β Ξ»
1
β1
1
β3 β Ξ» 0
=
3
β5
βΞ»
Expanding down the third column,
β2 β Ξ»
1
1 β3 β Ξ»
βΞ»
p(Ξ») = β1
1
β3 β Ξ»
3
β5
= β1(4 + 3Ξ») β Ξ»(Ξ»2 + 5Ξ» + 5)
= βΞ»3 β 5Ξ»2 β 8Ξ» β 4.
Zeros must be factors of the constant term, so β1 is a possibility. Dividing by Ξ» + 1 leads to the following
factorization
p(Ξ») = β(Ξ» + 1)(Ξ» + 2)2
and eigenvalues Ξ»1 = β1 and Ξ»2 = β2. Because
β1 1 β1
1 β2 0
A+I =
3 β5 1 β 1
0
0 0
1
0 2
1
0 , the geometric multiplicity of Ξ»1 = β1 is one, and an eigenvector is v1 = (β2, β1, 1)T , leading to the solution
β2
y1 (t) = etA v1 = eβt β1 .
1
Next,
1 0 β1
0 1 β1
β 0 1 β1 ,
A + 2I = 1 β1 0
00 0
3 β5 2
the geometric multiplicity of Ξ»2 = β2 is one, and an eigenvector is v2 (t) = (1, 1, 1)T , leading to the solution
1
y2 (t) = etA v2 = eβ2t 1 .
1 656 Chapter 9. Linear Systems with Constant Coefο¬cients
Notice that β2
β1
1 (A + 2I )2 = β2
β1
1 4
2
β2 β 1
0
0 β2
0
0 1
0
0 has dimension two, equalling the algebraic multiplicity of Ξ»2 . Thus, we can pick a vector in the nullspace of
(A + 2I )2 that is not in the nullspace of A + 2I . Choose v3 = (β1, 0, 1)T , which is not a multiple of v2 ,
making the set {v2 , v3 } independent, and giving a third solution,
y3 (t) = etA v3
= eβ2t [v3 + t (A + 2I )v3 ]
β1
01
0 + t 1 β1
= e β2 t
1
3 β5
β1
β1
0 + t β1
= e β2 t
β1
1
β1 β t
βt
= e β2 t
.
1βt
Because 5.26. β2
det[y1 (0), y2 (0), y3 (0)] = β1
1 β1
0
2 1
1
1 β1
0
1 β1
0 = 1,
1 the solutions are independent for all t and form a fundamental set of solutions.
If
10 1
A = 2 2 β2 ,
00 2
then 1βΞ»
2
0 p(Ξ») = det (A β Ξ»I ) = 0
1
2 β Ξ» β2 .
0
2βΞ» Expanding across the third row,
p(Ξ») = (2 β Ξ») 1βΞ»
2 0
= (2 β Ξ»)2 (1 β Ξ»),
2βΞ» so the eigenvalues are 2 and 1, with algebraic multiplicities 2 and 1, respectively. For Ξ»1 = 1,
AβI = 0
2
0 0
1
0 1
β2
1 β 1/2
0
0 1
0
0 0
1
0 , so the geometric multiplicity of Ξ»1 is 1 and an eigenvector is v1 = (β1, 2, 0)T , providing exponential solution
y1 (t) = etA v1 = et
For Ξ»2 = 2,
A β 2I = β1
2
0 0
0
0 1
β2
0 β β1
2
0
1
0
0 . 0
0
0 β1
0
0 , 9.5. The Exponential of a Matrix 657 so there are two free variables and the geometric multiplicity is 2. Thus, v2 = (0, 1, 0)T and v3 = (1, 0, 1)T
are independent eigenvectors and
y2 (t) = etA v2 = e2t 0
1
0 and y3 (t) = etA v3 = e2t 1
0
1 are independent solutions. Because
det y1 (0), y2 (0), y3 (0) = 5.27. β1
2
0 0
1
0 1
0 = β1,
1 the solutions are independent for all t and form a fundamental set of solutions.
If
0 10
A = β4 4 0 ,
β2 0 1
then p(Ξ») = det (A β Ξ»I )
βΞ»
1
0
0
= β4 4 β Ξ»
β2
0
1βΞ» Expanding down the third column,
βΞ»
1
β4 4 β Ξ»
= β(Ξ» β 1)(Ξ»2 β 4Ξ» + 4) p(Ξ») = (1 β Ξ») = β(Ξ» β 1)(Ξ» β 2)2 ,
providing eigenvalues Ξ»1 = 1 and Ξ»2 = 2, with algebraic multiplicities 1 and 2, respectively. Because
AβI = β1
β4
β2 1
3
0 0
0
0 β 1
0
0 0
1
0 0
0
0 , the geometric multiplicity of Ξ»1 = 1 is one, and an eigenvector is v1 = (0, 0, 1)T , leading to the solution
tA y1 (t) = e v1 = e
Next,
A β 2I = β2
β4
β2 1
2
0 0
0
β1 t β 0
0
1 . 1
0
0 0
1
0 1/2
1
0 , the geometric multiplicity of Ξ»2 = 2 is one, and an eigenvector is v2 (t) = (β1, β2, 2)T , leading to the
solution
β1
y2 (t) = etA v2 = e2t β2 .
2
Next
(A β 2I )2 = 0
0
6 0
0
β2 0
0
1 has dimension two, equaling the algebraic multiplicity of Ξ»2 . Thus, we can pick a vector in the nullspace of
(A β 2I )2 that is not in the nullspace of A β 2I . Choose v3 = (1, 0, β6)T , which is not a multiple of v2 , 658 Chapter 9. Linear Systems with Constant Coefο¬cients
making the set {v2 , v3 } independent, and giving a third solution,
y3 (t) = etA v3
= e2t [v3 + t (A β 2I )v3 ]
1
β2 1
0 + t β4 2
= e 2t
β6
β2 0
1
β2
0 + t β4
= e 2t
β6
4
1 β 2t
β4t
= e 2t
.
β6 + 4t
Because 5.28. 0
0
β1 β1
β2
2 0
det[y1 (0), y2 (0), y3 (0)] = 0
1 1
0
β6 1
0 = 2,
β6 the solutions are independent for all t and form a fundamental set of solutions.
If
β1 0
0
2 β5 β1 ,
A=
0
4 β1
then
p(Ξ») = det(A β Ξ»I ) β1 β Ξ»
2
0 0
0
β5 β Ξ»
β1
.
4
β1 β Ξ» Expanding across the ο¬rst row,
β5 β Ξ»
β1
4
β1 β Ξ»
= β(Ξ» + 1)(Ξ»2 + 6Ξ» + 9) p(Ξ») = (β1 β Ξ») = β(Ξ» + 1)(Ξ» + 3)2 ,
providing eigenvalues Ξ»1 = β1 and Ξ»2 = β3, with algebraic multiplicities 1 and 2, respectively. Because
0
2
0 A+I = 0
β4
4 0
β1
0 β 1
0
0 β1/2
0
0 0
1
0 , the geometric multiplicity of Ξ»1 = β1 is 1, and an eigenvector is v1 = (1, 0, 2)T , providing the exponential
solution
1
y1 (t) = etA v1 = eβt 0 .
2
For Ξ»2 = β3
A + 3I = 2
2
0 0
β2
4 0
β1
2 β 1
0
0 0
1
0 0
1/2
0 has one free variable, so the geometric multiplicity of Ξ»2 = β3 is 1. An eigenvector is v2 = (0, β1, 2)T ,
giving a second exponential solution,
y2 (t) = etA v2 = eβ3t 0
β1
2 . 9.5. The Exponential of a Matrix 659 Next,
(A + 3I )2 = 4
0
8 0
0
0 0
0
0 1
0
0 β 0
0
0 0
0
0 has dimension 2, equaling the algebraic multiplicity of Ξ»2 . Thus, we can pick a vector in the nullspace of
(A + 3I )2 that is not in the nullspace of A + 3I . Choose v3 = (0, 1, 0)T , which is not a multiple of v2 , making
the set {v2 , v3 } independent and giving a third solution,
y3 (t) = etA v3
= eβ3t [v3 + t (A + 3I )v3 ]
0
20
1 + t 2 β2
= eβ3t
0
04
0
0
1 + t β2
= eβ3t
4
0
0
= eβ3t 1 β 2t .
4t
Because 5.29. 1
det y1 (0), y2 (0), y3 (0) = 0
2 0
β1
2 0
β1
2 0
1
0 0
1 = β2 ,
0 the solutions are independent for all t and form a fundamental set of solutions.
Using a computer, matrix 11 β42 4
28 β12
A=
0
β24 39
0
81 β4
β1
β8 β28 ,
0
β57 has characteristic polynomial,
p(Ξ») = (Ξ» + 3)2 (Ξ» + 1)2 ,
providing eigenvalues Ξ»1 = β3 and Ξ»2 = β1, with algebraic multiplicities 2 and 2, respectively. Because 14 β42 4
1 0 0
28 0 β12
A + 3I = 0
β24 42
0
81 β4
2
β8 β28 0
β
0
0
β54
0 1
0
0 0
1
0 β2/3 ,
0
0 the geometric multiplicity of Ξ»1 = β3 is one, and an eigenvector is v1 = (0, β2, 0, β3)T , leading to the
solution
0 β2 .
y1 (t) = etA v1 = eβ3t 0
β3
Next, 28
0
(A + 3I )2 = 0
β12 β84
0
0
36 8
0
4
β4 1
56 0
0
β
0
0
0
β24 β3
0
0
0 0
1
0
0 2
0
,
0
0 has dimension two, equaling the algebraic multiplicity of Ξ»1 . Thus, we can pick a vector in the nullspace of
(A + 3I )2 that is not in the nullspace of A + 3I . Choose v2 = (β2, 0, 0, 1)T , which is not a multiple of v1 , 660 Chapter 9. Linear Systems with Constant Coefο¬cients
making the set {v1 , v2 } independent, and giving a second solution,
y2 (t) = etA v2
= eβ3t [v2 + t (A + 3I )v2 ] 0 14 β42 β2 β12 42
= eβ3t +t
0
0
0
3
β24 81 β2 0 0 β4 = eβ3t +t
0
0 1
β6 β2 β4t .
= eβ3t 0
1 β 6t 4
β4
2
β8 28 β2 β28 0 0 0 β54
1 Because 1 0 1/3 7/3 12 β42 4
28 0
0 1 0 β12 40 β4 β28 β
A+I =
0
0
0
0
00 0
0
β24 81 β8 β56
00 0
0
has dimension two, equaling the algebraic multiplicity of Ξ»2 , we can pick two independent eigenvectors in the
nullspace of A + I . Choose v3 = (β1, 0, 3, 0)T and v4 = (β7, 0, 0, 3)T . Note that they are not multiples of
one another, making the set {v3 , v4 } independent, and giving a third and fourth solution, β1 β7 0
v3 (t) = eβt 3
0 and 0
y4 (t) = eβt .
0
3 Because 5.30. 0 β2 β1 β7
β2 0
0
0
det[y1 (0), y2 (0), y3 (0), y4 (0)] =
= 6,
0
0
3
0
β3 1
0
3
the solutions are independent for all t and form a fundamental set of solutions.
Using a computer, matrix 18 β7 24
24 16 15 β8 20
A=
0
0
β1
0
β12 4 β15 β17
has characteristic polynomial p(Ξ») = (Ξ» + 3)2 (Ξ» + 1)2 , providing eigenvalues Ξ»1 = β3 and Ξ»2 = β1, with
algebraic multiplicities 2 and 2, respectively. Because 1 β1/3 0 0 21 β7 24
24 15
A + 3I = 0
β12 β5
0
4 20
2
β15 16 0
β
0
0
0
β14 0
0
0 1
0
0 0
,
1
0 the geometric multiplicity of Ξ»1 = β3 is 1, and an eigenvector is v1 = (1, 3, 0, 0)T , giving solution
1
3
y1 (t) = etA v1 = eβ3t .
0
0 9.5. The Exponential of a Matrix 661 Next, 48 48
(A + 3I )2 = 0
β24 β16
β16
0
8 52
60
4
β28 1
56 56 0
β
0
0
β28
0 β1/3
0
0
0 0
1
0
0 7/6 0
0
0 has dimension 2, equalling the algebraic multiplicity of Ξ»1 . Thus, we can pick a vector in the nullspace of
(A + 3I )2 that is not in the nullspace of A + 3I . Choose v2 = (β7, 0, 0, 6)T , which is not a multiple of v1 .,
making the set {v1 , v2 } independent, and giving a second solution
y2 (t) = etA v2
= eβ3t [v2 + t (A + 3I )v2 ] β7 21 β7 0 15 β5
= eβ3t +t
0
0
0
6
β12 4 β3 β7 β9 0 +t
= eβ3t 0 0
0
6 β7 β 3t β9t = eβ3t .
0
6 24 β7 16 0 0 0 β14
6 24
20
2
β15 For Ξ»2 = β1, 19 15
A+I =
0
β12 β7
β7
0
4 24
20
0
β15 1
24 16 0
β
0
0
β16
0 0
1
0
0 0
0
1
0 2
2
,
0
0 the geometric multiplicity of Ξ»2 = β1 is 1, and an eigenvector is v3 = (β2, β2, 0, 1)T , giving the exponential
solution β2 β2 .
y3 (t) = etA v3 = eβt 0
1
Next, β32 β12
(A + I )2 = 0
24 12
8
0
β8 β44
β20
0
32 1
β40 β8 0
β
0
0
0
32 0
1
0
0 1
β1
0
0 2
2
0
0 has dimension 2, equaling the algebraic multiplicity of Ξ»2 , so we can pick a vector in the nullspace of (A + I )2
that is not in the nullspace of A + I. Choose v4 = (−1, 1, 1, 0)ᵀ, which is not a multiple of v3, making the
set {v3, v4} independent, and giving a fourth solution,
y4 (t) = etA v4
= eβt [v4 + t (A + I )v4 ] β1 19 β7 1 15 β7
= eβt +t
1
0
0
0
β12 4 β1 β2 1 β2 +t
= eβt 1
0 0
1 β1 β 2 t 1 β 2t = e βt .
1
t
Because
1
3
det y1 (0), y2 (0), y3 (0), y4 (0) =
0
0 5.31. 24
20
0
β15 β7
0
0
6 24 β1 16 1 0 1 β16
0 β2
β2
0
1 β1
1
= 3,
1
0 the solutions are independent for all t and form a fundamental set of solutions.
Using a computer, matrix 0 β30 β42 40 β48 14 7
9
β9
10
β2 1 5
8
β6
6
β2 β1
A=
,
45
64 β60 72 β20 2
2
33
47 β45 55 β15 0
7
11 β10 10
β1
has characteristic polynomial,
p(Ξ») = (Ξ» β 1)3 (Ξ» β 2)3 ,
providing eigenvalues Ξ»1 = 1 and Ξ»2 = 2, with algebraic multiplicities 3 and 3, respectively. Because β1 β30 β42 40 β48 14 1 0 0 0 4 β2 6
9
β9
10
β2 1 0 1 0 0 4 β3 5
7
β6
6
β2 β1 0 0 1 0 2 β1 AβI =
β
,
45
64 β61 72 β20 2 0 0 0 1 4 β3 2 0 0 0 0 0 0 33
47 β45 54 β15
0
7
11 β10 10
β2
00000 0
the geometric multiplicity of λ1 = 1 is two, and we can choose two independent eigenvectors from the
nullspace of A β I , v1 = (β4, β4, β2, β4, 1, 0)T and v2 = (2, 3, 1, 3, 0, 1)T . These eigenvectors provide
the solutions β4 2 β4 3 tA
t β2 tA
t 1
y1 (t) = e v1 = e and y2 (t) = e v2 = e . β4 3
1
0
0
1 9.5. The Exponential of a Matrix 663 Next, β3 β2 β1
(A β I ) = 1
2
β4
2 β46 β64
β38 β53
9
12
21
29
25
35
β37 β52 β76
β62
12
34
42
β64 62
51
β11
β28
β34
51 1
22 18 0 β4 0
β
β10 0
0
β12 0
18 0
1
0
0
0
0 0
0
1
0
0
0 β4/7
β5/7
β3/7
0
0
0 12/7
8/7
2/7
0
0
0 β2/7 β6/7 2/7 0
0
0 has dimension three, equaling the algebraic multiplicity of Ξ»1 . Thus, we can pick a vector in the nullspace of
(A β I )2 that is not in the nullspace of A β I . We will try v3 = (4, 5, 3, 7, 0, 0)T , but weβll need to check
independence before proceeding. However, β4 β4 β2 β4
1
0 2
3
1
3
0
1 1
4
5
0 3
0
β
7
0
0
0
0
0 0
1
0
0
0
0 0
0 1
,
0
0
0 and a pivot in each column tells us that {v1 , v2 , v3 } is an independent set. A third solution is
y3 (t) = etA v3
= et [v3 + t (A β I )v3 ] β1 β30 4 6
1 5 5 β1 3 = et + t 45
2 7 2 0 33
0
7
1 4 14 5 β4 3 β2 = et + t 7 β22 0 β16 1
β4 4 + 14t 5 β 4t = e t 3 β 2t . β16t 1 β 4t β42
9
7
64
47
11 40
β9
β6
β61
β45
β10 β48
10
6
72
54
10 14 4 β2 5 β2 3 β20 7 β15 0 1
β2 Next, β2 1 β1
A β 2I = 2
2
0 β30
5
5
45
33
7 β42
9
6
64
47
11 40
β9
β6
β62
β45
β10 β48
10
6
72
53
10 1
14 β2 0 β2 0
β
β20 0
0
β15 β3
0 0
1
0
0
0
0 0
0
1
0
0
0 0
0
0
1
0
0 0
0
0
0
1
0 β2 β2 1
,
1
1
0 664 Chapter 9. Linear Systems with Constant Coefο¬cients
has dimension one, giving a single eigenvector v4 = (2, 2, −1, −1, −1, 1)ᵀ and a fourth solution,
2
2 tA
2t β1 y4 (t) = e v4 = e . β1 β1 1
Next,
0 β4 1
(A β 2I )2 = β3 β2
β4 14
β49
β1
β69
β41
β51 20
β71
β1
β99
β59
β74 β18
69
1
95
56
71 20
β82
0
β110
β65
β84 1
β6 22 0 0
0
β
30 0
0
18 0
23 0
1
0
0
0
0 0
0
1
0
0
0 0
0
0
1
0
0 2
3
β2
β1
0
0 0
1 β1 ,
0
0
0 which has dimension one. Pick v5 = (0, β1, 1, 0, 0, 1)T in the nullspace of (A β 2I )2 . Note that it is not a
multiple of v4 and is therefore independent of v4 . Now,
y5 (t) = etA v5
= e2t [v5 + t (A β 2I )v5 ] β2 β30 0 5
1 β1 5 β1 1 = e2t +t
45
2 0 2 0 33
0
7
1 2 0 2 β1 β1 1 = e2t +t β1 0 β1 0 1
1 2t β1 + 2 t 2t 1 β t =e . βt βt 1+t β42
9
6
64
47
11 40
β9
β6
β62
β45
β10 β48
10
6
72
53
10 14 0 β2 β1 β2 1 β20 0 0 β15
1
β3 Finally, examine β2
4 0
3
(A β 2I ) = 6
4
5 β22
73
5
105
61
79 β32
105
7
151
88
114 30
β101
β7
β145
β84
β109 β36
118
8
170
99
128 1
10 β32 0 β2 0
β
β46 0
0
β27 β35
0 0
1
0
0
0
0 0
0
1
0
0
0 1
0
β1
0
0
0 1
3
β1
0
0
0 0
1 β1 ,
0
0
0 9.5. The Exponential of a Matrix
which has dimension three. Pick v6 = (β1, β3, 1, 0, 1, 0)T
check independence. Since
2
1
0 β1 2 β1 β3 0 β1 1
1 0 β
β1 0
0 0 β1 0
0
1
0
1
1
0 665 in the nullspace of (A β 2I )3 . We will need to
0
1
0
0
0
0 0
0 1 0
0
0 has a pivot in each column, the set {v4 , v5 , v6 } is independent. A sixth solution is formed as follows.
y6 (t) = etA v6
t2
= e2t v6 + t (A β 2I )v6 + (A β 2I )2 v6
2 β1 β1 β1 β3 β3 β3 2
1 t 1 1 = e2t + (A β 2I )2 + t (A β 2I ) 0 2 0 0 1 1 1 0
0
0
2 β1 β2 3 β3 β2 2 β2 t 1 1 = e2t + +t β1 2 1 0 β1 1 1 0
0
β1 β1 + 2t β t 2 β3 + 3t β t 2 1 β 2t + t 2 /2 = e 2t 2 βt + t /2 1 β t + t 2 /2 βt 2 /2
Because
β4
β4
β2
det[y1 (0), y2 (0), y3 (0), y4 (0), y5 (0), y6 (0)] =
β4
1
0
= 1,
5.32. 2
3
1
3
0
1 4
5
3
7
0
0 2
2
β1
β1
β1
1 0
β1
1
0
0
1 the solutions are independent for all t and form a fundamental set of solutions.
Using a computer, matrix
2
0
0
0
0
1
11
β9
β8 β14 β2 β7 7
β6
β4 β9 β3 β3
A= 17 β12 β9 β19 β5 β9 β29 β7 β13 23 β16 β15 19
5
9
β15 12
11
has characteristic polynomial
p(Ξ») = (Ξ» β 1)3 (Ξ» β 2)3 , β1
β3
1
0
1
0 666 Chapter 9. Linear Systems with Constant Coefο¬cients
providing eigenvalues Ξ»1 = 1 and Ξ»2 = 2, with algebraic multiplicities 3 and 3, respectively. Using a
computer, the nullspace of A β I has dimension one, as A β I reduces to
1 0 0 0 0 1 0 0
AβI =
0
0
0 1
0
0
0
0 0
1
0
0
0 0
0
1
0
0 0 2
.
1
β1 0 0
0
0
1
0 Thus, Ξ»1 = 1 has geometric multiplicity 1. Using a computer to reduce (A β I )2 , you can check that the
nullspace of (A β I )2 has dimension 2. The key here is that (A β I )3 reduces to
1 0 1 0
1
2
0 0
(A β I )3 β 0
0
0 β2
0
0
0
0 1
0
0
0
0 0
1
0
0
0 β3/2
0
0
0
0 A basis for the nullspace is provided by the vectors. β1 β1 v1 = 2
1
0
0
0 , 3/2 0
v2 = ,
0
1
0 β5/2 1
.
0
0
0 β2 and 5/2 0
v3 = β1 0
1 Because v1 , v2 , and v3 are in the nullspace of (A β I )3 we know that
y(t) = eAt v = v + t (A β I )v + t2
(A β I )v
2 for each v = v1 , v2 , and v3 . This fact, and a computer, provide the following solutions. β1 β t β t 2 /2 2+t 1 β t β t2 ,
βt 2 /2 2t + t 2 /2 t 2 /2 β1 β t β t 2 /4 3/2 + t/2 2
tA
t β3t/2 β t /2 y2 (t) = e v2 = e , βt/2 β t 2 /4 1 + 3t/2 + t 2 /4 t/2 + t 2 /4 y1 (t) = e v1 = e tA and t β2 β t β 3t 2 /4 5/2 + 3t/2 2
tA
t βt/2 β 3t /2 y3 (t) = e v3 = e . β1 + t/2 β 3t 2 /4 10t/4 + 3t 2 /4 1 β t/2 + 3t 2 /4 9.5. The Exponential of a Matrix 667 On the other hand, A β 2I reduces to
1
0 0
A β 2I β 0
0
0 0
1
0
0
0
0 1/6
7/6
0
0
0
0 β5/6
1/6
0
0
0
0 1/2
1/2
0
0
0
0 0
0 1
,
0
0
0 so the nullspace of A β 2I has dimension 3, and the geometric multiplicity of Ξ»2 = 2 is 3. A basis for the
eigenspace contains the vectors β1/6 β7/6 1
v4 = ,
0
0
0 5/6 β1/6 0
v5 = ,
1
0
0 β1/2 and β1/2 0
v6 = .
0
1
0 The corresponding solutions are β1/6 β7/6 1
y4 (t) = e v4 = e ,
0
0
0 5/6 β1/6 tA
2t 0 y5 (t) = e v5 = e ,
1
0
0
tA and 2t β1/2 β1/2 0
y6 (t) = etA v6 = e2t .
0
1
0
Because
det y1 (0), y2 (0), y3 (0), y4 (0), y5 (0), y6 (0)
β1 β1 β2 β1/6 5/6 β1/2
2 3/2 5/2 β7/6 β1/6 β1/2
1
1
0
0
1
0
0
=
,
=
0
0
β1
0
1
0
12
0
1
0
0
0
1
0
0
1
0
0
0
the solutions y1 (t), y2 (t), y3 (t), y4 (t), y5 (t), and y6 (t) are independent for all t and form a fundamental set of
solutions. 668
5.33. Chapter 9. Linear Systems with Constant Coefο¬cients
Consider the ο¬rst studentβs solution. If y1 (t) = e2t (1, 4, 4), then
2
8
8
2
8
8 y1 (t) = e2t
β2
β4
0 2
3
β1 β1
0
3 y1 (t) = e2t , and , so y1 is a solution. Similarly, y2 and y3 are seen to be solutions. Further,
1
det[y1 (0), y2 (0), y3 (0)] = 4
4 1
1
0 β1
0 = 1,
1 so the solutions are independent and form a fundamental solution set. Looking at the second studentβs solution,
y2 (t) = et β5
β10 + et
β5
β2 2
β4 3
0 β1 3 β 5t
1 β 10t = et
β2 β 5t
β1
0 y2 (t) = et
3 β2 β 5t
β9 β 10t
β7 β 5t
β2 β 5t
β9 β 10t
β7 β 5t , and , so y2 is a solution. In a similar manner, you can check that the second studentβs y1 and y3 are solutions.
Moreover,
13
3
4 1 β1 = β6,
det[y1 (0), y2 (0), y3 (0)] =
4 β2 β4 5.34. so the solutions are independent and form a fundamental solution set. Thus, both students are correct. They
both have fundamental solution sets. They are just using different bases.
If

    A = [  6   0  −4
          −2   4   5
           1   0   2 ],

then the characteristic polynomial is found with the following computation:

    p(λ) = det(A − λI) = det [ 6−λ    0    −4
                                −2   4−λ    5
                                 1    0    2−λ ].

Expanding down the second column,

    p(λ) = (4 − λ) det [ 6−λ   −4
                          1    2−λ ]
         = (4 − λ)(λ² − 8λ + 16)
         = −(λ − 4)³.
Because a matrix must satisfy its characteristic polynomial (the Cayley–Hamilton theorem), the series
    e^{tA} = e^{t[4I + (A − 4I)]}
           = e^{4tI} e^{t(A − 4I)}
           = e^{4t} [I + t(A − 4I) + (t²/2!)(A − 4I)² + ···]

truncates, with (A − 4I)^k = 0 for k = 3, 4, .... Thus,

    e^{tA} = e^{4t} [I + t(A − 4I) + (t²/2!)(A − 4I)²]
           = e^{4t} [ 1 + 2t          0       −4t
                      −2t + t²/2      1     5t − t²
                        t             0     1 − 2t ].

Choose

    e1 = (1, 0, 0)ᵀ,  e2 = (0, 1, 0)ᵀ,  and  e3 = (0, 0, 1)ᵀ.

Then

    y1(t) = e^{tA} e1 = e^{4t}(1 + 2t, −2t + t²/2, t)ᵀ,
    y2(t) = e^{tA} e2 = e^{4t}(0, 1, 0)ᵀ,  and
    y3(t) = e^{tA} e3 = e^{4t}(−4t, 5t − t², 1 − 2t)ᵀ

form a fundamental set of solutions.
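The truncation claim is easy to sanity-check numerically. The following is a quick sketch in plain Python (no libraries assumed, matrix as in this exercise); it verifies that (A − 4I)³ = 0 and that the three-term closed form agrees with a long partial sum of the exponential series:

```python
import math

# Matrix from the exercise above; eigenvalue 4 has algebraic multiplicity 3,
# so N = A - 4I is nilpotent and the series for e^{tA} truncates.
A = [[6, 0, -4], [-2, 4, 5], [1, 0, 2]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

I = [[float(i == j) for j in range(3)] for i in range(3)]
N = [[A[i][j] - 4.0 * (i == j) for j in range(3)] for i in range(3)]   # N = A - 4I
N2 = matmul(N, N)
print(matmul(N2, N))    # (A - 4I)^3: the zero matrix

# Closed form e^{tA} = e^{4t}(I + tN + (t^2/2) N^2) versus a 30-term Taylor sum.
t = 0.5
closed = [[math.exp(4 * t) * (I[i][j] + t * N[i][j] + t * t / 2 * N2[i][j])
           for j in range(3)] for i in range(3)]
taylor = [row[:] for row in I]
term = [row[:] for row in I]
for k in range(1, 30):
    # term_k = term_{k-1} (tA) / k, so taylor accumulates sum (tA)^k / k!
    term = [[sum(term[i][m] * t * A[m][j] for m in range(3)) / k for j in range(3)]
            for i in range(3)]
    taylor = [[taylor[i][j] + term[i][j] for j in range(3)] for i in range(3)]
print(max(abs(closed[i][j] - taylor[i][j]) for i in range(3) for j in range(3)) < 1e-9)  # True
```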
5.35. If

    A = [  8   3   2
           0   4   0
          −8  −6   0 ],

then the characteristic polynomial is found with the following computation.

    p(λ) = det(A − λI) = det [ 8−λ    3    2
                                0    4−λ   0
                               −8    −6   −λ ].

Expanding across the second row,

    p(λ) = (4 − λ) det [ 8−λ   2
                         −8   −λ ]
         = −(λ − 4)(λ² − 8λ + 16)
         = −(λ − 4)³.

Thus, λ = 4 is a repeated eigenvalue having algebraic multiplicity 3. Because

    A − 4I = [  4   3   2           [ 1  3/4  1/2
                0   0   0    →        0   0    0
               −8  −6  −4 ]           0   0    0 ]

has a nullspace of dimension two, we can select two eigenvectors from the nullspace of A − 4I, v1 = (−3, 4, 0)ᵀ and v2 = (−1, 0, 2)ᵀ. Of course, these lead to the independent solutions

    y1(t) = e^{tA} v1 = e^{4t}(−3, 4, 0)ᵀ  and  y2(t) = e^{tA} v2 = e^{4t}(−1, 0, 2)ᵀ.

Examining

    (A − 4I)² = [ 0  0  0
                  0  0  0
                  0  0  0 ],

we note that (A − 4I)^k = 0 for k ≥ 2. We can write

    e^{tA} = e^{t(4I + (A − 4I))}
           = e^{4tI} e^{t(A − 4I)}
           = e^{4t} [I + t(A − 4I)],

knowing that the series truncates. Choose any vector independent of v1 and v2, such as v3 = (1, 0, 0)ᵀ (check this); then

    y3(t) = e^{tA} v3
          = e^{4t}[v3 + t(A − 4I)v3]
          = e^{4t}[(1, 0, 0)ᵀ + t(4, 0, −8)ᵀ]
          = e^{4t}(1 + 4t, 0, −8t)ᵀ
provides the remaining solution.

5.36. In matrix form,

    (x, y, z)′ = [ −2  −4  13
                    0   5  −4
                    0   1   1 ] (x, y, z)ᵀ,

which leads to the characteristic polynomial

    p(λ) = det(A − λI) = det [ −2−λ   −4    13
                                 0    5−λ   −4
                                 0     1   1−λ ].

Expanding down the first column,

    p(λ) = (−2 − λ) det [ 5−λ   −4
                           1    1−λ ]
         = (−2 − λ)(λ² − 6λ + 9)
         = −(λ + 2)(λ − 3)².

For λ1 = −2,

    A + 2I = [ 0  −4  13           [ 0  1  0
               0   7  −4    →        0  0  1
               0   1   3 ]           0  0  0 ],

and the eigenvector v1 = (1, 0, 0)ᵀ provides the solution

    y1(t) = e^{tA} v1 = e^{−2t}(1, 0, 0)ᵀ.

For λ2 = 3,

    A − 3I = [ −5  −4  13           [ 1  0  −1
                0   2  −4    →        0  1  −2
                0   1  −2 ]           0  0   0 ],

and the eigenvector v2 = (1, 2, 1)ᵀ provides the solution

    y2(t) = e^{tA} v2 = e^{3t}(1, 2, 1)ᵀ.

Because

    (A − 3I)² = [ 25  25  −75           [ 1  1  −3
                   0   0    0    →        0  0   0
                   0   0    0 ]           0  0   0 ],

the nullspace of (A − 3I)² has dimension 2. Pick v3 = (3, 0, 1)ᵀ in the nullspace of (A − 3I)². Note that v2 and v3 are independent. This gives a third solution,

    y3(t) = e^{tA} v3
          = e^{3t}[v3 + t(A − 3I)v3]
          = e^{3t}[(3, 0, 1)ᵀ + t(−2, −4, −2)ᵀ]
          = e^{3t}(3 − 2t, −4t, 1 − 2t)ᵀ.

Because

    det[y1(0), y2(0), y3(0)] = det [ 1  1  3
                                     0  2  0
                                     0  1  1 ] = 2,

the solutions y1(t), y2(t), and y3(t) are independent for all t and form a fundamental set of solutions.
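The generalized-eigenvector chain used here is easy to confirm with a few lines of plain Python (a sketch; the helper matvec is defined inline):

```python
# Check the chain computation in Exercise 5.36: (A - 3I)v3 gives the
# t-coefficient of y3, and v3 lies in the nullspace of (A - 3I)^2.
A = [[-2, -4, 13], [0, 5, -4], [0, 1, 1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

B = [[A[i][j] - 3 * (i == j) for j in range(3)] for i in range(3)]   # B = A - 3I
v3 = [3, 0, 1]
w = matvec(B, v3)
print(w)               # [-2, -4, -2]
print(matvec(B, w))    # [0, 0, 0]
```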
5.37. In matrix form,

    (x, y, z)′ = [ −1   5   3
                    0   1   1
                    0  −2  −2 ] (x, y, z)ᵀ,

which leads to the characteristic polynomial

    p(λ) = det(A − λI) = det [ −1−λ    5    3
                                 0    1−λ   1
                                 0    −2  −2−λ ].

Expanding down the first column,

    p(λ) = (−1 − λ) det [ 1−λ    1
                          −2   −2−λ ]
         = −(λ + 1)(λ² + λ)
         = −λ(λ + 1)².

Thus, λ1 = 0 and λ2 = −1 are eigenvalues having algebraic multiplicities 1 and 2, respectively. Because

    A − 0I = [ −1   5   3           [ 1  0  2
                0   1   1    →        0  1  1
                0  −2  −2 ]           0  0  0 ],

v1 = (−2, −1, 1)ᵀ is an eigenvector, providing the solution

    y1(t) = e^{tA} v1 = e^{0t}(−2, −1, 1)ᵀ = (−2, −1, 1)ᵀ.

Because

    A + I = [ 0   5   3           [ 0  1  0
              0   2   1    →        0  0  1
              0  −2  −1 ]           0  0  0 ]

has a nullspace of dimension one, we can choose the eigenvector v2 = (1, 0, 0)ᵀ to produce a second solution,

    y2(t) = e^{tA} v2 = e^{−t}(1, 0, 0)ᵀ.

Examining

    (A + I)² = [ 0   4   2           [ 0  1  1/2
                 0   2   1    →        0  0   0
                 0  −2  −1 ]           0  0   0 ],

we note that the nullspace of (A + I)² has dimension two, so we can choose v3 = (0, 1, −2)ᵀ in the nullspace of (A + I)², independent of v2 (it's not a multiple of v2). Then,

    y3(t) = e^{tA} v3
          = e^{t(−I + (A + I))} v3
          = e^{−tI} e^{t(A + I)} v3
          = e^{−t} [I + t(A + I) + (t²/2)(A + I)² + ···] v3
          = e^{−t} [v3 + t(A + I)v3],

because v3 is in the nullspace of (A + I)² and (A + I)^k v3 = 0 for all k ≥ 2. Thus,

    y3(t) = e^{−t}[(0, 1, −2)ᵀ + t(−1, 0, 0)ᵀ] = e^{−t}(−t, 1, −2)ᵀ.
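The factored characteristic polynomial in this exercise can be cross-checked by evaluating det(A − λI) at a few sample values of λ (a pure-Python sketch; det3 is an illustrative helper, not from the text):

```python
# Numerically confirm p(lambda) = -lambda (lambda + 1)^2 for Exercise 5.37.
A = [[-1, 5, 3], [0, 1, 1], [0, -2, -2]]

def det3(M):
    # Cofactor expansion of a 3x3 determinant along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

for lam in (2.0, -3.0, 0.5):
    M = [[A[i][j] - lam * (i == j) for j in range(3)] for i in range(3)]
    assert abs(det3(M) - (-lam * (lam + 1) ** 2)) < 1e-9
print("characteristic polynomial confirmed")
```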
5.38. If 5 β1 0
2
0
4
0 3
A=
,
1 1 β1 β3 0 β1 0
7
a computer reveals the characteristic equation
p(Ξ») = (Ξ» + 1)(Ξ» β 5)3 ,
so λ1 = −1 and λ2 = 5 are eigenvalues, with algebraic multiplicities 1 and 3, respectively. Because
1 0 0 0 6 β1 0 2 0 1 0 0
0 4 0 4 ,
β
A+I =
0 0 0 1
1 1 0 β3 0000
0 β1 0 8
the eigenvector v1 = (0, 0, 1, 0)T provides the solution 0 0
y1 (t) = eAt v1 = eβt .
1
0 9.5. The Exponential of a Matrix
Because 0
0
A β 5I = 1
0 β1
β2
1
β1 1
2
4
0
β
β3 0
2
0 0
0
β6
0 673 β6
0
0
0 0
1
0
0 β1 β2 ,
0
0 the nullspace of A β 5I has dimension 2 and the eigenvector v2 = (6, 0, 1, 0)T and v3 = (1, 2, 0, 1)T provide
two more solutions.
6
1
0
y2 (t) = eAt v2 = e5t 1
0
Because 0
0
(A β 5I )2 = β6
0 0
0
β6
0 and 0
0
36
0 2
y3 (t) = eAt v3 = e5t .
0
1 1
0
0
0
β
0
18 0
0 1
0
0
0 β6
0
0
0 β3 0
,
0
0 the nullspace of (A β 5I )2 has dimension 3 and we can pick v4 = (3, 0, 0, 1)T independent of v2 and v3
(check this). This gives solution
y4 (t) = eAt v4
= e5t [v4 + t (A β 5I )v4 ] 3 0 β1 0 0 β2
= e5t + t 0
11
1
0 β1 2 3 4 0 = e5t + t 0
0
2
1 3 + 2t 4t = e5t .
0
1 + 2t 0
0
β6
0 2 3 4 0 β3 0 2
1 Because
0
0
det y1 (0), y2 (0), y3 (0), y4 (0) =
1
0 6
0
1
0 1
2
0
1 3
0
= 12,
0
the solutions are independent for all t and form a fundamental set of solutions.
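The determinant just computed can be double-checked with exact arithmetic (a pure-Python sketch using Gaussian elimination over fractions; the columns are y1(0), ..., y4(0) as found above):

```python
from fractions import Fraction

# Recompute det[y1(0), y2(0), y3(0), y4(0)] for Exercise 5.38 exactly.
cols = [[0, 0, 1, 0], [6, 0, 1, 0], [1, 2, 0, 1], [3, 0, 0, 1]]
M = [[Fraction(cols[j][i]) for j in range(4)] for i in range(4)]   # columns -> matrix

det = Fraction(1)
for c in range(4):
    pivot = next(r for r in range(c, 4) if M[r][c] != 0)
    if pivot != c:                     # row swap flips the sign
        M[c], M[pivot] = M[pivot], M[c]
        det = -det
    det *= M[c][c]
    for r in range(c + 1, 4):          # eliminate below the pivot
        factor = M[r][c] / M[c][c]
        M[r] = [M[r][k] - factor * M[c][k] for k in range(4)]
print(det)   # 12
```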
5.39. If β12 β8
A=
0
β17 β1
0
0
β1 8
β1
5
8 10 9
,
0
15 a computer reveals the characteristic equation
p(Ξ») = (Ξ» + 1)2 (Ξ» β 5)2 , 674 Chapter 9. Linear Systems with Constant Coefο¬cients
so Ξ»1 = β1 and Ξ»2 = 5 are eigenvalues, each having algebraic multiplicities 2. Because β17 β1 β8 β5
A β 5I = 0
0
β17 β1 8
β1
0
8 1
10 9
0
β
0
0
10
0 0
1
0
0 β41/77
81/77
0
0 β41/77 β73/77 ,
0
0 the nullspace of A β 5I provides two eigenvectors, v1 = (41, β81, 77, 0)T and v2 = (43, 73, 0, 77)T , and
solutions 41 β81 y1 (t) = etA v1 = e5t 77 0 41 73 y2 (t) = etA v2 = e5t .
0
77
Examining β11 β1
1 β8
A+I =
0
0
β17 β1 8
β1
6
8 1
10 9
0
β
0
0
16
0 0
1
0
0 0
0
1
0 β1 1
,
0
0 showing that A + I has dimension one, giving up only one eigenvector, v3 = (1, β1, 0, 1)T , and one solution
1 β1 y3 (t) = etA v3 = eβt .
0
1
But, β41 β73
(A + I )2 = 0
β77 0
0
0
0 41
1
36
41 1
41 73 0
β
0
0
77
0 0
0
0
0 0
1
0
0 β1 0
0
0 has dimension two, so v4 = (1, 0, 0, 1)T is in the nullspace of (A + I )2 , independent of v3 (itβs not a multiple
of v3 ), and (A + I )k v4 = 0 for k β₯ 2. Thus,
y4 (t) = etA v4
= et (βI +(A+I )) v4
= eβtI et (A+I ) v4
= eβt I I + t (A + I ) +
= eβt [v4 + t (A + I )v4 ] . t2
(A + I )2 + Β· Β· Β· v4
2 9.5. The Exponential of a Matrix 675 Thus, 1 1 0 0 y4 (t) = eβt + t (A + I ) 0
0
1
1 1 β1 0 1 = eβt + t 0
0 1
β1
1 β t t
= e βt .
0
1βt
5.40. If β1 β6
A=
0
β2 0
13
β6
5 0
0
β2
0 2
β42 ,
13 β16 a computer reveals the characteristic polynomial
p(Ξ») = (Ξ» + 2)2 (Ξ»2 + 2Ξ» + 5).
For Ξ» = β2, β3 β12
(A + 2I )2 = 10
β4 10 0
15 0
β25 0
5
0 1
β26 β54 0
β
70 0
β18
0 0
1
0
0 0
0
0
0 2
β2 ,
0
0 so we can pick v1 = (0, 0, 1, 0)T and v2 = (β2, 2, 0, 1)T . Thus,
0
0
y1 (t) = etA v1 = eβ2t [v1 + t (A + 2I )v1 ] = eβ2t 1
0 β2 2
y2 (t) = etA v2 = eβ2t [v2 + t (A + 2I )v2 ] = eβ2t .
t
1
The remaining eigenvalues are β1 Β± 2i . A computer reveals an eigenvector w = (1, 3i, β2 β i, i)T associated
with Ξ» = β1 + 2i . Thus,
1 3i .
z(t)etA w = e(β1+2i)t w = eβt e2it β2 β i i 676 Chapter 9. Linear Systems with Constant Coefο¬cients
Using Euler's identity,

    z(t) = e^{−t}(cos 2t + i sin 2t)[(1, 0, −2, 0)ᵀ + i(0, 3, −1, 1)ᵀ]
         = e^{−t}[cos 2t (1, 0, −2, 0)ᵀ − sin 2t (0, 3, −1, 1)ᵀ]
           + ie^{−t}[cos 2t (0, 3, −1, 1)ᵀ + sin 2t (1, 0, −2, 0)ᵀ].

Thus,

    y3(t) = e^{−t}(cos 2t, −3 sin 2t, −2 cos 2t + sin 2t, −sin 2t)ᵀ  and
    y4(t) = e^{−t}(sin 2t, 3 cos 2t, −cos 2t − 2 sin 2t, cos 2t)ᵀ

are solutions. Because

    det[y1(0), y2(0), y3(0), y4(0)] = det [ 0  −2   1   0
                                            0   2   0   3
                                            1   0  −2  −1
                                            0   1   0   1 ] = 1,

the solutions y1(t), y2(t), y3(t), and y4(t) are independent for all t and form a fundamental set of solutions.

5.41.
If β8 β2 3
12 6 β3 β2 2
A=
,
2
0 β3 β4 β4 β1 2
6
a computer reveals the characteristic equation
p(Ξ») = (Ξ» + 1)(Ξ» + 2)3 ,
so Ξ»1 = β1 and Ξ»2 = β2 are eigenvalues, having algebraic multiplicities 1 and 3, respectively. Because β7 β2 3 1 0 0 β1 12 β3
A+I =
2
β4 β1
0
β1 2
β2
2 6
0
β
β4 0
7
0 1
0
0 0
1
0 β1 ,
1
0 the nullspace of A + I provides one eigenvector, v1 = (1, 1, β1, 1)T and one solution,
1
1
y1 (t) = etA v1 = eβt .
β1 1
Examining β6 β3
A + 2I = 2
β4 β2
0
0
β1 3
2
β1
2 1
12 6
0
β
0
β4 0
8 0
1
0
0 0
0
1
0 β2 0
,
0
0 9.5. The Exponential of a Matrix 677 showing that A + 2I has dimension one, giving up only one eigenvector, v2 = (2, 0, 0, 1)T , and one solution
2
0
y2 (t) = etA v2 = eβ2t .
0
1
But, 0 β2
(A + 2I )2 = 2
β1 0
0
0
0 β1
1
β1
0 1
0
4
0
β
β4 0
2
0 0
0
0
0 0
1
0
0 β2 0
0
0 has dimension two, so v3 = (0, 1, 0, 0)T is in the nullspace of (A + 2I )2 , independent of v2 (itβs not a multiple
of v2 ), and (A + 2I )k v3 = 0 for k β₯ 2. Thus,
y3 (t) = etA v3
= et (β2I +(A+2I )) v3
= eβ2tI et (A+2I ) v3
= eβ2t I I + t (A + 2I ) + t2
(A + 2I )2 + Β· Β· Β· v3
2 = eβ2t [v3 + t (A + 2I )v3 ] .
Thus, 0 0 1 1 y3 (t) = eβ2t + t (A + 2I ) 0
0
0
0 0 β2 1 0 = eβ2t + t 0
0 0
β1 β2t 1
= e β2 t .
0
βt
Examining β2 β2
(A + 2I )3 = 2
β2 0
0
0
0 1
1
β1
1 1
4
4
0
β
β4 0
4
0 0
0
0
0 β1/2
0
0
0 β2 0
,
0
0 we see that (A + 2I )3 has dimension three. Thus, we pick v4 = (1, 0, 2, 0)T from the nullspace of (A + 2I )3
independent of v2 and v3 (check this), having the property that (A + 2I )k = 0 for k β₯ 3. Thus,
y4 (t) = etA v4
= et (β2I +(A+2I )) v4
= eβ2tI et (A+2I ) v4
t2
(A + 2I )2 + Β· Β· Β· v4
2!
t2
v4 + t (A + 2I )v4 + (A + 2I )2 v4
2 = eβ2t I + t (A + 2I ) +
= e β2 t 678 Chapter 9. Linear Systems with Constant Coefο¬cients
Further, 1 1 1 0 0 t 0 y4 (t) = eβ2t + t (A + 2I ) + (A + 2I )2 2
2
2
2
0
0
0 1 0 β2 2 0 1 t 0 = eβ2t + t + 2
0
0 2
0
0
β1 1 β t2 t
.
= e β2 t 2
βt 2 /2
2 5.42. If β2 β1 A = 15 12
β5 2
0
β16
β13
5 β2
β1
β1
1
0 0
0
10
6
β3 β3 β3 33 ,
26 β12 a computer reveals the characteristic equation
p(Ξ») = β(Ξ» + 1)(Ξ» + 2)4 .
The eigenvalue/eigenvector pair λ1 = −1, v1 = (−3, −2, −2, −2, −1)ᵀ gives the solution y1(t) = e^{tA} v1 = e^{−t} v1.
Because (A + 2I )4 reduces
1
0 (A + 2I )4 β 0
0
0 β4/3
0
0
0
0 1/3
0
0
0
0 2 /3
0
0
0
0 8/3 0 0 ,
0
0 the dimension of the nullspace of (A + 2I )4 is 4 and the collection
4
3 v2 = 0 ,
0
0 β1 0 v3 = 3 ,
0
0 β2 0 v4 = 0 ,
3
0 β8 0 v5 = 0 0
is a basis for the nullspace of (A + 2I)⁴. Moreover, for i = 2, 3, 4, and 5,
    yi(t) = e^{tA} vi = e^{−2t}[vi + t(A + 2I)vi + (t²/2!)(A + 2I)²vi + (t³/3!)(A + 2I)³vi].
Using this result and a computer, y2 (t) = etA v2 y3 (t) = etA v3 y4 (t) = etA v4 y5 (y) = etA v4 679 8 + 12t β 5t 2 + t 3 6 + 4t + t 2 + t 3 1 = eβ2t 24t β 5t 2 + t 3 2
18t
β10t + 3t 2 2 + 12t β 5t 2 + t 3 4t + t 2 + t 3 1 β2 t = β e β6 + 24t β 5t 2 + t 3 2
18t
2
10t + 3t β4 + t 2 4+t 1 = e β2 t t 2 2
6
2t 8 + 9t β 5t 2 + t 3 t + t2 + t3 β2 t = βe 21t β 5t 2 + t 3 18t
2
β3 β 10t + 3t Because
β3
β2
det[y1 (0), y2 (0), y3 (0), y4 (0), y5 (0)] = β2
β2
1
5.43. 4
3
0
0
0 β1
0
3
0
0 β2
0
0
3
0 β8
0
0 = 27,
0
3 the solutions are independent for all t and form a fundamental set of solutions.
If β4 3
6
4
2 0 β8 β10 β8 2 10
9 β1 ,
A = β1 7 1 β4 β7 β7 0 β1 β1 β1 β1 0
a computer reveals the characteristic equation
p(Ξ») = (Ξ» + 1)(Ξ» + 2)4 ,
so Ξ»1 = β1 and Ξ»2 = β2 are eigenvalues, having algebraic multiplicities 1 and 4, respectively. Because
1 0 0 0 0 β3 3
6
4
2 0 β7 A + I = β1 7 1 β4
β1 β1 β10
11
β7
β1 β8
9
β6
β1 2
0 β1 β 0 0
0
1
0 1
0
0
0 0
1
0
0 0
0
1
0 β2 2 ,
β1 0 the nullspace of A + I provides one eigenvector, v1 = (0, 2, β2, 1, 1)T and one solution,
0
2 y1 (t) = etA v1 = eβt β2 .
1
1 680 Chapter 9. Linear Systems with Constant Coefο¬cients
Examining β2 0 A + 2I = β1
1
β1 3
β6
7
β4
β1 6
β10
12
β7
β1 1
4
2
β8 2 0 9 β1 β 0 0
β5 0
0
β1 2 0
1
0
0
0 0
0
1
0
0 1
β2
2
0
0 β1 β2 1 ,
0
0 showing that A + 2I has dimension two, giving two eigenvectors, v2 = (β1, 2, β2, 1, 0)T and v3 =
(1, 2, β1, 0, 1)T , and two solutions. β1 2 y2 (t) = etA v2 = eβ2t β2 1
0
1
2 y3 (t) = etA v3 = eβ2t β1 0
1
But,
0
0 (A + 2I )2 = 0
0
0 0
β4
4
β2
β2 0
β6
6
β3
β3 0
β4
4
β2
β2 0
0
2
0 β2 β 0 0
1
0
1 1
0
0
0
0 3/2
0
0
0
0 1
0
0
0
0 β1/2 0 0 ,
0
0 has dimension four, so v4 = (1, 0, 0, 0, 0)T and v5 = (0, β1, 0, 1, 0)T are in the nullspace of (A + 2I )2 .
Also,
1 0 0 0 β1 1 1 0 2 0 β1 0 1 0 0
2 β2 β1 0 0 β 0 0 1 0 , 0 0 0 1
1
001
0000
0
100
so each column is a pivot column and the vectors v2 , v3 , v4 and v5 are independent. Furthermore, (A + 2I )k v4 =
0 and (A + 2I )k v5 = 0 for k β₯ 2. Thus,
y4 (t) = eβ2t [v4 + t (A + 2I )v4 ] β2 1 0 0 = eβ2t 0 + t β1 1 0 β1
0 1 β 2t 0 = e β 2 t βt ,
t
βt 9.5. The Exponential of a Matrix 681 and
y5 (t) = eβ2t [v5 + t (A + 2I )v5 ] 1 0 β2 β1 = eβ2t 0 + t 2 β1 1 0
0 t β1 β 2t = e β2 t 2 t
, 1βt 0
5.44. In matrix form, x 1 x2 x3 x 4
x5 5
3 = β3
3
β4 7
6
β8
14
β9 1
5
β2
8
β6 1
4
β5
10
β5 8 x1 5 x2 β12 x3 .
18 x4 β9
x5 A computer reveals the characteristic equation
p(Ξ») = β(Ξ» + 1)2 (Ξ» β 4)3 .
For Ξ» = β1, (A + I )2 reduces 1 0 (A + I )2 β 0
0
0 0
1
0
0
0 0
0
1
0
0 β1
1
0
0
0 β1 2 β1 ,
0
0 and v1 = (1, β1, 0, 1, 0)T and v2 = (1, β2, 1, 0, 1)T form a basis for the nullspace of (A + I )2 . Moreover,
yi (t) = etA vi = eβt [vi + t (A + I )vi ]
for i = 1, 2. Using this result and a computer, 1 β1 y1 (t) = etA v1 = eβt 0 1
0
1+t β2 β t y2 (t) = etA v2 = eβt 1 .
t 1 For Ξ» = 4, (A β 4I )3 reduces 1
0 (A β 4I )3 β 0
0
0 0
1
0
0
0 1
0
0
0
0 1
0
0
0
0 1
1 0,
0
0 682 Chapter 9. Linear Systems with Constant Coefο¬cients
and v3 = (β1, 0, 1, 0, 0)T , v4 = (β1, 0, 0, 1, 0)T , and v5 = (β1, β1, 0, 0, 1)T form a basis for the nullspace
of (A β 4I )3 . Moreover,
yi (t) = etA vi = e4t vi + t (A β 4I )vi + t2
(A β 4I )2 vi
2! for i = 3, 4, and 5. Using this result and a computer, β2
2
4t β t 1 y3 (t) = eAt v3 = e4t 2 β 6t + t 2 2 10t β 2t 2 β4t + t 2 β2
2
2t β t 1 y4 (t) = eAt v4 = e4t β4t + t 2 2 2 + 6t β 2 t 2 β2t + t 2 β2 β2 β t 2 1 4t At
y5 (t) = e v5 = e β2t + t 2 2 2t β 2t 2 2 + t2
Because
1
β1
det[y1 (0), y2 (0), y3 (0), y4 (0), y5 (0)] = 0
1
0 1
β2
1
0
1 β1
0
1
0
0 β1
0
0
1
0 β1
β1
0 = 1,
0
the solutions y1 (t), y2 (t), y3 (t), y4 (t), and y5 (t) are independent for all t and form a fundamental set of
solutions. Section 6. Qualitative Analysis of Linear Systems
6.1. In matrix form,

    (x, y)′ = [ −0.2   2.0
                −2.0  −0.2 ] (x, y)ᵀ,

the coefficient matrix has characteristic polynomial p(λ) = λ² + 0.4λ + 4.04, producing eigenvalues λ = −0.2 ± 2i. Because the real part of each eigenvalue is negative, the equilibrium point at the origin is asymptotically stable. (Phase portrait figure omitted.)

6.2. In matrix form,

    (x, y)′ = [ 4  0
                3  1 ] (x, y)ᵀ.

The coefficient matrix is lower triangular, so the eigenvalues lie on the diagonal, λ1 = 4 and λ2 = 1. Because both eigenvalues are positive, the equilibrium point at the origin is a source and unstable. (Phase portrait figure omitted.)

6.3. In matrix form,

    (x, y)′ = [ −6  −15
                 3    6 ] (x, y)ᵀ,

the coefficient matrix has characteristic polynomial p(λ) = λ² + 9, producing eigenvalues λ = ±3i. Therefore, the equilibrium point at the origin is a stable center. (Phase portrait figure omitted.)

6.4. In matrix form,

    (x, y)′ = [  2   0
                −3  −1 ] (x, y)ᵀ.

The coefficient matrix is lower triangular, so the eigenvalues lie on the diagonal, λ1 = 2 and λ2 = −1. Because there is at least one positive eigenvalue, the equilibrium point at the origin is unstable. Indeed, with one positive and one negative eigenvalue, the origin is a saddle. (Phase portrait figure omitted.)

6.5. In the system

    y′ = [  0.1   2.0
           −2.0   0.1 ] y,

the coefficient matrix has characteristic polynomial p(λ) = λ² − 0.2λ + 4.01, producing eigenvalues λ = 0.1 ± 2i. Therefore, the equilibrium point at the origin is unstable. Indeed, the equilibrium point is a spiral source. (Phase portrait figure omitted.)

6.6. In the system

    y′ = [ −0.2   0.0
           −0.1  −0.1 ] y,

the coefficient matrix is lower triangular. The eigenvalues lie on the diagonal, λ1 = −0.2 and λ2 = −0.1. Since both eigenvalues are negative, the equilibrium point at the origin is asymptotically stable. Indeed, the origin is a sink. (Phase portrait figure omitted.)

6.7. In the system

    y′ = [ 1  −4
           1  −3 ] y,

the coefficient matrix has characteristic polynomial p(λ) = λ² + 2λ + 1, producing the repeated eigenvalue λ = −1. Because the real part of every eigenvalue is negative, the equilibrium point at the origin is asymptotically stable. Indeed, the equilibrium point is a degenerate sink. (Phase portrait figure omitted.)

6.8. In the system

    y′ = [ 2  −1
           1   0 ] y,

the coefficient matrix has characteristic polynomial p(λ) = λ² − 2λ + 1 = (λ − 1)², producing a repeated eigenvalue, λ = 1. Because the eigenvalue is positive, the equilibrium point at the origin is unstable. (Phase portrait figure omitted.)

6.9. Consider the system y′ = Ay. Using a computer, the matrix

    A = [ −3  −4   2
          −2  −7   4
          −3  −8   4 ]

has characteristic polynomial p(λ) = −λ³ − 6λ² − 11λ − 6 and eigenvalues λ1 = −3, λ2 = −2, and λ3 = −1. Because the real parts of all eigenvalues are negative, the equilibrium point at the origin is asymptotically stable. One such solution, with initial condition (1, 1, 1)ᵀ, is shown in the following figure. (3D trajectory figure omitted.)
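The planar classifications in Exercises 6.1–6.8 follow mechanically from the eigenvalues of the 2×2 coefficient matrix. The sketch below (pure Python; classify is an illustrative helper, not from the text) recomputes the eigenvalues from the quadratic formula and reproduces several of the stability verdicts:

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    # Roots of lambda^2 - (trace) lambda + det for the matrix [[a, b], [c, d]].
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def classify(a, b, c, d):
    l1, l2 = eigenvalues_2x2(a, b, c, d)
    if max(l1.real, l2.real) < 0:
        return "asymptotically stable"
    if max(l1.real, l2.real) > 0:
        return "unstable"
    return "stable center" if l1.imag != 0 else "marginal"

print(classify(-0.2, 2.0, -2.0, -0.2))   # 6.1: asymptotically stable (spiral sink)
print(classify(4, 0, 3, 1))              # 6.2: unstable (nodal source)
print(classify(-6, -15, 3, 6))           # 6.3: stable center
print(classify(2, 0, -3, -1))            # 6.4: unstable (saddle)
```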
6.10. Consider the system y′ = Ay. Using a computer, the matrix

    A = [  3  −1   0
           2   0   0
          −6  −1   3 ]

has characteristic polynomial p(λ) = −(λ − 1)(λ − 2)(λ − 3) and eigenvalues λ1 = 1, λ2 = 2, and λ3 = 3. Because there is a positive eigenvalue, the equilibrium point at the origin is unstable. One solution, starting at the point (0.01, 0.01, 0.01)ᵀ, is shown in the following figure. (3D trajectory figure omitted.)

6.11. In matrix form,

    (x, y, z)′ = [ −1   3   4
                    0   1   6
                    0  −3  −5 ] (x, y, z)ᵀ.

Using a computer, the matrix has characteristic polynomial p(λ) = −λ³ − 5λ² − 17λ − 13 and eigenvalues λ1 = −1, λ2 = −2 + 3i, and λ3 = −2 − 3i. Because all the real parts of the eigenvalues are negative, the equilibrium point at the origin is asymptotically stable. One such solution, with initial condition (1, 1, 1)ᵀ, is shown in the following figure. (3D trajectory figure omitted.)

6.12. In matrix form, the system is

    (x, y, z)′ = [  2   1   0
                   −2   0   0
                   −4  −6  −2 ] (x, y, z)ᵀ.

The matrix has eigenvalues −2 and 1 ± i. Since the complex eigenvalues have positive real part, the origin is an unstable equilibrium point. This is illustrated by the solution plotted in the accompanying figure. (Component plot figure omitted.)

6.13. If

    y′ = [  0   0  −1
           −1   0   0
            4  −2  −3 ] y,

then the matrix has characteristic polynomial p(λ) = −(λ + 1)(λ² + 2λ + 2) and eigenvalues −1, −1 + i, and −1 − i. Therefore, the real part of each eigenvalue is negative, so the hypotheses of Theorem 6.2 are satisfied and the equilibrium point at the origin is asymptotically stable. One such solution, with initial condition (2, −1, −2)ᵀ, is shown in the image that follows. (3D trajectory figure omitted.)

6.14. If

    y′ = [ 3  −3  −5
           0   1   0
           0  −3  −2 ] y,

then a computer reveals that the matrix has characteristic equation p(λ) = −(λ − 3)(λ − 1)(λ + 2) and eigenvalues λ1 = 3, λ2 = 1, and λ3 = −2. Thus, at least one eigenvalue has positive real part and Theorem 6.2 predicts that the equilibrium point at the origin is unstable. One solution, starting at (0.01, 0.01, 0.01)ᵀ, is shown in the following figure. (3D trajectory figure omitted.)

6.15. If
    y′ = Ay,  A = [   3   −2   −5    3
                     16   −6  −17    9
                    −14    5   15   −8
                    −19    8   23  −13 ],

then a computer reveals that A has characteristic equation

    p(λ) = (λ − 2)(λ + 1)³
and eigenvalues Ξ»1 = 2 and Ξ»2 = β1, the latter having algebraic multiplicity 3. Thus, one eigenvalue has
positive real part and Theorem 6.2 predicts that the equilibrium point at the origin is unstable. One such
solution, with initial condition (0.1, 0.1, 0.1, 0.1)ᵀ, seems to approach the origin, only to veer away with the passage of time, much like a saddle point solution in the phase plane. This behavior is indicated in the
following plot of each component of the solution versus time.
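Because the quartic characteristic polynomial here is credited to a computer, an independent cross-check is easy with the Faddeev–LeVerrier recursion (a pure-Python sketch, using the matrix as reconstructed above):

```python
# Cross-check p(lambda) = (lambda - 2)(lambda + 1)^3 = l^4 + l^3 - 3l^2 - 5l - 2
# for Exercise 6.15 via the Faddeev-LeVerrier coefficient recursion.
A = [[3, -2, -5, 3],
     [16, -6, -17, 9],
     [-14, 5, 15, -8],
     [-19, 8, 23, -13]]
n = 4

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

M = [row[:] for row in A]
coeffs = [1]                      # coefficients of det(lambda I - A), leading first
for k in range(1, n + 1):
    c = -sum(M[i][i] for i in range(n)) / k
    coeffs.append(c)
    if k < n:
        Mc = [[M[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]
        M = matmul(A, Mc)
print(coeffs)   # [1, 1.0, -3.0, -5.0, -2.0]
```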
(Component plot figure omitted.)
6.16. With a computer we find that the eigenvalues of the matrix β3 3
0 β4 β7
β1
β6 4
A=
0
4 0
β3
0 8
2
7 are β3 and β1, the latter having algebraic multiplicity 3. Since all of the eigenvalues are negative, Theorem 6.2
tells us that the origin is an asymptotically stable equilibrium point. This is veriο¬ed by the solution plotted in
the accompanying figure. (Component plot figure omitted.)

6.17. (a) In matrix form,

    (x, y, z)′ = [ −3   0   0
                   −2  −1   0
                    0   0  −2 ] (x, y, z)ᵀ.

Using a computer, the matrix has characteristic polynomial

    p(λ) = (λ + 3)(λ + 2)(λ + 1)
and eigenvalues β3, β2 and β1. A computer also reveals the associated eigenvectors which lead to the
following exponential solutions:

    y1(t) = e^{−3t}(1, 1, 0)ᵀ,  y2(t) = e^{−2t}(0, 0, 1)ᵀ,  and  y3(t) = e^{−t}(0, 1, 0)ᵀ.
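The eigenpairs behind these exponential solutions can be confirmed by direct multiplication (a pure-Python sketch, matrix as in part (a)):

```python
# Direct check of the eigenpairs used in Exercise 6.17: A v = lambda v.
A = [[-3, 0, 0], [-2, -1, 0], [0, 0, -2]]
pairs = [(-3, [1, 1, 0]), (-2, [0, 0, 1]), (-1, [0, 1, 0])]
for lam, v in pairs:
    Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
    assert Av == [lam * x for x in v]
print("all three eigenpairs verified")
```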
These exponential solutions generate the half-line solutions shown in the following figure. Each of the half-line solutions decays to the origin with the passage of time.

(b) We selected initial conditions (1, 0, 1)ᵀ, (−1, 0, 1)ᵀ, (1/2, 1, 1)ᵀ, (−1/2, −1, 1)ᵀ, (1, 0, −1)ᵀ, (−1, 0, −1)ᵀ, (1/2, 1, −1)ᵀ, and (−1/2, −1, −1)ᵀ to craft the portrait in the following figure. (3D phase portrait figure omitted.)

(c) Nodal sink.

6.18. (a) If

    y′ = [ 1  −1   0
           0   2   0
           0   0   3 ] y,

then the coefficient matrix is upper triangular, so the eigenvalues lie on the diagonal, λ1 = 1, λ2 = 2, and λ3 = 3. A computer reveals the associated eigenvectors, and consequently, the exponential solutions

    y1(t) = e^{t}(1, 0, 0)ᵀ,  y2(t) = e^{2t}(−1, 1, 0)ᵀ,  and  y3(t) = e^{3t}(0, 0, 1)ᵀ.

These exponential solutions generate the half-line solutions shown in the following figure.

(b) We selected initial conditions (1, 2, 1)ᵀ, (1, 2, −1)ᵀ, (2, −1, 1)ᵀ, (2, −1, −1)ᵀ, (−1, −2, 1)ᵀ, (−1, −2, −1)ᵀ, (−2, 1, 1)ᵀ, and (−2, 1, −1)ᵀ to craft the portrait in the following figure. Each was scaled by a factor of 1 × 10⁻³. (3D phase portrait figure omitted.)

(c) Nodal source.

6.19.
(a) If

    y′ = [ −1  −10    0
           10   −1    0
            0    0   −1 ] y,

then, using a computer, the coefficient matrix has characteristic polynomial p(λ) = (λ + 1)(λ² + 2λ + 101) and eigenvalues −1, −1 + 10i, and −1 − 10i. A computer also generates associated eigenvectors, leading to the real solution

    y1(t) = e^{−t}(0, 0, 1)ᵀ

and the complex solution

    z(t) = e^{(−1+10i)t}(1, −i, 0)ᵀ
         = e^{−t}(cos 10t + i sin 10t)[(1, 0, 0)ᵀ + i(0, −1, 0)ᵀ]
         = e^{−t}(cos 10t, sin 10t, 0)ᵀ + ie^{−t}(sin 10t, −cos 10t, 0)ᵀ.

This leads to the real solutions

    y2(t) = e^{−t}(cos 10t, sin 10t, 0)ᵀ  and  y3(t) = e^{−t}(sin 10t, −cos 10t, 0)ᵀ.

(b) Any solution starting on the z-axis lies on the half-lines generated by the exponential solution

    y(t) = C1 e^{−t}(0, 0, 1)ᵀ.

Thus, the solution will remain on the z-axis as it decays to the equilibrium point at the origin. In the following image, solutions with initial conditions (0, 0, 1)ᵀ and (0, 0, −1)ᵀ remain on the z-axis and decay to the origin. (3D figure omitted.)

(c) The general solution is
    y(t) = C1 e^{−t} (0, 0, 1)^T + C2 e^{−t} (cos 10t, sin 10t, 0)^T + C3 e^{−t} (sin 10t, −cos 10t, 0)^T.

If a solution starts in the xy-plane with initial condition y(0) = (a, b, 0)^T, then

    (a, b, 0)^T = C1 (0, 0, 1)^T + C2 (1, 0, 0)^T + C3 (0, −1, 0)^T,

leading to C1 = 0, C2 = a, and C3 = −b. Thus, the particular solution is

    y(t) = a e^{−t} (cos 10t, sin 10t, 0)^T − b e^{−t} (sin 10t, −cos 10t, 0)^T,

so these solutions will remain in the xy-plane and spiral inward to the equilibrium point at the origin. This is shown in the following figure, where we have plotted the solution with initial condition (1, 1, 0)^T.
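The spiraling behavior claimed above can be double-checked numerically. The sketch below (assuming numpy and scipy are available) compares the matrix-exponential solution e^{At} y(0) against the closed-form solution derived in this problem; the matrix A and the formula are taken from the text above.

```python
import numpy as np
from scipy.linalg import expm

# Coefficient matrix of the system in this problem
A = np.array([[-1.0, -10.0,  0.0],
              [10.0,  -1.0,  0.0],
              [ 0.0,   0.0, -1.0]])

def closed_form(t, a, b, c):
    """Solution with y(0) = (a, b, c)^T, as derived in parts (c) and (d)."""
    return np.array([np.exp(-t) * (a * np.cos(10 * t) - b * np.sin(10 * t)),
                     np.exp(-t) * (a * np.sin(10 * t) + b * np.cos(10 * t)),
                     c * np.exp(-t)])

# Compare e^{At} y0 with the formula at an arbitrary time
t0, (a, b, c) = 0.7, (1.0, 1.0, 1.0)
numeric = expm(A * t0) @ np.array([a, b, c])
assert np.allclose(numeric, closed_form(t0, a, b, c))
```

With c = 0 the third component stays zero for all t, which is the statement that solutions starting in the xy-plane remain there.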
(d) A solution having initial condition y(0) = (a, b, c)^T, where c ≠ 0, would lead to

    (a, b, c)^T = C1 (0, 0, 1)^T + C2 (1, 0, 0)^T + C3 (0, −1, 0)^T

and C1 = c, C2 = a, and C3 = −b. Thus, the particular solution is

    y(t) = ( e^{−t}(a cos 10t − b sin 10t), e^{−t}(a sin 10t + b cos 10t), c e^{−t} )^T.

We saw in part (c) that if c = 0, solutions spiral into the origin while remaining in the xy-plane. In this case, the z-coordinate decays to zero, so it is reasonable to expect that solutions will spiral while the z-coordinate decays to zero. Solutions with initial conditions (1, 1, 1)^T and (−1, −1, −1)^T are shown in the following figure.

Section 7. Higher Order Linear Equations
7.1. (a) If

    x1(t) = ( e^{3t}, 3e^{3t} )^T,

then

    x1'(t) = ( 3e^{3t}, 9e^{3t} )^T

and

    [ 0  1 ; 3  2 ] ( e^{3t}, 3e^{3t} )^T = ( 3e^{3t}, 9e^{3t} )^T,

so x1 is a solution of

    x' = [ 0  1
           3  2 ] x.

Similarly, if

    x2(t) = ( e^{−t}, −e^{−t} )^T,

then

    x2'(t) = ( −e^{−t}, e^{−t} )^T

and

    [ 0  1 ; 3  2 ] ( e^{−t}, −e^{−t} )^T = ( −e^{−t}, e^{−t} )^T,

so x2 is a solution of the same system. To show independence, we need only show that the functions are independent at one value of t. However,

    x1(0) = (1, 3)^T    and    x2(0) = (1, −1)^T

are clearly independent (x2(0) is not a multiple of x1(0)).

(b) Because

    x(t) = C1 x1(t) + C2 x2(t) = C1 ( e^{3t}, 3e^{3t} )^T + C2 ( e^{−t}, −e^{−t} )^T,

the first component of x(t) is y(t) = C1 e^{3t} + C2 e^{−t}. Thus,

    y' = 3C1 e^{3t} − C2 e^{−t}
    y'' = 9C1 e^{3t} + C2 e^{−t},

and

    y'' − 2y' − 3y = (9C1 e^{3t} + C2 e^{−t}) − 2(3C1 e^{3t} − C2 e^{−t}) − 3(C1 e^{3t} + C2 e^{−t}) = 0.

7.2. (a) If x1(t) = (sin 2t, 2 cos 2t)^T, then x1'(t) = (2 cos 2t, −4 sin 2t)^T and
    [ 0  1 ; −4  0 ] ( sin 2t, 2 cos 2t )^T = ( 2 cos 2t, −4 sin 2t )^T,

so x1 is a solution of

    x' = [ 0  1
          −4  0 ] x.

Similarly, if x2(t) = (cos 2t, −2 sin 2t)^T, then x2'(t) = (−2 sin 2t, −4 cos 2t)^T and

    [ 0  1 ; −4  0 ] ( cos 2t, −2 sin 2t )^T = ( −2 sin 2t, −4 cos 2t )^T,

so x2 is also a solution of the same system. To show independence, we need only show that the functions are independent at one value of t. However,

    x1(0) = (0, 2)^T    and    x2(0) = (1, 0)^T

are clearly independent (x2(0) is not a multiple of x1(0)).

(b) Because

    x(t) = C1 x1(t) + C2 x2(t) = C1 ( sin 2t, 2 cos 2t )^T + C2 ( cos 2t, −2 sin 2t )^T,

the first component of x(t) is y(t) = C1 sin 2t + C2 cos 2t. Thus,

    y' = 2C1 cos 2t − 2C2 sin 2t
    y'' = −4C1 sin 2t − 4C2 cos 2t,

and

    y'' + 4y = (−4C1 sin 2t − 4C2 cos 2t) + 4(C1 sin 2t + C2 cos 2t) = 0.

7.3. If y1(t) = e^t and y2(t) = e^{2t}, suppose that there exist constants c1 and c2 such that
    c1 e^t + c2 e^{2t} = 0

for all t. Then,

    t = 0  ⇒  c1 + c2 = 0
    t = 1  ⇒  c1 e + c2 e² = 0.

Solving the first equation, c1 = −c2, and substituting into the second equation gives

    −c2 e + c2 e² = 0
    c2 (e² − e) = 0.

Because e² − e ≠ 0, this gives c2 = 0, whence c1 = −c2 = 0. Hence, y1 and y2 are independent.

7.4. Suppose y1(t) = e^t cos t and y2(t) = e^t sin t, and there are constants C1 and C2 such that C1 y1(t) + C2 y2(t) = e^t [C1 cos t + C2 sin t] = 0 for all t. Then at t = 0 we have C1 = 0, and at t = π/2 we have C2 e^{π/2} = 0. Hence both constants are 0, so the functions are linearly independent.

7.5. If y1(t) = cos t, y2(t) = sin t, and y3(t) = e^t, suppose that there exist constants c1, c2, and c3 such that
    c1 cos t + c2 sin t + c3 e^t = 0

for all t. Then,

    t = 0    ⇒  c1 + c3 = 0
    t = π/2  ⇒  c2 + c3 e^{π/2} = 0
    t = π    ⇒  −c1 + c3 e^π = 0.

Solving the first equation, c1 = −c3, and substituting this into the third equation gives

    0 = c3 + c3 e^π = c3 (1 + e^π).

Because e^π + 1 ≠ 0, this gives c3 = 0, whence c1 = −c3 = 0. Substituting c3 = 0 into the second equation gives

    0 = c2 + 0·e^{π/2} = c2.

Therefore, y1, y2, and y3 are linearly independent.

7.6.
If y1(t) = e^t, y2(t) = te^t, and y3(t) = t²e^t, suppose that there exist constants C1, C2, and C3 such that

    C1 e^t + C2 te^t + C3 t²e^t = 0

for all t. If t = 1, then

    C1 e + C2 e + C3 e = 0
    (C1 + C2 + C3) e = 0
    C1 + C2 + C3 = 0.

If t = −1, then

    C1 e^{−1} − C2 e^{−1} + C3 e^{−1} = 0
    (C1 − C2 + C3) e^{−1} = 0
    C1 − C2 + C3 = 0.

Finally, if t = 0, then C1 = 0 and the last two equations become

    C2 + C3 = 0
    −C2 + C3 = 0.

Because the coefficient matrix of the system

    [ 1  1 ; −1  1 ] (C2, C3)^T = (0, 0)^T

has determinant D = 2, the coefficient matrix is nonsingular and this last system has the unique solution C2 = C3 = 0. Hence, C1 = C2 = C3 = 0 and the solutions y1(t) = e^t, y2(t) = te^t, and y3(t) = t²e^t are linearly independent.

7.7.
If y1(t) = cos 3t, then

    y1'(t) = −3 sin 3t
    y1''(t) = −9 cos 3t

and

    y1'' + 9y1 = −9 cos 3t + 9 cos 3t = 0.

Similarly, if y2(t) = sin 3t, then

    y2'(t) = 3 cos 3t
    y2''(t) = −9 sin 3t,

and

    y2'' + 9y2 = −9 sin 3t + 9 sin 3t = 0.

Thus, both y1 and y2 are solutions of y'' + 9y = 0. Finally, the Wronskian is

    W(t) = det [ y1   y2
                 y1'  y2' ]
         = det [ cos 3t     sin 3t
                 −3 sin 3t   3 cos 3t ]
         = 3 cos² 3t + 3 sin² 3t
         = 3,

which is nonzero for all t. Hence, the solutions y1 and y2 are linearly independent.

7.8.
If y1(t) = e^{−10t} and y2(t) = e^t, we have y1'(t) = −10e^{−10t} and y1''(t) = 100e^{−10t}. Hence y1'' + 9y1' − 10y1 = (100 − 90 − 10)e^{−10t} = 0, so y1 is a solution. Similarly, y2'(t) = y2''(t) = e^t, so y2'' + 9y2' − 10y2 = (1 + 9 − 10)e^t = 0, so y2 is a solution. We have y1(t) = e^{−10t} = e^{−11t} e^t = e^{−11t} y2(t). Since e^{−11t} is not constant, the functions are linearly independent, and therefore form a fundamental set of solutions.

7.9.
If y1(t) = e^{2t}, then

    y1'(t) = 2e^{2t}
    y1''(t) = 4e^{2t}

and

    y1'' − 4y1' + 4y1 = 4e^{2t} − 8e^{2t} + 4e^{2t} = 0.

Similarly, if y2(t) = te^{2t}, then

    y2'(t) = e^{2t}(2t + 1)
    y2''(t) = e^{2t}(4t + 4),

and

    y2'' − 4y2' + 4y2 = e^{2t}(4t + 4) − 4e^{2t}(2t + 1) + 4te^{2t}
                      = e^{2t}(4t + 4 − 8t − 4 + 4t)
                      = 0.

Thus, both y1 and y2 are solutions of y'' − 4y' + 4y = 0. Finally, the Wronskian is

    W(t) = det [ y1   y2
                 y1'  y2' ]
         = det [ e^{2t}    te^{2t}
                 2e^{2t}   e^{2t}(2t + 1) ]
         = e^{4t}(2t + 1) − 2te^{4t}
         = e^{4t},

which is nonzero for all t. Hence, the solutions y1 and y2 are linearly independent.

7.10.
If y1(t) = cos 3t, then

    y1' = −3 sin 3t
    y1'' = −9 cos 3t
    y1''' = 27 sin 3t.

Thus,

    y1''' − 3y1'' + 9y1' − 27y1 = 27 sin 3t − 3(−9 cos 3t) + 9(−3 sin 3t) − 27 cos 3t = 0

and y1 is a solution of y''' − 3y'' + 9y' − 27y = 0. In a similar manner, y2(t) = sin 3t and y3(t) = e^{3t} are also solutions. Finally, the Wronskian is

    W(t) = det [ y1    y2    y3
                 y1'   y2'   y3'
                 y1''  y2''  y3'' ]
         = det [ cos 3t      sin 3t      e^{3t}
                 −3 sin 3t    3 cos 3t   3e^{3t}
                 −9 cos 3t   −9 sin 3t   9e^{3t} ].

Using a computer, W(t) = 54e^{3t}, which is never zero. Therefore, the solutions y1, y2, and y3 are linearly independent.

7.11.
If y1(t) = e^t, then

    y1''' − 3y1'' + 3y1' − y1 = e^t − 3e^t + 3e^t − e^t = 0.

If y2(t) = te^t, then

    y2' = (t + 1)e^t
    y2'' = (t + 2)e^t
    y2''' = (t + 3)e^t

and

    y2''' − 3y2'' + 3y2' − y2 = (t + 3)e^t − 3(t + 2)e^t + 3(t + 1)e^t − te^t
                              = e^t (t + 3 − 3t − 6 + 3t + 3 − t)
                              = 0.

If y3(t) = t²e^t, then

    y3' = (t² + 2t)e^t
    y3'' = (t² + 4t + 2)e^t
    y3''' = (t² + 6t + 6)e^t,

and

    y3''' − 3y3'' + 3y3' − y3 = (t² + 6t + 6)e^t − 3(t² + 4t + 2)e^t + 3(t² + 2t)e^t − t²e^t
                              = e^t (t² + 6t + 6 − 3t² − 12t − 6 + 3t² + 6t − t²)
                              = 0.

Thus, y1, y2, and y3 are solutions of the equation y''' − 3y'' + 3y' − y = 0. Finally, the Wronskian is

    W(t) = det [ y1    y2    y3
                 y1'   y2'   y3'
                 y1''  y2''  y3'' ]
         = det [ e^t   te^t          t²e^t
                 e^t   (t + 1)e^t    (t² + 2t)e^t
                 e^t   (t + 2)e^t    (t² + 4t + 2)e^t ].

Using a computer, W(t) = 2e^{3t}, which is never zero. Therefore, the solutions y1, y2, and y3 are linearly independent.

7.12.
If y1 = cos 3t, then

    y1' = −3 sin 3t
    y1'' = −9 cos 3t
    y1''' = 27 sin 3t
    y1^{(4)} = 81 cos 3t.

Thus,

    y1^{(4)} + 13y1'' + 36y1 = 81 cos 3t + 13(−9 cos 3t) + 36 cos 3t = 0

and y1 is a solution of y^{(4)} + 13y'' + 36y = 0. In a similar manner, y2 = sin 3t, y3 = cos 2t, and y4 = sin 2t are also solutions. Finally, the Wronskian is

    W(t) = det [ y1     y2     y3     y4
                 y1'    y2'    y3'    y4'
                 y1''   y2''   y3''   y4''
                 y1'''  y2'''  y3'''  y4''' ]
         = det [ cos 3t       sin 3t       cos 2t      sin 2t
                 −3 sin 3t     3 cos 3t    −2 sin 2t    2 cos 2t
                 −9 cos 3t    −9 sin 3t    −4 cos 2t   −4 sin 2t
                 27 sin 3t   −27 cos 3t     8 sin 2t   −8 cos 2t ].

Using a computer, W(t) = 150, so the solutions y1, y2, y3, and y4 are linearly independent.

7.13.
(a) If y = e^{λt}, then

    y' = λe^{λt}
    y'' = λ²e^{λt}
    y''' = λ³e^{λt}.

Substituting these results into y''' + ay'' + by' + cy = 0 gives

    λ³e^{λt} + aλ²e^{λt} + bλe^{λt} + ce^{λt} = 0
    e^{λt}(λ³ + aλ² + bλ + c) = 0.

Because e^{λt} can never equal zero, we must have

    λ³ + aλ² + bλ + c = 0.

(b) If

    y''' = −ay'' − by' − cy,

let x1 = y, x2 = y', and x3 = y''. Then

    x1' = x2
    x2' = x3
    x3' = −ax3 − bx2 − cx1.

In matrix form, x' = Ax with x = (x1, x2, x3)^T, and if

    A = [  0   1   0
           0   0   1
          −c  −b  −a ],

the characteristic polynomial is

    p(λ) = det(A − λI)
         = det [ −λ   1    0
                  0  −λ    1
                 −c  −b   −a − λ ].

Expanding across the first row,

    p(λ) = −λ det [ −λ   1 ; −b  −a − λ ] − 1 · det [ 0   1 ; −c  −a − λ ]
         = −λ(λ² + aλ + b) − 1(c)
         = −λ³ − aλ² − bλ − c.
7.14. The characteristic polynomial of the equation y''' − 2y'' − y' + 2y = 0 is p(λ) = λ³ − 2λ² − λ + 2. Notice that 2 is a root. Hence the polynomial factors as p(λ) = (λ − 2)(λ² − 1) = (λ − 2)(λ − 1)(λ + 1). Consequently, the roots are −1, 1, and 2. We have the exponential solutions y1(t) = e^{−t}, y2(t) = e^t, and y3(t) = e^{2t}. Since the roots are distinct, these solutions are linearly independent, and therefore form a fundamental set of solutions.

7.15. If y''' − 3y'' − 4y' + 12y = 0, then the characteristic equation factors:
    λ³ − 3λ² − 4λ + 12 = 0
    λ²(λ − 3) − 4(λ − 3) = 0
    (λ + 2)(λ − 2)(λ − 3) = 0.

Thus, the characteristic equation has roots −2, 2, and 3, leading to the general solution

    y(t) = C1 e^{−2t} + C2 e^{2t} + C3 e^{3t}.

7.16.
If y^{(4)} − 5y'' + 4y = 0, then the characteristic equation factors:

    λ⁴ − 5λ² + 4 = 0
    (λ² − 4)(λ² − 1) = 0
    (λ + 2)(λ − 2)(λ + 1)(λ − 1) = 0.

Thus, the characteristic equation has roots −2, −1, 1, and 2, leading to the general solution

    y(t) = C1 e^{−2t} + C2 e^{2t} + C3 e^{−t} + C4 e^t.

7.17. If y^{(4)} − 13y'' + 36y = 0, then the characteristic equation factors:
    λ⁴ − 13λ² + 36 = 0
    (λ² − 4)(λ² − 9) = 0
    (λ + 2)(λ − 2)(λ + 3)(λ − 3) = 0.

Thus, the characteristic equation has roots −3, −2, 2, and 3, leading to the general solution

    y(t) = C1 e^{−3t} + C2 e^{−2t} + C3 e^{2t} + C4 e^{3t}.

7.18. If y''' + 2y'' − 5y' − 6y = 0, then the characteristic polynomial is p(λ) = λ³ + 2λ² − 5λ − 6. Note that −1 is a root of p, so
    p(λ) = (λ + 1)(λ² + λ − 6)
         = (λ + 1)(λ + 3)(λ − 2).

Thus, the characteristic polynomial has roots −3, −1, and 2, leading to the general solution

    y(t) = C1 e^{−t} + C2 e^{−3t} + C3 e^{2t}.

7.19. If y''' − 4y'' − 11y' + 30y = 0, then the characteristic equation is
    λ³ − 4λ² − 11λ + 30 = 0.

A plot of the characteristic polynomial (computer or calculator) reveals possible roots; it suggests that −3 is a root, and division by λ + 3 confirms that −3 is a root and λ + 3 is a factor:

    (λ + 3)(λ² − 7λ + 10) = 0
    (λ + 3)(λ − 2)(λ − 5) = 0.

Thus, the roots are −3, 2, and 5, and the general solution is

    y(t) = C1 e^{−3t} + C2 e^{2t} + C3 e^{5t}.

7.20. If y^{(5)} + 3y^{(4)} − 5y''' − 15y'' + 4y' + 12y = 0, then the characteristic equation is
    λ⁵ + 3λ⁴ − 5λ³ − 15λ² + 4λ + 12 = 0.

A plot of the characteristic polynomial (computer or calculator) suggests a root at −3. Long (or synthetic) division reveals

    (λ + 3)(λ⁴ − 5λ² + 4) = 0
    (λ + 3)(λ² − 4)(λ² − 1) = 0
    (λ + 3)(λ + 2)(λ − 2)(λ + 1)(λ − 1) = 0.

Thus, the roots of the characteristic equation are −3, −2, −1, 1, and 2, and the general solution is

    y(t) = C1 e^{−3t} + C2 e^{−2t} + C3 e^{−t} + C4 e^t + C5 e^{2t}.
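When a plot only suggests candidate roots, a numerical root finder settles the question. A minimal sketch with numpy, applied to the quintic of 7.20:

```python
import numpy as np

# Coefficients of λ^5 + 3λ^4 − 5λ^3 − 15λ^2 + 4λ + 12 (problem 7.20)
coeffs = [1, 3, -5, -15, 4, 12]
roots = np.sort(np.roots(coeffs).real)

# The factorization (λ+3)(λ+2)(λ−2)(λ+1)(λ−1) predicts these roots
assert np.allclose(roots, [-3, -2, -1, 1, 2])
```

`np.roots` works from the eigenvalues of the companion matrix, so it confirms the hand factorization without any division steps.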
7.21. If y^{(5)} − 4y^{(4)} − 13y''' + 52y'' + 36y' − 144y = 0, then the characteristic equation is

    λ⁵ − 4λ⁴ − 13λ³ + 52λ² + 36λ − 144 = 0.

A plot of the characteristic polynomial (computer or calculator) suggests a root at −3. Long (or synthetic) division reveals

    (λ + 3)(λ⁴ − 7λ³ + 8λ² + 28λ − 48) = 0.

The plot suggests a root at −2. Again, division reveals

    (λ + 3)(λ + 2)(λ³ − 9λ² + 26λ − 24) = 0.

The plot suggests a root at 2. Again, division reveals

    (λ + 3)(λ + 2)(λ − 2)(λ² − 7λ + 12) = 0
    (λ + 3)(λ + 2)(λ − 2)(λ − 3)(λ − 4) = 0.

Thus, the roots of the characteristic equation are −3, −2, 2, 3, and 4, and the general solution is

    y(t) = C1 e^{−3t} + C2 e^{−2t} + C3 e^{2t} + C4 e^{3t} + C5 e^{4t}.

7.22. If y''' − 3y' + 2y = 0, the characteristic equation is
    λ³ − 3λ + 2 = 0.

The plot of the characteristic polynomial suggests a root at λ = −2. Division by λ + 2 reveals

    (λ + 2)(λ² − 2λ + 1) = 0
    (λ + 2)(λ − 1)² = 0.

Thus, the roots are −2 and 1, with the latter having multiplicity 2. Therefore, the general solution is

    y(t) = C1 e^{−2t} + C2 e^t + C3 te^t.
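The extra factor of t contributed by a repeated root can be verified symbolically. A minimal sympy sketch for the equation of 7.22:

```python
import sympy as sp

t = sp.symbols('t')
# λ = 1 is a double root of λ^3 − 3λ + 2, so te^t should also solve the ODE
y = t * sp.exp(t)
residual = sp.diff(y, t, 3) - 3 * sp.diff(y, t) + 2 * y
assert sp.simplify(residual) == 0
```

The same check with y = t²e^t fails, as expected, since the multiplicity of λ = 1 is only 2.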
7.23. If y''' + y'' − 8y' − 12y = 0, then the characteristic equation is

    λ³ + λ² − 8λ − 12 = 0.

The plot of the characteristic polynomial suggests a root at −2. Division shows that

    (λ + 2)(λ² − λ − 6) = 0
    (λ + 2)(λ + 2)(λ − 3) = 0.

Hence there are two roots, −2 and 3, with the former having algebraic multiplicity 2. Thus, the general solution is

    y(t) = C1 e^{−2t} + C2 te^{−2t} + C3 e^{3t}.

7.24. If y''' + 6y'' + 12y' + 8y = 0, then the characteristic equation is
    λ³ + 6λ² + 12λ + 8 = 0.

A plot of the characteristic polynomial suggests a multiple root at −2. Division by λ + 2 reveals that

    (λ + 2)(λ² + 4λ + 4) = 0
    (λ + 2)³ = 0.

Thus, λ = −2 is a root of algebraic multiplicity 3. Therefore, the general solution is

    y(t) = C1 e^{−2t} + C2 te^{−2t} + C3 t²e^{−2t}.

7.25. If y''' + 3y'' + 3y' + y = 0, then the characteristic equation is
    λ³ + 3λ² + 3λ + 1 = 0.

The plot of the characteristic polynomial suggests a root at −1. Division shows that

    (λ + 1)(λ² + 2λ + 1) = 0
    (λ + 1)³ = 0.

Thus, −1 is a root of algebraic multiplicity 3. Therefore, the general solution is

    y(t) = C1 e^{−t} + C2 te^{−t} + C3 t²e^{−t}.
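The binomial pattern in 7.25 is easy to confirm with a computer algebra system (a sympy sketch):

```python
import sympy as sp

lam = sp.symbols('lam')
# Characteristic polynomial of y''' + 3y'' + 3y' + y = 0
p = lam**3 + 3*lam**2 + 3*lam + 1
assert sp.factor(p) == (lam + 1)**3
```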
7.26. If y^{(5)} + 3y^{(4)} − 6y''' − 10y'' + 21y' − 9y = 0, then the characteristic equation is

    λ⁵ + 3λ⁴ − 6λ³ − 10λ² + 21λ − 9 = 0.

A plot of the characteristic polynomial suggests multiple roots at −3 and 1. Repeated division by λ − 1 reveals

    (λ − 1)³(λ² + 6λ + 9) = 0
    (λ − 1)³(λ + 3)² = 0.

Thus, −3 and 1 are roots, with algebraic multiplicities 2 and 3, respectively. Therefore, the general solution is

    y(t) = C1 e^{−3t} + C2 te^{−3t} + C3 e^t + C4 te^t + C5 t²e^t.
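sympy's `roots` function reports each root of 7.26's characteristic polynomial together with its algebraic multiplicity, confirming the factorization above (a minimal sketch):

```python
import sympy as sp

lam = sp.symbols('lam')
p = lam**5 + 3*lam**4 - 6*lam**3 - 10*lam**2 + 21*lam - 9
# Expect root 1 with multiplicity 3 and root −3 with multiplicity 2
assert sp.roots(p) == {1: 3, -3: 2}
```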
7.27. If y^{(5)} − y^{(4)} − 6y''' + 14y'' − 11y' + 3y = 0, then the characteristic equation is

    λ⁵ − λ⁴ − 6λ³ + 14λ² − 11λ + 3 = 0.

The plot of the characteristic polynomial suggests a multiple root at 1. Repeated division by λ − 1 reveals that

    (λ − 1)⁴(λ + 3) = 0.

Thus, the roots are 1 and −3, with the former having algebraic multiplicity 4. Therefore, the general solution is

    y(t) = C1 e^t + C2 te^t + C3 t²e^t + C4 t³e^t + C5 e^{−3t}.
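The repeated division by λ − 1 described in 7.27 can be sketched with numpy's polynomial division; the loop below simply counts how many times (λ − 1) divides cleanly:

```python
import numpy as np

# λ^5 − λ^4 − 6λ^3 + 14λ^2 − 11λ + 3 from problem 7.27
p = np.array([1.0, -1, -6, 14, -11, 3])
count = 0
while True:
    q, r = np.polydiv(p, [1.0, -1.0])  # divide by (λ − 1)
    if not np.allclose(r, 0):
        break
    p, count = q, count + 1

# (λ − 1)^4 (λ + 3): four clean divisions, final quotient λ + 3
assert count == 4
assert np.allclose(p, [1, 3])
```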
7.28. If y''' − y'' + 4y' − 4y = 0, then the characteristic equation is

    λ³ − λ² + 4λ − 4 = 0.

A plot of the characteristic polynomial reveals a possible root at λ = 1. Division by λ − 1 reveals

    (λ − 1)(λ² + 4) = 0.

Thus, the zeros are 1, −2i, and 2i, and the general solution is

    y(t) = C1 e^t + C2 cos 2t + C3 sin 2t.
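Each root of 7.28 yields a genuine solution, including the real forms cos 2t and sin 2t coming from the pair λ = ±2i. A quick symbolic check with sympy:

```python
import sympy as sp

t = sp.symbols('t')
# Candidate solutions of y''' − y'' + 4y' − 4y = 0 from the roots 1, ±2i
for y in (sp.exp(t), sp.cos(2*t), sp.sin(2*t)):
    residual = sp.diff(y, t, 3) - sp.diff(y, t, 2) + 4*sp.diff(y, t) - 4*y
    assert sp.simplify(residual) == 0
```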
7.29. If y''' − y'' + 2y = 0, then the characteristic equation is

    λ³ − λ² + 2 = 0.

A plot of the characteristic polynomial suggests a root at −1. Division reveals

    (λ + 1)(λ² − 2λ + 2) = 0.

The quadratic formula provides the remaining roots, 1 ± i. Thus, the general solution is

    y(t) = C1 e^{−t} + C2 e^t cos t + C3 e^t sin t.
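The general solution of 7.29 can be checked by substituting it back into the equation with symbolic constants (a minimal sympy sketch):

```python
import sympy as sp

t, C1, C2, C3 = sp.symbols('t C1 C2 C3')
# Claimed general solution of y''' − y'' + 2y = 0
y = C1*sp.exp(-t) + C2*sp.exp(t)*sp.cos(t) + C3*sp.exp(t)*sp.sin(t)
residual = sp.diff(y, t, 3) - sp.diff(y, t, 2) + 2*y
assert sp.simplify(residual) == 0
```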
7.30. If y^{(4)} + 17y'' + 16y = 0, then the characteristic equation is

    λ⁴ + 17λ² + 16 = 0.

This factors as

    (λ² + 16)(λ² + 1) = 0,

so we have zeros ±4i and ±i. Consequently, the general solution is

    y(t) = C1 cos 4t + C2 sin 4t + C3 cos t + C4 sin t.

7.31. If y^{(4)} + 2y'' + y = 0, then the characteristic equation is
    λ⁴ + 2λ² + 1 = 0,

which easily factors as

    (λ² + 1)² = 0.

Thus, both i and −i are roots of multiplicity 2. Therefore, the general solution is

    y(t) = C1 cos t + C2 t cos t + C3 sin t + C4 t sin t.

7.32. If y^{(5)} − 9y^{(4)} + 34y''' − 66y'' + 65y' − 25y = 0, then the characteristic equation is
    λ⁵ − 9λ⁴ + 34λ³ − 66λ² + 65λ − 25 = 0.

A plot of the characteristic polynomial reveals a possible root at 1. Division by λ − 1 reveals

    (λ − 1)(λ⁴ − 8λ³ + 26λ² − 40λ + 25) = 0.

A computer is used to find that 2 + i and 2 − i are zeros of the second factor, each with algebraic multiplicity 2. Thus,

    y(t) = C1 e^{2t} cos t + C2 e^{2t} sin t + C3 te^{2t} cos t + C4 te^{2t} sin t + C5 e^t

is the general solution.

7.33. If y^{(6)} + 3y^{(4)} + 3y'' + y = 0, then the characteristic equation is
    λ⁶ + 3λ⁴ + 3λ² + 1 = 0.

The form of the characteristic equation suggests the binomial theorem and

    (λ² + 1)³ = 0.

Thus, i and −i are roots, each having algebraic multiplicity 3. Therefore, the general solution is

    y(t) = C1 cos t + C2 t cos t + C3 t² cos t + C4 sin t + C5 t sin t + C6 t² sin t.

7.34. If y'' − 2y' − 3y = 0, then the characteristic equation is
    λ² − 2λ − 3 = (λ − 3)(λ + 1) = 0.

Thus, the zeros are 3 and −1, and the general solution is

    y(t) = C1 e^{3t} + C2 e^{−t}.

The initial condition y(0) = 4 provides

    4 = C1 + C2.

Differentiating, y'(t) = 3C1 e^{3t} − C2 e^{−t}. The initial condition y'(0) = 0 provides

    0 = 3C1 − C2.

Thus, C1 = 1 and C2 = 3, and

    y(t) = e^{3t} + 3e^{−t}.

7.35. If y'' + 2y' + 5y = 0, then the characteristic equation is
    λ² + 2λ + 5 = 0.

The quadratic formula provides the roots, −1 ± 2i. Thus, the general solution is

    y(t) = C1 e^{−t} cos 2t + C2 e^{−t} sin 2t.

Substituting the initial condition y(0) = 2 provides C1 = 2. The derivative of the general solution is

    y'(t) = C1 e^{−t}(−cos 2t − 2 sin 2t) + C2 e^{−t}(−sin 2t + 2 cos 2t).

The initial condition y'(0) = 0 provides

    0 = −C1 + 2C2,

which in turn, because C1 = 2, generates C2 = 1. Thus, the solution of the initial value problem is

    y(t) = 2e^{−t} cos 2t + e^{−t} sin 2t.

7.36. If y'' + 4y' + 4y = 0, then the characteristic equation is
    λ² + 4λ + 4 = (λ + 2)² = 0.

Thus, λ = −2 is a zero of algebraic multiplicity 2 and the general solution is

    y(t) = C1 e^{−2t} + C2 te^{−2t} = (C1 + C2 t)e^{−2t}.

The initial condition y(0) = 2 provides C1 = 2. Differentiating,

    y'(t) = C2 e^{−2t} − 2(C1 + C2 t)e^{−2t}.

The initial condition y'(0) = −1 provides

    −1 = C2 − 2C1.

Thus, C2 = 3 and the solution is

    y(t) = (2 + 3t)e^{−2t}.

7.37. If y'' − 2y' + y = 0, then the characteristic equation is
    λ² − 2λ + 1 = (λ − 1)² = 0.

Thus, 1 is a single root of multiplicity 2, and the general solution is

    y(t) = C1 e^t + C2 te^t.

The initial condition y(0) = 1 provides C1 = 1. The derivative of the general solution is

    y'(t) = C1 e^t + C2 (t + 1)e^t.

The initial condition y'(0) = 0 provides

    0 = C1 + C2,

which in turn, because C1 = 1, generates C2 = −1. Therefore, the solution of the initial value problem is

    y(t) = e^t − te^t.

7.38. If y''' − 4y'' − 7y' + 10y = 0, then the characteristic equation is
    λ³ − 4λ² − 7λ + 10 = 0.

The plot of the characteristic polynomial suggests a root at −2. Division by λ + 2 reveals

    (λ + 2)(λ² − 6λ + 5) = 0
    (λ + 2)(λ − 5)(λ − 1) = 0.

Thus, the roots are −2, 1, and 5, and the general solution and its derivatives are

    y(t) = C1 e^{−2t} + C2 e^t + C3 e^{5t}
    y'(t) = −2C1 e^{−2t} + C2 e^t + 5C3 e^{5t}
    y''(t) = 4C1 e^{−2t} + C2 e^t + 25C3 e^{5t}.

The initial conditions y(0) = 1, y'(0) = 0, and y''(0) = −1 provide

    1 = C1 + C2 + C3
    0 = −2C1 + C2 + 5C3
    −1 = 4C1 + C2 + 25C3.

The augmented matrix reduces,

    [  1  1   1 |  1        [ 1 0 0 |  4/21
      −2  1   5 |  0    →     0 1 0 | 11/12
       4  1  25 | −1 ]        0 0 1 | −3/28 ],

revealing C1 = 4/21, C2 = 11/12, and C3 = −3/28. Thus, the solution is

    y(t) = (4/21)e^{−2t} + (11/12)e^t − (3/28)e^{5t}.

7.39. If y''' − 7y'' + 11y' − 5y = 0, then the characteristic equation is
    λ³ − 7λ² + 11λ − 5 = 0.

The plot of the characteristic polynomial suggests a root at 1. Division reveals

    (λ − 1)(λ² − 6λ + 5) = 0
    (λ − 1)²(λ − 5) = 0.

Thus, the roots are 1 and 5, the former having algebraic multiplicity 2, so the general solution is

    y(t) = C1 e^t + C2 te^t + C3 e^{5t}.

The initial condition y(0) = −1 provides

    C1 + C3 = −1.

The derivative of the general solution is

    y'(t) = C1 e^t + C2 e^t (t + 1) + 5C3 e^{5t}.

The initial condition y'(0) = 1 provides

    C1 + C2 + 5C3 = 1.

The second derivative of the general solution is

    y''(t) = C1 e^t + C2 e^t (t + 2) + 25C3 e^{5t}.

The initial condition y''(0) = 0 provides

    C1 + 2C2 + 25C3 = 0.

The augmented matrix reduces,

    [ 1  0   1 | −1        [ 1 0 0 | −13/16
      1  1   5 |  1    →     0 1 0 |  11/4
      1  2  25 |  0 ]        0 0 1 |  −3/16 ],

providing C1 = −13/16, C2 = 11/4, and C3 = −3/16. Therefore, the solution of the initial value problem is

    y(t) = −(13/16)e^t + (11/4)te^t − (3/16)e^{5t}.

7.40. If y''' − 2y' + 4y = 0, then the characteristic equation is
    λ³ − 2λ + 4 = 0.

A plot of the characteristic polynomial suggests a root at −2. Division by λ + 2 reveals

    (λ + 2)(λ² − 2λ + 2) = 0,

and the quadratic formula produces the zeros of the second factor, 1 ± i. Therefore, the general solution is

    y(t) = C1 e^{−2t} + C2 e^t cos t + C3 e^t sin t.

The initial condition y(0) = 1 provides

    1 = C1 + C2.

Differentiating,

    y'(t) = −2C1 e^{−2t} + C2 e^t cos t − C2 e^t sin t + C3 e^t sin t + C3 e^t cos t.

The initial condition y'(0) = −1 provides

    −1 = −2C1 + C2 + C3.

Differentiating again,

    y''(t) = 4C1 e^{−2t} + C2 e^t (cos t − sin t) + C2 e^t (−sin t − cos t)
            + C3 e^t (sin t + cos t) + C3 e^t (cos t − sin t).

The initial condition y''(0) = 0 provides

    0 = 4C1 + 2C3.

The augmented matrix reduces,

    [  1  1  0 |  1        [ 1 0 0 |  2/5
      −2  1  1 | −1    →     0 1 0 |  3/5
       4  0  2 |  0 ]        0 0 1 | −4/5 ],

so C1 = 2/5, C2 = 3/5, C3 = −4/5, and the solution is

    y(t) = (2/5)e^{−2t} + (3/5)e^t cos t − (4/5)e^t sin t.

7.41. If y''' − 6y'' + 12y' − 8y = 0, then the characteristic equation is
    λ³ − 6λ² + 12λ − 8 = 0.

The plot of the characteristic polynomial suggests a multiple root at 2. Repeated division by λ − 2 reveals

    (λ − 2)³ = 0.

Thus, the characteristic polynomial has a single root, 2, with algebraic multiplicity 3, so the general solution is

    y(t) = C1 e^{2t} + C2 te^{2t} + C3 t²e^{2t}.

The initial condition y(0) = −2 provides C1 = −2. The derivative of the general solution is

    y'(t) = 2C1 e^{2t} + C2 e^{2t}(2t + 1) + C3 e^{2t}(2t² + 2t).

The initial condition y'(0) = 0 provides

    2C1 + C2 = 0,

which in turn, because C1 = −2, provides C2 = 4. The second derivative of the general solution is

    y''(t) = 4C1 e^{2t} + C2 e^{2t}(4 + 4t) + C3 e^{2t}(4t² + 8t + 2).

The initial condition y''(0) = 2 provides

    4C1 + 4C2 + 2C3 = 2,

which in turn, because C1 = −2 and C2 = 4, provides C3 = −3. Therefore, the solution of the initial value problem is

    y(t) = −2e^{2t} + 4te^{2t} − 3t²e^{2t}.

7.42. If y''' − 3y' + 52y = 0, then the characteristic equation is
    λ³ − 3λ + 52 = 0.

The plot of the characteristic polynomial suggests a root at −4. Division by λ + 4 reveals

    (λ + 4)(λ² − 4λ + 13) = 0.

The quadratic formula reveals the zeros of the second factor, 2 ± 3i. Thus, the general solution is

    y(t) = C1 e^{−4t} + e^{2t}(C2 cos 3t + C3 sin 3t).

The initial condition y(0) = 0 provides

    0 = C1 + C2.

Differentiate:

    y'(t) = −4C1 e^{−4t} + e^{2t}(−3C2 sin 3t + 3C3 cos 3t) + 2e^{2t}(C2 cos 3t + C3 sin 3t)
          = −4C1 e^{−4t} + e^{2t}[(3C3 + 2C2) cos 3t + (2C3 − 3C2) sin 3t].

The initial condition y'(0) = −1 provides

    −1 = −4C1 + 2C2 + 3C3.

Differentiate again:

    y''(t) = 16C1 e^{−4t} + e^{2t}[(−9C3 − 6C2) sin 3t + (6C3 − 9C2) cos 3t]
            + 2e^{2t}[(3C3 + 2C2) cos 3t + (2C3 − 3C2) sin 3t].

The initial condition y''(0) = 2 provides

    2 = 16C1 + (6C3 − 9C2) + (6C3 + 4C2)
    2 = 16C1 − 5C2 + 12C3.

The augmented matrix reduces,

    [  1   1   0 |  0        [ 1 0 0 |  2/15
      −4   2   3 | −1    →     0 1 0 | −2/15
      16  −5  12 |  2 ]        0 0 1 | −1/15 ].

Thus, C1 = 2/15, C2 = −2/15, C3 = −1/15, and the solution is

    y(t) = (2/15)e^{−4t} + e^{2t}( −(2/15) cos 3t − (1/15) sin 3t ).

7.43. If y^{(4)} + 8y'' + 16y = 0, then the characteristic equation is
    λ⁴ + 8λ² + 16 = (λ² + 4)² = 0.

Therefore, the roots are ±2i, each of which has algebraic multiplicity 2, and the general solution is

    y(t) = C1 cos 2t + C2 t cos 2t + C3 sin 2t + C4 t sin 2t.

The initial condition y(0) = 0 provides C1 = 0. The derivative of the general solution is

    y'(t) = −2C1 sin 2t + C2 (cos 2t − 2t sin 2t) + 2C3 cos 2t + C4 (sin 2t + 2t cos 2t).

The initial condition y'(0) = −1 generates C2 + 2C3 = −1. The second derivative of the general solution is

    y''(t) = −4C1 cos 2t + C2 (−4 sin 2t − 4t cos 2t) − 4C3 sin 2t + C4 (4 cos 2t − 4t sin 2t).

The initial condition y''(0) = 2 generates −4C1 + 4C4 = 2. The third derivative of the general solution is

    y'''(t) = 8C1 sin 2t + C2 (−12 cos 2t + 8t sin 2t) − 8C3 cos 2t + C4 (−12 sin 2t − 8t cos 2t).

The initial condition y'''(0) = 0 generates −12C2 − 8C3 = 0. The augmented matrix

    [  1    0   0  0 |  0
       0    1   2  0 | −1
      −4    0   0  4 |  2
       0  −12  −8  0 |  0 ]

reduces to

    [ 1 0 0 0 |  0
      0 1 0 0 |  1/2
      0 0 1 0 | −3/4
      0 0 0 1 |  1/2 ].

Thus, C1 = 0, C2 = 1/2, C3 = −3/4, and C4 = 1/2. Therefore, the solution of the initial value problem is

    y(t) = (1/2)t cos 2t − (3/4) sin 2t + (1/2)t sin 2t.

7.44. Recall that for a = (a1, a2, ..., aq)^T ∈ R^q, we define
    ya(t) = (a1 + a2 t + ··· + aq t^{q−1}) e^{λt}.

Now, let b = (b1, b2, ..., bq)^T ∈ R^q and α, β ∈ R. Then

    αa + βb = (αa1 + βb1, αa2 + βb2, ..., αaq + βbq)^T

and

    y_{αa+βb}(t) = [ (αa1 + βb1) + (αa2 + βb2)t + ··· + (αaq + βbq)t^{q−1} ] e^{λt}
                 = (αa1 + αa2 t + ··· + αaq t^{q−1}) e^{λt} + (βb1 + βb2 t + ··· + βbq t^{q−1}) e^{λt}
                 = α (a1 + a2 t + ··· + aq t^{q−1}) e^{λt} + β (b1 + b2 t + ··· + bq t^{q−1}) e^{λt}
                 = α ya(t) + β yb(t).

Thus, y_{αa+βb} = α ya + β yb.

7.45. First, we must show that the set V is closed under addition. Let a and b be elements of V ⊂ R^q. Then, ya
and yb are solutions of

    y^{(n)} + a1 y^{(n−1)} + ··· + a_{n−1} y' + a_n y = 0.    (∗)

Recall that the set V ⊂ R^q is defined by

    V = {a : ya is a solution of (∗)}.

Now, ya + yb, being a linear combination of solutions of (∗), is also a solution of (∗), and by (7.31),

    ya + yb = y_{a+b}.

Therefore, y_{a+b} is a solution of (∗) and a + b ∈ V, so V is closed under addition. Next, we must show that V is closed under scalar multiplication. Let a ∈ V and let α ∈ R be a scalar. Then, by definition of V, ya is a solution of (∗). However, αya, being a linear combination of solutions of (∗), is also a solution of (∗). By (7.31),

    αya = y_{αa}.

Hence, y_{αa} is a solution of (∗) and αa ∈ V. Therefore, V is closed under scalar multiplication and is a subspace of R^q.

7.46.
Recall that

    yj(t) = Pj(t) e^{λt}

are independent solutions of

    y^{(n)} + a1 y^{(n−1)} + ··· + a_{n−1} y' + a_n y = 0    (∗∗)

for j = 1, 2, ..., q. Recall that a ∈ V ⊂ R^q iff ya is a solution of (∗∗). For each j = 1, 2, ..., q, let aj be the vector of coefficients of the polynomial Pj(t). Thus, y_{aj} = yj for j = 1, 2, ..., q. Suppose that

    C1 a1 + C2 a2 + ··· + Cq aq = 0.

Then,

    y_{C1 a1 + C2 a2 + ··· + Cq aq} = y0,

so

    C1 y_{a1} + C2 y_{a2} + ··· + Cq y_{aq} = y0.

But yj = y_{aj}, so

    C1 y1 + C2 y2 + ··· + Cq yq = y0.

Note that y0 is the zero function. However, the yj's are given as q independent solutions. Thus, C1 = C2 = ··· = Cq = 0 and a1, a2, ..., aq are independent.

Section 8. Inhomogeneous Linear Systems
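The computations in this section all follow the variation of parameters recipe yp = Y(t) ∫ Y^{−1}(t) f(t) dt. A minimal sympy sketch of that recipe, using the matrix and forcing term of problem 8.1 below, verifies that the resulting yp really satisfies y' = Ay + f:

```python
import sympy as sp

t = sp.symbols('t')
# Matrix and forcing term from problem 8.1
A = sp.Matrix([[5, 6], [-2, -2]])
f = sp.Matrix([sp.exp(t), sp.exp(t)])
# Fundamental matrix built from the eigenpairs (1, (3, -2)) and (2, (2, -1))
Y = sp.Matrix([[3*sp.exp(t), 2*sp.exp(2*t)],
               [-2*sp.exp(t), -sp.exp(2*t)]])

# Variation of parameters: yp = Y * integral of Y^{-1} f
yp = Y * sp.integrate(Y.inv() * f, t)
residual = sp.simplify(sp.diff(yp, t) - A*yp - f)
assert residual == sp.zeros(2, 1)
```

The same three lines (build Y, integrate Y^{−1} f, multiply) reproduce every particular solution computed by hand in this section.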
8.1. If
A= 5
β2 6
β2 f= and et
,
et then the characteristic polynomial is
p(Ξ») = Ξ»2 β T Ξ» + D = Ξ»2 β 3Ξ» + 2 = (Ξ» β 1)(Ξ» β 2),
generating eigenvalues Ξ»1 = 1 and Ξ»2 = 2. The associated eigenvectors are
4
β2
3
A β 2I =
β2
AβI = 6
β3
6
β4 β
β 3
,
β2
2
.
v2 =
β1
v1 = and Thus, the homogeneous solution is yh = C1 y1 + C2 y2 , where
y1 (t) = et v1 = 3e t
β2et and y2 (t) = e2t v2 = The fundamental matrix is
Y (t) = [y1 (t), y2 (t)] = 3et
β2et 2 e 2t
.
βe2t 2 e 2t
.
βe2t 718 Chapter 9. Linear Systems with Constant Coefο¬cients
The inverse1 of Y (t) is calculated
Y β1 (t) = βe2t
2e t 1
e3t Hence,
Y β1 (t)f (t) = β2e2t
3et βeβt
2 e β2 t βeβt
2 e β2 t β2eβt
3eβ2t and β3
5e β t Y β1 (t)f (t) dt = β2eβt
.
3eβ2t et
et = β3
,
5e β t = β3t
.
β5eβt dt = Thus,
Y β1 f (t) dt yp = Y (t) 2 e 2t
β3t
3et
t
β2e βe2t
β5eβt
β9tet β 10et
=
.
6tet + 5et
= Finally, the general solution is
y(t) = C1 et
8.2. 3
2
β9tet β 10et
+ C2 e 2 t
.
+
β2
6tet + 5et
β1 The matrix
A= 3
2 β4
β3 has eigenvalues Ξ»1 = 1 and Ξ»2 = β1, with associated eigenvectors v1 = (2, 1)T and v2 = (1, 1)T . Thus, the
homogeneous solution is yh = C1 y1 + C2 y2 , where
2
1
y1 (t) = et
and y2 (t) = eβt
.
1
1
The fundamental matrix is
2 e t e βt
Y (t) = [y1 (t), y2 (t)] =
.
e t e βt
The inverse is calculated with
eβt βeβt
Y β1 (t) =
.
βet
2e t
Hence,
eβt βeβt
e βt
e β2 t β 1
Y β1 (t)f (t) =
=
,
β1 + 2 e 2 t
βet
2e t
et
and
1
β 2 e β2 t β t
e β2 t β 1
.
Y β1 f (t) dt =
dt =
2t
2e β 1
e 2t β t
Thus,
yp = Y (t) Y β1 f (t) dt 1
β 2 e β2 t β t
2 e t e βt
t
βt
e
e
e 2t β t
βt
t
βe β 2te + et β teβt
=
.
1
β 2 eβt β tet + et β teβt = 1 Perhaps the easiest way to invert a 2 Γ 2 matrix is to use the following fact:
1
ab
β Aβ1 =
A=
cd
det (A) d
βc βb
.
a 9.8. Inhomogeneous Linear Systems 719 Finally, the general solution is
βeβt β 2tet + et β teβt
2
1
+ C2 eβt
+
.
1
1
1
β 2 eβt β tet + et β teβt y(t) = C1 et
8.3. If
A= β3
β2 6
4 f= and 3
,
4 then the characteristic polynomial is
p(Ξ») = Ξ»2 β T Ξ» + D = Ξ»2 β Ξ» = Ξ»(Ξ» β 1),
generating eigenvalues Ξ»1 = 0 and Ξ»2 = 1. The associated eigenvectors are
β3 6
2
β v1 =
A β 0I =
, and
β2 4
1
β4 6
3
β v2 =
.
AβI =
β2 3
2
Thus, the homogeneous solution is yh = C1 y1 + C2 y2 , where
2
3e t
and y2 (t) = et v2 =
y1 (t) = e0t v1 =
.
1
2e t
The fundamental matrix is
2 3e t
Y (t) = [y1 (t), y2 (t)] =
.
1 2e t
The inverse of Y (t) is calculated
1 2et β3et
2
β3
=
.
Y β1 (t) = t
2
βeβt 2eβt
e β1
Hence,
2
β3
3
β6
Y β1 (t)f (t) =
=
,
βe β t 2 e β t
4
5e β t
and
β6
β6t
dt =
.
Y β1 (t)f (t) dt =
5e β t
β5eβt
Thus,
yp = Y (t) Y β1 f (t) dt β6t
2 3e t
β5eβt
1 2e t
β12t β 15
=
.
β6t β 10
= Finally, the general solution is
y(t) = C1
8.4. 2
3et
+ C2
1
2e t The matrix
A= β3
β3 + β12t β 15
.
β6t β 10 10
8 has eigenvalues Ξ»1 = 2 and Ξ»2 = 3, with associated eigenvectors v1 = (2, 1)T and v2 = (5, 3)T . Thus, the
homogeneous solution is yh = C1 y1 + C2 y2 , where
2
5
y1 (t) = e2t
and y2 (t) = e3t
.
1
3
The fundamental matrix is
2e2t 5e3t
Y (t) = [y1 (t), y2 (t)] =
.
e2t 3e3t 720 Chapter 9. Linear Systems with Constant Coefο¬cients
The inverse is calculated with
Y β1 (t) = 3e3t
βe2t 1
e5t Hence, β5e3t
2 e 2t 3eβ2t
βeβ3t Y β1 (t)f (t) = = 3eβ2t
βeβ3t β5eβ2t
.
2 e β 3t Y β1 (t)f (t) dt = 3
β11eβ2t
=
,
5eβ3t
4 β11eβ2t
5eβ3t and β5eβ2t
2eβ3t dt = 11eβ2t /2
.
β5eβ3t /3 Thus,
yp = Y (t) Y β1 (t)f (t) dt 2e2t 5e3t
e2t 3e3t
8/3
=
.
1/2 11eβ2t /2
β5e3t /3 = Finally, the general solution is
y(t) = C1 e2t
8.5. 2
5
8/3
+ C2 e3t
+
.
1
3
8.5. A has eigenvalues 2 ± i, and associated with the eigenvalue 2 + i is the eigenvector w = (-1 - i, 1)^T. Hence the homogeneous equation has the complex solution
z(t) = e^{(2+i)t} (-1 - i, 1)^T
     = e^{2t} [cos t + i sin t] [ (-1, 1)^T + i(-1, 0)^T ]
     = e^{2t} (sin t - cos t, cos t)^T + i e^{2t} (-cos t - sin t, sin t)^T.
Thus the homogeneous equation has the real solutions
y1(t) = Re z(t) = e^{2t} (sin t - cos t, cos t)^T  and  y2(t) = Im z(t) = e^{2t} (-cos t - sin t, sin t)^T.
The fundamental matrix is
Y(t) = e^{2t} [ sin t - cos t  -cos t - sin t; cos t  sin t ].
Its inverse is
Y^{-1}(t) = e^{-2t} [ sin t  cos t + sin t; -cos t  sin t - cos t ].
Hence, with f = (0, e^{2t})^T,
Y^{-1}(t)f(t) = (cos t + sin t, sin t - cos t)^T,
and
∫ Y^{-1}(t)f(t) dt = (sin t - cos t, -cos t - sin t)^T.
Then the particular solution is
yp(t) = Y(t) ∫ Y^{-1}(t)f(t) dt = e^{2t} (2, -1)^T.
The general solution is
y(t) = C1 e^{2t} (sin t - cos t, cos t)^T + C2 e^{2t} (-cos t - sin t, sin t)^T + e^{2t} (2, -1)^T.
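The passage from z(t) to the two real solutions is just taking real and imaginary parts. The matrix A itself is not reprinted in 8.5, so the numeric sketch below rebuilds a real matrix consistent with the quoted eigenpair (2 + i with w = (-1 - i, 1)^T) — that reconstruction is an assumption — and then checks the formulas for y1 and y2.

```python
# How the real solutions in 8.5 arise from the complex eigenpair.
import numpy as np

lam = 2 + 1j
w = np.array([-1 - 1j, 1 + 0j])

# A real matrix with eigenpair (lam, w) also has the conjugate pair,
# so A = P D P^{-1} built from both pairs comes out real ( = [[3, 2], [-1, 1]] ).
P = np.column_stack([w, w.conj()])
A = (P @ np.diag([lam, np.conj(lam)]) @ np.linalg.inv(P)).real

t = 0.7
z = np.exp(lam * t) * w              # complex solution z(t) = e^{lam t} w
assert np.allclose(A @ z, lam * z)   # z' = lam z = Az

# Real and imaginary parts reproduce y1 and y2 from the text:
y1 = np.exp(2*t) * np.array([np.sin(t) - np.cos(t), np.cos(t)])
y2 = np.exp(2*t) * np.array([-np.cos(t) - np.sin(t), np.sin(t)])
assert np.allclose(z.real, y1) and np.allclose(z.imag, y2)
```

Since A is real, A(Re z) + iA(Im z) = Az = z', so Re z and Im z each satisfy y' = Ay.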
8.6. The matrix
A = [ 4  2; -1  2 ]
has eigenvalues 3 ± i. Associated with λ = 3 + i is w = (-1 - i, 1)^T, so the homogeneous equation has the complex solution
z(t) = e^{(3+i)t} (-1 - i, 1)^T
     = e^{3t} (cos t + i sin t) [ (-1, 1)^T + i(-1, 0)^T ]
     = e^{3t} (-cos t + sin t, cos t)^T + i e^{3t} (-cos t - sin t, sin t)^T.
The real and imaginary parts of z(t) form the fundamental matrix
Y(t) = e^{3t} [ -cos t + sin t  -cos t - sin t; cos t  sin t ].
Its inverse is
Y^{-1}(t) = e^{-3t} [ sin t  cos t + sin t; -cos t  -cos t + sin t ].
Hence, with f = (t, e^{3t})^T,
Y^{-1}(t)f(t) = e^{-3t} ( t sin t + e^{3t}(cos t + sin t), -t cos t + e^{3t}(-cos t + sin t) )^T
             = ( t e^{-3t} sin t + cos t + sin t, -t e^{-3t} cos t - cos t + sin t )^T.
Needless to say, ∫ Y^{-1}(t)f(t) dt is a tough antiderivative to find. We will use a CAS (computer algebra system) to calculate
yp(t) = Y(t) ∫ Y^{-1}(t)f(t) dt = ( -t/5 - 1/50 + 2e^{3t}, -t/10 - 3/50 - e^{3t} )^T.
Hence, the general solution is
y(t) = e^{3t} [ C1 (-cos t + sin t, cos t)^T + C2 (-cos t - sin t, sin t)^T ] + ( -t/5 - 1/50 + 2e^{3t}, -t/10 - 3/50 - e^{3t} )^T.
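The CAS step in 8.6 can be done with SymPy. One caution, noted in the comments: any antiderivative is acceptable here, so a CAS may return a particular solution that differs from the one above by a homogeneous solution; both must satisfy y' = Ay + f.

```python
# The CAS integration step of 8.6 in SymPy.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[4, 2], [-1, 2]])
f = sp.Matrix([t, sp.exp(3*t)])

Y = sp.exp(3*t) * sp.Matrix([[-sp.cos(t) + sp.sin(t), -sp.cos(t) - sp.sin(t)],
                             [sp.cos(t), sp.sin(t)]])

yp = sp.simplify(Y * sp.integrate(sp.simplify(Y.inv() * f), t))
# Whatever antiderivative SymPy picked, yp must solve y' = Ay + f:
assert sp.simplify(yp.diff(t) - A*yp - f) == sp.zeros(2, 1)

# The particular solution quoted in the text also checks out:
yp_text = sp.Matrix([-t/5 - sp.Rational(1, 50) + 2*sp.exp(3*t),
                     -t/10 - sp.Rational(3, 50) - sp.exp(3*t)])
assert sp.simplify(yp_text.diff(t) - A*yp_text - f) == sp.zeros(2, 1)
```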
8.7. A has eigenvalues 0, 2, and 1, with corresponding eigenvectors (-1, 2, 0)^T, (1, 0, 1)^T, and (0, 3, 1)^T. Thus the fundamental matrix is
V(t) = [ -1  e^{2t}  0; 2  0  3e^t; 0  e^{2t}  e^t ].
If we form the augmented matrix [V, I] and use row operations to reduce to row echelon form [I, V^{-1}], we discover that
V^{-1}(t) = [ -3  -1  3; -2e^{-2t}  -e^{-2t}  3e^{-2t}; 2e^{-t}  e^{-t}  -2e^{-t} ].
Then
V^{-1}f = (sin t, 0, 0)^T.
Hence ∫ V^{-1}f dt = (-cos t, 0, 0)^T, and the particular solution is
yp(t) = V ∫ V^{-1}f dt = [ -1  e^{2t}  0; 2  0  3e^t; 0  e^{2t}  e^t ] (-cos t, 0, 0)^T = (cos t, -2 cos t, 0)^T.
The general solution is
y(t) = C1 (-1, 2, 0)^T + C2 e^{2t} (1, 0, 1)^T + C3 e^t (0, 3, 1)^T + (cos t, -2 cos t, 0)^T.
8.8. The matrix
A = [ 1  -18  8; 0  14  -6; 0  35  -15 ]
has eigenvalues λ1 = 0, λ2 = -1, and λ3 = 1, with associated eigenvectors v1 = (2, -3, -7)^T, v2 = (-2, 2, 5)^T, and v3 = (1, 0, 0)^T. Thus, the homogeneous solution is yh = C1 y1 + C2 y2 + C3 y3, where
y1(t) = (2, -3, -7)^T,  y2(t) = e^{-t} (-2, 2, 5)^T,  and  y3(t) = e^t (1, 0, 0)^T.
The fundamental matrix is
Y(t) = [y1(t), y2(t), y3(t)] = [ 2  -2e^{-t}  e^t; -3  2e^{-t}  0; -7  5e^{-t}  0 ],
with inverse
Y^{-1}(t) = [ 0  -5  2; 0  -7e^t  3e^t; e^{-t}  -4e^{-t}  2e^{-t} ].
Hence, with f = (0, 1, 0)^T,
Y^{-1}(t)f(t) = (-5, -7e^t, -4e^{-t})^T,
and
∫ Y^{-1}(t)f(t) dt = (-5t, -7e^t, 4e^{-t})^T.
Thus,
yp(t) = Y(t) ∫ Y^{-1}(t)f(t) dt = (-10t + 18, 15t - 14, 35t - 35)^T.
Finally, the general solution is
y(t) = C1 (2, -3, -7)^T + C2 e^{-t} (-2, 2, 5)^T + C3 e^t (1, 0, 0)^T + (-10t + 18, 15t - 14, 35t - 35)^T.
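The 3×3 computation in 8.8 is easy to get wrong by hand, so here is a SymPy replay. The matrix A below is the one reconstructed above from the quoted eigenvalues and eigenvectors (the scan does not show it cleanly, so treat it as an assumption consistent with that data).

```python
# SymPy replay of 8.8 (3x3 variation of parameters).
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -18, 8], [0, 14, -6], [0, 35, -15]])
f = sp.Matrix([0, 1, 0])

Y = sp.Matrix([[2, -2*sp.exp(-t), sp.exp(t)],
               [-3, 2*sp.exp(-t), 0],
               [-7, 5*sp.exp(-t), 0]])
assert sp.simplify(Y.diff(t) - A*Y) == sp.zeros(3, 3)   # columns solve y' = Ay

yp = sp.simplify(Y * sp.integrate(Y.inv() * f, t))
assert sp.simplify(yp - sp.Matrix([-10*t + 18, 15*t - 14, 35*t - 35])) == sp.zeros(3, 1)
```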
8.9. A has eigenvalues 0, -2, and -1, with corresponding eigenvectors (-1, 4, 1)^T, (-3, 2, 0)^T, and (0, 3, 1)^T, so
V(t) = [ -1  -3e^{-2t}  0; 4  2e^{-2t}  3e^{-t}; 1  0  e^{-t} ].
If we form the augmented matrix [V, I] and use row operations to reduce to row echelon form [I, V^{-1}], we discover that
V^{-1}(t) = [ 2  3  -9; -e^{2t}  -e^{2t}  3e^{2t}; -2e^t  -3e^t  10e^t ].
Then V^{-1}f = (6, -2e^{2t}, -6e^t)^T, and ∫ V^{-1}f dt = (6t, -e^{2t}, -6e^t)^T. Hence the particular solution is
yp(t) = V ∫ V^{-1}f dt = (3 - 6t, 24t - 20, 6t - 6)^T.
The general solution is
y(t) = C1 (-1, 4, 1)^T + C2 e^{-2t} (-3, 2, 0)^T + C3 e^{-t} (0, 3, 1)^T + (3 - 6t, 24t - 20, 6t - 6)^T.
8.10. The matrix
A = [ 11  -7  -4; -6  4  2; 42  -27  -15 ]
has eigenvalues λ1 = 1, λ2 = -1, and λ3 = 0, with associated eigenvectors v1 = (1, -2, 6)^T, v2 = (1, 0, 3)^T, and v3 = (1, 1, 1)^T. Thus, the homogeneous solution is yh = C1 y1 + C2 y2 + C3 y3, where
y1(t) = e^t (1, -2, 6)^T,  y2(t) = e^{-t} (1, 0, 3)^T,  and  y3(t) = (1, 1, 1)^T.
The fundamental matrix is
Y(t) = [y1(t), y2(t), y3(t)] = [ e^t  e^{-t}  1; -2e^t  0  1; 6e^t  3e^{-t}  1 ],
with inverse
Y^{-1}(t) = [ 3e^{-t}  -2e^{-t}  -e^{-t}; -8e^t  5e^t  3e^t; 6  -3  -2 ].
Hence, with f = (1, 0, 0)^T,
Y^{-1}(t)f(t) = (3e^{-t}, -8e^t, 6)^T,
and
∫ Y^{-1}(t)f(t) dt = (-3e^{-t}, -8e^t, 6t)^T.
Thus,
yp(t) = Y(t) ∫ Y^{-1}(t)f(t) dt = (6t - 11, 6t + 6, 6t - 42)^T.
Finally, the general solution is
y(t) = C1 e^t (1, -2, 6)^T + C2 e^{-t} (1, 0, 3)^T + C3 (1, 1, 1)^T + (6t - 11, 6t + 6, 6t - 42)^T.
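Both 8.9 and 8.10 obtain the inverse fundamental matrix by row-reducing the augmented matrix [Y, I] to [I, Y^{-1}]. That procedure can be scripted directly; the sketch below does it for 8.10's fundamental matrix with SymPy's rref.

```python
# Row-reduce [Y, I] to [I, Y^{-1}], as described in 8.9-8.10, with SymPy.
import sympy as sp

t = sp.symbols('t')
Y = sp.Matrix([[sp.exp(t), sp.exp(-t), 1],
               [-2*sp.exp(t), 0, 1],
               [6*sp.exp(t), 3*sp.exp(-t), 1]])

aug = Y.row_join(sp.eye(3)).rref()[0]   # [Y, I] -> [I, Y^{-1}]
Yinv = sp.simplify(aug[:, 3:])

expected = sp.Matrix([[3*sp.exp(-t), -2*sp.exp(-t), -sp.exp(-t)],
                      [-8*sp.exp(t), 5*sp.exp(t), 3*sp.exp(t)],
                      [6, -3, -2]])
assert sp.simplify(Yinv - expected) == sp.zeros(3, 3)
assert sp.simplify(Y * Yinv) == sp.eye(3)
```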
8.11. A has eigenvalues -1 and 3, with corresponding eigenvectors (0, 1)^T and (-1, 1)^T. Thus
Y(t) = [ 0  -e^{3t}; e^{-t}  e^{3t} ],
Y(0) = [ 0  -1; 1  1 ],  and  Y(0)^{-1} = [ 1  1; -1  0 ],
so
e^{tA} = Y(t)Y(0)^{-1} = [ e^{3t}  0; e^{-t} - e^{3t}  e^{-t} ].
8.12. A has eigenvalues 1 and 3, with corresponding eigenvectors (2, 1)^T and (1, 1)^T. Thus
Y(t) = [ 2e^t  e^{3t}; e^t  e^{3t} ],
Y(0) = [ 2  1; 1  1 ],  and  Y(0)^{-1} = [ 1  -1; -1  2 ],
so
e^{tA} = Y(t)Y(0)^{-1} = [ 2e^t - e^{3t}  2e^{3t} - 2e^t; e^t - e^{3t}  2e^{3t} - e^t ].
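The identity e^{tA} = Y(t)Y(0)^{-1} used in 8.11 and 8.12 is easy to test numerically against the power series for the matrix exponential. A is not reprinted in 8.12, so the sketch rebuilds it from the quoted eigen-data (eigenvalues 1 and 3, eigenvectors (2, 1)^T and (1, 1)^T) — an assumption.

```python
# Numeric check of 8.12: e^{tA} = Y(t) Y(0)^{-1}.
import numpy as np

P = np.array([[2.0, 1.0], [1.0, 1.0]])           # eigenvectors as columns
A = P @ np.diag([1.0, 3.0]) @ np.linalg.inv(P)   # = [[-1, 4], [-2, 5]]

def expm(M, terms=60):
    """Matrix exponential via partial sums of the Taylor series."""
    S, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        S = S + term
    return S

t = 0.3
Y = np.array([[2*np.exp(t), np.exp(3*t)],
              [np.exp(t), np.exp(3*t)]])
etA = np.array([[2*np.exp(t) - np.exp(3*t), 2*np.exp(3*t) - 2*np.exp(t)],
                [np.exp(t) - np.exp(3*t), 2*np.exp(3*t) - np.exp(t)]])
assert np.allclose(Y @ np.linalg.inv(P), etA)    # Y(0) = P here
assert np.allclose(expm(t * A), etA)
```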
8.13. A has eigenvalues ±i. An eigenvector corresponding to i is (2 - i, 5)^T. Hence a complex solution is
z(t) = e^{it} w = [cos t + i sin t] [ (2, 5)^T + i(-1, 0)^T ]
     = (2 cos t + sin t, 5 cos t)^T + i (2 sin t - cos t, 5 sin t)^T.
The real and imaginary parts of z are solutions to the homogeneous equation, and so
V(t) = [ 2 cos t + sin t  2 sin t - cos t; 5 cos t  5 sin t ].
Thus
V(0) = [ 2  -1; 5  0 ]  and  V(0)^{-1} = [ 0  1/5; -1  2/5 ],
and
e^{tA} = V(t)V(0)^{-1} = [ cos t - 2 sin t  sin t; -5 sin t  cos t + 2 sin t ].
8.14. A has eigenvalues 1 ± i. An eigenvector corresponding to 1 + i is (1 + i, 1)^T. Hence a complex solution is
z(t) = e^{(1+i)t} w = e^t [cos t + i sin t] [ (1, 1)^T + i(1, 0)^T ]
     = e^t (cos t - sin t, cos t)^T + i e^t (cos t + sin t, sin t)^T.
The real and imaginary parts of z are solutions to the homogeneous equation, and so
V(t) = e^t [ cos t - sin t  cos t + sin t; cos t  sin t ].
Thus
V(0) = [ 1  1; 1  0 ]  and  V(0)^{-1} = [ 0  1; 1  -1 ],
and
e^{tA} = V(t)V(0)^{-1} = e^t [ cos t + sin t  -2 sin t; sin t  cos t - sin t ].
8.15. A has eigenvalue λ = -3, which has algebraic multiplicity 2 and geometric multiplicity 1. Thus
e^{tA} = e^{λt} e^{t(A-λI)} = e^{-3t} [I + t(A + 3I)] = e^{-3t} [ 1 - t  -t; t  1 + t ].
8.16. A has eigenvalue λ = 2, which has algebraic multiplicity 2 and geometric multiplicity 1. Thus
e^{tA} = e^{λt} e^{t(A-λI)} = e^{2t} [I + t(A - 2I)] = e^{2t} [ 1 - 2t  t; -4t  1 + 2t ].
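8.15 and 8.16 rely on the fact that for a 2×2 matrix with a double eigenvalue λ and only one eigenvector, (A - λI)^2 = 0, so the exponential series for e^{t(A-λI)} stops after its linear term. The sketch below checks this for 8.16; the matrix A = [ 0 1; -4 4 ] is recovered from the answer there, since A - 2I must equal [ -2 1; -4 2 ].

```python
# 8.15-8.16: a nilpotent A - lambda*I truncates the exponential series.
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, 4.0]])   # double eigenvalue 2, one eigenvector
N = A - 2*np.eye(2)
assert np.allclose(N @ N, 0)              # nilpotent, so the series truncates

def expm(M, terms=60):
    S, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        S = S + term
    return S

t = 0.5
closed = np.exp(2*t) * (np.eye(2) + t*N)  # e^{2t}[I + t(A - 2I)]
assert np.allclose(expm(t * A), closed)
```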
8.17. A has eigenvalues 2 and -1 with eigenvectors (-1, 1)^T and (-1, 2)^T. Thus
Y(t) = [ -e^{2t}  -e^{-t}; e^{2t}  2e^{-t} ],
Y(0) = [ -1  -1; 1  2 ],  and  Y(0)^{-1} = [ -2  -1; 1  1 ].
Thus
e^{tA} = Y(t)Y(0)^{-1} = [ 2e^{2t} - e^{-t}  e^{2t} - e^{-t}; 2e^{-t} - 2e^{2t}  2e^{-t} - e^{2t} ].
The solution to the initial value problem is
y(t) = e^{tA} y0 = e^{tA} (1, 1)^T = ( 3e^{2t} - 2e^{-t}, 4e^{-t} - 3e^{2t} )^T.
8.18.
A has eigenvalues -4 and -1 with eigenvectors (-1, 1)^T and (-1, 2)^T. Thus
Y(t) = [ -e^{-4t}  -e^{-t}; e^{-4t}  2e^{-t} ],
Y(0) = [ -1  -1; 1  2 ],  and  Y(0)^{-1} = [ -2  -1; 1  1 ].
Thus
e^{tA} = Y(t)Y(0)^{-1} = [ 2e^{-4t} - e^{-t}  e^{-4t} - e^{-t}; 2e^{-t} - 2e^{-4t}  2e^{-t} - e^{-4t} ].
The solution to the initial value problem is
y(t) = e^{tA} y0 = e^{tA} (1, 0)^T = ( 2e^{-4t} - e^{-t}, 2e^{-t} - 2e^{-4t} )^T.
8.19. A has eigenvalues ±2i. Associated with the eigenvalue 2i there is the eigenvector (1 + i, 2)^T. The associated complex solution is
z(t) = e^{2it} (1 + i, 2)^T
     = [ cos 2t (1, 2)^T - sin 2t (1, 0)^T ] + i [ cos 2t (1, 0)^T + sin 2t (1, 2)^T ]
     = (cos 2t - sin 2t, 2 cos 2t)^T + i (cos 2t + sin 2t, 2 sin 2t)^T.
The real and imaginary parts of z are a fundamental set of solutions, so we can take
Y(t) = [ cos 2t - sin 2t  cos 2t + sin 2t; 2 cos 2t  2 sin 2t ]  and  Y(0) = [ 1  1; 2  0 ].
Then
e^{tA} = Y(t)Y(0)^{-1} = [ cos 2t + sin 2t  -sin 2t; 2 sin 2t  cos 2t - sin 2t ].
The solution to the initial value problem is
y(t) = e^{tA} y0 = e^{tA} (1, 1)^T = ( cos 2t, cos 2t + sin 2t )^T.
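As a cross-check of 8.19, the sketch below rebuilds a real matrix from the quoted eigen-data (eigenvalues ±2i, eigenvector (1 + i, 2)^T for 2i — the reconstruction of A is an assumption) and compares the closed-form e^{tA} and the initial value solution against the exponential series.

```python
# Numeric check of 8.19.
import numpy as np

w = np.array([1 + 1j, 2 + 0j])
P = np.column_stack([w, w.conj()])
A = (P @ np.diag([2j, -2j]) @ np.linalg.inv(P)).real   # = [[2, -2], [4, -2]]

def expm(M, terms=80):
    S, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        S = S + term
    return S

t = 0.4
c, s = np.cos(2*t), np.sin(2*t)
etA = np.array([[c + s, -s], [2*s, c - s]])
assert np.allclose(expm(t * A), etA)

y0 = np.array([1.0, 1.0])
y = etA @ y0                       # should be (cos 2t, cos 2t + sin 2t)
assert np.allclose(y, [c, c + s])
```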
8.20. A has one eigenvalue, -3, with multiplicity 2. Hence
e^{tA} = e^{-3t} e^{t(A+3I)} = e^{-3t} [I + t(A + 3I)] = e^{-3t} [ 1 + 2t  t; -4t  1 - 2t ].
The solution to the initial value problem is
y(t) = e^{tA} y0 = e^{tA} (2, -1)^T = e^{-3t} ( 2 + 3t, -1 - 6t )^T.
8.21. The matrix A has eigenvalues -1 and 5, with associated eigenvectors (0, 1)^T and (1, 1)^T. Thus
Y(t) = [ 0  e^{5t}; e^{-t}  e^{5t} ],
Y(0) = [ 0  1; 1  1 ],  and  Y(0)^{-1} = [ -1  1; 1  0 ].
Hence
e^{tA} = Y(t)Y(0)^{-1} = [ e^{5t}  0; e^{5t} - e^{-t}  e^{-t} ].
The solution to the initial value problem is
y(t) = e^{(t-t0)A} y0 = ( -e^{5t-5}, 4e^{1-t} - e^{5t-5} )^T.
8.22. The matrix A has eigenvalues -1 and -2 with associated eigenvectors (-1, 1)^T and (-1, 2)^T. Hence
Y(t) = [ -e^{-t}  -e^{-2t}; e^{-t}  2e^{-2t} ],
Y(0) = [ -1  -1; 1  2 ],  and  Y(0)^{-1} = [ -2  -1; 1  1 ].
Thus
e^{tA} = Y(t)Y(0)^{-1} = [ 2e^{-t} - e^{-2t}  e^{-t} - e^{-2t}; 2e^{-2t} - 2e^{-t}  2e^{-2t} - e^{-t} ].
The solution to the initial value problem is
y(t) = e^{(t-t0)A} y0 = ( 2e^{-t-1} - e^{-2t-2}, 2e^{-2t-2} - 2e^{-t-1} )^T.
8.23. A has eigenvalues ±3i. The eigenvalue 3i has associated eigenvector (2 + i, 5)^T. The associated complex solution is
z(t) = e^{3it} (2 + i, 5)^T
     = [ cos 3t (2, 5)^T - sin 3t (1, 0)^T ] + i [ cos 3t (1, 0)^T + sin 3t (2, 5)^T ].
Thus
Y(t) = [ 2 cos 3t - sin 3t  cos 3t + 2 sin 3t; 5 cos 3t  5 sin 3t ],
Y(0) = [ 2  1; 5  0 ],  and  Y(0)^{-1} = (1/5) [ 0  1; 5  -2 ].
Hence
e^{tA} = Y(t)Y(0)^{-1} = [ cos 3t + 2 sin 3t  -sin 3t; 5 sin 3t  cos 3t - 2 sin 3t ].
The solution to the initial value problem is
y(t) = e^{(t-t0)A} y0 = ( -cos 3(t - 1) - 2 sin 3(t - 1), -5 sin 3(t - 1) )^T.
8.24. A has the single eigenvalue -4. Hence
e^{tA} = e^{-4t} e^{t(A+4I)} = e^{-4t} [I + t(A + 4I)] = e^{-4t} [ 1  0; -t  1 ].
The solution to the initial value problem is
y(t) = e^{(t-t0)A} y0 = e^{8-4t} ( -2, 2t - 2 )^T.
8.25.
The matrix A has eigenvalues -2, 3, and 1, with associated eigenvectors (0, 1, 0)^T, (-2, 0, 1)^T, and (-1, 1, 1)^T. Hence
Y(t) = [ 0  -2e^{3t}  -e^t; e^{-2t}  0  e^t; 0  e^{3t}  e^t ],
Y(0) = [ 0  -2  -1; 1  0  1; 0  1  1 ],  and  Y(0)^{-1} = [ -1  1  -2; -1  0  -1; 1  0  2 ].
Thus
e^{tA} = Y(t)Y(0)^{-1} = [ 2e^{3t} - e^t  0  2e^{3t} - 2e^t; e^t - e^{-2t}  e^{-2t}  2e^t - 2e^{-2t}; e^t - e^{3t}  0  2e^t - e^{3t} ].
8.26. A has eigenvalues ±i and 1. Associated to the eigenvalue i is the eigenvector w = (-1 - i, 0, 2)^T. This leads to the complex solution
z(t) = e^{it} w = [ cos t (-1, 0, 2)^T - sin t (-1, 0, 0)^T ] + i [ cos t (-1, 0, 0)^T + sin t (-1, 0, 2)^T ].
The real and imaginary parts of z(t) provide two linearly independent solutions. We get a third from the eigenvalue 1 and its eigenvector (0, 1, 2)^T: it is y3(t) = e^t (0, 1, 2)^T. Hence we have the fundamental matrix
Y(t) = [ sin t - cos t  -cos t - sin t  0; 0  0  e^t; 2 cos t  2 sin t  2e^t ].
We have
Y(0) = [ -1  -1  0; 0  0  1; 2  0  2 ]  and  Y(0)^{-1} = [ 0  -1  1/2; -1  1  -1/2; 0  1  0 ].
Finally,
e^{tA} = Y(t)Y(0)^{-1} = [ cos t + sin t  -2 sin t  sin t; 0  e^t  0; -2 sin t  2e^t - 2 cos t + 2 sin t  cos t - sin t ].
8.27. A has eigenvalues -2, -1, 1, and 2 with associated eigenvectors (0, 1, 0, 1)^T, (-1, 1, 1, 0)^T, (0, 1, 0, 0)^T, and (1, 2, 0, 1)^T. Thus we have the fundamental matrix
Y(t) = [ 0  -e^{-t}  0  e^{2t}; e^{-2t}  e^{-t}  e^t  2e^{2t}; 0  e^{-t}  0  0; e^{-2t}  0  0  e^{2t} ].
We have
Y(0) = [ 0  -1  0  1; 1  1  1  2; 0  1  0  0; 1  0  0  1 ]  and  Y(0)^{-1} = [ -1  0  -1  1; 0  0  1  0; -1  1  -2  -1; 1  0  1  0 ].
Finally,
e^{tA} = Y(t)Y(0)^{-1}
= [ e^{2t}  0  e^{2t} - e^{-t}  0;
    2e^{2t} - e^t - e^{-2t}  e^t  2e^{2t} - 2e^t + e^{-t} - e^{-2t}  e^{-2t} - e^t;
    0  0  e^{-t}  0;
    e^{2t} - e^{-2t}  0  e^{2t} - e^{-2t}  e^{-2t} ].
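For a 4×4 case like 8.27 a numeric cross-check is worthwhile. Here A is rebuilt from the quoted eigen-data (an assumption, since A itself is not reprinted), and Y(t)Y(0)^{-1} is compared to the exponential series; note that Y(0) is exactly the matrix P of eigenvectors.

```python
# Check of 8.27: Y(t) Y(0)^{-1} versus the series for e^{tA}.
import numpy as np

P = np.array([[0, -1, 0, 1],
              [1, 1, 1, 2],
              [0, 1, 0, 0],
              [1, 0, 0, 1]], dtype=float)      # eigenvectors as columns = Y(0)
A = P @ np.diag([-2.0, -1.0, 1.0, 2.0]) @ np.linalg.inv(P)

def expm(M, terms=80):
    S, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        S = S + term
    return S

t = 0.25
e = np.exp
Y = np.array([[0, -e(-t), 0, e(2*t)],
              [e(-2*t), e(-t), e(t), 2*e(2*t)],
              [0, e(-t), 0, 0],
              [e(-2*t), 0, 0, e(2*t)]])
assert np.allclose(Y @ np.linalg.inv(P), expm(t * A))
```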
8.28. A has eigenvalues -1, -2, and -3. The eigenvalue -1 has algebraic multiplicity 2 and geometric multiplicity 1. The vector v1 = (-2, 1, 0, 1)^T is an eigenvector and v2 = (0, 0, 1, 1)^T is a generalized eigenvector with (A + I)v2 = -v1. Hence v1 leads to the solution y1(t) = e^{-t} (-2, 1, 0, 1)^T, and v2 leads to the solution
y2(t) = e^{-t} [v2 + t(A + I)v2] = e^{-t} [v2 - t v1] = e^{-t} (2t, -t, 1, 1 - t)^T.
The eigenvalue -2 has eigenvector (-1, 0, 0, 1)^T, and leads to the solution y3(t) = e^{-2t} (-1, 0, 0, 1)^T. The eigenvalue -3 has eigenvector (0, 0, 1, 0)^T and leads to the solution y4(t) = e^{-3t} (0, 0, 1, 0)^T. Thus we have the fundamental matrix
Y(t) = [ -2e^{-t}  2te^{-t}  -e^{-2t}  0; e^{-t}  -te^{-t}  0  0; 0  e^{-t}  0  e^{-3t}; e^{-t}  (1 - t)e^{-t}  e^{-2t}  0 ].
Then
Y(0) = [ -2  0  -1  0; 1  0  0  0; 0  1  0  1; 1  1  1  0 ]  and  Y(0)^{-1} = [ 0  1  0  0; 1  1  0  1; -1  -2  0  0; -1  -1  1  -1 ].
Finally,
e^{tA} = Y(t)Y(0)^{-1}
= [ e^{-2t} + 2te^{-t}  (2t - 2)e^{-t} + 2e^{-2t}  0  2te^{-t};
    -te^{-t}  (1 - t)e^{-t}  0  -te^{-t};
    e^{-t} - e^{-3t}  e^{-t} - e^{-3t}  e^{-3t}  e^{-t} - e^{-3t};
    (1 - t)e^{-t} - e^{-2t}  (2 - t)e^{-t} - 2e^{-2t}  0  (1 - t)e^{-t} ].
8.29. We write
y(t) = e^{tA} y0 + e^{tA} ∫_0^t e^{-sA} f(s) ds.
We can now prove the result by direct substitution:
y'(t) = A e^{tA} y0 + A e^{tA} ∫_0^t e^{-sA} f(s) ds + e^{tA} e^{-tA} f(t) = Ay(t) + f(t),
and
y(0) = e^{0A} y0 = y0.
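The variation-of-constants formula in 8.29 can also be spot-checked numerically: approximate the integral by a trapezoid rule, then verify y' = Ay + f by a central difference. The matrix A and the forcing f below are one concrete choice made for the check, not data from the exercise.

```python
# Numeric spot-check of y(t) = e^{tA} y0 + e^{tA} * integral_0^t e^{-sA} f(s) ds.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # an assumed example matrix
y0 = np.array([1.0, 0.0])
f = lambda s: np.array([np.sin(s), 1.0])   # an assumed forcing term

def expm(M, terms=40):
    S, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        S = S + term
    return S

def y(t, n=1000):
    """Trapezoidal approximation of the variation-of-constants formula."""
    s = np.linspace(0.0, t, n + 1)
    vals = [expm(-si * A) @ f(si) for si in s]
    integral = sum((vals[i] + vals[i + 1]) * (s[i + 1] - s[i]) / 2
                   for i in range(n))
    return expm(t * A) @ (y0 + integral)

t, h = 1.0, 1e-4
lhs = (y(t + h) - y(t - h)) / (2 * h)      # central-difference derivative
assert np.allclose(lhs, A @ y(t) + f(t), atol=1e-5)   # y' = Ay + f
assert np.allclose(y(0.0), y0)                        # y(0) = y0
```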