CHAPTER 9. Linear Systems with Constant Coefficients

Section 1. Overview of the Technique
1.1. If

A = [12, 14; −7, −9],

then the characteristic polynomial is

p(λ) = det(A − λI)
     = det [12 − λ, 14; −7, −9 − λ]
     = (12 − λ)(−9 − λ) + 98
     = λ² − 3λ − 10
     = (λ − 5)(λ + 2).

Thus, the eigenvalues are λ1 = 5 and λ2 = −2.
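As a quick check on the computation above, the characteristic polynomial of a 2 × 2 matrix can be formed from its trace and determinant and then solved with the quadratic formula. The sketch below is plain Python with no external libraries; the helper names are my own, not from the text.

```python
import math

def char_poly_2x2(A):
    """Return (b, c) so that p(lam) = lam**2 + b*lam + c for the 2x2 matrix A."""
    (a11, a12), (a21, a22) = A
    trace = a11 + a22
    det = a11 * a22 - a12 * a21
    # p(lam) = lam^2 - trace*lam + det
    return -trace, det

def eigenvalues_2x2(A):
    """Real eigenvalues of a 2x2 matrix via the quadratic formula."""
    b, c = char_poly_2x2(A)
    disc = b * b - 4 * c
    r = math.sqrt(disc)  # assumes real eigenvalues (disc >= 0)
    return sorted([(-b - r) / 2, (-b + r) / 2])

# Matrix from Exercise 1.1: p(lam) = lam^2 - 3*lam - 10 = (lam - 5)(lam + 2)
A = [[12, 14], [-7, -9]]
print(char_poly_2x2(A))    # (-3, -10)
print(eigenvalues_2x2(A))  # [-2.0, 5.0]
```

The same helper also handles the irrational eigenvalues of Exercise 1.4, where the quadratic does not factor over the integers.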
1.2. If

A = [2, 0; 0, 2],

then the characteristic polynomial is

p(λ) = det(A − λI)
     = det [2 − λ, 0; 0, 2 − λ]
     = (2 − λ)(2 − λ).

Thus, λ = 2 is a repeated eigenvalue of algebraic multiplicity 2.
1.3. If

A = [−2, 3; 0, −5],

then the characteristic polynomial is

p(λ) = det(A − λI)
     = det [−2 − λ, 3; 0, −5 − λ]
     = (−2 − λ)(−5 − λ)
     = (λ + 2)(λ + 5).

Thus, the eigenvalues are λ1 = −2 and λ2 = −5.
1.4. If

A = [−4, 1; −2, 1],

then the characteristic polynomial is

p(λ) = det(A − λI)
     = det [−4 − λ, 1; −2, 1 − λ]
     = (−4 − λ)(1 − λ) + 2
     = λ² + 3λ − 2.

The quadratic formula provides λ1 = (−3 − √17)/2 and λ2 = (−3 + √17)/2.
1.5. If

A = [5, 3; −6, −4],

then the characteristic polynomial is

p(λ) = det(A − λI)
     = det [5 − λ, 3; −6, −4 − λ]
     = (5 − λ)(−4 − λ) + 18
     = λ² − λ − 2
     = (λ − 2)(λ + 1).

Thus, the eigenvalues are λ1 = 2 and λ2 = −1.
1.6. If

A = [−2, 5; 0, 2],

then the characteristic polynomial is

p(λ) = det(A − λI)
     = det [−2 − λ, 5; 0, 2 − λ]
     = (−2 − λ)(2 − λ).

Thus, the eigenvalues are λ1 = −2 and λ2 = 2.
1.7. If

A = [−3, 0; 0, −3],

then the characteristic polynomial is

p(λ) = det(A − λI)
     = det [−3 − λ, 0; 0, −3 − λ]
     = (−3 − λ)(−3 − λ)
     = (λ + 3)².

Thus, λ = −3 is a repeated eigenvalue of algebraic multiplicity 2.
1.8. If

A = [6, 10; −5, −9],

then the characteristic polynomial is

p(λ) = det(A − λI)
     = det [6 − λ, 10; −5, −9 − λ]
     = (6 − λ)(−9 − λ) + 50
     = λ² + 3λ − 4
     = (λ + 4)(λ − 1).

Thus, the eigenvalues are λ1 = −4 and λ2 = 1.
1.9. If

A = [1, 2, 3; 0, 0, 2; 0, 3, 1],

then

p(λ) = det [1 − λ, 2, 3; 0, −λ, 2; 0, 3, 1 − λ].

Expanding down the first column,

p(λ) = (1 − λ) det [−λ, 2; 3, 1 − λ]
     = (1 − λ)(−λ(1 − λ) − 6)
     = (1 − λ)(λ² − λ − 6)
     = (1 − λ)(λ − 3)(λ + 2).

Thus, the eigenvalues are λ1 = 1, λ2 = 3, and λ3 = −2.
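A cofactor expansion like the one above can be checked by evaluating det(A − λI) at each claimed eigenvalue; every value should be zero. The sketch below uses a hand-rolled 3 × 3 determinant (helper names are mine, not from the text).

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def char_poly_value(A, lam):
    """Evaluate p(lam) = det(A - lam*I) for a 3x3 matrix A."""
    M = [[A[r][c] - (lam if r == c else 0) for c in range(3)] for r in range(3)]
    return det3(M)

# Matrix from Exercise 1.9; the eigenvalues found above are 1, 3, and -2.
A = [[1, 2, 3], [0, 0, 2], [0, 3, 1]]
print([char_poly_value(A, lam) for lam in (1, 3, -2)])  # [0, 0, 0]
```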
1.10. If

A = [1, 0, 0; 4, 3, 2; −8, −4, −3],

then

p(λ) = det [1 − λ, 0, 0; 4, 3 − λ, 2; −8, −4, −3 − λ].

Expanding across the first row,

p(λ) = (1 − λ) det [3 − λ, 2; −4, −3 − λ]
     = (1 − λ)((3 − λ)(−3 − λ) + 8)
     = (1 − λ)(λ² − 1)
     = −(λ − 1)²(λ + 1).

Thus, the eigenvalues are −1 and 1, the latter a repeated eigenvalue of algebraic multiplicity 2.
1.11. If

A = [−1, −4, −2; 0, 1, 1; −6, −12, 2],

then

p(λ) = det [−1 − λ, −4, −2; 0, 1 − λ, 1; −6, −12, 2 − λ].

Expanding down the first column,

p(λ) = (−1 − λ) det [1 − λ, 1; −12, 2 − λ] − 6 det [−4, −2; 1 − λ, 1]
     = (−1 − λ)((1 − λ)(2 − λ) + 12) − 6(−4 + 2(1 − λ))
     = (−1 − λ)((1 − λ)(2 − λ) + 12) − 6(−2 − 2λ)
     = (−1 − λ)(λ² − 3λ + 14) − 12(−1 − λ)
     = (−1 − λ)(λ² − 3λ + 14 − 12)
     = −(λ + 1)(λ² − 3λ + 2)
     = −(λ + 1)(λ − 1)(λ − 2).

Thus, the eigenvalues are λ1 = −1, λ2 = 1, and λ3 = 2.
1.12. If

A = [1, 0, −1; −2, −1, 3; −4, 0, 4],

then

p(λ) = det [1 − λ, 0, −1; −2, −1 − λ, 3; −4, 0, 4 − λ].

Expanding down the second column,

p(λ) = (−1 − λ) det [1 − λ, −1; −4, 4 − λ]
     = (−1 − λ)((1 − λ)(4 − λ) − 4)
     = −(λ + 1)(λ² − 5λ)
     = −λ(λ + 1)(λ − 5).

Thus, the eigenvalues are λ1 = 0, λ2 = −1, and λ3 = 5.
1.13. We used a computer to calculate the characteristic polynomial of matrix A:

pA(λ) = −λ³ + 3λ² + 13λ − 15.

A computer was used to calculate the eigenvalues: λ1 = −3, λ2 = 1, and λ3 = 5. Next, a computer was used to draw the plot of pA.

[Graph of pA(λ) for −4 ≤ λ ≤ 6, crossing the horizontal axis at −3, 1, and 5.]

The graph of the characteristic polynomial appears to cross the horizontal axis at −3, 1, and 5. Thus, the zeros of the characteristic polynomial pA are the eigenvalues of the matrix A. In a similar manner, the characteristic polynomial of matrix B is

pB(λ) = −λ³ − 3λ² + 13λ + 15.

A computer was used to calculate the eigenvalues: λ1 = −5, λ2 = −1, and λ3 = 3. A computer-drawn graph of pB follows.

[Graph of pB(λ) for −6 ≤ λ ≤ 4, crossing the horizontal axis at −5, −1, and 3.]

The graph of the characteristic polynomial pB crosses the horizontal axis at −5, −1, and 3. Again, the zeros of the polynomial are the eigenvalues.
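The graphical observation can be confirmed arithmetically: substituting each eigenvalue into the corresponding polynomial should give zero. A minimal check in plain Python:

```python
def p_A(lam):
    # characteristic polynomial of matrix A, as computed above
    return -lam**3 + 3 * lam**2 + 13 * lam - 15

def p_B(lam):
    # characteristic polynomial of matrix B, as computed above
    return -lam**3 - 3 * lam**2 + 13 * lam + 15

print([p_A(lam) for lam in (-3, 1, 5)])   # [0, 0, 0]
print([p_B(lam) for lam in (-5, -1, 3)])  # [0, 0, 0]
```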
1.14. The matrix

A = [5, 4; −8, −7]

has characteristic polynomial p(λ) = λ² + 2λ − 3. Note that

p(A) = A² + 2A − 3I
     = [−7, −8; 16, 17] + [10, 8; −16, −14] + [−3, 0; 0, −3]
     = [0, 0; 0, 0].

1.15. Using MATLAB, for example, you would execute the commands

>> A=[12,14;-7,-9]; p=poly(A); polyvalm(p,A)

for the matrix in Exercise 9.1.1. This will result in the zero matrix. A similar command works for the matrices in the other problems.
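The MATLAB commands above can be mimicked in plain Python for the matrix of Exercise 1.14; the helper names below are my own. By the Cayley–Hamilton theorem, p(A) = A² + 2A − 3I should come out as the zero matrix.

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add2(*Ms):
    """Entrywise sum of any number of 2x2 matrices."""
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def scal2(c, M):
    """Scalar multiple of a 2x2 matrix."""
    return [[c * M[i][j] for j in range(2)] for i in range(2)]

# Exercise 1.14: p(lam) = lam^2 + 2*lam - 3, so p(A) = A^2 + 2A - 3I.
A = [[5, 4], [-8, -7]]
I = [[1, 0], [0, 1]]
p_of_A = mat_add2(matmul2(A, A), scal2(2, A), scal2(-3, I))
print(p_of_A)  # [[0, 0], [0, 0]]
```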
1.16. If

A = [2, 0; −4, −2],

then

p(λ) = det [2 − λ, 0; −4, −2 − λ] = (2 − λ)(−2 − λ).

Thus, the eigenvalues are λ1 = 2 and λ2 = −2. For λ1 = 2,

A − 2I = [0, 0; −4, −4]

and v1 = (1, −1)T is an eigenvector. Thus,

y1(t) = e^{2t}(1, −1)T

is a solution. For λ2 = −2,

A + 2I = [4, 0; −4, 0],

and v2 = (0, 1)T is an eigenvector. Thus,

y2(t) = e^{−2t}(0, 1)T

is a solution. Because y1(0) = (1, −1)T and y2(0) = (0, 1)T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
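The two claims in this solution — that each pair (λ, v) satisfies Av = λv, and that the initial vectors are independent — can both be checked directly. A minimal sketch (helper names are mine):

```python
def matvec2(A, v):
    """2x2 matrix times a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

A = [[2, 0], [-4, -2]]
v1, lam1 = [1, -1], 2
v2, lam2 = [0, 1], -2

# Av = lam*v confirms each eigenpair, so y(t) = e^(lam t) v solves y' = Ay.
print(matvec2(A, v1) == [lam1 * x for x in v1])  # True
print(matvec2(A, v2) == [lam2 * x for x in v2])  # True

# Independence of y1(0) and y2(0): det of the matrix [v1 v2] is nonzero.
det = v1[0] * v2[1] - v2[0] * v1[1]
print(det)  # 1
```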
1.17. If

A = [6, −8; 0, −2],

then

p(λ) = det [6 − λ, −8; 0, −2 − λ]
     = −12 − 6λ + 2λ + λ²
     = λ² − 4λ − 12
     = (λ − 6)(λ + 2).

Thus, the eigenvalues are λ1 = 6 and λ2 = −2. For λ1 = 6,

A − 6I = [0, −8; 0, −8].

It is easily seen that the nullspace of A − 6I is generated by the vector (1, 0)T. Thus,

y1(t) = e^{6t}(1, 0)T

is a solution. For λ2 = −2,

A + 2I = [8, −8; 0, 0].

It is easily seen that the nullspace of A + 2I is generated by the vector (1, 1)T. Thus,

y2(t) = e^{−2t}(1, 1)T

is a solution. Because y1(0) = (1, 0)T and y2(0) = (1, 1)T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.18. If

A = [−3, −4; 2, 3],

then

p(λ) = det [−3 − λ, −4; 2, 3 − λ]
     = (−3 − λ)(3 − λ) + 8
     = λ² − 1
     = (λ + 1)(λ − 1).

Thus, λ1 = −1 and λ2 = 1 are eigenvalues. For λ1 = −1,

A + I = [−2, −4; 2, 4]

and v1 = (−2, 1)T is an eigenvector. Thus,

y1(t) = e^{−t}(−2, 1)T

is a solution. For λ2 = 1,

A − I = [−4, −4; 2, 2]

and v2 = (1, −1)T is an eigenvector. Thus,

y2(t) = e^{t}(1, −1)T

is a solution. Because y1(0) = (−2, 1)T and y2(0) = (1, −1)T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.19. If

A = [−1, 0; 0, −1],

then

p(λ) = det [−1 − λ, 0; 0, −1 − λ]
     = (−1 − λ)²
     = (1 + λ)².

Thus, λ = −1 is an eigenvalue. For λ = −1,

A + I = [0, 0; 0, 0].

It is easily seen that both (1, 0)T and (0, 1)T are elements of the nullspace of A + I. Thus,

y1(t) = e^{−t}(1, 0)T and y2(t) = e^{−t}(0, 1)T

are solutions. Because y1(0) = (1, 0)T and y2(0) = (0, 1)T are independent, y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.20. If

A = [3, −2; 4, −3],

then

p(λ) = det [3 − λ, −2; 4, −3 − λ]
     = (3 − λ)(−3 − λ) + 8
     = λ² − 1
     = (λ + 1)(λ − 1).

Thus, λ1 = −1 and λ2 = 1 are eigenvalues. For λ1 = −1,

A + I = [4, −2; 4, −2]

and v1 = (1, 2)T is an eigenvector. Thus,

y1(t) = e^{−t}(1, 2)T

is a solution. For λ2 = 1,

A − I = [2, −2; 4, −4]

and v2 = (1, 1)T is an eigenvector. Thus,

y2(t) = e^{t}(1, 1)T

is a solution. Because y1(0) = (1, 2)T and y2(0) = (1, 1)T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.21. If

A = [7, 10; −5, −8],

then

p(λ) = det [7 − λ, 10; −5, −8 − λ]
     = −56 − 7λ + 8λ + λ² + 50
     = λ² + λ − 6
     = (λ + 3)(λ − 2).

Thus, λ1 = −3 and λ2 = 2 are eigenvalues. For λ1 = −3,

A + 3I = [10, 10; −5, −5].

It is easily seen that the nullspace of A + 3I is generated by (1, −1)T. Thus,

y1(t) = e^{−3t}(1, −1)T

is a solution. For λ2 = 2,

A − 2I = [5, 10; −5, −10].

It is easily seen that the nullspace of A − 2I is generated by (2, −1)T. Thus,

y2(t) = e^{2t}(2, −1)T

is a solution. Because y1(0) = (1, −1)T and y2(0) = (2, −1)T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.22. If

A = [−3, 14; 0, 4],

then

p(λ) = det [−3 − λ, 14; 0, 4 − λ]
     = (−3 − λ)(4 − λ).

Thus, λ1 = −3 and λ2 = 4 are eigenvalues. For λ1 = −3,

A + 3I = [0, 14; 0, 7]

and v1 = (1, 0)T is an eigenvector. Thus,

y1(t) = e^{−3t}(1, 0)T

is a solution. For λ2 = 4,

A − 4I = [−7, 14; 0, 0]

and v2 = (2, 1)T is an eigenvector. Thus,

y2(t) = e^{4t}(2, 1)T

is a solution. Because y1(0) = (1, 0)T and y2(0) = (2, 1)T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.23. If

A = [5, −4; 8, −7],

then

p(λ) = det [5 − λ, −4; 8, −7 − λ]
     = −35 − 5λ + 7λ + λ² + 32
     = λ² + 2λ − 3
     = (λ + 3)(λ − 1).

Thus, λ1 = −3 and λ2 = 1 are eigenvalues. For λ1 = −3,

A + 3I = [8, −4; 8, −4].

It is easily seen that the nullspace of A + 3I is generated by (1, 2)T. Thus,

y1(t) = e^{−3t}(1, 2)T

is a solution. For λ2 = 1,

A − I = [4, −4; 8, −8].

It is easily seen that the nullspace of A − I is generated by (1, 1)T. Thus,

y2(t) = e^{t}(1, 1)T

is a solution. Because y1(0) = (1, 2)T and y2(0) = (1, 1)T are independent, the solutions y1(t) and y2(t) are independent for all t and form a fundamental set of solutions.
1.24. If

A = [−5, 0, −6; 26, −3, 38; 4, 0, 5],

then, expanding down the second column,

p(λ) = det [−5 − λ, 0, −6; 26, −3 − λ, 38; 4, 0, 5 − λ]
     = (−3 − λ) det [−5 − λ, −6; 4, 5 − λ]
     = (−3 − λ)((−5 − λ)(5 − λ) + 24)
     = (−3 − λ)(λ² − 1)
     = −(λ + 3)(λ + 1)(λ − 1).

Thus, λ1 = −3, λ2 = −1, and λ3 = 1 are eigenvalues. For λ1 = −3,

A + 3I = [−2, 0, −6; 26, 0, 38; 4, 0, 8],

which has reduced row echelon form

[1, 0, 0; 0, 0, 1; 0, 0, 0].

Thus, v1 = (0, 1, 0)T is an eigenvector and

y1(t) = e^{−3t}(0, 1, 0)T

is a solution. For λ2 = −1,

A + I = [−4, 0, −6; 26, −2, 38; 4, 0, 6],

which has reduced row echelon form

[1, 0, 3/2; 0, 1, 1/2; 0, 0, 0].

Thus, v2 = (−3, −1, 2)T is an eigenvector and

y2(t) = e^{−t}(−3, −1, 2)T

is a solution. For λ3 = 1,

A − I = [−6, 0, −6; 26, −4, 38; 4, 0, 4],

which has reduced row echelon form

[1, 0, 1; 0, 1, −3; 0, 0, 0].

Thus, v3 = (−1, 3, 1)T is an eigenvector and

y3(t) = e^{t}(−1, 3, 1)T

is a solution. Because

det[y1(0), y2(0), y3(0)] = det [0, −3, −1; 1, −1, 3; 0, 2, 1] = 1,

the solutions y1(t), y2(t), and y3(t) are independent for all t and form a fundamental set of solutions.
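The three eigenpairs claimed above can be verified by checking Av = λv componentwise; the sketch below is plain Python with my own helper name.

```python
def matvec3(A, v):
    """3x3 matrix times a 3-vector."""
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

# Matrix and eigenpairs from Exercise 1.24.
A = [[-5, 0, -6], [26, -3, 38], [4, 0, 5]]
pairs = [(-3, [0, 1, 0]), (-1, [-3, -1, 2]), (1, [-1, 3, 1])]

# Each pair satisfies Av = lam*v, so y_i(t) = e^(lam_i t) v_i solves y' = Ay.
for lam, v in pairs:
    print(matvec3(A, v) == [lam * x for x in v])  # True (printed three times)
```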
1.25. If

A = [−1, 0, 0; 2, −5, −6; −2, 3, 4],

then, expanding across the first row,

p(λ) = det [−1 − λ, 0, 0; 2, −5 − λ, −6; −2, 3, 4 − λ]
     = (−1 − λ) det [−5 − λ, −6; 3, 4 − λ]
     = (−1 − λ)(−20 + 5λ − 4λ + λ² + 18)
     = −(λ + 1)(λ² + λ − 2)
     = −(λ + 1)(λ + 2)(λ − 1).

Thus, λ1 = −1, λ2 = −2, and λ3 = 1 are eigenvalues. For λ1 = −1,

A + I = [0, 0, 0; 2, −4, −6; −2, 3, 5],

which has reduced row echelon form

[1, 0, −1; 0, 1, 1; 0, 0, 0].

It is easily seen that the nullspace of A + I is generated by (1, −1, 1)T. Thus,

y1(t) = e^{−t}(1, −1, 1)T

is a solution. For λ2 = −2,

A + 2I = [1, 0, 0; 2, −3, −6; −2, 3, 6],

which has reduced row echelon form

[1, 0, 0; 0, 1, 2; 0, 0, 0].

It is easily seen that the nullspace of A + 2I is generated by (0, −2, 1)T. Thus,

y2(t) = e^{−2t}(0, −2, 1)T

is a solution. For λ3 = 1,

A − I = [−2, 0, 0; 2, −6, −6; −2, 3, 3],

which has reduced row echelon form

[1, 0, 0; 0, 1, 1; 0, 0, 0].

It is easily seen that the nullspace of A − I is generated by (0, −1, 1)T. Thus,

y3(t) = e^{t}(0, −1, 1)T

is a solution. Because

det[y1(0), y2(0), y3(0)] = det [1, 0, 0; −1, −2, −1; 1, 1, 1] = −1,

the solutions y1(t), y2(t), and y3(t) are independent for all t and form a fundamental set of solutions.
1.26. If

A = [−1, 2, 0; −19, 14, 18; 17, −11, −17],

then, expanding across the first row,

p(λ) = det [−1 − λ, 2, 0; −19, 14 − λ, 18; 17, −11, −17 − λ]
     = (−1 − λ) det [14 − λ, 18; −11, −17 − λ] − 2 det [−19, 18; 17, −17 − λ]
     = −(λ + 1)(λ² + 3λ − 40) − 2(19λ + 17)
     = −λ³ − 4λ² − λ + 6
     = −(λ − 1)(λ + 3)(λ + 2).

Thus, λ1 = 1, λ2 = −3, and λ3 = −2 are eigenvalues. For λ1 = 1,

A − I = [−2, 2, 0; −19, 13, 18; 17, −11, −18],

which has reduced row echelon form

[1, 0, −3; 0, 1, −3; 0, 0, 0].

Thus, v1 = (3, 3, 1)T is an eigenvector and

y1(t) = e^{t}(3, 3, 1)T

is a solution. For λ2 = −3,

A + 3I = [2, 2, 0; −19, 17, 18; 17, −11, −14],

which has reduced row echelon form

[1, 0, −1/2; 0, 1, 1/2; 0, 0, 0].

Thus, v2 = (1, −1, 2)T is an eigenvector and

y2(t) = e^{−3t}(1, −1, 2)T

is a solution. For λ3 = −2,

A + 2I = [1, 2, 0; −19, 16, 18; 17, −11, −15],

which has reduced row echelon form

[1, 0, −2/3; 0, 1, 1/3; 0, 0, 0].

Thus, v3 = (2, −1, 3)T is an eigenvector and

y3(t) = e^{−2t}(2, −1, 3)T

is a solution. Because

det[y1(0), y2(0), y3(0)] = det [3, 1, 2; 3, −1, −1; 1, 2, 3] = 1,

the solutions y1(t), y2(t), and y3(t) are independent for all t and form a fundamental set of solutions.
1.27. If

A = [−3, 0, 2; 6, 3, −12; 2, 2, −6],

then, expanding across the first row,

p(λ) = det [−3 − λ, 0, 2; 6, 3 − λ, −12; 2, 2, −6 − λ]
     = (−3 − λ) det [3 − λ, −12; 2, −6 − λ] + 2 det [6, 3 − λ; 2, 2]
     = (−3 − λ)(λ² + 3λ + 6) + 2(6 + 2λ)
     = −(λ + 3)(λ² + 3λ + 6) + 4(λ + 3)
     = (λ + 3)(−λ² − 3λ − 6 + 4)
     = −(λ + 3)(λ² + 3λ + 2)
     = −(λ + 3)(λ + 2)(λ + 1).

Thus, λ1 = −3, λ2 = −2, and λ3 = −1 are eigenvalues. For λ1 = −3,

A + 3I = [0, 0, 2; 6, 6, −12; 2, 2, −3],

which has reduced row echelon form

[1, 1, 0; 0, 0, 1; 0, 0, 0].

It is easily seen that the nullspace of A + 3I is generated by (−1, 1, 0)T. Thus,

y1(t) = e^{−3t}(−1, 1, 0)T

is a solution. For λ2 = −2,

A + 2I = [−1, 0, 2; 6, 5, −12; 2, 2, −4],

which has reduced row echelon form

[1, 0, −2; 0, 1, 0; 0, 0, 0].

It is easily seen that the nullspace of A + 2I is generated by (2, 0, 1)T. Thus,

y2(t) = e^{−2t}(2, 0, 1)T

is a solution. For λ3 = −1,

A + I = [−2, 0, 2; 6, 4, −12; 2, 2, −5],

which has reduced row echelon form

[1, 0, −1; 0, 1, −3/2; 0, 0, 0].

It is easily seen that the nullspace of A + I is generated by (1, 3/2, 1)T. Thus,

y3(t) = e^{−t}(1, 3/2, 1)T

is a solution. Because

det[y1(0), y2(0), y3(0)] = det [−1, 2, 1; 1, 0, 3/2; 0, 1, 1] = 1/2,

the solutions y1(t), y2(t), and y3(t) are independent for all t and form a fundamental set of solutions.

1.28.–1.37. In each of these exercises, a computer is used to find the eigenvalue–eigenvector pairs of the given matrix.

1.38.–1.47. In each of these exercises, a computer is used to find a fundamental set of solutions of the given system.
1.48. If v and w are eigenvectors associated to the eigenvalue λ, then

Av = λv and Aw = λw.

Thus, if y = av + bw, then

Ay = A(av + bw)
   = A(av) + A(bw)
   = a(Av) + b(Aw)
   = a(λv) + b(λw)
   = λ(av + bw)
   = λy.

Thus, y = av + bw, provided it is nonzero, is also an eigenvector associated with λ.
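The argument can be illustrated numerically with a matrix that has a two-dimensional eigenspace. The matrix below is my own example (diagonal, with λ = 2 repeated), not one from the text.

```python
def matvec3(A, v):
    """3x3 matrix times a 3-vector."""
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

# Both v and w are eigenvectors of A for the eigenvalue lam = 2.
A = [[2, 0, 0], [0, 2, 0], [0, 0, 5]]
v, w = [1, 0, 0], [0, 1, 0]
a, b = 3, -4
y = [a * v[i] + b * w[i] for i in range(3)]

# Ay = 2y, so the combination y = a*v + b*w is again an eigenvector for lam = 2.
print(matvec3(A, y) == [2 * x for x in y])  # True
```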
1.49. If

A = [6, −8; 4, −6],

then A has eigenvalues 2 and −2, and determinant D = −4. Note that the product of the eigenvalues equals the determinant. If

B = [−11, −16; 8, 13],

then B has eigenvalues −3 and 5, and determinant D = −15. Note that the product of the eigenvalues equals the determinant. If

C = [7, −21, −11; 5, −13, −5; −5, 9, 1],

then C has eigenvalues 2, −3, and −4, and determinant D = 24. Note that the product of the eigenvalues equals the determinant.

1.50. In the case

B = [−11, −16; 8, 13],

the eigenvalues are λ1 = 5 and λ2 = −3. Thus,

λ1 + λ2 = 2.

The trace of B is also

tr(B) = −11 + 13 = 2.

Thus, the trace of matrix B equals the sum of its eigenvalues. This statement is also true when applied to the matrices A and C.
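Both observations can be confirmed for B with two lines of arithmetic; a minimal check in plain Python:

```python
# Exercises 1.49 and 1.50: B has eigenvalues 5 and -3.
B = [[-11, -16], [8, 13]]
det_B = B[0][0] * B[1][1] - B[0][1] * B[1][0]
tr_B = B[0][0] + B[1][1]
print(det_B == 5 * (-3))  # True: determinant equals the product of eigenvalues
print(tr_B == 5 + (-3))   # True: trace equals the sum of eigenvalues
```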
1.51. If

A = [2, 3; 0, −4],

then the eigenvalues of A are 2 and −4. Note that the eigenvalues lie on the main diagonal. If

B = [1, 2, 3; 0, −1, 4; 0, 0, 5],

then the eigenvalues of B are 1, −1, and 5. Note that the eigenvalues lie on the main diagonal. If

C = [2, −1, 1, 1; 0, 3, −1, 0; 0, 0, −4, 1; 0, 0, 0, 2],

then the eigenvalues of C are 2, 3, −4, and 2. Note that the eigenvalues lie on the main diagonal. Here is an example of a lower triangular matrix:

[1, 0, 0; 2, −2, 0; 3, 1, 4].

A computer shows that the eigenvalues are 1, −2, and 4. Again, note that the main diagonal contains the eigenvalues.
1.52. Consider an n × n matrix A that is upper triangular (aij = 0 for i > j). Then

p(λ) = det(A − λI)
     = det [a11 − λ, a12, ..., a1n; 0, a22 − λ, ..., a2n; ...; 0, 0, ..., ann − λ].

Expanding down the first column,

p(λ) = (a11 − λ) det [a22 − λ, a23, ..., a2n; 0, a33 − λ, ..., a3n; ...; 0, 0, ..., ann − λ].

Expanding down the first column again,

p(λ) = (a11 − λ)(a22 − λ) det [a33 − λ, a34, ..., a3n; 0, a44 − λ, ..., a4n; ...; 0, 0, ..., ann − λ].

Continuing in this manner,

p(λ) = (a11 − λ)(a22 − λ)(a33 − λ) · · · (ann − λ),

and the eigenvalues are λ1 = a11, λ2 = a22, λ3 = a33, ..., and λn = ann.
1.53. If

V = [−2, 1; 1, 0] and D = [−2, 0; 0, 3],

then V⁻¹ = [0, 1; 1, 2] and

V D V⁻¹ = [−2, 1; 1, 0] [−2, 0; 0, 3] [0, 1; 1, 2]
        = [4, 3; −2, 0] [0, 1; 1, 2]
        = [3, 10; 0, −2]
        = A.

1.54. If

A = [6, 0; 8, −2],

then a computer reveals the following eigenvalue–eigenvector pairs:

−2 → (0, 1)T and 6 → (1, 1)T.

Thus, the matrices

V = [0, 1; 1, 1] and D = [−2, 0; 0, 6]

diagonalize matrix A. That is, A = V D V⁻¹.
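The factorization claimed in Exercise 1.54 can be multiplied out directly. The 2 × 2 inverse below is computed by hand from the adjugate formula (det V = −1); helper names are mine.

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Exercise 1.54: A should factor as V D V^{-1}.
A = [[6, 0], [8, -2]]
V = [[0, 1], [1, 1]]
D = [[-2, 0], [0, 6]]
V_inv = [[-1, 1], [1, 0]]  # inverse of V; det V = -1

print(matmul2(matmul2(V, D), V_inv))  # [[6, 0], [8, -2]]
```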
1.55. If

A = [−1, −2; 4, −7],

then a computer reveals the following eigenvalue–eigenvector pairs:

−5 → (1, 2)T and −3 → (1, 1)T.

Thus, the matrices

V = [1, 1; 2, 1] and D = [−5, 0; 0, −3]

diagonalize matrix A. That is, A = V D V⁻¹.

1.56. The matrix

A = [5, 1; −1, 3]

has a repeated eigenvalue λ = 4 but only 1 independent eigenvector v = (1, −1)T.

Section 2. Planar Systems
2.1. The matrix

A = [2, −6; 0, −1]

has the following eigenvalue–eigenvector pairs:

λ1 = 2 → (1, 0)T and λ2 = −1 → (2, 1)T.

Thus, the general solution is

y(t) = C1 e^{2t}(1, 0)T + C2 e^{−t}(2, 1)T.
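The general solution can be checked at a sample time: differentiating each term gives λ e^{λt} v, which should agree with A y(t). A minimal sketch for the system above (the constants and the time below are arbitrary test values of my choosing):

```python
import math

A = [[2, -6], [0, -1]]
v1, lam1 = [1, 0], 2
v2, lam2 = [2, 1], -1
C1, C2, t = 1.5, -0.75, 0.3

y = [C1 * math.exp(lam1 * t) * v1[i] + C2 * math.exp(lam2 * t) * v2[i]
     for i in range(2)]
y_prime = [C1 * lam1 * math.exp(lam1 * t) * v1[i] + C2 * lam2 * math.exp(lam2 * t) * v2[i]
           for i in range(2)]
Ay = [A[0][0] * y[0] + A[0][1] * y[1], A[1][0] * y[0] + A[1][1] * y[1]]

# y' and Ay agree up to floating-point roundoff, confirming y' = Ay.
print(all(abs(y_prime[i] - Ay[i]) < 1e-12 for i in range(2)))  # True
```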
2.2. The matrix

A = [−1, 6; −3, 8]

has the following eigenvalue–eigenvector pairs:

λ1 = 2 → (2, 1)T and λ2 = 5 → (1, 1)T.

Thus, the general solution is

y(t) = C1 e^{2t}(2, 1)T + C2 e^{5t}(1, 1)T.
2.3. The matrix

A = [−5, 1; −2, −2]

has the following eigenvalue–eigenvector pairs:

λ1 = −4 → (1, 1)T and λ2 = −3 → (1, 2)T.

Thus, the general solution is

y(t) = C1 e^{−4t}(1, 1)T + C2 e^{−3t}(1, 2)T.
2.4. The matrix

A = [−3, −6; 0, −1]

has the following eigenvalue–eigenvector pairs:

λ1 = −3 → (1, 0)T and λ2 = −1 → (−3, 1)T.

Thus, the general solution is

y(t) = C1 e^{−3t}(1, 0)T + C2 e^{−t}(−3, 1)T.
2.5. The matrix

A = [1, 2; −1, 4]

has the following eigenvalue–eigenvector pairs:

λ1 = 2 → (2, 1)T and λ2 = 3 → (1, 1)T.

Thus, the general solution is

y(t) = C1 e^{2t}(2, 1)T + C2 e^{3t}(1, 1)T.
2.6. The matrix

A = [−1, 1; 1, −1]

has the following eigenvalue–eigenvector pairs:

λ1 = 0 → (1, 1)T and λ2 = −2 → (−1, 1)T.

Thus, the general solution is

y(t) = C1 (1, 1)T + C2 e^{−2t}(−1, 1)T.

2.7. The system in Exercise 1 had general solution

y(t) = C1 e^{2t}(1, 0)T + C2 e^{−t}(2, 1)T.
Thus, if y(0) = (0, 1)T, then

C1 (1, 0)T + C2 (2, 1)T = [1, 2; 0, 1](C1, C2)T = (0, 1)T.

The augmented matrix reduces:

[1, 2, 0; 0, 1, 1] → [1, 0, −2; 0, 1, 1].

Therefore, C1 = −2 and C2 = 1, giving particular solution

y(t) = −2 e^{2t}(1, 0)T + e^{−t}(2, 1)T.

2.8. The system in Exercise 2 had the general solution

y(t) = C1 e^{2t}(2, 1)T + C2 e^{5t}(1, 1)T.

Thus, if y(0) = (1, −2)T, then

[2, 1; 1, 1](C1, C2)T = (1, −2)T.

The augmented matrix reduces:

[2, 1, 1; 1, 1, −2] → [1, 0, 3; 0, 1, −5].

Thus, C1 = 3 and C2 = −5, giving particular solution

y(t) = 3 e^{2t}(2, 1)T − 5 e^{5t}(1, 1)T.

2.9. The system in Exercise 3 had general solution

y(t) = C1 e^{−4t}(1, 1)T + C2 e^{−3t}(1, 2)T.

Thus, if y(0) = (0, −1)T, then

[1, 1; 1, 2](C1, C2)T = (0, −1)T.

The augmented matrix reduces:

[1, 1, 0; 1, 2, −1] → [1, 0, 1; 0, 1, −1].

Therefore, C1 = 1 and C2 = −1, giving particular solution

y(t) = e^{−4t}(1, 1)T − e^{−3t}(1, 2)T.

2.10. The system in Exercise 4 had the general solution

y(t) = C1 e^{−3t}(1, 0)T + C2 e^{−t}(−3, 1)T.

Thus, if y(0) = (1, 1)T, then

[1, −3; 0, 1](C1, C2)T = (1, 1)T.

The augmented matrix reduces:

[1, −3, 1; 0, 1, 1] → [1, 0, 4; 0, 1, 1].

Thus, C1 = 4 and C2 = 1, giving particular solution

y(t) = 4 e^{−3t}(1, 0)T + e^{−t}(−3, 1)T.
2.11. The system in Exercise 5 had general solution

y(t) = C1 e^{2t}(2, 1)T + C2 e^{3t}(1, 1)T.

Thus, if y(0) = (3, 2)T, then

[2, 1; 1, 1](C1, C2)T = (3, 2)T.

The augmented matrix reduces:

[2, 1, 3; 1, 1, 2] → [1, 0, 1; 0, 1, 1].

Therefore, C1 = 1 and C2 = 1, giving particular solution

y(t) = e^{2t}(2, 1)T + e^{3t}(1, 1)T.

2.12. The system in Exercise 6 had the general solution

y(t) = C1 (1, 1)T + C2 e^{−2t}(−1, 1)T.

Thus, if y(0) = (1, 5)T, then

[1, −1; 1, 1](C1, C2)T = (1, 5)T.

The augmented matrix reduces:

[1, −1, 1; 1, 1, 5] → [1, 0, 3; 0, 1, 2].

Thus, C1 = 3 and C2 = 2, giving particular solution

y(t) = 3 (1, 1)T + 2 e^{−2t}(−1, 1)T.
2.13. If

A = [1, 1 − i; 2i, 1 + i], B = [i, −3; 1, 2 − i], and z = (1, 1 − i)T,

then

Az = [1, 1 − i; 2i, 1 + i](1, 1 − i)T = (1 − 2i, 2 + 2i)T,

so conj(Az) = (1 + 2i, 2 − 2i)T. On the other hand,

conj(A) conj(z) = [1, 1 + i; −2i, 1 − i](1, 1 + i)T = (1 + 2i, 2 − 2i)T.

Therefore, conj(Az) = conj(A) conj(z). Next,

AB = [1, 1 − i; 2i, 1 + i][i, −3; 1, 2 − i] = [1, −2 − 3i; −1 + i, 3 − 5i],

so conj(AB) = [1, −2 + 3i; −1 − i, 3 + 5i]. On the other hand,

conj(A) conj(B) = [1, 1 + i; −2i, 1 − i][−i, −3; 1, 2 + i] = [1, −2 + 3i; −1 − i, 3 + 5i].

Therefore, conj(AB) = conj(A) conj(B).
2.14. If z = (z1, z2, ..., zn)T and w = (w1, w2, ..., wn)T, then

conj(z + w) = conj((z1 + w1, z2 + w2, ..., zn + wn)T)
            = (conj(z1 + w1), conj(z2 + w2), ..., conj(zn + wn))T
            = (conj(z1) + conj(w1), conj(z2) + conj(w2), ..., conj(zn) + conj(wn))T
            = conj(z) + conj(w).
2.15. Let α be a complex number and let z = (z1, z2, ..., zn)T. Then

conj(αz) = conj((αz1, αz2, ..., αzn)T)
         = (conj(αz1), conj(αz2), ..., conj(αzn))T
         = (conj(α)conj(z1), conj(α)conj(z2), ..., conj(α)conj(zn))T
         = conj(α) conj(z).
2.16. If A is n × n with real entries and z = (z1, z2, ..., zn)T, write A = [a1, a2, ..., an] in terms of its columns. Then

conj(Az) = conj(z1 a1 + z2 a2 + · · · + zn an)
         = conj(z1) a1 + conj(z2) a2 + · · · + conj(zn) an   (the columns ak are real)
         = [a1, a2, ..., an] (conj(z1), conj(z2), ..., conj(zn))T
         = A conj(z).
2.17. If A and B are m × n and n × p matrices, with possibly complex entries, write B = [b1, b2, ..., bp] in terms of its columns. Then

conj(AB) = conj([Ab1, Ab2, ..., Abp])
         = [conj(Ab1), conj(Ab2), ..., conj(Abp)]
         = [conj(A)conj(b1), conj(A)conj(b2), ..., conj(A)conj(bp)]
         = conj(A) conj(B).
2.18. If z(t) = x(t) + iy(t), then

conj(z)′(t) = (x(t) − iy(t))′
            = x′(t) − iy′(t)
            = conj(x′(t) + iy′(t))
            = conj(z′(t)).
2.19. If z = x + iy, then

(1/2)(z + conj(z)) = (1/2)(x + iy + x − iy)
                   = (1/2)(2x)
                   = x.

Secondly,

(1/(2i))(z − conj(z)) = (1/(2i))(x + iy − (x − iy))
                      = (1/(2i))(x + iy − x + iy)
                      = (1/(2i))(2iy)
                      = y.
2.20. If z(t) = e^{2it}(1, 1 + i)T, then

z(t) = (cos 2t + i sin 2t)[(1, 1)T + i(0, 1)T]
     = [cos 2t (1, 1)T − sin 2t (0, 1)T] + i[cos 2t (0, 1)T + sin 2t (1, 1)T]
     = (cos 2t, cos 2t − sin 2t)T + i (sin 2t, cos 2t + sin 2t)T.

Therefore, Re(z(t)) = (cos 2t, cos 2t − sin 2t)T and Im(z(t)) = (sin 2t, cos 2t + sin 2t)T.
2.21. If z(t) = e^{(1+i)t}(−1 + i, 2)T, then

z(t) = e^t (cos t + i sin t)[(−1, 2)T + i(1, 0)T]
     = e^t [cos t (−1, 2)T − sin t (1, 0)T] + i e^t [cos t (1, 0)T + sin t (−1, 2)T]
     = e^t (−cos t − sin t, 2 cos t)T + i e^t (cos t − sin t, 2 sin t)T.

Therefore, Re(z(t)) = e^t (−cos t − sin t, 2 cos t)T and Im(z(t)) = e^t (cos t − sin t, 2 sin t)T.
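The Euler-formula split above can be checked numerically at a sample time with Python's `cmath`; the time value below is an arbitrary choice of mine.

```python
import cmath
import math

# Exercise 2.21: compare z(t) = e^((1+i)t) (-1+i, 2)^T with the stated
# real and imaginary parts at a sample time t.
t = 0.7
z = [cmath.exp((1 + 1j) * t) * w for w in (-1 + 1j, 2)]

re_claim = [math.exp(t) * (-math.cos(t) - math.sin(t)), math.exp(t) * 2 * math.cos(t)]
im_claim = [math.exp(t) * (math.cos(t) - math.sin(t)), math.exp(t) * 2 * math.sin(t)]

print(all(abs(z[i].real - re_claim[i]) < 1e-12 for i in range(2)))  # True
print(all(abs(z[i].imag - im_claim[i]) < 1e-12 for i in range(2)))  # True
```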
2.22. If z(t) = e^{3it}(−1 − i, 2)T, then

z(t) = (cos 3t + i sin 3t)[(−1, 2)T + i(−1, 0)T]
     = [cos 3t (−1, 2)T − sin 3t (−1, 0)T] + i[cos 3t (−1, 0)T + sin 3t (−1, 2)T]
     = (−cos 3t + sin 3t, 2 cos 3t)T + i (−cos 3t − sin 3t, 2 sin 3t)T.

The real part of z(t) is

y1(t) = (−cos 3t + sin 3t, 2 cos 3t)T,

so

y1′(t) = (3 sin 3t + 3 cos 3t, −6 sin 3t)T.

However, with A = [3, 3; −6, −3],

A y1(t) = [3, 3; −6, −3](−cos 3t + sin 3t, 2 cos 3t)T = (3 sin 3t + 3 cos 3t, −6 sin 3t)T

as well, so y1 is a solution of y′ = Ay. The imaginary part of z(t) is

y2(t) = (−sin 3t − cos 3t, 2 sin 3t)T,

so

y2′(t) = (−3 cos 3t + 3 sin 3t, 6 cos 3t)T.

However,

A y2(t) = [3, 3; −6, −3](−sin 3t − cos 3t, 2 sin 3t)T = (−3 cos 3t + 3 sin 3t, 6 cos 3t)T

as well, so y2 is a solution of y′ = Ay. Finally, because

y1(0) = (−1, 2)T and y2(0) = (−1, 0)T

are independent, y1(t) and y2(t) are independent for all values of t and form a fundamental set of solutions.
2.23. If

A = [−4, −8; 4, 4],

then the characteristic polynomial of A is p(λ) = λ² + 16 and the eigenvalues are λ1 = 4i and λ2 = −4i. Trusting that

A − (4i)I = [−4 − 4i, −8; 4, 4 − 4i]

is singular, examination of the second row shows that (−1 + i, 1)T generates the nullspace of A − (4i)I. Thus, we have a complex solution which we must break into real and imaginary parts:

z(t) = e^{4it}(−1 + i, 1)T
     = (cos 4t + i sin 4t)[(−1, 1)T + i(1, 0)T]
     = [cos 4t (−1, 1)T − sin 4t (1, 0)T] + i[cos 4t (1, 0)T + sin 4t (−1, 1)T]
     = (−cos 4t − sin 4t, cos 4t)T + i (cos 4t − sin 4t, sin 4t)T.

Therefore,

y1(t) = (−cos 4t − sin 4t, cos 4t)T and y2(t) = (cos 4t − sin 4t, sin 4t)T

form a fundamental set of real solutions.
2.24. If

A = [−1, −2; 4, 3],

then the characteristic polynomial is p(λ) = λ² − 2λ + 5 and the eigenvalues are 1 ± 2i. Trusting that

A − (1 + 2i)I = [−2 − 2i, −2; 4, 2 − 2i]

is singular, examination of the first row reveals the eigenvector v = (1, −1 − i)T. Thus,

z(t) = e^{(1+2i)t}(1, −1 − i)T
     = e^t (cos 2t + i sin 2t)[(1, −1)T + i(0, −1)T]
     = e^t [cos 2t (1, −1)T − sin 2t (0, −1)T] + i e^t [cos 2t (0, −1)T + sin 2t (1, −1)T]
     = e^t (cos 2t, −cos 2t + sin 2t)T + i e^t (sin 2t, −cos 2t − sin 2t)T.

Therefore,

y1(t) = e^t (cos 2t, −cos 2t + sin 2t)T and y2(t) = e^t (sin 2t, −cos 2t − sin 2t)T

form a fundamental set of solutions.
2.25. If

A = [−1, 1; −5, −5],

then the characteristic polynomial of A is p(λ) = λ² + 6λ + 10 and the eigenvalues are λ1 = −3 + i and λ2 = −3 − i. Trusting that

A − (−3 + i)I = [2 − i, 1; −5, −2 − i]

is singular, examination of the first row shows that (1, −2 + i)T generates the nullspace of A − (−3 + i)I. Thus, we have a complex solution which we must break into real and imaginary parts:

z(t) = e^{(−3+i)t}(1, −2 + i)T
     = e^{−3t}(cos t + i sin t)[(1, −2)T + i(0, 1)T]
     = e^{−3t}(cos t, −2 cos t − sin t)T + i e^{−3t}(sin t, cos t − 2 sin t)T.

Therefore,

y1(t) = e^{−3t}(cos t, −2 cos t − sin t)T and y2(t) = e^{−3t}(sin t, cos t − 2 sin t)T

form a fundamental set of real solutions.
2.26. The characteristic polynomial of

A = [0, 4; −2, −4]

is p(λ) = λ² + 4λ + 8, which has complex roots λ = −2 ± 2i. For the eigenvalue λ = −2 + 2i, we have the eigenvector w = (−1 − i, 1)T. The corresponding exponential solution is

z(t) = e^{(−2+2i)t}(−1 − i, 1)T
     = e^{−2t}(cos 2t + i sin 2t)[(−1, 1)T + i(−1, 0)T]
     = e^{−2t}[cos 2t (−1, 1)T − sin 2t (−1, 0)T] + i e^{−2t}[cos 2t (−1, 0)T + sin 2t (−1, 1)T].

The real and imaginary parts of z,

y1(t) = e^{−2t}(−cos 2t + sin 2t, cos 2t)T and y2(t) = e^{−2t}(−cos 2t − sin 2t, sin 2t)T,

are a fundamental set of solutions.
2.27. If

A = [−1, 3; −3, −1],

then the characteristic polynomial of A is p(λ) = λ² + 2λ + 10 and the eigenvalues are λ1 = −1 + 3i and λ2 = −1 − 3i. Trusting that

A − (−1 + 3i)I = [−3i, 3; −3, −3i]

is singular, examination of the first row shows that (1, i)T generates the nullspace of A − (−1 + 3i)I. Thus, we have a complex solution which we must break into real and imaginary parts:

z(t) = e^{(−1+3i)t}(1, i)T
     = e^{−t}(cos 3t + i sin 3t)[(1, 0)T + i(0, 1)T]
     = e^{−t}(cos 3t, −sin 3t)T + i e^{−t}(sin 3t, cos 3t)T.

Therefore,

y1(t) = e^{−t}(cos 3t, −sin 3t)T and y2(t) = e^{−t}(sin 3t, cos 3t)T

form a fundamental set of real solutions.
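That the real part really does solve y′ = Ay can be confirmed at a sample time: differentiate y1(t) = e^{−t}(cos 3t, −sin 3t)T by hand and compare with A y1(t). A minimal numerical sketch (the sample time is my own choice):

```python
import math

A = [[-1, 3], [-3, -1]]
t = 0.4
y = [math.exp(-t) * math.cos(3 * t), -math.exp(-t) * math.sin(3 * t)]
# derivative computed by hand from the formula for y1(t)
y_prime = [math.exp(-t) * (-math.cos(3 * t) - 3 * math.sin(3 * t)),
           math.exp(-t) * (math.sin(3 * t) - 3 * math.cos(3 * t))]
Ay = [A[0][0] * y[0] + A[0][1] * y[1], A[1][0] * y[0] + A[1][1] * y[1]]

print(all(abs(y_prime[i] - Ay[i]) < 1e-12 for i in range(2)))  # True
```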
2.28. If

A = [3, −6; 3, 5],

then the characteristic polynomial is p(λ) = λ² − 8λ + 33 and the eigenvalues are 4 ± √17 i. Trusting that

A − λI = [3 − λ, −6; 3, 5 − λ]

is singular, examination of the first row reveals the eigenvector v = (6, 3 − λ)T. Substituting λ = 4 + √17 i gives v = (6, −1 − √17 i)T. Thus,

z(t) = e^{(4+√17 i)t}(6, −1 − √17 i)T
     = e^{4t}(cos √17 t + i sin √17 t)[(6, −1)T + i(0, −√17)T]
     = e^{4t}[cos √17 t (6, −1)T − sin √17 t (0, −√17)T]
       + i e^{4t}[cos √17 t (0, −√17)T + sin √17 t (6, −1)T].

Therefore,

y1(t) = e^{4t}(6 cos √17 t, −cos √17 t + √17 sin √17 t)T and
y2(t) = e^{4t}(6 sin √17 t, −√17 cos √17 t − sin √17 t)T

form a fundamental set of solutions.
2.29. The fundamental solutions found in Exercise 23 allow the formation of the general solution

y(t) = C1 (−cos 4t − sin 4t, cos 4t)T + C2 (cos 4t − sin 4t, sin 4t)T.

If y(0) = (0, 2)T, then

C1 (−1, 1)T + C2 (1, 0)T = (0, 2)T.

The augmented matrix reduces:

[−1, 1, 0; 1, 0, 2] → [1, 0, 2; 0, 1, 2].

Thus, C1 = 2 and C2 = 2 and

y(t) = 2 (−cos 4t − sin 4t, cos 4t)T + 2 (cos 4t − sin 4t, sin 4t)T
     = (−4 sin 4t, 2 cos 4t + 2 sin 4t)T.

2.30. The fundamental solutions found in Exercise 24 allow the formation of the general solution

y(t) = C1 e^t (cos 2t, −cos 2t + sin 2t)T + C2 e^t (sin 2t, −cos 2t − sin 2t)T.

If y(0) = (0, 1)T, then

C1 (1, −1)T + C2 (0, −1)T = (0, 1)T.

The augmented matrix reduces:

[1, 0, 0; −1, −1, 1] → [1, 0, 0; 0, 1, −1].

Thus, C1 = 0 and C2 = −1 and

y(t) = −e^t (sin 2t, −cos 2t − sin 2t)T.

2.31. The fundamental solutions found in Exercise 25 allow the formation of the general solution

y(t) = C1 e^{−3t}(cos t, −2 cos t − sin t)T + C2 e^{−3t}(sin t, cos t − 2 sin t)T.

If y(0) = (1, −5)T, then

C1 (1, −2)T + C2 (0, 1)T = (1, −5)T.

The augmented matrix reduces:

[1, 0, 1; −2, 1, −5] → [1, 0, 1; 0, 1, −3].

Thus, C1 = 1 and C2 = −3 and

y(t) = e^{−3t}(cos t, −2 cos t − sin t)T − 3 e^{−3t}(sin t, cos t − 2 sin t)T
     = e^{−3t}(cos t − 3 sin t, −5 cos t + 5 sin t)T.

2.32. A fundamental set of solutions was found in Exercise 26, so the solution has the form y(t) = C1 y1(t) + C2 y2(t), where

y1(t) = e^{−2t}(−cos 2t + sin 2t, cos 2t)T and y2(t) = e^{−2t}(−cos 2t − sin 2t, sin 2t)T.

At t = 0 we have

y(0) = C1 (−1, 1)T + C2 (−1, 0)T = (−1, 2)T.

This system can be readily solved, getting C1 = 2 and C2 = −1. Hence the solution is

y(t) = 2 y1(t) − y2(t) = e^{−2t}(−cos 2t + 3 sin 2t, 2 cos 2t − sin 2t)T.

2.33. The fundamental solutions found in Exercise 27 allow the formation of the general solution

y(t) = C1 e^{−t}(cos 3t, −sin 3t)T + C2 e^{−t}(sin 3t, cos 3t)T.

If y(0) = (3, 2)T, then

C1 (1, 0)T + C2 (0, 1)T = (3, 2)T.

Thus, C1 = 3 and C2 = 2 and

y(t) = 3 e^{−t}(cos 3t, −sin 3t)T + 2 e^{−t}(sin 3t, cos 3t)T
     = e^{−t}(3 cos 3t + 2 sin 3t, −3 sin 3t + 2 cos 3t)T.
2.34. The fundamental solutions found in Exercise 28 allow the formation of the general solution

y(t) = C1 e^{4t}(6 cos √17 t, −cos √17 t + √17 sin √17 t)T
     + C2 e^{4t}(6 sin √17 t, −√17 cos √17 t − sin √17 t)T.

If y(0) = (1, 3)T, then

C1 (6, −1)T + C2 (0, −√17)T = (1, 3)T.

The augmented matrix reduces:

[6, 0, 1; −1, −√17, 3] → [1, 0, 1/6; 0, 1, −19√17/102].

Thus, C1 = 1/6 and C2 = −19√17/102 and

y(t) = (1/6) e^{4t}(6 cos √17 t, −cos √17 t + √17 sin √17 t)T
     − (19√17/102) e^{4t}(6 sin √17 t, −√17 cos √17 t − sin √17 t)T.
102 2.35. (a) Let A be a 2 × 2 matrix with one eigenvalue λ of multiplicity two. If the eigenspace of λ has dimension
two, then there are two independent eigenvectors v1 and v2 that must span all of R². If (x, y)^T is a vector in R², then

(x, y)^T = a v1 + b v2

for some scalars a and b. Because the eigenspace is a subspace, it is closed under addition and scalar multiplication. That is, any linear combination of two eigenvectors is also an eigenvector. Therefore, (x, y)^T is an eigenvector.
Chapter 9. Linear Systems with Constant Coefficients

(b) Suppose that

A = [a b; c d].

Because all vectors in R² are eigenvectors, e1 is an eigenvector, so

A e1 = λ e1
[a b; c d] (1, 0)^T = λ (1, 0)^T
(a, c)^T = (λ, 0)^T.

Secondly, e2 is an eigenvector, so

A e2 = λ e2
[a b; c d] (0, 1)^T = λ (0, 1)^T
(b, d)^T = (0, λ)^T.

Thus,

A = [λ 0; 0 λ] = λI.

2.36. A is a real 2 × 2 matrix with one eigenvalue λ1 of multiplicity 2. If
y(t) = e^{λ1 t} [v + t(A - λ1 I)v],

then

y(0) = e^0 [v + 0·(A - λ1 I)v] = v.

Moreover, using the product rule,

y′(t) = λ1 e^{λ1 t} [v + t(A - λ1 I)v] + e^{λ1 t} (A - λ1 I)v
      = e^{λ1 t} [λ1 v + t λ1 (A - λ1 I)v + (A - λ1 I)v]
      = e^{λ1 t} [Av + t λ1 (A - λ1 I)v],

since λ1 v + (A - λ1 I)v = Av. However, because A has a repeated eigenvalue λ1, its characteristic polynomial has the form

p(λ) = (λ - λ1)².

By the Cayley-Hamilton Theorem,

(A - λ1 I)² = 0,

and

(A - λ1 I)(A - λ1 I) = 0
A(A - λ1 I) - λ1 (A - λ1 I) = 0
A(A - λ1 I) = λ1 (A - λ1 I).

Thus, we can write

A y(t) = A e^{λ1 t} [v + t(A - λ1 I)v]
       = e^{λ1 t} [Av + t A(A - λ1 I)v]
       = e^{λ1 t} [Av + t λ1 (A - λ1 I)v].

Therefore, y′(t) = A y(t), and y(t) = e^{λ1 t}[v + t(A - λ1 I)v] is a solution of y′ = Ay with y(0) = v.
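The identity just proved can be checked numerically. The sketch below uses an arbitrary example matrix with a repeated eigenvalue (it is not one of the matrices from the text) and verifies both the Cayley-Hamilton consequence and y′ = Ay by a finite difference:

```python
import numpy as np

# Numerical check of Exercise 2.36; the matrix below is an arbitrary
# example with a repeated eigenvalue, not taken from the text.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])   # single eigenvalue lam = 3, multiplicity 2
lam = 3.0
v = np.array([1.0, 2.0])     # arbitrary initial vector, y(0) = v

def y(t):
    # y(t) = e^{lam t} [v + t (A - lam I) v]
    return np.exp(lam * t) * (v + t * ((A - lam * np.eye(2)) @ v))

# Cayley-Hamilton: (A - lam I)^2 = 0 for a repeated eigenvalue
N = A - lam * np.eye(2)
assert np.allclose(N @ N, 0)

# Check y'(t) = A y(t) with a centered finite difference at t = 0.7
t, h = 0.7, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
assert np.allclose(deriv, A @ y(t), atol=1e-4)
print("y' = Ay verified numerically")
```

The same check works for any 2 × 2 matrix whose characteristic polynomial is (λ - λ1)².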
2.37. The matrix

A = [-2 0; 0 -2]

has a single eigenvalue, λ = -2. However,

A - (-2)I = [0 0; 0 0],

so all nonzero vectors are eigenvectors. Choose e1 = (1, 0)^T and e2 = (0, 1)^T as eigenvectors. Then

y(t) = C1 e^{-2t} (1, 0)^T + C2 e^{-2t} (0, 1)^T

is the general solution.
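That every nonzero vector is an eigenvector of -2I is easy to confirm numerically; a minimal sketch:

```python
import numpy as np

# For A = -2I, A v = -2 v holds for every vector v, so any pair of
# independent vectors serves as the eigenvectors e1, e2.
A = np.array([[-2.0, 0.0],
              [0.0, -2.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(2)          # a random test vector
    assert np.allclose(A @ v, -2 * v)   # v is an eigenvector for -2
print("every nonzero vector is an eigenvector of -2I")
```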
2.38. The matrix

A = [-3 1; -1 -1]

has one eigenvalue, λ = -2. However, the nullspace of

A + 2I = [-1 1; -1 1]

is generated by a single eigenvector, v1 = (1, 1)^T, with corresponding solution

y1(t) = e^{-2t} (1, 1)^T.

To find another solution, we need to find a vector v2 which satisfies (A + 2I)v2 = v1. Choose w = (1, 0)^T, which is independent of v1, and note that

(A + 2I)w = [-1 1; -1 1] (1, 0)^T = (-1, -1)^T = -v1.

Thus, choose v2 = -w = (-1, 0)^T. Our second solution is

y2(t) = e^{-2t} (v2 + t v1) = e^{-2t} ((-1, 0)^T + t (1, 1)^T).

Thus, the general solution can be written

y(t) = C1 e^{-2t} (1, 1)^T + C2 e^{-2t} ((-1, 0)^T + t (1, 1)^T)
     = e^{-2t} ((C1 + C2 t)(1, 1)^T + C2 (-1, 0)^T).
2.39. The matrix

A = [3 -1; 1 1]

has one eigenvalue, λ = 2. However, the nullspace of

A - 2I = [1 -1; 1 -1]

is generated by the single eigenvector, v1 = (1, 1)^T, with corresponding solution

y1(t) = e^{2t} (1, 1)^T.

To find another solution, we need to find a vector v2 which satisfies (A - 2I)v2 = v1. Choose w = (1, 0)^T, which is independent of v1, and note that

(A - 2I)w = [1 -1; 1 -1] (1, 0)^T = (1, 1)^T = v1.

Thus, choose v2 = w = (1, 0)^T. Our second solution is

y2(t) = e^{2t} (v2 + t v1) = e^{2t} ((1, 0)^T + t (1, 1)^T).

Thus, the general solution can be written

y(t) = C1 e^{2t} (1, 1)^T + C2 e^{2t} ((1, 0)^T + t (1, 1)^T)
     = e^{2t} ((C1 + C2 t)(1, 1)^T + C2 (1, 0)^T).
2.40. The matrix

A = [-2 -1; 4 2]

has one eigenvalue, λ = 0. However, the nullspace of

A + 0I = [-2 -1; 4 2]

is generated by a single eigenvector, v1 = (1, -2)^T, with corresponding solution

y1(t) = e^{0t} (1, -2)^T = (1, -2)^T.

To find another solution, we need to find a vector v2 which satisfies (A + 0I)v2 = v1. Choose w = (1, 0)^T, which is independent of v1, and note that

(A + 0I)w = [-2 -1; 4 2] (1, 0)^T = (-2, 4)^T = -2 v1.

Thus, choose v2 = -(1/2)w = (-1/2, 0)^T. Our second solution is

y2(t) = e^{0t} (v2 + t v1) = (-1/2, 0)^T + t (1, -2)^T.

Thus, the general solution can be written

y(t) = C1 (1, -2)^T + C2 ((-1/2, 0)^T + t (1, -2)^T)
     = (C1 + C2 t)(1, -2)^T + C2 (-1/2, 0)^T.

2.41. The matrix
A = [-2 1; -9 4]

has one eigenvalue, λ = 1. However, the nullspace of

A - I = [-3 1; -9 3]

is generated by the single eigenvector, v1 = (1, 3)^T, with corresponding solution

y1(t) = e^{t} (1, 3)^T.

To find another solution, we need to find a vector v2 which satisfies (A - I)v2 = v1. Choose w = (1, 0)^T, which is independent of v1, and note that

(A - I)w = [-3 1; -9 3] (1, 0)^T = (-3, -9)^T = -3 v1.

Thus, choose v2 = -(1/3)w = (-1/3, 0)^T. Our second solution is

y2(t) = e^{t} (v2 + t v1) = e^{t} ((-1/3, 0)^T + t (1, 3)^T).

Thus, the general solution can be written

y(t) = C1 e^{t} (1, 3)^T + C2 e^{t} ((-1/3, 0)^T + t (1, 3)^T)
     = e^{t} ((C1 + C2 t)(1, 3)^T + C2 (-1/3, 0)^T).

2.42. The matrix

A = [5 1; -4 1]
has one eigenvalue, λ = 3. However, the nullspace of

A - 3I = [2 1; -4 -2]

is generated by a single eigenvector, v1 = (1, -2)^T, with corresponding solution

y1(t) = e^{3t} (1, -2)^T.

To find another solution, we need to find a vector v2 which satisfies (A - 3I)v2 = v1. Choose w = (1, 0)^T, which is independent of v1, and note that

(A - 3I)w = [2 1; -4 -2] (1, 0)^T = (2, -4)^T = 2 v1.

Thus, choose v2 = (1/2)w = (1/2, 0)^T. Our second solution is

y2(t) = e^{3t} (v2 + t v1) = e^{3t} ((1/2, 0)^T + t (1, -2)^T).

Thus, the general solution can be written

y(t) = C1 e^{3t} (1, -2)^T + C2 e^{3t} ((1/2, 0)^T + t (1, -2)^T)
     = e^{3t} ((C1 + C2 t)(1, -2)^T + C2 (1/2, 0)^T).
2.43. From Exercise 37,

y(t) = C1 e^{-2t} (1, 0)^T + C2 e^{-2t} (0, 1)^T.

If y(0) = (3, -2)^T, then

(3, -2)^T = C1 (1, 0)^T + C2 (0, 1)^T,

and C1 = 3 and C2 = -2. Thus, the particular solution is

y(t) = 3e^{-2t} (1, 0)^T - 2e^{-2t} (0, 1)^T = e^{-2t} (3, -2)^T.

2.44. From Exercise 38,
y(t) = e^{-2t} ((C1 + C2 t)(1, 1)^T + C2 (-1, 0)^T).

If y(0) = (0, -3)^T, then

(0, -3)^T = C1 (1, 1)^T + C2 (-1, 0)^T.

The augmented matrix reduces,

[1 -1 | 0; 1 0 | -3] → [1 0 | -3; 0 1 | -3],

and C1 = -3 and C2 = -3. Thus, the particular solution is

y(t) = e^{-2t} ((-3 - 3t)(1, 1)^T - 3(-1, 0)^T) = e^{-2t} (-3t, -3 - 3t)^T.

2.45. From Exercise 39,
y(t) = e^{2t} ((C1 + C2 t)(1, 1)^T + C2 (1, 0)^T).

If y(0) = (2, -1)^T, then

(2, -1)^T = C1 (1, 1)^T + C2 (1, 0)^T.

The augmented matrix reduces,

[1 1 | 2; 1 0 | -1] → [1 0 | -1; 0 1 | 3],

and C1 = -1 and C2 = 3. Thus, the particular solution is

y(t) = e^{2t} ((-1 + 3t)(1, 1)^T + 3(1, 0)^T) = e^{2t} (2 + 3t, -1 + 3t)^T.

2.46. From Exercise 40,
y(t) = (C1 + C2 t)(1, -2)^T + C2 (-1/2, 0)^T.

If y(0) = (1, 1)^T, then

(1, 1)^T = C1 (1, -2)^T + C2 (-1/2, 0)^T.

The augmented matrix reduces,

[1 -1/2 | 1; -2 0 | 1] → [1 0 | -1/2; 0 1 | -3],

and C1 = -1/2 and C2 = -3. Thus, the particular solution is

y(t) = (-1/2 - 3t)(1, -2)^T - 3(-1/2, 0)^T = (1 - 3t, 1 + 6t)^T.

2.47. From Exercise 41,

y(t) = e^{t} ((C1 + C2 t)(1, 3)^T + C2 (-1/3, 0)^T).

If y(0) = (5, 3)^T, then

(5, 3)^T = C1 (1, 3)^T + C2 (-1/3, 0)^T.

The augmented matrix reduces,

[1 -1/3 | 5; 3 0 | 3] → [1 0 | 1; 0 1 | -12],

and C1 = 1 and C2 = -12. Thus, the particular solution is

y(t) = e^{t} ((1 - 12t)(1, 3)^T - 12(-1/3, 0)^T) = e^{t} (5 - 12t, 3 - 36t)^T.
2.48. From Exercise 42,

y(t) = e^{3t} ((C1 + C2 t)(1, -2)^T + C2 (1/2, 0)^T).

If y(0) = (0, 2)^T, then

(0, 2)^T = C1 (1, -2)^T + C2 (1/2, 0)^T.

The augmented matrix reduces,

[1 1/2 | 0; -2 0 | 2] → [1 0 | -1; 0 1 | 2],

and C1 = -1 and C2 = 2. Thus, the particular solution is

y(t) = e^{3t} ((-1 + 2t)(1, -2)^T + 2(1/2, 0)^T) = e^{3t} (2t, 2 - 4t)^T.
2.49. The matrix

A = [2 4; -1 6]

has characteristic polynomial p(λ) = λ² - 8λ + 16 and one eigenvalue, λ = 4. Moreover, the nullspace of

A - 4I = [-2 4; -1 2]

is generated by the single eigenvector, v1 = (2, 1)^T, with corresponding solution

y1(t) = e^{4t} (2, 1)^T.

To find another solution, we need to find a vector v2 which satisfies (A - 4I)v2 = v1. Choose w = (1, 0)^T, which is independent of v1, and note that

(A - 4I)w = [-2 4; -1 2] (1, 0)^T = (-2, -1)^T = -v1.

Thus, choose v2 = -w = (-1, 0)^T. Our second solution is

y2(t) = e^{4t} (v2 + t v1) = e^{4t} ((-1, 0)^T + t (2, 1)^T).

Thus, the general solution can be written

y(t) = C1 e^{4t} (2, 1)^T + C2 e^{4t} ((-1, 0)^T + t (2, 1)^T)
     = e^{4t} ((C1 + C2 t)(2, 1)^T + C2 (-1, 0)^T).
2.50. The matrix

A = [-8 -10; 5 7]

has characteristic polynomial p(λ) = λ² + λ - 6 with eigenvalues λ1 = -3 and λ2 = 2. The nullspace of

A + 3I = [-5 -10; 5 10]

is generated by the single eigenvector, v1 = (-2, 1)^T, with corresponding solution

y1(t) = e^{-3t} (-2, 1)^T.

The nullspace of

A - 2I = [-10 -10; 5 5]

is generated by the single eigenvector, v2 = (1, -1)^T, with corresponding solution

y2(t) = e^{2t} (1, -1)^T.

Thus, the general solution can be written

y(t) = C1 e^{-3t} (-2, 1)^T + C2 e^{2t} (1, -1)^T.
2.51. The matrix

A = [5 12; -4 -9]

has characteristic polynomial p(λ) = λ² + 4λ + 3 and eigenvalues λ1 = -1 and λ2 = -3. The nullspace of

A - (-1)I = [6 12; -4 -8]

is generated by the single eigenvector, v1 = (-2, 1)^T, with corresponding solution

y1(t) = e^{-t} (-2, 1)^T.

The nullspace of

A - (-3)I = [8 12; -4 -6]

is generated by the single eigenvector, v2 = (-3/2, 1)^T, with corresponding solution

y2(t) = e^{-3t} (-3/2, 1)^T.

Thus, the general solution can be written

y(t) = C1 e^{-t} (-2, 1)^T + C2 e^{-3t} (-3/2, 1)^T.

2.52. The matrix
A = [-6 1; 0 -6]

has repeated eigenvalue, λ = -6, but the nullspace of

A + 6I = [0 1; 0 0]

is generated by the single eigenvector, v1 = (1, 0)^T, with corresponding solution

y1(t) = e^{-6t} (1, 0)^T.

We need a vector v2 satisfying (A + 6I)v2 = v1. Choose w = (0, 1)^T, which is independent of v1, and note that

(A + 6I)w = [0 1; 0 0] (0, 1)^T = (1, 0)^T = v1.

Thus, choose v2 = (0, 1)^T, giving a second solution

y2(t) = e^{-6t} (v2 + t v1) = e^{-6t} ((0, 1)^T + t (1, 0)^T).

Thus, the general solution can be written

y(t) = C1 e^{-6t} (1, 0)^T + C2 e^{-6t} ((0, 1)^T + t (1, 0)^T)
     = e^{-6t} ((C1 + C2 t)(1, 0)^T + C2 (0, 1)^T).
A= −4
2 −5
2 has characteristic polynomial p(λ) = λ2 + 2λ + 2 and eigenvalues λ1 = −1 + i and λ2 = −1 − i . The
nullspace of
−3 − i −5
A − (−1 + i)I =
2
3−i
is generated by the single eigenvector, v1 = (5, −3 − i)T , with corresponding solution
5
.
−3 − i z(t) = e(−1+i)t
Breaking this solution into real and imaginary parts,
z(t) = e(−1+i)t 5
−3 − i 5
0
+i
−3
−1
5
0
0
5
cos t
− sin t
+ i cos t
+ i sin t
−3
−1
−1
−3
5 cos t
5 sin t
.
+ ie−t
−3 cos t + sin t
− cos t − 3 sin t = e−t (cos t + i sin t)
= e −t
= e −t Thus, the general solution is
y(t) = C1 e−t
2.54. 5 cos t
−3 cos t + sin t The matrix + C 2 e −t −6
−8 A= 5 sin t
.
− cos t − 3 sin t 4
has characteristic polynomial p(λ) = λ² + 4λ + 20 with eigenvalues -2 ± 4i. Because

A - (-2 + 4i)I = [-4 - 4i  4; -8  4 - 4i]

is singular, examination of the first row shows that v = (1, 1 + i)^T is an eigenvector. Thus,

z = e^{(-2+4i)t} (1, 1 + i)^T
  = e^{-2t} (cos 4t + i sin 4t) ((1, 1)^T + i (0, 1)^T)
  = e^{-2t} (cos 4t, cos 4t - sin 4t)^T + i e^{-2t} (sin 4t, cos 4t + sin 4t)^T

is a complex solution. The real and imaginary parts of z form a fundamental set of solutions that lead to the general solution

y(t) = e^{-2t} (C1 (cos 4t, cos 4t - sin 4t)^T + C2 (sin 4t, cos 4t + sin 4t)^T).

2.55. The matrix

A = [-10 4; -12 4]
has characteristic polynomial p(λ) = λ² + 6λ + 8 and eigenvalues λ1 = -4 and λ2 = -2. The nullspace of

A - (-4)I = [-6 4; -12 8]

is generated by the single eigenvector, v1 = (2, 3)^T, with corresponding solution

y1(t) = e^{-4t} (2, 3)^T.

The nullspace of

A - (-2)I = [-8 4; -12 6]

is generated by a single eigenvector, v2 = (1, 2)^T, with corresponding solution

y2(t) = e^{-2t} (1, 2)^T.

Thus, the general solution is

y(t) = C1 e^{-4t} (2, 3)^T + C2 e^{-2t} (1, 2)^T.

2.56. The matrix

A = [-1 5; -5 -1]
has characteristic polynomial p(λ) = λ² + 2λ + 26 with eigenvalues -1 ± 5i. Because

A - (-1 + 5i)I = [-5i 5; -5 -5i]

is singular, examination of the first row shows that v = (1, i)^T is an eigenvector. Thus,

z = e^{(-1+5i)t} (1, i)^T
  = e^{-t} (cos 5t + i sin 5t) ((1, 0)^T + i (0, 1)^T)
  = e^{-t} (cos 5t, -sin 5t)^T + i e^{-t} (sin 5t, cos 5t)^T

is a complex solution. The real and imaginary parts of z form a fundamental set of solutions that lead to the general solution

y(t) = e^{-t} (C1 (cos 5t, -sin 5t)^T + C2 (sin 5t, cos 5t)^T).
2.57. From Exercise 49, the general solution is

y(t) = e^{4t} ((C1 + C2 t)(2, 1)^T + C2 (-1, 0)^T).

Because y(0) = (3, 1)^T,

(3, 1)^T = C1 (2, 1)^T + C2 (-1, 0)^T.

Reduce the augmented matrix:

[2 -1 | 3; 1 0 | 1] → [1 0 | 1; 0 1 | -1].

Thus, C1 = 1 and C2 = -1 and the particular solution is

y(t) = e^{4t} ((1 - t)(2, 1)^T - (-1, 0)^T) = e^{4t} (3 - 2t, 1 - t)^T.
2.58. From Exercise 50, the general solution is

y(t) = C1 e^{-3t} (-2, 1)^T + C2 e^{2t} (1, -1)^T.

Because y(0) = (3, 1)^T,

(3, 1)^T = C1 (-2, 1)^T + C2 (1, -1)^T.

Reduce the augmented matrix:

[-2 1 | 3; 1 -1 | 1] → [1 0 | -4; 0 1 | -5].

Thus, C1 = -4 and C2 = -5 and the particular solution is

y(t) = -4e^{-3t} (-2, 1)^T - 5e^{2t} (1, -1)^T = (8e^{-3t} - 5e^{2t}, -4e^{-3t} + 5e^{2t})^T.
2.59. From Exercise 51, the general solution is

y(t) = C1 e^{-t} (-2, 1)^T + C2 e^{-3t} (-3/2, 1)^T.

Because y(0) = (1, 0)^T,

(1, 0)^T = C1 (-2, 1)^T + C2 (-3/2, 1)^T.

Reduce the augmented matrix:

[-2 -3/2 | 1; 1 1 | 0] → [1 0 | -2; 0 1 | 2].

Thus, C1 = -2 and C2 = 2 and the particular solution is

y(t) = -2e^{-t} (-2, 1)^T + 2e^{-3t} (-3/2, 1)^T = (4e^{-t} - 3e^{-3t}, -2e^{-t} + 2e^{-3t})^T.
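Each of these initial-value problems comes down to the same 2 × 2 linear solve for C1 and C2. A sketch of that step in numpy, using the eigenvectors and initial condition from the exercise above:

```python
import numpy as np

# Columns are the eigenvectors v1 = (-2, 1) and v2 = (-3/2, 1); the
# right-hand side is the initial condition y(0) = (1, 0).
V = np.array([[-2.0, -1.5],
              [1.0, 1.0]])
y0 = np.array([1.0, 0.0])

# One call replaces the row reduction done by hand
C = np.linalg.solve(V, y0)
print(C)   # C1 = -2 and C2 = 2
```

The same two lines work for every exercise in this block; only the eigenvector columns and the initial condition change.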
2.60. From Exercise 52, the general solution is

y(t) = e^{-6t} ((C1 + C2 t)(1, 0)^T + C2 (0, 1)^T).

Because y(0) = (1, 0)^T,

(1, 0)^T = C1 (1, 0)^T + C2 (0, 1)^T.

It is easy to see that C1 = 1 and C2 = 0, and the particular solution is

y(t) = e^{-6t} (1, 0)^T = (e^{-6t}, 0)^T.
2.61. From Exercise 53, the general solution is

y(t) = C1 e^{-t} (5 cos t, -3 cos t + sin t)^T + C2 e^{-t} (5 sin t, -cos t - 3 sin t)^T.

Because y(0) = (-3, 2)^T,

(-3, 2)^T = C1 (5, -3)^T + C2 (0, -1)^T.

Reduce the augmented matrix:

[5 0 | -3; -3 -1 | 2] → [1 0 | -3/5; 0 1 | -1/5].

Thus, C1 = -3/5 and C2 = -1/5 and the particular solution is

y(t) = -(3/5) e^{-t} (5 cos t, -3 cos t + sin t)^T - (1/5) e^{-t} (5 sin t, -cos t - 3 sin t)^T
     = e^{-t} (-3 cos t - sin t, 2 cos t)^T.

2.62.
From Exercise 54, the general solution is

y(t) = e^{-2t} (C1 (cos 4t, cos 4t - sin 4t)^T + C2 (sin 4t, cos 4t + sin 4t)^T).

Because y(0) = (4, 0)^T,

(4, 0)^T = C1 (1, 1)^T + C2 (0, 1)^T.

Reduce the augmented matrix:

[1 0 | 4; 1 1 | 0] → [1 0 | 4; 0 1 | -4].

Thus, C1 = 4 and C2 = -4 and the particular solution is

y(t) = e^{-2t} (4 (cos 4t, cos 4t - sin 4t)^T - 4 (sin 4t, cos 4t + sin 4t)^T)
     = e^{-2t} (4 cos 4t - 4 sin 4t, -8 sin 4t)^T.

2.63.
From Exercise 55, the general solution is

y(t) = C1 e^{-4t} (2, 3)^T + C2 e^{-2t} (1, 2)^T.

Because y(0) = (2, 1)^T,

(2, 1)^T = C1 (2, 3)^T + C2 (1, 2)^T.

Reduce the augmented matrix:

[2 1 | 2; 3 2 | 1] → [1 0 | 3; 0 1 | -4].

Thus, C1 = 3 and C2 = -4 and the particular solution is

y(t) = 3e^{-4t} (2, 3)^T - 4e^{-2t} (1, 2)^T = (6e^{-4t} - 4e^{-2t}, 9e^{-4t} - 8e^{-2t})^T.
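The eigenpairs used above can be cross-checked with numpy; a sketch for the Exercise 55 matrix:

```python
import numpy as np

# Eigenvalues and eigenvectors of the Exercise 55 matrix
A = np.array([[-10.0, 4.0],
              [-12.0, 4.0]])

evals, evecs = np.linalg.eig(A)
order = np.argsort(evals)              # sort so lam1 = -4 comes first
evals, evecs = evals[order], evecs[:, order]
assert np.allclose(evals, [-4.0, -2.0])

# Eigenvectors are determined only up to scale, so rescale the first
# components before comparing with v1 = (2, 3) and v2 = (1, 2)
v1 = evecs[:, 0] / evecs[0, 0] * 2
v2 = evecs[:, 1] / evecs[0, 1]
assert np.allclose(v1, [2.0, 3.0])
assert np.allclose(v2, [1.0, 2.0])
print("eigenpairs match the hand computation")
```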
2.64. From Exercise 56, the general solution is

y(t) = e^{-t} (C1 (cos 5t, -sin 5t)^T + C2 (sin 5t, cos 5t)^T).

Because y(0) = (5, 5)^T,

(5, 5)^T = C1 (1, 0)^T + C2 (0, 1)^T.

Thus, C1 = 5 and C2 = 5 and the particular solution is

y(t) = e^{-t} (5 (cos 5t, -sin 5t)^T + 5 (sin 5t, cos 5t)^T) = 5e^{-t} (cos 5t + sin 5t, cos 5t - sin 5t)^T.

2.65. (a) Let
(A - λI)² = [a b; c d]

and assume that (A - λI)² v = 0 for all v in R². Then

[a b; c d] (1, 0)^T = (a, c)^T = (0, 0)^T

and

[a b; c d] (0, 1)^T = (b, d)^T = (0, 0)^T.

Thus,

(A - λI)² = [a b; c d] = [0 0; 0 0],

so (A - λI)² = 0I.
Let v = α v1 be a multiple of the eigenvector v1 . Then
(A − λI )2 v = (A − λI )2 (α v1 )
= α(A − λI )2 v1
= α(A − λI )(Av1 − λv1 )
= α(A − λI )0
= α0
= 0.
(c) Now choose v in R² such that v is not a multiple of the eigenvector v1. Note that this means that v is not an eigenvector associated with the eigenvalue λ. The set B = {v, v1} is linearly independent and contains two vectors. Therefore, it must span all of R² and is a basis for R².
(d) Set w = (A − λI )v. Note that this means that w is nonzero, for otherwise v would be an eigenvector
associated with the eigenvalue λ. In part (c), we saw that B = {v, v1 } was a basis for R2 . Thus, B spans
R2 and we can ﬁnd a and b such that
w = a v1 + bv.
(e) From (d), w = (A − λI )v and w = a v1 + bv. Thus,
(A − λI )w = (A − λI )(a v1 + bv)
= a(A − λI )v1 + b(A − λI )v
= 0 + bw
= b w,
Hence, (A − λI )w = bw
Aw − λw = bw
Aw = (λ + b)w. Thus, w, being nonzero, is an eigenvector of A with eigenvalue λ + b. But, λ is the only eigenvalue, so
b must equal zero and w must be a multiple of v1 .
(f) Finally, because b = 0 and (A − λI )v = w,
(A − λI )2 v = (A − λI )(A − λI )v
= (A − λI )w
= bw
= 0.
Consequently, whether v is a multiple of v1 or not, (A − λI )2 v = 0. Since this is true for any arbitrary v
in R2 , by part (a), (A − λI )2 = 0I .
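The conclusion (A - λI)² = 0 is easy to confirm on any of the repeated-eigenvalue matrices from this section; a sketch using the matrix of Exercise 2.38, which has the single eigenvalue λ = -2:

```python
import numpy as np

# Exercise 2.65's conclusion, checked on the Exercise 2.38 matrix
A = np.array([[-3.0, 1.0],
              [-1.0, -1.0]])
lam = -2.0

N = A - lam * np.eye(2)
assert not np.allclose(N, 0)    # A is not a multiple of I ...
assert np.allclose(N @ N, 0)    # ... yet (A - lam I)^2 = 0
print("(A - lam I)^2 = 0, as the exercise predicts")
```

This is exactly the Cayley-Hamilton identity used earlier in Exercise 2.36.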
Section 3. Phase Plane Portraits

3.1. If

A = [-10 -25; 5 10],

then T = 0 and D = 25, leading to

p(λ) = λ² - Tλ + D = λ² + 25.

On the other hand,

p(λ) = det(A - λI)
     = det [-10 - λ  -25; 5  10 - λ]
     = (-10 - λ)(10 - λ) + 125
     = λ² + 25.
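The trace-determinant shortcut p(λ) = λ² - Tλ + D can be verified numerically, since the eigenvalues are the roots of p and therefore sum to T and multiply to D. A sketch for the matrix above:

```python
import numpy as np

# p(lambda) = lambda^2 - T*lambda + D for a 2x2 matrix, checked on 3.1
A = np.array([[-10.0, -25.0],
              [5.0, 10.0]])

T = np.trace(A)
D = np.linalg.det(A)
assert np.isclose(T, 0.0) and np.isclose(D, 25.0)

# The roots of p sum to T and multiply to D
evals = np.linalg.eigvals(A)      # +5i and -5i here
assert np.isclose(evals.sum(), T)
assert np.isclose(np.prod(evals), D)
print("p(lambda) = lambda^2 + 25 has roots +/- 5i")
```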
3.2. If

A = [0 5; -1 4],

then T = 4 and D = 5, leading to

p(λ) = λ² - Tλ + D = λ² - 4λ + 5.

On the other hand,

p(λ) = det(A - λI)
     = det [-λ  5; -1  4 - λ]
     = -λ(4 - λ) + 5
     = λ² - 4λ + 5.
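The trace and determinant alone classify the equilibrium at the origin, which is the theme of the exercises that follow. A sketch of that classification logic (the names follow the text; boundary cases such as T² = 4D are lumped together here):

```python
import numpy as np

# Trace-determinant classification of the origin for y' = Ay (a sketch;
# degenerate boundary cases are reported with a single catch-all label).
def classify(A):
    T, D = np.trace(A), np.linalg.det(A)
    disc = T * T - 4 * D
    if D < 0:
        return "saddle"
    if np.isclose(T, 0) and D > 0:
        return "center"
    if disc < 0:
        return "spiral source" if T > 0 else "spiral sink"
    if disc > 0:
        return "nodal source" if T > 0 else "nodal sink"
    return "degenerate/boundary case"

# The matrices of Exercises 3.2 and 3.1
assert classify(np.array([[0.0, 5.0], [-1.0, 4.0]])) == "spiral source"
assert classify(np.array([[-10.0, -25.0], [5.0, 10.0]])) == "center"
print("classification matches the text")
```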
3.3. [Sketch: the points 1.2(1, 1)^T, 1.2e^{0.6}(1, 1)^T, and 1.2e^{1.2}(1, 1)^T on the half-line solution through (1, 1)^T in the y1-y2 plane.]
3.4. [Sketch: the points -0.8(-2, 1)^T, -0.8e^{0.6}(-2, 1)^T, and -0.8e^{1.2}(-2, 1)^T on the half-line solution through -(-2, 1)^T.]
3.5. [Sketch: the points 0.8(4, 4)^T, 0.8e^{-0.6}(4, 4)^T, and 0.8e^{-1.2}(4, 4)^T on the half-line solution through (4, 4)^T.]
3.6. [Sketch: the points -1.2(4, -4)^T, -1.2e^{-0.6}(4, -4)^T, and -1.2e^{-1.2}(4, -4)^T on the half-line solution through -(4, -4)^T.]
3.7. [Sketch only.]
3.8. [Sketch only.]
3.9. [Sketch only.]
3.10. Both eigenvalues are negative, so the equilibrium point at the origin is a sink. Solutions dive toward the origin tangent to the slow exponential solution, e^{-t}(2, 1)^T. As solutions move backward in time, they eventually parallel the fast exponential solution, e^{-2t}(-1, 1)^T.

3.11. Both eigenvalues are positive, so the equilibrium point at the origin is a source. Solutions emanate from the origin tangent to the slow exponential solution, e^{t}(-1, 2)^T, eventually paralleling the fast exponential solution, e^{2t}(3, -1)^T.

3.12. One eigenvalue is positive, the other negative, so the equilibrium point at the origin is a saddle. As t → +∞, solutions parallel the exponential solution e^{t}(1, 1)^T. As t → -∞, solutions parallel the exponential solution e^{-2t}(1, -1)^T.

3.13. Both eigenvalues are negative, so the equilibrium point at the origin is a sink. Solutions dive toward the origin tangent to the slow exponential solution, e^{-t}(1, 2)^T. As solutions move backward in time, they eventually parallel the fast exponential solution, e^{-3t}(-4, 1)^T.

3.14. One eigenvalue is positive, the other negative, so the equilibrium point at the origin is a saddle. As t → +∞, solutions parallel the exponential solution e^{2t}(-1, 4)^T. As t → -∞, solutions parallel the exponential solution e^{-t}(-5, 2)^T.

3.15. Both eigenvalues are positive, so the equilibrium point at the origin is a source. Solutions emanate from the origin tangent to the slow exponential solution, e^{t}(1, 5)^T, eventually paralleling the fast exponential solution, e^{3t}(4, 1)^T.

3.16. Matrix
A = [-4 8; -4 4]

has trace T = 0 and determinant D = 16. Thus, the characteristic polynomial is

p(λ) = λ² - Tλ + D = λ² + 16,

which produces eigenvalues λ1 = -4i and λ2 = 4i. Because the real part of these eigenvalues is zero, the equilibrium point at the origin is a center. At (1, 0),

[-4 8; -4 4] (1, 0)^T = (-4, -4)^T.

Thus, the motion is clockwise. [Hand sketch and numerically drawn phase portrait omitted.]

3.17. Matrix
A = [0 3; -3 0]

has trace T = 0 and determinant D = 9. Thus, the characteristic polynomial is

p(λ) = λ² - Tλ + D = λ² + 9,

which produces eigenvalues λ1 = 3i and λ2 = -3i. Because the real part of these eigenvalues is zero, the equilibrium point at the origin is a center. At (1, 0), A(1, 0)^T = (0, -3)^T, so the motion is clockwise. [Hand sketch and numerically drawn phase portrait omitted.]

3.18. Matrix

A = [2 2; -4 -2]

has trace T = 0 and determinant D = 4. Thus, the characteristic polynomial is p(λ) = λ² + 4, which produces eigenvalues λ1 = 2i and λ2 = -2i. Because the real part of these eigenvalues is zero, the equilibrium point at the origin is a center. At (1, 0), A(1, 0)^T = (2, -4)^T, so the motion is clockwise. [Figures omitted.]

3.19. Matrix

A = [0 1; -4 0]

has trace T = 0 and determinant D = 4. Thus, the characteristic polynomial is p(λ) = λ² + 4, which produces eigenvalues λ1 = 2i and λ2 = -2i, so the equilibrium point at the origin is a center. At (1, 0), A(1, 0)^T = (0, -4)^T, so the motion is clockwise. [Figures omitted.]

3.20. Matrix

A = [-2 2; -1 0]

has trace T = -2 and determinant D = 2. Thus, the characteristic polynomial is p(λ) = λ² + 2λ + 2, which produces eigenvalues λ1 = -1 + i and λ2 = -1 - i. Because the real part of the eigenvalues is negative, the equilibrium point at the origin is a spiral sink. At (1, 0), A(1, 0)^T = (-2, -1)^T, so the motion is clockwise. [Figures omitted.]

3.21. Matrix

A = [-1 1; -5 3]

has trace T = 2 and determinant D = 2. Thus, the characteristic polynomial is p(λ) = λ² - 2λ + 2, which produces eigenvalues λ1 = 1 + i and λ2 = 1 - i. Because the real part of the eigenvalues is positive, the equilibrium point at the origin is a spiral source. At (1, 0), A(1, 0)^T = (-1, -5)^T, so the motion is clockwise. [Figures omitted.]

3.22. Matrix

A = [7 -10; 4 -5]

has trace T = 2 and determinant D = 5. Thus, the characteristic polynomial is p(λ) = λ² - 2λ + 5, which produces eigenvalues λ1 = 1 + 2i and λ2 = 1 - 2i. Because the real part of the eigenvalues is positive, the equilibrium point at the origin is a spiral source. At (1, 0), A(1, 0)^T = (7, 4)^T, so the motion is counterclockwise. [Figures omitted.]

3.23. Matrix

A = [-3 2; -4 1]

has trace T = -2 and determinant D = 5. Thus, the characteristic polynomial is p(λ) = λ² + 2λ + 5, which produces eigenvalues λ1 = -1 + 2i and λ2 = -1 - 2i. Because the real part of the eigenvalues is negative, the equilibrium point at the origin is a spiral sink. At (1, 0), A(1, 0)^T = (-3, -4)^T, so the motion is clockwise. [Figures omitted.]

3.24. If

A = [8 20; -4 -8],

then the trace is T = 0 and the determinant is D = 16. Further, the characteristic polynomial is p(λ) = λ² + 16, which produces eigenvalues λ1 = 4i and λ2 = -4i. Therefore, the equilibrium point at the origin is a center. At (1, 0), A(1, 0)^T = (8, -4)^T, so the motion is clockwise. [Figures omitted.]

3.25. If

A = [-16 9; -18 11],

then the trace is T = -5 and the determinant is D = -14 < 0. Hence, the equilibrium point at the origin is a saddle. Further, the characteristic polynomial is

p(λ) = λ² - Tλ + D = λ² + 5λ - 14,

which produces eigenvalues λ1 = -7 and λ2 = 2. Because

A + 7I = [-9 9; -18 18],

v1 = (1, 1)^T, leading to the exponential solution e^{-7t}(1, 1)^T. Because

A - 2I = [-18 9; -18 9],

v2 = (1, 2)^T, leading to the exponential solution e^{2t}(1, 2)^T. Thus, the general solution is

y(t) = C1 e^{-7t} (1, 1)^T + C2 e^{2t} (1, 2)^T.

Solutions approach the half-line generated by C2 (1, 2)^T as they move forward in time, but they approach the half-line generated by C1 (1, 1)^T as they move backward in time. [Figures omitted.]

3.26. If
A = [2 -4; 8 -6],

then the trace is T = -4 and the determinant is D = 20. Further, T² - 4D = (-4)² - 4(20) = -64 < 0, so the equilibrium point at the origin is a spiral sink. At (1, 0), A(1, 0)^T = (2, 8)^T, so the motion is counterclockwise. [Figures omitted.]

3.27. If

A = [8 3; -6 -1],

then the trace is T = 7 and the determinant is D = 10 > 0. Further, T² - 4D = (7)² - 4(10) = 9 > 0, so the equilibrium point at the origin is a nodal source. The characteristic polynomial is

p(λ) = λ² - Tλ + D = λ² - 7λ + 10,

which produces eigenvalues λ1 = 2 and λ2 = 5. Because

A - 2I = [6 3; -6 -3],

v1 = (1, -2)^T, leading to the exponential solution e^{2t}(1, -2)^T. Because

A - 5I = [3 3; -6 -6],

v2 = (1, -1)^T, leading to the exponential solution e^{5t}(1, -1)^T. Thus, the general solution is

y(t) = C1 e^{2t} (1, -2)^T + C2 e^{5t} (1, -1)^T.

Solutions emanate from the source tangent to the "slow" half-line solution generated by C1 (1, -2)^T and eventually parallel the "fast" half-line generated by C2 (1, -1)^T as they move forward in time. [Figures omitted.]

3.28. If

A = [-11 -5; 10 4],

then the trace is T = -7 and the determinant is D = 6. Further, T² - 4D = (-7)² - 4(6) = 25 > 0, so the equilibrium point at the origin is a nodal sink. The characteristic polynomial is

p(λ) = λ² - Tλ + D = λ² + 7λ + 6,

which produces eigenvalues λ1 = -1 and λ2 = -6. Because

A + I = [-10 -5; 10 5],

v1 = (1, -2)^T, leading to the exponential solution e^{-t}(1, -2)^T. Because

A + 6I = [-5 -5; 10 10],

v2 = (1, -1)^T, leading to the exponential solution e^{-6t}(1, -1)^T. Thus, the general solution is

y(t) = C1 e^{-t} (1, -2)^T + C2 e^{-6t} (1, -1)^T.

Solutions approach the origin tangent to the "slow" half-line solution generated by C1 (1, -2)^T. As time moves backwards, solutions eventually parallel the half-line generated by C2 (1, -1)^T, the "fast" solution. [Figures omitted.]

3.29. If

A = [6 -5; 10 -4],

then the trace is T = 2 and the determinant is D = 26 > 0. Further, T² - 4D = (2)² - 4(26) = -100 < 0, so the equilibrium point at the origin is a spiral source. At (1, 0), A(1, 0)^T = (6, 10)^T, so the motion is counterclockwise. [Figures omitted.]

3.30. If

A = [-7 10; -5 8],

then the trace is T = 1 and the determinant is D = -6, so the origin is a saddle point. The characteristic polynomial is

p(λ) = λ² - Tλ + D = λ² - λ - 6,

which produces eigenvalues λ1 = -2 and λ2 = 3. Because

A + 2I = [-5 10; -5 10],

v1 = (2, 1)^T, leading to the exponential solution e^{-2t}(2, 1)^T. Because

A - 3I = [-10 10; -5 5],

v2 = (1, 1)^T, leading to the exponential solution e^{3t}(1, 1)^T. Thus, the general solution is

y(t) = C1 e^{-2t} (2, 1)^T + C2 e^{3t} (1, 1)^T.

Solutions approach the half-line generated by C2 (1, 1)^T as they move forward in time, but they approach the half-line generated by C1 (2, 1)^T as they move backward in time. [Figures omitted.]

3.31. If

A = [4 3; -15 -8],

then the trace is T = -4 and the determinant is D = 13 > 0. Further, T² - 4D = (-4)² - 4(13) = -36 < 0, so the equilibrium point at the origin is a spiral sink. At (1, 0), A(1, 0)^T = (4, -15)^T, so the motion is clockwise. [Figures omitted.]

3.32. If
A = [3 2; -4 -1],

then the trace is T = 2, the determinant is D = 5, and the discriminant is T² - 4D = (2)² - 4(5) = -16 < 0. Thus, the origin is a spiral source. At (1, 0), A(1, 0)^T = (3, -4)^T, so the motion is clockwise. [Figures omitted.]

3.33. If

A = [-5 2; -6 2],

then the trace is T = -3 and the determinant is D = 2 > 0. Further, T² - 4D = (-3)² - 4(2) = 1 > 0, so the equilibrium point at the origin is a nodal sink. The characteristic polynomial is

p(λ) = λ² - Tλ + D = λ² + 3λ + 2,

which produces eigenvalues λ1 = -1 and λ2 = -2. Because

A + I = [-4 2; -6 3],

v1 = (1, 2)^T, leading to the exponential solution e^{-t}(1, 2)^T. Because

A + 2I = [-3 2; -6 4],

v2 = (2, 3)^T, leading to the exponential solution e^{-2t}(2, 3)^T. Thus, the general solution is

y(t) = C1 e^{-t} (1, 2)^T + C2 e^{-2t} (2, 3)^T.

Solutions sink into the origin tangent to the "slow" half-line solution generated by C1 (1, 2)^T and eventually parallel the "fast" half-line generated by C2 (2, 3)^T as they move backward in time. [Figures omitted.]

3.34. If

A = [-4 10; -2 4],

then the trace is T = 0 and the determinant is D = 4, so the origin is a center. At (1, 0), A(1, 0)^T = (-4, -2)^T, so the rotation is clockwise. [Figures omitted.]

3.35. If

A = [-2 -6; 4 8],

then the trace is T = 6 and the determinant is D = 8 > 0. Further, T² - 4D = (6)² - 4(8) = 4 > 0, so the equilibrium point at the origin is a nodal source. The characteristic polynomial is

p(λ) = λ² - Tλ + D = λ² - 6λ + 8,

which produces eigenvalues λ1 = 2 and λ2 = 4. Because

A - 2I = [-4 -6; 4 6],

v1 = (3, -2)^T, leading to the exponential solution e^{2t}(3, -2)^T. Because

A - 4I = [-6 -6; 4 4],

v2 = (1, -1)^T, leading to the exponential solution e^{4t}(1, -1)^T. Thus, the general solution is

y(t) = C1 e^{2t} (3, -2)^T + C2 e^{4t} (1, -1)^T.

Solutions emanate from the source tangent to the "slow" half-line solution generated by C1 (3, -2)^T and eventually parallel the "fast" half-line generated by C2 (1, -1)^T as they move forward in time. [Figures omitted.]

3.36. (a) For
A = [1 4; -1 -3]

we have T = tr(A) = -2 and D = det(A) = 1. Since the discriminant T² - 4D = 0, the point (T, D) lies on the parabola that divides nodal sinks from spiral sinks in the trace-determinant plane.
(b) The general solution can be written

y(t) = e^{-t} ((C1 + C2 t)(2, -1)^T + C2 (0, 1/2)^T).

Because te^{-t} → 0 as t → ∞ (use l'Hôpital's rule), both e^{-t}(C1 + C2 t)(2, -1)^T → 0 and C2 e^{-t}(0, 1/2)^T → 0 as t → ∞. However, the first term is larger for large values of t. Thus, as t → ∞, y(t) ≈ e^{-t}(C1 + C2 t)(2, -1)^T, which implies that solutions approach the origin tangent to the half-line generated by (2, -1)^T. In a similar manner, as t → -∞, the term e^{-t}(C1 + C2 t)(2, -1)^T is larger than the term C2 e^{-t}(0, 1/2)^T, so solutions eventually parallel the half-line generated by (2, -1)^T as time moves backwards.
(c) The following figure shows the half-line solutions and one other solution in each sector. The solutions clearly exhibit the behavior predicted in part (b). [Figure omitted.]
exhibit the behavior predicted in part (a). y 5 0 −5
−5 3.37. 3.38. 0
3.37. (a) There is one exponential solution, e^{λt} v1. Because λ < 0, this solution decays to the equilibrium point at the origin along the half-line generated by C v1.
(b) The general solution is
y(t) = eλt [(C1 + C2 t)v1 + C2 v2 ].
Because λ < 0, the terms eλt (C1 + C2 t)v1 and C2 eλt v2 both decay to zero. However, the ﬁrst term is
larger for large values of t . Thus, as t → ∞, y(t) ≈ eλt (C1 + C2 t)v1 , which implies that the solution
approaches zero tangent to the halﬂine generated by C2 v1 .
(c) Because λ < 0, the terms eλt (C1 + C2 t)v1 and C2 eλt v2 get inﬁnitely large in magnitude as t → −∞.
However, the ﬁrst term is larger in magnitude for negative values of t that are large in magnitude. Thus,
as t → -∞, y(t) ≈ e^{λt}(C1 + C2 t)v1, which implies that the solution eventually parallels the half-line generated by -C2 v1.
(d) Degenerate nodal sink.
3.38. In general everything moves in the opposite direction in comparison to the situation in Exercise 37.
(a) As t → ∞ the exponential solution tends to ∞ along the halfline generated by C1 v1 .
(b) As t → ∞ the general solution tends to ∞ and becomes parallel to the halfline generated by C2 v1 .
(c) As t → -∞ the general solution tends to 0 tangent to the half-line generated by -C2 v1.
3.39. The origin is a degenerate nodal source. Because the linear degenerate nodal sources and sinks have only one eigenvalue, and because the eigenvalues
are given by

    λ1, λ2 = (T ± √(T² − 4D)) / 2,

we must have T² − 4D = 0. Therefore, the degenerate nodal sources and sinks lie on the parabola T² − 4D = 0
in the trace-determinant plane. This positioning on the boundary between the nodal sinks and sources and the
spiral sinks and sources is significant. The solutions attempt to spiral, but they cannot: the presence of the
halfline solutions prevents them from spiraling (solutions cannot cross).
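As a quick numerical sketch of this classification (the helper name `classify` and its labels are our own, not from the text, and the borderline cases with D = 0 are ignored), one can test the sign of the discriminant T² − 4D directly:

```python
import numpy as np

# A sketch of the trace-determinant test for the planar system y' = Ay.
# Assumes numpy; `classify` is our own illustrative helper, not the book's.
def classify(A):
    T, D = np.trace(A), np.linalg.det(A)
    disc = T * T - 4 * D
    if np.isclose(disc, 0) and D > 0:
        # On the parabola T^2 - 4D = 0: degenerate (or star) nodes.
        return "degenerate source" if T > 0 else "degenerate sink"
    if disc < 0:
        return "spiral source" if T > 0 else "spiral sink"
    if D < 0:
        return "saddle"
    return "nodal source" if T > 0 else "nodal sink"

# The matrices of Exercises 3.40 and 3.41 both sit on the parabola.
assert classify(np.array([[6.0, 4.0], [-1.0, 2.0]])) == "degenerate source"
assert classify(np.array([[-4.0, -4.0], [1.0, 0.0]])) == "degenerate sink"
```

The `np.isclose` guard matters here: on the parabola the computed discriminant is only zero up to floating-point roundoff.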
3.40. If y′ = Ay, where

    A = [6 4; −1 2],

then the trace is T = 8 and the determinant is D = 16. Further, T² − 4D = 8² − 4(16) = 0, so this system
lies on the parabola T² − 4D = 0 that separates spiral sources and sinks from nodal sources and sinks in the
trace-determinant plane. Thus, the equilibrium point at the origin is a degenerate nodal source (T = 8).
The characteristic equation is

    p(λ) = λ² − Tλ + D = λ² − 8λ + 16,

which produces a single eigenvalue λ = 4. Because

    A − 4I = [2 4; −1 −2],

v1 = (2, −1)^T and we have the exponential solution e^{4t}(2, −1)^T. To find another solution, we must solve
(A − λI)v2 = v1. Start with any vector that is not a multiple of v1, say w = (1, 0)^T. Then

    (A − 4I)w = [2 4; −1 −2](1, 0)^T = (2, −1)^T = v1.

Thus, let v2 = w = (1, 0)^T, and a second, independent solution is

    e^{4t}(v2 + t v1) = e^{4t}[(1, 0)^T + t(2, −1)^T],

and the general solution is

    y(t) = C1 e^{4t}(2, −1)^T + C2 e^{4t}[(1, 0)^T + t(2, −1)^T]
         = e^{4t}[(C1 + C2 t)(2, −1)^T + C2 (1, 0)^T].

We know that solutions must emanate from the origin parallel to the halflines generated by C1(2, −1)^T. Not
only that, the solutions must also turn parallel to the halflines as time marches forward. At (1, 0),

    A(1, 0)^T = [6 4; −1 2](1, 0)^T = (6, −1)^T,

so the rotation is clockwise. (Hand sketch and numerically drawn phase portrait omitted.)
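As a quick numerical check of the computation above (numpy assumed; not part of the original solution):

```python
import numpy as np

# Exercise 3.40: A has the repeated eigenvalue lambda = 4 with eigenvector
# v1 = (2, -1)^T, and w = (1, 0)^T solves (A - 4I)w = v1.
A = np.array([[6.0, 4.0], [-1.0, 2.0]])
v1 = np.array([2.0, -1.0])
w = np.array([1.0, 0.0])

assert np.allclose(np.linalg.eigvals(A), [4.0, 4.0])   # repeated eigenvalue
assert np.allclose(A @ v1, 4.0 * v1)                   # A v1 = 4 v1
assert np.allclose((A - 4.0 * np.eye(2)) @ w, v1)      # (A - 4I) w = v1
```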
3.41. If y′ = Ay, where

    A = [−4 −4; 1 0],

then the trace is T = −4 and the determinant is D = 4. Further, T² − 4D = (−4)² − 4(4) = 0, so this system
lies on the parabola T² − 4D = 0 that separates the spiral sources and sinks from the nodal sources and sinks
in the trace-determinant plane. Thus, the equilibrium point at the origin is a degenerate nodal sink (T = −4).
The characteristic equation is

    p(λ) = λ² − Tλ + D = λ² + 4λ + 4,

which produces the single eigenvalue λ = −2. Because

    A + 2I = [−2 −4; 1 2],

v1 = (2, −1)^T and we have the exponential solution e^{−2t}(2, −1)^T. To find another solution, we must solve
(A − λI)v2 = v1. Start with any vector that is not a multiple of v1, say w = (1, 0)^T. Then

    (A + 2I)w = [−2 −4; 1 2](1, 0)^T = (−2, 1)^T = −v1.

Thus, let v2 = −w = (−1, 0)^T, and a second, independent solution is

    e^{−2t}(v2 + t v1) = e^{−2t}[(−1, 0)^T + t(2, −1)^T],

and the general solution is

    y(t) = C1 e^{−2t}(2, −1)^T + C2 e^{−2t}[(−1, 0)^T + t(2, −1)^T]
         = e^{−2t}[(C1 + C2 t)(2, −1)^T + C2 (−1, 0)^T].

We know that solutions must decay to the origin parallel to the halflines generated by C1(2, −1)^T. Not only
that, the solutions must also turn parallel to the halflines as time marches backward. We need only find
whether the rotation is clockwise or counterclockwise. But

    (A + 2I)(1, 0)^T = (−2, 1)^T,

so the rotation is counterclockwise. (Hand sketch and numerically drawn phase portrait omitted.)

3.42. (a) In matrix form, the system
    x′ = x + ay
    y′ = x + y

is written

    (x, y)′ = [1 a; 1 1](x, y)^T.

The trace of the coefficient matrix is T = 2 and the determinant is D = 1 − a. The discriminant is
T² − 4D = (2)² − 4(1 − a) = 4a.
If the origin is a nodal source, then we must have D > 0 and T² − 4D > 0. Thus,

    1 − a > 0   and   4a > 0.

This leads to the requirement 0 < a < 1.
(b) Let

    A = [1 a; 1 1].

In the case that 0 < a < 1,

    p(λ) = λ² − Tλ + D = λ² − 2λ + (1 − a).

The quadratic formula reveals the eigenvalues λ1 = 1 + √a and λ2 = 1 − √a. Because

    A − λI = [1−λ a; 1 1−λ],

v = (λ − 1, 1)^T is the eigenvector associated with λ. If λ1 = 1 + √a, then v1 = (√a, 1)^T is its associated
eigenvector. If λ2 = 1 − √a, then v2 = (−√a, 1)^T is its associated eigenvector. Thus, the equations of the
halfline solutions are y = ±x/√a. As a → 0, the halfline solutions coalesce into one halfline solution, which
lies on the y-axis with equation x = 0.
(c) When a = 0, T = 2 and D = 1. Moreover, T² − 4D = (2)² − 4(1) = 0, and we lie on the parabola
T² − 4D = 0 in the trace-determinant plane. By part (b), the eigenvalues and eigenvectors coalesce, and
we have a degenerate nodal source. If a < 0, then T² − 4D = 4a < 0, and we move above the parabola
T² − 4D = 0 into the land of spiral sources.
3.43. (a) If y′ = By, where

    B = [2 0; 0 2],

then B has the single eigenvalue λ = 2 and all vectors in R² are eigenvectors. Thus, e^{2t}(a, b)^T is an
exponential solution for all (a, b)^T ∈ R². Moreover, these solutions increase to infinity along the halflines
generated by C(a, b)^T. If y′ = Cy, where

    C = [−2 0; 0 −2],

then the eigenvalue is −2, making e^{−2t}(a, b)^T an exponential solution for all (a, b)^T ∈ R². Thus, the
phase portrait is identical to the first graph, only with time reversed. Solutions now decay to zero along
the halflines generated by C(a, b)^T.
(b) The system y′ = By has a star source at the origin, but the system y′ = Cy has a star sink.
(c) Because

    B = [2 0; 0 2],

the trace is T = 4 and the determinant is D = 4. Further, T² − 4D = (4)² − 4(4) = 0, so this case lives
on the parabola T² − 4D = 0 in the trace-determinant plane. Moreover, because T = 4, it lives on the
right half of the parabola, nestled in the land of sources. In the case of

    C = [−2 0; 0 −2],

this case also lives on the parabola T² − 4D = 0, but because T = −4, it lives on the left half, in the
land of the sinks.
(d) If y′ = Ay, where

    A = [a 0; 0 a],

then the trace is T = 2a and the determinant is D = a². Further, T² − 4D = (2a)² − 4a² = 0, placing
the star sinks and sources on the parabola T² − 4D = 0 in the trace-determinant plane. If a > 0, then
T = 2a > 0, placing it on the right half of the parabola, making the equilibrium point at the origin a star
source. A similar argument shows that if a < 0, then the equilibrium point is a star sink.
3.44. Let A be a 2 × 2 matrix with real entries. If D = det(A) = 0, then the characteristic polynomial becomes

    p(λ) = λ² − Tλ + D = λ² − Tλ = λ(λ − T).

Thus, λ = 0 is an eigenvalue. On the other hand, if one eigenvalue is λ = 0, then λ must be a factor of the
characteristic polynomial λ² − Tλ + D. This can happen only if D = 0.
3.45. (1) If

    A = [2 1; −10 −5],

then the trace is T = −3 and the determinant is D = 0. Thus, this degenerate case lies on the horizontal
axis of the trace-determinant plane, separating the saddles from the nodal sinks.
(2) To find the equilibrium points, we set the right-hand side of y′ = Ay equal to zero, as in Ay = 0.
Consequently, the equilibrium points are simply the nullspace of A, which is generated by a single
vector, v1 = (1, −2)^T. Thus, we have a whole line of equilibrium points: everything on the line
y = −2x is an equilibrium point.
(3) The characteristic polynomial is

    p(λ) = λ² − Tλ + D = λ² + 3λ,

which produces eigenvalues λ1 = 0 and λ2 = −3. Because

    A − 0I = A = [2 1; −10 −5],

the eigenvector is v1 = (1, −2)^T, the same vector that generates the line of equilibrium points. Because

    A + 3I = [5 1; −10 −2],

v2 = (1, −5)^T. Thus, the general solution is

    y(t) = C1 e^{0t}(1, −2)^T + C2 e^{−3t}(1, −5)^T,

or

    y(t) = C1 (1, −2)^T + C2 e^{−3t}(1, −5)^T.

Note that each solution in this family is the sum of a fixed multiple of (1, −2)^T and a decaying multiple
of (1, −5)^T. Thus, as t → ∞, solutions move along lines parallel to (1, −5)^T, decaying into the line of
equilibrium points. (Figure and numerical phase portrait omitted.)
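As a quick numerical check of Exercise 3.45 (numpy assumed; not part of the original solution), the singular matrix indeed has the zero eigenvalue promised by Exercise 3.44:

```python
import numpy as np

# Exercise 3.45: det A = 0 forces a zero eigenvalue; the nullspace (the line
# of equilibrium points) is spanned by (1, -2)^T, and (1, -5)^T pairs with -3.
A = np.array([[2.0, 1.0], [-10.0, -5.0]])

assert np.isclose(np.linalg.det(A), 0.0)
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [-3.0, 0.0])
assert np.allclose(A @ np.array([1.0, -2.0]), 0.0)
assert np.allclose(A @ np.array([1.0, -5.0]), -3.0 * np.array([1.0, -5.0]))
```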
3.46. (1) If

    A = [8 4; −10 −5],

then the trace is T = 3 and the determinant is D = 0. Thus, this degenerate case lies on the horizontal
axis of the trace-determinant plane, separating the saddles from the nodal sources.
(2) To find the equilibrium points, we set the right-hand side of y′ = Ay equal to zero, as in Ay = 0.
Consequently, the equilibrium points are simply the nullspace of A, which is generated by a single
vector, v1 = (1, −2)^T. Thus, everything on the line y = −2x is an equilibrium point.
(3) The characteristic polynomial is

    p(λ) = λ² − Tλ + D = λ² − 3λ,

which produces eigenvalues λ1 = 0 and λ2 = 3. Because

    A − 0I = [8 4; −10 −5],

the eigenvector is v1 = (1, −2)^T, the same vector that generates the line of equilibrium points. Because

    A − 3I = [5 4; −10 −8],

v2 = (4, −5)^T. Thus, the general solution is

    y(t) = C1 e^{0t}(1, −2)^T + C2 e^{3t}(4, −5)^T,

or

    y(t) = C1 (1, −2)^T + C2 e^{3t}(4, −5)^T.

Note that each solution in this family is the sum of a fixed multiple of (1, −2)^T and an increasing
multiple of (4, −5)^T. Thus, as t → ∞, solutions move away from the line of equilibrium points along
lines parallel to (4, −5)^T. (Figure and numerical phase portrait omitted.)

3.47. The solutions emanate from the line of equilibrium points, rather than decaying into it.

Section 4. Higher Dimensional Systems
4.1. If

    A = [2 1 0; 0 1 0; 6 10 −1],

then

    p(λ) = det(A − λI) = det [2−λ 1 0; 0 1−λ 0; 6 10 −1−λ].

Expanding down the third column,

    p(λ) = (−1 − λ) det [2−λ 1; 0 1−λ]
         = (−1 − λ)(2 − λ)(1 − λ)
         = −(λ + 1)(λ − 2)(λ − 1).

Thus, the eigenvalues are −1, 2, and 1. The graph of the characteristic polynomial (omitted) crosses the
horizontal axis at the eigenvalues −1, 1, and 2.
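As a quick numerical check of the eigenvalues found above (numpy assumed; not part of the original solution):

```python
import numpy as np

# Exercise 4.1: the 3x3 matrix has eigenvalues -1, 1, and 2.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [6.0, 10.0, -1.0]])

eigs = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigs, [-1.0, 1.0, 2.0])
```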
4.2. If
    A = [−1 6 2; 0 −1 0; −1 11 2],

then

    p(λ) = det(A − λI) = det [−1−λ 6 2; 0 −1−λ 0; −1 11 2−λ].

Expanding across the second row,

    p(λ) = (−1 − λ) det [−1−λ 2; −1 2−λ]
         = −(λ + 1)((−1 − λ)(2 − λ) + 2)
         = −(λ + 1)(λ² − λ)
         = −λ(λ + 1)(λ − 1).

Thus, the eigenvalues are 0, −1, and 1. The graph of the characteristic polynomial (omitted) crosses the
horizontal axis at the eigenvalues −1, 0, and 1.

4.3. If
    A = [2 0 0; −6 1 −4; −3 0 −1],

then

    p(λ) = det(A − λI) = det [2−λ 0 0; −6 1−λ −4; −3 0 −1−λ].

Expanding across the first row,

    p(λ) = (2 − λ) det [1−λ −4; 0 −1−λ]
         = (2 − λ)(1 − λ)(−1 − λ)
         = −(λ − 2)(λ − 1)(λ + 1).

Thus, the eigenvalues are 2, 1, and −1. Because A − 2I reduces,

    A − 2I = [0 0 0; −6 −1 −4; −3 0 −3] → [1 0 1; 0 1 −2; 0 0 0],

it is easily seen that the nullspace of A − 2I is generated by the eigenvector v1 = (−1, 2, 1)^T. In a similar
manner, we arrive at the eigenvalue–eigenvector pairs 1 → (0, 1, 0)^T and −1 → (0, 2, 1)^T. Because

    det [−1 0 0; 2 1 2; 1 0 1] = −1,

the eigenvectors are independent.
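As a quick numerical check of Exercise 4.3 (numpy assumed; not part of the original solution):

```python
import numpy as np

# Exercise 4.3: verify each eigenpair and the independence of the eigenvectors.
A = np.array([[2.0, 0.0, 0.0],
              [-6.0, 1.0, -4.0],
              [-3.0, 0.0, -1.0]])
pairs = [(2.0, np.array([-1.0, 2.0, 1.0])),
         (1.0, np.array([0.0, 1.0, 0.0])),
         (-1.0, np.array([0.0, 2.0, 1.0]))]
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)

# Stacking the eigenvectors as columns gives a nonsingular matrix.
V = np.column_stack([v for _, v in pairs])
assert abs(np.linalg.det(V)) > 1e-10
```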
4.4. If
    A = [1 0 0; 3 −2 1; 5 −5 2],

then

    p(λ) = det(A − λI) = det [1−λ 0 0; 3 −2−λ 1; 5 −5 2−λ].

Expanding across the first row,

    p(λ) = (1 − λ) det [−2−λ 1; −5 2−λ]
         = (1 − λ)((−2 − λ)(2 − λ) + 5)
         = (1 − λ)(λ² + 1).

Thus, the eigenvalues are 1, i, and −i. Because A − iI reduces,

    A − iI = [1−i 0 0; 3 −2−i 1; 5 −5 2−i] → [1 0 0; 0 1 −2/5 + i/5; 0 0 0],

it is easily seen that the nullspace of A − iI is generated by the eigenvector v = (0, 2 − i, 5)^T. In a similar
manner, we arrive at the eigenvalue–eigenvector pairs −i → (0, 2 + i, 5)^T and 1 → (1, 1, 0)^T. Because

    det [0 0 1; 2−i 2+i 1; 5 5 0] = −10i,

the eigenvectors are independent.

4.5. If

    A = [−4 0 2; 12 2 −6; −6 0 3],

then
−4 − λ
0
2
12
2 − λ −6
=
−6
0
3−λ Expanding down the second column,
−4 − λ
2
−6
3−λ
= −(λ − 2)(λ2 + λ)
= −λ(λ − 2)(λ + 1). p(λ) = (2 − λ) Thus, the eigenvalues are 0, 2, and −1, respectively. Because A − 0I reduces,
A − 0I = −4
12
−6 0
2
0 2
−6
3 → 1
0
0 0
1
0 −1/2
0
0 , it is easily seen that the nullspace of A − 0I is generated by the eigenvector v1 = (1, 0, 2)T . In a similar
manner, we arrive at the following eigenvalueeigenvector pairs.
2→ 0
1
0 and −1→ −2
2
−3 620 Chapter 9. Linear Systems with Constant Coefﬁcients
Because
det 4.6. the eigenvectors are independent.
If
    A = [−5 −2 0; 4 1 0; −3 −1 −2],

then

    p(λ) = det(A − λI) = det [−5−λ −2 0; 4 1−λ 0; −3 −1 −2−λ].

Expanding down the third column,

    p(λ) = (−2 − λ) det [−5−λ −2; 4 1−λ]
         = −(λ + 2)((−5 − λ)(1 − λ) + 8)
         = −(λ + 2)(λ² + 4λ + 3)
         = −(λ + 2)(λ + 1)(λ + 3).

Thus, the eigenvalues are −2, −1, and −3. Because A + 2I reduces,

    A + 2I = [−3 −2 0; 4 3 0; −3 −1 0] → [1 0 0; 0 1 0; 0 0 0],

it is easily seen that the nullspace of A + 2I is generated by the eigenvector v = (0, 0, 1)^T. In a similar
manner, we arrive at the eigenvalue–eigenvector pairs −1 → (−1, 2, 1)^T and −3 → (−1, 1, −2)^T. Because

    det [0 −1 −1; 0 2 1; 1 1 −2] = 1,

the eigenvectors are independent.
4.7. The system in matrix form,

    (x, y, z)′ = [4 −5 4; 0 −1 4; 0 0 1](x, y, z)^T,

reveals that the matrix

    A = [4 −5 4; 0 −1 4; 0 0 1]

is upper triangular. Thus, the eigenvalues are located on the main diagonal and are 4, −1, and 1. Because

    A + I = [5 −5 4; 0 0 4; 0 0 2] → [1 −1 0; 0 0 1; 0 0 0],

it is easily seen that −1 → (1, 1, 0)^T is an eigenvalue–eigenvector pair. Similarly, 4 → (1, 0, 0)^T and
1 → (2, 2, 1)^T
are the remaining eigenvalue–eigenvector pairs. These lead to the general solution

    (x, y, z)^T = C1 e^{−t}(1, 1, 0)^T + C2 e^{4t}(1, 0, 0)^T + C3 e^{t}(2, 2, 1)^T.

4.8. For

    A = [−3 0 −1; 3 2 3; 2 0 0],

we have

    A − λI = [−3−λ 0 −1; 3 2−λ 3; 2 0 −λ].
−λ We can compute the characteristic polynomial p(λ) = det (A − λI ) by expanding along the second column
to get
−3 − λ −1
p(λ) = (2 − λ) det
2
−λ
= −(λ − 2)(λ2 + 3λ + 2)
= −(λ − 2)(λ + 1)(λ + 2).
Hence the eigenvalues are λ1 = −2, λ2 = −1, and λ3 = 2.
For λ1 = −2 we have

    A − λ1 I = A + 2I = [−1 0 −1; 3 4 3; 2 0 2].

The nullspace is generated by the vector v1 = (−1, 0, 1)^T.
For λ2 = −1 we have

    A − λ2 I = A + I = [−2 0 −1; 3 3 3; 2 0 1].

The nullspace is generated by the vector v2 = (1, 1, −2)^T.
For λ3 = 2 we have

    A − λ3 I = A − 2I = [−5 0 −1; 3 0 3; 2 0 −2].

The nullspace is generated by the vector v3 = (0, 1, 0)^T.
Thus we have three exponential solutions:

    y1(t) = e^{λ1 t} v1 = e^{−2t}(−1, 0, 1)^T,
    y2(t) = e^{λ2 t} v2 = e^{−t}(1, 1, −2)^T,
    y3(t) = e^{λ3 t} v3 = e^{2t}(0, 1, 0)^T.

Since the three eigenvalues are distinct, these solutions are linearly independent and form a fundamental set
of solutions. The general solution is
y(t) = C1 y1(t) + C2 y2(t) + C3 y3(t).
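As a quick numerical check of Exercise 4.8 (numpy assumed; not part of the original solution):

```python
import numpy as np

# Exercise 4.8: each claimed eigenpair satisfies A v = lambda v, and the
# eigenvectors are independent (nonzero determinant), so the three
# exponential solutions form a fundamental set.
A = np.array([[-3.0, 0.0, -1.0],
              [3.0, 2.0, 3.0],
              [2.0, 0.0, 0.0]])
pairs = [(-2.0, np.array([-1.0, 0.0, 1.0])),
         (-1.0, np.array([1.0, 1.0, -2.0])),
         (2.0, np.array([0.0, 1.0, 0.0]))]
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
V = np.column_stack([v for _, v in pairs])
assert abs(np.linalg.det(V)) > 1e-10
```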
4.9.
In matrix form,

    (x, y, z)′ = [−3 0 0; −5 6 −4; −5 2 0](x, y, z)^T,

the characteristic polynomial of the coefficient matrix

    A = [−3 0 0; −5 6 −4; −5 2 0]

is found by calculating

    p(λ) = det(A − λI) = det [−3−λ 0 0; −5 6−λ −4; −5 2 −λ].

Expanding across the first row,

    p(λ) = (−3 − λ) det [6−λ −4; 2 −λ]
         = −(λ + 3)(λ² − 6λ + 8)
         = −(λ + 3)(λ − 4)(λ − 2).

Thus, the eigenvalues are 4, −3, and 2. Because A − 4I reduces,

    A − 4I = [−7 0 0; −5 2 −4; −5 2 −4] → [1 0 0; 0 1 −2; 0 0 0],

it is easily seen that the nullspace of A − 4I is generated by the eigenvector v1 = (0, 2, 1)^T. In a similar
manner, we arrive at the eigenvalue–eigenvector pairs −3 → (1, 1, 1)^T and 2 → (0, 1, 1)^T. Thus, the general
solution is

    (x, y, z)^T = C1 e^{4t}(0, 2, 1)^T + C2 e^{−3t}(1, 1, 1)^T + C3 e^{2t}(0, 1, 1)^T.
4.10. For
    A = [−3 −6 −2; 0 1 0; 0 −2 −1],

we have

    A − λI = [−3−λ −6 −2; 0 1−λ 0; 0 −2 −1−λ].

We can compute the characteristic polynomial p(λ) = det(A − λI) by expanding across the second row to get

    p(λ) = (1 − λ) det [−3−λ −2; 0 −1−λ]
         = (1 − λ)(−3 − λ)(−1 − λ).

Hence, the eigenvalues are λ1 = 1, λ2 = −3, and λ3 = −1. For λ1 = 1, we have

    A − I = [−4 −6 −2; 0 0 0; 0 −2 −2] → [1 0 −1; 0 1 1; 0 0 0].

The nullspace is generated by the vector v1 = (1, −1, 1)^T. For λ2 = −3,

    A + 3I = [0 −6 −2; 0 4 0; 0 −2 2] → [0 1 0; 0 0 1; 0 0 0].

The nullspace is generated by the vector v2 = (1, 0, 0)^T. For λ3 = −1,

    A + I = [−2 −6 −2; 0 2 0; 0 −2 0] → [1 0 1; 0 1 0; 0 0 0].

The nullspace is generated by v3 = (−1, 0, 1)^T. Thus, we have three exponential solutions:

    y1(t) = e^{t}(1, −1, 1)^T,   y2(t) = e^{−3t}(1, 0, 0)^T,   y3(t) = e^{−t}(−1, 0, 1)^T.

Since the three eigenvalues are distinct, these solutions are linearly independent and form a fundamental set
of solutions. The general solution is

    y(t) = C1 e^{t}(1, −1, 1)^T + C2 e^{−3t}(1, 0, 0)^T + C3 e^{−t}(−1, 0, 1)^T.

4.11. The characteristic polynomial of the matrix

    A = [−3 4 8; −2 3 2; 0 0 2]

is found by calculating

    p(λ) = det(A − λI) = det [−3−λ 4 8; −2 3−λ 2; 0 0 2−λ].

Expanding across the third row,

    p(λ) = (2 − λ) det [−3−λ 4; −2 3−λ]
         = −(λ − 2)(λ² − 1)
         = −(λ − 2)(λ + 1)(λ − 1).

Thus, the eigenvalues are −1, 1, and 2. Because A + I reduces,

    A + I = [−2 4 8; −2 4 2; 0 0 3] → [1 −2 0; 0 0 1; 0 0 0],

it is easily seen that the nullspace of A + I is generated by the eigenvector v1 = (2, 1, 0)^T. In a similar
manner, we arrive at the eigenvalue–eigenvector pairs 1 → (1, 1, 0)^T and 2 → (0, −2, 1)^T. Thus, the general
solution is

    y(t) = C1 e^{−t}(2, 1, 0)^T + C2 e^{t}(1, 1, 0)^T + C3 e^{2t}(0, −2, 1)^T.
4.12. In matrix form, the system is x′ = Ax, where A is the 4 × 4 coefficient matrix of the system. Using a
computer, we find the following eigenvalue–eigenvector pairs:

    −2 → (1, −2, 0, −1)^T,   2 → (1, 1, 0, 1)^T,   −1 → (0, 1, 0, 1)^T,   1 → (1, 1, −1, 1)^T.

Because the eigenvalues are distinct, the eigenvectors are independent, and the exponential solutions

    y1(t) = e^{−2t}(1, −2, 0, −1)^T,   y2(t) = e^{2t}(1, 1, 0, 1)^T,
    y3(t) = e^{−t}(0, 1, 0, 1)^T,   y4(t) = e^{t}(1, 1, −1, 1)^T

form a fundamental set of solutions. Thus,

    x(t) = C1 e^{−2t}(1, −2, 0, −1)^T + C2 e^{2t}(1, 1, 0, 1)^T + C3 e^{−t}(0, 1, 0, 1)^T + C4 e^{t}(1, 1, −1, 1)^T

is the general solution.
4.13. The general solution in Exercise 4.7 was

    (x, y, z)^T = C1 e^{−t}(1, 1, 0)^T + C2 e^{4t}(1, 0, 0)^T + C3 e^{t}(2, 2, 1)^T.

If x(0) = 1, y(0) = −1, and z(0) = 2, then

    (1, −1, 2)^T = C1 (1, 1, 0)^T + C2 (1, 0, 0)^T + C3 (2, 2, 1)^T.

The augmented matrix reduces:

    [1 1 2 | 1; 1 0 2 | −1; 0 0 1 | 2] → [1 0 0 | −5; 0 1 0 | 2; 0 0 1 | 2].

Thus, C1 = −5, C2 = 2, and C3 = 2, and the particular solution is

    (x(t), y(t), z(t))^T = −5e^{−t}(1, 1, 0)^T + 2e^{4t}(1, 0, 0)^T + 2e^{t}(2, 2, 1)^T
                         = (−5e^{−t} + 2e^{4t} + 4e^{t}, −5e^{−t} + 4e^{t}, 2e^{t})^T.

4.14. The solution has the form

    y(t) = C1 y1(t) + C2 y2(t) + C3 y3(t),

where y1, y2, and y3 are the fundamental set of solutions found in Exercise 4.8. Hence we must have

    (1, −1, 2)^T = y(0) = C1 y1(0) + C2 y2(0) + C3 y3(0) = C1 v1 + C2 v2 + C3 v3 = [v1, v2, v3](C1, C2, C3)^T,

where v1, v2, and v3 are the eigenvectors of A found in Exercise 4.8. To solve the system we form the
augmented matrix

    [v1, v2, v3, y(0)] = [−1 1 0 | 1; 0 1 1 | −1; 1 −2 0 | 2],

which reduces to the row echelon form

    [1 0 0 | −4; 0 1 0 | −3; 0 0 1 | 2].

Back-solving, we find that C1 = −4, C2 = −3, and C3 = 2. Hence the solution is

    y(t) = −4e^{−2t}(−1, 0, 1)^T − 3e^{−t}(1, 1, −2)^T + 2e^{2t}(0, 1, 0)^T
         = (4e^{−2t} − 3e^{−t}, −3e^{−t} + 2e^{2t}, −4e^{−2t} + 6e^{−t})^T.
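The back-substitution in Exercise 4.14 is just a linear solve; as a quick check (numpy assumed; not part of the original solution):

```python
import numpy as np

# Exercise 4.14: solve [v1 v2 v3] C = y(0) for the constants.
V = np.array([[-1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, -2.0, 0.0]])   # columns are v1, v2, v3
y0 = np.array([1.0, -1.0, 2.0])

C = np.linalg.solve(V, y0)
assert np.allclose(C, [-4.0, -3.0, 2.0])
```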
4.15. The general solution in Exercise 4.9 was

    (x, y, z)^T = C1 e^{4t}(0, 2, 1)^T + C2 e^{−3t}(1, 1, 1)^T + C3 e^{2t}(0, 1, 1)^T.

If x(0) = −2, y(0) = 0, and z(0) = 2, then

    (−2, 0, 2)^T = C1 (0, 2, 1)^T + C2 (1, 1, 1)^T + C3 (0, 1, 1)^T.

The augmented matrix reduces:

    [0 1 0 | −2; 2 1 1 | 0; 1 1 1 | 2] → [1 0 0 | −2; 0 1 0 | −2; 0 0 1 | 6].

Thus, C1 = −2, C2 = −2, and C3 = 6, and the particular solution is

    (x(t), y(t), z(t))^T = −2e^{4t}(0, 2, 1)^T − 2e^{−3t}(1, 1, 1)^T + 6e^{2t}(0, 1, 1)^T
                         = (−2e^{−3t}, −4e^{4t} − 2e^{−3t} + 6e^{2t}, −2e^{4t} − 2e^{−3t} + 6e^{2t})^T.
4.16. The general solution in Exercise 4.10 was

    y(t) = C1 e^{t}(1, −1, 1)^T + C2 e^{−3t}(1, 0, 0)^T + C3 e^{−t}(−1, 0, 1)^T.

Because y(0) = (−3, −3, 0)^T,

    (−3, −3, 0)^T = C1 (1, −1, 1)^T + C2 (1, 0, 0)^T + C3 (−1, 0, 1)^T.

The augmented matrix reduces:

    [1 1 −1 | −3; −1 0 0 | −3; 1 0 1 | 0] → [1 0 0 | 3; 0 1 0 | −9; 0 0 1 | −3].

Thus, C1 = 3, C2 = −9, and C3 = −3, and the particular solution is

    y(t) = 3e^{t}(1, −1, 1)^T − 9e^{−3t}(1, 0, 0)^T − 3e^{−t}(−1, 0, 1)^T.

4.17. The general solution in Exercise 4.11 was

    y(t) = C1 e^{−t}(2, 1, 0)^T + C2 e^{t}(1, 1, 0)^T + C3 e^{2t}(0, −2, 1)^T.

If y(0) = (1, −2, 1)^T, then

    (1, −2, 1)^T = C1 (2, 1, 0)^T + C2 (1, 1, 0)^T + C3 (0, −2, 1)^T.

The augmented matrix reduces:

    [2 1 0 | 1; 1 1 −2 | −2; 0 0 1 | 1] → [1 0 0 | 1; 0 1 0 | −1; 0 0 1 | 1].

Thus, C1 = 1, C2 = −1, and C3 = 1, and the particular solution is

    y(t) = e^{−t}(2, 1, 0)^T − e^{t}(1, 1, 0)^T + e^{2t}(0, −2, 1)^T
         = (2e^{−t} − e^{t}, e^{−t} − e^{t} − 2e^{2t}, e^{2t})^T.
4.18. The general solution in Exercise 4.12 was

    x(t) = C1 e^{−2t}(1, −2, 0, −1)^T + C2 e^{2t}(1, 1, 0, 1)^T + C3 e^{−t}(0, 1, 0, 1)^T + C4 e^{t}(1, 1, −1, 1)^T.

Because x1(0) = 1, x2(0) = −1, x3(0) = 0, and x4(0) = 2,

    (1, −1, 0, 2)^T = C1 (1, −2, 0, −1)^T + C2 (1, 1, 0, 1)^T + C3 (0, 1, 0, 1)^T + C4 (1, 1, −1, 1)^T.

The augmented matrix reduces:

    [1 1 0 1 | 1; −2 1 1 1 | −1; 0 0 0 −1 | 0; −1 1 1 1 | 2] → [1 0 0 0 | 3; 0 1 0 0 | −2; 0 0 1 0 | 7; 0 0 0 1 | 0].

Thus, C1 = 3, C2 = −2, C3 = 7, and C4 = 0, and the particular solution is

    x(t) = 3e^{−2t}(1, −2, 0, −1)^T − 2e^{2t}(1, 1, 0, 1)^T + 7e^{−t}(0, 1, 0, 1)^T.
4.19. Using Euler's formula,

    y(t) = e^{2it}(1, 1 + 2i, −3 − 3i)^T
         = (cos 2t + i sin 2t)[(1, 1, −3)^T + i(0, 2, −3)^T]
         = (cos 2t, cos 2t − 2 sin 2t, −3 cos 2t + 3 sin 2t)^T + i(sin 2t, 2 cos 2t + sin 2t, −3 cos 2t − 3 sin 2t)^T.

Thus, the real and imaginary parts of the complex solution y(t) = y1(t) + i y2(t) are

    y1(t) = (cos 2t, cos 2t − 2 sin 2t, −3 cos 2t + 3 sin 2t)^T

and

    y2(t) = (sin 2t, 2 cos 2t + sin 2t, −3 cos 2t − 3 sin 2t)^T.

4.20. Using Euler's formula,
    y(t) = e^{(1+i)t}(1, 1 + i, 1 − i, 0)^T
         = e^{t}(cos t + i sin t)[(1, 1, 1, 0)^T + i(0, 1, −1, 0)^T]
         = e^{t}(cos t, cos t − sin t, cos t + sin t, 0)^T + i e^{t}(sin t, cos t + sin t, −cos t + sin t, 0)^T.

Thus, the real and imaginary parts of the complex solution y(t) = y1(t) + i y2(t) are

    y1(t) = e^{t}(cos t, cos t − sin t, cos t + sin t, 0)^T

and

    y2(t) = e^{t}(sin t, cos t + sin t, −cos t + sin t, 0)^T.

4.21. In matrix form,

    (x, y, z)′ = [−4 8 8; −4 4 2; 0 0 2](x, y, z)^T.

Using a computer, the matrix

    A = [−4 8 8; −4 4 2; 0 0 2]
has eigenvalues 2, 4i, and −4i. For the eigenvalue 2, we look for a vector in the nullspace (eigenspace) of

    A − 2I = [−6 8 8; −4 2 2; 0 0 0].

The computer tells us that (0, −1, 1)^T is in the nullspace of A − 2I. Thus, one solution is
y1(t) = e^{2t}(0, −1, 1)^T. In a similar vein, our computer tells us that (1 − i, 1, 0)^T is in the nullspace of
A − (4i)I. Thus, we have the conjugate solutions

    z(t) = e^{4it}(1 − i, 1, 0)^T   and   z̄(t) = e^{−4it}(1 + i, 1, 0)^T.
Using Euler's formula, we find the real and imaginary parts of the solution z(t):

    z(t) = e^{4it}(1 − i, 1, 0)^T
         = (cos 4t + i sin 4t)[(1, 1, 0)^T + i(−1, 0, 0)^T]
         = (cos 4t + sin 4t, cos 4t, 0)^T + i(−cos 4t + sin 4t, sin 4t, 0)^T.
The real and imaginary parts of z are solutions, and we can write the general solution

    (x, y, z)^T = C1 e^{2t}(0, −1, 1)^T + C2 (cos 4t + sin 4t, cos 4t, 0)^T + C3 (−cos 4t + sin 4t, sin 4t, 0)^T.

4.22. Using a computer, the matrix

    A = [2 4 4; 1 2 3; −3 −4 −5]

has the following eigenvalue–eigenvector pairs.
    −1 → (0, −1, 1)^T,   2i → (−2, −1 − i, 2)^T,   −2i → (−2, −1 + i, 2)^T.

Using Euler's formula,
    z(t) = e^{2it}(−2, −1 − i, 2)^T
         = (cos 2t + i sin 2t)[(−2, −1, 2)^T + i(0, −1, 0)^T]
         = cos 2t(−2, −1, 2)^T − sin 2t(0, −1, 0)^T + i[cos 2t(0, −1, 0)^T + sin 2t(−2, −1, 2)^T].

The real and imaginary parts of z are solutions, and we can write the general solution

    y(t) = C1 e^{−t}(0, −1, 1)^T + C2 (−2 cos 2t, −cos 2t + sin 2t, 2 cos 2t)^T + C3 (−2 sin 2t, −cos 2t − sin 2t, 2 sin 2t)^T.
4.23. In matrix form,

    (x, y, z)′ = [6 0 −4; 8 −2 0; 8 0 −2](x, y, z)^T.

Using a computer, the matrix

    A = [6 0 −4; 8 −2 0; 8 0 −2]

has eigenvalues −2, 2 + 4i, and 2 − 4i. For the eigenvalue −2, we look for a vector in the nullspace (eigenspace)
of

    A + 2I = [8 0 −4; 8 0 0; 8 0 0].

The computer tells us that (0, 1, 0)^T is in the nullspace of A + 2I. Thus, one solution is y1(t) = e^{−2t}(0, 1, 0)^T.
In a similar vein, our computer tells us that (1 + i, 2, 2)^T is in the nullspace of A − (2 + 4i)I. Thus, we have
the conjugate solutions

    z(t) = e^{(2+4i)t}(1 + i, 2, 2)^T   and   z̄(t) = e^{(2−4i)t}(1 − i, 2, 2)^T.
Using Euler's formula, we find the real and imaginary parts of the solution z(t):

    z(t) = e^{2t} e^{4it}(1 + i, 2, 2)^T
         = e^{2t}(cos 4t + i sin 4t)[(1, 2, 2)^T + i(1, 0, 0)^T]
         = e^{2t}(cos 4t − sin 4t, 2 cos 4t, 2 cos 4t)^T + i e^{2t}(cos 4t + sin 4t, 2 sin 4t, 2 sin 4t)^T.

The real and imaginary parts of z are solutions, and we can write the general solution
    (x, y, z)^T = C1 e^{−2t}(0, 1, 0)^T + C2 e^{2t}(cos 4t − sin 4t, 2 cos 4t, 2 cos 4t)^T + C3 e^{2t}(cos 4t + sin 4t, 2 sin 4t, 2 sin 4t)^T.

4.24. Using a computer, the matrix

    A = [−1 0 0; −52 −11 26; −20 −4 9]

has the following eigenvalue–eigenvector pairs.
    −1 + 2i → (0, 5 − i, 2)^T,   −1 − 2i → (0, 5 + i, 2)^T,   −1 → (1, 0, 2)^T.
Using Euler's formula,

    z(t) = e^{(−1+2i)t}(0, 5 − i, 2)^T
         = e^{−t}(cos 2t + i sin 2t)[(0, 5, 2)^T + i(0, −1, 0)^T]
         = e^{−t}[cos 2t(0, 5, 2)^T − sin 2t(0, −1, 0)^T] + i e^{−t}[cos 2t(0, −1, 0)^T + sin 2t(0, 5, 2)^T].
y(t) = C1 e−t
4.25. 0
5 cos 2t + sin 2t
2 cos 2t In system y = Ay, where
A=
we have
A − λI = 0
− cos 2t + 5 sin 2t
2 sin 2t + C 2 e −t −7
2
3 −13
3
8 −7 − λ
2
3 0
0
−2 + C 3 e −t 1
0
2 , −13
0
3−λ
0
8
−2 − λ . We can compute the characteristic polynomial by expanding along the third column. We get
p(λ) = det (A − λI )
−7 − λ −13
2
3−λ
= −(λ + 2)(λ2 + 4λ + 5).
= (−2 − λ) det Hence we have one real eigenvalue λ1 = −2, and the quadratic λ2 + 4λ + 5 has complex roots λ2 = −2 + i ,
and λ2 = −2 − i . For the eigenvalue λ1 = −2, we look for a vector in the nullspace (eigenspace) of
A − λ1 I = A + 2 I = −5
2
3 −13
5
8 0
0
0 . The eigenspace is generated by v1 = (0, 0, 1)T . Thus, one solution is
y1 (t) = e−2t 0
0
1 . For the eigenvalue λ2 = −2 + i , we look for an vector in the nullspace (eigenspace) of
A − λ1 I = A + (2 − i)I = −5 − i
2
3 −13
5−i
8 0
0
−i . The eigenspace is generated by (−5 + i, 2, 3 − i)T . Thus, we have the complex conjugate solutions
z(t) = e(−2+i)t −5 + i
2
3−i and z(t) = e(−2−i)t −5 − i
2
3+i . 9.4. Higher Dimensional Systems 631 Using Euler’s formula, we ﬁnd the real and imaginary parts of the solution z(t).
    z(t) = e^{−2t} e^{it}(−5 + i, 2, 3 − i)^T
         = e^{−2t}(cos t + i sin t)[(−5, 2, 3)^T + i(1, 0, −1)^T]
         = e^{−2t}(−5 cos t − sin t, 2 cos t, 3 cos t + sin t)^T + i e^{−2t}(cos t − 5 sin t, 2 sin t, −cos t + 3 sin t)^T.

Thus we have the solutions

    y2(t) = Re(z(t)) = e^{−2t}(−5 cos t − sin t, 2 cos t, 3 cos t + sin t)^T

and

    y3(t) = Im(z(t)) = e^{−2t}(cos t − 5 sin t, 2 sin t, −cos t + 3 sin t)^T.

The general solution is
y(t) = C1 y1 (t) + C2 y2 (t) + C3 y3 (t).
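As a quick numerical check of the complex eigenpair in Exercise 4.25 (numpy assumed; not part of the original solution):

```python
import numpy as np

# Exercise 4.25: verify the real eigenpair and the complex eigenpair behind
# the solutions y2 and y3.
A = np.array([[-7.0, -13.0, 0.0],
              [2.0, 3.0, 0.0],
              [3.0, 8.0, -2.0]])
lam = -2.0 + 1.0j
v = np.array([-5.0 + 1.0j, 2.0, 3.0 - 1.0j])

assert np.allclose(A @ v, lam * v)
assert np.allclose(A @ np.array([0.0, 0.0, 1.0]), -2.0 * np.array([0.0, 0.0, 1.0]))
```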
4.26. If

    A = [−2 0 0; 10 5 −10; 1 2 −3],

then the characteristic polynomial is

    p(λ) = det(A − λI) = det [−2−λ 0 0; 10 5−λ −10; 1 2 −3−λ].

Expanding across the first row,

    p(λ) = (−2 − λ) det [5−λ −10; 2 −3−λ]
         = (−2 − λ)(λ² − 2λ + 5),

which has roots −2 and 1 ± 2i. For λ1 = −2,

    A + 2I = [0 0 0; 10 7 −10; 1 2 −1] → [1 0 −1; 0 1 0; 0 0 0],

and v1 = (1, 0, 1)^T is its associated eigenvector, so

    y1(t) = e^{−2t}(1, 0, 1)^T

is an exponential solution. For λ2 = 1 + 2i,

    A − (1 + 2i)I = [−3−2i 0 0; 10 4−2i −10; 1 2 −4−2i],

and v2 = (0, 2 + i, 1)^T is its associated eigenvector. Using Euler's formula,

    z(t) = e^{(1+2i)t}(0, 2 + i, 1)^T
         = e^{t}(cos 2t + i sin 2t)[(0, 2, 1)^T + i(0, 1, 0)^T]
         = e^{t}(0, 2 cos 2t − sin 2t, cos 2t)^T + i e^{t}(0, cos 2t + 2 sin 2t, sin 2t)^T.

The real and imaginary parts of z(t) give two more independent solutions,

    y2(t) = e^{t}(0, 2 cos 2t − sin 2t, cos 2t)^T   and   y3(t) = e^{t}(0, cos 2t + 2 sin 2t, sin 2t)^T,

and the general solution is
y(t) = C1 y1 (t) + C2 y2 (t) + C3 y3 (t).
4.27. In Exercise 4.21, the general solution was

    (x, y, z)^T = C1 e^{2t}(0, −1, 1)^T + C2 (cos 4t + sin 4t, cos 4t, 0)^T + C3 (−cos 4t + sin 4t, sin 4t, 0)^T.

If x(0) = 1, y(0) = 0, and z(0) = 0, then

    (1, 0, 0)^T = C1 (0, −1, 1)^T + C2 (1, 1, 0)^T + C3 (−1, 0, 0)^T.

The augmented matrix reduces:

    [0 1 −1 | 1; −1 1 0 | 0; 1 0 0 | 0] → [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | −1].

Thus, C1 = C2 = 0 and C3 = −1, giving the particular solution

    (x(t), y(t), z(t))^T = (cos 4t − sin 4t, −sin 4t, 0)^T.

4.28. In Exercise 4.22, the general solution was
    y(t) = C1 e^{−t}(0, −1, 1)^T + C2 (−2 cos 2t, −cos 2t + sin 2t, 2 cos 2t)^T + C3 (−2 sin 2t, −cos 2t − sin 2t, 2 sin 2t)^T.

If y(0) = (1, −1, 0)^T, then

    (1, −1, 0)^T = C1 (0, −1, 1)^T + C2 (−2, −1, 2)^T + C3 (0, −1, 0)^T.

The augmented matrix reduces:

    [0 −2 0 | 1; −1 −1 −1 | −1; 1 2 0 | 0] → [1 0 0 | 1; 0 1 0 | −1/2; 0 0 1 | 1/2].

Thus, C1 = 1, C2 = −1/2, and C3 = 1/2, and the solution is

    y(t) = (cos 2t − sin 2t, −e^{−t} − sin 2t, e^{−t} − cos 2t + sin 2t)^T.

4.29. In Exercise 4.23, the general solution was
    (x, y, z)^T = C1 e^{−2t}(0, 1, 0)^T + C2 e^{2t}(cos 4t − sin 4t, 2 cos 4t, 2 cos 4t)^T + C3 e^{2t}(cos 4t + sin 4t, 2 sin 4t, 2 sin 4t)^T.

If x(0) = −2, y(0) = −1, and z(0) = 0, then

    (−2, −1, 0)^T = C1 (0, 1, 0)^T + C2 (1, 2, 2)^T + C3 (1, 0, 0)^T.

The augmented matrix reduces:

    [0 1 1 | −2; 1 2 0 | −1; 0 2 0 | 0] → [1 0 0 | −1; 0 1 0 | 0; 0 0 1 | −2].

Thus, C1 = −1, C2 = 0, and C3 = −2, giving the particular solution

    (x(t), y(t), z(t))^T = (−2e^{2t}(cos 4t + sin 4t), −e^{−2t} − 4e^{2t} sin 4t, −4e^{2t} sin 4t)^T.

4.30. In Exercise 4.24, the general solution was
    y(t) = C1 e^{−t}(0, 5 cos 2t + sin 2t, 2 cos 2t)^T + C2 e^{−t}(0, −cos 2t + 5 sin 2t, 2 sin 2t)^T + C3 e^{−t}(1, 0, 2)^T.

If y(0) = (−2, 4, −2)^T, then

    (−2, 4, −2)^T = C1 (0, 5, 2)^T + C2 (0, −1, 0)^T + C3 (1, 0, 2)^T.

The augmented matrix reduces:

    [0 0 1 | −2; 5 −1 0 | 4; 2 0 2 | −2] → [1 0 0 | 1; 0 1 0 | 1; 0 0 1 | −2].

Thus, C1 = 1, C2 = 1, and C3 = −2, and the solution is

    y(t) = e^{−t}(0, 5 cos 2t + sin 2t, 2 cos 2t)^T + e^{−t}(0, −cos 2t + 5 sin 2t, 2 sin 2t)^T − 2e^{−t}(1, 0, 2)^T
         = e^{−t}(−2, 4 cos 2t + 6 sin 2t, 2 cos 2t + 2 sin 2t − 4)^T.
4.31. In Exercise 4.25, the general solution was

    (x, y, z)^T = C1 e^{−2t}(0, 0, 1)^T + C2 e^{−2t}(−5 cos t − sin t, 2 cos t, 3 cos t + sin t)^T + C3 e^{−2t}(cos t − 5 sin t, 2 sin t, −cos t + 3 sin t)^T.

If y(0) = (−1, 1, 1)^T, then

    (−1, 1, 1)^T = C1 (0, 0, 1)^T + C2 (−5, 2, 3)^T + C3 (1, 0, −1)^T.

The augmented matrix reduces:

    [0 −5 1 | −1; 0 2 0 | 1; 1 3 −1 | 1] → [1 0 0 | 1; 0 1 0 | 1/2; 0 0 1 | 3/2].

Thus, C1 = 1, C2 = 1/2, and C3 = 3/2, giving the particular solution

    y(t) = e^{−2t}(−cos t − 8 sin t, cos t + 3 sin t, 1 + 5 sin t)^T.
4.32. In Exercise 4.26, the general solution was

    y(t) = C1 e^{−2t}(1, 0, 1)^T + C2 e^{t}(0, 2 cos 2t − sin 2t, cos 2t)^T + C3 e^{t}(0, cos 2t + 2 sin 2t, sin 2t)^T.

If y(0) = (−1, 1, −1)^T, then

    (−1, 1, −1)^T = C1 (1, 0, 1)^T + C2 (0, 2, 1)^T + C3 (0, 1, 0)^T.

The augmented matrix reduces:

    [1 0 0 | −1; 0 2 1 | 1; 1 1 0 | −1] → [1 0 0 | −1; 0 1 0 | 0; 0 0 1 | 1].

Thus, C1 = −1, C2 = 0, and C3 = 1, and the solution is

    y(t) = −e^{−2t}(1, 0, 1)^T + e^{t}(0, cos 2t + 2 sin 2t, sin 2t)^T
         = (−e^{−2t}, e^{t}(cos 2t + 2 sin 2t), −e^{−2t} + e^{t} sin 2t)^T.
4.33. In matrix form,

    (x, y, z)′ = [1 0 0; 1 1 0; −10 8 5](x, y, z)^T.

Using a computer, the matrix

    A = [1 0 0; 1 1 0; −10 8 5]

has characteristic polynomial

    p(λ) = (λ − 1)²(λ − 5).

Thus, A has eigenvalues 1 and 5 with algebraic multiplicities 2 and 1, respectively. For the eigenvalue 1, we
look for a vector in the nullspace (eigenspace) of

    A − I = [0 0 0; 1 0 0; −10 8 4] → [1 0 0; 0 1 1/2; 0 0 0].

Note that there is one free variable and the eigenspace is generated by the single eigenvector (0, 1, −2)^T.
Therefore, the eigenvalue 1 has geometric multiplicity 1. For the eigenvalue 5, we look for a vector in the
nullspace (eigenspace) of

    A − 5I = [−4 0 0; 1 −4 0; −10 8 0] → [1 0 0; 0 1 0; 0 0 0].

Note again that there is only one free variable and the eigenspace is generated by the single eigenvector
(0, 0, 1)^T. Therefore, the eigenvalue 5 has geometric multiplicity 1. Consequently, there are not enough
independent eigenvectors to form a fundamental set of solutions.
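As a quick numerical check of the multiplicity count in Exercise 4.33 (numpy assumed; not part of the original solution):

```python
import numpy as np

# Exercise 4.33: lambda = 1 has algebraic multiplicity 2, but the eigenspace
# (nullspace of A - I) is only one-dimensional, so eigenvector solutions
# alone cannot form a fundamental set.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [-10.0, 8.0, 5.0]])

assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 1.0, 5.0])
# geometric multiplicity of lambda = 1 is 3 - rank(A - I) = 1
assert np.linalg.matrix_rank(A - np.eye(3)) == 2
assert np.allclose(A @ np.array([0.0, 1.0, -2.0]), np.array([0.0, 1.0, -2.0]))
```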
4.34. Using a computer, the matrix

    A = [2 0 0; −6 2 3; 6 0 −1]

has characteristic polynomial

    p(λ) = −(λ − 2)²(λ + 1).

Thus, A has eigenvalues 2 and −1 with algebraic multiplicities 2 and 1, respectively. For λ1 = 2,

    A − 2I = [0 0 0; −6 0 3; 6 0 −3] → [1 0 −1/2; 0 0 0; 0 0 0].

Note that there are two free variables, so λ1 = 2 has geometric multiplicity 2. Thus, there are two independent
eigenvectors and independent exponential solutions

    y1(t) = e^{2t}(1, 0, 2)^T   and   y2(t) = e^{2t}(0, 1, 0)^T.

For λ2 = −1,

    A + I = [3 0 0; −6 3 3; 6 0 0] → [1 0 0; 0 1 1; 0 0 0].

Note that there is one free variable, so λ2 = −1 has geometric multiplicity 1. Thus, the nullspace is generated
by the single eigenvector v = (0, −1, 1)^T, and

    y3(t) = e^{−t}(0, −1, 1)^T

is another independent solution. Thus, the general solution is

    y(t) = C1 e^{2t}(1, 0, 2)^T + C2 e^{2t}(0, 1, 0)^T + C3 e^{−t}(0, −1, 1)^T.
4.35. In matrix form,

[ x; y; z ]′ = [ 4 0 0; −6 −2 0; 7 1 −2 ] [ x; y; z ].

Using a computer, matrix

A = [ 4 0 0; −6 −2 0; 7 1 −2 ]

has characteristic polynomial

p(λ) = (λ − 4)(λ + 2)².

Thus, A has eigenvalues 4 and −2 with algebraic multiplicities 1 and 2, respectively. For the eigenvalue 4, we look for a vector in the nullspace (eigenspace) of

A − 4I = [ 0 0 0; −6 −6 0; 7 1 −6 ] → [ 1 0 −1; 0 1 1; 0 0 0 ].

Note that there is one free variable and the eigenspace is generated by the single eigenvector (1, −1, 1)^T. Therefore, the eigenvalue 4 has geometric multiplicity 1. For the eigenvalue −2, we look for a vector in the nullspace (eigenspace) of

A + 2I = [ 6 0 0; −6 0 0; 7 1 0 ] → [ 1 0 0; 0 1 0; 0 0 0 ].

Note again that there is only one free variable and the eigenspace is generated by the single eigenvector (0, 0, 1)^T. Therefore, the eigenvalue −2 has geometric multiplicity 1. Consequently, there are not enough independent eigenvectors to form a fundamental solution set.
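The geometric multiplicity used above is the number of free variables, that is, n − rank(A − λI). A small sketch with exact rational arithmetic (the rank helper is ours, not from the text), applied to the matrix of this exercise:

```python
from fractions import Fraction

def rank(M):
    """Row-reduce a copy of M over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[4, 0, 0], [-6, -2, 0], [7, 1, -2]]
n = 3
for lam in (4, -2):
    shifted = [[A[i][j] - (lam if i == j else 0) for j in range(n)]
               for i in range(n)]
    print(lam, "-> geometric multiplicity", n - rank(shifted))
```

Both eigenvalues have geometric multiplicity 1 here, which is why no fundamental set of exponential solutions exists.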
4.36. Using a computer, matrix

A = [ 6 −5 10; −1 2 −2; −1 1 −1 ]

has characteristic polynomial

p(λ) = −(λ − 5)(λ − 1)².

Thus, A has eigenvalues 5 and 1, with algebraic multiplicities 1 and 2, respectively. For λ1 = 5,

A − 5I = [ 1 −5 10; −1 −3 −2; −1 1 −6 ] → [ 1 0 5; 0 1 −1; 0 0 0 ].

Note that there is one free variable, so λ1 = 5 has geometric multiplicity 1. The eigenvector v = (−5, 1, 1)^T gives the exponential solution

y1(t) = e^{5t} (−5, 1, 1)^T.

For λ2 = 1,

A − I = [ 5 −5 10; −1 1 −2; −1 1 −2 ] → [ 1 −1 2; 0 0 0; 0 0 0 ].

Note that there are two free variables, so λ2 = 1 has geometric multiplicity 2. The eigenvectors (1, 1, 0)^T and (−2, 0, 1)^T produce two independent exponential solutions

y2(t) = e^{t} (1, 1, 0)^T  and  y3(t) = e^{t} (−2, 0, 1)^T.

Thus, the general solution is

y(t) = C1 e^{5t} (−5, 1, 1)^T + C2 e^{t} (1, 1, 0)^T + C3 e^{t} (−2, 0, 1)^T.
4.37. Using a computer, matrix

A = [ −6 2 −3; −1 −1 −1; 4 −2 1 ]

has eigenvalue–eigenvector pairs

−2 → (1, −1, −2)^T,  −3 → (−1, 0, 1)^T,  and  −1 → (−1, −1, 1)^T.

Therefore,

y1(t) = e^{−2t} (1, −1, −2)^T,  y2(t) = e^{−3t} (−1, 0, 1)^T,  and  y3(t) = e^{−t} (−1, −1, 1)^T

form a fundamental set of solutions.
4.38. Using a computer we find that the eigenvalues of

A = [ −7 −4 2; 42 18 −11; 38 18 −10 ]

are −2 and the complex conjugate pair 1 ± 2i. For the eigenvalue −2 we look for a basis of the eigenspace, which is the nullspace of A − λI = A + 2I. Using a computer we find that the eigenspace has dimension 1 and is spanned by the vector v = (0, 2, 1)^T. Hence we have the solution

y1(t) = e^{−2t} (0, 2, 1)^T.

Next we look at the eigenspace for the eigenvalue 1 + 2i. The computer tells us that it has dimension 1 and is spanned by w = (−1 + i, 3 + 3i, 4)^T. Therefore we have the complex-valued solution

z(t) = e^{(1+2i)t} (−1 + i, 3 + 3i, 4)^T.

Expanding using Euler's formula, we get

z(t) = e^{t}(cos 2t + i sin 2t)[(−1, 3, 4)^T + i(1, 3, 0)^T]
     = e^{t}[cos 2t · (−1, 3, 4)^T − sin 2t · (1, 3, 0)^T] + ie^{t}[cos 2t · (1, 3, 0)^T + sin 2t · (−1, 3, 4)^T].

Since the real and imaginary parts of z(t) are solutions, we get two real solutions

y2(t) = Re(z(t)) = e^{t} (−cos 2t − sin 2t, 3 cos 2t − 3 sin 2t, 4 cos 2t)^T,
y3(t) = Im(z(t)) = e^{t} (cos 2t − sin 2t, 3 cos 2t + 3 sin 2t, 4 sin 2t)^T.

The functions y1, y2, and y3 form a fundamental set of solutions.
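The Euler-formula expansion above is easy to spot-check numerically: at any time t, the real and imaginary parts of z(t) = e^{(1+2i)t} w must equal the two real solutions. A sketch (the test time t = 0.7 is an arbitrary choice of ours):

```python
import cmath
import math

w = [-1 + 1j, 3 + 3j, 4]
t = 0.7  # arbitrary test time

# complex-valued solution z(t) = e^{(1+2i)t} w, evaluated componentwise
z = [cmath.exp((1 + 2j) * t) * wi for wi in w]

c, s, e = math.cos(2 * t), math.sin(2 * t), math.exp(t)
y_re = [e * (-c - s), e * (3 * c - 3 * s), e * 4 * c]
y_im = [e * (c - s), e * (3 * c + 3 * s), e * 4 * s]

for zi, re_i, im_i in zip(z, y_re, y_im):
    assert abs(zi.real - re_i) < 1e-9 and abs(zi.imag - im_i) < 1e-9
print("real/imaginary parts agree with the formulas in the text")
```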
4.39. Using a computer, matrix

A = [ 8 12 −4; −9 −13 4; −1 −3 0 ]

has eigenvalue–eigenvector pairs

−1 → (0, 1, 3)^T,  −2 + 2i → (−2, 2, 1 + i)^T,  and  −2 − 2i → (−2, 2, 1 − i)^T.

Therefore,

y1(t) = e^{−t} (0, 1, 3)^T

is a solution. Because

z(t) = e^{(−2+2i)t} (−2, 2, 1 + i)^T
     = e^{−2t}(cos 2t + i sin 2t)[(−2, 2, 1)^T + i(0, 0, 1)^T]
     = e^{−2t} (−2 cos 2t, 2 cos 2t, cos 2t − sin 2t)^T + ie^{−2t} (−2 sin 2t, 2 sin 2t, cos 2t + sin 2t)^T,

the set

y1(t) = e^{−t} (0, 1, 3)^T,
y2(t) = e^{−2t} (−2 cos 2t, 2 cos 2t, cos 2t − sin 2t)^T,  and
y3(t) = e^{−2t} (−2 sin 2t, 2 sin 2t, cos 2t + sin 2t)^T

forms a fundamental set of solutions.
4.40. Using a computer, matrix

A = [ −1 −2 4; −1 0 −4; −1 2 −6 ]

has eigenvalue–eigenvector pairs

−2 → (2, 1, 0)^T,  −2 → (−4, 0, 1)^T,  and  −3 → (−1, 1, 1)^T.

Note that the eigenvalue −2 has algebraic multiplicity 2 and geometric multiplicity 2, so we have enough eigenvectors to form a fundamental set of solutions:

y1(t) = e^{−2t} (2, 1, 0)^T,  y2(t) = e^{−2t} (−4, 0, 1)^T,  and  y3(t) = e^{−3t} (−1, 1, 1)^T.

4.41. Using a computer, matrix
A = [ −18 −18 10; 18 17 −10; 10 10 −7 ]

has eigenvalue–eigenvector pairs

−2 → (1, −2, −2)^T,  −3 + 2i → (−6 + 2i, 8 − i, 5)^T,  and  −3 − 2i → (−6 − 2i, 8 + i, 5)^T.

Therefore,

y1(t) = e^{−2t} (1, −2, −2)^T

is a solution. Because

z(t) = e^{(−3+2i)t} (−6 + 2i, 8 − i, 5)^T
     = e^{−3t}(cos 2t + i sin 2t)[(−6, 8, 5)^T + i(2, −1, 0)^T]
     = e^{−3t} (−6 cos 2t − 2 sin 2t, 8 cos 2t + sin 2t, 5 cos 2t)^T + ie^{−3t} (2 cos 2t − 6 sin 2t, −cos 2t + 8 sin 2t, 5 sin 2t)^T,

the set

y1(t) = e^{−2t} (1, −2, −2)^T,
y2(t) = e^{−3t} (−6 cos 2t − 2 sin 2t, 8 cos 2t + sin 2t, 5 cos 2t)^T,  and
y3(t) = e^{−3t} (2 cos 2t − 6 sin 2t, −cos 2t + 8 sin 2t, 5 sin 2t)^T

forms a fundamental set of solutions.

4.42. The matrix

A = [ −6 6 8; −12 16 24; 8 −12 −18 ]

has eigenvalue–eigenvector pairs
−2 → (2, 0, 1)^T,  −2 → (3, 2, 0)^T,  and  −4 → (−1, −3, 2)^T.

Note that the eigenvalue −2 has algebraic multiplicity 2 and geometric multiplicity 2, so we have enough eigenvectors to form a fundamental set of solutions:

y1(t) = e^{−2t} (2, 0, 1)^T,  y2(t) = e^{−2t} (3, 2, 0)^T,  and  y3(t) = e^{−4t} (−1, −3, 2)^T.

4.43. The matrix

A = [ 1 4 1 −5; −6 −10 −2 10; 3 4 −1 −5; −3 −4 −1 3 ]

has characteristic polynomial

p(λ) = (λ + 1)(λ + 2)³,
indicating eigenvalues −1 and −2, with algebraic multiplicities 1 and 3, respectively. The matrix

A + I = [ 2 4 1 −5; −6 −9 −2 10; 3 4 0 −5; −3 −4 −1 4 ] → [ 1 0 0 1; 0 1 0 −2; 0 0 1 1; 0 0 0 0 ]

has one free variable, generating a single eigenvector and the solution

y1(t) = e^{−t} (−1, 2, −1, 1)^T.

The matrix

A + 2I = [ 3 4 1 −5; −6 −8 −2 10; 3 4 1 −5; −3 −4 −1 5 ] → [ 1 4/3 1/3 −5/3; 0 0 0 0; 0 0 0 0; 0 0 0 0 ]

has three free variables. A basis for the nullspace (eigenspace) of A + 2I contains the vectors (4, −3, 0, 0)^T, (1, 0, −3, 0)^T, and (5, 0, 0, 3)^T, so, together with y1(t) = e^{−t}(−1, 2, −1, 1)^T,

y2(t) = e^{−2t} (4, −3, 0, 0)^T,  y3(t) = e^{−2t} (1, 0, −3, 0)^T,  and  y4(t) = e^{−2t} (5, 0, 0, 3)^T

complete a fundamental set of solutions.
4.44. The matrix

A = [ 6 −6 −6 −8; 8 −8 −6 −8; −1 7 −10 −9; −8 6 6 6 ]

has eigenvalue–eigenvector pairs

−2 → (11, 8, 0, 5)^T  and  −2 → (9, 7, 5, 0)^T.

Note that the eigenvalue −2 has algebraic multiplicity 2 and geometric multiplicity 2, so there are sufficient independent eigenvectors to form the independent solutions

y1(t) = e^{−2t} (11, 8, 0, 5)^T  and  y2(t) = e^{−2t} (9, 7, 5, 0)^T.

Further, the eigenvalue–eigenvector pair

−1 + 3i → (3 + i, 3 + i, 5, −3 − i)^T

allows us to form the complex solution

z(t) = e^{(−1+3i)t} (3 + i, 3 + i, 5, −3 − i)^T
     = e^{−t}(cos 3t + i sin 3t)[(3, 3, 5, −3)^T + i(1, 1, 0, −1)^T]
     = e^{−t}[cos 3t · (3, 3, 5, −3)^T − sin 3t · (1, 1, 0, −1)^T] + ie^{−t}[cos 3t · (1, 1, 0, −1)^T + sin 3t · (3, 3, 5, −3)^T].

The real and imaginary parts provide two additional independent solutions

y3(t) = e^{−t} (3 cos 3t − sin 3t, 3 cos 3t − sin 3t, 5 cos 3t, −3 cos 3t + sin 3t)^T  and
y4(t) = e^{−t} (cos 3t + 3 sin 3t, cos 3t + 3 sin 3t, 5 sin 3t, −cos 3t − 3 sin 3t)^T.

Thus, y1(t), y2(t), y3(t), and y4(t) form a fundamental set of solutions.
4.45. In Exercise 37, the fundamental set of solutions found there leads to the general solution

y(t) = C1 e^{−2t} (1, −1, −2)^T + C2 e^{−3t} (−1, 0, 1)^T + C3 e^{−t} (−1, −1, 1)^T.

The initial condition y(0) = (−6, 2, 9)^T provides

(−6, 2, 9)^T = C1 (1, −1, −2)^T + C2 (−1, 0, 1)^T + C3 (−1, −1, 1)^T.

The augmented matrix reduces:

[ 1 −1 −1 | −6; −1 0 −1 | 2; −2 1 1 | 9 ] → [ 1 0 0 | −3; 0 1 0 | 2; 0 0 1 | 1 ].

Thus, C1 = −3, C2 = 2, and C3 = 1, leading to

y(t) = (−3e^{−2t} − 2e^{−3t} − e^{−t}, 3e^{−2t} − e^{−t}, 6e^{−2t} + 2e^{−3t} + e^{−t})^T.

4.46. In Exercise 38 we found the fundamental set of solutions
− cos 2t − sin 2t
y1 (t) = Re(z(t)) = et 3 cos 2t − 3 sin 2t
4 cos 2t
cos 2t − sin 2t
y2 (t) = Im(z(t)) = et 3 cos 2t + 3 sin 2t
4 sin 2t . . 642 Chapter 9. Linear Systems with Constant Coefﬁcients
Our solution has the form y(t) = C1 y1 (t) + C2 y2 (t) + C3 y3 (t). At t = 0 we have
−2
2
5 −1
3
4 = y(0) = C1
−1
3
4 = 1
3
0 C1
C2
C3 0
2
1 1
3
0 + C2 0
2
1 + C3 . We can allow our computer to solve this system of equations, obtaining C1 = 1, C2 = −1, and C3 = 1. Thus
our solution is
−2et cos 2t
−6et sin 2t + 2e−2t
.
y(t) = y1 (t) − y2 (t) + y3 (t) =
t
4e cos 2t − 4et sin 2t + e−2t
4.47. In Exercise 39, the fundamental set of solutions found there lead to the general solution
y(t) = C1 e−t 0
1
3 −2 cos 2t
2 cos 2t
cos 2t − sin 2t + C2 e−2t + C 3 e −2 t −2 sin 2t
2 sin 2t
cos 2t + sin 2t The initial condition y(0) = (0, 8, 5)T provides
0
8
5 0
1
3 = C1 + C2 −2
2
1 + C3 0
0
1 . The augmented matrix reduces.
0
1
3 −2
2
1 0
0
1 0
8
5 → 1
0
0 0
1
0 0
0
1 8
0
−19 Thus, C1 = 8, C2 = 0 and C3 = −19, leading to
y(t) =
4.48. 38e−2t sin 2t
8e − 38e−2t sin 2t
− 19e−2t cos 2t − 19e−2t sin 2t
−t 24e−t . In Exercise 40, the fundamental set of solutions found there lead to the general solution
y(t) = C1 e−2t 2
1
0 + C2 e−2t −4
0
1 + C3 e−3t −1
1
1 The initial condition y(0) = (1, 0, 0)T provides
1
0
0 = C1 2
1
0 + C2 −4
0
1 + C3 −1
1
1 . 0
1
0 −1
−1
1 . The augmented matrix reduces
2
1
0 −4
0
1 −1
1
1 1
0
0 → 1
0
0 Thus, C1 = −1, C2 = −1, and C3 = 1, leading to
y(t) = 2 e − 2 t − e − 3t
−e−2t + e−3t
−e−2t + e−3t 0
0
1 . . 9.4. Higher Dimensional Systems
4.49. 643 In Exercise 41, the fundamental set of solutions found there lead to the general solution
y(t) =C1 e−2t 1
−2
−2 −6 cos 2t − 2 sin 2t
8 cos 2t + sin 2t
5 cos 2t
2 cos 2t − 6 sin 2t
− cos 2t + 8 sin 2t .
5 sin 2t + C 2 e − 3t + C3 e−3t The initial condition y(0) = (−1, 7, 3)T provides
−1
7
3 1
−2
−2 = C1 + C2 −6
8
5 2
−1
0 + C3 . The augmented matrix reduces.
1
−2
−2 −6
8
5 −1
7
3 2
−1
0 1
0
0 → 0
1
0 0
0
1 7
17/5
31/5 Thus, C1 = 7, C2 = 17/5 and C3 = 31/5, leading to
y(t) =
4.50. 7e−2t − e−3t (8 cos 2t − 44 sin 2t)
−14e−2t + e−3t (21 cos 2t + 53 sin 2t)
−14e−2t + e−3t (17 cos 2t + 31 sin 2t) . In Exercise 42, the fundamental set of solutions found there lead to the general solution
y(t) = C1 e−2t 2
0
1 3
2
0 + C3 e−4t + C2 3
2
0 + C3 → 1
0
0 0
1
0 + C2 e−2t −1
−3
2 . The initial condition y(0) = (−1, −4, 1)T provides
−1
−4
1 = C1 2
0
1 −1
−3
2 . The augmented matrix reduces.
2
0
1 3
2
0 −1
−3
2 −1
−4
1 0
0
1 13
−11
−6 Thus, C1 = 13, C2 = −11, and C3 = −6, leading to
y(t) =
4.51. −7e−2t + 6e−4t
−22e−2t + 18e−4t
13e−2t − 12e−4t . In Exercise 43, the fundamental set of solutions found there lead to the general solution −1 4
1
5 2 −3 0
0
+ C2 e−2t + C3 e − 2 t + C4 e−2t .
y(t) = C1 e−t −1 0
−3 0
1
0
0
3 The initial condition y(0) = (−1, 5, 2, 4)T provides −1 −1 4
1
5
5
2 −3 0
0 2 = C1 −1 + C2 0 + C3 −3 + C4 0 .
4
1
0
0
3 644 Chapter 9. Linear Systems with Constant Coefﬁcients
The augmented matrix reduces. −1 4
2 −1
1 −3
0
0 1
0
−3
0 5
0
0
3 1
−1 5
0
→
0
2
0
4 0
1
0
0 0
0
1
0 0
0
0
1 1
−1 −1 1 Thus, C1 = 1, C2 = −1, C3 = −1 and C4 = 1, leading to −e−t 2 e − t + 3e − 2 t y(t) = −t
.
−e + 3e−2t e − t + 3e − 2 t 4.52. In Exercise 44, the fundamental set of solutions found there lead to the general solution 11 9 3 cos 3t − sin 3t 8
7 3 cos 3t − sin 3t y(t) = C1 e−2t + C2 e−2t + C3 e−t 5
5 cos 3t
0
0
−3 cos 3t + sin 3t
5 cos 3t + 3 sin 3t cos 3t + 3 sin 3t + C 4 e −t .
5 sin 3t
− cos 3t − 3 sin 3t The initial condition y(0) = (−2, −1, 6, −5) provides −2 11 9 3
1 −1 8
7
3
1 6 = C1 0 + C2 5 + C3 5 + C4 0 .
−5
5
0
−3
−1 The augmented matrix reduces 11 9 3
8 7 3
0 5 5
5 0 −3 1
−2 −1 0
→
6
0
−5
0 1
1
0
−1 0
1
0
0 0
0
1
0 0
0
0
1 −1 1
1/5 −3/5 Thus, C1 = −1, C2 = 1, C3 = 1/5, and C3 = −3/5, leading to 11 9 3 cos 3t − sin 3t 8
7 1 3 cos 3t − sin 3t y(t) = −e−2t + e−2t + e−t 5 cos 3t
0
5
5
−3 cos 3t + sin 3t
5
0 cos 3t + 3 sin 3t 3 cos 3t + 3 sin 3t − e −t 5 sin 3t
5
− cos 3t − 3 sin 3t −2e−2t − 2e−t sin 3t
−t
−2 t
−e − 2e sin 3t y(t) = −2t
.
5e + e−t cos 3t − 3e−t sin 3t −2 t
−t
−5e + 2e sin 3t Section 5. The Exponential of a Matrix
5.1. It is easily checked that
A2 = 0
0 0
.
0 9.5. The Exponential of a Matrix 645 Therefore, the series
12
A + ···
2! eA = I + A +
truncates and 5.2. 0
−2
+
1
1 1
0 eA = I + A = −4
−1
=
2
1 −4
.
3 It is easily checked that
1
−1 A2 = 1
−1 1
−1 1
0
=
−1
0 0
.
0 Therefore, the series
12
A + ···
2! eA = I + A +
truncates and 1
0 eA = I + A =
5.3. 0
1
+
1
−1 1
2
=
−1
−1 1
.
0 It is easily checked that
0
0
0 A2 = 0
0
0 0
0
0 . Therefore, the series
12
A + ···
2! A=I +A+
truncates and
eA = I + A =
5.4. 1
0
0 0
1
0 0
0
1 If
A= +
−2
−1
1 1
1
0
1
1
−1 −1
−1
0 0
0
0 2
1
0 = −1
0
0 0
0
1 . −3
−1
1 use a computer to check that
A3 = AA2 = −2
−1
1 1
1
−1 −3
−1
1 0
0
0 2
1
−1 2
1
−1 = 0
0
0 0
0
0 2
1
−1 0
0
0 0
0
0 Therefore, the series
eA = I + A + 12
A + ···
2! truncates and
1
e A = I + A + A2
2
−2 1
100
= 0 1 0 + −1 1
1 −1
001
−1
2
−2
= −1 5/2 −1/2 .
1 −3/2 3/2
5.5. −3
−1
1 + 1
2 (a) If A2 = αA, α = 0, then
A3 = AA2 = A(αA) = αA2 = α(αA) = α 2 A. 2
1
−1 . 646 Chapter 9. Linear Systems with Constant Coefﬁcients
Similarly,
A4 = AA3 = A(α 2 A) = α 2 A2 = α 2 (αA) = α 3 A.
Proceeding inductively, Ak = α k−1 A. Now, t2 2 t3 3
A + A + ···
2!
3!
t2
t3
+ tA + (αA) + (α 2 A) + · · ·
2!
3!
αt 2
α2 t 3
+ t+
+
+ ··· A
2!
3!
αt + α 2 t 2 /2! + α 3 t 3 /3! + · · ·
+
A
α
(1 + αt + α 2 t 2 /2! + α 3 t 3 /3! + · · · ) − 1
+
A
α
eαt − 1
+
A.
α etA = I + tA +
=I
=I
=I
=I
=I
(b) One can easily show that 1112
333
A= 1 1 1
= 3 3 3 = 3A.
111
333
Thus, we can apply the formula developed in part (a). With α = 3,
2 e3t − 1
A
3
100
111
e3t − 1
010+
111
3
001
111
(e3t + 2)/3 (e3t − 1)/3 (e3t − 1)/3
(e3t − 1)/3 (e3t + 2/3 (e3t − 1)/3
(e3t − 1)/3 (e3t − 1)/3 (e3t + 2/3 etA = I +
=
=
5.6. First, if
A=
note that
A2 = −1
0 0
,
−1 A3 = . −1
,
0 0
1 0
−1 1
,
0 1
0 A4 = 0
,
1 after which A5 = A and the sequence repeats with period 4. Thus,
t2 2 t3 3 t4 4
A + A + A + ···
2!
3!
4!
t 2 −1 0
t3 0
0 −1
10
+
+t
+
=
10
01
2! 0 −1
3! −1
1 − t 2 /2! + t 4 /4! · · ·
−t + t 3 /3! − · · ·
=
t − t 3 /3! + · · ·
1 − t 2 /2! + t 4 /4! − · · ·
cos t − sin t
.
=
sin t
cos t eAt = I + At + 5.7. Note that
A= a
b −b
a = a
0 0
0
+
b
a −b
0 t4 1
1
+
0
4! 0 = aI + b 0
1 −1
.
0 0
+ ···
1 9.5. The Exponential of a Matrix 647 Thus, by the result shown in Exercise 6,
etA = e atI +bt 0 −1
10
bt 0 −1 = eat I e 1 0
cos bt − sin bt
.
= eat
sin bt
cos bt
5.8. We can write
A= a
0 b
a where
B= =a
0
0 1
0 0
0
+b
1
0 1
0 and B2 = 1
= aI + bB,
0
0
0 0
.
0 Note that B commutes with I , so
etA = et (aI +bB)
= eat I ebtB
= eat I + btB + b2 t 2 2 b3 t 3 3
B+
B + ···
2!
3! = eat (I + btB)
10
0
+
= eat
01
0
at 1 bt
=e
.
01
5.9. (a) On the one hand
AB = bt
0 −4
0 0
,
0 but 00
.
0 −4
(b) Note that if t = 1, the result from Exercise 7 becomes
BA = e a −b
ba = ea cos b
sin b − sin b
.
cos b Thus,
e A+B = e
=e
= 0 −2 + 0 0
00
20
0 −2
20 cos 2
sin 2 − sin 2
.
cos 2 (c) Both A2 and B 2 equal the zero matrix, so the series expansions for eA and eB truncate.
1 −2
0 −2
10
=
+
eA = I + A =
01
00
01
10
00
10
eB = I + B =
+
=
01
20
21
Thus,
1 −2
10
−3 −2
=
,
eA eB =
01
21
2
1
which is not the same as eA+B calculated in part (b). The problem arises because AB = BA, as was
shown in part (a). 648
5.10. Chapter 9. Linear Systems with Constant Coefﬁcients
If A = P DP −1 , then
etA = I + tA + t2 2 t3 3
A + A + ··· .
2!
3! However, note that
A2 = (P DP −1 )2 = P DP −1 P DP −1 = P D 2 P −1 .
In a similar manner,
Ak = P D k P −1 ,
for k = 3, 4, 5, . . . . Thus,
etA = I + P (tD)P −1 + P (t 2 D 2 /2!)P −1 + P (t 3 D 3 /3!)P −1 + · · ·
= P I + tD + t 2 D 2 /2! + t 3 D 3 /3! + · · · P −1
= P etD P −1 .
5.11. If
A= 2
0 6
,
−1 then the characteristic polynomial is p(λ) = (λ − 2)(λ + 1), giving eigenvalues λ1 = 2 and λ2 = −1. Set
D= 2
0 0
.
−1 The nullspace (eigenspace) of
0
0 A − 2I = 6
−3 is generated by the single eigenvector v1 = (1, 0)T . The nullspace of
3
0 A+I = 6
0 is generated by the single eigenvector v1 = (2, −1)T . Set
P= 1
0 2
.
−1 It is easily checked that
P −1 =
Now, 1
0 and A = P DP −1 . etA = P etD P −1
1
0
1
=
0
1
=
0
e 2t
=
0
= 5.12. 2
−1 2t 0 12
2
e 0 −t
0 −1
−1
e 2t
2
12
0
−1
0 −1
0 e −t
e 2t 2 e 2t
2
−1
0 −e−t
2t
2 e − 2 e −t
.
e −t Matrix
A= −2
−3 0
−3 is lower triangular, so −2 and −3 (diagonal elements) are eigenvalues. For λ = −2,
A + 2I 0
−3 0
,
−1 9.5. The Exponential of a Matrix 649 so v = (1, −3)T is its eigenvector. For λ = −3,
1
−3 A + 3I = 0
,
0 so v = (0, 1)T is its eigenvector. Set
P= 1
−3 0
1 and D= −2
0 0
.
−3 It is easily checked that
P −1 = 1
3 0
1 and A = P DP −1 . Thus,
etA = P etD P −1
−2t
0
10
10
−3t
e0
−3 1
31
10
e −2 t
10
0
=
−3 1
31
0
e − 3t
−2 t
10
e
0
=
−3 1
3e−3t e−3t
−2 t
0
e
=
.
−3e−2t + 3e−3t e−3t = 5.13. Matrix
A= −2
−1 1
,
0 has characteristic polynomial p(λ) = (λ + 1)2 and repeated eigenvalue λ = −1. We can write
etA = et (−I +(A+I ))
= e−tI et (A+I )
= e−t I + t (A + I ) + t2
(A + I )2 + · · ·
2! . Matrix A must satisfy its characteristic polynomial, so (A + I )2 = 0 and (A + I )k = 0 for k ≥ 2. Thus, the
series truncates.
etA = e−t (I + t (A + I ))
−1 1
10
+t
= e −t
−1 1
01
t
−t 1 − t
=e
−t
1+t
5.14. Matrix
A= −1
1 0
−1 has characteristic polynomial p(λ) = (λ + 1)2 and repeated eigenvalue λ = −1. We can write
etA = et (−I +(A+I ))
= e−tI et (A+I )
= e−t I + t (A + I ) + t2
(A + I )2 + · · · .
2! 650 Chapter 9. Linear Systems with Constant Coefﬁcients
Matrix A must satisfy its characteristic polynomial, so (A + I )2 = 0 and (A + I )k = 0 for k ≥ 2. Thus, the
series truncates.
etA = e−t (I + t (A + I ))
10
00
= e −t
+t
01
10
10
= e −t
.
t1 5.15. Matrix
A= 0
−1 1
,
2 has characteristic polynomial p(λ) = (λ − 1)2 and repeated eigenvalue λ = 1. We can write
etA = et (I +(A−I ))
= etI et (A−I )
t2
(A − I )2 + · · ·
2! = et I + t (A − I ) + Matrix A must satisfy its characteristic polynomial, so (A − I )2 = 0 and (A − I )k = 0 for k ≥ 2. Thus, the
series truncates.
etA = et (I + t (A − I ))
10
−1 1
+t
= et
01
−1 1
t
t 1−t
=e
−t
1+t
5.16. Matrix
A= −3
4 −1
1 has characteristic polynomial p(λ) = (λ + 1)2 and repeated eigenvalue λ = −1. We can write
etA = et (−I +(A+I ))
= e−tI et (A+I )
= e−t I + t (A + I ) + t2
(A + I )2 + · · · .
2! Matrix A must satisfy its characteristic polynomial, so (A + I )2 = 0 and (A + I )k = 0 for k ≥ 2. Thus the
series truncates.
etA = e−t (I + t (A + I ))
−2 −1
10
+t
= e −t
4
2
01
1 − 2t
−t
= e −t
.
4t
1 + 2t
5.17. Using a computer, matrix
A= −1
−1
−2 0
1
4 0
−1
−3 has characteristic polynomial p(λ) = −(λ + 1)3 and repeated eigenvalue λ = −1. We can write
etA = et (−I +(A+I ))
= e−tI et (A+I )
= e−t I + t (A + I ) + t2
(A + I )2 + · · ·
2! 9.5. The Exponential of a Matrix 651 Matrix A must satisfy its characteristic polynomial, so (A + I )3 = 0 and (A + I )k = 0 for k ≥ 3. But,
0
−1
−2 A+I = 0
2
4 0
−1
−2 0
0
0 (A + I )2 = and 0
0
0 0
0
0 , so (A + I )k = 0 for k ≥ 2 and the series will truncate earlier.
etA = e−t (I + t (A + I ))
100
0
0 1 0 + t −1
= e −t
001
−2
1
0
0
−t
= e−t −t 1 + 2t
−2t
4t
1 − 2t
5.18. 0
2
4 0
−1
−2 Using a computer we ﬁnd that A has eigenvalue −1 with algebraic multiplicity 3. We also ﬁnd that
A+I = 0
−1
−1 −1
1
2 0
−1
−1 , −1
0
1 1
0
−1 (A + I )2 = 1
0
−1 , and
(A + I )3 =
Thus 0
0
0 0
0
0 0
0
0 . etA = eλt et (A−λI )
= e−t et (A+I )
t2
(A + I )2
2
0
0 −1 0
t2
0 + t −1 1 −1 +
2
1
−1 2 −1
2
2
−t − t /2
t /2
1+t
−t
.
2t + t 2 /2 1 − t − t 2 /2 = e−t I + t (A + I ) +
10
01
00
1 + t 2 /2
−t
−t − t 2 /2 = e −t
=
5.19. 1
0
−1 −1
0
1 1
0
−1 Using a computer, matrix
A= −2
0
0 −1
0
−4 0
1
−4 has characteristic polynomial p(λ) = −(λ + 2)3 and repeated eigenvalue λ = −2. We can write
etA = et (−2I +(A+2I ))
= e−2tI et (A+2I )
= e−2t I + t (A + 2I ) + t2
(A + 2I )2 + · · ·
2! Matrix A must satisfy its characteristic polynomial, so (A + 2I )3 = 0, but
A + 2I = 0
0
0 −1
2
−4 0
1
−2 and (A + 2I )2 = 0
0
0 −2
0
0 −1
0
0 , 652 Chapter 9. Linear Systems with Constant Coefﬁcients
so (A + I )k = 0 for k ≥ 3 and the series truncates at this point.
t2
(A + 2I )2
2!
100
0 −1 0
0 1 0 +t 0 2
1
001
0 −4 −2
1 −t − t 2 −t 2 /2
0 1 + 2t
t
0
−4t
1 − 2t etA = e−2t I + t (A + 2I ) +
= e −2 t
= e −2 t
5.20. Using a computer, matrix −2
0
−1 A= 0
−2
1 + t2
2 0
0
0 −2
0
0 −1
0
0 0
0
−2 has characteristic polynomial p(λ) = −(λ + 2)3 and repeated eigenvalue −2. We can write
etA = et (−2I +(A+2I ))
= e−2tI et (A+2I )
= e−2t I + t (A + 2I ) + t2
(A + 2I )2 + · · ·
2! Matrix A must satisfy its characteristic polynomial, so (A + 2I )3 = 0, but
A + 2I = 00
00
−1 1 0
0
0 and 0
0
0 (A + 2I )2 = 0
0
0 0
0
0 , so (A + 2I )k = 0 for k ≥ 2 and the series truncates at this point.
etA = e−2t (I + t (A + 2I ))
100
0 1 0 +t
= e −2 t
001
1 00
= e −2 t 0 1 0 .
−t t 1
5.21. Using a computer, matrix 1
0
A=
0
0 −1
1
0
−1 2
0
1
2 0
0
−1 0
0
1 0
0
0 0
0
0
1 has characteristic polynomial p(λ) = (λ − 1)4 and repeated eigenvalue λ = 1. We can write
etA = et (I +(A−I ))
= etI et (A−I )
= et I + t (A − I ) + t2
(A − I )2 + · · ·
2! Matrix A must satisfy its characteristic polynomial, so (A − I )4 = 0, but 0 −1 2 0 0 0
0 0 0 0
0 0
and (A − I )2 = A−I =
0 0 0 0
00
0 −1 2 0
00 0
0
0
0 0
0
0
0 9.5. The Exponential of a Matrix 653 so (A − I )k = 0 for k ≥ 2 and the series truncates at this point.
etA = et (I + t (A − I )) 1 0 0 0 0 0 0 1 0 0 +t
= et 0
0 0 1 0
0
0001 1 −t 2t 0 0 0
0 1
= et 00
1 0
0 −t 2 t 1
5.22. Using a computer, matrix −5 −4
A=
4
0 −1
1
−5
−1 0
0
−4
−1 −1
0
0
−1 2
0
0
2 0 0 0 0 4
5
−4 −2 has characteristic polynomial p(λ) = (λ + 3)4 and repeated eigenvalue λ = −3. We can write
etA = et (−3I +(A+3I ))
= e−3tI et (A+3I )
= e−3t I + t (A + 3I ) + t2
(A + 3I )2 + · · ·
2! Matrix A must satisfy its characteristic polynomial, so (A + 3I )4 = 0, but −2 0 −1 4 0
1
5 −4 3
0
and (A + 3I )2 = A + 3I = 4 −4 −2 −4 0
0 −1 −1 1
0 0
0
0
0 0
0
0
0 0
0
,
0
0 So (A + 3I )k = 0 for k ≥ 2 and the series truncates at this point.
etA = e−3t (I + t (A + 3I )) 1 0 0 0 −2 0 −1
1 −4 3
−3t 0 1 0 0 = e +t
4 −4 −2
0 0 1 0
0 −1 −1
0001 1 − 2t
0
−t
4t 1 + 3t
t
5t −4t
= e−3t .
4t
−4t
1 − 2t −4t 0
−t
−t
1+t 5.23. Using a computer, matrix 0
1
A=
0
3 4
−5
2
−10 5
−7
3
−13 4 5 −4 1 −2 3
−1 6 has characteristic polynomial p(λ) = (λ − 1)4 and repeated eigenvalue λ = 1. We can write
etA = et (I +(A−I ))
= etI et (A−I )
= et I + t (A − I ) + t2
(A − I )2 + · · ·
2! 654 Chapter 9. Linear Systems with Constant Coefﬁcients
Matrix A must satisfy its characteristic polynomial, so (A − I )4 = 0, but −1 1
A−I =
0
3 4
−6
2
−10 5
−2 −7
3
,
2
−1 −13 5 and −1 2
(A − I )2 = −1
2 0
0
(A − I )3 = 0
0 0
0
0
0 0
0
0
0 2
−4
2
−4 3
−6
3
−6 −1 2
,
−1 2 0
0
,
0
0 so (A − I )k = 0 for k ≥ 3 and the series truncates at this point.
t2
etA = et I + t (A − I ) + (A − I )2
2 1 0 0 0 −1 0 1 0 0 1
= et +t
0 0 1 0
0
0001
3 2 − 2t − t 2
8t + 2t 2
1 2t + 2t 2
2 − 12t − 4t 2
= et −t 2
4t + 2 t 2
2
6t + 2t 2
−20t − 4t 2 5.24. Using a computer, matrix −1 2
5
−2 2
−7
3 t 2 −4
+
2
−1 2 −1 2
−13 5
2 −4
10t + 3t 2
−4t − t 2 −14t − 6t 2
6t + 2t 2 2 + 4t + 3t 2
−2t − t 2 −26t − 6t 2 2 + 10t + 2t 2 4
−6
2
−10 1 −9
A=
13
2 0
4
−3
−1 3
−6
3
−6 −1 2 −1 2 0
4
−5 0 0
1
−1
0 has characteristic polynomial p(λ) = (λ − 1)4 and repeated eigenvalue λ = 1. We can write
etA et (I +(A−I ))
= etI et (A−I )
= et A + t (A − I ) + t2
(A − I )2 + · · · .
2! Matrix A must satisfy its characteristic polynomial, so (A − I )4 = 0. But
0
0 −9 3
A−I =
13 −3
2 −1
and 0
1
−2
0 0
4
,
−5 −1 0 −6
(A − I )2 = −9
7 0
1
(A − I )3 = 1
−1 0
0
0
0 0
0
0
0 0
0
,
0
0 0
2
2
−2 0
1
1
−1 0
3
,
3
−3 9.5. The Exponential of a Matrix 655 so (A − I )k = 0 for k ≥ 4 and the series truncates at this point.
t2
t3
etA = I + t (A − I ) + (A − I )2 + (A − I )3
2!
3! 1 0 0 0 0
0
0
0
1
4 0 1 0 0 −9 3
= et +t
0 0 1 0
13 −3 −2 −5 0001
2 −1 0 −1
0 0 0 0 0 0
0
0
2
3
t −6 2
1
3 t 1 0 0 0 +
+ 1 0 0 0 1
3
2 −9 2
6
7 −2 −1 −3
−1 0 0 0 1
0
0
0
2
3
2
2
2
t + t /2
4t + 3t /2 −9t − 3t + t /6 1 + 3t + t
.
= et 1 − 2t + t 2 /2 −5t + 3t 2 /2 13t − 9t 2 /2 + t 3 /6 −3t + t 2
2
3
2
2
2
2t + 7t /2 − t /6
−t − t
−t /2
1 − t − 3t /2
5.25. If
A= −2
1
3 1
−3
−5 −1
0
0 , then p(λ) = det (A − λI )
−2 − λ
1
−1
1
−3 − λ 0
=
3
−5
−λ
Expanding down the third column,
−2 − λ
1
1 −3 − λ
−λ
p(λ) = −1
1
−3 − λ
3
−5
= −1(4 + 3λ) − λ(λ2 + 5λ + 5)
= −λ3 − 5λ2 − 8λ − 4.
Zeros must be factors of the constant term, so −1 is a possibility. Dividing by λ + 1 leads to the following
factorization
p(λ) = −(λ + 1)(λ + 2)2
and eigenvalues λ1 = −1 and λ2 = −2. Because
−1 1 −1
1 −2 0
A+I =
3 −5 1 → 1
0
0 0
1
0 2
1
0 , the geometric multiplicity of λ1 = −1 is one, and an eigenvector is v1 = (−2, −1, 1)T , leading to the solution
−2
y1 (t) = etA v1 = e−t −1 .
1
Next,
1 0 −1
0 1 −1
→ 0 1 −1 ,
A + 2I = 1 −1 0
00 0
3 −5 2
the geometric multiplicity of λ2 = −2 is one, and an eigenvector is v2 (t) = (1, 1, 1)T , leading to the solution
1
y2 (t) = etA v2 = e−2t 1 .
1 656 Chapter 9. Linear Systems with Constant Coefﬁcients
Notice that −2
−1
1 (A + 2I )2 = −2
−1
1 4
2
−2 → 1
0
0 −2
0
0 1
0
0 has dimension two, equalling the algebraic multiplicity of λ2 . Thus, we can pick a vector in the nullspace of
(A + 2I )2 that is not in the nullspace of A + 2I . Choose v3 = (−1, 0, 1)T , which is not a multiple of v2 ,
making the set {v2 , v3 } independent, and giving a third solution,
y3 (t) = etA v3
= e−2t [v3 + t (A + 2I )v3 ]
−1
01
0 + t 1 −1
= e −2 t
1
3 −5
−1
−1
0 + t −1
= e −2 t
−1
1
−1 − t
−t
= e −2 t
.
1−t
Because 5.26. −2
det[y1 (0), y2 (0), y3 (0)] = −1
1 −1
0
2 1
1
1 −1
0
1 −1
0 = 1,
1 the solutions are independent for all t and form a fundamental set of solutions.
If
10 1
A = 2 2 −2 ,
00 2
then 1−λ
2
0 p(λ) = det (A − λI ) = 0
1
2 − λ −2 .
0
2−λ Expanding across the third row,
p(λ) = (2 − λ) 1−λ
2 0
= (2 − λ)2 (1 − λ),
2−λ so the eigenvalues are 2 and 1, with algebraic multiplicities 2 and 1, respectively. For λ1 = 1,
A−I = 0
2
0 0
1
0 1
−2
1 → 1/2
0
0 1
0
0 0
1
0 , so the geometric multiplicity of λ1 is 1 and an eigenvector is v1 = (−1, 2, 0)T , providing exponential solution
y1 (t) = etA v1 = et
For λ2 = 2,
A − 2I = −1
2
0 0
0
0 1
−2
0 → −1
2
0
1
0
0 . 0
0
0 −1
0
0 , 9.5. The Exponential of a Matrix 657 so there are two free variables and the geometric multiplicity is 2. Thus, v2 = (0, 1, 0)T and v3 = (1, 0, 1)T
are independent eigenvectors and
y2 (t) = etA v2 = e2t 0
1
0 and y3 (t) = etA v3 = e2t 1
0
1 are independent solutions. Because
det y1 (0), y2 (0), y3 (0) = 5.27. −1
2
0 0
1
0 1
0 = −1,
1 the solutions are independent for all t and form a fundamental set of solutions.
If
0 10
A = −4 4 0 ,
−2 0 1
then p(λ) = det (A − λI )
−λ
1
0
0
= −4 4 − λ
−2
0
1−λ Expanding down the third column,
−λ
1
−4 4 − λ
= −(λ − 1)(λ2 − 4λ + 4) p(λ) = (1 − λ) = −(λ − 1)(λ − 2)2 ,
providing eigenvalues λ1 = 1 and λ2 = 2, with algebraic multiplicities 1 and 2, respectively. Because
A−I = −1
−4
−2 1
3
0 0
0
0 → 1
0
0 0
1
0 0
0
0 , the geometric multiplicity of λ1 = 1 is one, and an eigenvector is v1 = (0, 0, 1)T , leading to the solution
tA y1 (t) = e v1 = e
Next,
A − 2I = −2
−4
−2 1
2
0 0
0
−1 t → 0
0
1 . 1
0
0 0
1
0 1/2
1
0 , the geometric multiplicity of λ2 = 2 is one, and an eigenvector is v2 (t) = (−1, −2, 2)T , leading to the
solution
−1
y2 (t) = etA v2 = e2t −2 .
2
Next
(A − 2I )2 = 0
0
6 0
0
−2 0
0
1 has dimension two, equaling the algebraic multiplicity of λ2 . Thus, we can pick a vector in the nullspace of
(A − 2I )2 that is not in the nullspace of A − 2I . Choose v3 = (1, 0, −6)T , which is not a multiple of v2 , 658 Chapter 9. Linear Systems with Constant Coefﬁcients
making the set {v2 , v3 } independent, and giving a third solution,
y3 (t) = etA v3
= e2t [v3 + t (A − 2I )v3 ]
1
−2 1
0 + t −4 2
= e 2t
−6
−2 0
1
−2
0 + t −4
= e 2t
−6
4
1 − 2t
−4t
= e 2t
.
−6 + 4t
Because 5.28. 0
0
−1 −1
−2
2 0
det[y1 (0), y2 (0), y3 (0)] = 0
1 1
0
−6 1
0 = 2,
−6 the solutions are independent for all t and form a fundamental set of solutions.
If
−1 0
0
2 −5 −1 ,
A=
0
4 −1
then
p(λ) = det(A − λI ) −1 − λ
2
0 0
0
−5 − λ
−1
.
4
−1 − λ Expanding across the ﬁrst row,
−5 − λ
−1
4
−1 − λ
= −(λ + 1)(λ2 + 6λ + 9) p(λ) = (−1 − λ) = −(λ + 1)(λ + 3)2 ,
providing eigenvalues λ1 = −1 and λ2 = −3, with algebraic multiplicities 1 and 2, respectively. Because
0
2
0 A+I = 0
−4
4 0
−1
0 → 1
0
0 −1/2
0
0 0
1
0 , the geometric multiplicity of λ1 = −1 is 1, and an eigenvector is v1 = (1, 0, 2)T , providing the exponential
solution
1
y1 (t) = etA v1 = e−t 0 .
2
For λ2 = −3
A + 3I = 2
2
0 0
−2
4 0
−1
2 → 1
0
0 0
1
0 0
1/2
0 has one free variable, so the geometric multiplicity of λ2 = −3 is 1. An eigenvector is v2 = (0, −1, 2)T ,
giving a second exponential solution,
y2 (t) = etA v2 = e−3t 0
−1
2 . 9.5. The Exponential of a Matrix 659 Next,
(A + 3I )2 = 4
0
8 0
0
0 0
0
0 1
0
0 → 0
0
0 0
0
0 has dimension 2, equaling the algebraic multiplicity of λ2 . Thus, we can pick a vector in the nullspace of
(A + 3I )2 that is not in the nullspace of A + 3I . Choose v3 = (0, 1, 0)T , which is not a multiple of v2 , making
the set {v2 , v3 } independent and giving a third solution,
y3 (t) = etA v3
= e−3t [v3 + t (A + 3I )v3 ]
0
20
1 + t 2 −2
= e−3t
0
04
0
0
1 + t −2
= e−3t
4
0
0
= e−3t 1 − 2t .
4t
Because 5.29. 1
det y1 (0), y2 (0), y3 (0) = 0
2 0
−1
2 0
−1
2 0
1
0 0
1 = −2 ,
0 the solutions are independent for all t and form a fundamental set of solutions.
Using a computer, matrix 11 −42 4
28 −12
A=
0
−24 39
0
81 −4
−1
−8 −28 ,
0
−57 has characteristic polynomial,
p(λ) = (λ + 3)2 (λ + 1)2 ,
providing eigenvalues λ1 = −3 and λ2 = −1, with algebraic multiplicities 2 and 2, respectively. Because 14 −42 4
1 0 0
28 0 −12
A + 3I = 0
−24 42
0
81 −4
2
−8 −28 0
→
0
0
−54
0 1
0
0 0
1
0 −2/3 ,
0
0 the geometric multiplicity of λ1 = −3 is one, and an eigenvector is v1 = (0, −2, 0, −3)T , leading to the
solution
0 −2 .
y1 (t) = etA v1 = e−3t 0
−3
Next, 28
0
(A + 3I )2 = 0
−12 −84
0
0
36 8
0
4
−4 1
56 0
0
→
0
0
0
−24 −3
0
0
0 0
1
0
0 2
0
,
0
0 has dimension two, equaling the algebraic multiplicity of λ1 . Thus, we can pick a vector in the nullspace of
(A + 3I )2 that is not in the nullspace of A + 3I . Choose v2 = (−2, 0, 0, 1)T , which is not a multiple of v1 , 660 Chapter 9. Linear Systems with Constant Coefﬁcients
making the set {v1 , v2 } independent, and giving a second solution,
y2 (t) = etA v2
= e−3t [v2 + t (A + 3I )v2 ] 0 14 −42 −2 −12 42
= e−3t +t
0
0
0
3
−24 81 −2 0 0 −4 = e−3t +t
0
0 1
−6 −2 −4t .
= e−3t 0
1 − 6t 4
−4
2
−8 28 −2 −28 0 0 0 −54
1 Because 1 0 1/3 7/3 12 −42 4
28 0
0 1 0 −12 40 −4 −28 →
A+I =
0
0
0
0
00 0
0
−24 81 −8 −56
00 0
0
has dimension two, equaling the algebraic multiplicity of λ2 , we can pick two independent eigenvectors in the
nullspace of A + I . Choose v3 = (−1, 0, 3, 0)T and v4 = (−7, 0, 0, 3)T . Note that they are not multiples of
one another, making the set {v3 , v4 } independent, and giving a third and fourth solution, −1 −7 0
v3 (t) = e−t 3
0 and 0
y4 (t) = e−t .
0
3 Because 5.30. 0 −2 −1 −7
−2 0
0
0
det[y1 (0), y2 (0), y3 (0), y4 (0)] =
= 6,
0
0
3
0
−3 1
0
3
the solutions are independent for all t and form a fundamental set of solutions.
Using a computer, matrix 18 −7 24
24 16 15 −8 20
A=
0
0
−1
0
−12 4 −15 −17
has characteristic polynomial p(λ) = (λ + 3)2 (λ + 1)2 , providing eigenvalues λ1 = −3 and λ2 = −1, with
algebraic multiplicities 2 and 2, respectively. Because 1 −1/3 0 0 21 −7 24
24 15
A + 3I = 0
−12 −5
0
4 20
2
−15 16 0
→
0
0
0
−14 0
0
0 1
0
0 0
,
1
0 the geometric multiplicity of λ1 = −3 is 1, and an eigenvector is v1 = (1, 3, 0, 0)T , giving solution
1
3
y1 (t) = etA v1 = e−3t .
0
0 9.5. The Exponential of a Matrix 661 Next, 48 48
(A + 3I )2 = 0
−24 −16
−16
0
8 52
60
4
−28 1
56 56 0
→
0
0
−28
0 −1/3
0
0
0 0
1
0
0 7/6 0
0
0 has dimension 2, equalling the algebraic multiplicity of λ1 . Thus, we can pick a vector in the nullspace of
(A + 3I )2 that is not in the nullspace of A + 3I . Choose v2 = (−7, 0, 0, 6)T , which is not a multiple of v1 .,
making the set {v1 , v2 } independent, and giving a second solution
y2 (t) = etA v2
= e−3t [v2 + t (A + 3I )v2 ] −7 21 −7 0 15 −5
= e−3t +t
0
0
0
6
−12 4 −3 −7 −9 0 +t
= e−3t 0 0
0
6 −7 − 3t −9t = e−3t .
0
6 24 −7 16 0 0 0 −14
6 24
20
2
−15 For λ2 = −1, 19 15
A+I =
0
−12 −7
−7
0
4 24
20
0
−15 1
24 16 0
→
0
0
−16
0 0
1
0
0 0
0
1
0 2
2
,
0
0 the geometric multiplicity of λ2 = −1 is 1, and an eigenvector is v3 = (−2, −2, 0, 1)T , giving the exponential
solution −2 −2 .
y3 (t) = etA v3 = e−t 0
1
Next, −32 −12
(A + I )2 = 0
24 12
8
0
−8 −44
−20
0
32 1
−40 −8 0
→
0
0
0
32 0
1
0
0 1
−1
0
0 2
2
0
0 has dimension 2, equaling the algebraic multiplicity of λ2 , so we can pick a vector in the nullspace of (A + I )2
that is not in the nullspace of A + I . Choose v4 = (−1, 1, 1, 0)T , which is not a multiple of v3 , making the 662 Chapter 9. Linear Systems with Constant Coefﬁcients
set {v3 , v4 } independent, and giving a fourth solution.
y4 (t) = etA v4
= e−t [v4 + t (A + I )v4 ] −1 19 −7 1 15 −7
= e−t +t
1
0
0
0
−12 4 −1 −2 1 −2 +t
= e−t 1
0 0
1 −1 − 2 t 1 − 2t = e −t .
1
t
Because
1
3
det y1 (0), y2 (0), y3 (0), y4 (0) =
0
0 5.31. 24
20
0
−15 −7
0
0
6 24 −1 16 1 0 1 −16
0 −2
−2
0
1 −1
1
= 3,
1
0 the solutions are independent for all t and form a fundamental set of solutions.
Using a computer, matrix 0 −30 −42 40 −48 14 7
9
−9
10
−2 1 5
8
−6
6
−2 −1
A=
,
45
64 −60 72 −20 2
2
33
47 −45 55 −15 0
7
11 −10 10
−1
has characteristic polynomial,
p(λ) = (λ − 1)3 (λ − 2)3 ,
providing eigenvalues λ1 = 1 and λ2 = 2, with algebraic multiplicities 3 and 3, respectively. Because −1 −30 −42 40 −48 14 1 0 0 0 4 −2 6
9
−9
10
−2 1 0 1 0 0 4 −3 5
7
−6
6
−2 −1 0 0 1 0 2 −1 A−I =
→
,
45
64 −61 72 −20 2 0 0 0 1 4 −3 2 0 0 0 0 0 0 33
47 −45 54 −15
0
7
11 −10 10
−2
00000 0
the geometric multiplicity of λ1 = 1 is two, and we can choose two independent eigenvectors from the
nullspace of A − I , v1 = (−4, −4, −2, −4, 1, 0)T and v2 = (2, 3, 1, 3, 0, 1)T . These eigenvectors provide
the solutions −4 2 −4 3 tA
t −2 tA
t 1
y1 (t) = e v1 = e and y2 (t) = e v2 = e . −4 3
1
0
0
1 9.5. The Exponential of a Matrix 663 Next, −3 −2 −1
(A − I ) = 1
2
−4
2 −46 −64
−38 −53
9
12
21
29
25
35
−37 −52 −76
−62
12
34
42
−64 62
51
−11
−28
−34
51 1
22 18 0 −4 0
→
−10 0
0
−12 0
18 0
1
0
0
0
0 0
0
1
0
0
0 −4/7
−5/7
−3/7
0
0
0 12/7
8/7
2/7
0
0
0 −2/7 −6/7 2/7 0
0
0 has dimension three, equaling the algebraic multiplicity of λ1 . Thus, we can pick a vector in the nullspace of
(A − I )2 that is not in the nullspace of A − I . We will try v3 = (4, 5, 3, 7, 0, 0)T , but we’ll need to check
independence before proceeding. However, −4 −4 −2 −4
1
0 2
3
1
3
0
1 1
4
5
0 3
0
→
7
0
0
0
0
0 0
1
0
0
0
0 0
0 1
,
0
0
0 and a pivot in each column tells us that {v1 , v2 , v3 } is an independent set. A third solution is
y3 (t) = etA v3
= et [v3 + t (A − I )v3 ] −1 −30 4 6
1 5 5 −1 3 = et + t 45
2 7 2 0 33
0
7
1 4 14 5 −4 3 −2 = et + t 7 −22 0 −16 1
−4 4 + 14t 5 − 4t = e t 3 − 2t . −16t 1 − 4t −42
9
7
64
47
11 40
−9
−6
−61
−45
−10 −48
10
6
72
54
10 14 4 −2 5 −2 3 −20 7 −15 0 1
−2 Next, −2 1 −1
A − 2I = 2
2
0 −30
5
5
45
33
7 −42
9
6
64
47
11 40
−9
−6
−62
−45
−10 −48
10
6
72
53
10 1
14 −2 0 −2 0
→
−20 0
0
−15 −3
0 0
1
0
0
0
0 0
0
1
0
0
0 0
0
0
1
0
0 0
0
0
0
1
0 −2 −2 1
,
1
1
0 664 Chapter 9. Linear Systems with Constant Coefﬁcients
has dimension one, giving a single eigenvector v4 = (2, 2, −1, −1, −1, 1)T and a fourth solution,
2
2 tA
2t −1 y4 (t) = e v4 = e . −1 −1 1
Next,
0 −4 1
(A − 2I )2 = −3 −2
−4 14
−49
−1
−69
−41
−51 20
−71
−1
−99
−59
−74 −18
69
1
95
56
71 20
−82
0
−110
−65
−84 1
−6 22 0 0
0
→
30 0
0
18 0
23 0
1
0
0
0
0 0
0
1
0
0
0 0
0
0
1
0
0 2
3
−2
−1
0
0 0
1 −1 ,
0
0
0 which has dimension one. Pick v5 = (0, −1, 1, 0, 0, 1)T in the nullspace of (A − 2I )2 . Note that it is not a
multiple of v4 and is therefore independent of v4 . Now,
y5 (t) = etA v5
= e2t [v5 + t (A − 2I )v5 ] −2 −30 0 5
1 −1 5 −1 1 = e2t +t
45
2 0 2 0 33
0
7
1 2 0 2 −1 −1 1 = e2t +t −1 0 −1 0 1
1 2t −1 + 2 t 2t 1 − t =e . −t −t 1+t −42
9
6
64
47
11 40
−9
−6
−62
−45
−10 −48
10
6
72
53
10 14 0 −2 −1 −2 1 −20 0 0 −15
1
−3 Finally, examine −2
4 0
3
(A − 2I ) = 6
4
5 −22
73
5
105
61
79 −32
105
7
151
88
114 30
−101
−7
−145
−84
−109 −36
118
8
170
99
128 1
10 −32 0 −2 0
→
−46 0
0
−27 −35
0 0
1
0
0
0
0 0
0
1
0
0
0 1
0
−1
0
0
0 1
3
−1
0
0
0 0
1 −1 ,
0
0
0 9.5. The Exponential of a Matrix
which has dimension three. Pick v6 = (−1, −3, 1, 0, 1, 0)T
check independence. Since
2
1
0 −1 2 −1 −3 0 −1 1
1 0 →
−1 0
0 0 −1 0
0
1
0
1
1
0 665 in the nullspace of (A − 2I )3 . We will need to
0
1
0
0
0
0 0
0 1 0
0
0 has a pivot in each column, the set {v4 , v5 , v6 } is independent. A sixth solution is formed as follows.
y6 (t) = etA v6
t2
= e2t v6 + t (A − 2I )v6 + (A − 2I )2 v6
2 −1 −1 −1 −3 −3 −3 2
1 t 1 1 = e2t + (A − 2I )2 + t (A − 2I ) 0 2 0 0 1 1 1 0
0
0
2 −1 −2 3 −3 −2 2 −2 t 1 1 = e2t + +t −1 2 1 0 −1 1 1 0
0
−1 −1 + 2t − t 2 −3 + 3t − t 2 1 − 2t + t 2 /2 = e 2t 2 −t + t /2 1 − t + t 2 /2 −t 2 /2
Because
−4
−4
−2
det[y1 (0), y2 (0), y3 (0), y4 (0), y5 (0), y6 (0)] =
−4
1
0
= 1,
5.32. 2
3
1
3
0
1 4
5
3
7
0
0 2
2
−1
−1
−1
1 0
−1
1
0
0
1 the solutions are independent for all t and form a fundamental set of solutions.
Using a computer, matrix
2
0
0
0
0
1
11
−9
−8 −14 −2 −7 7
−6
−4 −9 −3 −3
A= 17 −12 −9 −19 −5 −9 −29 −7 −13 23 −16 −15 19
5
9
−15 12
11
has characteristic polynomial
p(λ) = (λ − 1)3 (λ − 2)3 , −1
−3
1
0
1
0 666 Chapter 9. Linear Systems with Constant Coefﬁcients
providing eigenvalues λ1 = 1 and λ2 = 2, with algebraic multiplicities 3 and 3, respectively. Using a
computer, the nullspace of A − I has dimension one, as A − I reduces to
1 0 0 0 0 1 0 0
A−I =
0
0
0 1
0
0
0
0 0
1
0
0
0 0
0
1
0
0 0 2
.
1
−1 0 0
0
0
1
0 Thus, λ1 = 1 has geometric multiplicity 1. Using a computer to reduce (A − I )2 , you can check that the
nullspace of (A − I )2 has dimension 2. The key here is that (A − I )3 reduces to
1 0 1 0
1
2
0 0
(A − I )3 → 0
0
0 −2
0
0
0
0 1
0
0
0
0 0
1
0
0
0 −3/2
0
0
0
0 A basis for the nullspace is provided by the vectors. −1 −1 v1 = 2
1
0
0
0 , 3/2 0
v2 = ,
0
1
0 −5/2 1
.
0
0
0 −2 and 5/2 0
v3 = −1 0
1 Because v1 , v2 , and v3 are in the nullspace of (A − I )3 we know that
y(t) = eAt v = et [v + t (A − I )v + (t 2 /2)(A − I )2 v]
for each v = v1 , v2 , and v3 . This fact, and a computer, provide the following solutions. −1 − t − t 2 /2 2+t 1 − t − t2 ,
−t 2 /2 2t + t 2 /2 t 2 /2 −1 − t − t 2 /4 3/2 + t/2 2
tA
t −3t/2 − t /2 y2 (t) = e v2 = e , −t/2 − t 2 /4 1 + 3t/2 + t 2 /4 t/2 + t 2 /4 y1 (t) = e v1 = e tA and t −2 − t − 3t 2 /4 5/2 + 3t/2 2
tA
t −t/2 − 3t /2 y3 (t) = e v3 = e . −1 + t/2 − 3t 2 /4 10t/4 + 3t 2 /4 1 − t/2 + 3t 2 /4 9.5. The Exponential of a Matrix 667 On the other hand, A − 2I reduces to
1
0 0
A − 2I → 0
0
0 0
1
0
0
0
0 1/6
7/6
0
0
0
0 −5/6
1/6
0
0
0
0 1/2
1/2
0
0
0
0 0
0 1
,
0
0
0 so the nullspace of A − 2I has dimension 3, and the geometric multiplicity of λ2 = 2 is 3. A basis for the
eigenspace contains the vectors −1/6 −7/6 1
v4 = ,
0
0
0 5/6 −1/6 0
v5 = ,
1
0
0 −1/2 and −1/2 0
v6 = .
0
1
0 The corresponding solutions are −1/6 −7/6 1
y4 (t) = e v4 = e ,
0
0
0 5/6 −1/6 tA
2t 0 y5 (t) = e v5 = e ,
1
0
0
tA and 2t −1/2 −1/2 0
y6 (t) = etA v6 = e2t .
0
1
0
Because
det y1 (0), y2 (0), y3 (0), y4 (0), y5 (0), y6 (0)
−1 −1 −2 −1/6 5/6 −1/2
2 3/2 5/2 −7/6 −1/6 −1/2
1
1
0
0
1
0
0
=
,
=
0
0
−1
0
1
0
12
0
1
0
0
0
1
0
0
1
0
0
0
the solutions y1 (t), y2 (t), y3 (t), y4 (t), y5 (t), and y6 (t) are independent for all t and form a fundamental set of
solutions. 668
5.33. Chapter 9. Linear Systems with Constant Coefﬁcients
Consider the ﬁrst student’s solution. If y1 (t) = e2t (1, 4, 4), then
2
8
8
2
8
8 y1 (t) = e2t
−2
−4
0 2
3
−1 −1
0
3 y1 (t) = e2t , and , so y1 is a solution. Similarly, y2 and y3 are seen to be solutions. Further,
1
det[y1 (0), y2 (0), y3 (0)] = 4
4 1
1
0 −1
0 = 1,
1 so the solutions are independent and form a fundamental solution set. Looking at the second student’s solution,
y2 (t) = et −5
−10 + et
−5
−2 2
−4 3
0 −1 3 − 5t
1 − 10t = et
−2 − 5t
−1
0 y2 (t) = et
3 −2 − 5t
−9 − 10t
−7 − 5t
−2 − 5t
−9 − 10t
−7 − 5t , and , so y2 is a solution. In a similar manner, you can check that the second student’s y1 and y3 are solutions.
Moreover,
13
3
4 1 −1 = −6,
det[y1 (0), y2 (0), y3 (0)] =
4 −2 −4 5.34. so the solutions are independent and form a fundamental solution set. Thus, both students are correct. They
both have fundamental solution sets. They are just using different bases.
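The determinant test used for both students' solution sets is easy to run numerically. A minimal NumPy sketch (using the first student's initial vectors from above):

```python
import numpy as np

# Initial values of the first student's three solutions, as columns.
Y0 = np.column_stack([(1, 4, 4), (1, 1, 0), (-1, 0, 1)])

# A nonzero determinant at t = 0 shows the solutions are independent
# for all t, so they form a fundamental set of solutions.
print(np.linalg.det(Y0))
```

The same check, applied to the second student's initial vectors, gives a different nonzero determinant; both sets are fundamental, just different bases.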
If
6 0 −4
A = −2 4 5 ,
102
then the characteristic polynomial is found with the following computation
p(λ) = det (A − λI ) = 6−λ
0
−4
−2 4 − λ
5
.
1
0
2−λ
Expanding down the second column,
p(λ) = (4 − λ) det [[6 − λ, −4], [1, 2 − λ]]
= (4 − λ)(λ2 − 8λ + 16)
= −(λ − 4)3 .
Because a matrix must satisfy it’s characteristic, the series
etA = et [4I +(A−4I )]
= e4tI et (A−4I )
= e4t [I + t (A − 4I ) + (t 2 /2!)(A − 4I )2 + · · · ]
truncates, with (A − 4I )k = 0 for k = 3, 4, . . . . Thus,
etA = e4t
= e 4t
Choose
e1 = 1
0
0 , 0
1
0 e2 = y2 (t) = etA e2 = e4t
y3 (t) = etA e3 = e4t A= 8
0
−8 t2
2! 0
1
0 0
0
0 e3 = 0
0
1 . 0
−2
0 1 + 2t
−2t + t 2 /2 ,
t
0
1 , and
0
−4t
5t − t 2
1 − 2t y1 (t) = etA e1 = e4t form a fundamental set of solutions.
If + and , Then 5.35. −4
5
−2 100
20
0 1 0 + t −2 0
001
10
1 + 2t
0
−4 t
−2t + t 2 1 5t − t 2 .
t
0 1 − 2t 3
4
−6 2
0
0 , then the characteristic polynomial is found with the following computation.
p(λ) = det (A − λI )
8−λ
3
0
4−λ
=
−8
−6 2
0.
−λ
Expanding across the second row,
p(λ) = (4 − λ) det [[8 − λ, 2], [−8, −λ]]
= (4 − λ)(λ2 − 8λ + 16)
= −(λ − 4)3 .
Thus, λ = 4 is a repeated eigenvalue having algebraic multiplicity 3. Because
A − 4I = 4
0
−8 3
0
−6 2
0
−4 → 1
0
0 3/4
0
0 1/2
0
0 has dimension two, we can select two eigenvectors from the nullspace of A − 4I , v1 = (−3, 4, 0)T and
v2 = (−1, 0, 2)T . Of course, these lead to the independent solutions
y1 (t) = etA v1 = e4t
y2 (t) = etA v2 = e4t −3
4
0
−1
0
2 670 Chapter 9. Linear Systems with Constant Coefﬁcients
Examining
0
0
0 (A − 4I )2 = 0
0
0 0
0
0 , we note that (A − 4I )k = 0 for k ≥ 2. We can write
etA = et (4I +(A−4I ))
= e4tI et (A−4I )
= e4t [I + t (A − 4I )] ,
knowing that the series truncates. Choose any vector independent from v1 and v2 , such as v3 = (1, 0, 0)
(check this), then
y3 (t) = e4t v3
= e4t [v3 + t (A − 4I )v3 ]
1
0 + t (A − 4I )
= e 4t
0
1
4
0 +t
0
= e 4t
0
−8
= e4t (1 + 4t, 0, −8t)T
provides the remaining solution.
5.36. In matrix form,
(x ′ , y ′ , z ′ )T = A (x, y, z)T , where A = [[−2, −4, 13], [0, 5, −4], [0, 1, 1]],
which leads to the characteristic polynomial
p(λ) = det (A − λI ) = det [[−2 − λ, −4, 13], [0, 5 − λ, −4], [0, 1, 1 − λ]].
Expanding down the ﬁrst column,
p(λ) = (−2 − λ) det [[5 − λ, −4], [1, 1 − λ]]
= (−2 − λ)(λ2 − 6λ + 9)
= −(λ + 2)(λ − 3)2 .
For λ1 = −2,
A + 2I = 0
0
0 −4
7
1 13
−4
3 → 0
0
0 1
0
0 0
1
0 , and the eigenvector v1 = (1, 0, 0)T provides the solution
y1 (t) = etA v1 = e−2t
For λ2 = 3,
A − 3I = − −5
0
0 −4
2
1 13
−4
−2 → 1
0
0 . 1
0
0 0
1
0 −1
−2
0 , 9.5. The Exponential of a Matrix 671 and the eigenvector v2 = (1, 2, 1)T provides the solution
y2 (t) = etA v2 = e3t
Because 25
0
0 (A − 3I )2 = 25
0
0 −75
0
0 1
2
1
→ .
1
0
0 1
0
0 −3
0
0 , the nullspace of (A − 3I )2 has dimension 2. Pick v3 = (3, 0, 1)T in the nullspace of (A − 3I )2 . Note that v2
and v3 are independent. This gives a third solution,
y3 (t) = etA v3
= e3t [v3 + t (A − 3I )v3 ]
3
−5 −4
0 +t
0
2
= e3t
1
0
1
−2
3
0 + t −4
= e3t
−2
1
3 − 2t
−4t
= e3t
.
1 − 2t 13
−4
−2 3
0
1 Because 5.37. 113
det y1 (0), y2 (0), y3 (0) = 0 2 0 = 2,
011
the solutions y1 (t), y2 (t), and y3 (t) are independent for all t and form a fundamental set of solutions.
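The generalized-eigenvector solution y3 (t) = e3t [v3 + t (A − 3I )v3 ] built above can be verified directly. A minimal NumPy sketch (matrix and v3 taken from this exercise):

```python
import numpy as np

# Matrix from this exercise; v3 lies in the nullspace of (A - 3I)^2
# but not in the nullspace of A - 3I.
A = np.array([[-2.0, -4, 13], [0, 5, -4], [0, 1, 1]])
B = A - 3 * np.eye(3)
v3 = np.array([3.0, 0, 1])

assert np.allclose(B @ (B @ v3), 0)   # (A - 3I)^2 v3 = 0
assert not np.allclose(B @ v3, 0)     # but v3 is not an eigenvector

# y(t) = e^{3t}[v3 + t(A - 3I)v3]; check y'(t) = A y(t) at a sample t.
t = 0.4
y = np.exp(3 * t) * (v3 + t * (B @ v3))
dy = np.exp(3 * t) * (3 * (v3 + t * (B @ v3)) + B @ v3)  # product rule
assert np.allclose(dy, A @ y)
print("y3(t) satisfies y' = Ay")
```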
In matrix form,
x
−1 5
3
x
y=
0
1
1
y,
z
0 −2 −2
z
which leads to the characteristic polynomial
p(λ) = det (A − λI )
−1 − λ
5
3
0
1−λ
1
.
=
0
−2 −2 − λ
Expanding down the ﬁrst column,
1−λ
1
p(λ) = (−1 − λ)
−2 −2 − λ
= −(λ + 1)(λ2 + λ)
= −λ(λ + 1)2 .
Thus, λ1 = 0 and λ2 = −1 are eigenvalues having algebraic multiplicities 1 and 2, respectively. Because
102
−1 5
3
0
1
1
→011,
A − 0I =
000
0 −2 −2
v1 = (−2, −1, 1)T is an eigenvector, providing the solution
−2
y1 (t) = etA v1 = e0t −1 =
1 −2
−1
1 . 672 Chapter 9. Linear Systems with Constant Coefﬁcients
Because
A+I = 0
0
0 5
2
−2 3
1
−1 → 0
0
0 1
0
0 0
1
0 has dimension one, we can choose an eigenvector v2 = (1, 0, 0)T to produce a second solution,
1
y2 (t) = etA v2 = e−t 0 .
0
Examining
04
2
0 1 1/2
1
(A + I )2 = 0 2
→00 0
,
0 −2 −1
00 0
we note that the nullspace of (A + I )2 has dimension two, so we can choose v3 = (0, 1, −2) in the nullspace
(A + I )2 independent of v2 (it’s not a multiple of v2 ). Then,
y3 (t) = etA v3
= et (−I +(A+I )) v3
= e−tI et (A+I ) v3
= e−t [I + t (A + I ) + (t 2 /2)(A + I )2 + · · · ] v3
= e−t [v3 + t (A + I )v3 ],
because v3 is in the nullspace of (A + I )2 and (A + I )k v3 = 0 for all k ≥ 2. Thus,
0
0
1 + t (A + I ) 1
y3 (t) = e−t
−2
−2
0
−1
1 +t
0
= e −t
−2
0
−t
1.
= e −t
−2
5.38. If 5 −1 0
2
0
4
0 3
A=
,
1 1 −1 −3 0 −1 0
7
a computer reveals the characteristic equation
p(λ) = (λ + 1)(λ − 5)3 ,
so λ1 = −1 and λ2 = 5 are eigenvalues, with algebraic multiplicities 1 and 3, respectively. Because
1 0 0 0 6 −1 0 2 0 1 0 0
0 4 0 4 ,
→
A+I =
0 0 0 1
1 1 0 −3 0000
0 −1 0 8
the eigenvector v1 = (0, 0, 1, 0)T provides the solution 0 0
y1 (t) = eAt v1 = e−t .
1
0 9.5. The Exponential of a Matrix
Because 0
0
A − 5I = 1
0 −1
−2
1
−1 1
2
4
0
→
−3 0
2
0 0
0
−6
0 673 −6
0
0
0 0
1
0
0 −1 −2 ,
0
the nullspace of A − 5I has dimension 2 and the eigenvectors v2 = (6, 0, 1, 0)T and v3 = (1, 2, 0, 1)T provide
two more solutions.
6
1
0
y2 (t) = eAt v2 = e5t 1
0
Because 0
0
(A − 5I )2 = −6
0 0
0
−6
0 and 0
0
36
0 2
y3 (t) = eAt v3 = e5t .
0
1 1
0
0
0
→
0
18 0
0 1
0
0
0 −6
0
0
0 −3 0
,
0
0 the nullspace of (A − 5I )2 has dimension 3 and we can pick v4 = (3, 0, 0, 1)T independent of v2 and v3
(check this). This gives solution
y4 (t) = eAt v4
= e5t [v4 + t (A − 5I )v4 ] 3 0 −1 0 0 −2
= e5t + t 0
11
1
0 −1 2 3 4 0 = e5t + t 0
0
2
1 3 + 2t 4t = e5t .
0
1 + 2t 0
0
−6
0 2 3 4 0 −3 0 2
1 Because
0
0
det y1 (0), y2 (0), y3 (0), y4 (0) =
1
0 6
0
1
0 1
2
0
1 3
0
= 12,
0
the solutions are independent for all t and form a fundamental set of solutions.
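The "check this" step above, showing that the chosen generalized eigenvector is independent of the eigenvectors, amounts to a rank computation. A minimal NumPy sketch (vectors v2, v3, v4 taken from this exercise):

```python
import numpy as np

# v2, v3 are eigenvectors for lambda = 5; v4 is the chosen generalized
# eigenvector. Rank 3 means the three vectors are linearly independent.
V = np.column_stack([(6, 0, 1, 0), (1, 2, 0, 1), (3, 0, 0, 1)])
assert np.linalg.matrix_rank(V) == 3
print("v2, v3, v4 are linearly independent")
```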
5.39. If −12 −8
A=
0
−17 −1
0
0
−1 8
−1
5
8 10 9
,
0
15 a computer reveals the characteristic equation
p(λ) = (λ + 1)2 (λ − 5)2 , 674 Chapter 9. Linear Systems with Constant Coefﬁcients
so λ1 = −1 and λ2 = 5 are eigenvalues, each having algebraic multiplicities 2. Because −17 −1 −8 −5
A − 5I = 0
0
−17 −1 8
−1
0
8 1
10 9
0
→
0
0
10
0 0
1
0
0 −41/77
81/77
0
0 −41/77 −73/77 ,
0
0 the nullspace of A − 5I provides two eigenvectors, v1 = (41, −81, 77, 0)T and v2 = (43, 73, 0, 77)T , and
solutions 41 −81 y1 (t) = etA v1 = e5t 77 0 41 73 y2 (t) = etA v2 = e5t .
0
77
Examining −11 −1
1 −8
A+I =
0
0
−17 −1 8
−1
6
8 1
10 9
0
→
0
0
16
0 0
1
0
0 0
0
1
0 −1 1
,
0
showing that the nullspace of A + I has dimension one, yielding only one eigenvector, v3 = (1, −1, 0, 1)T , and one solution
1 −1 y3 (t) = etA v3 = e−t .
0
1
But, −41 −73
(A + I )2 = 0
−77 0
0
0
0 41
1
36
41 1
41 73 0
→
0
0
77
0 0
0
0
0 0
1
0
0 −1 0
0
0 has dimension two, so v4 = (1, 0, 0, 1)T is in the nullspace of (A + I )2 , independent of v3 (it’s not a multiple
of v3 ), and (A + I )k v4 = 0 for k ≥ 2. Thus,
y4 (t) = etA v4
= et (−I +(A+I )) v4
= e−tI et (A+I ) v4
= e−t [I + t (A + I ) + (t 2 /2)(A + I )2 + · · · ] v4
= e−t [v4 + t (A + I )v4 ].
Thus, 1 1 0 0 y4 (t) = e−t + t (A + I ) 0
0
1
1 1 −1 0 1 = e−t + t 0
0 1
−1
1 − t t
= e −t .
0
1−t
5.40. If −1 −6
A=
0
−2 0
13
−6
5 0
0
−2
0 2
−42 ,
13 −16 a computer reveals the characteristic polynomial
p(λ) = (λ + 2)2 (λ2 + 2λ + 5).
For λ = −2, −3 −12
(A + 2I )2 = 10
−4 10 0
15 0
−25 0
5
0 1
−26 −54 0
→
70 0
−18
0 0
1
0
0 0
0
0
0 2
−2 ,
0
0 so we can pick v1 = (0, 0, 1, 0)T and v2 = (−2, 2, 0, 1)T . Thus,
0
0
y1 (t) = etA v1 = e−2t [v1 + t (A + 2I )v1 ] = e−2t 1
0 −2 2
y2 (t) = etA v2 = e−2t [v2 + t (A + 2I )v2 ] = e−2t .
t
1
The remaining eigenvalues are −1 ± 2i . A computer reveals an eigenvector w = (1, 3i, −2 − i, i)T associated
with λ = −1 + 2i . Thus,
z(t) = etA w = e(−1+2i)t w = e−t e2it (1, 3i, −2 − i, i)T .
Using Euler’s identity, 1 0 0 3 z(t) = e−t (cos 2t + i sin 2t) +i
−2 −1 0
1 1 0 0 3 = e−t cos 2t − sin 2t −2 −1 0
1 0 1 3 0 + sin 2t .
+ ie−t cos 2t −1 −2 1
0
Thus,
y3 (t) = e−t (cos 2t, −3 sin 2t, −2 cos 2t + sin 2t, − sin 2t)T and
y4 (t) = e−t (sin 2t, 3 cos 2t, − cos 2t − 2 sin 2t, cos 2t)T
are solutions. Because
0
0
det y1 (0), y2 (0), y3 (0), y4 (0) =
1
0
5.41. −2
2
0
1 1
0
−2
0 0
3
= 1,
−1
1 the solutions y1 (t), y2 (t), y3 (t), and y4 (t) are independent for all t and form a fundamental set of solutions.
If −8 −2 3
12 6 −3 −2 2
A=
,
2
0 −3 −4 −4 −1 2
6
a computer reveals the characteristic equation
p(λ) = (λ + 1)(λ + 2)3 ,
so λ1 = −1 and λ2 = −2 are eigenvalues, having algebraic multiplicities 1 and 3, respectively. Because −7 −2 3 1 0 0 −1 12 −3
A+I =
2
−4 −1
0
−1 2
−2
2 6
0
→
−4 0
7
0 1
0
0 0
1
0 −1 ,
1
0 the nullspace of A + I provides one eigenvector, v1 = (1, 1, −1, 1)T and one solution,
1
1
y1 (t) = etA v1 = e−t .
−1 1
Examining −6 −3
A + 2I = 2
−4 −2
0
0
−1 3
2
−1
2 1
12 6
0
→
0
−4 0
8 0
1
0
0 0
0
1
0 −2 0
,
0
0 9.5. The Exponential of a Matrix 677 showing that A + 2I has dimension one, giving up only one eigenvector, v2 = (2, 0, 0, 1)T , and one solution
2
0
y2 (t) = etA v2 = e−2t .
0
1
But, 0 −2
(A + 2I )2 = 2
−1 0
0
0
0 −1
1
−1
0 1
0
4
0
→
−4 0
2
0 0
0
0
0 0
1
0
0 −2 0
0
0 has dimension two, so v3 = (0, 1, 0, 0)T is in the nullspace of (A + 2I )2 , independent of v2 (it’s not a multiple
of v2 ), and (A + 2I )k v3 = 0 for k ≥ 2. Thus,
y3 (t) = etA v3
= et (−2I +(A+2I )) v3
= e−2tI et (A+2I ) v3
= e−2t [I + t (A + 2I ) + (t 2 /2)(A + 2I )2 + · · · ] v3
= e−2t [v3 + t (A + 2I )v3 ].
Thus, 0 0 1 1 y3 (t) = e−2t + t (A + 2I ) 0
0
0
0 0 −2 1 0 = e−2t + t 0
0 0
−1 −2t 1
= e −2 t .
0
−t
Examining −2 −2
(A + 2I )3 = 2
−2 0
0
0
0 1
1
−1
1 1
4
4
0
→
−4 0
4
0 0
0
0
0 −1/2
0
0
0 −2 0
,
0
we see that the nullspace of (A + 2I )3 has dimension three. Thus, we pick v4 = (1, 0, 2, 0)T from the nullspace of (A + 2I )3
independent of v2 and v3 (check this), having the property that (A + 2I )k v4 = 0 for k ≥ 3. Thus,
y4 (t) = etA v4
= et (−2I +(A+2I )) v4
= e−2tI et (A+2I ) v4
= e−2t [I + t (A + 2I ) + (t 2 /2!)(A + 2I )2 + · · · ] v4
= e−2t [v4 + t (A + 2I )v4 + (t 2 /2)(A + 2I )2 v4 ].
Further, 1 1 1 0 0 t 0 y4 (t) = e−2t + t (A + 2I ) + (A + 2I )2 2
2
2
2
0
0
0 1 0 −2 2 0 1 t 0 = e−2t + t + 2
0
0 2
0
0
−1 1 − t2 t
.
= e −2 t 2
−t 2 /2
2 5.42. If −2 −1 A = 15 12
−5 2
0
−16
−13
5 −2
−1
−1
1
0 0
0
10
6
−3 −3 −3 33 ,
26 −12 a computer reveals the characteristic equation
p(λ) = −(λ + 1)(λ + 2)4 .
The eigenvalue/eigenvector pair λ1 = −1, v1 = (−3, −2, −2, −2, −1)T gives the solution −3 −2 y1 (t) = etA v1 = e−t −2 . −2 1
Because (A + 2I )4 reduces
1
0 (A + 2I )4 → 0
0
0 −4/3
0
0
0
0 1/3
0
0
0
0 2 /3
0
0
0
0 8/3 0 0 ,
0
0 the dimension of the nullspace of (A + 2I )4 is 4 and the collection
4
3 v2 = 0 ,
0
0 −1 0 v3 = 3 ,
0
0 −2 0 v4 = 0 ,
3
0 −8 0 v5 = 0 0
3 is a basis for the nullspace of (A + 2I )4 . Moreover, for i = 2, 3, 4, and 5,
yi (t) = etA vi = e−2t [vi + t (A + 2I )vi + (t 2 /2!)(A + 2I )2 vi + (t 3 /3!)(A + 2I )3 vi ].
Using this result and a computer, y2 (t) = etA v2 y3 (t) = etA v3 y4 (t) = etA v4 y5 (t) = etA v5 8 + 12t − 5t 2 + t 3 6 + 4t + t 2 + t 3 1 = e−2t 24t − 5t 2 + t 3 2
18t
−10t + 3t 2 2 + 12t − 5t 2 + t 3 4t + t 2 + t 3 1 −2 t = − e −6 + 24t − 5t 2 + t 3 2
18t
2
10t + 3t −4 + t 2 4+t 1 = e −2 t t 2 2
6
2t 8 + 9t − 5t 2 + t 3 t + t2 + t3 −2 t = −e 21t − 5t 2 + t 3 18t
2
−3 − 10t + 3t Because
−3
−2
det y1 (t), y2 (0), y3 (t), y4 (0), y5 (0) = −2
−2
1
5.43. 4
3
0
0
0 −1
0
3
0
0 −2
0
0
3
0 −8
0
0 = 27,
0
3 the solutions are independent for all t and form a fundamental set of solutions.
If −4 3
6
4
2 0 −8 −10 −8 2 10
9 −1 ,
A = −1 7 1 −4 −7 −7 0 −1 −1 −1 −1 0
a computer reveals the characteristic equation
p(λ) = (λ + 1)(λ + 2)4 ,
so λ1 = −1 and λ2 = −2 are eigenvalues, having algebraic multiplicities 1 and 4, respectively. Because
1 0 0 0 0 −3 3
6
4
2 0 −7 A + I = −1 7 1 −4
−1 −1 −10
11
−7
−1 −8
9
−6
−1 2
0 −1 → 0 0
0
1
0 1
0
0
0 0
1
0
0 0
0
1
0 −2 2 ,
−1 0 the nullspace of A + I provides one eigenvector, v1 = (0, 2, −2, 1, 1)T and one solution,
0
2 y1 (t) = etA v1 = e−t −2 .
1
1 680 Chapter 9. Linear Systems with Constant Coefﬁcients
Examining −2 0 A + 2I = −1
1
−1 3
−6
7
−4
−1 6
−10
12
−7
−1 1
4
2
−8 2 0 9 −1 → 0 0
−5 0
0
−1 2 0
1
0
0
0 0
0
1
0
0 1
−2
2
0
0 −1 −2 1 ,
0
showing that the nullspace of A + 2I has dimension two, giving two eigenvectors, v2 = (−1, 2, −2, 1, 0)T and v3 =
(1, 2, −1, 0, 1)T , and two solutions. −1 2 y2 (t) = etA v2 = e−2t −2 1
0
1
2 y3 (t) = etA v3 = e−2t −1 0
1
But,
0
0 (A + 2I )2 = 0
0
0 0
−4
4
−2
−2 0
−6
6
−3
−3 0
−4
4
−2
−2 0
0
2
0 −2 → 0 0
1
0
1 1
0
0
0
0 3/2
0
0
0
0 1
0
0
0
0 −1/2 0 0 ,
0
0 has dimension four, so v4 = (1, 0, 0, 0, 0)T and v5 = (0, −1, 0, 1, 0)T are in the nullspace of (A + 2I )2 .
Also,
1 0 0 0 −1 1 1 0 2 0 −1 0 1 0 0
2 −2 −1 0 0 → 0 0 1 0 , 0 0 0 1
1
001
0000
0
100
so each column is a pivot column and the vectors v2 , v3 , v4 and v5 are independent. Furthermore, (A + 2I )k v4 =
0 and (A + 2I )k v5 = 0 for k ≥ 2. Thus,
y4 (t) = e−2t [v4 + t (A + 2I )v4 ] −2 1 0 0 = e−2t 0 + t −1 1 0 −1
0 1 − 2t 0 = e − 2 t −t ,
t
−t 9.5. The Exponential of a Matrix 681 and
y5 (t) = e−2t [v5 + t (A + 2I )v5 ] 1 0 −2 −1 = e−2t 0 + t 2 −1 1 0
0 t −1 − 2t = e −2 t 2 t
, 1−t 0
5.44. In matrix form, x 1 x2 x3 x 4
x5 5
3 = −3
3
−4 7
6
−8
14
−9 1
5
−2
8
−6 1
4
−5
10
−5 8 x1 5 x2 −12 x3 .
18 x4 −9
x5 A computer reveals the characteristic equation
p(λ) = −(λ + 1)2 (λ − 4)3 .
For λ = −1, (A + I )2 reduces 1 0 (A + I )2 → 0
0
0 0
1
0
0
0 0
0
1
0
0 −1
1
0
0
0 −1 2 −1 ,
0
0 and v1 = (1, −1, 0, 1, 0)T and v2 = (1, −2, 1, 0, 1)T form a basis for the nullspace of (A + I )2 . Moreover,
yi (t) = etA vi = e−t [vi + t (A + I )vi ]
for i = 1, 2. Using this result and a computer, 1 −1 y1 (t) = etA v1 = e−t 0 1
0
1+t −2 − t y2 (t) = etA v2 = e−t 1 .
t 1 For λ = 4, (A − 4I )3 reduces 1
0 (A − 4I )3 → 0
0
0 0
1
0
0
0 1
0
0
0
0 1
0
0
0
0 1
1 0,
0
0 682 Chapter 9. Linear Systems with Constant Coefﬁcients
and v3 = (−1, 0, 1, 0, 0)T , v4 = (−1, 0, 0, 1, 0)T , and v5 = (−1, −1, 0, 0, 1)T form a basis for the nullspace
of (A − 4I )3 . Moreover,
yi (t) = etA vi = e4t [vi + t (A − 4I )vi + (t 2 /2!)(A − 4I )2 vi ]
for i = 3, 4, and 5. Using this result and a computer, −2
2
4t − t 1 y3 (t) = eAt v3 = e4t 2 − 6t + t 2 2 10t − 2t 2 −4t + t 2 −2
2
2t − t 1 y4 (t) = eAt v4 = e4t −4t + t 2 2 2 + 6t − 2 t 2 −2t + t 2 −2 −2 − t 2 1 4t At
y5 (t) = e v5 = e −2t + t 2 2 2t − 2t 2 2 + t2
Because
1
−1
det[y1 (0), y2 (0), y3 (0), y4 (0), y5 (0)] = 0
1
0 1
−2
1
0
1 −1
0
1
0
0 −1
0
0
1
0 −1
−1
0 = 1,
0
the solutions y1 (t), y2 (t), y3 (t), y4 (t), and y5 (t) are independent for all t and form a fundamental set of
solutions. Section 6. Qualitative Analysis of Linear Systems
6.1. In matrix form,
x
y = −0.2
−2.0 2.0
−0.2 x
,
y the coefﬁcient matrix
A= −0.2
−2.0 2.0
−0.2 has characteristic polynomial
p(λ) = λ2 + 0.4λ + 4.04, 9.6. Qualitative Analysis of Linear Systems 683 producing eigenvalues λ = −0.2 ± 2i . Because the real part of each eigenvalue is negative, the equilibrium
point at the origin is asymptotically stable.
6.2. In matrix form,
x
y 4
3 = x
.
y 0
1 The coefﬁcient matrix,
A= 4
3 0
1 is lower triangular, so the eigenvalues lie on the diagonal, λ1 = 4 and λ2 = 1. Because both eigenvalues are
positive, the equilibrium point at the origin is a source and unstable.
6.3. In matrix form,
x
y = −6
3 −15
6 the coefﬁcient matrix
A= −6
3 −15
6 has characteristic polynomial
p(λ) = λ2 + 9, x
,
y 684 Chapter 9. Linear Systems with Constant Coefﬁcients
producing eigenvalues λ = ±3i . Therefore, the equilibrium point at the origin is a stable center.
6.4. In matrix form,
x
y = 2
−3 0
−1 x
.
y The coefﬁcient matrix
A= 2
−3 0
−1 is lower triangular, so the eigenvalues lie on the diagonal, λ1 = 2 and λ2 = −1. Because there is at least one
positive eigenvalue, the equilibrium point at the origin is unstable. Indeed, with one positive and one negative
eigenvalue, the origin is a saddle.
6.5. In the system
y= 0.1
−2.0 2.0
y,
0.1 the coefﬁcient matrix
A= 0.1
−2.0 2 .0
0.1 has characteristic polynomial
p(λ) = λ2 − 0.2λ + 4.01,
producing eigenvalues λ = 0.1 ± 2i . Therefore, the equilibrium point at the origin is unstable. Indeed, the
equilibrium point is a spiral source.
6.6. In the system
y= −0.2
−0.1 0 .0
y,
−0.1 the coefﬁcient matrix
A= −0.2
−0.1 0.0
−0.1 is lower triangular. The eigenvalues lie on the diagonal, λ1 = −0.2 and λ2 = −0.1. Since both eigenvalues
are negative, the equilibrium point at the origin is asymptotically stable. Indeed, the origin is a sink.
6.7. In the system
y= 1
1 −4
y,
−3 the coefﬁcient matrix
A= 1
1 −4
−3 has characteristic polynomial
p(λ) = λ2 + 2λ + 1, 5 686 Chapter 9. Linear Systems with Constant Coefﬁcients
producing the repeated eigenvalue λ = −1. Because the real part of every eigenvalue is negative, the
equilibrium point at the origin is asymptotically stable. Indeed, the equilibrium point is a degenerate sink.
6.8. In the system
2
1 y=
the coefﬁcient matrix
A= 2
1 5 −1
y,
0
−1
0 has characteristic polynomial
p(λ) = λ2 − 2λ + 1 = (λ − 1)2 ,
producing a repeated eigenvalue, λ = 1. Because the eigenvalue is positive, the equilibrium point at the origin
is unstable.
6.9. Consider the system
y=
Using a computer, matrix
A= −3
−2
−3
−3
−2
−3 −4
−7
−8
−4
−7
−8 5 2
4
4 y. 2
4
4 has characteristic polynomial
p(λ) = −λ3 − 6λ2 − 11λ − 6, 9.6. Qualitative Analysis of Linear Systems 687 and eigenvalues λ1 = −3, λ2 = −2, and λ3 = −1. Because the real parts of all eigenvalues are negative, the
equilibrium point at the origin is asymptotically stable. One such solution, with initial condition (1, 1, 1)T , is
shown in the following ﬁgure.
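The eigenvalue-based stability classification used here can be reproduced numerically. A minimal NumPy sketch (coefficient matrix taken from this system):

```python
import numpy as np

# Coefficient matrix of this system; its eigenvalues determine stability.
A = np.array([[-3.0, -4, 2], [-2, -7, 4], [-3, -8, 4]])
eigs = np.linalg.eigvals(A)
print(np.sort(eigs.real))  # eigenvalues: approximately -3, -2, -1

# All real parts negative => the origin is asymptotically stable.
assert all(eigs.real < 0)
```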
6.10. Consider the system −3
2
−6 y=
Using a computer, matrix
A= −3
2
−6 −1
0
−1
−1
0
−1 0
0
3 y. 0
0
3 has characteristic polynomial
p(λ) = −(λ − 1)(λ − 2)(λ − 3)
and eigenvalues λ1 = 1, λ2 = 2, and λ3 = 3. Because there is a positive eigenvalue, the equilibrium point at
the origin is unstable. One solution, starting at the point (0.01, 0.01, 0.01) is shown in the following ﬁgure.
6.11. In matrix form, x
y
z = Using a computer, matrix
A= −1
0
0
−1
0
0 3
1
−3
3
1
−3 4
6
−5 x
y
x 4
6
−5 has characteristic polynomial
p(λ) = −λ3 − 5λ2 − 17λ − 13,
and eigenvalues λ1 = −1, λ2 = −2 + 3i , and λ3 = −2 − 3i . Because all the real parts of the eigenvalues are
negative, the equilibrium point at the origin is asymptotically stable. One such solution, with initial condition
(1, 1, 1)T , is shown in the following ﬁgure.
6.12. In matrix form, the system is
x
y
z 2
−2
−4 = 1
0
−6 x
y
z 0
0
−2 . The matrix has eigenvalues −2 and 1 ± i . Since the complex eigenvalues have positive real part, the origin is
an unstable equilibrium point. This is illustrated by the solution plotted in the accompanying ﬁgure.
6.13. If
y= 0
−1
4 0
0
−2 then matrix
A= 0
−1
4 0
0
−2 −1
0
−3 y, −1
0
−3 has characteristic polynomial
p(λ) = −(λ + 1)(λ2 + 2λ + 2)
and eigenvalues −1, −1 + i , and −1 − i . Therefore, the real part of each eigenvalue is negative, so the hypotheses
of Theorem 6.2 are satisﬁed and the equilibrium point at the origin is asymptotically stable. One such solution,
with initial condition (2, −1, −2)T , is shown in the image that follows.
6.14. If y= 3
0
0 −3
1
−3 −5
0
−2 y, then a computer reveals that matrix A= 3
0
0 −3
1
−3 −5
0
−2 has characteristic equation p(λ) = −(λ − 3)(λ − 1)(λ + 2) 690 Chapter 9. Linear Systems with Constant Coefﬁcients
and eigenvalues λ1 = 3, λ2 = 1, and λ3 = −2. Thus, at least one eigenvalue has positive real part
and Theorem 6.2 predicts that the equilibrium point at the origin is unstable. One solution, starting at
(0.01, 0.01, 0.01) is shown in the following ﬁgure.
6.15. If
3 16
y =
−14
−19 −2
−6
5
8 −5
−17
15
23 3
9
y,
−8 −13 then a computer reveals that matrix
3 16
A=
−14
−19 −2
−6
5
8 −5
−17
15
23 3
9
−8 −13 has characteristic equation
p(λ) = (λ − 2)(λ + 1)3
and eigenvalues λ1 = 2 and λ2 = −1, the latter having algebraic multiplicity 3. Thus, one eigenvalue has
positive real part and Theorem 6.2 predicts that the equilibrium point at the origin is unstable. One such
solution, with initial condition (0.1, 0.1, 0.1, 0.1)T , seems to approach the origin, only to veer away with the passage of time, much like a saddle point solution in the phase plane. This behavior is indicated in the
following plot of each component of the solution versus time.
6.16. With a computer we ﬁnd that the eigenvalues of the matrix −3 3
0 −4 −7
−1
−6 4
A=
0
4 0
−3
0 8
2
7 are −3 and −1, the latter having algebraic multiplicity 3. Since all of the eigenvalues are negative, Theorem 6.2
tells us that the origin is an asymptotically stable equilibrium point. This is veriﬁed by the solution plotted in
the accompanying ﬁgure.
6.17. (a) In matrix form, x
y
z = Using a computer, matrix
A= −3
−2
0
−3
−2
0 0
−1
0
0
−1
0 0
0
−2 x
y
z 0
0
−2 has characteristic polynomial
p(λ) = (λ + 3)(λ + 2)(λ + 1) . 692 Chapter 9. Linear Systems with Constant Coefﬁcients
and eigenvalues −3, −2 and −1. A computer also reveals the associated eigenvectors which lead to the
following exponential solutions.
0
0
1
y1 (t) = e−3t 1 , y2 (t) = e−2t 0 , and y3 (t) = e−t 1 .
1
0
0
These exponential solutions generate the halfline solutions shown in the following ﬁgure. Each of the
halfline solutions decay to the origin with the passage of time. (b) We selected initial conditions (1, 0, 1)T , (−1, 0, 1)T , (1/2, 1, 1)T , (−1/2, −1, 1)T , (1, 0, −1)T ,
(−1, 0, −1)T , (1/2, 1, −1)T , and (−1/2, −1, −1)T to craft the portrait in the following ﬁgure.
(c) Nodal sink.
6.18. (a) If
y=
then the coefﬁcient matrix
A= 1
0
0
1
0
0 −1
2
0 0
0
3 −1 0
20
03 y, 9.6. Qualitative Analysis of Linear Systems 693 is upper triangular, so the eigenvalues lie on the diagonal, λ1 = 1, λ2 = 2, and λ3 = 3. A computer
reveals the associated eigenvectors, and consequently, the exponential solutions
y1 (t) = et 1
0
0 , y2 (t) = e2t −1
1
0 , y3 (t) = e3t 0
0
1 . These exponential solutions generate the halfline solutions shown in the following ﬁgure. (b) We selected initial conditions (1, 2, 1)T , (1, 2, −1)T , (2, −1, 1)T , (2, −1, −1)T , (−1, −2, 1)T ,
(−1, −2, −1)T , (−2, 1, 1)T , and (−2, 1, −1)T to craft the portrait in the following ﬁgure. Each was
scaled by a factor of 1 × 10−3 .
(c) Nodal source.
6.19. (a) If
y= −1
10
0 −10
−1
0 0
0
−1 y, then, using a computer, matrix
A= −1
10
0 −10
−1
0 0
0
−1 has characteristic polynomial
p(λ) = (λ + 1)(λ2 + 2λ + 101) 694 Chapter 9. Linear Systems with Constant Coefﬁcients
and eigenvalues −1, −1 + 10i and −1 − 10i . A computer also generates associated eigenvectors, leading
to the real solution
0
y1 (t) = e−t 0
1
and the complex solution
1
−i
0 z(t) = e(−1+10i)t = e−t (cos 10t + i sin 10t)
= e −t cos 10t
sin 10t
0 This leads to the real solutions
cos 10t
y2 (t) = e−t sin 10t
0 , + ie−t and 1
0
0 + i −1
0
0
sin 10t
− cos 10t .
0
sin 10t
− cos 10t
0 y3 (t) = e−t . (b) Any solution starting on the zaxis lies on the halflines generated by the exponential solution
0
0.
1
Thus, the solution will remain on the zaxis as it decays to the equilibrium point at the origin. In the
following image, solutions with initial conditions (0, 0, 1)T and (0, 0, −1)T remain on the zaxis and
decay to the origin.
y(t) = C1 e−t z y
x (c) The general solution is
y(t) = C1 e−t 0
0
1 + C2 e−t cos 10t
sin 10t
0 + C3 e−t sin 10t
− cos 10t
0 . If a solution starts in the xy plane with initial condition y(0) = (a, b, 0)T , then
a
b
0 = C1 0
0
1 + C2 1
0
0 + C3 0
−1
0 , 9.6. Qualitative Analysis of Linear Systems 695 leading to C1 = 0, C2 = a , and C3 = −b. Thus, the particular solution is
cos 10t
sin 10t
0 y(t) = ae−t + be−t sin 10t
− cos 10t
0 , so these solutions will remain in the xy plane and spiral inward to the equilibrium point at the origin.
This is shown in the following ﬁgure, where we have plotted the solution with initial condition (1, 1, 0)T . z y
x (d) A solution having initial condition y(0) = (a, b, c)T , where c = 0, would lead to
a
b
c 0
0
1 = C1 + C2 1
0
0 + C3 0
−1
0 and C1 = c, C2 = a , and C3 = −b. Thus, the particular solution is
y(t) = e−t (a cos 10t − b sin 10t)
e−t (a sin 10t + b cos 10t)
ce−t . We saw in part (c) that if c = 0, solutions spiral into the origin while remaining in the xy plane. In this
case, the zcoordinate decays to zero, so it is reasonable to assume that solutions will spiral while the
zcoordinate decays to zero. Solutions with initial conditions (1, 1, 1)T and (−1, −1, −1)T are shown in
the following ﬁgure.
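The spiral-plus-decay behavior described in parts (c) and (d) follows from the eigenvalue structure, which is easy to confirm. A minimal NumPy sketch (coefficient matrix taken from this exercise):

```python
import numpy as np

# Coefficient matrix from this exercise.
A = np.array([[-1.0, -10, 0], [10, -1, 0], [0, 0, -1]])
eigs = np.linalg.eigvals(A)

# Every real part is -1, so all solutions decay like e^{-t}, while the
# complex pair -1 +/- 10i makes trajectories spiral in the xy plane.
assert np.allclose(eigs.real, -1)
assert np.allclose(np.sort(np.abs(eigs.imag)), [0, 10, 10])
print("eigenvalues: -1 and -1 +/- 10i")
```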
Section 7. Higher Order Linear Equations
7.1. (a) If

    x1(t) = (e^{3t}, 3e^{3t})^T,

then

    x1'(t) = (3e^{3t}, 9e^{3t})^T

and

    [0 1; 3 2] x1(t) = [0 1; 3 2] (e^{3t}, 3e^{3t})^T = (3e^{3t}, 9e^{3t})^T = x1'(t),

so x1 is a solution of

    x' = [0 1; 3 2] x.

Similarly, if

    x2(t) = (e^{−t}, −e^{−t})^T,

then

    x2'(t) = (−e^{−t}, e^{−t})^T

and

    [0 1; 3 2] x2(t) = [0 1; 3 2] (e^{−t}, −e^{−t})^T = (−e^{−t}, e^{−t})^T = x2'(t),

so x2 is also a solution of x' = [0 1; 3 2] x.

To show independence, we need only show that the functions are independent at one value of t. However,

    x1(0) = (1, 3)^T  and  x2(0) = (1, −1)^T

are clearly independent (x2(0) is not a multiple of x1(0)).

(b) Because

    x(t) = C1 x1(t) + C2 x2(t) = C1 (e^{3t}, 3e^{3t})^T + C2 (e^{−t}, −e^{−t})^T,

the first component of x(t) is y(t) = C1 e^{3t} + C2 e^{−t}. Thus,

    y' = 3C1 e^{3t} − C2 e^{−t}
    y'' = 9C1 e^{3t} + C2 e^{−t},

and

    y'' − 2y' − 3y = (9C1 e^{3t} + C2 e^{−t}) − 2(3C1 e^{3t} − C2 e^{−t}) − 3(C1 e^{3t} + C2 e^{−t})
= 0.

7.2. (a) If x1(t) = (sin 2t, 2 cos 2t)^T, then x1'(t) = (2 cos 2t, −4 sin 2t)^T and

    [0 1; −4 0] x1(t) = [0 1; −4 0] (sin 2t, 2 cos 2t)^T = (2 cos 2t, −4 sin 2t)^T = x1'(t),

so x1 is a solution of

    x' = [0 1; −4 0] x.

Similarly, if x2(t) = (cos 2t, −2 sin 2t)^T, then x2'(t) = (−2 sin 2t, −4 cos 2t)^T and

    [0 1; −4 0] x2(t) = (−2 sin 2t, −4 cos 2t)^T = x2'(t),

so x2 is also a solution of x' = [0 1; −4 0] x.

To show independence, we need only show that the functions are independent at one value of t. However,

    x1(0) = (0, 2)^T  and  x2(0) = (1, 0)^T

are clearly independent (x2(0) is not a multiple of x1(0)).

(b) Because

    x(t) = C1 x1(t) + C2 x2(t) = C1 (sin 2t, 2 cos 2t)^T + C2 (cos 2t, −2 sin 2t)^T,

the first component of x(t) is y(t) = C1 sin 2t + C2 cos 2t. Thus,

    y' = 2C1 cos 2t − 2C2 sin 2t  and  y'' = −4C1 sin 2t − 4C2 cos 2t,

and

    y'' + 4y = (−4C1 sin 2t − 4C2 cos 2t) + 4(C1 sin 2t + C2 cos 2t)
= 0.

7.3. If y1(t) = e^t and y2(t) = e^{2t}, suppose that there exist constants c1 and c2 such that

    c1 e^t + c2 e^{2t} = 0

for all t. Then,

    t = 0 ⇒ c1 + c2 = 0
    t = 1 ⇒ c1 e + c2 e^2 = 0.

Solving the first equation, c1 = −c2, and substituting into the second equation gives

    −c2 e + c2 e^2 = 0
    c2(e^2 − e) = 0.

Because e^2 − e ≠ 0, this gives c2 = 0, whence c1 = −c2 = 0. Hence, y1 and y2 are independent.

7.4. Suppose y1(t) = e^t cos t and y2(t) = e^t sin t, and there are constants C1 and C2 such that C1 y1(t) + C2 y2(t) = e^t [C1 cos t + C2 sin t] = 0 for all t. Then at t = 0 we have C1 = 0, and at t = π/2 we have C2 e^{π/2} = 0.
Hence both constants are 0, so the functions are linearly independent.
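This check at two points does the job, but the Wronskian gives a one-line test; here is a small sympy sketch (our own illustration, not part of the text):

```python
import sympy as sp

t = sp.symbols('t')
y1 = sp.exp(t) * sp.cos(t)
y2 = sp.exp(t) * sp.sin(t)

# Wronskian W = y1*y2' - y2*y1'; if W is nonzero anywhere, the pair is independent.
W = sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))
print(W)  # exp(2*t), which is never zero
```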
7.5. If y1(t) = cos t, y2(t) = sin t, and y3(t) = e^t, suppose that there exist constants c1, c2, and c3 such that

    c1 cos t + c2 sin t + c3 e^t = 0

for all t. Then,

    t = 0 ⇒ c1 + c3 = 0
    t = π/2 ⇒ c2 + c3 e^{π/2} = 0
    t = π ⇒ −c1 + c3 e^π = 0.

Solving the first equation, c1 = −c3, and substituting this into the third equation gives

    0 = c3 + c3 e^π = c3(1 + e^π).

Because e^π + 1 ≠ 0, this gives c3 = 0, whence c1 = −c3 = 0. Substituting c3 = 0 into the second equation gives

    0 = c2 + 0 · e^{π/2} = c2.

Therefore, y1, y2, and y3 are linearly independent.
7.6. Chapter 9. Linear Systems with Constant Coefﬁcients
If y1 (t) = et , y2 (t) = tet , and y3 (t) = t 2 et , suppose that there exists constants C1 , C2 , and C3 such that
C1 et + C2 tet + C3 t 2 et = 0
for all t . If t = 1, then C1 e + C2 e + C3 e = 0
(C1 + C2 + C3 )e = 0
C1 + C2 + C3 = 0. If t = −1, then C1 e−1 − C2 e−1 + C3 e−1 = 0
(C1 − C2 + C3 )e−1 = 0
C1 − C2 + C3 = 0 Finally, if t = 0, then C1 = 0 and the last two equations become
C2 + C3 = 0
−C2 + C3 = 0.
Because the system

    [1 1; −1 1] (C2, C3)^T = (0, 0)^T

has coefficient matrix with determinant D = 2, the coefficient matrix is nonsingular and this last system has the unique solution C2 = C3 = 0. Hence, C1 = C2 = C3 = 0 and the solutions y1(t) = e^t, y2(t) = te^t, and y3(t) = t^2 e^t are linearly independent.

7.7. If y1(t) = cos 3t, then

    y1'(t) = −3 sin 3t
    y1''(t) = −9 cos 3t

and

    y1'' + 9y1 = −9 cos 3t + 9 cos 3t = 0.

Similarly, if y2(t) = sin 3t, then

    y2'(t) = 3 cos 3t
    y2''(t) = −9 sin 3t,

and

    y2'' + 9y2 = −9 sin 3t + 9 sin 3t = 0.

Thus, both y1 and y2 are solutions of y'' + 9y = 0. Finally, the Wronskian is

    W(t) = det [y1 y2; y1' y2'] = det [cos 3t, sin 3t; −3 sin 3t, 3 cos 3t] = 3 cos^2 3t + 3 sin^2 3t = 3,

which is nonzero for all t. Hence, the solutions y1 and y2 are linearly independent.

7.8.
If y1 (t) = e−10t and y2 (t) = et , we have y1 (t) = −10e−10t and y1 (t) = 100e−10t . Hence y1 + 9y1 − 10y1 =
(100 − 90 − 10)e−10t = 0, so y1 is a solution. Similarly, y2 (t) = y2 (t) = et , so y2 + 9y2 − 10y2 =
(1 + 9 − 10)et = 0, so y2 is a solution. We have y1 (t) = e−10t = e−11t et = e−11t y2 (t). Since e−11t is not
constant, the functions are linearly independent, and therefore form a fundamental set of solutions.
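The exponents here are exactly the roots of the characteristic polynomial λ^2 + 9λ − 10, which is easy to spot-check (our own illustration):

```python
# Characteristic polynomial of y'' + 9y' - 10y = 0.
def p(lam):
    return lam**2 + 9*lam - 10

# lambda = -10 and lambda = 1 are roots, so e^(-10t) and e^t are solutions.
assert p(-10) == 0
assert p(1) == 0
```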
7.9. If y1(t) = e^{2t}, then

    y1'(t) = 2e^{2t}
    y1''(t) = 4e^{2t}

and

    y1'' − 4y1' + 4y1 = 4e^{2t} − 8e^{2t} + 4e^{2t} = 0.

Similarly, if y2(t) = te^{2t}, then

    y2'(t) = e^{2t}(2t + 1)
    y2''(t) = e^{2t}(4t + 4),

and

    y2'' − 4y2' + 4y2 = e^{2t}(4t + 4) − 4e^{2t}(2t + 1) + 4te^{2t}
                      = e^{2t}(4t + 4 − 8t − 4 + 4t)
                      = 0.

Thus, both y1 and y2 are solutions of y'' − 4y' + 4y = 0. Finally, the Wronskian is

    W(t) = det [y1 y2; y1' y2']
         = det [e^{2t}, te^{2t}; 2e^{2t}, e^{2t}(2t + 1)]
         = e^{4t}(2t + 1) − 2te^{4t}
         = e^{4t},

which is nonzero for all t. Hence, the solutions y1 and y2 are linearly independent.

7.10.
If y1 (t) = cos 3t , then
y1 = −3 sin 3t,
y1 = −9 cos 3t, and
y1 = 27 sin 3t.
Thus,
y1 − 3y1 + 9y1 − 27y1 = 27 sin 3t − 3(−9 cos 3t) + 9(−3 sin 3t) − 27(cos 3t)
=0
and y1 is a solution of y − 3y + 9y − 27y = 0. In a similar manner, y2 (t) = sin 3t and y3 (t) = e3t are
also solutions. Finally, the Wronskian is

    W(t) = det [y1 y2 y3; y1' y2' y3'; y1'' y2'' y3'']
         = det [cos 3t, sin 3t, e^{3t}; −3 sin 3t, 3 cos 3t, 3e^{3t}; −9 cos 3t, −9 sin 3t, 9e^{3t}].

Using a computer, W(t) = 54e^{3t}, which is never zero. Therefore, the solutions y1, y2, and y3 are linearly independent.

7.11.
If y1 (t) = et , then
y1 − 3y1 + 3y1 − y1 = et − 3et + 3et − et = 0.
If y2 (t) = tet , then y2 = (t + 1)et
y2 = (t + 2)et
y2 = (t + 3)et and y2 − 3y2 + 3y2 − y2 = (t + 3)et − 3(t + 2)et
3(t + 1)et − tet
t = e (t + 3 − 3t − 6 + 3t + 3 − t)
= 0. 700 Chapter 9. Linear Systems with Constant Coefﬁcients
If y3 (t) = t 2 et , then
y3 = (t 2 + 2t)et
y3 = (t 2 + 4t + 2)et
y3 = (t 2 + 6t + 6)et ,
and
y3 − 3y3 + 3y3 − y3 = (t 2 + 6t + 6)et − 3(t 2 + 4t + 2)et
3(t 2 + 2t)et − t 2 et
= et (t 2 + 6t + 6
− 3t 2 − 12t − 6 + 3t 2 + 6t − t 2 )
= 0.
Thus, y1, y2, and y3 are solutions of the equation y''' − 3y'' + 3y' − y = 0. Finally, the Wronskian is

    W(t) = det [y1 y2 y3; y1' y2' y3'; y1'' y2'' y3'']
         = det [e^t, te^t, t^2 e^t; e^t, (t + 1)e^t, (t^2 + 2t)e^t; e^t, (t + 2)e^t, (t^2 + 4t + 2)e^t].

Using a computer, W(t) = 2e^{3t}, which is never zero. Therefore, the solutions y1, y2, and y3 are linearly independent.
7.12. If y1 = cos 3t, then

    y1' = −3 sin 3t
    y1'' = −9 cos 3t
    y1''' = 27 sin 3t
    y1^(4) = 81 cos 3t.

Thus,

    y1^(4) + 13y1'' + 36y1 = 81 cos 3t + 13(−9 cos 3t) + 36 cos 3t = 0

and y1 is a solution of y^(4) + 13y'' + 36y = 0. In a similar manner, y2 = sin 3t, y3 = cos 2t, and y4 = sin 2t are also solutions. Finally, the Wronskian is

    W(t) = det [y1 y2 y3 y4; y1' y2' y3' y4'; y1'' y2'' y3'' y4''; y1''' y2''' y3''' y4''']
         = det [cos 3t, sin 3t, cos 2t, sin 2t;
                −3 sin 3t, 3 cos 3t, −2 sin 2t, 2 cos 2t;
                −9 cos 3t, −9 sin 3t, −4 cos 2t, −4 sin 2t;
                27 sin 3t, −27 cos 3t, 8 sin 2t, −8 cos 2t].

Using a computer, W(t) = 150, so the solutions y1, y2, y3, and y4 are linearly independent.

7.13.
(a) If y = e^{λt}, then

    y' = λe^{λt}
    y'' = λ^2 e^{λt}
    y''' = λ^3 e^{λt}.

Substituting these results into y''' + ay'' + by' + cy = 0 gives
λ3 eλt + aλ2 eλt + bλeλt + ceλt = 0
eλt (λ3 + aλ2 + bλ + c) = 0.
Because eλt can never equal zero, we must have
λ3 + aλ2 + bλ + c = 0.
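Part (b) below recasts the equation as a first-order system; numerically, the companion matrix built there has exactly this characteristic polynomial (up to sign). A sketch with sample coefficients a, b, c of our own choosing:

```python
import numpy as np

a, b, c = 2.0, 3.0, 5.0  # illustrative coefficients, not from the text
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-c,  -b,  -a]])

# np.poly(A) returns the coefficients of det(lambda*I - A), which should be
# lambda^3 + a*lambda^2 + b*lambda + c (the text's p(lambda) negated).
coeffs = np.poly(A)
print(coeffs)  # approximately [1. 2. 3. 5.]
```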
(b) If

    y''' = −ay'' − by' − cy,

let x1 = y, x2 = y', and x3 = y''. Then

    x1' = x2
    x2' = x3
    x3' = −ax3 − bx2 − cx1.

In matrix form, x' = Ax, where

    A = [0 1 0; 0 0 1; −c −b −a].

The characteristic polynomial is

    p(λ) = det(A − λI) = det [−λ 1 0; 0 −λ 1; −c −b −a−λ].

Expanding across the first row,

    p(λ) = −λ det [−λ 1; −b −a−λ] − 1 · det [0 1; −c −a−λ]
         = −λ(aλ + λ^2 + b) − 1(c)
         = −λ^3 − aλ^2 − bλ − c.
7.14. The characteristic polynomial of the equation y''' − 2y'' − y' + 2y = 0 is p(λ) = λ^3 − 2λ^2 − λ + 2. Notice that
2 is a root. Hence the polynomial factors as p(λ) = (λ − 2)(λ2 − 1) = (λ − 2)(λ − 1)(λ + 1). Consequently,
the roots are −1, 1, and 2. We have the exponential solutions y1 (t) = e−t , y2 (t) = et , and y3 (t) = e2t .
Since the roots are distinct, these solutions are linearly independent, and therefore form a fundamental set of
solutions.
7.15. If y''' − 3y'' − 4y' + 12y = 0, then the characteristic equation factors
λ3 − 3λ2 − 4λ + 12 = 0
λ2 (λ − 3) − 4(λ − 3) = 0
(λ + 2)(λ − 2)(λ − 3) = 0.
Thus, the characteristic equation has roots −2, 2, and 3, leading to the general solution
y(t) = C1 e−2t + C2 e2t + C3 e3t . 702
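Root-finding for characteristic polynomials like this one can be delegated to a computer; a minimal numpy sketch (our own illustration):

```python
import numpy as np

# Roots of lambda^3 - 3*lambda^2 - 4*lambda + 12 from problem 7.15.
roots = np.roots([1, -3, -4, 12])
print(sorted(roots.real))  # approximately [-2.0, 2.0, 3.0]
```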
7.16. Chapter 9. Linear Systems with Constant Coefﬁcients
If y^(4) − 5y'' + 4y = 0, then the characteristic equation factors
λ 4 − 5λ 2 + 4 = 0
(λ2 − 4)(λ2 − 1) = 0
(λ + 2)(λ − 2)(λ + 1)(λ − 1) = 0.
Thus, the characteristic equation has roots −2, 2, −1, and 1, leading to the general solution
y(t) = C1 e^{−2t} + C2 e^{2t} + C3 e^{−t} + C4 e^t.

7.17. If y^(4) − 13y'' + 36y = 0, then the characteristic equation factors
λ4 − 13λ2 + 36 = 0
(λ2 − 4)(λ2 − 9) = 0
(λ + 2)(λ − 2)(λ + 3)(λ − 3) = 0.
Thus, the characteristic equation has roots −3, −2, 2, and 3, leading to the general solution
y(t) = C1 e^{−3t} + C2 e^{−2t} + C3 e^{2t} + C4 e^{3t}.

7.18. If y''' + 2y'' − 5y' − 6y = 0, then the characteristic polynomial is p(λ) = λ^3 + 2λ^2 − 5λ − 6. Note that −1 is
a root of p , so
p(λ) = (λ + 1)(λ2 + λ − 6)
= (λ + 1)(λ + 3)(λ − 2).
Thus, the characteristic polynomial has roots −1, −3, and 2, leading to the general solution
y(t) = C1 e^{−t} + C2 e^{−3t} + C3 e^{2t}.

7.19. If y''' − 4y'' − 11y' + 30y = 0, then the characteristic equation is
λ3 − 4λ2 − 11λ + 30 = 0.
A plot of the characteristic equation (computer or calculator) reveals possible roots. The plot suggests that −3 is a root, and division by λ + 3 guarantees that −3 is a root and λ + 3 is a factor.
(λ + 3)(λ2 − 7λ + 10) = 0
(λ + 3)(λ − 2)(λ − 5) = 0
Thus, the roots are −3, 2, and 5, and the general solution is
y(t) = C1 e−3t + C2 e2t + C3 e5t . 9.7. Higher Order Linear Equations
7.20. If y^(5) + 3y^(4) − 5y''' − 15y'' + 4y' + 12y = 0, then the characteristic equation is
λ5 + 3λ4 − 5λ3 − 15λ2 + 4λ + 12 = 0.
A plot of the characteristic equation (computer or calculator) reveals possible roots. The plot suggests a root at −3. Long (or synthetic) division reveals
(λ + 3)(λ4 − 5λ2 + 4) = 0
(λ + 3)(λ2 − 4)(λ2 − 1) = 0
(λ + 3)(λ + 2)(λ − 2)(λ + 1)(λ − 1) = 0.
Thus, the roots of the characteristic equation are −3, −2, −1, 1, and 2, and the general solution is
y(t) = C1 e−3t + C2 e−2t + C3 e−t + C4 et + C5 e2t .
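The long (or synthetic) division step can also be done with numpy's polynomial division; a zero remainder confirms the suggested root (our own illustration):

```python
import numpy as np

# Divide lambda^5 + 3*lambda^4 - 5*lambda^3 - 15*lambda^2 + 4*lambda + 12
# by the suggested factor (lambda + 3).
quotient, remainder = np.polydiv([1, 3, -5, -15, 4, 12], [1, 3])
print(quotient)   # [ 1.  0. -5.  0.  4.], i.e. lambda^4 - 5*lambda^2 + 4
print(remainder)  # zero remainder, so -3 is indeed a root
```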
7.21. If y^(5) − 4y^(4) − 13y''' + 52y'' + 36y' − 144y = 0, then the characteristic equation is
λ5 − 4λ4 − 13λ3 + 52λ2 + 36λ − 144 = 0.
A plot of the characteristic equation (computer or calculator) reveals possible roots. The plot suggests a root at −3. Long (or synthetic) division reveals
(λ + 3)(λ4 − 7λ3 + 8λ2 + 28λ − 48) = 0.
The plot suggests a root at −2. Again, division reveals
(λ + 3)(λ + 2)(λ3 − 9λ2 + 26λ − 24) = 0. 704 Chapter 9. Linear Systems with Constant Coefﬁcients
The plot suggest a root at 2. Again, division reveals
(λ + 3)(λ + 2)(λ − 2)(λ2 − 7λ + 12) = 0
(λ + 3)(λ + 2)(λ − 2)(λ − 3)(λ − 4) = 0.
Thus, the roots of the characteristic equation are −3, −2, 2, 3, and 4, and the general solution is
y(t) = C1 e^{−3t} + C2 e^{−2t} + C3 e^{2t} + C4 e^{3t} + C5 e^{4t}.

7.22. If y''' − 3y' + 2y = 0, the characteristic equation is
λ 3 − 3λ + 2 = 0 .
The plot of the characteristic equation suggests a root at λ = −2. Division by λ + 2 reveals
(λ + 2)(λ2 − 2λ + 1) = 0
(λ + 2)(λ − 1)2 = 0.
Thus, the roots are −2 and 1, with the latter having multiplicity of 2. Therefore, the general solution is
y(t) = C1 e−2t + C2 et + C3 tet .
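A symbolic check that this general solution satisfies the equation (our own sketch, using sympy):

```python
import sympy as sp

t, C1, C2, C3 = sp.symbols('t C1 C2 C3')

# Proposed general solution of y''' - 3y' + 2y = 0 (root 1 has multiplicity 2).
y = C1*sp.exp(-2*t) + (C2 + C3*t)*sp.exp(t)
residual = sp.simplify(sp.diff(y, t, 3) - 3*sp.diff(y, t) + 2*y)
print(residual)  # 0 for every choice of C1, C2, C3
```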
7.23. If y''' + y'' − 8y' − 12y = 0, then the characteristic equation is
λ3 + λ2 − 8λ − 12 = 0.
The plot of the characteristic equation suggests a root at −2. Division shows that
(λ + 2)(λ2 − λ − 6) = 0
(λ + 2)(λ + 2)(λ − 3) = 0.
Hence there are two roots, −2 and 3, with the former having algebraic multiplicity 2. Thus, the general
solution is
y(t) = C1 e^{−2t} + C2 te^{−2t} + C3 e^{3t}.

7.24. If y''' + 6y'' + 12y' + 8y = 0, then the characteristic equation is
λ3 + 6λ2 + 12λ + 8 = 0.
A plot of the characteristic equation suggests a multiple root at −2. Division by λ + 2 reveals that
(λ + 2)(λ2 + 4λ + 4) = 0
(λ + 2)3 = 0.
Thus, λ = −2 is a root of algebraic multiplicity 3. Therefore, the general solution is
y(t) = C1 e^{−2t} + C2 te^{−2t} + C3 t^2 e^{−2t}.

7.25. If y''' + 3y'' + 3y' + y = 0, then the characteristic equation is
λ^3 + 3λ^2 + 3λ + 1 = 0.

The plot of the characteristic equation suggests a root at −1. Division shows that
(λ + 1)(λ2 + 2λ + 1) = 0
(λ + 1)3 = 0.
Thus, −1 is a root of algebraic multiplicity 3. Therefore, the general solution is
y(t) = C1 e−t + C2 te−t + C3 t 2 e−t .
7.26. If y^(5) + 3y^(4) − 6y''' − 10y'' + 21y' − 9y = 0, then the characteristic equation is
λ5 + 3λ4 − 6λ3 − 10λ2 + 21λ − 9 = 0.
A plot of the characteristic equation suggests multiple roots at −3 and 1. Repeated division by λ − 1 reveals
(λ − 1)3 (λ2 + 6λ + 9) = 0
(λ − 1)3 (λ + 3)2 = 0.
Thus, −3 and 1 are roots, with algebraic multiplicities 2 and 3, respectively. Therefore, the general solution is
y(t) = C1 e−3t + C2 te−3t + C3 et + C4 tet + C5 t 2 et .
7.27. If y^(5) − y^(4) − 6y''' + 14y'' − 11y' + 3y = 0, then the characteristic equation is
λ^5 − λ^4 − 6λ^3 + 14λ^2 − 11λ + 3 = 0.

The plot of the characteristic equation suggests a multiple root at 1. Repeated division by λ − 1 reveals that
(λ − 1)4 (λ + 3) = 0.
Thus, the roots are 1 and −3, with the former having algebraic multiplicity 4. Therefore, the general solution
is
y(t) = C1 et + C2 tet + C3 t 2 et + C4 t 3 et + C5 e−3t .
7.28. If y''' − y'' + 4y' − 4y = 0, then the characteristic equation is
λ3 − λ2 + 4λ − 4 = 0.
A plot of the characteristic equation reveals a possible root at λ = 1. Division by λ − 1 reveals
(λ − 1)(λ2 + 4) = 0.
Thus, the zeros are 1, −2i, and 2i, and the general solution is

    y(t) = C1 e^t + C2 cos 2t + C3 sin 2t.

7.29. If y''' − y'' + 2y = 0, then the characteristic equation is
λ^3 − λ^2 + 2 = 0.

A plot of the characteristic equation suggests a root at −1. Division reveals
(λ + 1)(λ2 − 2λ + 2) = 0.
The quadratic formula provides the remaining roots, 1 ± i . Thus, the general solution is
y(t) = C1 e−t + C2 et cos t + C3 et sin t.
7.30. If y^(4) + 17y'' + 16y = 0, then the characteristic equation is
λ4 + 17λ2 + 16 = 0.
This factors
(λ2 + 16)(λ2 + 1) = 0,
so we have zeros ±4i and ±i . Consequently, the general solution is
y(t) = C1 cos 4t + C2 sin 4t + C3 cos t + C4 sin t.

7.31. If y^(4) + 2y'' + y = 0, then the characteristic equation is
λ 4 + 2 λ2 + 1 = 0 ,
which easily factors as
(λ2 + 1)2 = 0.
Thus, both i and −i are roots of multiplicity 2. Therefore, the general solution is
y(t) = C1 cos t + C2 t cos t + C3 sin t + C4 t sin t.

7.32. If y^(5) − 9y^(4) + 34y''' − 66y'' + 65y' − 25y = 0, then the characteristic equation is
λ^5 − 9λ^4 + 34λ^3 − 66λ^2 + 65λ − 25 = 0.

A plot of the characteristic equation reveals a possible zero at 1. Division by λ − 1 reveals
(λ − 1)(λ4 − 8λ3 + 26λ2 − 40λ + 25) = 0.
A computer is used to ﬁnd that 2 + i and 2 − i are zeros of the second factor, each with algebraic multiplicity
2. Thus,
y(t) = C1 e2t cos t + C2 e2t sin t + C3 te2t cos t + C4 te2t sin t + C5 et
7.33. is the general solution.
If y^(6) + 3y^(4) + 3y'' + y = 0, then the characteristic equation is
λ6 + 3λ4 + 3λ2 + 1 = 0.
The form of the characteristic equation suggests the binomial theorem and
(λ2 + 1)3 = 0.
Thus, i and −i are roots, each having algebraic multiplicity 3. Therefore, the general solution is
y(t) = C1 cos t + C2 t cos t + C3 t^2 cos t + C4 sin t + C5 t sin t + C6 t^2 sin t.

7.34. If y'' − 2y' − 3y = 0, then the characteristic equation is
λ2 − 2λ − 3 = (λ − 3)(λ + 1) = 0.
Thus, the zeros are 3 and −1, and the general solution is
y(t) = C1 e3t + C2 e−t .
The initial condition y(0) = 4 provides
4 = C1 + C 2 .
Differentiating, y (t) = 3C1 e3t − C2 e−t . The initial condition y (0) = 0 provides
0 = 3C1 − C2 .
Thus, C1 = 1 and C2 = 3 and
y(t) = e^{3t} + 3e^{−t}.

7.35. If y'' + 2y' + 5y = 0, then the characteristic equation is
λ2 + 2 λ + 5 = 0 .
The quadratic formula provides the roots, −1 ± 2i . Thus, the general solution is
y(t) = C1 e−t cos 2t + C2 e−t sin 2t. 710 Chapter 9. Linear Systems with Constant Coefﬁcients
Substituting the initial condition y(0) = 2 provides C1 = 2. The derivative of the general solution is
y (t) = C1 e−t (− cos 2t − 2 sin 2t) + C2 e−t (− sin 2t + 2 cos 2t).
The initial condition y (0) = 0 provides
0 = −C 1 + 2 C 2 ,
which in turn, because C1 = 2, generates C2 = 1. Thus, the solution of the initial value problem is
y(t) = 2e^{−t} cos 2t + e^{−t} sin 2t.

7.36. If y'' + 4y' + 4y = 0, then the characteristic equation is
λ2 + 4λ + 4 = (λ + 2)2 = 0.
Thus, λ = −2 is a zero of algebraic multiplicity 2 and the general solution is
y(t) = C1 e−2t + C2 te−2t = (C1 + C2 t)e−2t .
The initial condition y(0) = 2 provides C1 = 2. Differentiating,
y (t) = C2 e−2t − 2(C1 + C2 t)e−2t .
The initial condition y (0) = −1 provides
−1 = C2 − 2C1 .
Thus, C2 = 3 and the solution is
y(t) = (2 + 3t)e^{−2t}.

7.37. If y'' − 2y' + y = 0, then the characteristic equation is
λ2 − 2λ + 1 = (λ − 1)2 = 0.
Thus, 1 is a single root of multiplicity 2, and the general solution is
y(t) = C1 et + C2 tet .
The initial condition y(0) = 1 provides C1 = 1. The derivative of the general solution is
y (t) = C1 et + C2 (t + 1)et .
The initial condition y (0) = 0 provides
0 = C1 + C 2 ,
which in turn, because C1 = 1, generates C2 = −1. Therefore, the solution of the initial value problem is
y(t) = e^t − te^t.

7.38. If y''' − 4y'' − 7y' + 10y = 0, then the characteristic equation is
λ^3 − 4λ^2 − 7λ + 10 = 0.

The plot of the characteristic equation suggests a zero at −2. Division by λ + 2 reveals
(λ + 2)(λ2 − 6λ + 5) = 0
(λ + 2)(λ − 5)(λ − 1) = 0.
Thus, the zeros are −2, 1, and 5 and the general solution and its derivatives are
y(t) = C1 e−2t + C2 et + C3 e5t
y (t) = −2C1 e−2t + C2 et + 5C3 e5t
y (t) = 4C1 e−2t + C2 et + 25C3 e5t .
The initial conditions y(0) = 1, y (0) = 0, and y (0) = −1 provide
1 = C1 + C2 + C3
0 = −2C1 + C2 + 5C3
−1 = 4C1 + C2 + 25C3
The augmented matrix reduces:

    [1 1 1 | 1; −2 1 5 | 0; 4 1 25 | −1] → [1 0 0 | 4/21; 0 1 0 | 11/12; 0 0 1 | −3/28],

revealing C1 = 4/21, C2 = 11/12, and C3 = −3/28. Thus, the solution is

    y(t) = (4/21)e^{−2t} + (11/12)e^t − (3/28)e^{5t}.
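The row reduction amounts to solving a 3 × 3 linear system for the constants; a minimal numpy sketch (our own illustration):

```python
import numpy as np

# Coefficient matrix and right-hand side from the initial conditions of 7.38.
A = np.array([[1.0, 1.0, 1.0],
              [-2.0, 1.0, 5.0],
              [4.0, 1.0, 25.0]])
b = np.array([1.0, 0.0, -1.0])
C = np.linalg.solve(A, b)
print(C)  # approximately (4/21, 11/12, -3/28)
```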
7.39. If y''' − 7y'' + 11y' − 5y = 0, then the characteristic equation is
λ^3 − 7λ^2 + 11λ − 5 = 0.

The plot of the characteristic equation suggests a root at 1. Division reveals
(λ − 1)(λ2 − 6λ + 5) = 0
(λ − 1)2 (λ − 5) = 0.
Thus, the roots are 1 and 5, the former having algebraic multiplicity 2. Thus, the general solution is
y(t) = C1 et + C2 tet + C3 e5t .
The initial condition y(0) = −1 provides
C1 + C3 = −1.
The derivative of the general solution is
y (t) = C1 et + C2 et (t + 1) + 5C3 e5t .
The initial condition y (0) = 1 provides
C 1 + C 2 + 5C 3 = 1.
The second derivative of the general solution is
y (t) = C1 et + C2 et (t + 2) + 25C3 e5t .
The initial condition y (0) = 0 provides
C1 + 2C2 + 25C3 = 0.
The augmented matrix reduces:

    [1 0 1 | −1; 1 1 5 | 1; 1 2 25 | 0] → [1 0 0 | −13/16; 0 1 0 | 11/4; 0 0 1 | −3/16],

which provides C1 = −13/16, C2 = 11/4, and C3 = −3/16. Therefore, the solution of the initial value problem is

    y(t) = −(13/16)e^t + (11/4)te^t − (3/16)e^{5t}.

7.40. If y''' − 2y' + 4y = 0, then the characteristic equation is
λ^3 − 2λ + 4 = 0.

A plot of the characteristic equation suggests a zero at −2. Division by λ + 2 reveals
(λ + 2)(λ2 − 2λ + 2) = 0,
and the quadratic formula produces the zeros of the second factor, 1 ± i . Therefore, the general solution is
y(t) = C1 e−2t + C2 et cos t + C3 et sin t.
The initial condition y(0) = 1 provides
1 = C1 + C2 .
Differentiating,
y (t) = −2C1 e−2t + C2 et cos t − C2 et sin t + C3 et sin t + C3 et cos t.
The initial condition y (0) = −1 provides
−1 = −2C1 + C2 + C3 .
Differentiating again,
y (t) = 4C1 e−2t + C2 et (cos t − sin t) + C2 et (− sin t − cos t)
+ C3 et (sin t + cos t) + C3 et (cos t − sin t).
The initial condition y (0) = 0 provides
0 = 4C1 + 2C3 .
The augmented matrix reduces:

    [1 1 0 | 1; −2 1 1 | −1; 4 0 2 | 0] → [1 0 0 | 2/5; 0 1 0 | 3/5; 0 0 1 | −4/5],

so C1 = 2/5, C2 = 3/5, C3 = −4/5, and the solution is

    y(t) = (2/5)e^{−2t} + (3/5)e^t cos t − (4/5)e^t sin t.

7.41. If y''' − 6y'' + 12y' − 8y = 0, then the characteristic equation is
λ^3 − 6λ^2 + 12λ − 8 = 0.

The plot of the characteristic equation suggests a multiple root at 2. Repeated division by λ − 2 reveals
(λ − 2)3 = 0.
Thus, the characteristic polynomial has a single root, 2, with algebraic multiplicity 3. Thus, the general
solution is
y(t) = C1 e2t + C2 te2t + C3 t 2 e2t .
The initial condition y(0) = −2 provides C1 = −2. The derivative of the general solution is
y (t) = 2C1 e2t + C2 e2t (2t + 1) + C3 e2t (2t 2 + 2t).
The initial condition y (0) = 0 provides
2 C1 + C2 = 0 ,
which in turn, because C1 = −2, provides C2 = 4. The second derivative of the general solution is
y (t) = 4C1 e2t + C2 e2t (4 + 4t) + C3 e2t (4t 2 + 8t + 2).
The initial condition y (0) = 2 provides
4C1 + 4C2 + 2C3 = 2,
which in turn, because C1 = −2 and C2 = 4, provides C3 = −3. Therefore, the solution of the initial value
problem is
y(t) = −2e2t + 4te2t − 3t 2 e2t .
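A symbolic verification of this initial value problem solution (our own sketch):

```python
import sympy as sp

t = sp.symbols('t')
y = (-2 + 4*t - 3*t**2) * sp.exp(2*t)

# Check the equation y''' - 6y'' + 12y' - 8y = 0 ...
residual = sp.simplify(
    sp.diff(y, t, 3) - 6*sp.diff(y, t, 2) + 12*sp.diff(y, t) - 8*y)
assert residual == 0

# ... and the initial conditions y(0) = -2, y'(0) = 0, y''(0) = 2.
assert y.subs(t, 0) == -2
assert sp.diff(y, t).subs(t, 0) == 0
assert sp.diff(y, t, 2).subs(t, 0) == 2
```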
7.42. If y''' − 3y' + 52y = 0, then the characteristic equation is
λ^3 − 3λ + 52 = 0.

The plot of the characteristic equation suggests a zero at −4. Division by λ + 4 reveals
(λ + 4)(λ2 − 4λ + 13) = 0.
The quadratic formula reveals the zeros of the second factor, 2 ± 3i . Thus, the general solution is
y(t) = C1 e−4t + e2t (C2 cos 3t + C3 sin 3t).
The initial condition y(0) = 0 provides
0 = C1 + C2 .
Differentiate.
y (t) = −4C1 e−4t + e2t (−3C2 sin 3t + 3C3 cos 3t) + 2e2t (C2 cos 3t + C3 sin 3t)
= −4C1 e−4t + e2t ((3C3 + 2C2 ) cos 3t + (2C3 − 3C2 ) sin 3t)
The initial condition y (0) = −1 provides
−1 = −4C1 + 2C2 + 3C3 .
Differentiate again.
y (t) = 16C1 e−4t + e2t ((−9C3 − 6C2 ) sin 3t + (6C3 − 9C2 ) cos 3t)
+ 2e2t ((3C3 + 2C2 ) cos 3t + (2C3 − 3C2 ) sin 3t)
The initial condition y (0) = 2 provides
2 = 16C1 + (6C3 − 9C2 ) + (6C3 + 4C2 )
2 = 16C1 − 5C2 + 12C3 .
The augmented matrix reduces:

    [1 1 0 | 0; −4 2 3 | −1; 16 −5 12 | 2] → [1 0 0 | 2/15; 0 1 0 | −2/15; 0 0 1 | −1/15].

Thus, C1 = 2/15, C2 = −2/15, C3 = −1/15, and the solution is

    y(t) = (2/15)e^{−4t} + e^{2t}(−(2/15) cos 3t − (1/15) sin 3t).

7.43. If y^(4) + 8y'' + 16y = 0, then the characteristic equation is
λ4 + 8λ2 + 16 = (λ2 + 4)2 = 0.
Therefore, the roots are ±2i, each of which has algebraic multiplicity 2. Therefore, the general solution is
y(t) = C1 cos 2t + C2 t cos 2t + C3 sin 2t + C4 t sin 2t. 716 Chapter 9. Linear Systems with Constant Coefﬁcients
The initial condition y(0) = 0 provides C1 = 0. The derivative of the general solution is
y (t) = −2C1 sin 2t + C2 (cos 2t − 2t sin 2t)
+ 2C3 cos 2t + C4 (sin 2t + 2t cos 2t).
The initial condition y (0) = −1 generates C2 + 2C3 = −1. The second derivative of the general solution is
y (t) = −4C1 cos 2t + C2 (−4 sin 2t − 4t cos 2t)
− 4C3 sin 2t + C4 (4 cos 2t − 4t sin 2t).
The initial condition y (0) = 2 generates −4C1 + 4C4 = 2. The third derivative of the general solution is
y (t) = 8C1 sin 2t + C2 (−12 cos 2t + 8t sin 2t)
− 8C3 cos 2t + C4 (−12 sin 2t − 8t cos 2t).
The initial condition y'''(0) = 0 generates −12C2 − 8C3 = 0. The augmented matrix

    [1 0 0 0 | 0; 0 1 2 0 | −1; −4 0 0 4 | 2; 0 −12 −8 0 | 0]

reduces to

    [1 0 0 0 | 0; 0 1 0 0 | 1/2; 0 0 1 0 | −3/4; 0 0 0 1 | 1/2].

Thus, C1 = 0, C2 = 1/2, C3 = −3/4, and C4 = 1/2. Therefore, the solution of the initial value problem is

    y(t) = (1/2)t cos 2t − (3/4) sin 2t + (1/2)t sin 2t.

7.44.
2 Recall that for a = (a1 , a2 , . . . , aq )T ∈ Rq , we deﬁne
ya (t) = (a1 + a2 t + · · · + aq t q −1 )eλt .
Now, let b = (b1, b2, . . . , bq)^T ∈ R^q and α, β ∈ R. Then,

    αa + βb = (αa1 + βb1, αa2 + βb2, . . . , αaq + βbq)^T
and
yαa+β b (t) = (αa1 + βb1 ) + (αa2 + βb2 )t + · · · + (αaq + βbq )t q −1 eλt
= (αa1 + αa2 t + · · · + αaq t q −1 )eλt + (βb1 + βb2 t + · · · + βbq t q −1 )eλt
= α (a1 + a2 t + · · · + aq t q −1 )eλt + β (b1 + b2 t + · · · + bq t q −1 )eλt
= αya (t) + βy b (t). 7.45. Thus, yα a+β b = αya + βy b .
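The identity y_{αa+βb} = α y_a + β y_b is easy to spot-check numerically (the sample vectors and λ below are our own choices):

```python
import numpy as np

lam = 0.5  # sample eigenvalue

def y(a, t):
    # y_a(t) = (a1 + a2*t + ... + aq*t**(q-1)) * e^(lam*t)
    return sum(ak * t**k for k, ak in enumerate(a)) * np.exp(lam * t)

a = np.array([1.0, -2.0, 3.0])
b = np.array([0.5, 4.0, -1.0])
alpha, beta = 2.0, -3.0
ts = np.linspace(-1.0, 1.0, 7)

# Linearity of the map a -> y_a, checked at several values of t.
assert np.allclose(y(alpha*a + beta*b, ts), alpha*y(a, ts) + beta*y(b, ts))
```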
First, we must show that the set V is closed under addition. Let a and b be elements of V ⊂ Rq . Then, ya
and yb are solutions of
y (n) + a1 y (n−1) + · · · + an−1 y + an y = 0.
(∗)
However, ya + yb , being a linear combination of solutions of (∗), is also a solution of (∗). However, by (7.31),
ya + yb = ya+b .
q Recall that the set V ⊂ R is deﬁned
V = {a : ya is a solution of (∗)}.
Therefore, ya+b is a solution of (∗) and a + b ∈ V . Therefore, V is closed under addition. Next, we must
show that V is closed under scalar multiplication. Let a ∈ V and let α ∈ R be a scalar. Then, by deﬁnition 9.8. Inhomogeneous Linear Systems 717 of V , ya is a solution of (∗). However, αya , being a linear combination of solutions of (∗), is also a solution
of (∗). By (7.31),
αya = yαa . 7.46. Hence, yαa is a solution of (∗) and α a ∈ V . Therefore, V is closed under scalar multiplication and is a
subspace of Rq .
Recall that
yj (t) = Pj (t)eλt
are independent solutions of
y (n) + a1 y (n−1) + · · · + an−1 y + an y = 0, (∗∗) for i = 1, 2, . . . , q . Recall that a ∈ V ∈ Rq iff ya is a solution of (∗∗). For each j = 1, 2, . . . , q , let aj be
the vector of coefﬁcients of the polynomial Pj (t). Thus, yaj = yj for j = 1, 2, . . . , q . Let
C1 a1 + C2 a2 + · · · + Cq aq = 0.
Then,
yC1 a1 +C2 a2 +···+Cq aq = y0 .
Then
C1 ya1 + C2 ya2 + · · · + Cq yaq = y0 .
But, yj = yaj , so
C1 y1 + C2 y2 + · · · + Cq yq = y0.
Note that y0 is the zero polynomial. However, the yj ’s are given as q independent solutions. Thus, C1 =
C2 = · · · = Cq = 0 and a1 , a2 , . . . , aq are independent. Section 8. Inhomogeneous Linear Systems
8.1. If

    A = [5 6; −2 −2]  and  f(t) = (e^t, e^t)^T,

then the characteristic polynomial is

    p(λ) = λ^2 − Tλ + D = λ^2 − 3λ + 2 = (λ − 1)(λ − 2),

generating eigenvalues λ1 = 1 and λ2 = 2. The associated eigenvectors are found from

    A − I = [4 6; −2 −3]  ⇒  v1 = (3, −2)^T,  and
    A − 2I = [3 6; −2 −4]  ⇒  v2 = (2, −1)^T.

Thus, the homogeneous solution is yh = C1 y1 + C2 y2, where

    y1(t) = e^t v1 = (3e^t, −2e^t)^T  and  y2(t) = e^{2t} v2 = (2e^{2t}, −e^{2t})^T.

The fundamental matrix is

    Y(t) = [y1(t), y2(t)] = [3e^t 2e^{2t}; −2e^t −e^{2t}].

The inverse¹ of Y(t) is calculated:

    Y^{−1}(t) = (1/e^{3t}) [−e^{2t} −2e^{2t}; 2e^t 3e^t] = [−e^{−t} −2e^{−t}; 2e^{−2t} 3e^{−2t}].

Hence,

    Y^{−1}(t)f(t) = [−e^{−t} −2e^{−t}; 2e^{−2t} 3e^{−2t}] (e^t, e^t)^T = (−3, 5e^{−t})^T,

and

    ∫ Y^{−1}(t)f(t) dt = ∫ (−3, 5e^{−t})^T dt = (−3t, −5e^{−t})^T.

Thus,

    yp = Y(t) ∫ Y^{−1}(t)f(t) dt
       = [3e^t 2e^{2t}; −2e^t −e^{2t}] (−3t, −5e^{−t})^T
       = (−9te^t − 10e^t, 6te^t + 5e^t)^T.

Finally, the general solution is

    y(t) = C1 e^t (3, −2)^T + C2 e^{2t} (2, −1)^T + (−9te^t − 10e^t, 6te^t + 5e^t)^T.
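The variation-of-parameters computation above can be double-checked symbolically; the following sketch (ours) confirms that yp satisfies y' = Ay + f:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[5, 6], [-2, -2]])
f = sp.Matrix([sp.exp(t), sp.exp(t)])

# Particular solution found by variation of parameters.
yp = sp.Matrix([-9*t*sp.exp(t) - 10*sp.exp(t),
                6*t*sp.exp(t) + 5*sp.exp(t)])

residual = (sp.diff(yp, t) - A*yp - f).applyfunc(sp.simplify)
print(residual)  # Matrix([[0], [0]])
```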
8.2. The matrix
    A = [3 −4; 2 −3]
homogeneous solution is yh = C1 y1 + C2 y2 , where
2
1
y1 (t) = et
and y2 (t) = e−t
.
1
1
The fundamental matrix is
2 e t e −t
Y (t) = [y1 (t), y2 (t)] =
.
e t e −t
The inverse is calculated with
e−t −e−t
Y −1 (t) =
.
−et
2e t
Hence,
e−t −e−t
e −t
e −2 t − 1
Y −1 (t)f (t) =
=
,
−1 + 2 e 2 t
−et
2e t
et
and
1
− 2 e −2 t − t
e −2 t − 1
.
Y −1 f (t) dt =
dt =
2t
2e − 1
e 2t − t
Thus,
yp = Y (t) Y −1 f (t) dt 1
− 2 e −2 t − t
2 e t e −t
t
−t
e
e
e 2t − t
−t
t
−e − 2te + et − te−t
=
.
1
− 2 e−t − tet + et − te−t = 1 Perhaps the easiest way to invert a 2 × 2 matrix is to use the following fact:
1
ab
⇒ A−1 =
A=
cd
det (A) d
−c −b
.
a 9.8. Inhomogeneous Linear Systems 719 Finally, the general solution is
−e−t − 2tet + et − te−t
2
1
+ C2 e−t
+
.
1
1
1
− 2 e−t − tet + et − te−t y(t) = C1 et
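The footnote's 2 × 2 inversion fact is simple enough to code directly; here it is applied to the fundamental matrix Y(t) of this problem (the helper name inv2x2 is ours):

```python
import sympy as sp

t = sp.symbols('t')

def inv2x2(M):
    # Swap the diagonal, negate the off-diagonal, divide by the determinant.
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return sp.Matrix([[d, -b], [-c, a]]) / (a*d - b*c)

Y = sp.Matrix([[2*sp.exp(t), sp.exp(-t)],
               [sp.exp(t),   sp.exp(-t)]])
Yinv = inv2x2(Y).applyfunc(sp.simplify)
print(Yinv)  # Matrix([[exp(-t), -exp(-t)], [-exp(t), 2*exp(t)]])
```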
8.3. If
A= −3
−2 6
4 f= and 3
,
4 then the characteristic polynomial is
p(λ) = λ2 − T λ + D = λ2 − λ = λ(λ − 1),
generating eigenvalues λ1 = 0 and λ2 = 1. The associated eigenvectors are
−3 6
2
⇒ v1 =
A − 0I =
, and
−2 4
1
−4 6
3
⇒ v2 =
.
A−I =
−2 3
2
Thus, the homogeneous solution is yh = C1 y1 + C2 y2 , where
2
3e t
and y2 (t) = et v2 =
y1 (t) = e0t v1 =
.
1
2e t
The fundamental matrix is
2 3e t
Y (t) = [y1 (t), y2 (t)] =
.
1 2e t
The inverse of Y (t) is calculated
1 2et −3et
2
−3
=
.
Y −1 (t) = t
2
−e−t 2e−t
e −1
Hence,
2
−3
3
−6
Y −1 (t)f (t) =
=
,
−e − t 2 e − t
4
5e − t
and
−6
−6t
dt =
.
Y −1 (t)f (t) dt =
5e − t
−5e−t
Thus,
yp = Y (t) Y −1 f (t) dt −6t
2 3e t
−5e−t
1 2e t
−12t − 15
=
.
−6t − 10
= Finally, the general solution is
y(t) = C1
8.4. 2
3et
+ C2
1
2e t The matrix
A= −3
−3 + −12t − 15
.
−6t − 10 10
8 has eigenvalues λ1 = 2 and λ2 = 3, with associated eigenvectors v1 = (2, 1)T and v2 = (5, 3)T . Thus, the
homogeneous solution is yh = C1 y1 + C2 y2 , where
2
5
y1 (t) = e2t
and y2 (t) = e3t
.
1
3
The fundamental matrix is
2e2t 5e3t
Y (t) = [y1 (t), y2 (t)] =
.
e2t 3e3t 720 Chapter 9. Linear Systems with Constant Coefﬁcients
The inverse is calculated with
Y −1 (t) = 3e3t
−e2t 1
e5t Hence, −5e3t
2 e 2t 3e−2t
−e−3t Y −1 (t)f (t) = = 3e−2t
−e−3t −5e−2t
.
2 e − 3t Y −1 (t)f (t) dt = 3
−11e−2t
=
,
5e−3t
4 −11e−2t
5e−3t and −5e−2t
2e−3t dt = 11e−2t /2
.
−5e−3t /3 Thus,
yp = Y (t) Y −1 (t)f (t) dt 2e2t 5e3t
e2t 3e3t
8/3
=
.
1/2 11e−2t /2
−5e3t /3 = Finally, the general solution is
y(t) = C1 e2t
8.5. 2
5
8/3
+ C2 e3t
+
.
1
3
1/2 A has eigenvalues 2 ± i , and associated with the eigenvalue 2 + i has eigenvector w = (−1 − i, 1)T . Hence
the homogenous equation has complex solution
z(t) = e(2+i)t −1 − i
1
−1
−1
+i
1
0
− cos t − sin t
+i
sin t = e2t [cos t + i sin t ]
sin t − cos t
cos t = e 2t Thus the homogeneous equation has the real solutions
y1 (t) = Re z(t) = e2t
y2 (t) = Im z(t) = e2t sin t − cos t
and
cos t
− cos t − sin t
.
sin t The fundamental matrix is
Y (t) = e2t sin t − cos t
cos t Its inverse is
Y −1 (t) = e−2t
Hence
Y −1 (t)f (t) = e−2t sin t
− cos t sin t
− cos t − cos t − sin t
.
sin t
cos t + sin t
.
sin t − cos t cos t + sin t
sin t − cos t 0
e 2t = and
Y −1 (t)f (t) dt = sin t − cos t
.
− cos t − sin t sin t + cos t
,
sin t − cos t 9.8. Inhomogeneous Linear Systems 721 Then the particular solution is
yp (t) = Y (t)
= e 2t
= e 2t Y −1 f (t) dt
sin t − cos t
cos t
2
.
−1 − cos t − sin t
sin t sin t − cos t .
− cos t − sin t The general solution is
y(t) = C1 e2t
8.6. sin t − cos t
cos t + C2 e2t The matrix
A= 4
−1 − cos t − sin t
sin t + e 2t 2
.
−1 2
2 has eigenvalues 3 ± i . Associated with λ = 3 + i is w = (−1 − i, 1)T , so the homogeneous equation has
complex solution
−1 − i
z(t) = e(3+i)t
1
−1
−1
3t
= e (cos t + i sin t)
+i
1
0
−1
−1
−1
−1
3t
= e cos t
.
− sin t
+ sin t
+ ie3t cos t
1
0
1
0
The real and imaginary parts of z(t) help form the fundamental matrix
− cos t + sin t − cos t − sin t
Y (t) = e3t
.
cos t
sin t
Its inverse is
sin t
cos t + sin t
.
Y −1 (t) = e−3t
− cos t − cos t + sin t
Hence,
sin t
cos t + sin t
t
Y −1 (t)f (t) = e−3t
− cos t − cos t + sin t
e3t
t sin t + e3t (cos t + sin t)
= e − 3t
−t cos t + e3t (− cos t + sin t)
− 3t
t e sin t + cos t + sin t
=
.
−te−3t cos t − cos t + sin t
Needless to say, ∫ Y^{−1}(t)f(t) dt is a tough antiderivative to find. We will use a CAS (computer algebra system):

    yp(t) = Y(t) ∫ Y^{−1}(t)f(t) dt = (−t/5 − 1/50 + 2e^{3t}, −t/10 − 3/50 − e^{3t})^T.

Hence, the general solution is

    y(t) = e^{3t} [C1 (−cos t + sin t, cos t)^T + C2 (−cos t − sin t, sin t)^T] + (−t/5 − 1/50 + 2e^{3t}, −t/10 − 3/50 − e^{3t})^T.

8.7. A has eigenvalues 0, 2, and 1, with corresponding eigenvectors (−1, 2, 0)^T, (1, 0, 1)^T, and (0, 3, 1)^T. Thus
−1 e 2 t 0
2
0 3e t .
V=
0 e 2t e t 722 Chapter 9. Linear Systems with Constant Coefﬁcients
If we form the augmented matrix [V , I ] and use row operations to reduce to row echelon form [I, V −1 ], we
discover that
−3
−1
3
V −1 = −2e−2t −e−2t 3e−2t .
2 e −t
e −t
−2e−t
Then sin t
0
0 V −1 f =
Hence . V −1 f dt = (− cos t, 0, 0)T , and the particular solution is
V −1 f dt yp (t) = V
=
= −1 e2t
2
0
0 e 2t
cos t
−2 cos t
0 − cos t
0
0 0
3e t
et
. The general solution is 8.8. + C2 e 2t
0
e 2t A= y(t) = C1 −1
2
0 1
0
0 The matrix 0
3e t
et + C3 −18
14
35 cos t
−2 cos t
0 + . 8
−6
has eigenvalues λ1 = 0, λ2 = −1, and λ3 = 1, with associated eigenvectors v1 = (2, −3, −7)^T, v2 = (−2, 2, 5)^T,
and v3 = (1, 0, 0)^T. Thus, the homogeneous solution is yh = C1 y1 + C2 y2 + C3 y3, where

y1(t) = (2, −3, −7)^T,   y2(t) = e^{−t} (−2, 2, 5)^T,   and   y3(t) = e^{t} (1, 0, 0)^T.

The fundamental matrix is

Y(t) = [y1(t), y2(t), y3(t)] = [ 2    −2e^{−t}   e^{t} ]
                               [ −3   2e^{−t}    0     ]
                               [ −7   5e^{−t}    0     ],

with inverse

Y^{−1}(t) = [ 0        −5         2        ]
            [ 0        −7e^{t}    3e^{t}   ]
            [ e^{−t}   −4e^{−t}   2e^{−t}  ].

Hence, with f(t) = (0, 1, 0)^T,

Y^{−1}(t)f(t) = (−5, −7e^{t}, −4e^{−t})^T

and

∫ Y^{−1}(t)f(t) dt = (−5t, −7e^{t}, 4e^{−t})^T.

Thus,

yp(t) = Y(t) ∫ Y^{−1}(t)f(t) dt = (−10t + 18, 15t − 14, 35t − 35)^T.

Finally, the general solution is

y(t) = C1 (2, −3, −7)^T + C2 e^{−t} (−2, 2, 5)^T + C3 e^{t} (1, 0, 0)^T + (−10t + 18, 15t − 14, 35t − 35)^T.
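The eigen-data and the particular solution of Exercise 8.8 can be verified exactly with sympy, using the matrix A and the forcing term f = (0, 1, 0)^T displayed above:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -18, 8], [0, 14, -6], [0, 35, -15]])  # matrix of Exercise 8.8
f = sp.Matrix([0, 1, 0])

# the eigenpairs quoted in the solution
pairs = [(0, sp.Matrix([2, -3, -7])),
         (-1, sp.Matrix([-2, 2, 5])),
         (1, sp.Matrix([1, 0, 0]))]
for lam, v in pairs:
    assert A*v == lam*v

# the particular solution found by variation of parameters solves y' = Ay + f
yp = sp.Matrix([-10*t + 18, 15*t - 14, 35*t - 35])
assert sp.simplify(yp.diff(t) - A*yp - f) == sp.zeros(3, 1)
```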
8.9. A has eigenvalues 0, −2, and −1, with corresponding eigenvectors (−1, 4, 1)^T, (−3, 2, 0)^T, and (0, 3, 1)^T,
so

V = [ −1   −3e^{−2t}   0       ]
    [ 4    2e^{−2t}    3e^{−t} ]
    [ 1    0           e^{−t}  ].

If we form the augmented matrix [V, I] and use row operations to reduce to reduced row echelon form [I, V^{−1}], we
discover that

V^{−1} = [ 2          3          −9        ]
         [ −e^{2t}    −e^{2t}    3e^{2t}   ]
         [ −2e^{t}    −3e^{t}    10e^{t}   ].

Then V^{−1}f = (6, −2e^{2t}, −6e^{t})^T, and ∫ V^{−1}f dt = (6t, −e^{2t}, −6e^{t})^T. Hence the particular solution is

yp(t) = V ∫ V^{−1}f dt = (3 − 6t, 24t − 20, 6t − 6)^T.

The general solution is

y(t) = C1 (−1, 4, 1)^T + C2 e^{−2t} (−3, 2, 0)^T + C3 e^{−t} (0, 3, 1)^T + (3 − 6t, 24t − 20, 6t − 6)^T.
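The row-reduced inverse and the integration step in Exercise 8.9 can be checked symbolically; this sketch uses V and V^{−1} exactly as displayed above:

```python
import sympy as sp

t = sp.symbols('t')
V = sp.Matrix([[-1, -3*sp.exp(-2*t), 0],
               [4,   2*sp.exp(-2*t), 3*sp.exp(-t)],
               [1,   0,              sp.exp(-t)]])        # fundamental matrix for 8.9
Vinv = sp.Matrix([[2,             3,             -9],
                  [-sp.exp(2*t), -sp.exp(2*t),   3*sp.exp(2*t)],
                  [-2*sp.exp(t), -3*sp.exp(t),   10*sp.exp(t)]])  # inverse found by row reduction

# the row reduction really produced the inverse
assert sp.simplify(V*Vinv) == sp.eye(3)

# integrating V^{-1} f entrywise reproduces the particular solution
g = sp.Matrix([6, -2*sp.exp(2*t), -6*sp.exp(t)])          # V^{-1} f
G = g.applyfunc(lambda e: sp.integrate(e, t))
yp = sp.simplify(V*G)
assert sp.simplify(yp - sp.Matrix([3 - 6*t, 24*t - 20, 6*t - 6])) == sp.zeros(3, 1)
```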
8.10. The matrix

A = [ 11   −7    −4  ]
    [ −6   4     2   ]
    [ 42   −27   −15 ]

has eigenvalues λ1 = 1, λ2 = −1, and λ3 = 0, with associated eigenvectors v1 = (1, −2, 6)^T, v2 = (1, 0, 3)^T,
and v3 = (1, 1, 1)^T. Thus, the homogeneous solution is yh = C1 y1 + C2 y2 + C3 y3, where

y1(t) = e^{t} (1, −2, 6)^T,   y2(t) = e^{−t} (1, 0, 3)^T,   and   y3(t) = (1, 1, 1)^T.

The fundamental matrix is

Y(t) = [y1(t), y2(t), y3(t)] = [ e^{t}     e^{−t}    1 ]
                               [ −2e^{t}   0         1 ]
                               [ 6e^{t}    3e^{−t}   1 ],

with inverse

Y^{−1}(t) = [ 3e^{−t}   −2e^{−t}   −e^{−t} ]
            [ −8e^{t}   5e^{t}     3e^{t}  ]
            [ 6         −3         −2      ].

Hence, with f(t) = (1, 0, 0)^T,

Y^{−1}(t)f(t) = (3e^{−t}, −8e^{t}, 6)^T

and

∫ Y^{−1}(t)f(t) dt = (−3e^{−t}, −8e^{t}, 6t)^T.

Thus,

yp(t) = Y(t) ∫ Y^{−1}(t)f(t) dt = (6t − 11, 6t + 6, 6t − 42)^T.

Finally, the general solution is

y(t) = C1 e^{t} (1, −2, 6)^T + C2 e^{−t} (1, 0, 3)^T + C3 (1, 1, 1)^T + (6t − 11, 6t + 6, 6t − 42)^T.
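As in the previous exercises, the eigen-data and the particular solution of Exercise 8.10 admit a quick exact check with sympy:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[11, -7, -4], [-6, 4, 2], [42, -27, -15]])  # matrix of Exercise 8.10
f = sp.Matrix([1, 0, 0])

# quoted eigenpairs
pairs = [(1, sp.Matrix([1, -2, 6])),
         (-1, sp.Matrix([1, 0, 3])),
         (0, sp.Matrix([1, 1, 1]))]
for lam, v in pairs:
    assert A*v == lam*v

# the particular solution solves y' = Ay + f
yp = sp.Matrix([6*t - 11, 6*t + 6, 6*t - 42])
assert sp.simplify(yp.diff(t) - A*yp - f) == sp.zeros(3, 1)
```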
8.11. A has eigenvalues −1 and 3, with corresponding eigenvectors (0, 1)^T and (−1, 1)^T. Thus

Y(t) = [ 0        −e^{3t} ]
       [ e^{−t}   e^{3t}  ],

Y(0) = [ 0   −1 ]        and        Y(0)^{−1} = [ 1    1 ]
       [ 1   1  ]                               [ −1   0 ],

so

e^{tA} = Y(t)Y(0)^{−1} = [ e^{3t}            0      ]
                         [ e^{−t} − e^{3t}   e^{−t} ].
8.12. A has eigenvalues 1 and 3, with corresponding eigenvectors (2, 1)^T and (1, 1)^T. Thus

Y(t) = [ 2e^{t}   e^{3t} ]
       [ e^{t}    e^{3t} ],

Y(0) = [ 2   1 ]        and        Y(0)^{−1} = [ 1    −1 ]
       [ 1   1 ]                               [ −1   2  ],

so

e^{tA} = Y(t)Y(0)^{−1} = [ 2e^{t} − e^{3t}   2e^{3t} − 2e^{t} ]
                         [ e^{t} − e^{3t}    2e^{3t} − e^{t}  ].
8.13. A has eigenvalues ±i. An eigenvector corresponding to i is w = (2 − i, 5)^T. Hence a complex solution is

z(t) = e^{it} w
     = [cos t + i sin t][(2, 5)^T + i(−1, 0)^T]
     = (2 cos t + sin t, 5 cos t)^T + i(2 sin t − cos t, 5 sin t)^T.

The real and imaginary parts of z are solutions to the homogeneous equation, and so

V(t) = [ 2 cos t + sin t   2 sin t − cos t ]
       [ 5 cos t           5 sin t         ].

Thus

V(0) = [ 2   −1 ]        and        V(0)^{−1} = [ 0    1/5 ]
       [ 5   0  ]                               [ −1   2/5 ],

and

e^{tA} = V(t)V(0)^{−1} = [ cos t − 2 sin t   sin t           ]
                         [ −5 sin t          cos t + 2 sin t ].
z(t) = e(1+i)t w
1
1
+i
1
0
cos t + sin t
+i
sin t = et [cos t + i sin t ]
cos t − sin t
cos t = et . The real and imaginary parts of z are solutions to the homogeneous equation, and so
V (t) =
Thus
V (0) = 1
1 cos t − sin t
cos t
1
0 and
etA = V (t)V (0)−1 =
8.15. and cos t + sin t
.
sin t
0
1 V (0)−1 = cos t + sin t
sin t 1
,
−1 −2 sin t
.
cos t − sin t A has eigenvalue λ = −3, which has algebraic multiplicity 2 and geometric multiplicity 1. Thus
etA = eλt et (A−λI )
= e−3t [I + t (A + 3I )]
1−t
−t
.
= e−3t
t
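The shortcut used here works because (A − λI) is nilpotent when λ is the only eigenvalue, so the exponential series terminates. The exercise's matrix is not printed in this excerpt; the A below is inferred from the displayed exponential (t(A + 3I) must equal the matrix [[−t, −t], [t, t]]), which is an assumption consistent with the solution:

```python
import numpy as np

# A inferred from e^{tA} above: A + 3I = [[-1, -1], [1, 1]]
A = np.array([[-4.0, -1.0], [1.0, -2.0]])
I = np.eye(2)
N = A + 3*I

# the double eigenvalue -3 with a 1-dimensional eigenspace makes N nilpotent
assert np.allclose(N @ N, 0)

def expm_series(M, terms=40):
    """Matrix exponential via a truncated Taylor series (fine at this scale)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

t = 0.7
shortcut = np.exp(-3*t) * (I + t*N)      # e^{tA} = e^{-3t}[I + t(A + 3I)]
assert np.allclose(expm_series(t*A), shortcut)
```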
8.16. A has eigenvalue λ = 2, which has algebraic multiplicity 2 and geometric multiplicity 1. Thus

e^{tA} = e^{λt} e^{t(A−λI)}
       = e^{2t} [I + t(A − 2I)]
       = e^{2t} [ 1 − 2t   t      ]
                [ −4t      1 + 2t ].
8.17. A has eigenvalues 2 and −1 with eigenvectors (−1, 1)^T and (−1, 2)^T. Thus

Y(t) = [ −e^{2t}   −e^{−t} ],        Y(0) = [ −1   −1 ],        and        Y(0)^{−1} = [ −2   −1 ]
       [ e^{2t}    2e^{−t} ]                [ 1    2  ]                                [ 1    1  ].

Thus

e^{tA} = Y(t)Y(0)^{−1} = [ 2e^{2t} − e^{−t}     e^{2t} − e^{−t}  ]
                         [ 2e^{−t} − 2e^{2t}    2e^{−t} − e^{2t} ].

The solution to the initial value problem is

y(t) = e^{tA} y0 = e^{tA} (1, 1)^T = (3e^{2t} − 2e^{−t}, 4e^{−t} − 3e^{2t})^T.

726
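The matrix A itself is not shown in this excerpt, but it is determined by the stated eigen-data as A = P diag(2, −1) P^{−1} with P = [v1 v2]; treating that as an assumption, the displayed exponential and the initial value problem can be checked numerically:

```python
import numpy as np

# rebuild A from the eigen-data of Exercise 8.17 (an inference, not printed above)
P = np.array([[-1.0, -1.0], [1.0, 2.0]])
A = P @ np.diag([2.0, -1.0]) @ np.linalg.inv(P)
assert np.allclose(A, [[5, 3], [-6, -4]])

def etA(t):
    """Closed form Y(t)Y(0)^{-1} computed in the solution."""
    e2, em = np.exp(2*t), np.exp(-t)
    return np.array([[2*e2 - em,    e2 - em],
                     [2*em - 2*e2,  2*em - e2]])

# defining properties: e^{0A} = I and d/dt e^{tA} = A e^{tA}
t, h = 0.5, 1e-5
assert np.allclose(etA(0), np.eye(2))
assert np.allclose((etA(t + h) - etA(t - h)) / (2*h), A @ etA(t))

# the stated IVP solution for y(0) = (1, 1)^T
y = etA(t) @ np.array([1.0, 1.0])
assert np.allclose(y, [3*np.exp(2*t) - 2*np.exp(-t),
                       4*np.exp(-t) - 3*np.exp(2*t)])
```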
Chapter 9. Linear Systems with Constant Coefficients

8.18. A has eigenvalues −4 and −1 with eigenvectors (−1, 1)^T and (−1, 2)^T. Thus

Y(t) = [ −e^{−4t}   −e^{−t} ],        Y(0) = [ −1   −1 ],        and        Y(0)^{−1} = [ −2   −1 ]
       [ e^{−4t}    2e^{−t} ]                [ 1    2  ]                                [ 1    1  ].

Thus

e^{tA} = Y(t)Y(0)^{−1} = [ 2e^{−4t} − e^{−t}     e^{−4t} − e^{−t}  ]
                         [ 2e^{−t} − 2e^{−4t}    2e^{−t} − e^{−4t} ].

The solution to the initial value problem is

y(t) = e^{tA} y0 = e^{tA} (1, 0)^T = (2e^{−4t} − e^{−t}, 2e^{−t} − 2e^{−4t})^T.
8.19. A has eigenvalues ±2i. Associated with the eigenvalue 2i there is the eigenvector (1 + i, 2)^T. The associated
complex solution is

z(t) = e^{2it} (1 + i, 2)^T
     = [cos 2t (1, 2)^T − sin 2t (1, 0)^T] + i[cos 2t (1, 0)^T + sin 2t (1, 2)^T].

The real and imaginary parts of z are a fundamental set of solutions, so we can take

Y(t) = [ cos 2t − sin 2t   cos 2t + sin 2t ]        and        Y(0) = [ 1   1 ]
       [ 2 cos 2t          2 sin 2t        ]                          [ 2   0 ].

Then

e^{tA} = Y(t)Y(0)^{−1} = [ cos 2t + sin 2t   −sin 2t          ]
                         [ 2 sin 2t          cos 2t − sin 2t  ].

The solution to the initial value problem is

y(t) = e^{tA} y0 = e^{tA} (1, 1)^T = (cos 2t, cos 2t + sin 2t)^T.
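For a purely imaginary pair ±2i the exponential stays bounded and periodic. The exercise's matrix is not printed here; the A below is the one determined by the quoted eigenpair (an assumption), and the real-form exponential above is checked against the defining ODE:

```python
import numpy as np

# A consistent with eigenvalue 2i and eigenvector (1+i, 2)^T (inferred, not printed above)
A = np.array([[2.0, -2.0], [4.0, -2.0]])

# complex eigenpair check: A w = 2i w
w = np.array([1 + 1j, 2])
assert np.allclose(A @ w, 2j * w)

def etA(t):
    """Real form assembled from the real and imaginary parts of e^{2it}w."""
    c, s = np.cos(2*t), np.sin(2*t)
    return np.array([[c + s, -s], [2*s, c - s]])

# e^{0A} = I and d/dt e^{tA} = A e^{tA} (central-difference check)
t, h = 0.9, 1e-5
assert np.allclose(etA(0), np.eye(2))
assert np.allclose((etA(t + h) - etA(t - h)) / (2*h), A @ etA(t))
```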
8.20. A has one eigenvalue, −3, with multiplicity 2. Hence

e^{tA} = e^{−3t} e^{t(A+3I)}
       = e^{−3t} [I + t(A + 3I)]
       = e^{−3t} [ 1 + 2t   t      ]
                 [ −4t      1 − 2t ].

The solution to the initial value problem is

y(t) = e^{tA} y0 = e^{tA} (2, −1)^T = e^{−3t} (2 + 3t, −1 − 6t)^T.
8.21. The matrix A has eigenvalues −1 and 5, with associated eigenvectors (0, 1)^T and (1, 1)^T. Thus

Y(t) = [ 0        e^{5t} ],        Y(0) = [ 0   1 ],        and        Y(0)^{−1} = [ −1   1 ]
       [ e^{−t}   e^{5t} ]                [ 1   1 ]                                [ 1    0 ].

Hence

e^{tA} = Y(t)Y(0)^{−1} = [ e^{5t}            0      ]
                         [ e^{5t} − e^{−t}   e^{−t} ].

The solution to the initial value problem is

y(t) = e^{(t−t0)A} y0 = (−e^{5t−5}, 4e^{1−t} − e^{5t−5})^T.
8.22. The matrix A has eigenvalues −1 and −2 with associated eigenvectors (−1, 1)^T and (−1, 2)^T. Hence

Y(t) = [ −e^{−t}   −e^{−2t} ],        Y(0) = [ −1   −1 ],        and        Y(0)^{−1} = [ −2   −1 ]
       [ e^{−t}    2e^{−2t} ]                [ 1    2  ]                                [ 1    1  ].

Thus

e^{tA} = Y(t)Y(0)^{−1} = [ 2e^{−t} − e^{−2t}     e^{−t} − e^{−2t}  ]
                         [ 2e^{−2t} − 2e^{−t}    2e^{−2t} − e^{−t} ].

The solution to the initial value problem is

y(t) = e^{(t−t0)A} y0 = (2e^{−t−1} − e^{−2t−2}, 2e^{−2t−2} − 2e^{−t−1})^T.
8.23. A has eigenvalues ±3i. The eigenvalue 3i has associated eigenvector (2 + i, 5)^T. The associated solution is

z(t) = [cos 3t (2, 5)^T − sin 3t (1, 0)^T] + i[cos 3t (1, 0)^T + sin 3t (2, 5)^T].

Thus

Y(t) = [ 2 cos 3t − sin 3t   cos 3t + 2 sin 3t ],        Y(0) = [ 2   1 ],
       [ 5 cos 3t            5 sin 3t          ]                [ 5   0 ]

and

Y(0)^{−1} = (1/5) [ 0   1  ]
                  [ 5   −2 ].

Hence

e^{tA} = Y(t)Y(0)^{−1} = [ cos 3t + 2 sin 3t   −sin 3t           ]
                         [ 5 sin 3t            cos 3t − 2 sin 3t ].

The solution to the initial value problem is

y(t) = e^{(t−t0)A} y0 = (−cos 3(t − 1) − 2 sin 3(t − 1), −5 sin 3(t − 1))^T.
8.24. A has the single eigenvalue −4. Hence

e^{tA} = e^{−4t} e^{t(A+4I)}
       = e^{−4t} [I + t(A + 4I)]
       = e^{−4t} [ 1    0 ]
                 [ −t   1 ].

The solution to the initial value problem is

y(t) = e^{(t−t0)A} y0 = e^{8−4t} (−2, 2t − 2)^T.
8.25. The matrix A has eigenvalues −2, 3, and 1, with associated eigenvectors (0, 1, 0)^T, (−2, 0, 1)^T, and
(−1, 1, 1)^T. Hence

Y(t) = [ 0         −2e^{3t}   −e^{t} ]
       [ e^{−2t}   0          e^{t}  ]
       [ 0         e^{3t}     e^{t}  ],

Y(0) = [ 0   −2   −1 ]        and        Y(0)^{−1} = [ −1   1   −2 ]
       [ 1   0    1  ]                               [ −1   0   −1 ]
       [ 0   1    1  ]                               [ 1    0   2  ].

Thus

e^{tA} = Y(t)Y(0)^{−1} = [ 2e^{3t} − e^{t}    0         2e^{3t} − 2e^{t}  ]
                         [ e^{t} − e^{−2t}    e^{−2t}   2e^{t} − 2e^{−2t} ]
                         [ e^{t} − e^{3t}     0         2e^{t} − e^{3t}   ].
= cos t −1
0
2 + i cos t 1
0
0 + sin t
−1
0
0 −1
0
2 + sin t . The real and imaginary parts of z(t) provide two linearly independent solutions. We get a third from the
eigenvalue 1 and its eigenvector (0, 1, 2)T . It is y3 (t) = et (0, 1, 2)T . Hence we have the fundamental matrix
sin t − cos t
0
2 cos t Y (t) =
We have
Y (0) = −1
0
2 −1
0
0 0
1
2 and − cos t − sin t
0
2 sin t
Y (0)−1 = 0
et
2e t 0
−1
0 −1
1
1 . 1/2
−1/2
0 . Finally
etA = Y (t)Y (0)−1
cos t + sin t
0
=
−2 sin t
8.27. −2 sin t
et
t
2e − 2 cos t − 2 sin t sin t
0
cos t − sin t . A has eigenvalues −2, −1, 1, and 2 with associated eigenvectors (0, 1, 0, 1)T , (−1, 1, 1, 0)T , (0, 1, 0, 0)T ,
and (1, 2, 0, 1)T . Thus we have the fundamental matrix
0
−e−t 0 e2t e −2 t
Y (t) = 0
e −2 t We have 0
1
Y (0) = 0
1 −1
1
1
0 0
1
0
0 1
2
0
1 e −t
e −t
0 et
0
0 2 e 2t .
0
e 2t −1 and 0
Y (0)−1 = −1
1 0
0
1
0 −1
1
−2
1 1
0
.
−1 0 9.8. Inhomogeneous Linear Systems 729 Finally, 8.28. etA = Y (t)Y (0)−1 0
e 2 t − e −t
0
e 2t
2t
t
−2 t
t
2t
t
−t
−2 t
−2 t
t
e 2e − 2e + e − e
e −e 2e − e − e
=
.
0
0
e −t
0
0
e 2 t − e −2 t
e −2 t
e 2 t − e −2 t
A has eigenvalues −1, −2, and −3. −1 has algebraic multiplicity 2 and geometric multiplicity 1. The vector
v1 = (−2, 1, 0, 1)T is an eigenvector and v2 = (0, 0, 1, 1)T is a generalized eigenvector with (A+I )v2 = −v1 .
Hence v1 leads to the solution y1 (t) = e−t (−2, 1, 0, 1)T , and v2 leads to the solution
y2 (t) = e−t [v2 + t (A + I )v2 ]
= e−t [v2 − t v1 ] 2t −t .
= e −t 1
1−t 8.29. The eigenvalue −2 has eigenvector (−1, 0, 0, 1)T , and leads to the solution y3 (t) = e−2t (−1, 0, 0, 1)T . The
eigenvector −3 has eigenvector (0, 0, 1, 0)T and leads to the solution y4 (t) = e−3t (0, 0, 1, 0)T . Thus we have
the fundamental matrix −2e−t
2te−t
−e−2t
0
−t
−t
−te
0
0
e
.
Y (t) = 0
e −t
0
e−3t −t
−t
−2 t
e
e (1 − t) e
0
Then
0 −2 0 −1 0 1 0 0
1 0 1
1 1 0 0 0
and Y (0)−1 = .
Y (0) = 0 1 0 1
−1 −2 0 0 1110
−1 −1 1 −1
Finally
etA = Y (t)Y (0)−1 e−2t + 2te−t
e−t (2t − 2) + 2e−2t
0
2te−t −t
−t
−te
e (1 − t)
0
−te−t .
=
−t
−3t
−t
− 3t
− 3t
−t
e −e
e −e
e
e − e − 3t −t
−2 t
−t
−3t
−t
e (1 − t) − e
e (2 − t) − 2e
0
e (1 − t)
8.29. We write

y(t) = e^{tA} y0 + e^{tA} ∫_0^t e^{−sA} f(s) ds.

We can now prove the result by direct substitution:

y′(t) = Ae^{tA} y0 + Ae^{tA} ∫_0^t e^{−sA} f(s) ds + e^{tA} e^{−tA} f(t) = Ay(t) + f(t),
y(0) = e^{0A} y0 = y0.
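This variation-of-parameters formula can also be sanity-checked numerically. For a constant forcing term f(s) = b the integral has the closed form ∫_0^t e^{−sA} b ds = A^{−1}(I − e^{−tA}) b, which makes the check easy. The A, b, and y0 below are arbitrary illustrative choices, not taken from any exercise:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative invertible matrix
b = np.array([1.0, 1.0])                   # constant forcing term
y0 = np.array([1.0, 0.0])

def expm(M, terms=60):
    """Matrix exponential by a truncated Taylor series (adequate at this scale)."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

Ainv = np.linalg.inv(A)

def y(t):
    # y(t) = e^{tA} y0 + e^{tA} * integral_0^t e^{-sA} b ds
    return expm(t*A) @ y0 + expm(t*A) @ (Ainv @ (np.eye(2) - expm(-t*A)) @ b)

# initial condition and y' = Ay + f (central-difference check)
t, h = 1.0, 1e-5
assert np.allclose(y(0), y0)
assert np.allclose((y(t + h) - y(t - h)) / (2*h), A @ y(t) + b)
```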